Monday, August 13, 2018

Book Review: “Through Two Doors at Once” by Anil Ananthaswamy

Through Two Doors at Once: The Elegant Experiment That Captures the Enigma of Our Quantum Reality (Hardcover)
By Anil Ananthaswamy
Dutton (August 7, 2018)

The first time I saw the double-slit experiment, I thought it was a trick, an elaborate construction with mirrors, cooked up by malicious physics teachers. But no, it was not, as I was soon to learn. A laser beam pointed at a plate with two parallel slits will make 5 or 7 or any odd number of dots aligned on the screen, their intensity fading the farther away they are from the middle. Light is a wave, this experiment shows: it can interfere with itself.

But light is also a particle, and indeed the double-slit experiment can, and has been, done with single photons. Perplexingly, these photons will create the same interference pattern; it will gradually build up from single dots. Strange as it sounds, the particles seem to interfere with themselves. The most common way to explain the pattern is that a single particle can go through two slits at once, a finding so unintuitive that physicists still debate just what the results tell us about reality.

The double-slit experiment is without doubt one of the most fascinating physics experiments ever. In his new book “Through Two Doors at Once,” Anil Ananthaswamy lays out both the history and the legacy of the experiment.

I previously read Anil’s 2013 book “The Edge of Physics” which got him a top rank on my list of favorite science writers. I like Anil’s writing because he doesn’t waste your time. He says what he has to say, doesn’t make excuses when it gets technical, and doesn’t wrap the science into layers of flowery cushions. He also has good taste in deciding what the reader should know.

A book about an experiment and its variants might sound like a laundry list of technical details of increasing sophistication, but Anil has picked only the best of the best. Besides the first double-slit experiment, and the first experiment with single particles, there’s also the delayed choice, the quantum eraser, weak measurement, and interference of large molecules (“Schrödinger’s cat”). The reader of course also learns how to detect a live bomb without detonating it, what Anton Zeilinger did on the Canary Islands, and what Yves Couder’s oil droplets may or may not have to do with any of that.

Along with the experiments, Anil explains the major interpretations of quantum mechanics, Copenhagen, Pilot-Wave, Many Worlds, and QBism, and what various people have to say about this. He also mentions spontaneous collapse models, and Penrose’s gravitationally induced collapse in particular.

The book contains a few equations and Anil expects the reader to cope with sometimes rather convoluted setups of mirrors and beam splitters and detectors, but the heavier passages are balanced with stories about the people who made the experiments or who worked on the theories. The result is a very readable account of the past and current status of quantum mechanics. It’s a book with substance and I can recommend it to anyone who has an interest in the foundation of quantum mechanics.

[Disclaimer: free review copy]

Tuesday, August 07, 2018

Dear Dr B: Is it possible that there is a universe in every particle?

“Is it possible that our ‘elementary’ particles are actually large scale aggregations of a different set of something much smaller? Then, from a mathematical point of view, there could be an infinite sequence of smaller (and larger) building blocks and universes.”

                                                                      ~Peter Letts
Dear Peter,

I love the idea that there is a universe in every elementary particle! Unfortunately, it is really hard to make this hypothesis compatible with what we already know about particle physics.

Simply conjecturing that the known particles are made up of smaller particles doesn’t work well. The reason is that the masses of the constituent particles must be smaller than the mass of the composite particle, and the lighter a particle, the easier it is to produce in particle accelerators. So why then haven’t we seen these constituents already?

One way to get around this problem is to make the new particles strongly bound, so that it takes a lot of energy to break the bond even though the particles themselves are light. This is how it works for the strong nuclear force which holds quarks together inside protons. The quarks are light but still difficult to produce because you need a high energy to tear them apart from each other.

There isn’t presently any evidence that any of the known elementary particles are made up of new strongly-bound smaller particles (usually referred to as preons), and many of the models which have been proposed for this have run into conflict with data. Some are still viable, but with such strongly bound particles you cannot create something remotely resembling our universe. To get structures similar to what we observe you need an interplay of both long-distance forces (like gravity) and short-distance forces (like the strong nuclear force).

The other thing you could try is to make the constituent particles really weakly interacting with the particles we know already, so that producing them in particle colliders would be unlikely. This, however, causes several other problems, one of which is that even the very weakly interacting particles carry energy and hence have a gravitational pull. If they are produced at any substantial rates at any time in the history of the universe, we should see evidence for their presence but we don’t. Another problem is that by Heisenberg’s uncertainty principle, particles with small masses are difficult to keep inside small regions of space, like inside another elementary particle.
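To put a rough number on that last point (a back-of-the-envelope estimate of mine, not part of Peter’s question): confining a constituent to a region of size Δx forces a momentum spread of at least ħ/Δx. For a region of about 10^-19 meters, roughly the distance scale probed at the LHC, that corresponds to energies in the TeV range,

\Delta p \, c \;\gtrsim\; \frac{\hbar c}{\Delta x} \;\approx\; \frac{197\ \mathrm{MeV\,fm}}{10^{-4}\ \mathrm{fm}} \;\approx\; 2\ \mathrm{TeV},

so light constituents locked inside an elementary particle would not look light from the outside.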

You can circumvent the latter problem by conjecturing that the inside of a particle actually has a large volume, kinda like Mary Poppins’ magical bag, if anyone recalls this.


Sounds crazy, I know, but you can make this work in general relativity because space can be strongly curved. Such cases are known as “baby universes”: They look small from the outside but can be huge on the inside. You then need to sprinkle a little quantum gravity magic over them for stability. You also need to add some kind of strange fluid, not unlike dark energy, to make sure that even though there are lots of massive particles inside, from the outside the mass is small.

I hope you notice that this was already a lot of hand-waving, but the problems don’t stop there. If you want each elementary particle to have a universe inside, you need to explain why we only know 25 different elementary particles. Why aren’t there billions of them? An even bigger problem is that elementary particles are quantum objects: They get constantly created and destroyed and they can be in several places at once. How would structure formation ever work in such a universe? It is also generally the case in quantum theories that the more variants there are of a particle, the more of them you produce. So why don’t we produce humongous amounts of elementary particles if they’re all different inside?

The problems that I listed do not of course rule out the idea. You can try to come up with explanations for all of this so that the model does what you want and is compatible with all observations. But what you then end up with is a complicated theory that has no evidence speaking for it, designed merely because someone likes the idea. It’s not necessarily wrong. I would even say it’s interesting to speculate about (as you can tell, I have done my share of speculation). But it’s not science.

Thanks for an interesting question!

Wednesday, August 01, 2018

Trostpreis (I’ve been singing again)

I promised my daughters I would write a song in German, so here we go:

 

“Trostpreis” means “consolation prize”. This song was inspired by the kids’ announcement that we have a new rule for pachisi: Adults always lose. I think this conveys a deep truthiness about life in general.

 After I complained the last time that the most frequent question I get about my music videos is “where do you find the time?” (answer: I don’t), I now keep getting the question “Do you sing yourself?” The answer to this one is, yes, I sing myself. Who else do you think would sing for me?

(soundcloud version here)

Monday, July 30, 2018

10 physics facts you should have learned in school but probably didn’t

[Image: Dreamstime.com]
1. Entropy doesn’t measure disorder, it measures likelihood.

Really the idea that entropy measures disorder is totally not helpful. Suppose I make a dough: I break an egg and dump it on the flour. I add sugar and butter and mix it until the dough is smooth. Which state is more orderly, the broken egg on flour with butter over it, or the final dough?

I’d go for the dough. But that’s the state with higher entropy. And if you opted for the egg on flour, how about oil and water? Is the entropy higher when they’re separated, or when you shake them vigorously so that they’re mixed? In this case the better sorted case has the higher entropy.

Entropy is determined by the number of “microstates” that give the same “macrostate” (more precisely, it is proportional to the logarithm of that number). Microstates contain all details about a system’s individual constituents. The macrostate on the other hand is characterized only by general information, like “separated in two layers” or “smooth on average”. There are a lot of states for the dough ingredients that will turn to dough when mixed, but very few states that will separate into eggs and flour when mixed. Hence, the dough has the higher entropy. Similar story for oil and water: Easy to unmix, hard to mix, hence the unmixed state has the higher entropy.
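For the record, the formula behind that statement is Boltzmann’s entropy (standard textbook material, and the logarithm is why I say “likelihood” rather than a raw count):

S = k_B \ln \Omega,

where \Omega is the number of microstates compatible with the given macrostate and k_B is Boltzmann’s constant. More microstates, higher entropy; that’s all the dough example uses.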

2. Quantum mechanics is not a theory for short distances only, it’s just difficult to observe its effects on long distances.

Nothing in the theory of quantum mechanics implies that it’s good on short distances only. It just so happens that large objects we observe are composed of many smaller constituents and these constituents’ thermal motion destroys the typical quantum effects. This is a process known as decoherence and it’s the reason we don’t usually see quantum behavior in daily life.

But quantum effects have been measured in experiments spanning hundreds of kilometers, and they could span longer distances if the environment is sufficiently cold and steady. They could even span entire galaxies.

3. Heavy particles do not decay to reach a state of smallest energy, but to reach a state of highest entropy.

Energy is conserved. So the idea that any system tries to minimize its energy is just nonsense. The reason that heavy particles decay, if they can, is that the decay products have far more ways to be arranged, which is to say the decay increases entropy. If you have one heavy particle (say, a muon) it can decay into an electron, a muon-neutrino and an electron anti-neutrino. The opposite process is also possible, but it requires that the three decay products come together. It is hence unlikely to happen.

This isn’t always the case. If you put heavy particles in a hot enough soup, production and decay can reach equilibrium with a non-zero fraction of the heavy particles around.

4. Lines in Feynman diagrams do not depict how particles move, they are visual aids for difficult calculations.

Every once in a while I get an email from someone who notices that many Feynman diagrams have momenta assigned to the lines. And since everyone knows one cannot at the same time measure the position and momentum of a particle arbitrarily well, it doesn’t make sense to draw lines for the particles. It follows that all of particle physics is wrong!

But no, nothing is wrong with particle physics. There are several types of Feynman diagrams and the ones with the momenta are for momentum space. In this case the lines have nothing to do with paths the particles move on. They really don’t. They are merely a way to depict certain types of integrals.

There are some types of Feynman diagrams in which the lines do depict the possible paths that a particle could go, but also in this case the diagram itself doesn’t tell you what the particle actually does. For that you have to do the calculation.
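For illustration, here is the kind of expression an internal line in a momentum-space diagram stands for (the standard textbook factor for a scalar particle of mass m; I quote it only to make the point that a line is bookkeeping, not a trajectory):

\frac{i}{p^2 - m^2 + i\epsilon},

and a closed loop instructs you to integrate over the unconstrained loop momentum, \int \frac{d^4 k}{(2\pi)^4}. Nothing in these expressions refers to a path through space.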

5. Quantum mechanics is non-local, but you cannot use it to transfer information non-locally.

Quantum mechanics gives rise to non-local correlations that are quantifiably stronger than those of non-quantum theories. This is what Einstein referred to as  “spooky action at a distance.”

Alas, quantum mechanics is also fundamentally random. So, while you have those awesome non-local correlations, you cannot use them to send messages. Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit.

6. Quantum gravity becomes relevant at high curvature, not at short distances.

If you estimate the strength of quantum gravitational effects, you find that they should become non-negligible if the curvature of space-time is comparable to the inverse of the Planck length squared. This does not mean that you would see this effect at distances close to the Planck length. I believe the confusion here comes from the term “Planck length.” The Planck length has the unit of a length, but it’s not the length of anything.
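For concreteness, the Planck length is built from the constants of nature (this is just its standard definition):

\ell_{\rm Pl} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\ \mathrm{m},

and the statement above is that quantum gravity becomes relevant when the curvature approaches 1/\ell_{\rm Pl}^2. That is a statement about curvature, not about the size of anything.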

Importantly, that the curvature gets close to the inverse of the Planck length squared is an observer-independent statement. It does not depend on the velocity by which you move. The trouble with thinking that quantum gravity becomes relevant at short distances is that it’s incompatible with Special Relativity.

In Special Relativity, lengths can contract. For an observer who moves fast enough, the Earth is a pancake of a width below the Planck length. This would mean we should either long have seen quantum gravitational effects, or Special Relativity must be wrong. Evidence speaks against both.

7. Atoms do not expand when the universe expands. Neither does Brooklyn.

The expansion of the universe is incredibly slow and the force it exerts is weak. Systems that are bound together by forces exceeding that of the expansion remain unaffected. The systems that are being torn apart are those larger than the size of galaxy clusters. The clusters themselves still hold together under their own gravitational pull. So do galaxies, solar systems, planets and of course atoms. Yes, that’s right, atomic forces are much stronger than the pull of the whole universe.

8. Wormholes are science fiction, black holes are not.

The observational evidence for black holes is solid. Astrophysicists can infer the presence of a black hole in various ways.

The easiest way may be to deduce how much mass must be combined in some volume of space to cause the observed motion of visible objects. This alone does not tell you whether the dark object that influences the visible ones has an event horizon. But you can tell the difference between an event horizon and a solid surface by examining the radiation that is emitted by the dark object. You can also use black holes as extreme gravitational lenses to test that they comply with the predictions of Einstein’s theory of General Relativity. This is why physicists are excitedly looking forward to the data from the Event Horizon Telescope.

Maybe most importantly, we know that black holes are a typical end-state of certain types of stellar collapse. It is hard to avoid them, not hard to get them, in general relativity.

Wormholes on the other hand are space-time deformations for which we don’t know any way they could come about in natural processes. Their presence also requires negative energy, something that has never been observed, and that many physicists believe cannot exist.

9. You can fall into a black hole in finite time. It just looks like it takes forever.

Time slows down if you approach the event horizon, but this doesn’t mean that you actually stop falling before you reach the horizon. This slow-down is merely what an observer in the distance would see. You can calculate how much time it would take to fall into a black hole, as measured by a clock that the infalling observer herself carries. The result is finite. You do indeed fall into the black hole. It’s just that your friend who stays outside never sees you falling in.
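If you want a number: for radial free fall from rest at a distance r_0 from a non-rotating black hole of mass M, the textbook result for the proper time to reach the center is finite,

\tau = \frac{\pi}{2} \sqrt{\frac{r_0^{\,3}}{2GM}},

which for a stellar-mass black hole and a starting point of a few hundred kilometers is a fraction of a second. I quote the formula here only to make the point that the answer is finite.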

10. Energy is not conserved in the universe as a whole, but the effect is so tiny you won’t notice it.

So I said that energy is conserved, but that is only approximately correct. It would be entirely correct for a universe in which space does not change with time. But we know that in our universe space expands, and this expansion results in a violation of energy conservation.

This violation of energy conservation, however, is so minuscule that you don’t notice it in any experiment on Earth. It takes very long times and long distances to notice. Indeed, if the effect was any larger we would have noticed much earlier that the universe expands! So don’t try to blame your electricity bill on the universe, but close the window when the AC is running.

Monday, July 23, 2018

Evidence for modified gravity is now evidence against it.

Hamster. Not to scale.
Img src: Petopedia.
It’s day 12,805 in the war between modified gravity and dark matter. That’s counting the days since the publication of Mordehai Milgrom’s 1983 paper. In this paper he proposed to alter Einstein’s theory of general relativity rather than conjecturing invisible stuff.

Dark matter, to remind you, consists of hypothetical clouds of particles that hover around galaxies. We can’t see them because they neither emit nor reflect light, but we do notice their gravitational pull because it affects the motion of the matter that we can observe. Modified gravity, on the other hand, posits that normal matter is all there is, but the laws of gravity don’t work as Einstein taught us.

Which one is right? We still don’t know, though astrophysicists have been on the case for decades.

Ruling out modified gravity is hard because it was invented to fit observed correlations, and this achievement is difficult to improve on. The idea which Milgrom came up with in 1983 was a simple model called Modified Newtonian Dynamics (MOND). It does a good job fitting the rotation curves of hundreds of observed galaxies, and in contrast to particle dark matter this model requires only one parameter as input. That parameter is an acceleration scale which determines when the gravitational pull begins to be markedly different from that predicted by Einstein’s theory of General Relativity. Based on his model, Milgrom also made some predictions which have held up so far.
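In case you want to see what that one parameter does, this is the MOND prescription in its simplest form: Newtonian gravity is kept where accelerations are large and modified where they drop below the acceleration scale a_0,

a \simeq a_{\rm N} \ \ \mathrm{for}\ a_{\rm N} \gg a_0, \qquad a \simeq \sqrt{a_{\rm N}\, a_0} \ \ \mathrm{for}\ a_{\rm N} \ll a_0,

with a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m/s^2}. In the low-acceleration regime circular orbits obey v^4 = G M a_0, which is why rotation curves flatten out and why their asymptotic velocity tracks the visible mass.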

In a 2016 paper, McGaugh, Lelli, and Schombert analyzed data from a set of about 150 disk galaxies. They identified the best-fitting acceleration scale for each of them and found that the distribution is clearly peaked around a mean value:

Histogram of the best-fitting acceleration scale. Blue: only high quality data. Via Stacy McGaugh.


McGaugh et al conclude that the data contains evidence for a universal acceleration scale, which is strong support for modified gravity.

Then, a month ago, Nature Astronomy published a paper titled “Absence of a fundamental acceleration scale in galaxies“ by Rodrigues et al (arXiv-version here). The authors claim to have ruled out modified gravity with at least 5 σ, ie with high certainty.

That’s pretty amazing given that two months ago modified gravity worked just fine for galaxies. It’s even more amazing once you notice that they ruled out modified gravity using the same data from which McGaugh et al extracted the universal acceleration scale that’s evidence for modified gravity.

Here is the key figure from the Rodrigues et al paper:

Figure 1 from Rodrigues et al


Shown on the vertical axis is their best-fit parameter for the (log of) the acceleration scale. On the horizontal axis are the individual galaxies. The authors have sorted the galaxies so that the best-fit value is monotonically increasing from left to right, so the increase is not relevant information. Relevant is that if you compare the error-margins marked by the colors, then the best-fit value for the galaxies on the very left side of the plot are incompatible with the best-fit values for the galaxies on the very right side of the plot.

So what the heck is going on?

A first observation is that the two studies don’t use the same data analysis. The main difference is the priors for the parameters, which are the acceleration scale of modified gravity and the stellar mass-to-light ratio. Where McGaugh et al use Gaussian priors, Rodrigues et al use flat priors over a finite bin. The prior is the assumption you make for what the likely distribution of a parameter is, which you then feed into your model to find the best-fit parameters. A bad prior can give you misleading results.

Example: Suppose you have an artificially intelligent infrared camera. One night it issues an alert: Something’s going on in the bushes of your garden. The AI tells you the best fit to the observation is a 300-pound hamster, the second-best fit is a pair of humans in what seems a peculiar kind of close combat. Which option do you think is more likely?

I’ll go out on a limb and guess the second. And why is that? Because you probably know that 300-pound hamsters are somewhat of a rare occurrence, whereas pairs of humans are not. In other words, you have a different prior than your camera.

Back to the galaxies. As we’ve seen, if you start with an unmotivated prior, you can end up with a “best fit” (the 300-pound hamster) that’s unlikely for reasons your software didn’t account for. At the very least, therefore, you should check that the resulting best-fit distribution of your parameters doesn’t contradict other data. The Rodrigues et al analysis hence raises the concern that their best-fit distribution for the stellar mass-to-light ratio doesn’t match commonly observed distributions. The McGaugh paper on the other hand starts with a Gaussian prior, which is a reasonable expectation, and hence their analysis makes more physical sense.
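To see how much mischief a prior can make, here is a minimal toy sketch (my own illustration in Python, not the analysis of either paper): a one-parameter fit to invented data, once with a flat prior over a wide bin and once with a Gaussian prior centered on an independently known value.

import numpy as np

# Toy example: infer one parameter "a" (think: log10 of an acceleration scale)
# from a handful of noisy measurements, under two different priors.
# All numbers here are invented for illustration.
rng = np.random.default_rng(1)
true_a = -9.9
sigma = 0.3
data = true_a + sigma * rng.standard_normal(5)   # five noisy measurements

a_grid = np.linspace(-12.0, -8.0, 2000)

# Log-likelihood of the data for each candidate value of a
loglike = -0.5 * np.sum((data[:, None] - a_grid[None, :])**2, axis=0) / sigma**2

# Prior 1: flat over the whole bin (a constant, so it drops out up to normalization)
log_post_flat = loglike

# Prior 2: Gaussian, centered on an independently motivated value
prior_mean, prior_sigma = -10.0, 0.2
log_post_gauss = loglike - 0.5 * (a_grid - prior_mean)**2 / prior_sigma**2

for name, lp in [("flat prior    ", log_post_flat), ("Gaussian prior", log_post_gauss)]:
    p = np.exp(lp - lp.max())
    p /= np.trapz(p, a_grid)
    print(name, "posterior mean a =", round(np.trapz(a_grid * p, a_grid), 2))

With only a handful of data points the two posteriors can differ noticeably; with a lot of precise data they barely do.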

Having said this though, it turns out the priors don’t make much of a difference for the results. Indeed, as far as the numbers are concerned, the results of both papers are pretty much the same. What differs is the conclusion the authors draw from them.

Let me tell you a story to illustrate what’s going on. Suppose you are Isaac Newton and an apple just banged on your head. “Eureka,” you shout and postulate that the gravitational potential fulfils the Poisson-equation.* Smart as you are, you assume that the Earth is approximately a homogeneous sphere, solve the equation and find an inverse-square law. It contains one free parameter which you modestly call “Newton’s constant.”
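For those who want the equations to go with the story, this is just standard Newtonian gravity:

\nabla^2 \Phi = 4\pi G \rho, \qquad \Phi(r) = -\frac{GM}{r}\ \ \mathrm{outside\ a\ spherical\ mass}, \qquad g(r) = \frac{GM}{r^2},

so every measured acceleration gives you an estimate of the combination GM.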

You then travel around the globe, note down your altitude and measure the acceleration of a falling test-body. Back home you plot the results and extract Newton’s constant (times the mass of the Earth) from the measurements. You find that the measured values cluster around a mean. You declare that you have found evidence for a universal law of gravity.

Or have you?

A week later your good old friend Bob knocks on the door. He points out that if you look at the measurement errors (which you have of course recorded), then some of the measurement results are incompatible with each other at five sigma certainty. There, Bob declares, I have ruled out your law of gravity.

Same data, different conclusion. How does this make sense?

“Well,” Newton would say to Bob, “You have forgotten that besides the measurement uncertainty there is theoretical uncertainty. The Earth is neither homogeneous nor a sphere, so you should expect a spread in the data that exceeds the measurement uncertainty.” – “Ah,” Bob says triumphantly, “But in this case you can’t make predictions!” – “Sure I can,” Newton speaks and points to his inverse square law, “I did.” Bob frowns, but Newton has run out of patience. “Look,” he says and shoves Bob out of the door, “Come back when you have a better theory than I.”

Back to 2018 and modified gravity. Same difference. In the Rodrigues et al paper, the authors rule out that modified gravity’s one-parameter law fits all disk galaxies in the sample. This shouldn’t come as much of a surprise. Galaxies aren’t disks with bulges any more than the Earth is a homogeneous sphere. It’s such a crude oversimplification it’s remarkable it works at all.

Indeed, it would be an interesting exercise to quantify how well modified gravity does in this set of galaxies compared to particle dark matter with the same number of parameters. Chances are, you’d find that particle dark matter too is ruled out at 5 σ. It’s just that no one is dumb enough to make such a claim. When it comes to particle dark matter, astrophysicists will be quick to tell you galaxy dynamics involves loads of complicated astrophysics and it’s rather unrealistic that one parameter will account for the variety in any sample.

Without the comparison to particle dark matter, therefore, the only thing I learn from the Rodrigues et al paper is that a non-universal acceleration scale fits the data better than a universal one. And that I could have told you without even looking at the data.

Summary: I’m not impressed.

It’s day 12,805 in the war between modified gravity and dark matter and dark matter enthusiasts still haven’t found the battle field.


*Dude, I know that Newton isn’t Archimedes. I’m telling a story not giving a history lesson.

Monday, July 16, 2018

SciMeter.org: A new tool for arXiv users

Time is money. It’s also short. And so we save time wherever we can, even when we describe our own research. All too often, one word must do: You are a cosmologist, or a particle physicist, or a string theorist. You work on condensed matter, or quantum optics, or plasma physics.

Most departments of physics use such simple classifications. But our scientific interests cannot be so easily classified. All too often, one word is not enough.

Each scientist has their own, unique research interests. Maybe you work on astrophysics and cosmology and particle physics and quantum gravity. Maybe you work on condensed matter physics and quantum computing and quantitative finance.

Whatever your research interests, now you can show off their full breadth, not in one word, but in one image. On our new website SciMeter, you can create a keyword cloud from your arXiv papers. For example here is the cloud for Stephen Hawking’s papers:




You can also search for similar authors and for people who have worked on a certain topic, or a set of topics.
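If you’re curious how a keyword cloud comes about in principle, here is a rough sketch of the idea in Python (this is not the SciMeter code; the author query “Hawking_S” and the crude word counting are my own simplifications, using the public arXiv API):

import re
import urllib.request
from collections import Counter

# Fetch abstracts for an author from the public arXiv API (an Atom feed).
url = ("http://export.arxiv.org/api/query?"
       "search_query=au:%22Hawking_S%22&max_results=50")
feed = urllib.request.urlopen(url).read().decode("utf-8")

# Pull the abstract texts out of the XML.
abstracts = re.findall(r"<summary>(.*?)</summary>", feed, flags=re.S)

# Count words, dropping a few stop words; a real keyword cloud would use a
# curated keyword list and scale the font size with the counts.
words = re.findall(r"[a-z]{4,}", " ".join(abstracts).lower())
stop = {"with", "that", "this", "from", "which", "have", "these", "their", "also"}
counts = Counter(w for w in words if w not in stop)

for word, n in counts.most_common(15):
    print(f"{word:20s} {n}")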

As I promised previously, on this website you can also find out your broadness value (it is listed below the cloud). Please note that the value we quote on the website is measured in standard deviations from the average, so that negative values of broadness are below average and positive values above. Also keep in mind that we measure the broadness relative to the total average, ie for all arXiv categories.

While this website is mostly aimed at authors in the field of physics, we hope it will also be of use to journalists looking for an expert or for editors looking for reviewers.

The software for this website was developed by Tom Price and Tobias Mistele, who were funded on an FQXi minigrant. It is entirely non-profit and we do not plan on making money with it. This means maintaining and expanding this service (eg to include other data) will only be possible if we can find sponsors.

If you encounter any problems with the website, please do not submit the issue here, but use the form that you find on the help-page.

Wednesday, July 11, 2018

What's the purpose of working in the foundations of physics?

That’s me. Photo by George Musser.
Yes, I need a haircut.
[Several people asked me for a transcript of my intro speech that I gave yesterday in Utrecht at the 19th UK and European conference on foundations of physics. So here it is.]

Thank you very much for the invitation to this 19th UK and European conference on Foundations of physics.

The topic of this conference combines everything that I am interested in, and I have seen the organizers have done an awesome job lining up the program. From locality and non-locality to causality, the past hypothesis, determinism, indeterminism, and irreversibility, the arrow of time and presentism, symmetries, naturalness and finetuning, and, of course, everyone’s favorites: black holes and the multiverse.

This is sure to be a fun event. But working in the foundations of physics is not always easy.

When I write a grant proposal, inevitably I will get to the part in which I have to explain the purpose of my work. My first reaction to this is always: What’s the purpose of anything anyway?

My second thought is: Why do only scientists get this question? Why doesn’t anyone ask Gucci what’s the purpose of the Spring collection? Or Ed Sheeran what’s the purpose of singing about your ex-lover? Or Ronaldo what’s the purpose of running after a leather ball and trying to kick it into a net?

Well, you might say, the purpose is that people like to buy it, hear it, watch it. But what’s the purpose of that? Well, it makes their lives better. And what’s the purpose of that?

If you go down the rabbit hole, you find that whenever you ask for purpose you end up asking what’s the purpose of life. And to that, not even scientists have an answer.

Sometimes I therefore think maybe that’s why they ask us to explain the purpose of our work. Just to remind us that science doesn’t have answers to everything.

But then we all know that the purpose of the purpose section in a grant proposal is not to actually explain the purpose of what you do. It is to explain how your work contributes to what other people think its purpose should be. And that often means applications and new technology. It means something you can build, or sell, or put under the Christmas tree.

I am sure I am not the only one here who has struggled to explain the purpose of work in the foundations of physics. I therefore want to share with you an observation that I have made during more than a decade of public outreach: No one from the public ever asks this question. It comes from funding bodies and politicians exclusively.

Everyone else understands just fine what’s the purpose of trying to describe space and time and matter, and the laws they are governed by. The purpose is to understand. These laws describe our universe; they describe us. We want to know how they work.

Seeking this knowledge is the purpose of our work. And, if you collect it in a book, you can even put it under a Christmas tree.

So I think we should not be too apologetic about what we are doing. We are not the only ones who care about the questions we are trying to answer. A lot of people want to understand how the universe works. Because understanding makes their lives better. Whatever is the purpose of that.

But I must add that through my children I have rediscovered the joys of materialism. Kids these days have the most amazing toys. They have tablets that take videos – by voice control. They have toy helicopters – that actually fly. They have glittery slime that glows in the dark.

So, stuff is definitely fun. Let me say some words on applications of the foundations of physics.

In contrast to most people who work in the field – and probably most of you – I do not think that whatever new we will discover in the foundations will remain pure knowledge, detached from technology. The reason is that I believe we are missing something big about the way that quantum theory cooperates with space and time.

And if we solve this problem, it will lead to new insights about quantum mechanics, the theory behind all our fancy new electronic gadgets. I believe the impact will be substantial.

You don’t have to believe me on this.

I hope you will believe me, though, when I say that this conference gathers some of the brightest minds on the planet and tackles some of the biggest questions we know.

I wish all of you an interesting and successful meeting.

Sunday, July 08, 2018

Away Note

I’ll be in Utrecht next week for the 19th UK and European Conference on Foundations of Physics. August 28th I’ll be in Santa Fe, September 6th in Oslo, September 22nd I’ll be in London for another installment of the HowTheLightGetsIn Festival.

I have been educated that this festival derives its name from Leonard Cohen’s song “Anthem” which features the lines
“Ring the bells that still can ring
Forget your perfect offering
There is a crack in everything
That’s how the light gets in.”
If you have read my book, the crack metaphor may ring a bell. If you haven’t, you should.

October 3rd I’m in NYC, October 4th I’m in Richmond, Kentucky, and the second week of October I am at the International Book Fair in Frankfurt.

In case our paths cross, please say “Hi” – I’m always happy to meet readers irl.

Thursday, July 05, 2018

Limits of Reductionism

Almost forgot to mention that I won 3rd prize in the 2018 FQXi essay contest “What is fundamental?”

The new essay continues my thoughts about whether free will is or isn’t compatible with what we know about the laws of nature. For many years I was convinced that the only way to make free will compatible with physics is to adopt a meaningless definition of free will. The current status is that I cannot exclude that it is compatible.

The conflict between physics and free will is that to our best current knowledge everything in the universe is made of a few dozen particles (give or take some more for dark matter) and we know the laws that determine those particles’ behavior. They all work the same way: If you know the state of the universe at one time, you can use the laws to calculate the state of the universe at all other times. This implies that what you do tomorrow is already encoded in the state of the universe today. There is, hence, nothing free about your behavior.

Of course nobody knows the state of the universe at any one time. Also, quantum mechanics makes the situation somewhat more difficult in that it adds randomness. This randomness would prevent you from actually making a prediction for exactly what happens tomorrow even if you knew the state of the universe at one moment in time. With quantum mechanics, you can merely make probabilistic statements. But just because your actions have a random factor doesn’t mean you have free will. Atoms randomly decay and no one would call that free will. (Well, no one in their right mind anyway, but I’ll postpone my rant about panpsychic pseudoscience to some other time.)

People also often quote chaos to insist that free will is a thing, but please note that chaos is predictable in principle, it’s just not predictable in practice because it makes a system’s behavior highly dependent on the exact values of initial conditions. The initial conditions, however, still determine the behavior. So, neither quantum mechanics nor chaos bring back free will into the laws of nature.

Now, there are a lot of people who want you to accept watered-down versions of free will, eg that you have free will because no one can in practice predict your behavior, or because no one can tell what’s going on in your brain, and so on. But I think this is just verbal gymnastics. If you accept that the current theories of particle physics are correct, free will doesn’t exist in a meaningful way.

That is as long as you believe – as almost all physicists do – that the laws that dictate the behavior of large objects follow from the laws that dictate the behavior of the object’s constituents. That’s what reductionism tells us, and let me emphasize that reductionism is not a philosophy, it’s an empirically well-established fact. It describes what we observe. There are no known exceptions to it.

And we have methods to derive the laws of large objects from the laws for small objects. In this case, then, we know that predictive laws for human behavior exist, it’s just that in practice we can’t compute them. It is the formalism of effective field theories that tells us just how the behavior and interactions of large objects relate to the behavior and interactions of their smaller constituents.

There are a few examples in the literature where people have tried to find systems for which the behavior on large scales cannot be computed from the behavior at small scales. But these examples use unrealistic systems with an infinite number of constituents and I don’t find them convincing cases against reductionism.

It occurred to me some years ago, however, that there is a much simpler example for how reductionism can fail. It can fail simply because the extrapolation from the theory at short distances to the one at long distances is not possible without inputting further information. This can happen if the scale-dependence of a constant has a singularity, and that’s something which we cannot presently exclude.

By singularity I here do not mean a divergence, ie that something becomes infinitely large. Such situations are unphysical and not cases I would consider plausible for realistic systems. But functions can have singularities without anything becoming infinite: A singularity is merely a point beyond which a function cannot be continued.
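A simple example of what I mean (just a mathematical illustration, not a claim about any particular physical coupling): the function

f(x) = \sqrt{1-x}

is perfectly finite as x approaches 1, it simply equals zero there. But you cannot continue it past x = 1 as a real function, and as a complex function the point x = 1 is a branch point where the continuation is no longer unique. Nothing diverges, yet the function cannot be extended beyond that point without additional input.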

I do not currently know of any example for which this actually happens. But I also don’t know a way to exclude it.

Now consider you want to derive the theory for the large objects (think humans) from the theory for the small objects (think elementary particles) but in your derivation you find that one of the functions has a singularity at some scale in between. This means you need new initial values past the singularity. It’s a clean example for a failure of reductionism, and it implies that the laws for large objects indeed might not follow from the laws for small objects.

It will take more than this to convince me that free will isn’t an illusion, but this example for the failure of reductionism gives you an excuse to continue believing in free will.

Full essay with references here.

Thursday, June 28, 2018

How nature became unnatural

Naturalness is an old idea; it dates back at least to the 16th century and captures the intuition that a useful explanation shouldn’t rely on improbable coincidences. Typical examples for such coincidences, often referred to as “conspiracies,” are two seemingly independent parameters that almost cancel each other, or an extremely small yet nonzero number. Physicists believe that theories which do not have such coincidences, and are natural in this particular sense, are more promising than theories that are unnatural.

Naturalness has its roots in human experience. If you go for a walk and encounter a delicately balanced stack of stones, you conclude someone constructed it. This conclusion is based on your knowledge that stones distributed throughout landscapes by erosion, weathering, deposition, and other geological processes aren’t likely to end up in neat piles. You know this quite reliably because you have seen a lot of stones, meaning you have statistics from which you can extract a likelihood.

As the example hopefully illustrates, naturalness is a good criterion in certain circumstances, namely when you have statistics, or at least means to derive statistics. A solar system with ten planets in almost the same orbit is unlikely. A solar system with ten planets in almost the same plane isn’t. We know this both because we’ve observed a lot of solar systems, and also because we can derive their likely distribution using the laws of nature discovered so far, and initial conditions that we can extract from yet other observations. So that’s a case where you can use arguments from naturalness.

But this isn’t how arguments from naturalness are used in theory-development today. In high energy physics and some parts of cosmology, physicists use naturalness to select a theory for which they do not have – indeed cannot ever have – statistical distributions. The trouble is that they ask which values of parameters in a theory are natural. But since we can observe only one set of parameters – the one that describes our universe – we have no way of collecting data for the likelihood of getting a specific set of parameters.

Physicists use criteria from naturalness anyway. In such arguments, the probability distribution is unspecified, but often implicitly assumed to be almost uniform over an interval of size one. There is, however, no way to justify this distribution; it is hence an unscientific assumption. This problem was made clear already in a 1994 paper by Anderson and Castano.
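To make the dependence on the unspoken distribution explicit, here is a small toy calculation (my own illustration, not from the Anderson-Castano paper): ask how “unlikely” a dimensionless parameter below 0.01 is, once under a uniform prior on [0,1] and once under a prior that is uniform in the logarithm.

import numpy as np

# How "improbable" is a dimensionless parameter below 0.01? The answer depends
# entirely on the distribution you assume; the numbers below are just a toy.
rng = np.random.default_rng(0)
n = 1_000_000
threshold = 1e-2

uniform_samples = rng.uniform(0.0, 1.0, n)                 # uniform on [0, 1]
log_uniform_samples = 10.0 ** rng.uniform(-10.0, 0.0, n)   # uniform in log10 on [1e-10, 1]

print("P(x < 0.01), uniform prior:    ", np.mean(uniform_samples < threshold))      # about 0.01
print("P(x < 0.01), log-uniform prior:", np.mean(log_uniform_samples < threshold))  # about 0.8

Same parameter, same (absent) data, wildly different verdicts on how improbable the small value is; whatever you conclude reflects the distribution you put in.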

The standard model of particle physics, or the mass of the Higgs-boson more specifically, is unnatural in the above described way, and this is currently considered ugly. This is why theorists invented new theories to extend the Standard Model so that naturalness would be reestablished. The most popular way to do this is by making the Standard Model supersymmetric, thereby adding a bunch of new particles.

The Large Hadron Collider (LHC), like several previous experiments, has not found any evidence for supersymmetric particles. This means that according to the currently used criterion of naturalness, the theories of particle physics are, in fact, unnatural. That’s also why we presently do not have reason to think that a larger particle collider would produce so-far unknown particles.

In my book “Lost in Math: How Beauty Leads Physics Astray,” I use naturalness as an example for unfounded beliefs that scientists adhere to. I chose naturalness because it’s timely, what with the LHC now ruling it out, but I could have used other examples.

A lot of physicists, for example, believe that experiments have ruled out hidden variables explanations of quantum mechanics, which is just wrong (experiments have ruled out only certain types of local hidden variable models). Or they believe that observations of the Bullet Cluster have ruled out modified gravity, which is similarly wrong (the Bullet Cluster is a statistical outlier that is hard to explain both with dark matter and with modified gravity). Yes, the devil’s in the details.

Remarkable about these cases isn’t that scientists make mistakes – everyone does – but that they insist on repeating wrong claims, in many cases publicly, even after you have explained to them why they’re wrong. These and other examples like this leave me deeply frustrated because they demonstrate that even in science it’s seemingly impossible to correct mistakes once they have been adopted by sufficiently many practitioners. It’s this widespread usage that makes it “safe” for individuals to repeat statements they know are wrong, or at least do not know to be correct.

I think this highlights a serious problem with the current organization of academic research. That this can happen worries me considerably because I have no reason to think it’s confined to my own discipline.

Naturalness is an interesting case to keep an eye on. That’s because the LHC now has delivered data that shows the idea was wrong – none of the predictions for supersymmetric particles, or extra dimensions, or tiny black holes, and so on, came true. One possible way for particle physicists to deal with the situation is to amend criteria of naturalness so that they are no longer in conflict with data. I sincerely hope this is not the way it’ll go. The more enlightened way would be to find out just what went wrong.

That you can’t speak about probabilities without a probability distribution isn’t a particularly deep insight, but I’ve had a hard time getting particle physicists to acknowledge this. I summed up my arguments in my January paper, but I’ve been writing and talking about this for 10+ years without much resonance.

I was therefore excited to see that James Wells has a new paper on the arXiv. In this paper, Wells lays out the problems with the missing probability distribution using several simple examples. And in contrast to me, Wells isn’t a no-one; he’s a well-known US-American particle physicist and Professor at the University of Michigan.

So, now that a man has said it, I hope physicists will listen.



Aside: I continue to have technical troubles with the comments on this blog. Notification has not been working properly for several weeks, which is why I am approving comments with much delay and reply erratically. In the current arrangement, I can neither read the full comment before approving it, nor can I keep comments unread, so as to remind myself to reply, as I did previously. Google says they’ll be fixing it, but I’m not sure what, if anything, they’re doing to make that happen.

Also, my institute wants me to move my publicly available files elsewhere because they are discontinuing the links that I have used so far. For this reason most images in the older blogposts have disappeared. I have to manually replace all these links which will take a while. I am very sorry for the resulting ugliness.

Saturday, June 23, 2018

Particle Physics now Belly Up

Particle physics. Artist’s impression.
Professor Ben Allanach is a particle physicist at Cambridge University. He just wrote an advertisement for my book that appeared on Aeon some days ago under the title “Going Nowhere Fast”.

I’m kidding of course, Allanach’s essay has no relation to my book. At least not that I know of. But it’s not a coincidence he writes about the very problems that I also discuss in my book. After all, the whole reason I wrote the book was that this situation was foreseeable: The Large Hadron Collider hasn’t found evidence for any new particles besides the Higgs-boson (at least not so far), so now particle physicists are at a loss for how to proceed. Even if they find something in the data that’s yet to come, it is clear already that their predictions were wrong.

Theory-development in particle physics for the last 40 years has worked mostly by what is known as “top-down” approaches. In these approaches you invent a new theory based on principles you cherish and then derive what you expect to see at particle colliders. This approach has worked badly, to say the least. The main problem, as I lay out in my book, is that the principles which physicists used to construct their theories are merely aesthetic requirements. Top-down approaches, for example, postulate that the fundamental forces are unified or that the universe has additional symmetries or that the parameters in the theory are “natural.” But none of these assumptions are necessary, they’re just pretty guesses.

The opposite to a top-down approach, as Allanach lays out, is a “bottom-up” approach. For that you begin with the theories you have confirmed already and add possible modifications. You do this so that the modifications only become relevant in situations that you have not yet tested. Then you look at the data to find out which modifications are promising because they improve the fit to the data. It’s an exceedingly unpopular approach because the data have just told us over and over and over again that the current theories are working fine and require no modification. Also, bottom-up approaches aren’t pretty which doesn’t help their popularity.

Allanach, like several other people I know, has stopped working on supersymmetry, an idea that has for a long time been the most popular top-down approach. In principle it’s a good development that researchers in the field draw consequences from the data. But if they don’t try to understand just what went wrong – why so many theoretical physicists believed in ideas that do not describe reality – they risk repeating the same mistake. It’s of no use if they just exchange one criterion of beauty with another.

Bottom-up approaches are safely on the scientific side. But they also increase the risk that we get stuck with the development of new theories because without top-down approaches we do not know where to look for new data. That’s why I argue in my book that some mathematical principles for theory-development are okay to use, namely those which prevent internal contradictions. I know this sounds lame and rather obvious, but in fact it is an extremely strong requirement that, I believe, hasn’t been pushed as far as we could push it.

This top-down versus bottom-up discussion isn’t new. It has come up each time the supposed predictions for new particles turned out to be wrong. And each time the theorists in the field, rather than recognizing the error in their ways, merely adjusted their models to evade experimental bounds and continued as before. Will you let them get away with this once again?

Tuesday, June 19, 2018

Astrophysicists try to falsify multiverse, find they can’t.

Ben Carson, trying to
make sense of the multiverse.
The idea that we live in a multiverse – an infinite collection of universes from which ours is merely one – is interesting but unscientific. It postulates the existence of entities that are unnecessary to describe what we observe. All those other universes are inaccessible to experiment. Science, therefore, cannot say anything about their existence, neither whether they do exist nor whether they don’t exist.

The EAGLE collaboration now knows this too. They recently published results of a computer simulation that details how the formation of galaxies is affected when one changes the value of the cosmological constant, the constant which quantifies how fast the expansion of the universe accelerates. The idea is that, if you believe in the multiverse, then each simulation shows a different universe. And once you know which universes give rise to galaxies, you can calculate how likely we are to be in a universe that contains galaxies and also has the cosmological constant that we observe.

We already knew before the new EAGLE paper that not all values of the cosmological constant are compatible with our existence. If the magnitude of the cosmological constant is too large, the universe either collapses quickly after formation (if the constant is negative), so that galaxies are never formed, or it expands so quickly that structures are torn apart before galaxies can form (if the constant is positive).

New is that by using computer simulations, the EAGLE collaboration is able to quantify and also illustrate just how the structure formation differs with the cosmological constant.

The quick summary of their results is that if you turn up the cosmological constant and keep all other physics the same, then making galaxies becomes difficult once the cosmological constant exceeds about 100 times the measured value. The authors haven’t looked at negative values of the cosmological constant because (so they write) that would be difficult to include in their code.

The below image from their simulation shows an example for the gas density. On the left you see a galaxy prototype in a universe with zero cosmological constant. On the right the cosmological constant is 30 times the measured value. In the right image structures are smaller because the gas halos have difficulties growing in a rapidly expanding universe.

From Figure 7 of Barnes et al, MNRAS 477 (3), 3727–3743 (2018).

This, however, is just turning knobs on computer code, so what does this have to do with the multiverse? Nothing really. But it’s fun to see how the authors are trying really hard to make sense of the multiverse business.

A particular headache for multiverse arguments, for example, is that if you want to speak about the probability of an observer finding themselves in a particular part of the multiverse, you have to specify what counts as observer. The EAGLE collaboration explains:
“We might wonder whether any complex life form counts as an observer (an ant?), or whether we need to see evidence of communication (a dolphin?), or active observation of the universe at large (an astronomer?). Our model does not contain anything as detailed as ants, dolphins or astronomers, so we are unable to make such a fine distinction anyway.”
But even after settling the question whether dolphins merit observer-status, a multiverse per se doesn’t allow you to calculate the probability for finding this or that universe. For this you need additional information: a probability distribution or “measure” on the multiverse. And this is where the real problem begins. If the probability of finding yourself in a universe like ours is small you may think that disfavors the multiverse hypothesis. But it doesn’t: It merely disfavors the probability distribution, not the multiverse itself.
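For illustration, here is a toy version of the measure problem in Python (all numbers invented for the example, not taken from the EAGLE simulations): assume some galaxy-formation efficiency as a function of the cosmological constant Λ, and ask how typical our value is under two different measures.

import numpy as np

# Lambda in units of the observed value. The "efficiency" curve is a made-up
# stand-in for the fraction of matter that ends up in galaxies, falling off
# once Lambda exceeds roughly 100 times the observed value.
lam = np.linspace(0.01, 300.0, 100_000)
efficiency = 1.0 / (1.0 + (lam / 100.0)**3)

# Two candidate measures on the multiverse: flat in Lambda, or flat in log(Lambda).
measures = {"flat measure": np.ones_like(lam), "log measure ": 1.0 / lam}

for name, m in measures.items():
    weight = m * efficiency            # "number of observers" ~ measure x efficiency
    below = lam <= 1.0
    prob = np.trapz(weight[below], lam[below]) / np.trapz(weight, lam)
    print(name, "P(Lambda <= observed) =", round(prob, 3))

Same physics input (the efficiency curve), two very different answers for how probable our universe is; the difference comes entirely from the choice of measure.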

The EAGLE collaboration elaborates on the conundrum:
“What would it mean to apply two different measures to this model, to derive two different predictions? How could all the physical facts be the same, and yet the predictions of the model be different in the two cases? What is the measure about, if not the universe? Is it just our own subjective opinion? In that case, you can save yourself all the bother of calculating probabilities by having an opinion about your multiverse model directly.”
Indeed. You can even save yourself the bother of having a multiverse to begin with because it doesn’t explain any observation that a single universe wouldn’t also explain.

The authors eventually find that some probability distributions make our universe more, others less probable. Not that you need a computer cluster for that insight. Still, I guess we should applaud the EAGLE people for trying. In their paper, they conclude: “A specific multiverse model must justify its measure on its own terms, since the freedom to choose a measure is simply the freedom to choose predictions ad hoc.”

But of course a model can never justify itself. The only way to justify a physical model is that it fits observation. And if you make ad hoc choices to fit observations you may as well just choose the cosmological constant to be what we observe and be done with it.

In summary, the paper finds that the multiverse hypothesis isn’t falsifiable. If you paid any attention to the multiverse debate, that’s hardly surprising, but it is interesting to see astrophysicists attempting to squeeze some science out of it.

I think the EAGLE study makes a useful contribution to the literature. Multiverse proponents have so far argued that what they do is science because some versions of the multiverse are testable in our universe, for example by searching for entanglement between universes, or for evidence that our universe has collided with another one in the past.

It is correct that some multiverse types are testable, but to the extent that they have been tested, they have been ruled out. This, of course, has not ruled out the multiverse per se, because there are still infinitely many types of multiverses left. For those, the only thing you can do is make probabilistic arguments. The EAGLE paper now highlights that these can’t be falsified either.

I hope that showcasing the practical problem, as the EAGLE paper does, will help clarify the unscientific basis of the multiverse hypothesis.

Let me be clear that the multiverse is a fringe idea in a small part of the physics community. Compared to the troubled scientific methodologies in some parts of particle physics and cosmology, multiverse madness is a minor pest. No, the major problem with the multiverse is its popularity outside of physics. Physicists from Brian Greene to Leonard Susskind to Andrei Linde have publicly spoken about the multiverse as if it was best scientific practice. And that well-known physicists pass the multiverse off as science isn’t merely annoying, it actively damages the reputation of science. A prominent example for the damage that can result comes from the 2015 Republican Presidential Candidate Ben Carson.

Carson is a retired neurosurgeon who doesn’t know much physics, but what he knows he seems to have learned from multiverse enthusiasts. On September 22, 2015, Carson gave a speech at a Baptist school in Ohio, informing his audience that “science is not always correct,” and then went on to justify his science skepticism by making fun of the multiverse:
“And then they go to the probability theory, and they say “but if there’s enough big bangs over a long enough period of time, one of them will be the perfect big bang and everything will be perfectly organized.””
In an earlier speech he cheerfully added: “I mean, you want to talk about fairy tales? This is amazing.”

Now, Carson has misunderstood much of elementary thermodynamics and cosmology, and I have no idea why he thinks he’s even qualified to give speeches about physics. But really this isn’t the point. I don’t expect neurosurgeons to be experts in the foundations of physics and I hope Carson’s audience doesn’t expect that either. Point is, he shows what happens when scientists mix fact with fiction: Non-experts throw out both together.

In his speech, Carson goes on: “I then say to them, look, I’m not going to criticize you. You have a lot more faith than I have… I give you credit for that. But I’m not going to denigrate you because of your faith and you shouldn’t denigrate me for mine.”

And I’m with him on that. No one should be denigrated for what they believe in. If you want to believe in the existence of infinitely many universes with infinitely many copies of yourself, that’s all fine with me. But please don’t pass it off as science.


If you want to know more about the conflation between faith and knowledge in theoretical physics, read my book “Lost in Math: How Beauty Leads Physics Astray.”

Friday, June 15, 2018

Physicists are very predictable

I have a new paper on the arXiv, which came out of a collaboration with Tobias Mistele and Tom Price. We fed a neural network data about the publication activity of physicists and made a "fake" prediction: we used data from the years 1996 up to 2008 to predict the following ten years. The data come from the arXiv via the Open Archives Initiative.

To train the network, we took a random sample of authors and asked the network to predict these authors’ publication data. In each cycle the network learned how good or bad its prediction was and then tried to further improve it.
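
For readers who like to see things in code, here is a minimal sketch of what such a setup looks like. To be clear, this is not our actual pipeline: the features, the network architecture, and all numbers below are made up for illustration, and I'm using scikit-learn's off-the-shelf regressor rather than our own implementation.

```python
# Toy sketch of the training setup (not our actual code): features are
# per-author yearly paper counts up to 2008, the target is some future
# citation score. Everything here is synthetic, for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_authors = 5000
X = rng.poisson(lam=3.0, size=(n_authors, 13)).astype(float)      # papers/year, 1996-2008
y = 0.8 * X.sum(axis=1) + rng.normal(scale=2.0, size=n_authors)   # made-up target

# Random sample of authors for training, the rest held out for testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_train, y_train)   # each training cycle nudges the weights to reduce the error
print("Score on authors the network has never seen:", net.score(X_test, y_test))
```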

Concretely, we trained the network to predict the h-index, a measure of the citations a researcher has accumulated: it is the largest number h such that the researcher has h papers with at least h citations each. We didn't use this number because we think it's particularly important, but simply because other groups have previously studied it with neural networks in disciplines other than physics. Looking at the h-index therefore allowed us to compare our results with those of the other groups.
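
For illustration, this is what that computation looks like in code; it's just the textbook definition, not the code we used:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # prints 4
print(h_index([50, 2, 1]))        # prints 2
```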

After completing the training, we asked how well the network can predict the citations accumulated by authors who were not in the training group. The common way to quantify the quality of such a prediction is the coefficient of determination, R². The higher the coefficient of determination, the closer the predictions track the actual numbers, hence the better the prediction. The figure below shows the result of our neural network, compared with some other predictors. As you can see, we did pretty well!

The blue (solid) curve labelled "Net" shows how good the prediction of our network is for extrapolating the h-index over the number of years. The other two curves use simpler predictors on the same data.

We found a coefficient of determination of 0.85 for a prediction over ten years. Earlier studies based on machine learning found 0.48 in the life sciences and 0.72 in the computer sciences.
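
If you want to see the definition rather than take my word for it, here is a minimal sketch of how one computes R² from predicted and actual values (the numbers below are made up):

```python
import numpy as np

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - (residual sum of squares / total sum of squares)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Made-up example: predicted vs. actual h-indices ten years later.
actual    = [12, 7, 20, 3, 15]
predicted = [11, 8, 18, 4, 16]
print(r_squared(actual, predicted))  # the closer to 1, the better the prediction
```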

But admittedly the coefficient of determination doesn’t tell you all that much unless you’re a statistician. So for illustration, here are some example trajectories that show the network’s prediction compared with the actual trend (more examples in the paper).

However, that our prediction is better than the earlier ones is only partly due to our network's performance. It turns out our data are also intrinsically easier to predict, even with simple measures. You can, for example, just linearly extrapolate the h-index, and while that prediction isn't as good as the network's, it is still better than the predictions from the other disciplines. You see this in the figure above for the coefficient of determination: used on the arXiv data, even the simple predictors achieve something like 0.75.
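
For illustration, this is what such a simple linear baseline looks like in code (the h-index history below is made up); to get a number like the 0.75 quoted above, you'd compute R² over many authors, for instance with the little function from before:

```python
import numpy as np

def linear_forecast(years, h_values, target_year):
    """Fit a straight line to the observed h-index history and extrapolate it."""
    slope, intercept = np.polyfit(years, h_values, deg=1)
    return slope * target_year + intercept

# Made-up author: h-index observed for 1996-2008, extrapolated to 2018.
years = np.arange(1996, 2009)
h_history = [0, 1, 1, 2, 3, 4, 4, 5, 6, 7, 8, 8, 9]
print(linear_forecast(years, h_history, 2018))
```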

Why the arXiv data are so much easier to predict, we don't know. One possible reason could be that the sub-disciplines of physics are more compartmentalized and researchers often stay in the fields they started out with. Or, as Nima Arkani-Hamed put it when I interviewed him, "everybody does the analytic continuation of what they've been doing for their PhD." (Srsly, the book is fun, you don't want to miss it.) In this case you establish a reputation early on and your colleagues know what to expect from you. It seems plausible to me that in such highly specialized communities it would be easier to extrapolate citations than in more mixed-up communities. But really this is just speculation; the data don't tell us that.

Having said this, by and large the network predictions are scarily good. And that's even though our data are woefully incomplete. We cannot presently, for example, include any papers that are not on the arXiv. Now, in some categories, like hep-th, pretty much all papers are on the arXiv. But in other categories that isn't the case. So we are simply missing information about what researchers are doing. We also have the usual problem of identifying authors by their names, and we haven't always been able to find the journal in which a paper was published.

Now, if you allow me to extrapolate the present situation, data will become better and more complete. Also the author-identification problem will, hopefully, be resolved at some point. And this means that the predictivity of neural networks chewing on this data is likely to increase some more.

Of course we did not actually make future predictions in the present paper, because in this case we wouldn’t have been able to quantify how good the prediction was. But we could now go and train the network with data up to 2018 and extrapolate up to 2028. And I predict it won’t be long until such extrapolations of scientists’ research careers will be used in hiring and funding decisions. Sounds scary?

Oh, I know, many of you are now dying to see the extrapolation of your own publishing history. I haven't seen mine. (Really, I haven't. We treat the authors as anonymous numbers.) But (if I can get funding for it) we will make these predictions publicly available in the coming year. If we don't, rest assured someone else will. And in that case it might end up being proprietary software.

My personal conclusion from this study is that it’s about time we think about how to deal with personalized predictors for research activity.

Tuesday, June 12, 2018

Lost in Math: Out Now.

Today is the official publication date of my book “Lost in Math: How Beauty Leads Physics Astray.” There’s an interview with me in the current issue of “Der Spiegel” (in German) with a fancy photo, and an excerpt at Scientific American.

In the book I don’t say much about myself or my own research. I felt that was both superfluous and not particularly interesting. However, a lot of people have asked me about a comment I made in the passing in an earlier blogpost: “By writing [this book], I waived my hopes of ever getting tenure.” Even the Spiegel-guy who interviewed me asked about this! So I feel like I should add some explanation here to prevent misunderstandings. I hope you excuse that this will be somewhat more personal than my usual blogposts.

I am not tenured and I do not have a tenure-track position, so it's not like someone threatened me. I presently have a temporary contract which will run out next year. What I should be doing right now is applying for faculty positions. Now imagine you work at some institution which has a group in my research area. Everyone is happily producing papers in record numbers, but I go around and say this is a waste of money. Would you give me a job? You probably wouldn't. I probably wouldn't give me a job either.

That’s what prompted my remark, and I think it is a realistic assessment. But please note that the context in which I wrote this wasn’t a sudden bout of self-pity, it was to report a decision I made years ago.

I know you only get to see the results now, but I sold the book proposal in 2015. In the years prior to this, I was shortlisted for some faculty positions. In the end that didn’t pan out, but the interviews prompted me to imagine the scenario in which I actually got the job. And if I was being honest to myself, I didn’t like the scenario.

I have never been an easy fit in academia. I guess I was hoping I'd grow into it, but with time my fit has only become more uneasy. At some point I simply concluded I have had enough of this nonsense. I don't want to be associated with a community which wastes tax money because its practitioners think they are morally and intellectually so superior that they cannot possibly be affected by cognitive biases. You only have to read the comments on this blog to witness the origin of the problem, as with commenters who work in the field laughing off the idea that their objectivity could possibly be affected by working in echo chambers. I can't even.

As to what I’ll do when my contract runs out, I don’t know. As everyone who has written popular science books will confirm, you don’t get rich from it. The contract with Basic Books would never have paid for the actual working time, and that was before my agent got his share and the tax took its bite. (And while I am already publicly answering questions about my income, I also frequently get asked how much “money” I make with the ads on this blog. It’s about one dollar a day. Really the ads are only there so I don’t feel like I am writing here entirely for nothing.)

What typically happens when I write about my job situation is that everyone offers me advice. This is very kind, but I assure you I am not writing this because I am asking for help. I will be fine, do not worry about me. Yes, I don’t know what I’ll do next year, but something will come to my mind.

What needs help isn’t me, but academia: The current organization amplifies rather than limits the pressure to work on popular and productive topics. If you want to be part of the solution, the best starting point is to read my book.

Thanks for listening. And if you still haven’t had enough of me, Tam Hunt has an interview with me up at Medium. You can leave comments on this interview here.

More info on the official book website: lostinmathbook.com

Monday, June 11, 2018

Are the laws of nature beautiful? (2nd book trailer)

Here is the other video trailer for my book "Lost in Math: How Beauty Leads Physics Astray". 



Since I have been asked repeatedly, let me emphasize again that the book is aimed at non-experts, or "the interested lay reader" as they say. You do not need any background knowledge in math or physics and, no, there are no equations in the book, just a few numbers. It's really about the question of what we mean by "beautiful theories" and how that quest for beauty affects scientists' research interests. You will understand that without equations.

The book has by now been read by several non-physicists and none of them reported great levels of frustration, so I have reason to think I roughly aimed at the right level of technical detail.

Having said that, the book should also be of interest to you if you are a physicist, not because I explain what the standard model is, but because you will get to hear what some top people in the field think about the current situation. (And I swear I was nice to them. My reputation is far worse than I am.)

You can find a table of contents, a list of the people I interviewed, and transcripts of the video trailers on the book website.

Saturday, June 09, 2018

Video Trailer for "Lost in Math"

I’ve been told that one now does video trailers for books and so here’s me explaining what led me to write the book.

Friday, June 08, 2018

Science Magazine had my book reviewed, and it’s glorious, glorious.

Science Magazine had my book “Lost in Math” reviewed by Dr. Djuna Lize Croon, a postdoctoral associate at the Department of Physics at Dartmouth College, in New Hampshire, USA. Dr. Croon has worked on the very research topics that my book exposes as mathematical fantasies, such as “extra natural inflation” or “holographic composite Higgs models,” so choosing her to review my book is an interesting move for sure.

Dr. Croon does not disappoint. After just having read a whole book that explains how scientists fail to acknowledge that their opinions are influenced by the communities they are part of, Dr. Croon begins her review by quoting an anonymous Facebook friend who denigrates me as a blogger and tells Dr. Croon to dislike my book because I am not "offering solutions." In her review, then, Dr. Croon reports being shocked to find that I disagree with her scientific heroes, dislikes that I put forward my own opinions, and then promptly arrives at the same conclusion that her Facebook friend kindly offered beforehand.

The complaint that I merely criticize my community without making recommendations for improvement is particularly interesting because, to begin with, it's wrong. I do spell out very clearly in the book that I think theoretical physicists in the foundations should focus on mathematical consistency and on making contact with experiment rather than blathering about beauty. I also say concretely which topics I think are most promising, though I warn the reader that of course I too am biased and they should come to their own conclusions.

Even leaving aside that I do offer recommendations for improvement, I don’t know why it’s my task to come up with something else for those people to do. If they can’t come up with something else themselves, maybe we should just stop throwing money at them?

On a more technical note, I find it depressing that Dr. Croon in her review writes that naturalness has “a statistical meaning” even though I explain like a dozen times throughout the book that you can’t speak of probabilities without having a probability distribution. We have only this set of laws of nature. We have no statistics from which we could infer a likelihood of getting exactly these laws.

In summary, this review does an awesome job highlighting the very problems that my book is about.

Update June 17th: Remarkably enough, the editors at Science decided to remove the Facebook quote from the review.

Physicist concludes there are no laws of physics.

It was June 4th, 2018, when Robbert Dijkgraaf, director of the world-renowned Institute for Advanced Study in Princeton, announced his breakthrough insight. After decades of investigating string theory, Dijkgraaf has concluded that there are no laws of physics.

Guess that’s it, then, folks. Don’t forget to turn off the light when you close the lab door for the last time.

Dijkgraaf knows what he is talking about. "Once you have understood the language in which [the cosmos] is written, it is extremely elegant, beautiful, powerful and natural," he explained back in 2015. "The universe wants to be understood and that's why we are all driven by this urge to find an all-encompassing theory."

This urge has driven Dijkgraaf and many of his colleagues to pursue string theory, which they originally hoped would give rise to the unique theory of everything. That didn’t quite pan out though, and not surprisingly so: The idea of a unique theory is a non-starter. Whether a theory is unique or not depends on what you require of it. The more requirements, the better specified the theory. And whether a theory is the unique one that fulfils these or those requirements tells you nothing about whether it actually correctly describes nature.

But in the last two decades it has become popular in the foundations of physics to no longer require a theory to describe our universe. Without that requirement, then, theories contain an infinite number of universes that are collectively referred to as “the multiverse.” Theorists like this idea because having fewer requirements makes their theory simpler, and thus more beautiful. The resulting theory then uniquely explains nothing.

Of course if you have a theory with a multiverse and want to describe our universe, you have to add back the requirements you discarded earlier. That’s why no one who actually works with data ever starts with a multiverse – it’s utterly useless; Occam’s razor safely shaves it off. The multiverse doesn’t gain you anything, except possibly the ability to make headlines and speak of “paradigm changes.”

In string theory in particular, to describe our universe we’d need to specify just what happens with the additional dimensions of space that the theory needs. String theorists don’t like to make this specification because they don’t know how to make it. So instead they say that since string theory offers so many options for how to make a universe, one of them will probably fit the bill. And maybe one day they will find a meta-law that selects our universe.

Maybe they will. But until then the rest of us will work under the assumption that there are laws of physics. So far, it’s worked quite well, thank you very much.

If you want to know more about what bizarre ideas theoretical physicists have lately come up with, read my book “Lost in Math.”

Sunday, June 03, 2018

Book Update: Books are printed!

[Photo: Lara.]
I had just returned from my trip to Dublin when the door rang and the UPS man dumped two big boxes on our doorstep. My husband has a habit of ordering books by the dozens, so my first thought was that this time he’d really outdone himself. Alas, the UPS guy pointed out, the boxes were addressed to me.

I signed, feeling guilty for having forgotten that I'd ordered something from Lebanon, that being the origin of the parcels. But when I cut the tape and opened the boxes I found – drumroll please – 25 copies of "Lost in Math." Turns out my publisher has their books printed in Lebanon.

I had gotten neither galleys nor review copies, so that was the first time I actually saw The-Damned-Book, as it's been referred to in our household for the past three years. And The-Damned-Book is finally, FINALLY, a real book!

The cover looks much better in print than it does in the digital version because it has some glossy and some matte parts and, well, at least two seven-year-old girls agree that it's a pretty book, and also mommy's name is on the cover and there's a mommy photo in the back, and that's about as far as their interest went.

[Photo: Gloria.]
I'm so glad this is done. When I signed the contract in 2015, I had no idea how nerve-wracking it would be to wait for the publication. In hindsight, it was a totally nutty idea to base the whole premise of The-Damned-Book on the absence of progress in the foundations of physics when such progress could happen literally any day. For three years now I've been holding my breath every time there was a statistical fluctuation in the data.

But now – with little more than a week to go until publication – it seems exceedingly unlikely anything will change about the story I am telling: Fact is, theorists in the foundations of physics have been spectacularly unsuccessful with their predictions for more than 30 years now. (The neutrino-anomaly I recently wrote about wasn’t a prediction, so even if it holds up it’s not something you could credit theorists with.)

The story here isn’t that theorists have been unsuccessful per se, but that they’ve been unsuccessful and yet don’t change their methods. That’s especially perplexing if you know that these methods rely on arguments from beauty even though everyone agrees that beauty isn’t a scientific criterion. Parallels to the continued use of flawed statistical methods in psychology and the life sciences are obvious. There too, everyone kept using bad methods that were known to be bad, just because it was the state of the art. And that’s the real story here: Scientists get stuck on unsuccessful methods.

Some people have voiced their disapproval that I went and argued with some prominent people in the field without them knowing they’d end up in my book. First, I recommend you read the book before you disapprove of what you believe it contains. I think I have treated everyone politely and respectfully.

Second, it should go without saying, but apparently doesn't, that everyone I interviewed signed an interview waiver, transferring all rights for everything they told me to my publisher in all translations and all formats, globally and for all eternity, Amen. They knew what they were being interviewed for. I'm not an undercover agent, and my opinions about arguments from beauty are no secret.

Furthermore, everyone I interviewed got to see and approve a transcript with the exact wording that appears in the book. Though I later removed some parts entirely because it was just too much material. (And no, I cannot reuse it elsewhere, because that was indeed not what they agreed on.) I had to replace a few technical terms here and there that most readers wouldn't have understood, but these instances are marked in the text.

So, I think I did my best to accurately represent their opinions, and if anyone comes off looking like an idiot it should be me.

Most importantly though, the very purpose of these interviews is to offer the reader a variety of viewpoints rather than merely my own. So of course I disagree with the people I spoke with here and there – because who’d read a dialogue in which two people constantly agree with each other?

In any case, everything’s been said and done and now I can only wait and hope. This isn’t a problem that physicists can solve themselves. The whole organization of academic research today acts against major changes in methodology because that would result in a sudden and drastic decrease of publishing activity. The only way I can see change come about is public pressure. We have had enough talk about elegant universes and beautiful theories.

If you still haven't made up your mind whether to buy the book, we now have a website which contains a table of contents and links to reviews and such, and on Amazon you can "Look Inside" the book. Two video trailers will be coming next week. Silicon Republic writes about the book here, and Dan Falk has a piece at NBC titled "Why some scientists say physics has gone off the rails."