Monday, July 30, 2018

10 physics facts you should have learned in school but probably didn’t

[Image: Dreamstime.com]
1. Entropy doesn’t measure disorder, it measures likelihood.

Really, the idea that entropy measures disorder is not helpful at all. Suppose I make dough: I break an egg and dump it on the flour, add sugar and butter, and mix until the dough is smooth. Which state is more orderly, the broken egg on flour with butter on top, or the final dough?

I’d go for the dough. But that’s the state with the higher entropy. And if you opted for the egg on flour, how about oil and water? Is the entropy higher when they’re separated, or when you shake them vigorously so that they’re mixed? In this case the better-sorted state has the higher entropy.

Entropy is defined as (the logarithm of) the number of “microstates” that give the same “macrostate”. Microstates contain all details about a system’s individual constituents. The macrostate, on the other hand, is characterized only by general information, like “separated in two layers” or “smooth on average”. The more microstates belong to a macrostate, the more likely that macrostate is. There are a lot of states for the dough ingredients that will turn into dough when mixed, but very few states that will separate into egg and flour when mixed. Hence, the dough has the higher entropy. Similar story for oil and water: easy to unmix, hard to mix, hence the unmixed state has the higher entropy.
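To make the counting concrete, here is a minimal toy sketch (my illustration, not from the post): N particles, each sitting in the left or right half of a box. The macrostate “n particles on the left” is compatible with N-choose-n microstates, and the evenly mixed macrostate wins by a landslide:

```python
from math import comb, log

N = 100  # toy system: 100 particles, each in the left or right half of a box

def entropy(n_left):
    # S = ln(Omega), in units of Boltzmann's constant, where Omega is the
    # number of microstates compatible with "n_left particles on the left"
    return log(comb(N, n_left))

print(entropy(50))   # evenly mixed:    ~66.8
print(entropy(100))  # all on one side:  0.0 (a single microstate)
```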

2. Quantum mechanics is not a theory for short distances only, it’s just difficult to observe its effects on long distances.

Nothing in the theory of quantum mechanics implies that it’s good on short distances only. It just so happens that the large objects we observe are composed of many smaller constituents, and these constituents’ thermal motion destroys the typical quantum effects. This process is known as decoherence, and it’s the reason we don’t usually see quantum behavior in daily life.

But quantum effects have been measured in experiments spanning hundreds of kilometers, and they could span longer distances if the environment is sufficiently cold and steady. They could even stretch across entire galaxies.

3. Heavy particles do not decay to reach a state of smallest energy, but to reach a state of highest entropy.

Energy is conserved. So the idea that any system tries to minimize its energy is just nonsense. The reason heavy particles decay if they can is, once again, likelihood. If you have one heavy particle (say, a muon), it can decay into an electron, a muon-neutrino, and an electron anti-neutrino, and the decay products disperse. The opposite process is also possible, but it requires that the three decay products come together again, which is extremely unlikely to happen.

This isn’t always the case. If you put heavy particles in a hot enough soup, production and decay can reach equilibrium with a non-zero fraction of the heavy particles around.

4. Lines in Feynman diagrams do not depict how particles move, they are visual aids for difficult calculations.

Every once in a while I get an email from someone who notices that many Feynman diagrams have momenta assigned to the lines. And since everyone knows one cannot at the same time measure the position and momentum of a particle arbitrarily well, it doesn’t make sense to draw lines for the particles. It follows that all of particle physics is wrong!

But no, nothing is wrong with particle physics. There are several types of Feynman diagrams and the ones with the momenta are for momentum space. In this case the lines have nothing to do with paths the particles move on. They really don’t. They are merely a way to depict certain types of integrals.
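For concreteness (a standard textbook fact, not specific to this post): an internal line carrying momentum k in a momentum-space diagram is shorthand for a propagator factor under an integral, which for a scalar particle of mass m reads schematically

$$\int \frac{\mathrm{d}^4 k}{(2\pi)^4}\;\frac{i}{k^2 - m^2 + i\epsilon}\,.$$

The “line” is bookkeeping for this factor, nothing more.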

There are some types of Feynman diagrams in which the lines do depict the possible paths a particle could take, but even in this case the diagram itself doesn’t tell you what the particle actually does. For that, you have to do the calculation.

5. Quantum mechanics is non-local, but you cannot use it to transfer information non-locally.

Quantum mechanics gives rise to non-local correlations that are quantifiably stronger than those of non-quantum theories. This is what Einstein referred to as “spooky action at a distance.”
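“Quantifiably” can be made precise with the CHSH inequality. For suitable measurements a, a′ on one system and b, b′ on a distant one, any local non-quantum theory obeys the bound of 2 below, while quantum mechanics can reach 2√2 (Tsirelson’s bound):

$$|E(a,b) + E(a,b') + E(a',b) - E(a',b')| \;\le\; 2 \;\;\text{(local non-quantum)}, \qquad \le\; 2\sqrt{2} \;\;\text{(quantum)}.$$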

Alas, quantum mechanics is also fundamentally random. So, while you have those awesome non-local correlations, you cannot use them to send messages. Quantum mechanics is indeed perfectly compatible with Einstein’s speed-of-light limit.

6. Quantum gravity becomes relevant at high curvature, not at short distances.

If you estimate the strength of quantum gravitational effects, you find that they should become non-negligible when the curvature of space-time is comparable to the inverse of the Planck length squared. This does not mean that you would see this effect at distances close to the Planck length. I believe the confusion here comes from the term “Planck length.” The Planck length has the unit of a length, but it’s not the length of anything.
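For reference, the standard definition is

$$\ell_{\mathrm{Pl}} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6\times 10^{-35}\,\mathrm{m},$$

and the expectation is that quantum gravity becomes relevant when curvature invariants approach $1/\ell_{\mathrm{Pl}}^2$.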

Importantly, that the curvature gets close to the inverse of the Planck length squared is an observer-independent statement. It does not depend on the velocity by which you move. The trouble with thinking that quantum gravity becomes relevant at short distances is that it’s incompatible with Special Relativity.

In Special Relativity, lengths can contract. For an observer who moves fast enough, the Earth is a pancake with a width below the Planck length. If short distances alone triggered quantum gravity, we should therefore have seen its effects long ago, or else Special Relativity must be wrong. Evidence speaks against both.

7. Atoms do not expand when the universe expands. Neither does Brooklyn.

The expansion of the universe is incredibly slow and the force it exerts is weak. Systems that are bound together by forces exceeding that of the expansion remain unaffected. The systems that are being torn apart are those larger than the size of galaxy clusters. The clusters themselves still hold together under their own gravitational pull. So do galaxies, solar systems, planets and of course atoms. Yes, that’s right, atomic forces are much stronger than the pull of the whole universe.
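A back-of-envelope comparison (my sketch, under the assumption that the residual pull of the accelerated expansion on a bound system is of order m·H₀²·r):

```python
# Order-of-magnitude comparison: Coulomb binding of a hydrogen atom
# versus the tidal pull of cosmic expansion (assumed ~ m * H0^2 * r).
k_e = 8.99e9       # Coulomb constant, N m^2 C^-2
e   = 1.602e-19    # elementary charge, C
a0  = 5.29e-11     # Bohr radius, m
m_e = 9.11e-31     # electron mass, kg
H0  = 2.2e-18      # Hubble constant in 1/s (~68 km/s/Mpc)

F_coulomb   = k_e * e**2 / a0**2   # ~8e-8 N
F_expansion = m_e * H0**2 * a0     # ~2e-76 N
print(F_coulomb / F_expansion)     # ~3e68 -- the atom doesn't notice
```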

8. Wormholes are science fiction, black holes are not.

The observational evidence for black holes is solid. Astrophysicists can infer the presence of a black hole in various ways.

The easiest way may be to deduce how much mass must be combined in some volume of space to cause the observed motion of visible objects. This alone does not tell you whether the dark object that influences the visible ones has an event horizon. But you can tell the difference between an event horizon and a solid surface by examining the radiation that is emitted by the dark object. You can also use black holes as extreme gravitational lenses to test that they comply with the predictions of Einstein’s theory of General Relativity. This is why physicists are excitedly looking forward to the data from the Event Horizon Telescope.

Maybe most importantly, we know that black holes are a typical end-state of certain types of stellar collapse. In general relativity it is hard to avoid them, not hard to get them.

Wormholes, on the other hand, are space-time deformations for which we don’t know of any way they could come about in natural processes. Their existence also requires negative energy, something that has never been observed and that many physicists believe cannot exist.

9. You can fall into a black hole in finite time. It just looks like it takes forever.

Time slows down as you approach the event horizon, but this doesn’t mean that you actually stop falling before you reach the horizon. The slow-down is merely what an observer in the distance sees. You can calculate how much time the fall into a black hole takes as measured by a clock the infalling observer carries. The result is finite. You do indeed fall into the black hole. It’s just that your friend who stays outside never sees you falling in.
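For the record, the standard result: falling radially from rest at radius r₀ toward a black hole of mass M, the proper time to reach the center is

$$\tau = \frac{\pi}{2}\sqrt{\frac{r_0^3}{2GM}},$$

which is finite, and the horizon is crossed even earlier.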

10. Energy is not conserved in the universe as a whole, but the effect is so tiny you won’t notice it.

So I said that energy is conserved, but that is only approximately correct. It would be entirely correct for a universe in which space does not change with time. But we know that in our universe space expands, and this expansion results in a violation of energy conservation.
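The textbook example is light in expanding space: wavelengths stretch with the scale factor a(t), so every photon loses energy, and that energy is not transferred anywhere:

$$\lambda \propto a(t) \quad\Rightarrow\quad E_\gamma = \frac{hc}{\lambda} \propto \frac{1}{a(t)}.$$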

This violation of energy conservation, however, is so minuscule that you won’t notice it in any experiment on Earth. It takes very long times and very large distances to notice. Indeed, if the effect were any larger, we would have noticed much earlier that the universe expands! So don’t try to blame your electricity bill on the universe, but close the window when the AC is running.

Monday, July 23, 2018

Evidence for modified gravity is now evidence against it.

Hamster. Not to scale.
Img src: Petopedia.
It’s day 12,805 in the war between modified gravity and dark matter. That’s counting the days since the publication of Mordehai Milgrom’s 1983 paper, in which he proposed to alter the laws of gravity rather than conjecture invisible stuff.

Dark matter, to remind you, consists of hypothetical clouds of particles that hover around galaxies. We can’t see them because they neither emit nor reflect light, but we do notice their gravitational pull because it affects the motion of the matter that we can observe. Modified gravity, on the other hand, posits that normal matter is all there is, but the laws of gravity don’t work the way Einstein taught us.

Which one is right? We still don’t know, though astrophysicists have been on the case for decades.

Ruling out modified gravity is hard because it was invented to fit observed correlations, and this achievement is difficult to improve on. The idea which Milgrom came up with in 1983 was a simple model called Modified Newtonian Dynamics (MOND). It does a good job fitting the rotation curves of hundreds of observed galaxies, and in contrast to particle dark matter this model requires only one parameter as input. That parameter is an acceleration scale which determines when the gravitational pull begins to be markedly different from that predicted by Einstein’s theory of General Relativity. Based on his model, Milgrom also made some predictions which held up so far.
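For concreteness, Milgrom’s law in its simplest form relates the observed acceleration a to the Newtonian prediction a_N through an interpolation function μ:

$$\mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad a \to \sqrt{a_N\, a_0} \;\text{ for } a \ll a_0, \qquad a_0 \approx 1.2\times 10^{-10}\,\mathrm{m/s^2}.$$

The single input parameter is the acceleration scale a₀.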

In a 2016 paper, McGaugh, Lelli, and Schombert analyzed data from a set of about 150 disk galaxies. They identified the best-fitting acceleration scale for each of them and found that the distribution is clearly peaked around a mean value:

Histogram of best-fitting acceleration scale.
Blue: Only high quality data. Via Stacy McGaugh.


McGaugh et al conclude that the data contains evidence for a universal acceleration scale, which is strong support for modified gravity.

Then, a month ago, Nature Astronomy published a paper titled “Absence of a fundamental acceleration scale in galaxies” by Rodrigues et al (arXiv-version here). The authors claim to have ruled out modified gravity with at least 5 σ, ie with high certainty.

That’s pretty amazing given that two months ago modified gravity worked just fine for galaxies. It’s even more amazing once you notice that they ruled out modified gravity using the same data from which McGaugh et al extracted the universal acceleration scale that’s evidence for modified gravity.

Here is the key figure from the Rodrigues et al paper:

Figure 1 from Rodrigues et al


Shown on the vertical axis is their best-fit parameter for the (log of the) acceleration scale. On the horizontal axis are the individual galaxies. The authors have sorted the galaxies so that the best-fit value increases monotonically from left to right, so the increase itself is not relevant information. Relevant is that, if you compare the error margins marked by the colors, the best-fit values for the galaxies on the far left of the plot are incompatible with the best-fit values for the galaxies on the far right.

So what the heck is going on?

A first observation is that the two studies don’t use the same data analysis. The main difference lies in the priors for the parameters, which are the acceleration scale of modified gravity and the stellar mass-to-light ratio. Where McGaugh et al use Gaussian priors, Rodrigues et al use flat priors over a finite range. The prior is the assumption you make about the likely distribution of a parameter, which you then feed into your model to find the best-fit parameters. A bad prior can give you misleading results.

Example: Suppose you have an artificially intelligent infrared camera. One night it issues an alert: Something’s going on in the bushes of your garden. The AI tells you the best fit to the observation is a 300-pound hamster, the second-best fit is a pair of humans in what seems a peculiar kind of close combat. Which option do you think is more likely?

I’ll go out on a limb and guess the second. And why is that? Because you probably know that 300-pound hamsters are somewhat of a rare occurrence, whereas pairs of humans are not. In other words, you have a different prior than your camera.

Back to the galaxies. As we’ve seen, if you start with an unmotivated prior, you can end up with a “best fit” (the 300-pound hamster) that’s unlikely for reasons your software didn’t account for. At the very least, therefore, you should check that the resulting best-fit distribution of your parameters doesn’t contradict other data. The Rodrigues et al analysis raises exactly this concern: their best-fit distribution for the stellar mass-to-light ratio doesn’t match commonly observed distributions. The McGaugh paper, on the other hand, starts with a Gaussian prior, which is a reasonable expectation, and hence their analysis makes more physical sense.
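To see how much a prior can matter, here is a minimal toy sketch (mine, not either paper’s pipeline): a single noisy measurement of a parameter, fit once with a flat prior and once with a Gaussian prior encoding independent knowledge:

```python
import numpy as np

theta = np.linspace(-5, 10, 3000)          # parameter grid
measured, sigma = 4.0, 2.0                 # hypothetical noisy data point
likelihood = np.exp(-0.5 * ((theta - measured) / sigma) ** 2)

flat_prior  = np.ones_like(theta)                       # flat over the bin
gauss_prior = np.exp(-0.5 * ((theta - 1.0) / 1.0)**2)   # prior: theta ~ 1 +- 1

for name, prior in [("flat", flat_prior), ("gaussian", gauss_prior)]:
    posterior = likelihood * prior
    posterior /= posterior.sum()           # normalize on the grid
    print(name, "best fit:", theta[np.argmax(posterior)])
# flat prior returns ~4.0; the Gaussian prior pulls the estimate to ~1.6
```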

Having said this, it turns out the priors don’t make much of a difference for the results. Indeed, as far as the numbers are concerned, the results of both papers are pretty much the same. What differs is the conclusion the authors draw from them.

Let me tell you a story to illustrate what’s going on. Suppose you are Isaac Newton and an apple just banged on your head. “Eureka,” you shout, and postulate that the gravitational potential fulfills the Poisson equation.* Smart as you are, you assume that the Earth is approximately a homogeneous sphere, solve the equation, and find an inverse-square law. It contains one free parameter, which you modestly call “Newton’s constant.”
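For the record, that’s

$$\nabla^2 \Phi = 4\pi G \rho,$$

which, outside a homogeneous sphere of mass M, gives $\Phi(r) = -GM/r$ and hence the inverse-square acceleration $g(r) = GM/r^2$.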

You then travel around the globe, note down your altitude and measure the acceleration of a falling test-body. Back home you plot the results and extract Newton’s constant (times the mass of the Earth) from the measurements. You find that the measured values cluster around a mean. You declare that you have found evidence for a universal law of gravity.

Or have you?

A week later your good old friend Bob knocks on the door. He points out that if you look at the measurement errors (which you have of course recorded), then some of the measurement results are incompatible with each other at five sigma certainty. There, Bob declares, I have ruled out your law of gravity.

Same data, different conclusion. How does this make sense?

“Well,” Newton would say to Bob, “You have forgotten that besides the measurement uncertainty there is theoretical uncertainty. The Earth is neither homogeneous nor a sphere, so you should expect a spread in the data that exceeds the measurement uncertainty.” – “Ah,” Bob says triumphantly, “But in this case you can’t make predictions!” – “Sure I can,” Newton speaks and points to his inverse-square law, “I did.” Bob frowns, but Newton has run out of patience. “Look,” he says and shoves Bob out of the door, “Come back when you have a better theory than mine.”

Back to 2018 and modified gravity. Same difference. In the Rodrigues et al paper, the authors rule out that modified gravity’s one-parameter law fits all disk galaxies in the sample. This shouldn’t come as much of a surprise. Galaxies aren’t disks with bulges any more than the Earth is a homogeneous sphere. It’s such a crude oversimplification it’s remarkable it works at all.

Indeed, it would be an interesting exercise to quantify how well modified gravity does in this set of galaxies compared to particle dark matter with the same number of parameters. Chances are, you’d find that particle dark matter too is ruled out at 5 σ. It’s just that no one is dumb enough to make such a claim. When it comes to particle dark matter, astrophysicists will be quick to tell you galaxy dynamics involves loads of complicated astrophysics and it’s rather unrealistic that one parameter will account for the variety in any sample.

Without the comparison to particle dark matter, therefore, the only thing I learn from the Rodrigues et al paper is that a non-universal acceleration scale fits the data better than a universal one. And that I could have told you without even looking at the data.

Summary: I’m not impressed.

It’s day 12,805 in the war between modified gravity and dark matter and dark matter enthusiasts still haven’t found the battle field.


*Dude, I know that Newton isn’t Archimedes. I’m telling a story not giving a history lesson.

Monday, July 16, 2018

SciMeter.org: A new tool for arXiv users

Time is money. It’s also short. And so we save time wherever we can, even when we describe our own research. All too often, one word must do: You are a cosmologist, or a particle physicist, or a string theorist. You work on condensed matter, or quantum optics, or plasma physics.

Most departments of physics use such simple classifications. But our scientific interests cannot be so easily classified. All too often, one word is not enough.

Each scientist has their own unique research interests. Maybe you work on astrophysics and cosmology and particle physics and quantum gravity. Maybe you work on condensed matter physics and quantum computing and quantitative finance.

Whatever your research interests, you can now show off their full breadth, not in one word, but in one image. On our new website SciMeter, you can create a keyword cloud from your arXiv papers. For example, here is the cloud for Stephen Hawking’s papers:

[Image: keyword cloud of Stephen Hawking’s arXiv papers]
You can also search for similar authors and for people who have worked on a certain topic, or a set of topics.

As I promised previously, on this website you can also find out your broadness value (it is listed below the cloud). Please note that the value we quote is in standard deviations from the average, so that negative values of broadness are below average and positive values are above. Also keep in mind that we measure broadness relative to the total average, ie for all arXiv categories.
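In other words (my notation, not necessarily the site’s internal one), the quoted number is a z-score,

$$z = \frac{b - \langle b \rangle}{\sigma_b},$$

where b is your raw broadness and ⟨b⟩, σ_b are the mean and standard deviation over all arXiv authors.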

While this website is mostly aimed at authors in the field of physics, we hope it will also be of use to journalists looking for an expert or for editors looking for reviewers.

The software for this website was developed by Tom Price and Tobias Mistele, who were funded on an FQXi minigrant. It is entirely non-profit and we do not plan on making money with it. This means maintaining and expanding this service (eg to include other data) will only be possible if we can find sponsors.

If you encounter any problems with the website, please do not submit the issue here, but use the form that you find on the help page.

Wednesday, July 11, 2018

What's the purpose of working in the foundations of physics?

That’s me. Photo by George Musser.
Yes, I need a haircut.
[Several people asked me for a transcript of my intro speech that I gave yesterday in Utrecht at the 19th UK and European conference on foundations of physics. So here it is.]

Thank you very much for the invitation to this 19th UK and European conference on Foundations of physics.

The topic of this conference combines everything that I am interested in, and I see the organizers have done an awesome job lining up the program. From locality and non-locality to causality, the past hypothesis, determinism, indeterminism, and irreversibility, the arrow of time and presentism, symmetries, naturalness and finetuning, and, of course, everyone’s favorites: black holes and the multiverse.

This is sure to be a fun event. But working in the foundations of physics is not always easy.

When I write a grant proposal, inevitably I will get to the part in which I have to explain the purpose of my work. My first reaction to this is always: What’s the purpose of anything anyway?

My second thought is: Why do only scientists get this question? Why doesn’t anyone ask Gucci what’s the purpose of the Spring collection? Or Ed Sheeran what’s the purpose of singing about your ex-lover? Or Ronaldo what’s the purpose of running after a leather ball and trying to kick it into a net?

Well, you might say, the purpose is that people like to buy it, hear it, watch it. But what’s the purpose of that? Well, it makes their lives better. And what’s the purpose of that?

If you go down the rabbit hole, you find that whenever you ask for purpose you end up asking what’s the purpose of life. And to that, not even scientists have an answer.

Sometimes I therefore think maybe that’s why they ask us to explain the purpose of our work. Just to remind us that science doesn’t have answers to everything.

But then we all know that the purpose of the purpose section in a grant proposal is not to actually explain the purpose of what you do. It is to explain how your work contributes to what other people think its purpose should be. And that often means applications and new technology. It means something you can build, or sell, or put under the Christmas tree.

I am sure I am not the only one here who has struggled to explain the purpose of work in the foundations of physics. I therefore want to share with you an observation that I have made during more than a decade of public outreach: No one from the public ever asks this question. It comes from funding bodies and politicians exclusively.

Everyone else understands just fine what’s the purpose of trying to describe space and time and matter, and the laws they are governed by. The purpose is to understand. These laws describe our universe; they describe us. We want to know how they work.

Seeking this knowledge is the purpose of our work. And, if you collect it in a book, you can even put it under a Christmas tree.

So I think we should not be too apologetic about what we are doing. We are not the only ones who care about the questions we are trying to answer. A lot of people want to understand how the universe works. Because understanding makes their lives better. Whatever is the purpose of that.

But I must add that through my children I have rediscovered the joys of materialism. Kids these days have the most amazing toys. They have tablets that take videos – by voice control. They have toy helicopters – that actually fly. They have glittery slime that glows in the dark.

So, stuff is definitely fun. Let me say some words on applications of the foundations of physics.

In contrast to most people who work in the field – and probably most of you – I do not think that whatever new we will discover in the foundations will remain pure knowledge, detached from technology. The reason is that I believe we are missing something big about the way that quantum theory cooperates with space and time.

And if we solve this problem, it will lead to new insights about quantum mechanics, the theory behind all our fancy new electronic gadgets. I believe the impact will be substantial.

You don’t have to believe me on this.

I hope you will believe me, though, when I say that this conference gathers some of the brightest minds on the planet and tackles some of the biggest questions we know.

I wish all of you an interesting and successful meeting.

Sunday, July 08, 2018

Away Note

I’ll be in Utrecht next week for the 19th UK and European Conference on Foundations of Physics. August 28th I’ll be in Santa Fe, September 6th in Oslo, September 22nd I’ll be in London for another installment of the HowTheLightGetsIn Festival.

I have been educated that this festival derives its name from Leonard Cohen’s song “Anthem” which features the lines
“Ring the bells that still can ring
Forget your perfect offering
There is a crack in everything
That’s how the light gets in.”
If you have read my book, the crack metaphor may ring a bell. If you haven’t, you should.

October 3rd I’m in NYC, October 4th I’m in Richmond, Kentucky, and the second week of October I am at the International Book Fair in Frankfurt.

In case our paths cross, please say “Hi” – I’m always happy to meet readers irl.

Thursday, July 05, 2018

Limits of Reductionism

Almost forgot to mention: I won 3rd prize in the 2018 FQXi essay contest “What is fundamental?”

The new essay continues my thoughts about whether free will is or isn’t compatible with what we know about the laws of nature. For many years I was convinced that the only way to make free will compatible with physics is to adopt a meaningless definition of free will. The current status is that I cannot exclude that it’s compatible.

The conflict between physics and free will is that, to our best current knowledge, everything in the universe is made of a few dozen particles (give or take some for dark matter), and we know the laws that determine those particles’ behavior. They all work the same way: If you know the state of the universe at one time, you can use the laws to calculate the state of the universe at all other times. This implies that what you do tomorrow is already encoded in the state of the universe today. There is, hence, nothing free about your behavior.

Of course nobody knows the state of the universe at any one time. Also, quantum mechanics makes the situation somewhat more difficult in that it adds randomness. This randomness would prevent you from actually making a prediction for exactly what happens tomorrow even if you knew the state of the universe at one moment in time. With quantum mechanics, you can merely make probabilistic statements. But just because your actions have a random factor doesn’t mean you have free will. Atoms randomly decay and no one would call that free will. (Well, no one in their right mind anyway, but I’ll postpone my rant about panpsychic pseudoscience to some other time.)

People also often quote chaos to insist that free will is a thing, but please note that chaos is predictable in principle; it’s just not predictable in practice, because it makes a system’s behavior highly dependent on the exact values of the initial conditions. The initial conditions, however, still determine the behavior. So neither quantum mechanics nor chaos brings free will back into the laws of nature.
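A minimal demonstration (my sketch) with the logistic map, a textbook chaotic system: every step is exactly determined by the previous one, yet two trajectories starting 10⁻¹² apart soon disagree completely:

```python
# Deterministic chaos: the logistic map x -> r*x*(1-x) at r = 4.
r = 4.0
x, y = 0.2, 0.2 + 1e-12   # two initial conditions, 1e-12 apart

for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))
# the separation grows roughly exponentially until it is of order one:
# predictable in principle, unpredictable in practice
```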

Now, there are a lot of people who want you to accept watered-down versions of free will, eg that you have free will because no one can in practice predict your behavior, or because no one can tell what’s going on in your brain, and so on. But I think this is just verbal gymnastics. If you accept that the current theories of particle physics are correct, free will doesn’t exist in a meaningful way.

That is as long as you believe – as almost all physicists do – that the laws that dictate the behavior of large objects follow from the laws that dictate the behavior of the object’s constituents. That’s what reductionism tells us, and let me emphasize that reductionism is not a philosophy, it’s an empirically well-established fact. It describes what we observe. There are no known exceptions to it.

And we have methods to derive the laws of large objects from the laws for small objects. In this case, then, we know that predictive laws for human behavior exist; it’s just that in practice we can’t compute them. It is the formalism of effective field theories that tells us exactly how the behavior of large objects and their interactions relates to the behavior of smaller objects and their interactions.
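Schematically (standard renormalization-group language, not specific to this essay), the couplings g of the effective theory change with the resolution scale μ according to

$$\mu \frac{\mathrm{d}g(\mu)}{\mathrm{d}\mu} = \beta\big(g(\mu)\big),$$

and integrating this equation from short to long distances is what connects the two descriptions.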

There are a few examples in the literature where people have tried to find systems for which the behavior on large scales cannot be computed from the behavior at small scales. But these examples use unrealistic systems with an infinite number of constituents and I don’t find them convincing cases against reductionism.

It occurred to me some years ago, however, that there is a much simpler example of how reductionism can fail. It can fail simply because the extrapolation from the theory at short distances to the one at long distances is not possible without adding further information. This can happen if the scale-dependence of a constant has a singularity, and that’s something we cannot presently exclude.

By singularity I do not here mean a divergence, ie that something becomes infinitely large. Such situations are unphysical and not cases I would consider plausible for realistic systems. But functions can have singularities without anything becoming infinite: a singularity is merely a point beyond which a function cannot be continued.
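A purely mathematical illustration (my example, not from the essay):

$$f(x) = \sqrt{1 - x}$$

is perfectly finite at x = 1, where f(1) = 0, yet as a real function it cannot be continued to x > 1.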

I do not currently know of any example for which this actually happens. But I also don’t know a way to exclude it.

Now suppose you want to derive the theory for large objects (think humans) from the theory for small objects (think elementary particles), but in the derivation you find that one of the functions has a singularity at some scale in between. This means you need new initial values past the singularity. It’s a clean example of a failure of reductionism, and it implies that the laws for large objects might indeed not follow from the laws for small objects.

It will take more than this to convince me that free will isn’t an illusion, but this example for the failure of reductionism gives you an excuse to continue believing in free will.

Full essay with references here.