Friday, December 27, 2019

How did the universe begin?

The year is almost over and a new one is about to begin. So today I want to talk about the beginning of everything, the whole universe. What do scientists think about how it all started?


We know that the universe expands, and as the universe expands, the matter and energy in it dilute. So when the universe was younger, matter and energy were much denser. Because they were denser, the temperature was higher. And a higher temperature means that, on average, particles collided at higher energies.

Now you can ask, what do we know about particles colliding at high energies? Well, the highest collision energies between particles that we have experimentally tested are those produced at the Large Hadron Collider. These are energies of about a tera-electron volt, or TeV for short, which, if you convert it into a temperature, comes out to be about 10^16 Kelvin. In words that’s ten million billion Kelvin, which sounds awkward and is the reason no one quotes such temperatures in Kelvin.
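For those of you who like to check such numbers: the conversion is just dividing the collision energy by the Boltzmann constant. A rough back-of-the-envelope version, in LaTeX notation:

    T \;\approx\; \frac{E}{k_B} \;\approx\; \frac{10^{12}\,\mathrm{eV}}{8.6\times 10^{-5}\,\mathrm{eV/K}} \;\approx\; 10^{16}\,\mathrm{K}.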

So, up to a temperature of about a TeV, we understand the physics of the early universe and we can reliably tell what happened. Before that, we have only speculation.

The simplest way to speculate about the early universe is just to extrapolate the known theories back to even higher temperatures, assuming that the theories do not change. What happens then is that you eventually reach energy densities so high that the quantum fluctuations of space and time become relevant. To calculate what happens then, we would need a theory of quantum gravity, which we do not have. So, in brief, the scientific answer is that we have no idea how the universe began.

But that’s a boring answer and one you cannot publish, so it’s not how the currently most popular theories for the beginning of the universe work. The currently most popular theories assume that the electromagnetic interaction must have been unified with the strong and the weak nuclear force at high energies. They also assume that an additional field exists, which is the so-called inflaton field.

The purpose of the inflaton is to cause the universe to expand very rapidly early on, in a period which is called “inflation”. The inflaton field then has to create all the other matter in the universe and basically disappear because we don’t see it today. In these theories, our universe was born from a quantum fluctuation of the inflaton field and this birth event is called the “Big Bang”.

Actually, if you believe this idea, the quantum fluctuations still go on outside of our universe, so there are constantly other universes being created.

How scientific is this idea? Well, we have zero evidence that the forces were ever unified, and we have equally good evidence, namely none, that the inflaton field exists. The idea that the early universe underwent a phase of rapid expansion fits some data, but the evidence is not overwhelming, and in any case, what the cause of this rapid expansion would have been – an inflaton field or something else – the data don’t tell us.

So, that the universe began from a quantum fluctuation is one story. Another story has it that the universe was not born once but is born over and over again in what is called a “cyclic” model. In cyclic models, the Big Bang is replaced by an infinite sequence of Big Bounces.

There are several types of cyclic models. One is called the Ekpyrotic Universe. The idea of the Ekpyrotic Universe was originally borrowed from string theory and had it that higher-dimensional membranes collided and our universe was created from that collision.

Another idea of a cyclic universe is due to Roger Penrose and is called Conformal Cyclic Cosmology. Penrose’s idea is basically that when the universe gets very old, it loses all sense of scale, so really there is no point in distinguishing the large from the small anymore, and you can then glue together the end of one universe with the beginning of a new one.

Yet another theory has it that new universes are born inside black holes. You can speculate about this because no one has any idea what goes on inside black holes anyway.

An idea that sounds similar but is actually very different is that the universe started from a black hole in 4 dimensions of space. This is a speculation that was put forward by Niayesh Afshordi some years ago.

 Then there is the possibility that the universe didn’t really “begin” but that before a certain time there was only space without any time. This is called the “no-boundary proposal” and it goes back to Jim Hartle and Stephen Hawking. A very similar disappearance of time was more recently found in calculations based on loop quantum cosmology where the researchers referred to it as “Asymptotic Silence”.

Then we have String Gas Cosmology, in which the early universe lingered in an almost steady state for an infinite amount of time before beginning to expand, and then there is the so-called Unicorn Cosmology, according to which our universe grew out of unicorn shit. Nah, I made this one up.

So, as you see, physicists have many ideas about how the universe began. The trouble is that not a single one of those ideas is backed up by evidence. And they may never be backed up by evidence, because the further back in time you try to look, the fewer data we have. While some of those speculations for the early universe result in predictions, confirming those predictions would not allow us to conclude that the theory must have been correct because there are many different theories that could give rise to the same prediction.

This is a way in which our scientific endeavors are fundamentally limited. Physicists may simply have produced a lot of mathematical stories about how it all began, but these aren’t any better than traditional tales of creation.

Friday, December 20, 2019

What does a theoretical physicist do?

This week, I am on vacation and so I want to answer a question that I get a lot but that doesn’t really fit into the usual program: What does a theoretical physicist do? Do you sit around all day and dream up new particles or fantasize about the beginning of the universe? How does it work?


Research in theoretical physics generally does one of two things: Either we have some data that require explanation for which a theory must be developed. Or we have a theory that requires improvement, and the improved theory leads to a prediction which is then experimentally tested.

I have noticed that some people think theoretical physics is something special to the foundations of physics. But that isn’t so. All subdisciplines of physics have an experimental part and a theoretical part. How much the labor is divided between different groups of people depends strongly on the field. In some parts of astrophysics, for example, data collection, analysis, and theory development are done by pretty much the same people. That’s also the case in some parts of condensed matter physics. In these areas many experimentalists are also theorists. But if you look at fields like cosmology or high energy particle physics, people tend to specialize either in experiment or in theory development.

Theoretical physics is pretty much a job like any other in that you get an education and then you put your knowledge to work. You find theoretical physicists in public higher education institutions, which is probably what you are most familiar with, but you also find them in industry or in non-profit research institutions like the one I work at. Just what the job entails depends on the employer. Besides the research, a theoretical physicist may have administrative duties, or may teach, mentor students, do public outreach, organize scientific meetings, sit on committees and so on.

When it comes to the research itself, theoretical physics doesn’t work any differently from other disciplines of science. The largest part of research, ninety-nine percent, is learning what other people have done. This means you read books and papers, go to seminars, attend conferences, listen to lectures and you talk to people until you understand what they have done.

And as you do that, you probably come across some open problems. And from those you pick one for your own research. You would pick a problem that, well, you are interested in, but also something that you think will move the field forward and, importantly, you pick a problem that you think you have a reasonable chance of solving with what you know. Picking a research topic that is both interesting and feasible is not easy and requires quite some familiarity with the literature, which is why younger researchers usually rely on more senior colleagues to pick a topic.

Where theoretical physics is special is in the amount of mathematics that we use in our research. In physics all theories are mathematical. This means both that you must know how to model a natural system with mathematics and you must know how to do calculations within that model. Of course we now do a lot of calculations numerically, on a computer, but you still have to understand the mathematics that goes into this. There is really no way around it. So that’s the heart of the job, you have to find, understand, and use the right mathematics to describe nature.

The thing that a lot of people don’t understand is just how constraining mathematics is in theory development. You cannot just dream up a particle, because almost everything that you can think of will not work if you write down the mathematics. It’s either just nonsense or you find quickly that it is in conflict with observation already.

But the job of a theoretical physicist is not done with finishing a calculation. Once you have your results, you have to write them up and publish them and then you will give lectures about it so that other people can understand what you have done and hopefully build on your work.

What’s fascinating about theoretical physics is just how remarkably well mathematics describes nature. I am always surprised if people tell me that they never understood physics because I would say that physics is the only thing you can really understand. It’s the rest of the world that doesn’t make sense to me.

Monday, December 16, 2019

The path we didn’t take


“There are only three people in the world who understand Superdeterminism,” I used to joke, “Me, Gerard ‘t Hooft, and a guy whose name I can’t remember.” In all honesty, I added the third person just in case someone would be offended I hadn’t heard of them.

What the heck is Superdeterminism?, you ask. Superdeterminism is what it takes to solve the measurement problem of quantum mechanics. And not only this. I have become increasingly convinced that our failure to solve the measurement problem is what prevents us from making progress in the foundations of physics overall. Without understanding quantum mechanics, we will not understand quantum field theory, and we will not manage to quantize gravity. And without progress in the foundations of physics, we are left to squeeze incrementally better applications out of the already known theories.

The more I’ve been thinking about this, the more it seems to me that quantum measurement is the mother of all problems. And the more I am talking about what I have been thinking about, the crazier I sound. I’m not even surprised no one wants to hear what I think is the obvious solution: Superdeterminism! No one besides ‘t Hooft, that is. And that no one listens to ‘t Hooft, despite him being a Nobel laureate, doesn’t exactly make me feel optimistic about my prospects of getting someone to listen to me.

The big problem with Superdeterminism is that the few people who know what it is, seem to have never thought about it much, and now they are stuck on the myth that it’s an unscientific “conspiracy theory”. Superdeterminism, so their story goes, is the last resort of the dinosaurs who still believe in hidden variables. According to these arguments, Superdeterminism requires encoding the outcome of every quantum measurement in the initial data of the universe, which is clearly outrageous. Not only that, it deprives humans of free will, which is entirely unacceptable.

If you have followed this blog for some while, you have seen me fending off this crowd that someone once aptly described to me as “Bell’s Apostles”. Bell himself, you see, already disliked Superdeterminism. And the Master cannot err, so it must be me who is erring. Me and ‘t Hooft. And that third person whose name I keep forgetting.

Last time I made my 3-people-joke was in February during a Facebook discussion about the foundations of quantum mechanics. On this occasion, someone offered in response the name “Tim Palmer?” Alas, the only Tim Palmer I’d heard of is a British music producer from whose videos I learned a few things about audio mixing. Seemed like an unlikely match.

But the initial conditions of the universe had a surprise in store for me.

The day of that Facebook comment I was in London for a dinner discussion on Artificial Intelligence. How I came to be invited to this event is a mystery to me. When the email came, I googled the sender, who turned out to be not only the President of the Royal Society of London but also a Nobel Prize winner. Thinking this must be a mistake, I didn’t reply. A few weeks later, I tried to politely decline, pointing out, I paraphrase, that my knowledge about Artificial Intelligence is pretty much exhausted by it being commonly abbreviated AI. In return, however, I was assured no special expertise was expected of me. And so I thought, well, free trip to London, dinner included. Would you have said no?

When I closed my laptop that evening and got on the way to the AI meeting, I was still wondering about the superdeterministic Palmer. Maybe there was a third person after all? The question was still circling in my head when the guy seated next to me introduced himself as... Tim Palmer.

Imagine my befuddlement.

This Tim Palmer, however, talked a lot about clouds, so I filed him under “weather and climate.” Then I updated my priors for British men to be called Tim Palmer. Clearly a more common name than I had anticipated.

But the dinner finished and our group broke up and, as we walked out, the weather-Palmer began talking about free will! You’d think it would have dawned on me then I’d stumbled over the third Superdeterminist. However, I was merely thinking I’d had too much wine. Also, I was now somewhere in London in the middle of the night, alone with a man who wanted to talk to me about free will. I excused myself and left him standing in the street.

But Tim Palmer turned out to not only be a climate physicist with an interest in the foundations of quantum mechanics, he also turned out to be remarkably persistent. He wasn’t remotely deterred by my evident lack of interest. Indeed, I later noticed he had sent me an email already two years earlier. Just that I dumped it unceremoniously in my crackpot folder. Worse, I seem to vaguely recall telling my husband that even the climate people now have ideas for how to revolutionize quantum mechanics, hahaha.

Cough.

Tim, for his part, couldn’t possibly have known I was working on Superdeterminism. In February, I had just been awarded a small grant from the Fetzer Franklin Fund to hire a postdoc to work on the topic, but the details weren’t public information.

Indeed, Tim and I didn’t figure out we have a common interest until I interviewed him on a paper he had written about something entirely different, namely how to quantify the uncertainty of climate models.

I’d rather not quote cranks, so I usually spend some time digging up information about people before interviewing them. That’s when I finally realized Tim had been writing about Superdeterminism back when I was still in high school, long before even ‘t Hooft got into the game. Even more interestingly, he wrote his PhD thesis in the 1970s about general relativity before gracefully deciding that working with Stephen Hawking would not be a good investment of his time (a story you can hear here at 1:12:15). Even I was awed by that amount of foresight.

Tim and I then spent some months accusing each other of not really understanding how Superdeterminism works. In the end, we found we agree on more points than not and wrote a paper to explain what Superdeterminism is and why the objections often raised against it are ill-founded. Today, this paper is on the arXiv.


Thanks to support from the Fetzer Franklin Fund, we are also in the midst of organizing a workshop on Superdeterminism and Retrocausality. So this isn’t the end of the story, it’s the beginning.

Saturday, December 14, 2019

How Scientists Can Avoid Cognitive Bias

Today I want to talk about a topic that is much, much more important than anything I have previously talked about. And that’s how cognitive biases prevent science from working properly.


Cognitive biases have received some attention in recent years, thanks to books like “Thinking Fast and Slow,” “You Are Not So Smart,” or “Blind Spot.” Unfortunately, this knowledge has not been put into action in scientific research. Scientists do correct for biases in statistical analysis of data and they do correct for biases in their measurement devices, but they still do not correct for biases in the most important apparatus that they use: Their own brain.

Before I tell you what problems this creates, a brief reminder of what a cognitive bias is. A cognitive bias is a thinking shortcut which the human brain uses to make faster decisions.

Cognitive biases work much like optical illusions. Take this example of an optical illusion. If your brain works normally, then the square labelled A looks much darker than the square labelled B.

[Example of optical illusion. Image: Wikipedia]
But if you compare the actual color of the pixels, you see that these squares have exactly the same color.
[Example of optical illusion. Image: Wikipedia]
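If you don’t believe it, you can check it yourself. Here is a little Python sketch that uses the Pillow library to read out the pixel colors; the file name and the pixel coordinates of squares A and B are placeholders that depend on which version of the image you download from Wikipedia:

    from PIL import Image

    # Load the checker shadow illusion (placeholder file name).
    img = Image.open("checker_shadow_illusion.png").convert("RGB")

    # Sample one point inside each square (placeholder coordinates).
    pixel_A = img.getpixel((120, 200))
    pixel_B = img.getpixel((170, 280))

    print("square A:", pixel_A)
    print("square B:", pixel_B)  # for the original image, both come out the same gray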
The reason that we intuitively misjudge the color of these squares is that the image suggests it is really showing a three-dimensional scene in which part of the floor is covered by a shadow. Your brain factors in the shadow and calculates back to the original color, correctly telling you that in the actual scene square B must be lighter than square A.

So, if someone asked you to judge the color in a natural scene, your answer would be correct. But if your task was to evaluate the color of pixels on the screen, you would give a wrong answer – unless you know of your bias and therefore do not rely on your intuition.

Cognitive biases work the same way and can be prevented the same way: by not relying on intuition. Cognitive biases are corrections that your brain applies to input to make your life easier. We all have them, and in every-day life, they are usually beneficial.

The maybe best-known cognitive bias is attentional bias. It means that the more often you hear about something, the more important you think it is. This normally makes a lot of sense. Say, if many people you meet are talking about the flu, chances are the flu’s making the rounds and you are well-advised to pay attention to what they’re saying and get a flu shot.

But attentional bias can draw your attention to false or irrelevant information, for example if the prevalence of a message is artificially amplified by social media, causing you to misjudge its relevance for your own life. A case where this frequently happens is terrorism. It receives a lot of media coverage and has people hugely worried, but if you look at the numbers, for most of us terrorism is very unlikely to directly affect our lives.

And this attentional bias also affects scientific judgement. If a research topic receives a lot of media coverage, or scientists hear a lot about it from their colleagues, those researchers who do not correct for attentional bias are likely to overrate the scientific relevance of the topic.

There are many other biases that affect scientific research. Take for example loss aversion. This is more commonly known as “throwing good money after bad”. It means that if we have invested time or money into something, we are reluctant to let go of it and continue to invest in it even if it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising.

But the most problematic cognitive bias in science is social reinforcement, also known as group think. This is what happens in almost closed, likeminded, communities, if you have people reassuring each other that they are doing the right thing. They will develop a common narrative that is overly optimistic about their own research, and they will dismiss opinions from people outside their own community. Group think makes it basically impossible for researchers to identify their own mistakes and therefore stands in the way of the self-correction that is so essential for science.

A bias closely linked to social reinforcement is the shared information bias. This bias has the consequence that we are more likely to pay attention to information that is shared by many people we know, rather than to information held by only a few people. You can see right away how this is problematic for science: How many people know of a certain fact tells you nothing about whether that fact is correct or not. And whether some information is widely shared should not be a factor in evaluating its correctness.

Now, there are lots of studies showing that we all have these cognitive biases and also that intelligence does not make it less likely to have them. It should be obvious, then, that we should organize scientific research so that scientists can avoid or at least alleviate their biases. Unfortunately, the way that research is currently organized has exactly the opposite effect: It makes cognitive biases worse.

For example, it is presently very difficult for a scientist to change their research topic, because getting a research grant requires that you document expertise. Likewise, no one will hire you to work on a topic you do not already have experience with.

Superficially this seems like a good strategy for investing money in science, because you reward people for bringing expertise. But if you think about the long-term consequences, it is a bad investment strategy. Because now, not only do researchers face a psychological hurdle to leaving behind a topic they have invested time in, they would also cause themselves financial trouble. As a consequence, researchers are basically forced to continue to claim that their research direction is promising and to continue working on topics that lead nowhere.

Another problem with the current organization of research is that it rewards scientists for exaggerating how exciting their research is and for working on popular topics, which makes social reinforcement worse and adds to the shared information bias.

I know this all sounds very negative, but there is good news too: Once you are aware that these cognitive biases exist and you know the problems that they can cause, it is easy to think of ways to work against them.

For example, researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing. Also, researchers should always list the shortcomings of their research topics, in lectures and papers, so that the shortcomings stay in the collective consciousness. Similarly, conferences should always have speakers from competing programs, and scientists should be encouraged to offer criticism of their community and not be shunned for it. These are all little improvements that every scientist can make individually, and once you start thinking about it, it’s not hard to come up with further ideas.

And always keep in mind: Cognitive biases, like seeing optical illusions, are a sign of a normally functioning brain. We all have them, it’s nothing to be ashamed of, but it is something that affects our objective evaluation of reality.

The reason this is so, so important to me, is that science drives innovation and if science does not work properly, progress in our societies will slow down. But cognitive bias in science is a problem we can solve, and that we should solve. Now you know how.

Tuesday, December 10, 2019

Why the laws of nature are not inevitable, never have been, and never will be.

[Still from the 1956 movie The Ten Commandments]

No one has any idea why mathematics works so well to describe nature, but it is arguably an empirical fact that it works. A corollary of this is that you can formulate theories in terms of mathematical axioms and derive consequences from this. This is not how theories in physics have historically been developed, but it’s a good way to think about the relation between our theories and mathematics.

All modern theories of physics are formulated in mathematical terms. To have a physically meaningful theory, however, mathematics alone is not sufficient. One also needs to have an identification of mathematical structures with observable properties of the universe.

The maybe most important lesson physicists have learned over the past centuries is that if a theory has internal inconsistencies, it is wrong. By internal inconsistencies, I mean that the theory’s axioms lead to statements that contradict each other. A typical example is that a quantity defined as a probability turns out to take on values larger than 1. That’s mathematical rubbish; something is wrong.

Of course a theory can also be wrong if it makes predictions that simply disagree with observations, but that is not what I am talking about today. Today, I am writing about the nonsense idea that the laws of nature are somehow “inevitable” just because you can derive consequences from postulated axioms.

It is easy to see that this idea is wrong even if you have never heard the word epistemology. Consequences which you can derive from axioms are exactly as “inevitable” as postulating the axioms, which means the consequences are not inevitable. But that this idea is wrong isn’t the interesting part. The interesting part is that it remains popular among physicists and science writers who seem to believe that physics is somehow magically able to explain itself.

But where do we get the axioms for our theories from? We use the ones that, according to present knowledge, do the best job to describe our observations. Sure, once you have written down some axioms, then anything you can derive from these axioms can be said to be an inevitable consequence. This is just the requirement of internal consistency.

But the axioms themselves can never be proved to be the right ones and hence will never be inevitable themselves. You can say they are “right” only to the extent that they give rise to predictions that agree with observations.

This means not only that we may find tomorrow that a different set of axioms describes our observations better. It means more importantly that any statement about the inevitability of the laws of nature is really a statement about our inability to find a better explanation for our observations.

This confusion between the inevitability of conclusions given certain axioms, and the inevitability of the laws of nature themselves, is not an innocuous one. It is the mistake behind string theorists’ conviction that they must be on the right track just because they have managed to create a mostly consistent mathematical structure. That this structure is consistent is of course necessary for it to be a correct description of nature. But it is not sufficient. Consistency tells you nothing whatsoever about whether the axioms you postulated will do a good job to describe observations.

Similar remarks apply to the Followers of Loop Quantum Gravity who hold background independence to be a self-evident truth, or to everybody who believes that statistical independence is sacred scripture, rather than being what it really is: A mathematical axiom, that may or may not continue to be useful.

Multiverse theories are another unfortunate consequence of physicists’ misunderstanding of the role of mathematics in science.

This comes about as follows. If your theory gives rise to internal contradictions, it means that at least one of your axioms is wrong. But one way to remove internal inconsistencies is to simply discard axioms until the contradiction vanishes.

Dropping axioms is not a scientifically fruitful strategy because you then end up with a theory that is ambiguous and hence unpredictive. But it is a convenient, low-effort solution to get rid of mathematical problems and has therefore become fashionable in physics. And this is in a nutshell where multiverse theories come from: These are theories which lack sufficiently many axioms to describe our universe.

Somehow an increasing number of physicists have managed to convince themselves that multiverse ideas are good scientific theories instead of what they de facto are: Useless.

There are infinitely many sets of axioms that are mathematically consistent but do not describe our universe. The only rationale scientists have to choose one over the other is that the axioms give rise to correct predictions. But there is no way to ever prove that a particular set of axioms is inevitably the correct one. Science has its limits. This is one of them.

Friday, December 06, 2019

Is the Anthropic Principle scientific?

Today I want to explain why the anthropic principle is a good, scientific principle. I want to talk about this because the anthropic principle seems to be surrounded by a lot of misunderstanding, especially where its relation to the multiverse is concerned.


Let me start by clarifying what we are talking about. I often hear people refer to the anthropic principle to say that a certain property of our universe is how it is because otherwise we would not be here to talk about it. That’s roughly correct, but there are two ways of interpreting this statement, which give you a strong version of the anthropic principle and a weak version.

The strong version has it that our existence causes the universe to be how it is. This is not necessarily an unscientific idea, but so-far no one has actually found a way to make it scientifically useful. You could for example imagine that if you managed to define well enough what a “human being” is, then you could show that the universe must contain certain forces with certain properties and thereby explain why the laws of nature are how they are.

However, I sincerely doubt that we will ever have a useful theory based on the strong anthropic principle. The reason is that for such a theory to be scientific, it would need to be a better explanation for our observations than the theories we presently have, which just assume some fundamental forces and particles, and build up everything else from that. I find it hard to see how a theory that starts from something as complicated as a human being could possibly ever be more explanatory than these simple, reductionist theories we currently use in the foundations of physics.

Let us then come to the weak version of the anthropic principle. It says that the universe must have certain properties because otherwise our own existence would not be possible. Please note the difference to the strong version. In the weak version of the anthropic principle, human existence is neither necessary nor unavoidable. It is simply an observed fact that humans exist in this universe. And this observed fact leads to constraints on the laws of nature.

These constraints can be surprisingly insightful. The best-known historical example for the use of the weak anthropic principle is Fred Hoyle’s prediction that a certain isotope of the chemical element carbon must have a resonance because, without that, life as we know it would not be possible. That prediction was correct. As you can see, there is nothing unscientific going on here. An observation gives rise to a hypothesis which makes a prediction that is confirmed by another observation.

Another example that you often find quoted is that you can use the fact of our own existence to tell that the cosmological constant has to be within certain bounds. If the cosmological constant was large and negative, the universe would have collapsed long ago. If the cosmological constant was large and positive, the universe would expand too fast for stars to form. Again, there is nothing mysterious going on here.
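Roughly, the argument runs through the Friedmann equation, which relates the expansion rate H to the matter density and the cosmological constant. This is only a sketch, with matter alone and everything else neglected:

    H^2 \;=\; \left(\frac{\dot a}{a}\right)^2 \;=\; \frac{8\pi G}{3}\,\rho_m \;+\; \frac{\Lambda}{3}\,, \qquad \rho_m \propto a^{-3}.

Since the matter density dilutes as the universe expands, a large positive Λ takes over early, and the resulting exponential expansion prevents matter from clumping into galaxies. A large negative Λ instead drives H to zero and the universe recollapses before stars have time to form.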

You could use a similar argument to deduce that the air in my studio contains oxygen. Because if it didn’t I wouldn’t be talking. Now, that this room contains oxygen is not an insight you can publish in a scientific journal because it’s pretty useless. But as the example with Fred Hoyle’s carbon resonance illustrates, anthropic arguments can be useful.

To be fair, I should add that to the extent that anthropic arguments are being used in physics, they do not usually draw on the existence of human life specifically. They more generally use the existence of certain physical preconditions that are believed to be necessary for life, such as a sufficiently complex chemistry or sufficiently large structures.

So, the anthropic principle is neither unscientific, nor is it in general useless. But then why is the anthropic principle so controversial? It is controversial because it is often brought up by physicists who believe that we live in a multiverse, in which our universe is only one of infinitely many. In each of these universes, the laws of nature can be slightly different. Some may allow for life to exist, some may not.

(If you want to know more about the different versions of the multiverse, please watch my earlier video.)

If you believe in the multiverse, then the anthropic principle can be reformulated to say that the probability we find ourselves in a universe that is not hospitable to life is zero. In the multiverse, the anthropic principle then becomes a statement about the probability distribution over an ensemble of universes. And for multiverse people, that’s an important quantity to calculate. So the anthropic principle smells controversial because of this close connection to the multiverse.

However, the anthropic principle is correct regardless of whether or not you believe in a multiverse. In fact, the anthropic principle is a rather unsurprising and pretty obvious constraint on the properties that the laws of nature must have. The laws of nature must be so that they allow our existence. That’s what the anthropic principle says, no more and no less.

Saturday, November 30, 2019

Dark energy might not exist after all

Last week I told you what dark energy is and why astrophysicists believe it exists. This week I want to tell you about a recent paper that claims dark energy does not exist.


To briefly remind you, dark energy is what speeds up the expansion of the universe. In contrast to all other types of matter and energy, dark energy does not dilute if the universe expands. This means that eventually all the other stuff is more dilute than dark energy and, therefore, it’s the dark energy that determines the ultimate fate of our universe. If dark energy is real, the universe will expand faster and faster until all eternity. If there’s no dark energy, the expansion will slow down instead and it might even reverse, in which case the universe will collapse back to a point.

I don’t know about you, but I would like to know what is going to happen with our universe.

So what do we know about dark energy? The most important evidence we have for the existence of dark energy comes from supernova redshifts. Saul Perlmutter and Adam Riess won a Nobel Prize for this observation in 2011. It’s this Nobel-prize winning discovery which the new paper calls into question.

Supernovae give us information about dark energy because some of them are very regular. These are the so-called type Ia supernovae. Astrophysicists understand quite well how these supernovae happen. This allows physicists to calculate how much light these blasts emit as a function of time, so they know what was emitted. But the farther away the supernova is, the dimmer it appears. So, if you observe one of these supernovae, you can infer its distance from its brightness.
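In case you want the formula, astronomers phrase this as the distance modulus, the difference between the apparent brightness m and the known intrinsic brightness M. This is the textbook relation, without the corrections that are applied in practice:

    m - M \;=\; 5 \log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right),

where d_L is the so-called luminosity distance.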

At the same time, you can also determine the color of the light. Now, and this is the important point, this light from the supernova will stretch if space expands while the light travels from the supernova to us. This means that the wave-lengths we observe here on earth are longer than they were at emission or, to put it differently, the light arrives here with a frequency that is shifted to the red. This red-shift of the light therefore tells us something about the expansion of the universe.
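The standard way to quantify this stretching is the redshift z, which directly measures by how much the universe has expanded while the light was traveling:

    1 + z \;=\; \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}} \;=\; \frac{a(t_{\mathrm{observed}})}{a(t_{\mathrm{emitted}})},

where a is the scale factor of the universe.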

Now, the farther away a supernova is, the longer it takes the light to reach us, and the longer ago the supernova must have happened. This means that if you measure supernovae at different distances, they really happened at different times, and you know how the expansion of space changes with time.

And this is, in a nutshell, what Perlmutter and Riess did. They used the distance inferred from the brightness and the redshift of type Ia supernovae, and found that the only way to explain both types of measurements is that the expansion of the universe is getting faster. And this means that dark energy must exist.
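The way dark energy enters is through the relation between luminosity distance and redshift. For a spatially flat universe this is, schematically,

    d_L(z) \;=\; (1+z)\, c \int_0^z \frac{dz'}{H(z')}\,, \qquad H(z)^2 \;=\; H_0^2\left[\Omega_m (1+z)^3 + \Omega_\Lambda\right],

so distances measured at different redshifts constrain how much of the energy budget sits in the constant Ω_Λ term. This is again the textbook version; the actual analysis involves a number of corrections, some of which we will get to in a moment.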

Now, Perlmutter and Riess did their analysis 20 years ago and they used a fairly small sample of about 110 supernovae. Meanwhile, we have data for more than 1000 supernovae. For the new paper, the researchers used 740 supernovae from the JLA catalogue. But they also explain that if one just uses the data from this catalogue as it is, one gets a wrong result. The reason is that the data has been “corrected” already.

This correction is made because the story that I just told you about the redshift is more complicated than I made it sound. That’s because the frequency of light from a distant source can also shift just because our galaxy moves relative to the source. More generally, both our galaxy and the source move relative to the average restframe of stuff in the universe. And it is this latter frame that one wants to make a statement about when it comes to the expansion of the universe.

How do you even make such a correction? Well, you need to have some information about the motion of our galaxy from observations other than supernovae. You can get that by relying on regularities in the emission of light from galaxies and galaxy clusters. This allows astrophysicists to create a map of the velocities of galaxies around us, called the “bulk flow”.

But the details don’t matter all that much. To understand this new paper you only need to know that the authors had to go and reverse this correction to get the original data. And then they fitted the original data, rather than using data that were, basically, assumed to converge to the cosmological average.

What they found is that the best fit to the data is that the redshift of the supernovae is not the same in all directions, but depends on the direction. This direction is aligned with the direction in which we move through the cosmic microwave background. And – most importantly – you then do not need an additional, direction-independent acceleration of the expansion to explain the observations.

If what they say is correct, then it is unnecessary to postulate dark energy which means that the expansion of the universe might not speed up after all.

Why didn’t Perlmutter and Riess come to this conclusion? They could not, because the sample of supernovae that they looked at was skewed in direction. The ones with low redshift were in the direction of the CMB dipole, and the high-redshift ones away from it. With a skewed sample like this, you can’t tell if the effect you see is the same in all directions.*

What about the other evidence for dark energy? Well, all the other evidence for dark energy is not evidence for dark energy in particular, but for a certain combination of parameters in the concordance model of cosmology. These parameters include, among other things, the amount of dark matter, the amount of normal matter, and the Hubble rate.

There is for example the data from baryon acoustic oscillations and from the cosmic microwave background which are currently best fit by the presence of dark energy. But if the new paper is correct, then the current best-fit parameters for those other measurements no longer agree with those of the supernovae measurements. This does not mean that the new paper is wrong. It means that one has to re-analyze the complete set of data to find out what is overall the combination of parameters that makes the best fit.

This paper, I have to emphasize, has been peer reviewed, is published in a high quality journal, and the analysis meets the current scientific standard of the field. It is not a result that can be easily dismissed and it deserves to be taken very seriously, especially because it calls into question a Nobel Prize winning discovery. This analysis has of course to be checked by other groups and I am sure we will hear about this again, so stay tuned.



* Corrected this paragraph which originally said that all their supernovae were in the same direction of the sky.

Saturday, November 23, 2019

What is Dark Energy?

What’s the difference between dark energy and dark matter? What does dark energy have to do with the cosmological constant and is the cosmological constant really the worst prediction ever? At the end of this video, you will know.


First things first, what is dark energy? Dark energy is what causes the expansion of the universe to accelerate. It’s not only that astrophysicists think the universe expands, but that the expansion is actually getting faster. And, here’s the important thing, matter alone cannot do that. If there was only matter in the universe, the expansion would slow down. To make the expansion of the universe accelerate, it takes negative pressure, and neither normal matter nor dark matter has negative pressure – but dark energy has it.
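The statement that acceleration needs negative pressure comes from the second Friedmann equation, which I quote here just for reference, in units where the speed of light is one:

    \frac{\ddot a}{a} \;=\; -\,\frac{4\pi G}{3}\left(\rho + 3p\right),

so the expansion can only accelerate, ä > 0, if the pressure is sufficiently negative, p < −ρ/3.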

We do not actually know whether dark energy is really made of anything, so interpreting this pressure in the usual way, as caused by particles bumping into each other, may be misleading. This negative pressure is really just something that we write down mathematically and that fits the observations. It is similarly misleading to call dark energy “dark”, because “dark” suggests that it swallows light like, say, black holes do. But neither dark matter nor dark energy is actually dark in this sense. Instead, light just passes through them, so they are really transparent and not dark.

What’s the difference between dark energy and dark matter? Dark energy is what makes the universe expand, dark matter is what makes galaxies rotate faster. Dark matter does not have the funny negative pressure that is characteristic of dark energy. Really the two things are different and have different effects. There are of course some physicists speculating that dark energy and dark matter might have a common origin, but we don’t know whether that really is the case.

What does dark energy have to do with the cosmological constant? The cosmological constant is the simplest type of dark energy. As the name says, it’s really just a constant, it doesn’t change in time. Most importantly this means that it doesn’t change when the universe expands. This sounds innocent, but it is a really weird property. Think about this for a moment. If you have any kind of matter or radiation in some volume of space and that volume expands, then the density of the energy and pressure will decrease just because the stuff dilutes. But dark energy doesn’t dilute! It just remains constant.
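To put some equations behind this: as the universe expands with scale factor a, the energy densities of the different components scale roughly as

    \rho_{\mathrm{matter}} \propto a^{-3}, \qquad \rho_{\mathrm{radiation}} \propto a^{-4}, \qquad \rho_{\Lambda} = \mathrm{const},

which is why the cosmological constant eventually wins over everything else.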

Doesn’t this violate energy conservation? I get this question a lot. The answer is yes, and no. Yes, it does violate energy conservation in the way that we normally use the term. That’s because if the volume of space increases but the density of dark energy remains constant, then it seems that there is more energy in that volume. But energy just is not a conserved quantity in general relativity, if the volume of space can change with time. So, no, it does not violate energy conservation because in general relativity we have to use a different conservation law, that is the local conservation of all kinds of energy densities. And this conservation law is fulfilled even by dark energy. So the mathematics is all fine, don’t worry.
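For those of you who want to see the mathematics: the local conservation law in an expanding universe reads

    \dot\rho + 3H\,(\rho + p) \;=\; 0,

and a cosmological constant has the equation of state p = −ρ, so that the time derivative of its density vanishes. A constant energy density is therefore perfectly consistent with this law.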

The cosmological constant was famously introduced by Einstein and then discarded again. But astrophysicists today think that it is necessary to explain observations, and that it has a small, positive value. But I often hear physicists claim that if you try to calculate the value of the cosmological constant, then the result is 120 orders of magnitude larger than what we observe. This, so the story has it, is supposedly the worst prediction ever.

Trouble is, that’s not true! It just isn’t a prediction. If it were a prediction, I ask you, what theory was ruled out by it being so terribly wrong? None, of course. The reason is that this constant which you can calculate – the one that is 120 orders of magnitude too large – is not observable. It doesn’t correspond to anything we can measure. The actually measurable cosmological constant is a free parameter of Einstein’s theory of general relativity that cannot be calculated with the theories we currently have.
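For the record, the number that is usually quoted comes from a naive estimate: if you sum up the zero-point energies of the known quantum fields up to the Planck scale, you get an energy density of roughly the Planck mass to the fourth power, while the observed value corresponds to an energy scale of a few milli-electron volts. Roughly,

    \rho_{\mathrm{naive}} \sim M_{\mathrm{Pl}}^4 \sim \left(10^{18}\,\mathrm{GeV}\right)^4, \qquad \rho_{\mathrm{obs}} \sim \left(10^{-12}\,\mathrm{GeV}\right)^4, \qquad \frac{\rho_{\mathrm{naive}}}{\rho_{\mathrm{obs}}} \sim 10^{120}.

My point is that this naive density is not something anyone can measure in the first place.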

Dark energy now is a generalization of the cosmological constant. This generalization allows that the energy density and pressure of dark energy can change with time and maybe also with space. In this case, dark energy is really some kind of field that fills the whole universe.

What observations speak for dark energy? Dark energy in the form of a cosmological constant is one of the parameters in the concordance model of cosmology. This model is also sometimes called ΛCDM. The Λ (Lambda) in this name is the cosmological constant and CDM stands for cold dark matter.

The cosmological constant in this model is not extracted from one observation in particular, but from a combination of observations. Notably that is the distribution of matter in the universe, the properties of the cosmic microwave background, and supernovae redshifts. Dark energy is necessary to make the concordance model fit to the data.

At least that’s what most physicists say. But some of them are claiming that the data have really been wrongly analyzed and the expansion of the universe doesn’t speed up after all. Isn’t science fun? If I get around to it, I’ll tell you something about this new paper next week, so stay tuned.

Friday, November 22, 2019

What can artificial intelligence do for physics? And what will it do to physics?

Neural net illustration. Screenshot from this video.

In the past two years, governments all over the world have launched research initiatives for Artificial Intelligence (AI). Canada, China, the United States, the European Commission, Australia, France, Denmark, the UK, Germany – everyone suddenly has a strategy for “AI made in” whatever happens to be their own part of the planet. In the coming decades, it is now foreseeable, tens of billions of dollars will flow into the field.

But ask a physicist what they think of artificial intelligence, and they’ll probably say “duh.” For them, AI was trendy in the 1980s. They prefer to call it “machine learning” and pride themselves on having used it for decades.

Already in the mid-1980s, researchers working in statistical mechanics – a field concerned with the interaction of large numbers of particles – set out to better understand how machines learn. They noticed that magnets with disorderly magnetization (known as “spin glasses”) can serve as a physical realization of certain mathematical rules used in machine learning. This in turn means that the physical behavior of these magnets sheds light on some properties of learning machines, such as their storage capacity. Back then, physicists also used techniques from statistical mechanics to classify the learning abilities of algorithms.

Particle physicists, too, were on the forefront of machine learning. The first workshop on Artificial Intelligence in High Energy and Nuclear Physics (AIHENP) was held already in 1990. Workshops in this series still take place, but have since been renamed to Advanced Computing and Analysis Techniques. This may be because the new acronym, ACAT, is catchier. But it also illustrates that the phrase “Artificial Intelligence” is no longer common use among researchers. It now appears primarily as an attention-grabber in the mass media.

Physicists avoid the term “Artificial Intelligence” not only because it reeks of hype, but because the analogy to natural intelligence is superficial at best, misleading at worst. True, the current models are loosely based on the human brain’s architecture. These so-called “neural networks” are algorithms based on mathematical representations of “neurons” connected by “synapses.” Using feedback about its performance – the “training” – the algorithm then “learns” to optimize a quantifiable goal, such as recognizing an image, or predicting a data-trend.

This type of iterative learning is certainly one aspect of intelligence, but it leaves much wanting. The current algorithms heavily rely on humans to provide suitable input data. They do not formulate their own goals. They do not propose models. They are, as far as physicists are concerned, but elaborate ways of fitting and extrapolating data.
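To give you an idea of what such fitting and extrapolating looks like in practice, here is a deliberately minimal sketch of a neural network, written in Python with nothing but numpy. The network size, the learning rate, and the toy data are all made up for illustration; no real physics analysis looks quite this simple:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "measurements": a noisy sine curve.
    x = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = np.sin(x) + 0.1 * rng.standard_normal(x.shape)

    # One hidden layer of tanh "neurons"; the weights are the "synapses".
    n_hidden = 20
    W1 = 0.5 * rng.standard_normal((1, n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = 0.5 * rng.standard_normal((n_hidden, 1))
    b2 = np.zeros(1)

    lr = 0.01
    for step in range(5000):
        # Forward pass: the network's prediction.
        h = np.tanh(x @ W1 + b1)
        y_pred = h @ W2 + b2

        # "Feedback about performance": the prediction error.
        err = y_pred - y

        # Backward pass: gradients of the squared error with respect to the weights.
        grad_W2 = h.T @ err / len(x)
        grad_b2 = err.mean(axis=0)
        grad_h = (err @ W2.T) * (1 - h**2)
        grad_W1 = x.T @ grad_h / len(x)
        grad_b1 = grad_h.mean(axis=0)

        # "Learning": nudge the weights to reduce the error.
        W1 -= lr * grad_W1
        b1 -= lr * grad_b1
        W2 -= lr * grad_W2
        b2 -= lr * grad_b2

    print("final mean squared error:", float((err**2).mean()))

After the training loop, the network interpolates the noisy data and can be evaluated at new points, which is all that “prediction” means here.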

But then, what novelty can AI bring to physics? A lot, it turns out. While the techniques are not new – even “deep learning” dates back to the early 2000s – today’s ease of use and sheer computational power allow physicists to assign computers to tasks previously reserved for humans. This has also enabled them to explore entirely new research directions. Until a few years ago, other computational methods often outperformed machine learning, but now machine learning leads in many different areas. This is why, in the past years, interest in machine learning has spread into seemingly every niche.

Most applications of AI in physics loosely fall into three main categories: Data analysis, modeling, and model analysis.

Data analysis is the most widely known application of machine learning. Neural networks can be trained to recognize specific patterns, and can also learn to find new patterns on their own. In physics, this is used in image analysis, for example when astrophysicists search for signals of gravitational lensing. Gravitational lensing happens when space-time around an object is deformed so much that it noticeably distorts the light coming from behind it. The recent, headline-making, black hole image is an extreme example. But most gravitational lensing events are more subtle, resulting in smears or partial arcs. AIs can learn to identify these.

Particle physicists also use neural networks to find patterns, both specific and unspecific ones. Highly energetic particle collisions, like those done at the Large Hadron Collider, produce huge amounts of data. Neural networks can be trained to flag interesting events. Similar techniques have been used to identify certain types of radio bursts, and may soon help find gravitational waves.

Machine learning aids the modeling of physical systems both by speeding up calculations and by enabling new types of calculations. For example, simulations for the formation of galaxies take a long time even on the current generation of super-computers. But neural networks can learn to extrapolate from the existing simulations, without having to re-run the full simulation each time, a technique that was recently successfully used to match the amount of dark matter to the amount of visible matter in galaxies. Neural networks have also been used to reconstruct what happens when cosmic rays hit the atmosphere, or how elementary particles are distributed inside composite particles.

For model analysis, machine learning is applied to better understand the properties of already known theories which cannot be extracted by other mathematical methods, or to speed up computation. For example, the interaction of many quantum particles can result in a variety of phases of matter. But the existing mathematical methods have not allowed physicists to calculate these phases. Neural nets can encode the many quantum particles and then classify the different types of behavior.

Similar ideas underlie neural networks that seek to classify the properties of materials, such as conductivity or compressibility. While the theory for the materials’ atomic structure is known in principle, many calculations have so far exceeded the existing computational resources. Machine learning is beginning to change that. Many hope that it may one day allow physicists to find materials that are superconducting at room temperature. Another fertile area for applications of neural nets is “quantum tomography,” that is, the reconstruction of a quantum state from the measurements performed on it, a problem of high relevance for quantum computing.

And it is not only that machine learning advances physics, physics can in return advance machine learning. At present, it is not well understood just why neural nets work as well as they do. Since some neural networks can be represented as physical systems, knowledge from physics may shed light on the situation.

In summary, machine learning rather suddenly allows physicists to tackle a lot of problems that were previously intractable, simply because of the high computational burden.

What does this mean for the future of physics? Will we see the “End of Theory” as Chris Anderson oracled in 2008?

I do not think so. There are many different types of neural networks, which differ in their architecture and learning scheme. Physicists now have to understand which algorithm works for which case and how well, the same way they previously had to understand which theory works for which case and how well. Rather than spelling the end of theory, machine learning will take it to the next level.


Wednesday, November 20, 2019

Can we tell if there’s a wormhole in the Milky-Way?

This week I got a lot of questions about an article by Dennis Overbye in the New York Times, titled “How to Peer Through a Wormhole.” This article says “Theoretically, the universe may be riddled with tunnels through space and time” and goes on to explain that “Wormholes are another prediction of Einstein’s theory of general relativity, which has already delivered such wonders as an expanding universe and black holes.” Therefore, so Overbye tells his readers, it is reasonable to study whether the black hole in the center of our Milky Way is such a wormhole.


The trouble with this article is that it makes it appear as if wormholes are a prediction of general relativity comparable to the prediction of the expansion of the universe and the prediction of black holes. But this is most definitely not so. Overbye kind of says this by alluding to some “magic” that is necessary to have wormholes, but unfortunately he does not say it very clearly. This has caused quite some confusion. On Twitter, for example, Natalie Wolchover has put wormholes on par with gravitational waves.

So here are the facts. General Relativity is based on Einstein’s field equations which determine the geometry of space-time as a consequence of the energy and matter that is in that space-time. General Relativity has certain kinds of wormholes as solutions. These are the so-called Einstein-Rosen bridges. There are two problems with those.

First, no one knows how to create them with a physically possible process. It’s one thing to say that a solution exists in the world of mathematics. It’s another thing entirely to say that such a solution describes something in our universe. There are whole books full of solutions to Einstein’s field equations. Most of these solutions have no correspondence in the real world.

Second, even leaving aside that they won’t be created during the evolution of the universe, nothing can travel through these wormholes.

If you want to keep a wormhole open, you need some kind of matter that has a negative energy density, which is stuff that for all we know does not exist. Can you write down the mathematics for it? Yes. Do we have any reason whatsoever to think that this mathematics describes the real world? No. And that, folks, is really all there is to say about it. It’s mathematics and we have no reason to think it’s real.
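For those of you who want the technical statement: keeping a wormhole throat open requires matter that violates the null energy condition,

    T_{\mu\nu}\, k^\mu k^\nu \;\geq\; 0 \quad \text{for all null vectors } k^\mu,

which all matter we have ever observed respects. The hypothetical “exotic matter” needed for a traversable wormhole would have to violate this inequality, at least somewhere near the throat. This is the standard Morris–Thorne argument, sketched here without derivation.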

In this, wormholes are very, very different to the predictions of the expanding universe, gravitational waves, and black holes. The expanding universe, gravitational waves and black holes are consequences of general relativity. You have to make an effort to avoid that they exist. It’s the exact opposite with wormholes. You have to bend over backwards to make the math work so that they can exist.

Now, certain people like to tell me that this should count as “healthy speculation” and I should stop complaining about it. These certain people are either physicists who produce such speculations or science writers who report about it. In other words, they are people who make a living getting you to believe this mathematical fiction. But there is nothing healthy about this type of speculation. It’s wasting time and money that would be better used on research that could actually advance physics.

Let me give you an example to see the problem. Suppose the same thing happened in medicine. Doctors would invent diseases that we have no reason to think exist. They would then write papers about how to diagnose those invented diseases and how to cure those invented diseases and, for good measure, argue that someone should do an experiment to look for their invented diseases.

Sounds ridiculous? Yeah, it is ridiculous. But that’s exactly what is going on in the foundations of physics, and it has been going on for decades, which is why no one sees anything wrong with it anymore.

Is there at least something new that would explain why the NYT reports on this? What’s new is that two physicists have succeeded in publishing a paper which says that if the black hole in the center of our galaxy is a traversable wormhole then we might be able to see this. The idea is that if there is stuff moving around the other end of the wormhole then we might notice the gravitational influence of that stuff on our side of the wormhole.

Is it possible to look for this? Yes, it is also possible to look for alien spaceships coming through, and chances are, next week a paper will get published about this and the New York Times will report on it.

On a more technical note, a quick remark about the paper, which you find here:
The authors look at what happens with the gravitational field on one side of a non-traversable wormhole if a shell of matter is placed around the other side of the wormhole. They conclude:
“[T]he gravitational field can cross from one to the other side of the wormhole even from inside the horizon... This is very interesting since it implies that gravity can leak even through the non-traversable wormhole.”
But the only thing their equation says is that the strength of the gravitational field on one side of the wormhole depends on the matter on the other side of the wormhole. Which is correct of course. But there is no information “leaking” through the non-traversable (!) wormhole because it’s a time-independent situation. There is no change that can be measured here.

This isn’t simply because they didn’t look at the time-dependence, but because the spherically symmetric case is always time-independent. We know that thanks to Birkhoff’s theorem. We also know that gravitational waves have no monopole contribution, so there are no propagating modes in this case either.
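In case you have not come across it, Birkhoff’s theorem says that the only spherically symmetric vacuum solution of Einstein’s field equations is the Schwarzschild metric,

$$ ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2\, dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 + r^2\, d\Omega^2 \,, $$

which does not depend on time. So outside a spherically symmetric shell of matter the geometry is static, and nothing time-dependent can propagate outwards.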

The case that they later discuss, the one that is supposedly observable, instead talks of objects on orbits around the other end of the wormhole. This is, needless to say, not a spherically symmetric case and therefore this argument that the effect is measurable for non-traversable wormholes is not supported by their analysis. If you want more details, this comment gets it right.

Friday, November 15, 2019

Did scientists get climate change wrong?

On my recent trip to the UK, I spoke with Tim Palmer about the uncertainty in climate predictions.

Saturday, November 09, 2019

How can we test a Theory of Everything?

How can we test a Theory of Everything? That’s a question I get a lot in my public lectures. In the past decade, physicists have put forward some speculations that cannot be experimentally ruled out, ever, because you can always move predictions to energies higher than what we have tested so far. Supersymmetry is an example of a theory that is untestable in this particular way. After I explain this, I am frequently asked if it is possible to test a theory of everything, or whether such theories are just entirely unscientific.


It’s a good question. But before we get to the answer, I have to tell you exactly what physicists mean by “theory of everything”, so we’re on the same page. For all we currently know, the world is held together by four fundamental forces. That’s the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces, like for example the Van der Waals forces that hold together molecules, or muscle forces, derive from those four fundamental forces.

The electromagnetic force and the strong and the weak nuclear force are combined in the standard model of particle physics. These forces have in common that they have quantum properties. But the gravitational force stands apart from the three other forces because it does not have quantum properties. That’s a problem, as I have explained in an earlier video. A theory that solves the problem of the missing quantum behavior of gravity is called “quantum gravity”. That’s not the same as a theory of everything.

If you combine the three forces of the standard model into one force from which you can derive the standard model, that is called a “Grand Unified Theory” or GUT for short. That’s not a theory of everything either.

If you have a theory from which you can derive gravity and the three forces of the standard model, that’s called a “Theory of Everything” or TOE for short. So, a theory of everything is both a theory of quantum gravity and a grand unified theory.

The name is somewhat misleading. Such a theory of everything would of course *not* explain everything. That’s because for most purposes it would be entirely impractical to use it. It would be impractical for the same reason it’s impractical to use the standard model to explain chemical reactions, not to mention human behavior. The description of large objects in terms of their fundamental constituents does not actually give us much insight into what the large objects do. A theory of everything, therefore, may explain everything in principle, but still not do so in practice.

The other problem with the name “theory of everything” is that we can never know whether at some point in the future we will discover something that the theory does not explain. Maybe there is indeed a fifth fundamental force? Who knows.

So, what physicists call a theory of everything should really be called “a theory of everything we know so far, at least in principle.”

The best known example of a theory of everything is string theory. There are a few other approaches. Alain Connes, for example, has an approach based on non-commutative geometry. Asymptotically safe gravity may include a grand unification and therefore counts as a theory of everything. Though, for reasons I don’t quite understand, physicists do not normally discuss asymptotically safe gravity as a candidate for a theory of everything. If you know why, please leave a comment.

These are the large programs. Then there are a few small programs, like Garrett Lisi’s E8 theory, or Xiao-Gang Wen’s idea that the world is really made of qubits, or Felix Finster’s causal fermion systems.

So, are these theories testable?

Yes, they are testable. The reason is that any theory which solves the problem with quantum gravity must make predictions that deviate from general relativity. And those predictions, this is really important, cannot be arbitrarily moved to higher and higher energies. We know that because combining general relativity with the standard model, without quantizing gravity, just stops working near an energy known as the Planck energy.
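For the record, the Planck energy is

$$ E_{\rm Planck} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.22 \times 10^{19}\ {\rm GeV} \,, $$

some sixteen orders of magnitude above the TeV scale that the Large Hadron Collider probes.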

These approaches to a theory of everything normally also make other predictions. For example, they often come with a story about what happened in the early universe, which can have consequences that are still observable today. In some cases they result in subtle symmetry violations that can be measurable in particle physics experiments. The details about this differ from one theory to the next.

But what you really wanted to know, I guess, is whether these tests will be practically possible any time soon. I do think it is realistically possible that we will be able to see these deviations from general relativity in the next 50 years or so. About the other tests that rely on models for the early universe or symmetry violations, I’m not so sure, because for these it is again possible to move the predictions and then claim that we need bigger and better experiments to see them.

Is there any good reason to think that such a theory of everything is correct in the first place? No. There is good reason to think that we need a theory of quantum gravity, because without that the current theories are just inconsistent. But there is no reason to think that the forces of the standard model have to be unified, or that all the forces ultimately derive from one common explanation. It would be nice, but maybe that’s just not how the universe works.

Saturday, November 02, 2019

Have we really measured gravitational waves?


A few days ago I met a friend on the subway. He tells me he’s been at a conference and someone asked if he knows me. He says yes, and immediately people start complaining about me. One guy, apparently, told him to slap me.

What were they complaining about, you want to know? Well, one complaint came from a particle physicist, who was clearly dismayed that I think building a bigger particle collider is not a good way to invest $40 billion. But it was true when I said it the first time and it is still true: There are better things we can do with this amount of money. (Such as, for example, making better climate predictions, which can be done for as “little” as 1 billion dollars.)

Back to my friend on the subway. He told me that besides the grumpy particle physicist there were also several gravitational wave people who have issues with what I have written about the supposed gravitational wave detections by the LIGO collaboration. Most of the time if people have issues with what I’m saying it’s because they do not understand what I’m saying to begin with. So with this video, I hope to clear the situation up.

Let me start with the most important point. I do not doubt that the gravitational wave detections are real. But. I spend a lot of time on science communication, and I know that many of you doubt that these detections are real. And, to be honest, I cannot blame you for this doubt. So here’s my issue. I think that the gravitational wave community is doing a crappy job justifying the expenses for their research. They give science a bad reputation. And I do not approve of this.

Before I go on, a quick reminder what gravitational waves are. Gravitational waves are periodic deformations of space and time. These deformations can happen because Einstein’s theory of general relativity tells us that space and time are not rigid, but react to the presence of matter. If you have some distribution of matter that curves space a lot, such as a pair of black holes orbiting one another, it will cause space-time to wobble, and the wobbles carry energy away. That’s what gravitational waves are.
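In the formalism of general relativity, a weak gravitational wave is a small ripple on an otherwise flat space-time,

$$ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu} \,, \qquad |h_{\mu\nu}| \ll 1 \,, $$

and in vacuum Einstein’s field equations reduce, in a suitable gauge, to the wave equation $\Box\, h_{\mu\nu} = 0$, so the ripples travel at the speed of light.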

We have had indirect evidence for gravitational waves since the 1970s because you can measure how much energy a system loses through gravitational waves without directly measuring the gravitational waves. Hulse and Taylor did this by closely monitoring the orbital frequency of a pulsar binary. If the system loses energy, the two stars get closer and they orbit each other faster. The predictions for the emission of gravitational waves fit the observations exactly. Hulse and Taylor got a Nobel Prize for that in 1993.
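For a circular orbit, the textbook leading-order result for the power radiated in gravitational waves is

$$ P = \frac{32}{5}\,\frac{G^4}{c^5}\,\frac{(m_1 m_2)^2\,(m_1 + m_2)}{a^5} \,, $$

where $m_1$ and $m_2$ are the two masses and $a$ is their separation; for an eccentric orbit like that of the Hulse-Taylor binary there is an additional enhancement factor that depends on the eccentricity. The steep dependence on $a$ is why the orbital decay speeds up as the stars spiral towards each other.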

For the direct detection of gravitational waves you have to measure the deformation of space and time that they cause. You can do this by using very sensitive interferometers. An interferometer bounces laser light back and forth in two orthogonal directions and then combines the light.

Light is a wave and depending on whether the crests of the waves from the two directions lie on top of each other or not, the resulting signal is strong – that’s constructive interference – or washed out – that’s destructive interference. Just what happens depends very sensitively on the distance that the light travels. So you can use changes in the strength of the interference pattern to figure out whether one of the directions of the interferometer was temporarily shorter or longer.
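Here is a minimal sketch, in Python, of how sensitively the output depends on the arm lengths. This is my own back-of-the-envelope illustration for a bare Michelson interferometer, not the collaboration’s analysis code; the real detectors use Fabry-Perot cavities and power recycling that greatly boost the response. The numbers assume LIGO’s 1064 nanometer laser and 4 kilometer arms.

```python
import numpy as np

# Back-of-the-envelope Michelson response; not LIGO's actual analysis code.
wavelength = 1064e-9   # m, Nd:YAG laser wavelength used by LIGO
arm_length = 4e3       # m, length of each interferometer arm

def dark_port_intensity(strain):
    """Relative intensity at the 'dark' output port for a dimensionless strain h.

    A passing wave with strain h stretches one arm and squeezes the other,
    so the round-trip path difference is roughly 2 * h * L.
    """
    path_difference = 2 * strain * arm_length
    phase_difference = 2 * np.pi * path_difference / wavelength
    return np.sin(phase_difference / 2) ** 2

for h in [1e-23, 1e-22, 1e-21]:
    print(f"strain {h:.0e}: relative dark-port intensity {dark_port_intensity(h):.2e}")
```

The point of the exercise is just to see that a strain of 10^-21 changes the interference by an absurdly small amount, which is why the detectors need all that extra optical trickery and noise suppression.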

A question that I frequently get is how can this interferometer detect anything if both the light and the interferometer itself deform with space-time? Wouldn’t the effect cancel out? No, it does not cancel out, because the interferometer is not made of light. It’s made of massive particles and therefore reacts differently to a periodic deformation of space-time than light does. That’s why one can use light to find out that something happened for real. For more details, please check these papers.

The first direct detection of gravitational waves was made by the LIGO collaboration in September 2015. LIGO consists of two separate interferometers. They are both located in the United States, roughly three thousand kilometers apart. Gravitational waves travel at the speed of light, so if one comes through, it should trigger both detectors with a small delay that comes from the time it takes the wave to travel from one detector to the other. Looking for a signal that appears almost simultaneously in the two detectors helps to identify the signal in the noise.
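How small is that delay? A back-of-the-envelope estimate, assuming a straight-line separation of roughly 3000 kilometers between the two sites:

```python
# Maximum arrival-time difference between the two LIGO sites, assuming the
# wave travels along the line connecting them. My own rough estimate.
speed_of_light = 3.0e8   # m/s
separation = 3.0e6       # m, roughly the Hanford-Livingston distance

max_delay = separation / speed_of_light
print(f"Maximum time delay: {max_delay * 1000:.0f} ms")   # about 10 ms
```

So the two detectors should fire within about ten milliseconds of each other.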

This first signal measured by LIGO looks like a textbook example of a gravitational wave signal from a merger of two black holes. It’s a periodic signal that increases in frequency and amplitude, as the two black holes get closer to each other and their orbital period gets shorter. When the horizons of the two black holes merge, the signal is suddenly cut off. After this follows a brief period in which the newly formed larger black hole settles into a new state, called the ringdown. A Nobel Prize was awarded for this measurement in 2017. If you plot the frequency over time, you get a banana-shaped curve. It’s the upward bend that tells you that the frequency increases before the signal dies off entirely.
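To get a feeling for the numbers, here is the leading-order (Newtonian) estimate for how the gravitational-wave frequency grows as the merger approaches. The chirp mass of 30 solar masses is roughly what was reported for that first event; this is my own illustration, not the collaboration’s analysis.

```python
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8          # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

chirp_mass = 30 * M_sun   # roughly the chirp mass reported for the first detection

def gw_frequency(time_to_merger):
    """Leading-order gravitational-wave frequency (Hz) at a given time (s) before merger."""
    return (1 / np.pi) * (5 / (256 * time_to_merger)) ** (3 / 8) \
        * (G * chirp_mass / c**3) ** (-5 / 8)

for tau in [1.0, 0.2, 0.01, 0.001]:
    print(f"{tau * 1000:7.1f} ms before merger: about {gw_frequency(tau):4.0f} Hz")
```

The frequency climbs from a few tens of Hertz to a few hundred Hertz within a fraction of a second, which is the upward bend in the plot.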

Now, what’s the problem? The first problem is that no one seems to actually know where the curve in the famous LIGO plot came from. You would think it was obtained by a calculation, but members of the collaboration are on record saying it was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” Both the collaboration and the journal in which the paper was published have refused to comment. This, people, is highly inappropriate. We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data.

The other problem is that so far we do not have a confirmation that the signals which LIGO detects are in fact of astrophysical origin, and not misidentified signals that originated on Earth. The way that you could show this is with a LIGO detection that matches electromagnetic signals, such as gamma ray bursts, measured by telescopes.

The collaboration had, so far, one opportunity for this, which was an event in August 2017. The problem with this event is that the announcement from the collaboration about their detection came after the announcement of the incoming gamma ray. Therefore, the LIGO detection does not count as a confirmed prediction, because it was not a prediction in the first place – it was a postdiction.

It seems to offend people in the collaboration tremendously if I say this, so let me be clear. I have no reason to think that something fishy went on, and I know why the original detection did not result in an automatic alert. But this isn’t the point. The point is that no one knows what happened before the official announcement besides members of the collaboration. We are waiting for an independent confirmation. This one missed the mark.

Since 2017, the two LIGO detectors have been joined by a third detector called Virgo, located in Italy. In their third run, which started in April this year, the LIGO/Virgo collaboration has issued alerts for 41 events. From these 41 alerts, 8 were later retracted. Of the remaining gravitational wave events, 10 look like they are either neutron star mergers, or mergers of a neutron star with a black hole. In these cases, there should also be electromagnetic radiation emitted which telescopes can see. For black hole mergers, one does not expect this to be the case.

However, no telescope has so far seen a signal that fits to any of the gravitational wave events. This may simply mean that the signals have been too weak for the telescopes to see them. But whatever the reason, the consequence is that we still do not know that what LIGO and Virgo see are actually signals from outer space.

You may ask, isn’t it enough that they have a signal in their detector that looks like it could be caused by gravitational waves? Well, if this was the only thing that could trigger the detectors, yes. But that is not the case. The LIGO detectors have about 10-100 “glitches” per day. The glitches are bright and shiny signals but do not look like gravitational wave events. The cause of some of these glitches is known. The cause of others is not. LIGO uses a citizen science project to classify these glitches and has given them funky names like “Koi Fish” or “Blip.”

What this means is that they do not really know what their detector detects. They just throw away data that does not look the way they want it to look. This is not a good scientific procedure. Here is why.

Think of an animal. Let me guess, it’s... an elephant. Right? Right for you, right for you, not right for you? Hmm, that’s a glitch in the data, so you don’t count.

Does this prove that I am psychic? No, of course it doesn’t. Because selectively throwing away data that’s inconvenient is a bad idea. Goes for me, goes for LIGO too. At least that’s what you would think.

If we had an independent confirmation that the good-looking signal is really of astrophysical origin, this wouldn’t matter. But we don’t have that either. So that’s the situation in summary. The signals that LIGO and Virgo see are well explained by gravitational wave events. But we cannot be sure that these are actually signals coming from outer space and not some unknown terrestrial effect.

Let me finish by saying once again that personally I do not actually doubt these signals are caused by gravitational waves. But in science, it’s evidence that counts, not opinion.

Wednesday, October 30, 2019

The crisis in physics is not only about physics

[Image: downward spiral]
In the foundations of physics, we have not seen progress since the mid 1970s when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. Sure, some aspects of these theories have only been experimentally confirmed later. The last to-be-confirmed particle was the Higgs boson, predicted in the 1960s, measured in 2012. But all shortcomings of these theories – the missing quantization of gravity, dark matter, the quantum measurement problem, and more – have been known for more than 80 years. And they are as unsolved today as they were then.

The major cause of this stagnation is that physics has changed, but physicists have not changed their methods. As physics has progressed, the foundations have become increasingly harder to probe by experiment. Technological advances have not kept the size and expense of experiments manageable. This is why, in physics today, we have collaborations of thousands of people operating machines that cost billions of dollars.

With fewer experiments, serendipitous discoveries become increasingly unlikely. And lacking those discoveries, the technological progress that would be needed to keep experiments economically viable never materializes. It’s a vicious cycle: Costly experiments result in lack of progress. Lack of progress increases the costs of further experiments. This cycle must eventually lead into a dead end when experiments become simply too expensive to remain affordable. A $40 billion particle collider is such a dead end.

The only way to avoid being sucked into this vicious cycle is to choose carefully which hypotheses to put to the test. But physicists still operate by the “just look” idea as if this were the 19th century. They do not think about which hypotheses are promising because their education has not taught them to do so. Such self-reflection would require knowledge of the philosophy and sociology of science, and those are subjects physicists merely make dismissive jokes about. They believe they are too intelligent to have to think about what they are doing.

The consequence has been that experiments in the foundations of physics after the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know.

But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it.

Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.

This behavior is based on the hopelessly naïve, not to mention ill-informed, belief that science always progresses somehow, and that sooner or later someone will certainly stumble over something interesting. But even if that happened – even if someone found a piece of the puzzle – at this point we wouldn’t notice, because today any drop of genuine theoretical progress would drown in an ocean of “healthy speculation”.

And so, what we have here in the foundations of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.

Physicists need new methods. Better methods. Methods that are appropriate to the present century.

And please spare me the complaints that I supposedly do not have anything better to suggest, because that is a false accusation. I have said many times that looking at the history of physics teaches us that resolving inconsistencies has been a reliable path to breakthroughs, so that’s what we should focus on. I may be on the wrong track with this, of course. But for all I can tell at this moment in history I am the only physicist who has at least come up with an idea for what to do.

Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.

You may want to put this down as a minor worry because – $40 billion collider aside – who really cares about the foundations of physics? Maybe all these string theorists have been wasting tax money for decades, alright, but in the grand scheme of things it’s not all that much money. I grant you that much. Theorists are not expensive.

But even if you don’t care what’s up with strings and multiverses, you should worry about what is happening here. The foundations of physics are the canary in the coal mine. It’s an old discipline and the first to run into this problem. But the same problem will sooner or later surface in other disciplines if experiments become increasingly expensive and recruit large fractions of the scientific community.

Indeed, we see this beginning to happen in medicine and in ecology, too.

Small-scale drug trials have pretty much run their course. These are good only for finding in-your-face correlations that are universal across most people. Medicine, therefore, will increasingly have to rely on data collected from large groups over long periods of time to find increasingly personalized diagnoses and prescriptions. The studies which are necessary for this are extremely costly. They must be chosen carefully, for not many of them can be done. The study of ecosystems faces a similar challenge, where small, isolated investigations are about to reach their limits.

How physicists handle their crisis will give an example to other disciplines. So watch this space.