
Saturday, November 30, 2019

Dark energy might not exist after all

Last week I told you what dark energy is and why astrophysicists believe it exists. This week I want to tell you about a recent paper that claims dark energy does not exist.


To briefly remind you, dark energy is what speeds up the expansion of the universe. In contrast to all other types of matter and energy, dark energy does not dilute when the universe expands. This means that eventually all the other stuff is more dilute than dark energy and, therefore, it’s the dark energy that determines the ultimate fate of our universe. If dark energy is real, the universe will expand faster and faster for all eternity. If there’s no dark energy, the expansion will slow down instead and it might even reverse, in which case the universe will collapse back to a point.
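
To see this in formulas: in the standard cosmological description, the expansion rate H is governed by the Friedmann equation, which for a spatially flat universe reads

$$ H^2 = \frac{8\pi G}{3}\left(\rho_{\rm matter}\,a^{-3} + \rho_\Lambda\right), $$

where a is the scale factor of the universe (normalized to one today). The matter term dilutes away as a grows, while the dark energy density ρ_Λ stays constant, so once a is large enough, dark energy dominates and the expansion never stops. (This is the standard textbook relation, quoted here just for concreteness.)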

I don’t know about you, but I would like to know what is going to happen with our universe.

So what do we know about dark energy? The most important evidence we have for the existence of dark energy comes from supernova redshifts. Saul Perlmutter, Brian Schmidt, and Adam Riess won a Nobel Prize for this observation in 2011. It’s this Nobel-Prize-winning discovery which the new paper calls into question.

Supernovae give us information about dark energy because some of them are very regular. These are the so-called type Ia supernovae. Astrophysicists understand quite well how these supernovae happen. This allows physicists to calculate how much light these blasts emit as a function of time, so they know what was emitted. But the farther away a supernova is, the dimmer it appears. So, if you observe one of these supernovae, you can infer its distance from the brightness.
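
For concreteness, this is the textbook “standard candle” relation: a source with known luminosity L observed with flux F is at luminosity distance d_L, with

$$ F = \frac{L}{4\pi d_L^2}, \qquad m - M = 5\log_{10}\!\left(\frac{d_L}{10\,\mathrm{pc}}\right), $$

where m is the apparent magnitude you measure and M the absolute magnitude you know from the supernova type.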

At the same time, you can also determine the color of the light. Now, and this is the important point, the light from the supernova will stretch if space expands while the light travels from the supernova to us. This means that the wavelengths we observe here on Earth are longer than they were at emission or, to put it differently, the light arrives here with a frequency that is shifted to the red. This redshift of the light therefore tells us something about the expansion of the universe.
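
Quantitatively, the redshift z directly measures how much the universe has expanded while the light was underway:

$$ 1 + z = \frac{\lambda_{\rm observed}}{\lambda_{\rm emitted}} = \frac{a(t_{\rm observed})}{a(t_{\rm emitted})}, $$

where a(t) is again the scale factor of the universe.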

Now, the farther away a supernova is, the longer it takes the light to reach us, and the longer ago the supernova must have happened. This means that if you measure supernovae at different distances, they really happened at different times, and you know how the expansion of space changes with time.

And this is, in a nutshell, what Perlmutter and Riess did. They used the distance inferred from the brightness and the redshift of type Ia supernovae, and found that the only way to explain both types of measurements is that the expansion of the universe is getting faster. And this means that dark energy must exist.
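
Here is a toy sketch in Python of how such a fit works in principle. This is not the actual analysis of either team; the Hubble constant, the noise level, and the flat-universe assumption are all just illustrative choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

C = 299792.458  # speed of light in km/s
H0 = 70.0       # Hubble constant in km/s/Mpc (illustrative value)

def luminosity_distance(z, omega_lambda):
    """Luminosity distance in Mpc for a flat universe with
    matter fraction 1 - omega_lambda and constant dark energy."""
    omega_m = 1.0 - omega_lambda
    integrand = lambda zp: 1.0 / np.sqrt(omega_m * (1 + zp)**3 + omega_lambda)
    comoving, _ = quad(integrand, 0.0, z)
    return (1 + z) * (C / H0) * comoving

def distance_modulus(z, omega_lambda):
    dl = np.array([luminosity_distance(zi, omega_lambda)
                   for zi in np.atleast_1d(z)])
    return 5 * np.log10(dl * 1e5)  # 5 log10(d_L / 10 pc), d_L in Mpc

# toy data: a universe with omega_lambda = 0.7, plus measurement noise
rng = np.random.default_rng(42)
z_obs = np.linspace(0.01, 1.0, 50)
mu_obs = distance_modulus(z_obs, 0.7) + rng.normal(0, 0.1, z_obs.size)

# fit the dark energy fraction to the mock distance moduli
best, _ = curve_fit(distance_modulus, z_obs, mu_obs, p0=[0.5], bounds=(0, 1))
print(f"best-fit omega_lambda: {best[0]:.2f}")
```

The real analyses fit more parameters at once and treat the errors much more carefully, but the logic is the same: the measured brightness-redshift relation prefers a universe with dark energy over one without.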

Now, Perlmutter and Riess did their analysis 20 years ago and they used a fairly small sample of about 110 supernovae. Meanwhile, we have data for more than 1000 supernovae. For the new paper, the researchers used 740 supernovae from the JLA catalogue. But they also explain that if one just uses the data from this catalogue as it is, one gets a wrong result. The reason is that the data has been “corrected” already.

This correction is made because the story that I just told you about the redshift is more complicated than I made it sound. That’s because the frequency of light from a distant source can also shift just because our galaxy moves relative to the source. More generally, both our galaxy and the source move relative to the average rest frame of stuff in the universe. And it is this latter frame that one wants to make a statement about when it comes to the expansion of the universe.
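
To lowest order, the two effects combine like this:

$$ 1 + z_{\rm observed} = (1 + \bar z)\left(1 + \frac{v_{\rm peculiar}}{c}\right), $$

where z̄ is the cosmological redshift from the expansion and v_peculiar is the velocity of the source (or observer) relative to the cosmic rest frame. It is z̄ that carries the cosmological information.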

How do you even make such a correction? Well, you need to have some information about the motion of our galaxy from observations other than supernovae. You can do that by relying on regularities in the emission of light from galaxies and galaxy clusters. This allows astrophysicists to create a map of the velocities of galaxies around us, called the “bulk flow”.

But the details don’t matter all that much. To understand this new paper you only need to know that the authors had to go and reverse this correction to get the original data. And *then they fitted the original data rather than using data that were, basically, assumed to converge to the cosmological average.

What they found is that the best fit to the data is that the redshift of supernovae is not the same in all directions, but that it depends on the direction. This direction is aligned with the direction in which we move through the cosmic microwave background. And – most importantly – you do not need accelerated expansion to explain the observations.
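
The paper itself fits a dipole in the deceleration parameter with a maximum-likelihood analysis. As a purely hypothetical toy version of the idea, one can ask whether supernova residuals correlate with the angle to a fixed direction on the sky:

```python
import numpy as np

# toy sketch: test whether residuals depend on direction, by fitting
# a monopole + dipole along a fixed axis (stand-in for the CMB dipole).
rng = np.random.default_rng(1)

n_sn = 500
v = rng.normal(size=(n_sn, 3))                       # random sky directions
n_hat = v / np.linalg.norm(v, axis=1, keepdims=True)

d_hat = np.array([0.0, 0.0, 1.0])  # stand-in for the CMB dipole direction
cos_theta = n_hat @ d_hat

# fake residuals with a built-in dipole of amplitude 0.05
residuals = 0.05 * cos_theta + rng.normal(0, 0.1, n_sn)

# linear least squares for: residual = monopole + dipole * cos(theta)
A = np.column_stack([np.ones(n_sn), cos_theta])
(monopole, dipole), *_ = np.linalg.lstsq(A, residuals, rcond=None)
print(f"monopole: {monopole:+.3f}   dipole: {dipole:+.3f}")
```

In this cartoon, a significant dipole with a negligible monopole would play the role of the paper’s claim: a direction-dependent effect, but no isotropic acceleration.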

If what they say is correct, then it is unnecessary to postulate dark energy which means that the expansion of the universe might not speed up after all.

Why didn’t Perlmutter and Riess come to this conclusion? They could not, because the supernovae that they looked at were skewed in direction. The ones with low redshift were in the direction of the CMB dipole; the ones with high redshift away from it. With a skewed sample like this, you can’t tell if the effect you see is the same in all directions.*

What about the other evidence for dark energy? Well, all the other evidence for dark energy is not evidence for dark energy in particular, but for a certain combination of parameters in the concordance model of cosmology. These parameters include, among other things, the amount of dark matter, the amount of normal matter, and the Hubble rate.

There is, for example, the data from baryon acoustic oscillations and from the cosmic microwave background, which are currently best fit by the presence of dark energy. But if the new paper is correct, then the current best-fit parameters for those other measurements no longer agree with those of the supernova measurements. This does not mean that the new paper is wrong. It means that one has to re-analyze the complete set of data to find out which combination of parameters makes the best fit overall.

This paper, I have to emphasize, has been peer reviewed, is published in a high-quality journal, and the analysis meets the current scientific standard of the field. It is not a result that can be easily dismissed and it deserves to be taken very seriously, especially because it calls into question a Nobel-Prize-winning discovery. This analysis of course has to be checked by other groups and I am sure we will hear about this again, so stay tuned.



* Corrected this paragraph which originally said that all their supernovae were in the same direction of the sky.

Saturday, November 23, 2019

What is Dark Energy?

What’s the difference between dark energy and dark matter? What does dark energy have to do with the cosmological constant and is the cosmological constant really the worst prediction ever? At the end of this video, you will know.


First things first, what is dark energy? Dark energy is what causes the expansion of the universe to accelerate. It’s not only that astrophysicists think the universe expands, but that the expansion is actually getting faster. And, here’s the important thing, matter alone cannot do that. If there were only matter in the universe, the expansion would slow down. To make the expansion of the universe accelerate, it takes negative pressure, and neither normal matter nor dark matter has negative pressure – but dark energy does.
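
The reason is the so-called acceleration equation of cosmology (a standard result, quoted here for reference):

$$ \frac{\ddot a}{a} = -\frac{4\pi G}{3}\,(\rho + 3p), $$

where ρ is the energy density and p the pressure. Accelerated expansion needs the right-hand side to be positive, which requires p < −ρ/3. Ordinary matter and dark matter have p ≥ 0, so they always decelerate the expansion.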

We do not actually know that dark energy is really made of anything, so interpreting this pressure in the usual way, as caused by particles bumping into each other, may be misleading. This negative pressure is really just something that we write down mathematically and that fits the observations. It is similarly misleading to call dark energy “dark”, because “dark” suggests that it swallows light like, say, black holes do. But neither dark matter nor dark energy is actually dark in this sense. Instead, light just passes through them, so they are really transparent and not dark.

What’s the difference between dark energy and dark matter? Dark energy is what makes the universe expand, dark matter is what makes galaxies rotate faster. Dark matter does not have the funny negative pressure that is characteristic of dark energy. Really the two things are different and have different effects. There are of course some physicists speculating that dark energy and dark matter might have a common origin, but we don’t know whether that really is the case.

What does dark energy have to do with the cosmological constant? The cosmological constant is the simplest type of dark energy. As the name says, it’s really just a constant, it doesn’t change in time. Most importantly this means that it doesn’t change when the universe expands. This sounds innocent, but it is a really weird property. Think about this for a moment. If you have any kind of matter or radiation in some volume of space and that volume expands, then the density of the energy and pressure will decrease just because the stuff dilutes. But dark energy doesn’t dilute! It just remains constant.
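
In formulas, the energy density ρ of a substance with pressure p = wρ changes with the scale factor a as

$$ \dot\rho + 3\,\frac{\dot a}{a}\,(\rho + p) = 0 \quad\Rightarrow\quad \rho \propto a^{-3(1+w)}. $$

Matter (w = 0) dilutes as a⁻³, radiation (w = 1/3) as a⁻⁴, but a cosmological constant has w = −1, so its density does not dilute at all.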

Doesn’t this violate energy conservation? I get this question a lot. The answer is yes, and no. Yes, it does violate energy conservation in the way that we normally use the term. That’s because if the volume of space increases but the density of dark energy remains constant, then it seems that there is more energy in that volume. But energy just is not a conserved quantity in general relativity, if the volume of space can change with time. So, no, it does not violate energy conservation because in general relativity we have to use a different conservation law, that is the local conservation of all kinds of energy densities. And this conservation law is fulfilled even by dark energy. So the mathematics is all fine, don’t worry.
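
That different conservation law is the local one,

$$ \nabla_\mu T^{\mu\nu} = 0, $$

where T is the stress-energy tensor. Its time component in an expanding universe is exactly the dilution equation above, and a cosmological constant fulfills it trivially, because its density never changes.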

The cosmological constant was famously introduced by Einstein and then discarded again. But astrophysicists today think that it is necessary to explain observations, and that it has a small, positive value. Yet I often hear physicists claiming that if you try to calculate the value of the cosmological constant, then the result is 120 orders of magnitude larger than what we observe. This, so the story has it, is supposedly the worst prediction ever.

Trouble is, that’s not true! It just isn’t a prediction. If it were a prediction, I ask you, what theory was ruled out by it being so terribly wrong? None, of course. The reason is that this constant which you can calculate – the one that is 120 orders of magnitude too large – is not observable. It doesn’t correspond to anything we can measure. The actually measurable cosmological constant is a free parameter of Einstein’s theory of general relativity that cannot be calculated by the theories we currently have.

Dark energy, now, is a generalization of the cosmological constant. This generalization allows the energy density and pressure of dark energy to change with time and maybe also with space. In this case, dark energy is really some kind of field that fills the whole universe.

What observations speak for dark energy? Dark energy in the form of a cosmological constant is one of the parameters in the concordance model of cosmology. This model is also sometimes called ΛCDM. The Λ (Lambda) in this name is the cosmological constant and CDM stands for cold dark matter.

The cosmological constant in this model is not extracted from one observation in particular, but from a combination of observations. Notably, these are the distribution of matter in the universe, the properties of the cosmic microwave background, and supernova redshifts. Dark energy is necessary to make the concordance model fit the data.

At least that’s what most physicists say. But some of them are claiming that really the data has been wrongly analyzed and the expansion of the universe doesn’t speed up after all. Isn’t science fun? If I get around to it, I’ll tell you something about this new paper next week, so stay tuned.

Friday, November 22, 2019

What can artificial intelligence do for physics? And what will it do to physics?

Neural net illustration. Screenshot from this video.

In the past two years, governments all over the world have launched research initiatives for Artificial Intelligence (AI). Canada, China, the United States, the European Commission, Australia, France, Denmark, the UK, Germany – everyone suddenly has a strategy for “AI made in” whatever happens to be their own part of the planet. In the coming decades, it is now foreseeable, tens of billions of dollars will flow into the field.

But ask a physicist what they think of artificial intelligence, and they’ll probably say “duh.” For them, AI was trendy in the 1980s. They prefer to call it “machine learning” and pride themselves on having used it for decades.

Already in the mid-1980s, researchers working in statistical mechanics – a field concerned with the interaction of large numbers of particles – set out to better understand how machines learn. They noticed that magnets with disorderly magnetization (known as “spin glasses”) can serve as a physical realization for certain mathematical rules used in machine learning. This in turn means that the physical behavior of these magnets sheds light on some properties of learning machines, such as their storage capacity. Back then, physicists also used techniques from statistical mechanics to classify the learning abilities of algorithms.

Particle physicists, too, were at the forefront of machine learning. The first workshop on Artificial Intelligence in High Energy and Nuclear Physics (AIHENP) was held already in 1990. Workshops in this series still take place, but have since been renamed to Advanced Computing and Analysis Techniques. This may be because the new acronym, ACAT, is catchier. But it also illustrates that the phrase “Artificial Intelligence” is no longer in common use among researchers. It now appears primarily as an attention-grabber in the mass media.

Physicists avoid the term “Artificial Intelligence” not only because it reeks of hype, but because the analogy to natural intelligence is superficial at best, misleading at worst. True, the current models are loosely based on the human brain’s architecture. These so-called “neural networks” are algorithms based on mathematical representations of “neurons” connected by “synapses.” Using feedback about its performance – the “training” – the algorithm then “learns” to optimize a quantifiable goal, such as recognizing an image, or predicting a data-trend.

This type of iterative learning is certainly one aspect of intelligence, but it leaves much to be desired. The current algorithms heavily rely on humans to provide suitable input data. They do not formulate their own goals. They do not propose models. They are, as far as physicists are concerned, but elaborate ways of fitting and extrapolating data.
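
For readers who have never seen one: below is a minimal sketch of such a network in Python, a single hidden layer trained by gradient descent to fit noisy data. All the numbers are arbitrary illustrative choices; real research code uses dedicated libraries.

```python
import numpy as np

# a minimal neural network: one hidden layer, trained by gradient
# descent to fit noisy data -- "fitting and extrapolating" in its
# simplest form.
rng = np.random.default_rng(0)

x = np.linspace(-1, 1, 100).reshape(-1, 1)
y = np.sin(3 * x) + rng.normal(0, 0.1, x.shape)   # noisy target data

n_hidden = 20
W1 = rng.normal(0, 1, (1, n_hidden))   # "synapses", input to hidden
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 1, (n_hidden, 1))   # "synapses", hidden to output
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)           # hidden "neurons"
    y_pred = h @ W2 + b2
    err = y_pred - y                   # feedback on performance
    # backpropagate the error to get the gradients
    dW2 = h.T @ err / len(x)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(axis=0)
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g                    # the "training" update

print("final mean squared error:", float((err**2).mean()))
```

Nothing in this loop sets its own goals or proposes a model; it only adjusts numbers to reduce a prescribed error, which is the point being made above.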

But then, what novelty can AI bring to physics? A lot, it turns out. While the techniques are not new – even “deep learning” dates back to the early 2000s – today’s ease of use and sheer computational power allows physicists to now assign computers to tasks previously reserved for humans. It has also enabled them to explore entirely new research directions. Until a few years ago, other computational methods often outperformed machine learning, but now machine learning leads in many different areas. This is why, in the past years, interest in machine learning has spread into seemingly every niche.

Most applications of AI in physics loosely fall into three main categories: Data analysis, modeling, and model analysis.

Data analysis is the most widely known application of machine learning. Neural networks can be trained to recognize specific patterns, and can also learn to find new patterns on their own. In physics, this is used in image analysis, for example when astrophysicists search for signals of gravitational lensing. Gravitational lensing happens when space-time around an object is deformed so much that it noticeably distorts the light coming from behind it. The recent, headline-making, black hole image is an extreme example. But most gravitational lensing events are more subtle, resulting in smears or partial arcs. AIs can learn to identify these.

Particle physicists also use neural networks to find patterns, both specific and unspecific ones. Highly energetic particle collisions, like those done at the Large Hadron Collider, produce huge amounts of data. Neural networks can be trained to flag interesting events. Similar techniques have been used to identify certain types of radio bursts, and may soon help find gravitational waves.

Machine learning aids the modeling of physical systems both by speeding up calculations and by enabling new types of calculations. For example, simulations for the formation of galaxies take a long time even on the current generation of super-computers. But neural networks can learn to extrapolate from the existing simulations, without having to re-run the full simulation each time, a technique that was recently successfully used to match the amount of dark matter to the amount of visible matter in galaxies. Neural networks have also been used to reconstruct what happens when cosmic rays hit the atmosphere, or how elementary particles are distributed inside composite particles.

For model analysis, machine learning is applied to better understand the properties of already known theories which cannot be extracted by other mathematical methods, or to speed up computation. For example, the interaction of many quantum particles can result in a variety of phases of matter. But the existing mathematical methods have not allowed physicists to calculate these phases. Neural nets can encode the many quantum particles and then classify the different types of behavior.

Similar ideas underlie neural networks that seek to classify the properties of materials, such as conductivity or compressibility. While the theory for the materials’ atomic structure is known in principle, many calculations have so far exceeded the existing computational resources. Machine learning is beginning to change that. Many hope that it may one day allow physicists to find materials that are superconducting at room temperature. Another fertile area for applications of neural nets is “quantum tomography,” that is, the reconstruction of a quantum state from the measurements performed on it, a problem of high relevance for quantum computing.

And it is not only that machine learning advances physics; physics can in return advance machine learning. At present, it is not well understood just why neural nets work as well as they do. Since some neural networks can be represented as physical systems, knowledge from physics may shed light on the situation.

In summary, machine learning rather suddenly allows physicists to tackle a lot of problems that were previously intractable, simply because of the high computational burden.

What does this mean for the future of physics? Will we see the “End of Theory” as Chris Anderson oracled in 2008?

I do not think so. There are many different types of neural networks, which differ in their architecture and learning scheme. Physicists now have to understand which algorithm works for which case and how well, the same way they previously had to understand which theory works for which case and how well. Rather than spelling the end of theory, machine learning will take it to the next level.


Wednesday, November 20, 2019

Can we tell if there’s a wormhole in the Milky Way?

This week I got a lot of questions about an article by Dennis Overbye in the New York Times, titled “How to Peer Through a Wormhole.” This article says “Theoretically, the universe may be riddled with tunnels through space and time” and goes on to explain that “Wormholes are another prediction of Einstein’s theory of general relativity, which has already delivered such wonders as an expanding universe and black holes.” Therefore, so Overbye tells his readers, it is reasonable to study whether the black hole in the center of our Milky Way is such a wormhole.


The trouble with this article is that it makes it appear as if wormholes are a prediction of general relativity comparable to the prediction of the expansion of the universe and the prediction of black holes. But this is most definitely not so. Overbye kind of says this by alluding to some “magic” that is necessary to have wormholes, but unfortunately he does not say it very clearly. This has caused quite some confusion. On Twitter, for example, Natalie Wolchover has put wormholes on par with gravitational waves.

So here are the facts. General Relativity is based on Einstein’s field equations which determine the geometry of space-time as a consequence of the energy and matter that is in that space-time. General Relativity has certain kinds of wormholes as solutions. These are the so-called Einstein-Rosen bridges. There are two problems with those.

First, no one knows how to create them with a physically possible process. It’s one thing to say that the solution exists in the world of mathematics. It’s another thing entirely to say that such a solution describes something in our universe. There are whole books full of solutions to Einstein’s field equations. Most of these solutions have no correspondence in the real world.

Second, even leaving aside that they won’t be created during the evolution of the universe, nothing can travel through these wormholes.

If you want to keep a wormhole open, you need some kind of matter that has a negative energy density, which is stuff that for all we know does not exist. Can you write down the mathematics for it? Yes. Do we have any reason whatsoever to think that this mathematics describes the real world? No. And that, folks, is really all there is to say about it. It’s mathematics and we have no reason to think it’s real.
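
For completeness, here is the technical statement: to hold a wormhole throat open, the matter near the throat has to violate the null energy condition,

$$ T_{\mu\nu}\,k^\mu k^\nu \geq 0 \quad \text{for all null vectors } k^\mu, $$

which every known form of classical matter obeys. This is the precise sense in which traversable wormholes require “negative energy” stuff.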

In this, wormholes are very, very different from the predictions of the expanding universe, gravitational waves, and black holes. The expanding universe, gravitational waves, and black holes are consequences of general relativity. You have to make an effort to prevent them from existing. It’s the exact opposite with wormholes. You have to bend over backwards to make the math work so that they can exist.

Now, certain people like to tell me that this should count as “healthy speculation” and I should stop complaining about it. These certain people are either physicists who produce such speculations or science writers who report about it. In other words, they are people who make a living getting you to believe this mathematical fiction. But there is nothing healthy about this type of speculation. It’s wasting time and money that would be better used on research that could actually advance physics.

Let me give you an example to see the problem. Suppose the same thing happened in medicine. Doctors would invent diseases that we have no reason to think exist. They would then write papers about how to diagnose those invented diseases and how to cure those invented diseases and, for good measure, argue that someone should do an experiment to look for their invented diseases.

Sounds ridiculous? Yeah, it is ridiculous. But that’s exactly what is going on in the foundations of physics, and it has been going on for decades, which is why no one sees anything wrong with it anymore.

Is there at least something new that would explain why the NYT reports on this? What’s new is that two physicists have succeeded in publishing a paper which says that if the black hole in the center of our galaxy is a traversable wormhole, then we might be able to see this. The idea is that if there is stuff moving around the other end of the wormhole, then we might notice the gravitational influence of that stuff on our side of the wormhole.

Is it possible to look for this? Yes. It is also possible to look for alien spaceships coming through, and chances are, next week a paper will get published about this and the New York Times will report on it.

On a more technical note, a quick remark about the paper, which you find here:
The authors look at what happens with the gravitational field on one side of a non-traversable wormhole if a shell of matter is placed around the other side of the wormhole. They conclude:
“[T]he gravitational field can cross from one to the other side of the wormhole even from inside the horizon... This is very interesting since it implies that gravity can leak even through the non-traversable wormhole.”
But the only thing their equation says is that the strength of the gravitational field on one side of the wormhole depends on the matter on the other side of the wormhole. Which is correct of course. But there is no information “leaking” through the non-traversable (!) wormhole because it’s a time-independent situation. There is no change that can be measured here.

This isn’t simply because they didn’t look at the time-dependence, but because the spherically symmetric case is always time-independent. We know that thanks to Birkhoff’s theorem. We also know that gravitational waves have no monopole contribution, so there are no propagating modes in this case either.

The case that they later discuss, the one that is supposedly observable, instead talks of objects on orbits around the other end of the wormhole. This is, needless to say, not a spherically symmetric case and therefore this argument that the effect is measurable for non-traversable wormholes is not supported by their analysis. If you want more details, this comment gets it right.

Friday, November 15, 2019

Did scientists get climate change wrong?

On my recent trip to the UK, I spoke with Tim Palmer about the uncertainty in climate predictions.

Saturday, November 09, 2019

How can we test a Theory of Everything?

How can we test a Theory of Everything? That’s a question I get a lot in my public lectures. In the past decade, physicists have put forward some speculations that cannot be experimentally ruled out, ever, because you can always move predictions to energies higher than what we have tested so far. Supersymmetry is an example of a theory that is untestable in this particular way. After I explain this, I am frequently asked if it is possible to test a theory of everything, or whether such theories are just entirely unscientific.


It’s a good question. But before we get to the answer, I have to tell you exactly what physicists mean by “theory of everything”, so we’re on the same page. For all we currently know, the world is held together by four fundamental forces. That’s the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces, like for example the Van-der-Waals forces that hold together molecules, or muscle forces, derive from those four fundamental forces.

The electromagnetic force and the strong and the weak nuclear force are combined in the standard model of particle physics. These forces have in common that they have quantum properties. But the gravitational force stands apart from the three other forces because it does not have quantum properties. That’s a problem, as I have explained in an earlier video. A theory that solves the problem of the missing quantum behavior of gravity is called “quantum gravity”. That’s not the same as a theory of everything.

If you combine the three forces in the standard model to only one force from which you can derive the standard model, that is called a “Grand Unified Theory” or GUT for short. That’s not a theory of everything either.

If you have a theory from which you can derive gravity and the three forces of the standard model, that’s called a “Theory of Everything” or TOE for short. So, a theory of everything is both a theory of quantum gravity and a grand unified theory.

The name is somewhat misleading. Such a theory of everything would of course *not* explain everything. That’s because for most purposes it would be entirely impractical to use it. It would be impractical for the same reason it’s impractical to use the standard model to explain chemical reactions, not to mention human behavior. The description of large objects in terms of their fundamental constituents does not actually give us much insight into what the large objects do. A theory of everything, therefore, may explain everything in principle, but still not do so in practice.

The other problem with the name “theory of everything” is that we can never know whether at some point in the future we will discover something that the theory does not explain. Maybe there is indeed a fifth fundamental force? Who knows.

So, what physicists call a theory of everything should really be called “a theory of everything we know so far, at least in principle.”

The best known example of a theory of everything is string theory. There are a few other approaches. Alain Connes, for example, has an approach based on non-commutative geometry. Asymptotically safe gravity may include a grand unification and therefore counts as a theory of everything. Though, for reasons I don’t quite understand, physicists do not normally discuss asymptotically safe gravity as a candidate for a theory of everything. If you know why, please leave a comment.

These are the large programs. Then there are a few small programs, like Garrett Lisi’s E8 theory, or Xiao-Gang Wen’s idea that the world is really made of qubits, or Felix Finster’s causal fermion systems.

So, are these theories testable?

Yes, they are testable. The reason is that any theory which solves the problem with quantum gravity must make predictions that deviate from general relativity. And those predictions, this is really important, cannot be arbitrarily moved to higher and higher energies. We know that because combining general relativity with the standard model, without quantizing gravity, just stops working near an energy known as the Planck energy.
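
For reference, the Planck energy is

$$ E_{\rm Planck} = \sqrt{\frac{\hbar c^5}{G}} \approx 1.22 \times 10^{19}\,\mathrm{GeV}, $$

about fifteen orders of magnitude above the energies tested at the Large Hadron Collider.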

These approaches to a theory of everything normally also make other predictions. For example, they often come with a story about what happened in the early universe, which can have consequences that are still observable today. In some cases they result in subtle symmetry violations that can be measurable in particle physics experiments. The details about this differ from one theory to the next.

But what you really wanted to know, I guess, is whether these tests are practically possible any time soon. I do think it is realistically possible that we will be able to see these deviations from general relativity in the next 50 years or so. About the other tests that rely on models for the early universe or symmetry violations, I’m not so sure, because for these it is again possible to move the predictions and then claim that we need bigger and better experiments to see them.

Is there any good reason to think that such a theory of everything is correct in the first place? No. There is good reason to think that we need a theory of quantum gravity, because without that the current theories are just inconsistent. But there is no reason to think that the forces of the standard model have to be unified, or that all the forces ultimately derive from one common explanation. It would be nice, but maybe that’s just not how the universe works.

Saturday, November 02, 2019

Have we really measured gravitational waves?


A few days ago I met a friend on the subway. He tells me he’s been at a conference and someone asked if he knows me. He says yes, and immediately people start complaining about me. One guy, apparently, told him to slap me.

What were they complaining about, you want to know? Well, one complaint came from a particle physicist, who was clearly dismayed that I think building a bigger particle collider is not a good way to invest $40 billion. But it was true when I said it the first time and it is still true: There are better things we can do with this amount of money. (Such as, for example, making better climate predictions, which can be done for as “little” as 1 billion dollars.)

Back to my friend on the subway. He told me that besides the grumpy particle physicist there were also several gravitational wave people who have issues with what I have written about the supposed gravitational wave detections by the LIGO collaboration. Most of the time if people have issues with what I’m saying it’s because they do not understand what I’m saying to begin with. So with this video, I hope to clear the situation up.

Let me start with the most important point. I do not doubt that the gravitational wave detections are real. But. I spend a lot of time on science communication, and I know that many of you doubt that these detections are real. And, to be honest, I cannot blame you for this doubt. So here’s my issue. I think that the gravitational wave community is doing a crappy job justifying the expenses for their research. They give science a bad reputation. And I do not approve of this.

Before I go on, a quick reminder what gravitational waves are. Gravitational waves are periodic deformations of space and time. These deformations can happen because Einstein’s theory of general relativity tells us that space and time are not rigid, but react to the presence of matter. If you have some distribution of matter that curves space a lot, such as a pair of black holes orbiting one another, these will cause space-time to wobble and the wobbles carry energy away. That’s what gravitational waves are.

We have had indirect evidence for gravitational waves since the 1970s because you can measure how much energy a system loses through gravitational waves without directly measuring the gravitational waves themselves. Hulse and Taylor did this by closely monitoring the orbital frequency of a pulsar binary. If the system loses energy, the two stars get closer and orbit faster around each other. The predictions for the emission of gravitational waves fit the observations exactly. Hulse and Taylor got a Nobel Prize for that in 1993.
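
For a circular orbit of two masses m₁ and m₂ at separation a, general relativity predicts that the system radiates energy in gravitational waves at the rate

$$ P = \frac{32}{5}\,\frac{G^4}{c^5}\,\frac{(m_1 m_2)^2\,(m_1 + m_2)}{a^5}, $$

which shrinks the orbit and shortens the orbital period. It is this predicted speed-up that matches the pulsar data so well. (This is the standard textbook formula, quoted for concreteness; the actual binary has an eccentric orbit, which changes the numbers but not the idea.)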

For the direct detection of gravitational waves you have to measure the deformation of space and time that they cause. You can do this by using very sensitive interferometers. An interferometer bounces laser light back and forth in two orthogonal directions and then combines the light.

Light is a wave and depending on whether the crests of the waves from the two directions lie on top of each other or not, the resulting signal is strong – that’s constructive interference – or washed out – that’s destructive interference. Just what happens depends very sensitively on the distance that the light travels. So you can use changes in the strength of the interference pattern to figure out whether one of the directions of the interferometer was temporarily shorter or longer.
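
Schematically (ignoring the recycling cavities and other refinements of the real instruments), the power at one output port of such an interferometer depends on the difference ΔL between the two arm lengths as

$$ P_{\rm out} = P_{\rm in}\,\cos^2\!\left(\frac{2\pi\,\Delta L}{\lambda}\right), $$

where λ is the laser wavelength and the round trip through each arm is already accounted for. A passing gravitational wave changes ΔL periodically and thereby modulates the output power.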

A question that I frequently get is how can this interferometer detect anything if both the light and the interferometer itself deform with space-time? Wouldn’t the effect cancel out? No, it does not cancel out, because the interferometer is not made of light. It’s made of massive particles and therefore reacts differently to a periodic deformation of space-time than light does. That’s why one can use light to find out that something happened for real. For more details, please check these papers.

The first direct detection of gravitational waves was made by the LIGO collaboration in September 2015. LIGO consists of two separate interferometers. They are both located in the United States, some thousand kilometers apart. Gravitational waves travel at the speed of light, so if one comes through, it should trigger both detectors with a small delay that comes from the time it takes the wave to travel from one detector to the other. Looking for a signal that appears almost simultaneously in the two detectors helps to identify the signal in the noise.
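
The arithmetic is simple: the two sites are roughly 3000 kilometers apart, so

$$ \Delta t \lesssim \frac{d}{c} \approx \frac{3\times 10^{6}\,\mathrm{m}}{3\times 10^{8}\,\mathrm{m/s}} = 10\,\mathrm{ms}, $$

with the exact delay depending on the direction the wave comes from.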

This first signal measured by LIGO looks like a textbook example of a gravitational wave signal from a merger of two black holes. It’s a periodic signal that increases in frequency and amplitude, as the two black holes get closer to each other and their orbiting period gets shorter. When the horizons of the two black holes merge, the signal is suddenly cut off. After this follows a brief period in which the newly formed larger black hole settles in a new state, called the ringdown. A Nobel Prize was awarded for this measurement in 2017. If you plot the frequency distribution over time, you get this banana. Here it's the upward bend that tells you that the frequency increases before dying off entirely.
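
As an illustration only, here is a toy chirp in Python. It uses the leading-order (Newtonian) scaling, in which the frequency grows as (t_merge − t)^(−3/8) and the amplitude roughly as f^(2/3); real template waveforms come from far more detailed calculations.

```python
import numpy as np

# toy inspiral "chirp": frequency and amplitude rise as the black
# holes spiral in, then the signal cuts off at merger. illustration
# only; no ringdown, arbitrary units throughout.
t = np.linspace(0.0, 1.0, 8000)
t_merge = 0.95
tau = np.clip(t_merge - t, 1e-4, None)   # time remaining to merger
f = 20.0 * tau**(-3.0 / 8.0)             # rising frequency
amp = f**(2.0 / 3.0)                     # rising amplitude
phase = 2 * np.pi * np.cumsum(f) * (t[1] - t[0])
h = amp * np.sin(phase)
h[t >= t_merge] = 0.0                    # sudden cutoff at merger
```

Plotting the frequency of such a signal against time gives exactly the upward-bending “banana” described above.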

Now, what’s the problem? The first problem is that no one seems to actually know where the curve in the famous LIGO plot came from. You would think it was obtained by a calculation, but members of the collaboration are on record saying it was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” Both the collaboration and the journal in which the paper was published have refused to comment. This, people, is highly inappropriate. We should not hand out Nobel Prizes if we don’t know how the predictions were fitted to the data.

The other problem is that so far we do not have a confirmation that the signals which LIGO detects are in fact of astrophysical origin, and not misidentified signals that originated on Earth. The way that you could show this is with a LIGO detection that matches electromagnetic signals, such as gamma ray bursts, measured by telescopes.

The collaboration had, so far, one opportunity for this, which was an event in August 2017. The problem with this event is that the announcement from the collaboration about their detection came after the announcement of the incoming gamma ray. Therefore, the LIGO detection does not count as a confirmed prediction, because it was not a prediction in the first place – it was a postdiction.

It seems to offend people in the collaboration tremendously if I say this, so let me be clear. I have no reason to think that something fishy went on, and I know why the original detection did not result in an automatic alert. But this isn’t the point. The point is that no one knows what happened before the official announcement besides members of the collaboration. We are waiting for an independent confirmation. This one missed the mark.

Since 2017, the two LIGO detectors have been joined by a third detector called Virgo, located in Italy. In their third run, which started in April this year, the LIGO/Virgo collaboration has issued alerts for 41 events. Of these 41 alerts, 8 were later retracted. Of the remaining gravitational wave events, 10 look like they are either neutron star mergers, or mergers of a neutron star with a black hole. In these cases, there should also be electromagnetic radiation emitted which telescopes can see. For black hole mergers, one does not expect this to be the case.

However, no telescope has so far seen a signal that fits to any of the gravitational wave events. This may simply mean that the signals have been too weak for the telescopes to see them. But whatever the reason, the consequence is that we still do not know that what LIGO and Virgo see are actually signals from outer space.

You may ask, isn’t it enough that they have a signal in their detector that looks like it could be caused by gravitational waves? Well, if this was the only thing that could trigger the detectors, yes. But that is not the case. The LIGO detectors have about 10-100 “glitches” per day. The glitches are bright and shiny signals but do not look like gravitational wave events. The cause of some of these glitches is known; the cause of others is not. LIGO uses a citizen science project to classify these glitches and has given them funky names like “Koi Fish” or “Blip.”

What this means is that they do not really know what their detector detects. They just throw away data that doesn’t look the way they want it to look. This is not a good scientific procedure. Here is why.

Think of an animal. Let me guess, it’s... an elephant. Right? Right for you, right for you, not right for you? Hmm, that’s a glitch in the data, so you don’t count.

Does this prove that I am psychic? No, of course it doesn’t. Because selectively throwing away data that’s inconvenient is a bad idea. Goes for me, goes for LIGO too. At least that’s what you would think.

If we had an independent confirmation that the good-looking signal is really of astrophysical origin, this wouldn’t matter. But we don’t have that either. So that’s the situation in summary. The signals that LIGO and Virgo see are well explained by gravitational wave events. But we cannot be sure that these are actually signals coming from outer space and not some unknown terrestrial effect.

Let me finish by saying once again that personally I do not actually doubt these signals are caused by gravitational waves. But in science, it’s evidence that counts, not opinion.