Saturday, January 22, 2022

Does the sun cause earthquakes? (It's not as crazy as it sounds.)

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video. References are in the info fold below the video on YouTube or on Patreon.]


Earthquakes are the deadliest natural disasters. According to a report from the United Nations Office for Disaster Risk Reduction, in the period from 1998 to 2017 earthquakes accounted for 7.8% of natural disasters, but for 56% of deaths from natural disasters. Why is it so hard to predict earthquakes? Did you know that the number of earthquakes correlates with solar activity and with the length of the day? You didn’t? Well then stay tuned, because that’s what we’ll talk about today.

This is the first part of a two-part video about earthquake prediction. In this part, we will talk about the long-term and intermediate-term forecast for earthquake probability, ranging from centuries to months. And in the second part, which is scheduled for next week, we will talk about the short-term forecast, from months to seconds.

First things first, why do earthquakes happen? Well, there are many different types of earthquakes, but the vast majority of large earthquakes happen in the same regions, over and over again. You can see this right away from this map which shows the locations of earthquakes from 1900 to 2017. This happens because the surface of earth is fractured into about a dozen pieces, the tectonic plates, and these plates move at speeds of a few centimeters per year. But they don’t all move in the same direction, so where they meet, they rub on each other. Those places are called “faults”.

Most of the time, a fault is “locked”, which means that resistance from friction prevents the rocks from moving against each other. But the strain in the rock accumulates until it reaches a critical value where it overcomes the frictional resistance. Then the rocks on both sides of the fault suddenly slip against each other. This suddenly releases the stress and causes the earth to shake.

But that’s not the end of the story, because the plates continue to move, so the strain will build up again and eventually cause another earthquake. If the motion of the tectonic plates were perfectly steady and the friction perfectly constant, you’d expect the earthquakes to happen periodically. But reality is more complicated than that. The surface of the rocks isn’t exactly the same everywhere, the motion of the plates may not be entirely steady, the earthquakes themselves may change the rocks, and also, earthquakes in one location can trigger earthquakes elsewhere.

This is why earthquakes recur at irregular intervals. In a nutshell, it’s fairly easy to predict where big earthquakes are likely to happen, but difficult to predict when they will happen. According to the website of the US Geological Survey, “Neither the USGS nor any other scientists have ever predicted a major earthquake. We do not know how, and we do not expect to know how any time in the foreseeable future.”

Ok, so that’s it, thanks for watching. No, wait. That’s not it! Because even though no one knows how to predict an individual earthquake, we still might be able to predict the probability that an earthquake occurs in some period of time. This sounds somewhat lame, I know, but this information is super important if you want to decide whether it’s worth investing in improving the safety of buildings, or in warning systems. It can save lives.

And indeed, geophysicists know some general probabilistic facts about the occurrence of earthquakes. The best known one is probably the Gutenberg-Richter law. The Gutenberg-Richter law is a relationship between the magnitude and the total number of earthquakes that have at least that magnitude.

Loosely speaking it says that the number of earthquakes drops exponentially with the magnitude. For example, in seismically active regions, there will typically be about 10 times more events of magnitude 3.0 and up than there are of magnitude 4.0 and up, and 100 times more earthquakes of magnitude 2.0 and up than of 4.0 and up, you get the idea. The exact scaling depends on the region; the drop can actually be larger than a factor 10 per unit of magnitude.
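
For reference, the Gutenberg-Richter law is usually written as a relation between the magnitude M and the number N of earthquakes with at least that magnitude,

\[ \log_{10} N(\geq M) = a - b\,M , \]

where a and b are constants that depend on the region and time window under consideration. The factor of 10 per unit of magnitude in the example above corresponds to b = 1; a steeper drop corresponds to b larger than 1.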

The US Geological Survey has for example used the past records of seismic activity in the San Francisco Bay Area to predict that the area has a 75% probability of an earthquake of at least magnitude 6.0 before the year 2043.

Geophysicists also know empirically that the distribution of earthquakes over time strongly departs from a Poisson distribution, which means it doesn’t look like it’s entirely random. Instead, the observed distribution indicates the presence of correlations. They have found, for example, that earthquakes are more likely to repeat in intervals of 32 years than in other intervals. This was first reported in 2008 and has since also been found by other researchers. Here is, for example, a figure from a 2017 paper by Bendick and Bilham, which shows how the earthquake clustering deviates from randomness. A completely random distribution would sit at zero everywhere, and the blue curve shows there’s a periodicity in the intervals.

That there are patterns in the earthquake occurrences is very intriguing, and it’s the reason why geophysicists have looked for systematic influences on the observed rate of earthquakes.

We have chosen here three examples that we totally subjectively found to be the most interesting: Solar activity, tides, and the length of the day. I have to warn you that this is all quite recent research and somewhat controversial, but not as crazy as you might think.

First, solar activity. In 2020 a group of Italian researchers published a paper in which they report having found a very strong correlation between earthquakes and solar activity. They analyzed 20 years of data from the SOHO satellite about the density and velocity of protons in the magnetosphere, that’s about 500 kilometers above the surface of earth. Those protons come from the solar wind, so they depend on the solar activity. They then compared that to the worldwide seismicity in the corresponding period.

They found that the proton density strongly correlated with the occurrence of large earthquakes of magnitude 5.6 and up, with a time shift of one day. The authors claim that the probability that the correlation is just coincidence is smaller than 10 to the minus five. And the correlation increases with the magnitude of the earthquake.

The authors of the paper also propose a mechanism that could explain the correlation, at least qualitatively, namely a reverse piezoelectric effect. The piezoelectric effect is when mechanical stress produces an electric field. The reverse piezoelectric effect is, well, the reverse. Discharges of current from the atmosphere could produce stress in the ground. That could then trigger earthquakes in regions where the stress load was already close to rupture. A few other groups have since looked at this idea, and so far no one has found a major problem with the analysis.

The problem with using solar activity to predict earthquakes is, well, that it’s difficult to predict solar activity… Though the sun is known to have a periodic cycle, so if this result holds up, it’d tell us that during years of high solar activity we’re more likely to see big earthquakes.

Second, tides. The idea that tides trigger earthquakes has a long history; it’s been discussed in the scientific literature since the 19th century. But for a long time, scientists found statistically significant correlations only for special regions or circumstances. However, in 2016 a team of Japanese researchers published a paper in Nature in which they claimed to have found that very large earthquakes, above magnitude 8.2, tend to occur near the time of maximum tidal stress amplitude.

They claim that this result makes some sense if one knows that very large earthquakes often happen in subduction zones, so that’s places where one tectonic plate goes under another. And those places are known to be very sensitive to extra stress, which can be caused by the tides. Basically the idea is that tides may trigger an earthquake that was nearly about to happen. However, it isn’t always the case that large earthquakes happen when the tide is high and also, there are very few of these earthquakes overall which means the correlation has a low statistical significance.

Third: The length of the day. As you certainly know, the length of the day depends on which way the wind blows.

Ok, in all fairness I didn’t know this, but if you think about it for a second, it has to be the case. If the earth were a perfectly rigid ball, it would rotate around its axis steadily because angular momentum is conserved. But the earth isn’t a rigid ball. Most importantly, it’s surrounded by an atmosphere, and that atmosphere can move differently than the solid sphere. This means that if the wind blows against the direction in which the earth is spinning, the spinning of the earth has to speed up to conserve angular momentum. Physics!
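
As a back-of-the-envelope sketch, you can write the total angular momentum as the sum of the solid earth’s part and the atmosphere’s part,

\[ L = I\,\omega + L_{\rm atm} = \text{const} , \]

where I is the moment of inertia of the solid earth and \omega its rotation rate. If the winds change so that L_{\rm atm} goes down, \omega has to go up, and since the length of the day is 2\pi/\omega, the day gets shorter. The same bookkeeping applies to changes of I itself, which is the part that matters for earthquakes.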

This is a small effect but it’s totally measurable, on the order of some milliseconds a day. Indeed, you can use the length of the day to draw conclusions about annual weather phenomena, such as El Niño. This was first shown in a remarkable 1991 paper by Hide and Dickey. Have a look at this figure from their paper. The horizontal axis is years and the upper curve is the variation in the length of the day. The lower curve is a measure for the strength of the Southern Oscillation, that’s a wind pattern which you may know from the El Niño and La Niña years. You can see right away that they’re correlated.

Yes, so the length of the day depends on which way the wind is blowing. The fluid core of the earth is also sloshing around and affects the length of the day, but this effect is even smaller than that of the atmosphere and less well understood. Fun fact: The fluid core is only about 3000 km underneath your feet. That’s less than the distance from LA to New York. But back to the earthquakes.

Earthquakes make the earth more compact which decreases the moment of inertia. But again, angular momentum is conserved, so earthquakes shorten the length of the day. But that’s not all. Geophysicists have known since the 1970s that seismic activity correlates with the rotation of the earth and therefore the length of the day, in that shorter days are followed by more earthquakes, with a time-lag of about 5 years.

Since the 1970s the data has much improved, and this finding has become somewhat more robust. Based on this, Bendick and Bilham made a forecast in 2017 that in 2018 we would see an uptick in earthquakes. The number of large earthquakes since 2018 has been within the uncertainties of their forecast. Yes, correlation doesn’t necessarily imply causation, but correlations are useful for forecasting even if you don’t understand the causation.

Just why that happens is presently rather unclear. Bendick and Bilham suggest that earthquakes are weakly coupled by the rotation of the earth, and when that rotation frequency changes, that may cause a cascade of effects, basically by inertia. The earth spins and all those plates on it spin with it, but when the spinning changes it takes some time until the plates get the message. And then they don’t all react the same way, which may cause some extra stress. That triggers earthquakes in some places, and those trigger further earthquakes.

So it’s not like the changes in the rotation actually cause earthquakes, it’s just that they advance some earthquakes, and then retard others because the stress between the plates was released early. But really no one knows whether this is actually what happens.

As you can see, geophysicists are teasing out some really intriguing systematic correlations that may lead to better long-term predictions for earthquake risk. And next week we will talk about short term predictions, among other things whether animals can sense earthquakes and whether earthquake lights are real.

Saturday, January 15, 2022

Are warp drives science now?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Warp drives are not just science fiction. Einstein’s theory of general relativity says they should be possible. Yes, that guy again!

A year ago I told you about some recent developments, and since then warp drives have been in the news several times. In one case the headlines claimed that a physicist had found a warp drive that makes faster than light travel possible without requiring unphysical negative energies. In another case you could read that a “warp drive pioneer” had discovered an “actual, real world warp-bubble”. Seriously? What does that mean? That’s what we’ll talk about today.

First things first, what’s a warp drive? A warp drive is a new method of travel. It brings you from one place to another not by moving you through space, but by deforming space around you. Alright. Easy enough. Just one thing. How do you do that?

Well, Einstein taught us that you can deform space with energy, so you surround yourself with a bubble of energy, which contracts the space before you and expands it behind you. As a result, the places where you want to go move closer to you. So you’re traveling, even though you didn’t move. Okay. But what’s that bubble of energy made of and where do you get it from? Yes, indeed, good question. That’s why no one has ever built an actual warp drive.

As I explained in my previous video, warp drives are solutions to Einstein’s equations of general relativity. So they are mathematically possible. But that a warp drive is a solution of general relativity does not mean it makes physical sense.

What Einstein’s equations tell you is just that a certain distribution of energy and mass goes along with a certain curved space-time. If you put in a distribution of energy and mass, you get out the curved space-time this would create. If you put in a curved space-time, you get out the distribution of energy and mass that you would need to create it. There will always be some energy distribution for which your curved space-time is a solution. But in general those distributions of energy and mass are not physically possible.
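
For concreteness, the equations in question are Einstein’s field equations, which relate the curvature of space-time on the left side to the distribution of energy and mass, encoded in the stress-energy tensor, on the right side:

\[ G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu} . \]

You can read them in either direction: pick a stress-energy tensor and solve for the geometry, or pick a geometry, like a warp bubble, and read off the stress-energy you would need to create it.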

There are three different types of weird stuff, which we have never seen, that can become necessary for warp drives. There is (a) stuff that has negative energy density, (b) stuff that has a weird gravitational behavior which can seem repulsive, and (c) stuff that moves faster than the speed of light.

The worst type of weird stuff is that with the negative energy density, not only because we’ve never seen that but also because it would make the vacuum unstable. If negative energies existed, one could make pairs of negative and positive energy particles out of nothing, in infinite numbers, which destroys literally everything. So if negative energy existed we wouldn’t exist. We’ll mark that with a red thumb down.

The repulsive stuff isn’t quite as bad. Indeed, physicists have a few theories for such repulsive stuff, though there is no evidence it actually exists. There is for example the hypothetical “inflaton field” which allegedly rapidly expanded our universe just after the big bang. This inflaton field, or rather its potential, can act gravitationally repulsive. And dark energy is also repulsive stuff, if it is stuff. And if it exists. Which it may not. But well, you could say, at least that stuff doesn’t destroy the universe, so we’ll mark that with an orange thumb down.

Finally, superluminal stuff, so stuff that moves faster than light. This isn’t all that problematic other than that we’ve never seen it, so we’ll give it a yellow thumb down. It’s just that if you need stuff that moves faster than light to move stuff faster than light then that isn’t super useful.

Now that we have color coded the problematic types of energy, which makes us look super organized, let us look at whether warp drives require them. The best known warp drive solution dates back to 1994 and is named the “Alcubierre drive” after the physicist Miguel Alcubierre. The Alcubierre drive requires all of the above: negative energies, repulsive gravity, and superluminal stuff. That’s not particularly encouraging.

Now the big headlines that you saw in March last year were triggered by a press release from the University of Göttingen about the publication of a paper by Erik Lentz. Lentz claimed to have found a new solution for warp drives that does not require negative energies.

The paper was published in the journal Classical and Quantum Gravity, which is a specialized, high quality journal. I have published quite a few papers there myself. I mention this because I have seen a few people try to criticize Lentz’ paper by discrediting the journal. This is not a good argument; it’s a fine journal. However, this doesn’t mean the paper is right.

Lentz claims both in his paper and the press release that he avoided unphysical stuff by stitching together solutions that, to make a long story short, have fewer symmetries than the warp drives that were previously considered. He does not explain why or how this prevents negative energies.

Just one month later, in April 2021, another paper came out, this one by Shaun Fell and Lavinia Heisenberg. They made a similar claim to Lentz’, namely that they’d found a warp drive solution that doesn’t require unphysical stuff, by using a configuration that has fewer symmetries than the previously considered ones. The Fell and Heisenberg paper got published in the same journal and is mathematically more rigorous than Lentz’. But it didn’t come with a press release and didn’t make headlines.

Now, those new warp drive solutions, from Lentz and Fell and Heisenberg are in the same general class as the Alcubierre drive, which is called the Natario class, named after Jose Natario. In May last year, a third warp drive paper appeared, this one by Santiago, Schuster, and Visser. The authors claim to have proved that all warp drives in the Natario class require negative energies and are therefore not physical. So that means, the new solutions are all bad, bad, and bad.

Why the disagreement? Why do some people say it’s possible, while others prove it’s impossible? Well, in their paper, Santiago and coauthors point out that the other authors omitted a necessary check on their derivation. After that, Fell and Heisenberg revised their paper and now agree that their warp drive requires negative energies after all. Lentz also revised his paper but still claims that he doesn’t need negative energies. It’s unclear at the moment how he wants to escape the argument in the Santiago paper.

Now, the Santiago paper has not yet gotten published in a journal. I’ve read it and it looks good to me but in all honesty I didn’t check the calculation. If this result holds up, you may think it’s bad news because they’ve ruled out an entire class of warp drives. But I think it’s good news because their proof tells us why those solutions don’t work.

Just to give you the brief summary, they show that these solutions require that the integral over the energy density is zero. This means if it’s non-zero anywhere, it has to be negative somewhere. If they’re correct, this would tell us we should look for solutions which don’t have this constraint.
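
Written out, the constraint is, schematically (this is my simplified notation, not necessarily theirs),

\[ \int \rho \; dV = 0 , \]

where \rho is the energy density of the warp bubble. A bubble that does anything at all has \rho \neq 0 somewhere, and then the only way the integral can vanish is if \rho is negative somewhere else.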

Okay, so the situation with the new solution from Lentz isn’t entirely settled, but if the proof from the Santiago paper is right then Lentz’ solution also has a negative energy problem. The Santiago paper by the way does not apply to the more general class of warp drives from Bobrick and Martire, which I talked about in my earlier video. The issue with those more general warp drives is that they’re somewhat unspecific. They just say we need several regions with certain properties, but one doesn’t really know how to do that.

Those general warp drives can be divided into those that move faster than light and those that move slower than light. The ones that move faster than light still require stuff that moves faster than light, so they’re still problematic. The ones that stay below the speed of light however don’t seem to have any obvious physical problem. You might find it somewhat disappointing that a warp drive stays below the speed of light and I can see that. But look at it this way: if we could at least travel with nearly the speed of light that would already be great progress.

So the claim that Lentz found a warp drive solution which allows faster than light travel without negative energies is highly questionable. But what’s with the headlines that said someone had actually built a warp drive? Well that turns out to be just bad science communication.

In July last year, a paper was published by a group with the lead author Harold White. They had done a computer simulation of certain nanostructures. And in that simulation they found a distribution of energies similar to that of the Alcubierre drive. This can happen because on very short distances the Casimir effect can give rise to energy densities that are effectively negative. So, not only did they not actually build the thing, it was a computer simulation, it’s also an in-medium effect. It’s kind of like a simulation of a simulation and definitely not an “actual” warp drive.

Where does this leave us? The big picture is that warp drives are getting some serious attention from researchers who work on general relativity. I think this is a good development. We certainly have a long way to go, but as they say, every journey begins with a first step. I think warp drives are a possibility that’s worth investigating. If you want to work on warp drives yourself, check out Gianni Martire’s website, because he is offering research grants and tells me he has to get rid of the money fast.

Having said that, I think those people all miss the point. If you want to have a propulsion mechanism the relevant question isn’t whether there is some energy distribution that can move an object. The question is how efficiently can you convert the energy into motion. You want to know what it takes to accelerate something. At present those papers basically say if you throw out stuff that way, then the space-ship will go that way because momentum is conserved. And that is probably correct, but it’s not exactly a new idea.

Saturday, January 08, 2022

Do Climate Models predict Extreme Weather?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


The politics of climate change receives a lot of coverage. The science, not so much. That’s unfortunate because it makes it very difficult to tell apart facts from opinions about what to do in light of those facts. But that’s what you have me for, to tell apart the facts from the opinions.

What we’ll look at in this video is a peculiar shift in the climate change headlines that you may have noticed yourself. After preaching for two decades that weather isn’t climate, now climate scientists claim they can attribute extreme weather events to climate change. They also at the same time say that their climate models can’t predict the extreme events that actually happen. How does that make sense? That’s what we’ll talk about today.

In the last year a lot of extreme weather events made headlines, like the heat dome in northwestern America in summer 2021. It lasted for about two weeks and reached a record temperature of almost 50 degrees Celsius in British Columbia. More than a thousand people died from the heat. Later in the summer, we had severe floods in central Europe. Water levels rose so quickly that entire parts of cities were swept away. More than 200 people died.

Let’s look at what the headlines said about this. The BBC wrote “Europe's extreme rains made more likely by humans... Downpours in the region are 3-19% more intense because of human induced warming.” Or, according to Nature, “Climate change made North America’s deadly heatwave 150 times more likely”. Where do those numbers come from?

These probabilities come out of a research area called “event attribution”. The idea was first put forward by Myles Allen in a Nature commentary in 2003. Allen, who is a professor at the University of Oxford, was trying to figure out whether he might one day be able to sue the fossil fuel industry because his street was flooded, much to the delight of ducks, and he couldn’t find sufficiently many sandbags.

But extreme event attribution only attracted public attention in recent years. Indeed, last year two of the key scientists in this area, Friederike Otto and Geert Jan van Oldenborgh, made it onto the list of Time Magazine’s most influential people of 2021. Sadly, van Oldenborgh died two months ago.

The idea of event attribution is fairly simple. Climate models are computer simulations of the earth with many parameters that you can twiddle. One of those parameters is the carbon dioxide level in the atmosphere. Another one is methane, and then there’s aerosols and the amount of energy that we get from the sun, and so on. 

Now, if carbon dioxide levels increase then the global mean surface air temperature goes up. So far, so clear. But this doesn’t really tell you much because a global mean temperature isn’t something anyone ever experiences. We live locally from day to day and not globally averaged over a year.

For this reason, global average values work poorly in science communication. They just don’t mean anything to people. So the global average surface temperature has increased by one degree Celsius in 50 years? Who cares.

Enter extreme event attribution. It works like this. In your climate model, you leave the knob for greenhouse gases at pre-industrial levels. Then you ask how often a certain extreme weather event, like a heat wave or flood, would occur in some period of time. Then you do the same again with today’s level of greenhouse gases. And finally you compare the two cases.

So, say, with pre-industrial greenhouse gas levels you find one extreme flood in a century, but with current greenhouse gas levels you find ten floods, then you can say these events have become ten times more likely. I think such studies have become popular in the media because the numbers are fairly easy to digest. But what do the numbers actually mean?
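
Here is a minimal sketch of this counting procedure in code. It is not any group’s actual pipeline; the ensembles are just random numbers standing in for simulated annual rainfall maxima, and the threshold that defines the “extreme event” class is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Stand-ins for two model ensembles of annual maximum rainfall (mm/day):
# one run with pre-industrial greenhouse gas levels, one with today's levels.
preindustrial = rng.gumbel(loc=60.0, scale=10.0, size=10_000)
present_day = rng.gumbel(loc=66.0, scale=11.0, size=10_000)

threshold = 100.0  # what we decide to call an "extreme" event

p_pre = np.mean(preindustrial > threshold)  # probability per simulated year
p_now = np.mean(present_day > threshold)

print(f"pre-industrial probability: {p_pre:.4f}")
print(f"present-day probability:    {p_now:.4f}")
print(f"events of this kind are {p_now / p_pre:.1f} times more likely")
```

Note that the ratio you get out depends on where you put the threshold, which is exactly the arbitrariness I’ll get to in a moment.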

Well first of all there’s the issue of interpreting probabilities in general. An increase in the probability of occurrence of a certain event in a changed climate doesn’t mean it wouldn’t have happened without climate change. That’s because it’s always possible that a series of extreme events was just coincidence. Bad luck. But that it was just coincidence becomes increasingly less likely the more of those extreme events you see. Instead, you’re probably witnessing a trend.

That one can strictly speaking never rule out coincidence, but only say it’s unlikely to be coincidence, is always the case in science; nothing new about this. But for this reason I personally don’t like the word “attribution”. It just seems too strong. Maybe speaking of shifting trends would be better. But this is just my personal opinion about the nomenclature. There is however another issue with extreme event attribution: the probability of an event depends on how you describe the event.

Think of the floods in central Europe as an example. If you take all the data which we have for the event, the probability that you see this exact event in any computer simulation is zero. To begin with that’s because the models aren’t detailed enough. But also, the event is so specific with some particular distribution of clouds and winds and precipitation and what have you, you’d have to run your simulation forever to see it even once.

What climate scientists therefore do is describe the event as one in a more general class. Events in that class might have, say, more than some amount of rainfall during a few days in some region of Europe during some period of the year. The problem is that such a generalization is arbitrary. And the more specific you make the class for the event, the more unlikely it is that you’ll see it. So the numbers you get in the end depend strongly on how someone chose to generalize the event.

Here is how this was explained in a recent review paper: “Once an extreme event has been selected for analysis, our next step is to define it quantitatively. The results of the attribution study can depend strongly on this definition, so the choice needs to be communicated clearly.”

But that the probabilities ultimately depend on arbitrary choices isn’t the biggest problem. You could say as long as those choices are clearly stated, that’s fine, even if it’s usually not mentioned in media reports, which is not so fine. The much bigger problem is that even if you make the event more general, you may still not see it in the climate models.

This is because most of the current climate models have problems correctly predicting the severity and frequency of extreme events. Weather situations like dry spells or heat domes tend to come out not as long lasting or not as intense as they are in reality. Climate models are imperfect simulations of reality, particularly when it comes to extreme events. 

Therefore, if you look for the observed events in the model, the probability may just be zero, both with current greenhouse gas levels and with pre-industrial levels. And dividing zero by zero doesn’t tell you anything about the increase of probability.

Here is another quote from that recent review on extreme event attribution “climate models have not been designed to represent extremes well. Even when the extreme should be resolved numerically, the sub-grid scale parameterizations are often seen to fail under these extreme conditions. Trends can also differ widely between different climate models”. Here is how the climate scientist Michael Mann put this in an interview with CNN “The models are underestimating the magnitude of the impact of climate change on extreme weather events.”

But they do estimate the impact, right, so what do they do? Well, keep in mind that you have some flexibility for how you define your extreme event class. If you set the criteria for what counts as “extreme” low enough, eventually the events will begin to show up in some models. And then you just discard the other models. 

This is actually what they do. In the review that I mentioned they write “The result of the model evaluation is a set of models that seem to represent the extreme event under study well enough. We discard the others and proceed with this set, assuming they all represent the real climate equally well.”

Ok. Well. But this raises the question: if you had to discard half of the models because they didn’t work at all, what reason do you have to think that the other half will give you an accurate estimate? In all likelihood, the increase in probability which you get this way will be an underestimate.

Here’s a sketch to see why.

Suppose you have a distribution of extreme events that looks like this red curve. The vertical axis is the probability of an event in a certain period of time, and the horizontal axis is some measure of how extreme the events are. The duration of a heat wave or amount of rainfall or whatever it is that you are interested in. The distributions of those events will be different for pre-industrial levels of greenhouse gases than for the present levels. The increase in greenhouse gas concentrations changes the distribution so that extreme events become more likely. For the extreme event attribution, you want to estimate how much more likely they become. So you want to compare the areas under the curve for extreme events.

However, the event that we actually observe is so extreme it doesn’t show up in the models at all. This means the model underestimates what’s called the “tail” of this probability distribution. The actual distribution may instead look more like the green curve. We don’t know exactly what it looks like, but we know from observations that the tail has to be fatter than that of the distribution we get from the models.

Now what you can do to get the event attribution done is to look at less extreme events, because these have a higher probability of occurring in the models. You count how many events occurred for both cases of greenhouse gas levels and compare the numbers. Okay. But, look, we already know that the models underestimate the tail of the probability distribution. Therefore, the increase in probability which we get this way is just a lower bound on the actual increase in probability.
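
A toy calculation makes this concrete. Suppose, just for illustration, that warming shifts the whole distribution of some heat index by half a standard deviation; the numbers here are made up and not from any climate model:

```python
from scipy.stats import norm

# Toy distributions of a heat index, in units of its standard deviation.
pre = norm(loc=0.0, scale=1.0)  # pre-industrial greenhouse gas levels
now = norm(loc=0.5, scale=1.0)  # present-day levels

for threshold in (2.0, 3.0, 4.0):
    ratio = now.sf(threshold) / pre.sf(threshold)  # sf(x) = P(X > x)
    print(f"events beyond {threshold} sigma become {ratio:.1f} times more likely")
```

The further out in the tail you look, the larger the ratio. So if the models only resolve events up to, say, 3 sigma, while the event that actually happened was further out, then the quoted increase in probability is an underestimate, at least in this simple shifted-distribution picture.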

Again, I am not saying anything new; this is usually clearly acknowledged in the publications on the topic. Take for example a 2020 study of the 2019/20 wildfires in Australia that were driven by a prolonged drought. The authors of the study write:
“While all eight climate models that were investigated simulate increasing temperature trends, they all have some limitations for simulating heat extremes… the trend in these heat extremes is only 1 ºC, substantially lower than observed. We can therefore only conclude that anthropogenic climate change has made a hot week like the one in December 2019 more likely by at least a factor of two.”
At least a factor of two. But it could have been a factor 200 or 2 million for all we know.

And that is what I think is the biggest problem. The way that extreme event attributions are presented in the media conveys a false sense of accuracy. The probabilities that they quote could be orders of magnitude too small. The current climate models just aren’t good enough to give accurate estimates. This matters a lot because nations will have to make investments to prepare for the disasters that they expect to come and figure out how important that is, compared to other investments. What we should do, that’s opinion, but getting the facts right is a good place to start.

Saturday, January 01, 2022

What’s the difference between American English and British English?

[This video is about pronunciation. The transcript won’t make sense without the audio!]


It never occurred to me that one day people might want to hear me speak in a foreign language. That was not the plan when I studied physics. I’ve meanwhile subscribed to like a dozen English pronunciation channels and spend a lot of time with the online dictionary replaying words, so much so that at this point I think my channel should really be called “Sabine learns English.”

Inevitably, I’m now giving English lectures to my English friend who really doesn’t know anything about English. But then, I also don’t know anything about German, other than speaking it. And because looking into English pronunciation clicked some things into place that I kind of knew but never consciously realized, we’ll start the New Year with a not too brain-intensive video on the difference between American and British English. And that’s what we’ll talk about today.

I spent some years in the United States and some more in Canada, and when I came back to Europe my English sounded like this.

I then acquired a British friend. His name is Tim Palmer, and you already know him as the singing climate scientist.

Tim also has an interest in quantum mechanics and he’d get very offended when I’d say quannum mechanics. And since I’m the kind of person who overthinks everything, I can now exactly tell you what makes British people sound British.

But since my own pronunciation is clearly of no help, I asked Tim to read you some sentences. And to represent the American pronunciation I asked the astrophysicist Brian Keating who kindly agreed.

Before we listen to their speech samples, I want to be clear that of course there are many different accents in both the UK and in the US, and it’s not like there’s only one right pronunciation. But the differences that I’ll be talking about are quite general as I’m sure you’ll notice in a moment.

Now first of all there are many words which are just different in North America and in the UK. Trucks. Lorries. Cookies. Biscuits. Trash. Rubbish. Gasoline. Petrol, and so on.

There are also some words which are used in both languages but don’t mean the same thing, such as rubber or fanny. Just be careful with those.

And then there are words which are the same but are pronounced somewhat differently, like /təˈmeɪtəʊ/ and /təˈmɑːtəʊ/ or /ˈvaɪtəmɪn/ and /ˈvɪtəmɪn/, or they have a somewhat different emphasis, like /vækˈsiːn/, which in American English has the emphasis on the second syllable, whereas in British English it’s /ˈvæksiːn/ with the emphasis on the first syllable.

But besides that there are also some overall differences in the pronunciation. The probably most obvious one is the t’s. Listen to this first example from Brian and Tim and pay attention to those ts.

“Quantum mechanics isn’t as complicated as they say.”

In American English it’s quite common to use tap ts that sound kind of like d’s. Bedder, complicated. Or kind of mumble over the ts altogether as in quannum mechanics.

In British English the ts tend to be much clearer pronounced. Better, complicated, quantum mechanics.

While this is I believe the difference that’s the easiest to pick up on, it’s not overall the biggest difference. I think the biggest difference is probably that a lot of “r”s are silent in British English, which means they’re not pronounced at all. There are a lot of words in which this happens, like the word “word” which in American English would be “word”. Let’s hear the next sample and pay attention to the rs.

“The first stars were born about one hundred million years after the big bang.”

In American English you have first stars born. In British English, none of those has an r, so it’s first stars born. The Brits also drop those r’s at the end of words.

“The plumber wasn’t any less clever than the professor.”

That sound which the Brits leave at the end of an “er” word, is called the “schwa” sound and its phonetic spelling is an “e” turned by 180 degrees. It’s the most common sound in British English and it goes like this. E. That’s it.

In case you think that “Schwa” sounds kind of German that’s because it is kind of German. It goes back to a Hebrew word, but the term was introduced by the German linguist Johann Schmeller in 1821 to describe how Germans actually pronounced words rather than write them. The Schwa sound is all over the place in German too, for example at the end of words like Sonne, Treppe, or Donaudampfschiffahrtsgesellschaftskapitänsmütze.

But let’s come back to English and talk a little bit more about those rs. Not all r’s are silent in British English. You keep them at the beginning of words, right?, but you also keep them between two vowels. For example, “wherever” in American English has two rs; in British English you drop the one at the end but keep the one in the middle. Wherever.

The “r” dropping is actually easy to learn once you’ve wrapped your head around it because there are general rules for it. Personally I find the British pronunciation easier because the English “r”s don’t exist in German, so if you can get rid of them, great. Ok, but just dropping the r’s won’t make you sound like the queen, there are a few more things. Let’s hear another sample. This time, pay attention to the vowels.

“I thought I caught a cold after I fell into the water.”

So you hear the two different ts in “water” again, but more prominently you hear the British use “ooo” where the Americans use a sound that’s closer to an “aaa”. W/ooo/ter. W/aaa/ter. This one is also quite easy to learn because it’s a general thing. Somewhat more difficult is the next one. Again pay attention to the vowels.

“You got this wrong. It’s probably not possible.”

What you hear there is that some of the “a”s in American English are “o”s in British English. Those two sounds have a different phonetic spelling, the a is spelled like this, and the o is the upside down of this. The difference is most obvious in the word “not” which in American English sounds more like “not”. Same thing with possible which becomes possible. And I did expect Brian to say pr/a/bably but he pronounced it probably, which is interesting. As I said there’s lot of regional variations in the pronunciation.

I have found this one rather difficult to learn because there’s another “a” sound for which this shift does not happen, and that has a phonetic spelling which looks somewhat like a big Lambda. Conversation turns into conversation, consciously into consciously, but company stays company. God turns to god but nothing stays nothing. Yes, that is really confusing. And in that word “confusing” the pronunciation doesn’t change because the first syllable is unstressed and the o becomes a schwa. Which is also confusing.

One final difference. This took me a long time to notice but once you know it it’s really obvious. English famously has a lot of diphthongs; those are combinations of two vowels. For example “I” that’s actually two vowels “a” and “I” -- aaii. Or “ey” as in take or baby. That’s an e and an i. eeeiiii. Take.

Most of those diphthongs are pronounced pretty much the same in British and American English, except for the əʊ as in “no”. In American English that’s more like an əʊ whereas in British English you start close to an “e”, əʊ. No. No. Only. Only. And so on.

Let’s hear how Brian and Tim pronounce this.

“I don’t know how slowly it will go.”

As I said, once you know it, it’s entirely obvious, isn’t it. And if you want to sound Canadian you keep the American ou, but use the British eu for the aʊ. So out becomes out. About becomes about. Without, without. But be careful, because once you’ve got that eu in your about it’s really hard to get rid of.

The shift from the American ou to the British eu is also confusing because many dictionaries use the same phonetic spelling for both sounds. And also, you don’t do it if there’s an “l” coming after the əʊ. So for example “pole” doesn’t become pole. No. So don’t do it if an l is coming up, except there’s an exception to the exception which is “folks”.

And that’s it folks, next week we’re back to talking about science.

Did you know by the way that German almost became the official language in the United States? You did? Well that’s an urban legend, it never happened. The USA doesn’t even have an official language; it’s just that English is the most widely spoken one.

Many thanks to Tim and Brian for their recordings, and many thanks also to Kate Middleton from SpeakMyLanguage for helping with this video. I learned most of what I know about British English from Kate. If you need help with your English, go check out her website.

Saturday, December 25, 2021

We wish you a nerdy Xmas!

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Happy holidays everybody, today we’re celebrating Isaac Newton’s birthday with a hand selected collection of nerdy Christmas facts that you can put to good use in every appropriate and inappropriate occasion.

You have probably noticed that in recent years worshipping Newton on Christmas has become somewhat of a fad on social media. People are wishing each other a happy Newtonmas rather than Christmas because December 25th is also Newton’s birthday. But did you know that this fad is more than a century old?

In 1891, The Japan Daily Mail reported that a society of Newton worshippers had sprung up at the University of Tokyo. It was founded, no surprise, by mathematicians and physicists. It was basically a social club for nerds, with Newton’s picture residing over meetings. The members were expected to give speeches and make technical jokes that only other members would get. So kind of like physics conferences basically.

The Japan Daily Mail also detailed what the nerds considered funny. For example, on Christmas, excuse me, Newtonmas, they’d have a lottery in which everyone drew a paper with a scientist’s name and then got a matching gift. So if you drew Newton you’d get an apple, if you drew Franklin a kite, Archimedes got you a naked doll, and Kant-Laplace would get you a puff of tobacco into your face. That was supposed to represent the Nebular Hypothesis. What’s that? That’s the idea that solar systems form from gas clouds, and yes, that was first proposed by Immanuel Kant. No, it doesn’t rhyme with pissant, sorry.

Newton worship may not have caught on, but nebular hypotheses certainly have.

By the way, did you know that Xmas isn’t an atheist term for Christmas? The word “Christ” in Greek is Christos written like this (Χριστός.) That first letter is called /kaɪ/ and in the Roman alphabet it becomes an X. It’s been used as an abbreviation for Christ since at least the 15th century.

However, in the 20th century the abbreviation has become somewhat controversial among Christians because the “X” is now more commonly associated with a big unknown. So, yeah, use at your own risk. Or maybe stick with Happy Newtonmas after all?

Well that is controversial too because it’s not at all clear that Newton’s birthday is actually December 25th. Isaac Newton was born on December 25, 1642 in England.

But. At that time, the English still used the Julian calendar. That is already confusing, because the new, Gregorian calendar was introduced by Pope Gregory in 1582, well before Newton’s birth. It replaced the older, Julian calendar, which didn’t properly match the months to the orbit of Earth around the sun.

Yet, when Pope Gregory introduced the new calendar, the British were mostly Anglicans and they weren’t going to have some pope tell them what to do. So for over a hundred years, people in Great Britain celebrated Christmas 10 or 11 days later than most of Europe. Newton was born during that time. Great Britain eventually caved in and adopted the Gregorian calendar with a law passed in 1751; in September 1752 all dates jumped forward by 11 days overnight. So now Newton would have celebrated his birthday on January 4th, except by that time he was dead.

However, it gets more difficult because these two calendars keep running apart, so if you ran the old Julian calendar forward until today, then December 25th according to the old calendar would now actually be January 7th. So, yeah, I think sorting this out will greatly enrich your conversation over Christmas lunch. By the way, Greece didn’t adopt the Gregorian calendar until 1923. Except for the Monastic Republic of Mount Athos, of course, which still uses the Julian calendar.
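
If you want to compute the drift yourself: the Julian calendar inserts a leap day every 4 years, while the Gregorian calendar skips it in century years not divisible by 400. Here is a minimal sketch using the standard formula for the offset, valid for dates from March through December of years after the 1582 reform:

```python
def julian_gregorian_offset(year: int) -> int:
    """Days by which the Gregorian calendar runs ahead of the Julian calendar."""
    return year // 100 - year // 400 - 2

print(julian_gregorian_offset(1642))  # 10: Dec 25, 1642 (Julian) = Jan 4, 1643 (Gregorian)
print(julian_gregorian_offset(2022))  # 13: Dec 25 (Julian) now falls on Jan 7 (Gregorian)
```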

Regardless of exactly which day you think Newton was born, there’s no doubt he changed the course of science and with that the course of the world. But Newton was also very religious. He spent a lot of time studying the Bible looking for numerological patterns. On one occasion he argued, I hope you’re sitting, that the Pope is the anti-Christ, based in part on the appearance of the number 666 in scripture. Yeah, the Brits really didn’t like the Catholics, did they.

Newton also, at the age of 19 or 20, had a notebook in which he kept a list of sins he had committed such as eating an apple at the church, making pies on Sunday night, “Robbing my mother’s box of plums and sugar” and “Using Wilford’s towel to spare my own”. Bad boy. Maybe more interesting is that Newton recorded his secret confessions in a cryptic code that was only deciphered in 1964. There are still four words that nobody has been able to crack. If you get bored over Christmas, you can give it a try yourself, link’s in the info below.

Newton may now be most famous for inventing calculus and for Newton’s laws and Newtonian gravity, all of which sound like he was a pen on paper person. But he did some wild self-experiments that you can put to good use for your Christmas conversations. Merry Christmas, did you know that Newton once poked a needle into his eye? I think this will go really well.

Not a joke. In 1666, when he was 23, Newton, according to his own records, poked his eye with a bodkin, which is more or less a blunt stitching needle. In his own words “I took a bodkine and put it between my eye and the bone as near to the backside of my eye as I could: and pressing my eye with the end of it… there appeared several white dark and coloured circles.”

If this was not crazy enough, in the same year, he also stared at the Sun taking great care to first spend some time in a dark room so his pupils would be wide open when he stepped outside. Here is how he described this in a letter to John Locke 30 years later:
“in a few hours’ time I had brought my eyes to such a pass that I could look upon no bright object with either eye but I saw the sun before me, so that I could neither write nor read... I began in three or four days to have some use of my eyes again & by forbearing a few days longer to look upon bright objects recovered them pretty well.”
Don’t do this at home. Since we’re already talking about needles, did you know that pine needles are edible? Yes, they are edible and some people say they taste like vanilla, so you can make ice cream with them. Indeed, they are a good source of vitamin C and were once used by sailors to treat and prevent scurvy.

By some estimate, scurvy killed more than 2 million sailors between the 16th and 18th centuries. On a long trip it was common to lose about half of the crew, but in extreme cases it could be worse. On his first trip to India in 1499, Vasco da Gama reportedly lost 116 of 170 men, almost all to scurvy.

But in 1536, the crew of the French explorer Jacques Cartier was miraculously healed from scurvy upon arrival in what is now Québec. The miracle cure was a drink that the Iroquois prepared by boiling winter leaves and the bark from an evergreen tree, which was rich in vitamin C.

So, if you’ve run out of emphatic sounds to make in response to aunt Emma, just take a few bites off the Christmas tree, I’m sure that’ll lighten things up a bit.

Speaking of lights. Christmas lights were invented by none other than Thomas Edison. According to the Library of Congress, Edison created the first strand of electric lights in 1880, and he hung them outside his laboratory in New Jersey during Christmastime. Two years later, his business partner Edward Johnson had the idea to wrap a strand of hand-wired red, white, and blue bulbs around a Christmas tree. So maybe take a break from worshipping Newton and spare a thought for Edison.

But watch out when you put the lights on the tree. According to the United States Consumer Product Safety Commission, in 2018, 17,500 people sought treatment at a hospital for injuries sustained while decorating for the holiday.

And this isn’t the only health risk on Christmas. In 2004 researchers in the United States found that people are much more likely to die from heart problems than you expect both on Christmas and on New Year. A 2018 study from Sweden made a similar finding. The authors of the 2004 study speculate that the reason may be that people delay seeking treatment during the holidays. So if you feel unwell don’t put off seeing a doctor even if it’s Christmas.

And since we’re already handing out the cheerful news, couples are significantly more likely to break up in the weeks before Christmas. This finding comes from a 2008 paper by British researchers who analyzed Facebook status updates. Makes you wonder, do people break up because they can’t agree which day Newton was born, or do they just not want to see their in-laws? Let me know what you think in the comments.

Saturday, December 18, 2021

Does Superdeterminism save Quantum Mechanics? Or Does It Kill Free Will and Destroy Science?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Superdeterminism is a way to make sense of quantum mechanics. But some physicists and philosophers have argued that if one were to allow it, it would destroy science. Seriously. How does superdeterminism work, what is it good for, and why does it allegedly destroy science? That’s what we’ll talk about today.

First things first, what is superdeterminism? Above all, it’s a terrible nomenclature because it suggests something more deterministic than deterministic and how is that supposed to work? Well, that’s just not how it works. Superdeterminism is exactly as deterministic as plain old vanilla determinism. Think Newton’s laws. If you know the initial position and velocity of an arrow, you can calculate where it will land, at least in principle. That’s determinism: Everything that happens follows from what happened earlier. But in quantum mechanics we can only predict probabilities for measurement outcomes, rather than the measurement outcomes themselves. The outcomes are not determined, so quantum mechanics is indeterministic.

Superdeterminism returns us to determinism. According to superdeterminism, the reason we can’t predict the outcome of a quantum measurement is that we are missing information. This missing information is usually referred to as the “hidden variables”. I’ll tell you more about those later. But didn’t this guy what’s his name Bell prove that hidden variables are wrong?

No, he didn’t, though this is a very common misunderstanding, depressingly, even among physicists. Bell proved that a hidden variables theory which is (a) local and (b) fulfills an obscure assumption called “statistical independence” must obey an inequality, now called Bell’s inequality. We know experimentally that this inequality is violated. It follows that any local hidden variables theory which fits our observations has to violate statistical independence.
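
For completeness, in its most commonly tested form, the CHSH version, the inequality bounds a combination of correlations E between two measurement settings a, a' on one side and b, b' on the other:

\[ |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \leq 2 , \]

and quantum mechanics predicts, and experiments confirm, values up to 2\sqrt{2}. The derivation needs both locality and statistical independence, which is the point that matters here.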

If statistical independence is violated, this means that what a quantum particle does depends on what you measure. And that’s how superdeterminism works: what a quantum particle does depends on what you measure. I’ll give you an example in a moment. But first let me tell you where the name superdeterminism comes from and why physicists get so upset if you mention it.

Bell didn’t like the conclusion which followed from his own mathematics. Like so many before and after him, Bell wanted to prove Einstein wrong. If you remember, Einstein had said that quantum mechanics can’t be complete because it has a spooky action at a distance. That’s why Einstein thought quantum mechanics is just an average description for a hidden variables theory. Bell in contrast wanted physicists to accept this spooky action. So he had to somehow convince them that this weird extra assumption, statistical independence, makes sense. In a 1983 BBC interview he said the following:
“There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the “decision” by the experimenter to carry out one set of measurements rather than another, the difficulty disappears.”
This is where the word “superdeterminism” comes from. Bell called a violation of statistical independence “superdeterminism” and claimed that it would require giving up free will. He argued that there are only two options: either accept spooky action and keep free will which would mean that Bell was right, or reject spooky action but give up free will which would mean that Einstein was right. Bell won. Einstein lost.

Now you all know that I think free will is logically incoherent nonsense. But even if you don’t share my opinion, Bell’s argument just doesn’t work. Spooky action at a distance doesn’t make any difference for free will because the indeterministic processes in quantum mechanics are not influenced by anything, so they are not influenced by your “free will,” whatever that may be. And in any case, throwing out determinism just because you don’t like its consequences is really bad science.

Nevertheless, the mathematical assumption of “statistical independence” has since widely been called the “free will” assumption, or the “free choice” assumption. And physicists stopped questioning it to the point that today most of them don’t know that Bell’s theorem even requires this additional assumption.

This is not a joke. All the alleged strangeness of quantum mechanics has its origin in nomenclature. It was forced on us by physicists who called a mathematical statement the “free will assumption”, never mind that it’s got nothing to do with free will, and then argued that one must believe in it because one must believe in free will.

If you find this hard to believe, I can’t blame you, but let me read you a quote from a book by Nicolas Gisin, who is Professor for Physics in Geneva and works on quantum information theory.
“This hypothesis of superdeterminism hardly deserves mention and appears here only to illustrate the extent to which many physicists, even among specialists in quantum physics, are driven almost to despair by the true randomness and nonlocality of quantum physics. But for me, the situation is very clear: not only does free will exist, but it is a prerequisite for science, philosophy, and our very ability to think rationally in a meaningful way. Without free will, there could be no rational thought. As a consequence, it is quite simply impossible for science and philosophy to deny free will.”
Keep in mind that superdeterminism just means statistical independence is violated, which has nothing to do with free will. However, even leaving that aside, the fact is that the majority of philosophers either believe that free will is compatible with determinism, about 60% of them, or agree with me that free will doesn’t exist anyway, about 10% of them.

But in case you’re still not convinced that physicists actually bought Bell’s free will argument, here is another quote from a book by Anton Zeilinger, probably one of the most famous physicists alive. Zeilinger doesn’t use the word superdeterminism in his book, but it is clear from the context that he is justifying the assumption of statistical independence. He writes:
“[W]e always implicitly assume the freedom of the experimentalist. This is the assumption of free will… This fundamental assumption is essential to doing science.”
So he too bought Bell’s claim that you have to pick between spooky action and free will. At this point you must be wondering just what this scary mathematical expression is that supposedly eradicates free will. I am about to reveal it, brace yourself. Here we go.
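For readers of the transcript: the video shows this as an on-screen equation, so here is a transcript-friendly rendering, using the notation that comes up again further below (lambda for the hidden variables, rho for their probability distribution, a and b for the detector settings). Statistical independence is the assumption on the left; superdeterminism is its violation on the right:

$$\rho(\lambda\,|\,a,b) = \rho(\lambda) \quad \text{(statistical independence)}\,, \qquad \rho(\lambda\,|\,a,b) \neq \rho(\lambda) \quad \text{(superdeterminism)}\,.$$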

I assume you are shivering in fear of being robbed of your free will if one ever were to allow this. And not only would it rob you of free will, it would destroy science. Indeed, already in 1976, Shimony, Horne, and Clauser argued that doubting statistical independence must be verboten. They wrote: “skepticism of this sort will essentially dismiss all results of scientific experimentation”. And here is one final quote about superdeterminism from the philosopher Tim Maudlin: “besides being insane, [it] would undercut scientific method.”

As you can see, we have no shortage of men who have strong opinions about things they know very little about, but that’s hardly news. So now let me tell you how superdeterminism actually works, using the double slit experiment as an example.

In the double slit experiment, you send a coherent beam of light at a plate with two thin openings, that’s the double slit. On the screen behind the slit you then see an interference pattern. The interference isn’t in and of itself a quantum effect; you can do this with any type of wave, water waves or sound waves for example.

The quantum effects only become apparent when you let a single quantum of light go through the slits at a time. Each of those particles makes a dot on the screen. But the dots build up… to an interference pattern. What this tells you is that even single particles act like waves. This is why we describe them with wave-functions usually denoted psi. From the wave-function we can calculate the probability of measuring the particle in a particular place, but we can’t calculate the actual place.

Here’s the weird bit. If you measure which slit the particles go through, the interference pattern vanishes. Why? Well, remember that the wave-function – even that of a single particle – describes probabilities for measurement outcomes. In this case the wave-function would first tell you the particle goes through the left and right slit with 50% probability each. But once you measure the particle you know 100% where it is.
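To make this concrete, here is a minimal numerical sketch, not from the video and with made-up illustrative values for wavelength, slit separation and screen distance. It compares the pattern you get when the two slit amplitudes add coherently with the pattern you get when, after a which-slit measurement, only the probabilities add:

```python
import numpy as np

# Made-up illustrative parameters
wavelength = 500e-9   # 500 nm light
d = 20e-6             # slit separation: 20 micrometers
L = 1.0               # distance from slits to screen, in meters

x = np.linspace(-0.05, 0.05, 1001)            # positions on the screen (m)
phase = 2 * np.pi * d * x / (wavelength * L)  # relative phase of the two paths (small-angle)

# Amplitudes from the left and right slit, equal magnitude, relative phase
psi_left = np.exp(1j * phase / 2) / np.sqrt(2)
psi_right = np.exp(-1j * phase / 2) / np.sqrt(2)

# No which-slit measurement: amplitudes add first, then square -> fringes
p_no_measurement = np.abs(psi_left + psi_right) ** 2   # oscillates between 0 and 2

# Which-slit measured: probabilities add -> flat, no fringes in this toy model
p_which_slit = np.abs(psi_left) ** 2 + np.abs(psi_right) ** 2  # constant 1

print(p_no_measurement.min(), p_no_measurement.max())  # ~0.0 and ~2.0
print(p_which_slit.min(), p_which_slit.max())          # 1.0 and 1.0
```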

So when you measure at which slit the particle is you have to “update” the wave-function. And after that, there is nothing coming from the other slit to interfere with. You’ve destroyed the interference pattern by finding out what the wave did. This update of the wave-function is sometimes also called the collapse or the reduction of the wave-function. Different words, same thing.

The collapse of the wave-function doesn’t make sense as a physical process because it happens instantaneously, and that violates the speed of light limit. Somehow the part of the wave-function at the one slit needs to know that a measurement happened at the other slit. That’s Einstein’s “spooky action at a distance.”

Physicists commonly deal with this spooky action by denying that wave-function collapse is a physical process. Instead, they argue it’s just an update of information. But information about… what? In quantum mechanics there isn’t any further information beyond the wave-function. Interpreting the collapse as an information update really only makes sense in a hidden variables theory. In that case, a measurement tells you more about the possible values of the hidden variables.

Think about the hidden variables as labels for the possible paths that the particle could take. Say the labels 1, 2, 3 go to the left slit, the labels 4, 5, 6 go to the right slit, and the labels 7 to 12 go through both. The particle really has only one of those hidden variables, but we don’t know which. Then, if we measure the particle at the left slit, that simply tells us that the hidden variable was in the 1, 2, 3 batch; if we measure it at the right slit, it was in the 4, 5, 6 batch; if we measure it on the screen, it was in the 7 to 12 batch. No mystery, no instantaneous collapse, no non-locality. But it means that the particle’s path depends on what measurement will take place. Because the particle must have known already when it got on its way whether to pick one of the two slits, or go through both. This is just what observations tell us.
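Here is a tiny toy sketch of that labeling idea, with my own made-up numbers rather than anything from the actual superdeterminism papers. The hidden variable is just a label from 1 to 12, and which labels are possible depends on which measurement actually takes place:

```python
import random

# Toy hidden variables: labels for the possible paths of one particle
LEFT_ONLY  = [1, 2, 3]              # paths through the left slit only
RIGHT_ONLY = [4, 5, 6]              # paths through the right slit only
BOTH_SLITS = [7, 8, 9, 10, 11, 12]  # paths that go through both slits

def sample_hidden_variable(measurement):
    """The distribution of the hidden variable depends on which measurement
    will actually be performed -- that is the violation of statistical
    independence that defines superdeterminism in this toy picture."""
    if measurement == "which-slit":
        # A which-slit measurement only ever finds the particle at one slit
        return random.choice(LEFT_ONLY + RIGHT_ONLY)
    if measurement == "screen-only":
        # Without a which-slit measurement, the path goes through both slits
        return random.choice(BOTH_SLITS)
    raise ValueError(f"unknown measurement: {measurement}")

print(sample_hidden_variable("which-slit"))   # one of 1..6
print(sample_hidden_variable("screen-only"))  # one of 7..12
```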

And that’s what superdeterminism is. It takes our observations seriously. What the quantum particle does depends on what measurement will take place. Now you may say, uhm, drawing lines on YouTube isn’t proper science, and I would agree. If you’d rather see equations, you’re most welcome to look at my papers instead.

Let us then connect this with what Bell and Zeilinger were talking about. Here is again the condition that statistical independence is violated. The lambda here stands for the hidden variables, and rho is the probability distribution of the hidden variables. This distribution tells you how likely it is that the quantum particle will do any one particular thing. In Bell’s theorem, a and b are the measurement settings of two different detectors at the time of measurement. And this bar here means you’re looking at a conditional probability, so that’s the probability for lambda given a particular combination of settings. When statistical independence is violated, this means that the probability for a quantum particle to do a particular thing depends on the detector settings at the time of measurement.

Since this is a point that people often get confused about, let me stress that it doesn’t matter what the setting is at any earlier or later time. This never appears in Bell’s theorem. You only need to know what’s the measurement that actually happens. It also doesn’t matter how one chooses the detector settings, that never makes any appearance either. And contrary to what Bell and Zeilinger argued, this relation does not restrict the freedom of the experimentalist. Why would it? The experimentalist can measure whatever they like, it’s just that what the particle does depends on what they measure.

And of course this won’t affect the scientific method. What these people were worrying about is that random control trials would be impossible if choosing a control group could depend on what you later measure.

Suppose you randomly assign people into two groups to test whether a vaccine is effective. People in one group get the vaccine, people in the other group a placebo. The group assignment is the “hidden variable.” If someone falls ill, you do a series of tests to find out what they have, so that’s the measurement. If you think that what happens to people depends on what measurement you will do on them, then you can’t draw conclusions about the efficacy of the vaccine. Alrighty. But you know what, people aren’t quantum particles. And believing that superdeterminism plays a role for vaccine trials is like believing Schrödinger’s cat is really dead and alive.

The correlation between the detector settings and the behavior of a quantum particle, which is the hallmark of superdeterminism, only occurs when quantum mechanics would predict a non-local collapse of the wave-function. Remember, that’s what we need superdeterminism for: that there is no spooky action at a distance. But once you have measured the quantum state, that’s the end of those violations of statistical independence.

I should probably add that a “measurement” in quantum mechanics doesn’t actually require a measurement device. What we call a measurement in quantum mechanics is really any sufficiently strong or frequent interaction with an environment. That’s why we don’t see dead and alive cats. Because there’s always some environment, like air, or the cosmic microwave background. And that’s also why we don’t see superdeterministic correlations in people.

Okay, so I hope I’ve convinced you that superdeterminism doesn’t limit anyone’s free will and doesn’t kill science, now let’s see what it’s good for.

Once you understand what’s going on with the double slit, all the other quantum effects that are allegedly mysterious or strange also make sense. Take for example a delayed choice experiment. In such an experiment, it’s only after the particle started its path that you decide whether to measure which slit it went through. And that gives the same result as the usual double slit experiment.

Well, that’s entirely unsurprising. If you considered measuring something but eventually didn’t, that’s just irrelevant. The only relevant thing is what you actually measure. The path of the particle has to be consistent with that. Or take the bomb experiment that I talked about earlier. Totally unsurprising, the photon’s path just depends on what you measure. Or the quantum eraser. Of course the path of the particle depends on what you measure. That’s exactly what superdeterminism tells you!

So, in my eyes, all those experiments have been screaming in our faces for half a century that what a quantum particle does depends on the measurement setting, and that’s superdeterminism. The good thing about superdeterminism is that since it’s local it can easily be combined with general relativity, so it can help us find a theory of quantum gravity.

Let me finally talk about something less abstract, namely how one can test it. You can’t test superdeterminism by measuring violations of Bell’s inequality because it doesn’t fulfil the assumptions of Bell’s theorem, so it doesn’t have to obey the inequality. But superdeterminism generically predicts that measurement outcomes in quantum mechanics are actually determined, and not random.

Now, any theory that solves the measurement problem has to be non-linear, so the reason we haven’t noticed superdeterminism is almost certainly that all our measurements so far have been well in the chaotic regime. In that case trying to make a prediction for a measurement outcome is like trying to make a weather forecast for next year. The best you can do is calculate average values. That’s what quantum mechanics gives us.

But if you want to find out whether measurement outcomes are actually determined, you have to get out of the chaotic regime. This means looking at small systems at low temperatures and measurements in a short sequence, ideally on the same particle. Those measurements are currently just not being done. However, there is a huge amount of progress in quantum technologies at the moment, especially in combination with AI which is really good for finding new patterns. And this makes me think that at some point it’ll just become obvious that measurement outcomes are actually much more predictable than quantum mechanics says. Indeed, maybe someone already has the data, they just haven’t analyzed it the right way.

I know it’s somewhat boring coming from a German but I think Einstein was right about quantum mechanics. Call me crazy if you want but to me it’s obvious that superdeterminism is the correct explanation for our observations. I just hope I’ll live long enough to see that all those men who said otherwise will be really embarrassed.

Thursday, December 16, 2021

Public Event in Canada coming up in April

Yes, I have taken traveling back up and optimistically agreed to a public event in Vancouver on April 14, together with Lawrence Krauss and Chris Hadfield. If you're in the area, it would be lovely to see you there! Don't miss the trailer video.


Tickets will be on sale from Jan 1st on this website.

Saturday, December 11, 2021

Is the Hyperloop just Hype?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


A few weeks ago I talked about hypersonic flight and why that doesn’t make sense to me. A lot of you asked what’s with Elon Musk’s hyperloop. Does it make any more sense to push high speed trains through vacuum tubes? Can we maybe replace flights with hyperloops? And what’s a hyperloop in the first place? That’s what we’ll talk about today.

As I told you in my previous video, several companies have serious plans to build airplanes that fly more than five times the speed of sound. But physics gets in the way. At such high speed, air resistance rises rapidly. Even if you manage to prevent the plane from melting or simply falling into pieces, you still need a lot of fuel to counter the pressure of the atmosphere. You could instead try flying so high up that the atmosphere is incredibly thin. But you have to get there in the first place, and that too consumes a lot of fuel.

So why don’t we instead build airtight tubes, pump as much air out of them as possible, and then accelerate passenger capsules inside until they exceed the speed of sound? That’s the idea of the “hyperloop” which Elon Musk would like to see become reality. He is a busy man, however, so he made his take on the idea available open source and hopes someone else does it.

The idea of transporting things by pushing them through tubes isn’t exactly new. It’s been used since the eighteenth century to transport small goods and mail. You still find those tube systems today in old office buildings or in hospitals.

In 1908, Joseph Stoetzel, an inventor from Chicago, sent his own child through such a tube to prove it was safe. Yeah, I’m not sure what ethics committees would say about that today.

The idea to create vacuum in a tube and put a train inside is also not new. It was proposed already in 1904 by the engineer and physicist Robert Goddard, who called it the “vactrain”.

The quality of a vacuum can be measured either in pressure or in percent. A zero percent vacuum is no vacuum, so just standard atmospheric pressure. A one hundred percent vacuum would be no air at all. An interest group in Switzerland has outlined a plan to build a network of high speed trains that would use tunnels with a 93 percent vacuum in which trains could reach about 430 kilometers per hour. That’s about 270 miles per hour if you’re American or about one point four times 10 to the 9 hands per fortnight if you’re British.
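In case you want to check that last conversion, here is the arithmetic spelled out (a hand is 4 inches, a fortnight is 14 days):

```python
# Convert 430 km/h into miles per hour and into hands per fortnight
speed_kmh = 430

mph = speed_kmh / 1.609344                  # 1 mile = 1.609344 km
print(round(mph))                           # ~267, i.e. "about 270 mph"

meters_per_hand = 0.1016                    # 1 hand = 4 inches = 10.16 cm
hours_per_fortnight = 14 * 24               # 14 days
hands_per_fortnight = speed_kmh * 1000 / meters_per_hand * hours_per_fortnight
print(f"{hands_per_fortnight:.1e}")         # ~1.4e+09 hands per fortnight
```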

It doesn’t look like the Swiss plan has much of a chance to become reality, but about 10 years ago Elon Musk put forward his plan for the hyperloop. Its first version should reach about 790 miles per hour, which is just barely above the speed of sound. But you should think of it as a proof of concept. If it works for reaching the speed of sound, you can probably go above that too. Once you’ve removed the air, speed is really far less of a problem.

Hyperloop is not the name of a company, but the name for the conceptual idea, so there are now a number of different companies trying to turn the idea into reality. The first test for the hyperloop with passengers took place last year with technology from the company Virgin Hyperloop. But there are other companies working on it too, for example Hyperloop Transportation Technologies which is based in California, or TransPod which is based in Canada.

Why the thing’s called the hyperloop to begin with is somewhat of a mystery, probably not because it’s hype going around in a loop. More likely because it should one day reach hypersonic speeds and go in a loop, maybe around the entire planet. Who knows.

Elon did his first research on the hyperloop using a well-known rich-man’s recipe: let others do the job for free. From 2015 to 2019 Musk’s company Space X sponsored a competition in which teams presented their prototypes to be tested in a one kilometer tube. All competitions were won by the Technical University of Munich, and their design served as the basis for further developments.

So what are the details of the hyperloop? In 2013 Elon Musk published a white paper called “Hyperloop Alpha” in which he proposed that the capsules would carry 28 passengers each through a 99.9 percent vacuum, that’s about 100 Pascal, and they would be levitated by air-cushions. The idea was that you suck in the remaining air in the tunnel from the front of the capsule and blow it out at the bottom.

This sounds good at first, but that’s where the technical problems begin. If you crunch the numbers, then the gap which the air-cushion creates between the bottom of the capsule and the tube is about one millimeter. This means if there’s any bump or wiggle or two people stand up to go to the loo at the same time, the thing’s going to run into the ground. That’s not good. This is why the companies working on the hyperloop have abandoned the air cushion idea and instead go for magnetic levitation. The best way to achieve the strong fields necessary for magnetic levitation is to use superconducting magnets.

The downside is that they need to be cooled with expensive cryogenic systems, but magnetic levitation can create a gap of about ten centimeters between the passenger capsule and the tunnel which should be enough to comfortably float over all bumps and wiggles.

But there are lots of other technical problems to solve and they’re all interconnected. This figure from Virgin Hyperloop explains it all in one simple diagram. Just in case that didn’t explain it, let me mention some of the biggest problems.

First, you need to maintain the vacuum in the tube, possibly over hundreds of kilometers, and the tube needs to have exits, both regular ones at the stations and emergency exits in between. If you put the tube in a tunnel, you have to cope with geological stress. But putting the tube on pillars over ground has its own problems.

A group of researchers from the UK showed last year that at such high speeds as the hyperloop is supposed to go, the risk of resonance catastrophes significantly increases. In a nutshell this means that the pillars would have to be much stronger than usual and have extra vibration dampers.

The other problem with putting the tube over ground is that temperature changes will create stress on the tube by expansion and contraction. That’s a bigger problem than you may expect because the vacuum slows down the equilibration of temperature changes in the tube. Since temperature changes tend to be larger over ground, digging a tunnel seems the way to go. Unfortunately, digging tunnels is really expensive, so there’s a lot of upfront investment.

This brings me to the second problem. To keep costs low you want to keep the tunnel small, but if the space between the capsule and the tunnel wall is too small you can’t reach high speeds despite near vacuum.

The issue is that even though the air pressure is so low, there’s still air in that tunnel which needs to go around the capsule. If the air can’t go around the capsule, it’ll be pushed ahead of the capsule, limiting its speed. This is known as the Kantrowitz limit. Exactly when this happens is difficult to calculate because the capsules trigger acoustic waves that go back and forth through the tunnels.
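As a rough illustration of why this matters, here is a back-of-the-envelope estimate, a textbook isentropic choking calculation rather than anything from the hyperloop companies (the full Kantrowitz limit also accounts for a normal shock). It estimates how much of the tube’s cross-section the capsule may block before the residual air can no longer squeeze past it:

```python
# How much of the tube can a capsule block before the bypass flow chokes?
# Simple isentropic estimate: the gap around the capsule must be at least
# the sonic throat area A* for the oncoming flow at Mach number M.
gamma = 1.4  # ratio of specific heats for air

def area_ratio(mach):
    """Isentropic area-Mach relation A/A* for a given Mach number."""
    exponent = (gamma + 1) / (2 * (gamma - 1))
    return (1 / mach) * ((2 / (gamma + 1)) * (1 + (gamma - 1) / 2 * mach**2)) ** exponent

for mach in (0.5, 0.8, 0.99):
    max_blockage = 1 - 1 / area_ratio(mach)  # fraction of the tube the capsule may fill
    print(f"Mach {mach}: capsule may block at most {max_blockage:.0%} of the tube")
# Output: roughly 25% at Mach 0.5, 4% at Mach 0.8, and almost nothing near Mach 1.
```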

The third problem is that you don’t want the passengers to stick flat to the walls each time the capsule changes direction. But the forces coming from the change of direction increase with the square of the velocity. They also go down inversely with the increase of the radius of curvature though. The radius of curvature is loosely speaking the radius of a circle you can match to a stretch of a curve, in this case to a stretch of the hyperloop track. To keep the acceleration inside the capsule manageable, if you double the speed you have to increase the radius of curvature by a factor of four. This means basically that the hyperloop has to go in almost perfectly straight lines, or slow down dramatically to change direction.
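To put rough numbers on that, here is the centripetal-acceleration estimate, r = v²/a, turned around to give the required curve radius; the comfort limit of 0.1 g used here is an illustrative choice, not an official design value:

```python
# Required radius of curvature r = v^2 / a for a given lateral acceleration
g = 9.81  # m/s^2

def required_radius_km(speed_kmh, max_lateral_g=0.1):
    """Curve radius needed to keep lateral acceleration below max_lateral_g."""
    v = speed_kmh / 3.6                       # km/h -> m/s
    return v**2 / (max_lateral_g * g) / 1000  # radius in kilometers

for speed in (300, 600, 1200):
    print(f"{speed} km/h -> curve radius of about {required_radius_km(speed):.0f} km")
# Doubling the speed quadruples the required radius: ~7 km, ~28 km, ~113 km.
```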

And this brings me to the fourth problem. The thing shakes, it shakes a lot, and it’s not clear how to solve the problem. Take a look to the footage of the Virgin Hyperloop test and pay attention to the vibration.

It’s noticeable, but you may say it’s not too bad. Then again, they reached a velocity of merely 100 miles per hour. Passengers may be willing to accept the risk of dying from leaks in a capsule surrounded by near vacuum. But only as long as they’re comfortable before they die. I don’t think they’ll accept having their teeth shaken out along the way.

So the hyperloop is without doubt facing a lot of engineering challenges that will take time to sort out. However, I don’t really see a physical obstacle to making the hyperloop economically viable in the long run. Also, in the short run it doesn’t even have to be profitable. Some governments may want to build one just to show off their technological capital. Indeed, small scale hyperloops are planned for the near future in China, Abu Dhabi and India, though none of those will reach the speed of sound, and they’re basically just magnetically levitated trains in tubes.

What do governments think? In 2017, the Science Advisory Council of the Department of Transport in the UK looked at Musk’s 2013 white paper. They concluded that “because of the scale of the technical challenges involved, an operational Hyperloop system is likely to be at least a couple of decades away.” A few months ago they reasserted this position and stated that they still favor high speed rail. To me this assessment sounds reasonable for the time being.

In summary, the hyperloop isn’t just hype, it may one day become a real alternative to airplanes. But it’s probably not going to happen in the next two decades.

Saturday, December 04, 2021

Where is the antimatter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Is it ant-ee-matter or ant-ai-matter? What do ants think about it, and why isn’t aunt pronounced aunt? These are all good questions that we’ll not talk about today. Instead I want to talk about why the universe contains so little antimatter, why that’s not a good question, and if there might be stars made of antimatter in our galaxy. Welcome to another episode of science without the gobbledygook.

Last year, I took part in a panel debate on the question why there’s so little anti-matter in the universe. It was organized by the Institute of Art and Ideas in the UK and I was on the panel together with Lee Smolin from Perimeter Institute and Tara Shears who’s a professor for particle physics in Liverpool. This debate was introduced with the following description:

“Antimatter has fascinated since it was proposed by Dirac in the 1920s and confirmed with the discovery of the positron a few years later. Heisenberg - the father of modern physics - referred to its discovery as “the biggest jumps of all the big jumps in physics”. But there’s a fundamental problem. The theory predicts the disappearance of the universe within moments of its inception as matter and antimatter destroy each other in a huge cataclysm.”

Unfortunately, that’s wrong, and I don’t just mean that Heisenberg wasn’t the father of modern physics, I mean it’s wrong that Dirac’s theory predicts the universe shouldn’t exist. I mean, if it did, it would have been falsified.

When I did the debate I found this misunderstanding a little odd… but I have since noticed that it’s far more common than I realized. And it’s yet another one of those convenient misunderstandings that physicists don’t make a lot of effort clearing up. In this case it’s convenient because they want you to believe that there is this big mystery and to solve it, we need some expensive experiments.

So let’s have a look at what Dirac actually discovered. Dirac was bothered by the early versions of quantum mechanics because they were incompatible with Einstein’s theory of special relativity. In a nutshell this is because, if you just quickly look at the equation which they used at the time, you have one time derivative but two space derivatives. So space and time are not treated the same way which, according to Einstein, they should be.

Dirac found a way to remedy this problem, and his remedy is what’s now called the Dirac equation. You can see right away that it treats space and time derivatives the same way. And so it’s neatly compatible with Einstein’s Special Relativity.
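For those reading the transcript without the video, the two equations being compared are, schematically and in units where ħ = c = 1, the free Schrödinger equation with its single time derivative but two space derivatives, and the Dirac equation, which is first order in both space and time:

$$ i\,\frac{\partial \psi}{\partial t} = -\frac{1}{2m}\,\nabla^2 \psi \qquad \text{versus} \qquad \left(i\gamma^\mu \partial_\mu - m\right)\psi = 0\,. $$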

But. Here’s the thing. He also found that the solutions to this equation always come in pairs. And, after some back and forth, it turned out that those pairs are particles of the same mass but of opposite electric charge. So, every particle has a partner particle with opposite charge, which is called its “anti-particle”. Though in some cases the particle and anti-particle are identical. That’s the case for example for the photon, which has no electric charge.

The first anti-particle was detected only a few years after Dirac’s derivation. In my mind this is one of the most impressive predictions in the history of science. Dirac solved a mathematical problem and from that he correctly predicted a new type of particle. But what Dirac’s equation tells you is really just that those particles exist. It tells you nothing about how many of them are around in the universe. Dirac’s equation doesn’t say any more about the amount of anti-matter in the universe than Newton’s law of gravity tells you about the number of apples on earth.

The number of particles of any kind in the universe is an initial condition, which means you have to specify this number at some moment in time, usually early in the universe, and then you can use Dirac’s and other equations to calculate what happens later. This means that the amount of particles is just a number that you must enter into the model for the universe. This number can’t be calculated, so one just extracts it from observations. It can’t be calculated because all our current theories work with differential equations. And those equations need the initial conditions to work. The only way you can explain an initial condition is with an even earlier initial condition. You’ll never get rid of it. I talked more about differential equations in an earlier video, so check this out for more.
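A minimal sketch of this point with a made-up evolution law: the differential equation fixes how a number changes in time, but the number itself at any moment depends entirely on the initial value you feed in.

```python
from scipy.integrate import solve_ivp

# A made-up evolution law: dN/dt = -0.1 * N.  The dynamics is fixed...
def dN_dt(t, N):
    return -0.1 * N

# ...but the amount at any later time depends on the initial condition N(0)
for N0 in (1.0, 2.0, 1e10):
    sol = solve_ivp(dN_dt, t_span=(0, 10), y0=[N0])
    print(f"N(0) = {N0:g}  ->  N(10) = {sol.y[0, -1]:.3g}")
# The equation alone never tells you which N(0) to pick; observations do.
```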

The supposed problem with the amount of antimatter is often called the matter anti-matter asymmetry or the baryon asymmetry problem. Those terms refer to the same issue. The argument is that if matter and anti-matter had been present in the early universe in exactly the same amounts, then they’d have annihilated and just left behind a lot of radiation. So how come we have all this stuff around?

Well, was it maybe because there wasn’t an equal amount of matter and anti-matter in the early universe? Indeed, that solves the problem.

Case closed? Of course not. Because physicists make a living from solving problems, so they have an incentive to create problems where there aren’t any. For anti-matter this works as follows. You can calculate that to correctly obtain the amount of radiation and matter we see today, the early universe must have contained just a tiny little bit more matter than anti-matter. A tiny little bit means a ratio of about 1.0000000001.

If it had been exactly one, there’d be only radiation left. But it wasn’t exactly one, so today there’s us.

Particle physicists now claim that the ratio should have been 1 exactly. That’s because for some reason they believe that this number is somehow better than the number which actually describes our observations. Why? I don’t know. Remember that none of our theories can actually predict this number one way or another. But once you insist that the ratio was actually one, you have to come up with a mechanism for how it ended up not being one. And then you can publish papers with all kinds of complicated solutions to the problem which you just created.

To see why I say this is a fabricated problem, let us imagine for a moment that if the matter anti-matter ratio was 1 exactly that would actually describe our universe. It doesn’t, of course, but just for the sake of the argument imagine the theory was such that 1 was indeed compatible with observations. Remember that this is the value that physicists argue the number should have. What would they say if it was actually correct? They would probably remember that Dirac’s theory actually did not predict that this number must have been exactly one. So then they’d ask why it is equal to one, just like they now ask why it’s 1.0000000001. As I said, it doesn’t matter what the number is, we can’t explain it one way or another.

You sometimes hear particle physicists claim that you can shed light on this alleged problem with particle colliders. They say this because you can use particle colliders to test for certain interactions that would shift the matter anti-matter ratio. However, these shifts are too small to bring us from 1 exactly to the observed ratio. This means not only is there no problem to begin with, even if you think there is a problem, particle colliders won’t solve it.

The brief summary is that the matter antimatter asymmetry is a pseudo-problem. It can be solved by using an initial value that agrees with observations, and that’s that. Of course it would be nice to have a deeper explanation for that initial value. But within the framework of the theories that we currently have, such an explanation is not possible. You always have to choose an initial state, and you do that just because it explains what we observe. If a physicist tries to tell you otherwise, ask them where they get their initial state from.

You may now wonder though how well we actually know how much anti-matter there is in the universe. If Dirac’s theory doesn’t predict how much it is, maybe we’re underestimating how much there is? Indeed, it isn’t entirely impossible that big chunks of antimatter float around somewhere in the universe. Weirder still, if you remember, anti-matter is identical to normal matter except for its electric charge.

So for all we know you can make stars and planets out of anti-matter, and they would work exactly like ours. Such “anti-stars” could survive in the present universe for quite a long time because there is very little matter in outer space, so they would annihilate only very slowly. But when the particles floating around in outer space come in contact with such an anti-star, that would create an unusual glow.

Astrophysicists can and have looked for such a glow around stars that might indicate the stars are made of antimatter. Earlier this year, a group of researchers from Toulouse in France analyzed data from the Fermi telescope. They identified fourteen candidates for anti-stars in our galactic neighborhood which they are now investigating more closely. They also used this to put a bound on the overall fraction of anti-stars, which is about 2 per million in galactic environments similar to ours.

While such anti-stars could in principle exist, it’s very difficult to understand how they would have escaped annihilation during the formation of our galaxy. So it is a very speculative idea which is a polite way of saying I think it’s nonsense. But, well, when Dirac predicted anti-matter his colleagues also thought that was nonsense, so let’s wait and see what further observations show.

Saturday, November 27, 2021

Does Anti-Gravity Explain Dark Energy?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


One of the lesser known facts about me is that I’m one of the few world experts on anti-gravity. That’s because 20 years ago I was convinced that repulsive gravity could explain some of the puzzling observations astrophysicists have made which they normally attribute to dark matter and dark energy. In today’s video I’ll tell you why that didn’t work, what I learned from that, and also why anti-matter doesn’t fall up.

Newton’s law of gravity says that the gravitational force between two masses is the product of the masses, divided by the square of the distance between them. And then there’s a constant that tells you how strong the force is. For the electric force between two charges, we have Coulomb’s law, that says the force is the product of the charges, divided by the square of the distance between them. And again there’s a constant that tells you how strong the force is.
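Written out, the two force laws being compared are, with G the gravitational constant and k the Coulomb constant:

$$ F_{\text{gravity}} = G\,\frac{m_1 m_2}{r^2}\,, \qquad F_{\text{Coulomb}} = k\,\frac{q_1 q_2}{r^2}\,. $$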

These two force laws look pretty much the same. But the electric force can be both repulsive and attractive, depending on whether you have two negative or two positive charges, or a positive and a negative one. The gravitational force, on the other hand, is always attractive because we don’t have any negative masses. But why not?

Well, we’ve never seen anything fall up, right? Then again, if there was any anti-gravitating matter, it would be repelled by our planet. So maybe it’s not so surprising that we don’t see any anti-gravitating matter here. But it could be out there somewhere. Why aren’t physicists looking for it?

One argument that you may have heard physicists bring up is that negative masses can’t exist because that would make the vacuum decay. That’s because, if negative masses exist, then so do negative energies. Because, E equals m c squared and so on. Yes, that guy again.

And if we had negative energies, then you could create pairs of particles with negative and positive energy from nothing and particle pairs would spontaneously pop up all around us. A theory with negative masses would therefore predict that the universe doesn’t exist, which is in conflict with evidence. I’ve heard that argument many times. Unfortunately it doesn’t work.

This argument doesn’t work because it confuses two different types of mass. If you remember, Einstein’s theory of general relativity is based on the Equivalence Principle, that’s the idea that gravitational mass equals inertial mass. The gravitational mass is the mass that appears in the law of gravity. The inertial mass is the mass that resists acceleration. But if we had anti-gravitating matter, only its gravitational mass would be negative. The inertial mass always remains positive. And since the energy-equivalent of inertial mass is as usual conserved, you can’t make gravitating and anti-gravitating particles out of nothing.

Some physicists may argue that you can’t make anti-gravity compatible with general relativity because particles in Einstein’s theory will always obey the equivalence principle. But this is wrong. Of course you can’t do it in general relativity as it is. But I wrote a paper many years ago in which I show how general relativity can be extended to include anti-gravitating matter, so that the equivalence principle only holds up to a sign. That means, gravitational mass is either plus or minus inertial mass. So, in theory that’s possible. The real problem is, well, we don’t see any anti-gravitating matter.
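In symbols, the modified equivalence principle described here amounts to allowing the gravitational mass to be plus or minus the inertial mass, while the inertial mass stays positive:

$$ m_{\text{grav}} = \pm\, m_{\text{inertial}}\,, \qquad m_{\text{inertial}} > 0\,. $$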

Could it be that anti-matter anti-gravitates? Anti-matter is made of anti-particles. Anti-particles are particles which have the opposite electric charge to normal particles. The anti-particle of an electron, for example, is the same as the electron just with a positive electric charge. It’s called the positron. We don’t normally see anti-particles around us because they annihilate when they come in contact with normal matter. Then they disappear and leave behind a flash of light or, in other words, a bunch of photons. And it’s difficult to avoid contact with normal matter on a planet made of normal matter. This is why we observe anti-matter only in cosmic radiation or if it’s created in particle colliders.

But if there is so little anti-matter around us and it lasts only for such short amounts of time, how do we know it falls down and not up? We know this because both matter and anti-matter particles hold together the quarks that make up neutrons and protons.

Inside a neutron and proton there aren’t just three quarks. There’s really a soup of particles that holds the quarks together, and some of the particles in the soup are anti-particles. Why don’t those anti-particles annihilate? They do. They are created and annihilate all the time. We therefore call them “virtual particles.” But they still make a substantial contribution to the gravitational mass of neutrons and protons. That means, crazy as it sounds, the masses of anti-particles make a contribution to the total mass of everything around us. So, if anti-matter had a negative gravitational mass, the equivalence principle would be violated. It isn’t. This is why we know anti-matter doesn’t anti-gravitate.

But that’s just theory, you may say. Maybe it’s possible to find another theory in which anti-particles only anti-gravitate sometimes, so that the masses of neutrons and protons aren’t affected. I don’t know any way to do this consistently, but even so, three experiments at CERN are measuring the gravitational behavior of anti-matter.

Those experiments have been running for several years but so far the results are not very restrictive. The ALPHA experiment has ruled out that anti-particles have anti-gravitating masses, but only if the absolute value of the mass is much larger than the mass of the corresponding normal particle. This means so far they ruled out something one wouldn’t expect in the first place. However, give it a few more years and they’ll get there. I don’t expect surprises from this experiment. That’s not to say that I think it shouldn’t be done. Just that I think the theoretical arguments for why anti-matter can’t anti-gravitate are solid.

Okay, so anti-matter almost certainly doesn’t anti-gravitate. But maybe there’s another type of matter out there, something new entirely, and that anti-gravitates. If that was the case, how would it behave? For example, if anti-gravitating matter repels normal matter, then does it also repel among itself, like electrons repel among themselves? Or does it attract its own type?

This question, interestingly enough, is pretty easy to answer with a little maths. Forces are mediated by fields and those fields have a spin which is a positive integer, so, 0, 1, 2, etc.

For gravity, the gravitational mass plays the role of a charge. And the force between two charges is always proportional to the product of those charges times minus one to the power of the spin.

For a spin zero field, the force is attractive between like charges. But electromagnetism is mediated by a spin-1 field, that’s electromagnetic radiation or photons if you quantize it. And this is why, for electromagnetism, the force between like charges is repulsive but unlike charges attract. Gravity is mediated by a spin-2 field, that’s gravitational radiation or gravitons if you quantize it. And so for gravity it’s just the other way round again. Like charges attract and unlike charges repel. Keep in mind that for gravity the charge is the gravitational mass.
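Written as a formula, with the sign convention that a positive force means attraction, s the spin of the mediating field, and the q’s the charges (which for gravity are the gravitational masses), the rule described here is:

$$ F \;\propto\; (-1)^{s}\,\frac{q_1 q_2}{r^2}\,, \qquad s=0,2:\ \text{like charges attract}, \quad s=1:\ \text{like charges repel}\,. $$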

This means, if there is anti-gravitating matter it would be repelled by the stuff we are made of, but clump among itself. Indeed, it could form planets and galaxies just like ours. The only way we would know about it is its gravitational effect. That sounds kind of like dark matter and dark energy, right?

Indeed, that’s why I thought it would be interesting. Because I had this idea that anti-gravitating matter could surround normal galaxies and push in on them. Which would create an additional force that looks much like dark matter. Normally the excess force we observe is believed to be caused by more positive mass inside and around the galaxies. But aren’t those situations very similar? More positive mass inside, or negative mass outside pushing in? And if you remember, the important thing about dark energy is that it has negative pressure. Certainly if you have negative energy you can also get negative pressure somehow.

So using anti-gravitating matter to explain dark matter and dark energy sounds good at first sight. But at second sight neither of those ideas work. The idea that galaxies would be surrounded by anti-gravitating matter doesn’t work because such an arrangement would be dramatically unstable. Remember the anti-gravitating stuff wants to clump just like normal matter. It wouldn’t enclose galaxies of normal matter, it would just form its own galaxies. So getting anti-gravity to explain dark matter doesn’t work even for galaxies, and that’s leaving aside all the other evidence for dark matter.

And dark energy? Well, the reason that dark energy makes the expansion of the universe speed up is actually NOT that it has negative pressure. It’s that the ratio of the energy density over the pressure is negative. And for anti-gravitating matter, they both turn negative so that the ratio is the same. Contrary to what you expect, that does not speed up the expansion of the universe.

Another way to see this is by noting that anti-gravitating matter is still matter and behaves like matter. Dark energy on the contrary does not behave like matter, regardless of what type of matter. This is why I get a little annoyed when people claim that dark energy is kind of like anti-gravity. It isn’t.

So in the end I developed this beautiful theory with a new symmetry between gravity and anti-gravity. And it turned out to be entirely useless. What did I learn from this? Well, the fact that I wasted a considerable amount of my time on this was one of the reasons I began thinking about more promising ways to develop new theories. Clearly just guessing something because it’s pretty is not a good strategy. In the end, I wrote an entire book about this. Today I try to listen to my own advice, at least some of the time. I don’t always listen to myself, but sometimes it’s worth the effort.