
Sunday, January 30, 2022

What may the new James Webb telescope discover?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


The James Webb Space Telescope has finally launched. And that’s exciting not just for astrophysicists, but for all space lovers. What makes the Webb telescope new and different? What is it looking for and what may it discover? That’s what we will talk about today.

The James Webb Space Telescope is a joint project between NASA, ESA, and the Canadian Space Agency. I’ve heard astrophysicists going on about it since I was a student.

In 1999, the launch of the Webb telescope was scheduled for 2007, but a redesign in 2003 pushed the date to 2010. In 2005 it turned out that the cost estimate didn’t pan out. The whole thing was replanned and the launch date moved to 2013. And then to 2015. And then to 2018. And then some further delays made that 2020. And then COVID happened. The telescope was finally launched on Christmas Day 2021, after more than three decades of planning.

Now everyone hopes the wait was worth it. The James Webb telescope is an infrared telescope. It’s equipped with four different instruments that can detect light in the wavelength regime from zero point six to 28 micrometers. So that starts at the long wave-length end of visible light and extends from there into the infrared.

The previous infrared telescope was the Spitzer Space Telescope. But it required liquid helium for cooling, and that ran out in 2009. After that, the Spitzer telescope operated with reduced functionality until it was retired in 2020. The James Webb telescope will have a better resolution than Spitzer. Here is a simulation of how much better: on the left is the Spitzer resolution, on the right what we expect from Webb in comparison. Indeed, a group of astrophysicists has done a simulation of a far-field image from Webb into which you can zoom and zoom and zoom.

What’s so great about infrared light? Well, each wave-length range is good for something else. Infrared light in particular is good to see through dust. And space is full of dust. Dust is made of small particles, and often they are of a size that’s about the same as the wave-length of visible light. This means visible light scatters a lot on dust. Infrared light is scattered far less because of the longer wave-length, so one can use it to see through the dust.

This is interesting for example because a lot of galaxies or galaxy clusters are surrounded by dust so we don’t really know what’s going on inside. Here are example images from the Hubble space telescope, on the left, in the visible range. And from the Spitzer infrared telescope on the right. Look how the dust has basically disappeared. Now you can see inside. The James Webb telescope can do that too, but at higher resolution. And compared to Hubble, Webb will have a larger field of view covering more than 15 times the area that Hubble covers.

Another reason why infrared is great is that as the universe expands, wavelengths stretch. So the light from early stars and galaxies becomes shifted to the red. This means, the better you can measure on the red end, the more you can see of the early stars and galaxies.
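
To get a sense of the numbers: the stretching works as observed wavelength = (1 + z) times emitted wavelength, where z is the redshift. Here is a minimal sketch in Python; the use of the hydrogen Lyman-alpha line and the redshift values are illustrative choices on my part, not from the video.

```python
# Toy redshift calculation: lambda_observed = (1 + z) * lambda_emitted.
# The Lyman-alpha line of hydrogen sits at about 121.6 nanometers.

def observed_wavelength_nm(emitted_nm, z):
    """Wavelength after cosmological redshift z."""
    return (1.0 + z) * emitted_nm

lyman_alpha_nm = 121.6
for z in (7, 10, 15):
    obs = observed_wavelength_nm(lyman_alpha_nm, z)
    print(f"z = {z:2d}: {lyman_alpha_nm} nm arrives at {obs / 1000:.2f} micrometers")

# At z = 10 the line lands near 1.3 micrometers, well inside Webb's
# 0.6 to 28 micrometer range but beyond what optical telescopes see best.
```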

That Webb is an infrared telescope is also the reason why the mirror is gold. Gold is not the greatest reflector in the visible range, but it is a great reflector in the infrared. It’s a tiny amount of gold that’s been used for those mirrors, because the coating is only about 100 nanometers thick. In total that’s less than 50 grams of gold.
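
As a quick sanity check on that number, here is a back-of-the-envelope estimate. The coated area of roughly 25 square meters is an assumption on my part; the thickness is the 100 nanometers mentioned above.

```python
# Rough estimate of the mass of Webb's gold coating.

area_m2 = 25.0                # assumed total gold-coated mirror area
thickness_m = 100e-9          # coating thickness, about 100 nanometers
gold_density_kg_per_m3 = 19300

mass_grams = area_m2 * thickness_m * gold_density_kg_per_m3 * 1000
print(f"Gold mass: about {mass_grams:.0f} grams")  # comes out just under 50 grams
```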

That other big piece on the telescope is the sun shield. It’s there to keep the instruments cool. Webb has four different instruments, each of which can detect somewhat different properties of the light that the telescope collects. By the end of January, the Webb telescope reached its final position, the Lagrange point two of the sun-earth system, which is farther away from the sun than we are. The good thing about the Lagrange point is that the telescope can orbit around it with only small corrections, so it won’t need a lot of fuel. The fuel supply for the propulsion system is designed to last for about 10 years, but maybe in 10 years it’ll be possible to refill it.

The sunshield will block emissions from both the sun and the earth. The temperature differences between the two sides are remarkable. On the side facing towards the sun it’s as high as 80 degrees Celsius, and on the side facing out into space, just 40 degrees above absolute zero. Still, for one of the instruments on board the Webb telescope that isn’t cold enough; it needs to be cooled to 7 degrees above absolute zero. Webb does that with a cooling system that doesn’t exhaust a supply of coolant, but can keep on going as long as the equipment doesn’t fail and as long as it has power. Where does the power come from? It comes from the sun. Lots of solar power out there in space.

So what do astrophysicists want to do with the Webb telescope? Well, the way it works in astrophysics is that you apply for time with a telescope so you can collect the data you want. The Webb telescope will be used by those groups whose research proposals have been accepted. And the people who worked on the telescope development have some time reserved for themselves. You can look up on the website which research programs have been accepted. Here is an overview of the topics. I will leave you a link in the info below so you can look at the complete list yourself. The observation cycles will each last about a year. The first cycle will start after Webb has completed its checks, probably around June this year.

As you can see, the scientific program covers quite a variety of topics. One is scanning exoplanets for signs of molecules that might signal the presence of life. Infrared light is good for that because the absorption signatures of molecules like oxygen, water, carbon dioxide, ozone, and methane are in that range.

Another big bunch of topics is the formation of stars and solar systems, which is often obscured in other wave-length ranges. From Webb we might learn a lot about how all that dust manages to clump together and form planets. There is also the dust itself. I know I said that you can use infrared light to see through dust, but at somewhat shorter wavelengths you’ll also begin to probe the chemical make-up of dust clouds. And some researchers want to use the Webb telescope to look at objects inside the solar system. For example at asteroids, to find out whether there’s water on them or something else. This would be very helpful to find out where the water on earth comes from, which is somewhat of a mystery. And that again would be very helpful to understand how likely it is that life developed elsewhere in the universe.

All of this is pretty cool, but I am personally most excited about the observations on young galaxies, at extremely high redshifts, early in the universe. The youngest galaxy that the Hubble telescope has seen has been estimated to date back to about 400 million years after the big bang. Webb should be able to see back to about 100 million years after the big bang.

That’s very interesting because the way that galaxies form tells us something about the matter in the universe, in particular about dark matter and its role in structure formation. In the currently most widely accepted theory for cosmology, the large galaxies we see today built up very gradually from mergers of smaller galaxies.

This figure shows how astrophysicists think this works. All the symbols here are galaxies and the larger the symbol the larger the galaxy. Time increases from the bottom up. At the beginning you have all these tiny galaxies, and then they join to increasingly larger ones.

What you can see from this graph is that if this theory is correct there basically shouldn’t be any large galaxies at very early times. But is this correct? This figure shows the predictions from the millennium simulation in comparison to data. You can see two things here. One is that there isn’t a lot of data at the moment. But also that it seems like the data is way off the simulation.

The millennium simulation was a large computer simulation for structure formation in the standard model of cosmology. In such a simulation, you basically distribute dark matter in the early universe and then you let it clump following its own gravitational pull. Normal matter mostly follows the gravitational pull of the dark matter, but the normal matter sticks together better and forms stars, which dark matter doesn’t do, or at least isn’t expected to do.

The millennium simulation used about 10 billion particles to study structure formation. That was pretty amazing in 2005, but today computing power has much improved. The newest simulation for structure formation is the Uchuu simulation that was just released a few months ago. It contains about two trillion particles, or to be precise, 2,097,152,000,000. You can also download this data yourself if you want. At least if you have 125 Terabytes of free disk space.

There will without doubt soon be papers coming out that quantify the structures they have found in this simulation. They will probably confirm the findings of the earlier simulation, namely that galaxy formation with dark matter takes a long time. You don’t expect to see large galaxies early on in the universe if the standard model of cosmology is right.

If the Webb telescope sees large galaxies anyhow, then that’s going to be very difficult to explain with dark matter. That, in my opinion, would be the most interesting discovery the telescope could make. Though, I guess oxygen and water on an exoplanet would be a close second. What do you hope Webb will see? Let me know in the comments.

Saturday, January 29, 2022

Can animals sense earthquakes?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video. References are in the info fold below the video on YouTube or on Patreon.]


Earthquakes are much more common than you might expect. An earthquake of magnitude 6 and up happens every couple of days somewhere on the planet. It’s just that most of them don’t affect densely populated areas, and, as the seismologist Nicholas Ambraseys put it “earthquakes don’t kill people, buildings do”.

But earthquakes are the deadliest natural disasters. In the two decades from 1998 to 2017, they killed more than 700 thousand people.

So what are seismologists doing to warn people of earthquakes? Can animals sense if an earthquake is coming? And what are earthquake lights? That’s what we will talk about today.

This is the second part of a two-part video. Last week, we talked about long-term and mid-term predictions for earthquakes. Today we will talk about short-term predictions, that’s months to seconds before an earthquake. Short-term predictions don’t help with infrastructure investments, but they give people time to evacuate.

The one reliable known precursor for earthquakes is the P wave that comes a few seconds up to a minute before the biggest shake. The P stands for primary and for pressure. It’s a longitudinal deformation that travels through earth, so the displacement that the wave produces is parallel to the direction of propagation of the wave.

This P wave is difficult to notice and doesn’t normally cause much damage but it’s fairly easy to measure, so it can be used to issue a warning. What causes most of the damage is the S wave which comes a little later because it travels a little slower. The S stands for secondary or shear. The displacement of this wave is perpendicular to the direction of propagation of the wave, and that’s what causes buildings to crack.

Now strictly speaking using the P wave isn’t a prediction of an earthquake because with the P wave the earthquake has already started. And a few seconds might just be enough to get out of the house, but this will only work if people are alert to the warning to begin with. Clearly you’d want to know at least a few hours or better days ahead if an earthquake is coming up. You want an actual prediction. The most obvious thing you can do for that is to look at records of past earthquakes and see if you can find any precursors in those. The problem is that the known precursors are unreliable.

Take for example the first earthquake with a successful prediction on record: The 1975 Haicheng earthquake in northeast China. It had a magnitude of 7 point 3. The event was preceded by many different precursors: foreshocks, ground-water changes, and strange animal behavior. Seismologists put out a warning on the day of the earthquake before it hit. Buildings in the city were mostly evacuated. About a thousand people died, which for a city of that size and an earthquake of that magnitude isn’t much.

Seismologists were really excited about this success. But only a year later, an earthquake of magnitude seven point seven happened just 200 kilometers southwest of Haicheng. No precursors were detected, no warning was issued. The earthquake largely destroyed the city of Tangshan and killed more than a quarter of a million people. That’s the official number. The unofficial number is about three times as high.

As you see, it’s difficult. Seismologists can’t always find precursors, but they have made progress in identifying some of them.

The most obvious precursors are seismic changes like tiny earthquakes or measurable deformations in the rocks. Small deformations in rocks can also allow gases to escape from underground. That’s mainly radon gas which is quite easy to measure since it’s radioactive. Small cracks in rocks can also cause changes in groundwater level. And in case the groundwater is connected to a thermal source, that can further change the ground temperature.

This may all sound rather obvious, but there are also some surprising precursors and not all of them are well understood, like fluctuations of the Earth’s magnetic field. These were measured for example in October 1989, when an earthquake of magnitude 7.1 hit Loma Prieta in Northern California.

Researchers from Stanford University measured fluctuations in the magnetic field first a couple of weeks and then some hours before the earthquake. They only found this in the data after the earthquake had happened, so they couldn’t issue a warning, but the same has also been seen in a few other events so it’s worth monitoring. It’s somewhat unclear at the moment what causes these fluctuations of the magnetic field, could be stress in the rocks, or changes in the flow of molten rocks further underneath.

Maybe related to this or maybe not, sometimes earthquakes are preceded by fluctuations in the ionosphere, so that’s the upper part of the atmosphere at 50 kilometers and up.

For example, two hours before a magnitude 9.2 earthquake hit Alaska in 1964, researchers in Boulder, Colorado, reported a strong anomaly in the ionosphere in that area. This Alaska earthquake, by the way, was the second largest one ever recorded. The largest one is the Valdivia earthquake from 1960 in southern Chile, which had a magnitude of 9.4.

Another electromagnetic phenomenon that researchers have observed are changes in the Van Allen belts. The Van Allen belts are regions in the upper atmosphere where charged particles get trapped by Earth's magnetic field. When those belts become overloaded with too many particles, then some particles hit the upper atmosphere and cause auroras.

Scientists have known since the late 1980s that the Van Allen belts change locally in the area of an earthquake several days to several hours before the earthquake hits. In 2003 and 2005 Russian and Italian researchers did a statistical analysis of satellite data and found that particle bursts from the Van Allen belts precede earthquakes by 4-5 hours. They say the correlation has a high statistical significance of 5 sigma.

In 2013, another Italian team found a similar correlation, and they claim an even higher statistical significance of 5.7 sigma. However, the authors also point out that they saw this only for a few earthquakes of magnitude 5 and up. Then again, this may just be because the data aren’t good enough. The Chinese-Italian satellite mission CSES is currently collecting data to maybe solve this puzzle.

So as you can see there are quite a few earthquake precursors that are fairly well established, though not all are well understood. Now what’s with the idea that animals can tell when an earthquake is coming up?

Indeed, that animals act weird before an earthquake has been reported for as long as there’ve been reports, basically. Already in the third century, the Roman author Claudius Aelianus wrote about an earthquake that had wiped out the Greek city of Helike about 400 years BC.

“For five days before Helike disappeared all the mice and martens and snakes and centipedes and beetles and every other creature of that kind in the town left... And the people of Helike seeing this happening were filled with amazement, but were unable to guess the reason. But after the aforesaid creatures had departed, an earthquake occurred in the night; the town collapsed; an immense wave poured over it, and Helike disappeared”.

Makes you wonder how he would have known what the mice and beetles were doing in Greece 500 years before he was born. Also, some recent historical studies showed that coins were still issued in Helike some decades after it allegedly disappeared, there’s no report of Helike’s destruction in Greek texts, and no evidence of a tidal wave washing over the city. So, well. Don’t believe everything you read in books that are thousands of years old, even if it’s in Latin.

But in any case, this is the first anecdote we have of animals acting weird before an earthquake. Such stories frequently made headlines. For example, just a couple of months ago, after an earthquake in Victoria in Australia, zoo staff reported that birds and kangaroos had been acting strangely that day.

And those stories aren’t entirely implausible because animals might be able to pick up some of the previously listed precursors, such as unusual gas emissions or changes to the magnetic field. But as we discussed in a previous video, humans have a tendency to see patterns even where there aren’t any, so going by anecdotes is not a good idea. The major issue with those reports is the lack of control groups. If a herd of sheep acts funny but no earthquake happens, no one writes headlines about that.

But some scientists have looked into the idea that animals can tell when an earthquake is coming up. Already in 1988, Rand Schaal, a geologist from the University of California, Davis, did a statistical analysis to test whether the number of newspaper ads about lost pets was a sign of an imminent earthquake. He compared more than 42 thousand reports of missing pets with the times of about 200 earthquakes of magnitude 2.5 or larger in the San Francisco Bay area. He found no significant correlation.

More recently, in 2018, a paper by a group from the German Research Centre for Geosciences analyzed more than 700 reports of claimed correlations between earthquakes and “anomalies” in animal behavior.

Those reports were from more than one hundred different species – fish, mice, elephants, cows, snakes and various pets. The researchers found that the number of claimed observations increases close to the seismic event. Almost 60% of reports come from the last 5 minutes. This makes it possible that the animals might be detecting the P-wave, which humans usually fail to notice. But 50% of the reports relate to only 3 earthquakes, and the details of the reports are insufficient for a rigorous scientific analysis. The German group concludes that… they can’t conclude anything because the data is rubbish. So, more work is needed, basically.

Now let’s talk about the mysterious earthquake lights. Some people claim to have seen strange lights in the sky before, during, or shortly after large earthquakes, up to several kilometers away from the epicenter. The lights typically last a fraction of a second to several seconds. Anecdotes about this can be found already in ancient Egyptian, Chinese, and Japanese documents. Those earthquake lights are definitely a real thing. Indeed, you can see them yourself in this video. This footage was captured by a security camera at the university in San Miguel, Peru, during a 2007 earthquake of magnitude 8.

The researchers who captured this video were later able to directly compare it with the recorded seismic activity. In a paper published in 2011 they showed that the time of the lights not only coincided with the time of the earthquakes but that they coincided with peaks of ground acceleration as you can see in this graph.

Earthquake lights have been reported mostly for a certain type of earthquake, the so-called shallow earthquakes of high magnitude. Some of the reports are probably something else, like sparks in electricity lines, but earthquake lights seem to be a real thing. It is not presently well understood what causes those lights, but one hypothesis that seismologists are investigating is piezoelectricity, so that is an electric response that some solid materials have to stress. However, some of the observed earthquake lights last too long for that explanation to work, so that’s probably not the full story.

As you see, seismologists know quite a few precursors, so how is it that still so few earthquakes can be predicted? Researchers from Japan have pointed out that short-term precursors are mostly non-seismic, so understanding and making use of them requires a multi-disciplinary effort that’s slow to get going. Artificial Intelligence will probably help, but big earthquakes are too infrequent to provide much training data, so that won’t really solve the problem.

Another issue is that predictions come with a heavy burden if they’re wrong. A prominent example is the 2009 L’Aquila Earthquake in Italy. It had magnitude 6 point 3. 308 people died. In 2012 a court gave multiple manslaughter convictions to six Italian seismologists for false assurances and failing to predict the earthquake. The conviction was overturned two years later but still Italians yelled “Shame! Shame!” at the scientists.

Well, it’s a shame indeed, but not because the seismologists were wrong. No prediction is ever 100% certain. But it would be good if earthquake predictions came with a quantitative risk assessment, much like weather forecasts maybe. As you can see, earthquake prediction is a very active research area, and we will update you from time to time. So if you liked this video, don’t forget to subscribe.

Saturday, January 22, 2022

Does the sun cause earthquakes? (It's not as crazy as it sounds.)

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video. References are in the info fold below the video on YouTube or on Patreon.]


Earthquakes are the deadliest natural disasters. According to a report from the United Nations Office for Disaster Risk Reduction, in the period from 1998 to 2017, earthquakes accounted for 7.8% of natural disasters, but for 56% of deaths from natural disasters. Why is it so hard to predict earthquakes? Did you know that the number of earthquakes correlates with solar activity and with the length of the day? You didn’t? Well then stay tuned because that’s what we’ll talk about today.

This is the first part of a two-part video about earthquake prediction. In this part, we will talk about the long-term and intermediate-term forecast for earthquake probability, ranging from centuries to months. And in the second part, which is scheduled for next week, we will talk about the short-term forecast, from months to seconds.

First things first, why do earthquakes happen? Well, there are many different types of earthquakes, but the vast majority of large earthquakes happen in the same regions, over and over again. You can see this right away from this map which shows the locations of earthquakes from 1900 to 2017. This happens because the surface of earth is fractured into about a dozen pieces, the tectonic plates, and these plates move at speeds of a few centimeters per year. But they don’t all move in the same direction, so where they meet, they rub on each other. Those places are called “faults”.

Most of the time, a fault is “locked”, which means that resistance from friction prevents the rocks from moving against each other. But the strain in the rock accumulates until it reaches a critical value where it overcomes the frictional resistance. Then the rocks on both sides of the fault suddenly slip against each other. This suddenly releases the stress and causes the earth to shake.

But that’s not the end of the story because the plates continue to move, so the strain will build up again, and eventually cause another earthquake. If the motion of the tectonic plates was perfectly steady and the friction was perfectly constant, you’d expect the earthquakes to happen periodically. But reality is more difficult than that. The surface of the rocks isn’t exactly the same everywhere, the motion of the plates may not be entirely steady, the earthquakes themselves may change the rocks, and also, earthquakes in one location can trigger earthquakes elsewhere.

This is why earthquakes recur in irregular intervals. In a nutshell, it’s fairly easy to predict where big earthquakes are likely to happen, but difficult to predict when they will happen. According to the website of the US Geological Survey, “Neither the USGS nor any other scientists have ever predicted a major earthquake. We do not know how, and we do not expect to know how any time in the foreseeable future.”

Ok, so that’s it, thanks for watching. No, wait. That’s not it! Because even though no one knows how to predict an individual earthquake, we still might be able to predict the probability that an earthquake occurs in some period of time. This sounds somewhat lame, I know, but this information is super important if you want to decide whether it’s worth investing into improving the safety of buildings, or warning systems. It can save lives.

And indeed, geophysicists know some general probabilistic facts about the occurrence of earthquakes. The best known one is probably the Gutenberg-Richter law. The Gutenberg-Richter law is a relationship between the magnitude and the total number of earthquakes which have at least that magnitude.

Loosely speaking it says that the number of large earthquakes drops exponentially with the magnitude. For example, in seismically active regions, there will typically be about 10 times more events of magnitude 3.0 and up than there are of 4.0 and up. And 100 times more earthquakes of magnitude 2.0 and up than 4.0 and up, you get the idea. The exact scaling depends on the region; it can actually be larger than a factor 10 per unit of magnitude.
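
For illustration, here is a minimal sketch of the Gutenberg-Richter relation in Python. The form N(≥M) = 10^(a − bM) is the standard one; the values a = 5 and b = 1 are made up for this example, since the actual values depend on the region and the time window.

```python
# Gutenberg-Richter relation: expected number of earthquakes of at least magnitude M.

def expected_count(magnitude, a=5.0, b=1.0):
    """N(>= M) = 10**(a - b*M); a and b are illustrative placeholder values."""
    return 10 ** (a - b * magnitude)

for m in (2.0, 3.0, 4.0):
    print(f"M >= {m}: about {expected_count(m):,.0f} events")

# With b = 1, each step of one magnitude unit reduces the count by a factor of 10,
# which reproduces the '10 times more' and '100 times more' statements above.
```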

The US Geological Survey has for example used the past records of seismic activity in the San Francisco bay area to predict that the area has a 75% probability of an earthquake of at least magnitude 6.0 before the year 2043.

Geophysicists also know empirically that the distribution of earthquakes over time strongly departs from a Poisson distribution, which means it doesn’t look like it’s entirely random. Instead, the observed distribution indicates the presence of correlations. They have found for example that earthquakes are more likely to repeat in intervals of 32 years than in other intervals. This was first reported in 2008 and has since also been found by other researchers. Here is for example a figure from a 2017 paper by Bendick and Bilham, which shows the deviations of the earthquake clustering from randomness. A completely random distribution would sit at zero everywhere, and the blue curve shows there’s a periodicity in the intervals.

That there are patterns in the earthquake occurrences is very intriguing and the reason why geophysicists have looked for systematic influences on the observed rate of earthquakes.

We have chosen here three examples that we totally subjectively found to be the most interesting: Solar activity, tides, and the length of the day. I have to warn you that this is all quite recent research and somewhat controversial, but not as crazy as you might think.

First, solar activity. In 2020 a group of Italian researchers published a paper in which they report having found a very strong correlation between earthquakes and solar activity. They analyzed 20 years of data from the SOHO satellite about the density and velocity of protons in the magnetosphere, so that’s about 500 kilometers above the surface of earth. Those protons come from the solar wind, so they depend on the solar activity. And then they compared that to the worldwide seismicity in the corresponding period.

They found that the proton density strongly correlated with the occurrence of large earthquakes of magnitude 5.6 and up, with a time shift of one day. The authors claim that the probability that the correlation is just coincidence is smaller than 10 to the minus five. And the correlation increases with the magnitude of the earthquake.

The authors of the paper also propose a mechanism that could explain the correlation at least qualitatively, namely a reverse piezoelectric effect. The piezoelectric effect is when a mechanical stress produces an electric field. The reverse piezoelectric effect is, well, the reverse. Discharges of current from the atmosphere could produce stress in the ground. That could then trigger earthquakes in regions where the stress load was already close to rupture. A few other groups have since looked at this idea and so far no one has found a major problem with the analysis.

The problem with using solar activity to predict earthquakes is, well, that it’s difficult to predict solar activity… Though the sun is known to have a periodic cycle, so if this result holds up it’d tell us that during years of high solar activity we’re more likely to see big earthquakes.

Second, tides. The idea that tides trigger earthquakes has a long history. It’s been discussed in the scientific literature already since the 19th century. But for a long time, scientists found statistically significant correlations only limited to special regions or circumstances. However, in 2016 a team of Japanese researchers published a paper in Nature in which they claimed to have found that very large earthquakes, above magnitude 8 point 2 tend to occur near the time of maximum tidal stress amplitude.

They claim that this result makes some sense if one knows that very large earthquakes often happen in subduction zones, so that’s places where one tectonic plate goes under another. And those places are known to be very sensitive to extra stress, which can be caused by the tides. Basically the idea is that tides may trigger an earthquake that was nearly about to happen. However, it isn’t always the case that large earthquakes happen when the tide is high and also, there are very few of these earthquakes overall which means the correlation has a low statistical significance.

Third: The length of the day. As you certainly know, the length of the day depends on which way the wind blows.

Ok, in all fairness I didn’t know this, but if you think about it for a second, this has to be the case. If the earth was a perfectly rigid ball, then it would rotate around its axis steadily because angular momentum is conserved. But the earth isn’t a rigid ball. Most importantly it’s surrounded by an atmosphere and that atmosphere can move differently than the solid sphere. This means if the wind blows in the other direction than the earth is spinning, then the spinning of the earth has to speed up to preserve angular momentum. Physics!

This is a small effect but it’s totally measurable and on the order of some milliseconds a day. Indeed, you can use the length of the day to draw conclusions about annual weather phenomena, such as El Niño. This was first shown in a remarkable 1991 paper by Hide and Dickey. Have a look at this figure from their paper. The horizontal axis is years and the upper curve is variations in the length of the day. The lower curve is a measure for the strength of the Southern Oscillation, that’s a wind pattern which you may know from the La Niña and El Niño years. You can see right away that they’re correlated.

Yes, so the length of the day depends on which way the wind is blowing. The fluid core of the earth is also sloshing around and affects the length of the day, but this effect is even smaller than that of the atmosphere and less well understood. Fun fact: The fluid core is only about 3000 km underneath your feet. That’s less than the distance from LA to New York. But back to the earthquakes.

Earthquakes make the earth more compact which decreases the moment of inertia. But again, angular momentum is conserved, so earthquakes shorten the length of the day. But that’s not all. Geophysicists have known since the 1970s that seismic activity correlates with the rotation of the earth and therefore the length of the day, in that shorter days are followed by more earthquakes, with a time-lag of about 5 years.
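
The underlying relation is just angular momentum conservation: with L = Iω constant, the length of the day is proportional to the moment of inertia, so a fractional change in the moment of inertia gives the same fractional change in the day length. Here is a minimal sketch with a made-up number for the change:

```python
# Change in the length of the day from a change in Earth's moment of inertia,
# using I * omega = const, i.e. delta_T / T = delta_I / I.

day_seconds = 86400.0
fractional_change_in_I = -1e-10   # hypothetical tiny decrease caused by an earthquake

delta_T_seconds = day_seconds * fractional_change_in_I
print(f"Change in day length: {delta_T_seconds * 1e6:.1f} microseconds")
# A more compact Earth (delta_I < 0) spins faster, so the day gets a little shorter.
```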

Since the 1970s the data has much improved, and this finding has become somewhat more robust. Based on this, Bendick and Bilham made a forecast in 2017 that in 2018 we would see an uptick in earthquakes. The number of large earthquakes since 2018 has been within the uncertainties of their forecast. Yes, correlation doesn’t necessarily imply causation, but correlations are useful for forecasting even if you don’t understand the causation.

Just why that happens is presently rather unclear. Bendick and Bilham suggest that earthquakes are weakly coupled by the rotation of the earth, and when that rotation frequency changes that may cause a cascade of effects by inertia, basically. The earth spins and all those plates on it spin with it, but when the spinning changes it takes some time until the plates get the message. And then they don’t all react the same way, which may cause some extra stress. That triggers earthquakes in some places and those trigger further earthquakes.

So it’s not like the changes in the rotation actually cause earthquakes, it’s just that they advance some earthquakes, and then retard others because the stress between the plates was released early. But really no one knows whether this is actually what happens.

As you can see, geophysicists are teasing out some really intriguing systematic correlations that may lead to better long-term predictions for earthquake risk. And next week we will talk about short term predictions, among other things whether animals can sense earthquakes and whether earthquake lights are real.

Saturday, January 15, 2022

Are warp drives science now?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Warp drives are not just science fiction. Einstein’s theory of general relativity says they should be possible. Yes, that guy again!

A year ago I told you about some recent developments, and since then warp drives have been in the news several times. In one case the headlines claimed that a physicist had found a warp drive that makes faster than light travel possible without requiring unphysical negative energies. In another case you could read that a “warp drive pioneer” had discovered an “actual, real world warp-bubble”. Seriously? What does that mean? That’s what we’ll talk about today.

First things first, what’s a warp drive? A warp drive is a new method of travel. It brings you from one place to another not by moving you through space, but by deforming space around you. Alright. Easy enough. Just one thing. How do you do that?

Well, Einstein taught us that you can deform space with energy, so you surround yourself with a bubble of energy, which contracts the space before you and expands it behind you. As a result, the places where you want to go move closer to you. So you’re traveling, even though you didn’t move. Okay. But what’s that bubble of energy made of and where do you get it from? Yes, indeed, good question. That’s why no one has ever built an actual warp drive.

As I explained in my previous video, warp drives are solutions to Einstein’s equations of general relativity. So they are mathematically possible. But that a warp drive is a solution of general relativity does not mean it makes physical sense.

What Einstein’s equations tell you is just that a certain distribution of energy and mass goes along with a certain curved space-time. If you put in a distribution of energy and mass, you get out the curved space-time this would create. If you put in a curved space-time, you get out the distribution of energy and mass that you would need to create it. There will always be some energy distribution for which your curved space-time is a solution. But in general those distributions of energy and mass are not physically possible.
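
For reference, the equations in question schematically read

\[ G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \]

with the curvature of space-time encoded on the left and the stress-energy tensor, which describes the distribution of energy and mass, on the right. Specify the right-hand side and solve for the geometry, or prescribe the geometry and read off the energy and mass you would need to create it.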

There are three different types of weird stuff, which we have never seen, that can become necessary for warp drives. There is (a) stuff that has negative energy density, (b) stuff that has a weird gravitational behavior which can seem repulsive, and (c) stuff that moves faster than the speed of light.

The worst type of weird stuff is that with the negative energy density, not only because we’ve never seen that but also because it would make the vacuum unstable. If negative energies existed, one could make pairs of negative and positive energy particles out of nothing, in infinite numbers, which destroys literally everything. So if negative energy existed we wouldn’t exist. We’ll mark that with a red thumb down.

The repulsive stuff isn’t quite as bad. Indeed, physicists have a few theories for such repulsive stuff, though there is no evidence they actually exist. There is for example the hypothetical “inflaton field” which allegedly rapidly expanded our universe just after the big bang. This inflaton field, or rather its potential, can act gravitationally repulsive. And dark energy is also repulsive stuff, if it is stuff. And if it exists. Which it may not. But well, you could say, at least that stuff doesn’t destroy the universe so we’ll mark that with an orange thumb down.

Finally, superluminal stuff, so stuff that moves faster than light. This isn’t all that problematic other than that we’ve never seen it, so we’ll give it a yellow thumb down. It’s just that if you need stuff that moves faster than light to move stuff faster than light then that isn’t super useful.

Now that we have color coded problematic types of energy, which makes us look super organized, let us look at whether warp drives require them. The best known warp drive solution dates back to 1994 and is named the “Alcubierre drive” after the physicist Miguel Alcubierre. The Alcubierre drive requires all of the above: negative energies, repulsive gravity, and superluminal stuff. That’s not particularly encouraging.

Now the big headlines that you saw in March last year were triggered by a press release from the University of Göttingen about the publication of a paper by Erik Lentz. Lentz claimed to have found a new solution for warp drives that does not require negative energies.

The paper was published in the journal Classical and Quantum Gravity, which is a specialized high quality journal. I have published quite a few papers there myself. I mention this because I have seen a few people try to criticize Lentz’ paper by discrediting the journal. This is not a good argument, it’s a fine journal. However, this doesn’t mean the paper is right.

Lentz claims both in his paper and the press release that he avoided unphysical stuff by stitching together solutions that, to make a long story short, have fewer symmetries than the warp drives that were previously considered. He does not explain why or how this prevents negative energies.

Just one month later, in April 2021, another paper came out, this one by Shaun Fell and Lavinia Heisenberg. They made a similar claim to Lentz’s, namely that they’d found a warp drive solution that doesn’t require unphysical stuff by using a configuration that has fewer symmetries than the previously considered ones. The Fell and Heisenberg paper got published in the same journal and is mathematically more rigorous than Lentz’. But it didn’t come with a press release and didn’t make headlines.

Now, those new warp drive solutions, from Lentz and Fell and Heisenberg are in the same general class as the Alcubierre drive, which is called the Natario class, named after Jose Natario. In May last year, a third warp drive paper appeared, this one by Santiago, Schuster, and Visser. The authors claim to have proved that all warp drives in the Natario class require negative energies and are therefore not physical. So that means, the new solutions are all bad, bad, and bad.

Why the disagreement? Why do some say it’s possible while others prove it’s impossible? Well, in their paper, Santiago and coauthors point out that the other authors omitted a necessary check on their derivation. After that, Fell and Heisenberg revised their paper and now agree that their warp drive requires negative energies after all. Lentz also revised his paper but still claims that he doesn’t need negative energies. It’s unclear at the moment how he wants to escape the argument in the Santiago paper.

Now, the Santiago paper has not yet gotten published in a journal. I’ve read it and it looks good to me but in all honesty I didn’t check the calculation. If this result holds up, you may think it’s bad news because they’ve ruled out an entire class of warp drives. But I think it’s good news because their proof tells us why those solutions don’t work.

Just to give you the brief summary, they show that these solutions require that the integral over the energy density is zero. This means if it’s non-zero anywhere, it has to be negative somewhere. If they’re correct, this would tell us we should look for solutions which don’t have this constraint.
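
In symbols, the constraint they derive is schematically

\[ \int \rho \, dV = 0, \]

so unless the energy density vanishes everywhere, any region where it is positive must be balanced by a region where it is negative. (The precise integral in the paper is more specific than this; the above is just the gist of the argument.)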

Okay, so the situation with the new solution from Lentz isn’t entirely settled, but if the proof from the Santiago paper is right then Lentz’ solution also has a negative energy problem. The Santiago paper by the way does not apply to the more general class of warp drives from Bobrick and Martire, which I talked about in my earlier video. The issue with those more general warp drives is that they’re somewhat unspecific. They just say we need several regions with certain properties, but one doesn’t really know how to do that.

Those general warp drives can be divided into those that move faster than light and those that move slower than light. The ones that move faster than light still require stuff that moves faster than light, so they’re still problematic. The ones that stay below the speed of light however don’t seem to have any obvious physical problem. You might find it somewhat disappointing that a warp drive stays below the speed of light and I can see that. But look at it this way: if we could at least travel with nearly the speed of light that would already be great progress.

So the claim that Lentz found a warp drive solution which allows faster than light travel without negative energies is highly questionable. But what’s with the headlines that said someone had actually built a warp drive? Well that turns out to be just bad science communication.

In July last year, a paper was published by a group with the lead author Harold White. They had done a computer simulation of certain nanostructures. And in that simulation they found a distribution of energies similar to that of the Alcubierre drive. This can happen because on very short distances the Casimir effect can give rise to energy densities that are effectively negative. So, not only did they not actually build the thing, it was a computer simulation, it’s also an in-medium effect. It’s kind of like a simulation of a simulation and definitely not an “actual” warp drive.

Where does this leave us? The big picture is that warp drives are getting some serious attention from researchers who work on general relativity. I think this is a good development. We certainly have a long way to go, but as they say, every journey begins with a first step. I think warp drives are a possibility that’s worth investigating. If you want to work on warp drives yourself, check out Gianni Martire’s website, because he is offering research grants and tells me he has to get rid of the money fast.

Having said that, I think those people all miss the point. If you want to have a propulsion mechanism the relevant question isn’t whether there is some energy distribution that can move an object. The question is how efficiently can you convert the energy into motion. You want to know what it takes to accelerate something. At present those papers basically say if you throw out stuff that way, then the space-ship will go that way because momentum is conserved. And that is probably correct, but it’s not exactly a new idea.

Saturday, January 08, 2022

Do Climate Models predict Extreme Weather?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


The politics of climate change receives a lot of coverage. The science, not so much. That’s unfortunate because it makes it very difficult to tell apart facts from opinions about what to do in light of those facts. But that’s what you have me for, to tell apart the facts from the opinions.

What we’ll look at in this video is a peculiar shift in the climate change headlines that you may have noticed yourself. After preaching for two decades that weather isn’t climate, climate scientists now claim they can attribute extreme weather events to climate change. At the same time, they say that their climate models can’t predict the extreme events that actually happen. How does that make sense? That’s what we’ll talk about today.

In the last year a lot of extreme weather events made headlines, like the heat dome in northwestern North America in summer 2021. It lasted for about two weeks and reached a record temperature of almost 50 degrees Celsius in British Columbia. More than a thousand people died from the heat. Later in the summer, we had severe floods in central Europe. Water levels rose so quickly that entire parts of cities were swept away. More than 200 people died.

Let’s look at what the headlines said about this. The BBC wrote “Europe's extreme rains made more likely by humans... Downpours in the region are 3-19% more intense because of human induced warming.” Or, according to Nature, “Climate change made North America’s deadly heatwave 150 times more likely”. Where do those numbers come from?

These probabilities come out of a research area called “event attribution”. The idea was first put forward by Myles Allen in a Nature commentary in 2003. Allen, who is a professor at the University of Oxford, was trying to figure out whether he might one day be able to sue the fossil fuel industry because his street was flooded, much to the delight of ducks, and he couldn’t find sufficiently many sandbags.

But extreme event attribution only attracted public attention in recent years. Indeed, last year two of the key scientists in this area, Friederike Otto and Geert Jan van Oldenborgh, made it on the list of Time Magazine’s most influential people of 2021. Sadly, van Oldenborgh died two months ago.

The idea of event attribution is fairly simple. Climate models are computer simulations of the earth with many parameters that you can twiddle. One of those parameters is the carbon dioxide level in the atmosphere. Another one is methane, and then there’s aerosols and the amount of energy that we get from the sun, and so on. 

Now, if carbon dioxide levels increase then the global mean surface air temperature goes up. So far, so clear. But this doesn’t really tell you much because a global mean temperature isn’t something anyone ever experiences. We live locally from day to day and not globally averaged over a year.

For this reason, global average values work poorly in science communication. They just don’t mean anything to people. So the global average surface temperature has increased by one degree Celsius in 50 years? Who cares.

Enter extreme event attribution. It works like this. On your climate model, you leave the knob for greenhouse gases at the pre-industrial levels. Then you ask how often a certain extreme weather event, like a heat wave or flood, would occur in some period of time. Then you do the same again with today’s level of greenhouse gases. And finally you compare the two cases.

So, say, with pre-industrial greenhouse gas levels you find one extreme flood in a century, but with current greenhouse gas levels you find ten floods, then you can say these events have become ten times more likely. I think such studies have become popular in the media because the numbers are fairly easy to digest. But what do the numbers actually mean?
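
Here is a minimal toy version of that comparison in Python. The numbers are synthetic stand-ins for model output, and the threshold that defines “extreme” is an arbitrary choice, which matters, as discussed below.

```python
# Toy event-attribution ratio: compare how often an 'extreme' threshold is exceeded
# in simulated pre-industrial versus present-day climates. Synthetic data only.

import numpy as np

rng = np.random.default_rng(0)

def count_extremes(samples, threshold):
    """Number of simulated seasons exceeding the extreme-event threshold."""
    return int(np.sum(samples > threshold))

# Made-up 'rainfall index' for many simulated seasons; the shift mimics warming.
pre_industrial = rng.normal(loc=0.0, scale=1.0, size=100_000)
present_day = rng.normal(loc=0.3, scale=1.0, size=100_000)

threshold = 3.0
n_pre = count_extremes(pre_industrial, threshold)
n_now = count_extremes(present_day, threshold)

print(f"pre-industrial: {n_pre} events, present-day: {n_now} events")
print(f"estimated increase in probability: factor {n_now / max(n_pre, 1):.1f}")
```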

Well first of all there’s the issue of interpreting probabilities in general. An increase in the probability of occurrence of a certain event in a changed climate doesn’t mean it wouldn’t have happened without climate change. That’s because it’s always possible that a series of extreme events was just coincidence. Bad luck. But that it was just coincidence becomes increasingly less likely the more of those extreme events you see. Instead, you’re probably witnessing a trend.

That one can strictly speaking never rule out coincidence but only say it’s unlikely to be coincidence is always the case in science, nothing new about this. But for this reason I personally don’t like the word “attribution”. It just seems too strong. Maybe speaking of shifting trends would be better. But this is just my personal opinion about the nomenclature. There is however another issue with extreme event attribution. It’s that the probability of an event depends on how you describe the event.

Think of the floods in central Europe as an example. If you take all the data which we have for the event, the probability that you see this exact event in any computer simulation is zero. To begin with that’s because the models aren’t detailed enough. But also, the event is so specific with some particular distribution of clouds and winds and precipitation and what have you, you’d have to run your simulation forever to see it even once.

What climate scientists therefore do is to describe the event as one in a more general class. Events in that class might have, say, more than some amount of rainfall during a few days in some region of Europe during some period of the year. The problem is that such a generalization is arbitrary. And the more specific you make the class for the event, the more unlikely it is that you’ll see it. So the numbers you get in the end strongly depend on how someone chose to generalize the event.

Here is how this was explained in a recent review paper: “Once an extreme event has been selected for analysis, our next step is to define it quantitatively. The results of the attribution study can depend strongly on this definition, so the choice needs to be communicated clearly.”

But that the probabilities ultimately depend on arbitrary choices isn’t the biggest problem. You could say as long as those choices are clearly stated, that’s fine, even if it’s usually not mentioned in media reports, which is not so fine. The much bigger problem is that even if you make the event more general, you may still not see it in the climate models.

This is because most of the current climate models have problems correctly predicting the severity and frequency of extreme events. Weather situations like dry spells or heat domes tend to come out not as long lasting or not as intense as they are in reality. Climate models are imperfect simulations of reality, particularly when it comes to extreme events. 

Therefore, if you look for the observed events in the model, the probability may just be zero, both with current greenhouse gas levels and with pre-industrial levels. And dividing zero by zero doesn’t tell you anything about the increase of probability.

Here is another quote from that recent review on extreme event attribution “climate models have not been designed to represent extremes well. Even when the extreme should be resolved numerically, the sub-grid scale parameterizations are often seen to fail under these extreme conditions. Trends can also differ widely between different climate models”. Here is how the climate scientist Michael Mann put this in an interview with CNN “The models are underestimating the magnitude of the impact of climate change on extreme weather events.”

But they do estimate the impact, right, so what do they do? Well, keep in mind that you have some flexibility for how you define your extreme event class. If you set the criteria for what counts as “extreme” low enough, eventually the events will begin to show up in some models. And then you just discard the other models. 

This is actually what they do. In the review that I mentioned they write “The result of the model evaluation is a set of models that seem to represent the extreme event under study well enough. We discard the others and proceed with this set, assuming they all represent the real climate equally well.”

Ok. Well. But this raises the question: if you had to discard half of the models because they didn’t work at all, what reason do you have to think that the other half will give you an accurate estimate? In all likelihood, the increase in probability which you get this way will be an underestimate.

Here’s a sketch to see why.

Suppose you have a distribution of extreme events that looks like this red curve. The vertical axis is the probability of an event in a certain period of time, and the horizontal axis is some measure of how extreme the events are. The duration of a heat wave or amount of rainfall or whatever it is that you are interested in. The distributions of those events will be different for pre-industrial levels of greenhouse gases than for the present levels. The increase in greenhouse gas concentrations changes the distribution so that extreme events become more likely. For the extreme event attribution, you want to estimate how much more likely they become. So you want to compare the areas under the curve for extreme events.

However, the event that we actually observe is so extreme it doesn’t show up in the models at all. This means the model underestimates what’s called the “tail” of this probability distribution. The actual distribution may instead look more like the green curve. We don’t know exactly what it looks like, but we know from observations that the tail has to be fatter than that of the distribution we get from the models.

Now what you can do to get the event attribution done is to look at less extreme events because these have a higher probability to occur in the models. You count how many events occurred for both cases of greenhouse gas levels and compare the numbers. Okay. But, look, we already know that the models underestimate the tail of the probability distribution. Therefore, the increase in probability which we get this way is just a lower bound on the actual increase in probability.
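
Here is a toy illustration of that lower-bound argument, a sketch with made-up Gaussian distributions rather than actual climate data: for a shifted distribution, the ratio of exceedance probabilities grows with the threshold, so evaluating it at a less extreme threshold, where the model still produces events, gives a smaller number than the true ratio at the observed extreme.

```python
# Ratio of exceedance probabilities for a shifted versus a baseline distribution.
# The ratio increases with the threshold, so reading it off at a milder threshold
# underestimates the increase in probability of the truly extreme event.

from scipy.stats import norm

shift = 0.3  # made-up shift of the distribution due to warming
for x in (2.0, 3.0, 4.0, 5.0):
    ratio = norm.sf(x - shift) / norm.sf(x)  # sf(x) = P(X > x)
    print(f"threshold {x}: probability ratio {ratio:.1f}")
```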

Again, I am not saying anything new, this is usually clearly acknowledged in the publications on the topic. Take for example a 2020 study of the 2019/20 wildfires in Australia that were driven by a prolonged drought. The authors who did the study write:
“While all eight climate models that were investigated simulate increasing temperature trends, they all have some limitations for simulating heat extremes… the trend in these heat extremes is only 1 ºC, substantially lower than observed. We can therefore only conclude that anthropogenic climate change has made a hot week like the one in December 2019 more likely by at least a factor of two.”
At least a factor of two. But it could have been a factor 200 or 2 million for all we know.

And that is what I think is the biggest problem. The way that extreme event attributions are presented in the media conveys a false sense of accuracy. The probabilities that they quote could be orders of magnitude too small. The current climate models just aren’t good enough to give accurate estimates. This matters a lot because nations will have to make investments to prepare for the disasters that they expect to come and figure out how important that is, compared to other investments. What we should do, that’s opinion, but getting the facts right is a good place to start.

Saturday, January 01, 2022

What’s the difference between American English and British English?

[This video is about pronunciation. The transcript won’t make sense without the audio!]


It never occurred to me that one day people might want to hear me speak in a foreign language. That was not the plan when I studied physics. I’ve meanwhile subscribed to like a dozen English pronunciation channels and spend a lot of time with the online dictionary replaying words, so much so that at this point I think my channel should really be called “Sabine learns English.”

Inevitably, I’m now giving English lectures to my English friend who really doesn’t know anything about English. But then, I also don’t know anything about German, other than speaking it. And because looking into English pronunciation clicked some things into place that I kind of knew but never consciously realized, we’ll start the New Year with a not too brain-intensive video on the difference between American and British English. And that’s what we’ll talk about today.

I spent some years in the United States and some more in Canada, and when I came back to Europe my English sounded like this.

I then acquired a British friend. His name is Tim Palmer, and you already know him as the singing climate scientist.

Tim also has an interest in quantum mechanics and he’d get very offended when I’d say quannum mechanics. And since I’m the kind of person who overthinks everything, I can now tell you exactly what makes British people sound British.

But since my own pronunciation is clearly of no help, I asked Tim to read you some sentences. And to represent the American pronunciation I asked the astrophysicist Brian Keating who kindly agreed.

Before we listen to their speech samples, I want to be clear that of course there are many different accents in both the UK and in the US, and it’s not like there’s only one right pronunciation. But the differences that I’ll be talking about are quite general as I’m sure you’ll notice in a moment.

Now first of all there are many words which are just different in North America and in the UK. Trucks. Lorries. Cookies. Biscuits. Trash. Rubbish. Gasoline. Petrol, and so on.

There are also some words which are used in both languages but don’t mean the same thing, such as rubber or fanny. Just be careful with those.

And then there are words which are the same but are pronounced somewhat differently, like /təˈmeɪtəʊ/ and /təˈmɑːtəʊ/ or /ˈvaɪtəmɪn/ and /ˈvɪtəmɪn/, or they have a somewhat different emphasis, like /vækˈsiːn/, which in American English has the emphasis on the second syllable, whereas in British English it’s /ˈvæksiːn/ with the emphasis on the first syllable.

But besides that there are also some overall differences in the pronunciation. Probably the most obvious one is the t’s. Listen to this first example from Brian and Tim and pay attention to those t’s.

“Quantum mechanics isn’t as complicated as they say.”

In American English it’s quite common to use tap t’s that sound kind of like d’s. Bedder, complicated. Or to mumble over the t’s altogether, as in quannum mechanics.

In British English the t’s tend to be pronounced much more clearly. Better, complicated, quantum mechanics.

While this is, I believe, the difference that’s easiest to pick up on, it’s not overall the biggest difference. I think the biggest difference is probably that a lot of “r”s are silent in British English, which means they’re not pronounced at all. There are a lot of words in which this happens, like the word “word”, which in American English would be “word”. Let’s hear the next sample and pay attention to the r’s.

“The first stars were born about one hundred million years after the big bang.”

In American English you have first stars born. In British English, none of those has an r, so it’s first stars born. The Brits also drop those r’s at the end of words.

“The plumber wasn’t any less clever than the professor.”

That sound which the Brits leave at the end of an “er” word is called the “schwa” sound, and its phonetic spelling is an “e” turned by 180 degrees. It’s the most common sound in British English and it goes like this. E. That’s it.

In case you think that “schwa” sounds kind of German, that’s because it is kind of German. It goes back to a Hebrew word, but the term was introduced by the German linguist Johann Schmeller in 1821 to describe how Germans actually pronounced words rather than how they wrote them. The schwa sound is all over the place in German too, for example at the end of words like Sonne, Treppe, or Donaudampfschiffahrtsgesellschaftskapitänsmütze.

But let’s come back to English and talk a little bit more about those r’s. Not all r’s are silent in British English. You keep them at the beginning of words, right, but you also keep them between two vowels. For example, “wherever” in American English has two r’s; in British English you drop the one at the end but keep the one in the middle. Wherever.

The “r” dropping is actually easy to learn once you’ve wrapped your head around it, because there are general rules for it. Personally I find the British pronunciation easier because the English “r”s don’t exist in German, so if you can get rid of them, great. Ok, but just dropping the r’s won’t make you sound like the Queen; there are a few more things. Let’s hear another sample. This time, pay attention to the vowels.

“I thought I caught a cold after I fell into the water.”

So you hear the two different t’s in “water” again, but more prominently you hear the British use “ooo” where the Americans use a sound that’s closer to an “aaa”. W/ooo/ter. W/aaa/ter. This one is also quite easy to learn because it’s a general thing. Somewhat more difficult is the next one. Again, pay attention to the vowels.

“You got this wrong. It’s probably not possible.”

What you hear there is that some of the “a”s in American English are “o”s in British English. Those two sounds have different phonetic spellings: the a is spelled like this, and the o is the upside down of this. The difference is most obvious in the word “not”, which in American English sounds more like “not”. Same thing with possible, which becomes possible. And I did expect Brian to say pr/a/bably, but he pronounced it probably, which is interesting. As I said, there’s a lot of regional variation in the pronunciation.

I have found this one rather difficult to learn because there’s another “a” sound for which this shift does not happen, and that has a phonetic spelling which looks somewhat like a big Lambda. Conversation turns into conversation, consciously into consciously, but company stays company. God turns to god but nothing stays nothing. Yes, that is really confusing. And in that word “confusing” the pronunciation doesn’t change because the first syllable is unstressed and the o becomes a schwa. Which is also confusing.

One final difference. This took me a long time to notice, but once you know it it’s really obvious. English famously has a lot of diphthongs; those are combinations of two vowels. For example “I”, that’s actually two vowels, “a” and “i”: aaii. Or “ey” as in take or baby. That’s an e and an i. eeeiiii. Take.

Most of those diphthongs are pronounced pretty much the same in British and American English, except for the əʊ as in “no”. In American English that’s more like an oʊ, whereas in British English you start close to an “e”, əʊ. No. No. Only. Only. And so on.

Let’s hear how Brian and Tim pronounce this.

“I don’t know how slowly it will go.”

As I said, once you know it, it’s entirely obvious, isn’t it. And if you want to sound Canadian you keep the American ou, but use the British eu for the aʊ. So out becomes out. About becomes about. Without, without. But be careful, because once you’ve got that eu in your about it’s really hard to get rid of.

The shift from the American ou to the British eu is also confusing because many dictionaries use the same phonetic spelling for both sounds. And also, you don’t do it if there’s an “l” coming after the əʊ. So for example “pole” doesn’t become pole. No. So don’t do it if an l is coming up, except there’s an exception to the exception, which is “folks”.

And that’s it folks, next week we’re back to talking about science.

Did you know, by the way, that German almost became the official language of the United States? You did? Well, that’s an urban legend; it never happened. The USA doesn’t even have an official language; it’s just that English is the most widely spoken one.

Many thanks to Tim and Brian for their recordings, and many thanks also to Kate Middleton from SpeakMyLanguage for helping with this video. I learned most of what I know about British English from Kate. If you need help with your English, go check out her website.