Wednesday, April 21, 2021

All you need to know about Elon Musk’s Carbon Capture Prize

[This is a transcript of the video embedded below.]


Elon Musk has announced he is sponsoring a competition for the best carbon removal ideas with a fifty million dollar prize for the winner. The competition will open on April twenty-second, twenty-twenty-one. In this video, I will tell you all you need to know about carbon capture to get your brain going, and put you on your way to the fifty million dollar prize.

During the formation of our planet, large amounts of carbon dioxide were stored in the ground, and ended up in coal and oil. By burning these fossil fuels, we have released a lot of that old carbon dioxide really suddenly. It accumulates in the atmosphere and prevents our planet from giving off heat the way it used to. As a consequence, the climate changes, and it changes rapidly.

The best course of action would have been to not pump that much carbon dioxide into the atmosphere to begin with, but at this point reducing future emissions alone might no longer be the best way to proceed. We might have to find ways to actually get carbon dioxide back out of the air. Getting this done is what Elon Musk’s competition is all about.

The problem is, once carbon dioxide is in the atmosphere it stays there for a long time. By natural processes alone, it would take several thousand years for atmospheric carbon dioxide levels to return to pre-industrial values. And the climate reacts slowly to the sudden increase in carbon dioxide, so we haven’t yet seen the full impact of what we have done already. For example, there’s a lot of water on our planet, and warming up this water takes time.

So, even if we were to entirely stop carbon dioxide emissions today, the climate would continue to change for at least several more decades, if not centuries. It’s like you voted someone out of office, and now they’re really pissed off, but they’ve got six weeks left on the job and there’s nothing you can do about that.

Globally, we are presently emitting about forty billion tons of carbon dioxide per year. According to the Intergovernmental Panel on Climate Change, we’d have to get down to twenty billion tons per year to limit warming to one point five degrees Celsius compared to preindustrial levels. These one point five degrees are what’s called the “Paris target.” This means, if we continue emitting at the same level as today, we’ll have to remove twenty billion tons of carbon dioxide per year.

But to score in Musk’s competition, you don’t need a plan to remove the full twenty billion tons per year. You merely need “A working carbon removal prototype that can be rigorously validated” that is “capable of removing at least 1 ton per day” and the carbon “should stay locked up for at least one hundred years.” But other than that, pretty much everything goes. According to the website, the “main metric for the competition is cost per ton”.

So, which options do we have to remove carbon dioxide and how much do they cost?

The obvious thing to try is enhancing natural processes which remove carbon dioxide from the atmosphere. You can do that for example by planting trees because trees take up carbon dioxide as they grow. They are what’s called a natural “carbon sink”. This carbon is released again if the trees die and rot, or are burned, so planting trees alone isn’t enough, we’d have to permanently increase their numbers.

By how much? Depends somewhat on the type of forest, but to get rid of the twenty billion tons per year, we’d have to plant about ten million square kilometers of new forests. That’s about the area of the United States and more than the entire remaining Amazon rainforest.

Planting so many trees seems a bit impractical. And it isn’t cheap either. The cost is about 100 US dollars per ton of carbon dioxide. So, to get rid of the 20 billion tons of excess carbon dioxide, that would be a few trillion dollars per year. Trees are clearly part of the solution, but we need to do more than that. And to stop burning the rainforest wouldn’t hurt either.
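If you want to check the tree arithmetic yourself, here is a quick back-of-envelope sketch. The cost and total-removal figures are the estimates quoted above; the uptake per square kilometer is a round number I am assuming purely for illustration.

```python
# Back-of-envelope check of the tree-planting numbers quoted in the text.
# The uptake per square kilometer is an assumed round figure, not data.

excess_co2_tons = 20e9    # tons of CO2 to remove per year (from the text)
cost_per_ton = 100        # US dollars per ton for trees (from the text)
uptake_per_km2 = 2000     # tons CO2 per km^2 per year (assumed for illustration)

annual_cost = excess_co2_tons * cost_per_ton
forest_area_km2 = excess_co2_tons / uptake_per_km2

print(f"Cost: about {annual_cost / 1e12:.0f} trillion US dollars per year")
print(f"Area: about {forest_area_km2 / 1e6:.0f} million square kilometers")
```

With these inputs you get a couple of trillion dollars and about ten million square kilometers per year, consistent with the rough figures above.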

Humans by the way are also a natural carbon sink because we’re eighteen percent carbon. Unfortunately, burying or burning dead people returns that carbon into the environment. Indeed, a single cremation releases about two-hundred-fifty kilograms of carbon dioxide, which could be avoided, for example, by dumping dead people in the deep sea where they won’t rot. So, if we were to do sea burials instead of cremations, that would save up to a million tons of carbon dioxide per year. Not a terrible lot. And probably quite expensive. Yeah, I’m not the person to win that prize.

But there’s a more efficient way that oceans could help remove carbon. If one stimulates the growth of algae, these will take up carbon. When the algae die, they sink to the bottom of the ocean, where the carbon could remain, in principle, for millions of years. This is called “ocean fertilization”.

It’s a good idea in theory, but in practice it’s presently unclear how efficient it is. There’s no good data for how many of the algae sink and how many of them get eaten, in which case the carbon might be released, and no one knows what else such fertilization might do to the oceans. So, a lot of research remains to be done here. It’s also unclear how much it would cost. Estimates range from two US dollars to four hundred fifty US dollars per ton of carbon dioxide.

Besides enhancing natural carbon sinks, there are a variety of technologies for removing carbon permanently.

For example, if one burns agricultural waste or wood in the absence of oxygen, this will not release all the carbon dioxide but produce a substance called biochar. The biochar keeps about half of the carbon, and not only is it stable for thousands of years, it can also improve the quality of soil.

The major problem with this idea is that there’s only so much agricultural waste to burn. Still, by some optimistic estimates one could remove up to one point eight billion tons of carbon dioxide per year this way. Cost estimates are between thirty and one hundred twenty US dollars per ton of carbon dioxide.

By the way, plastic is about eighty percent carbon. That’s because it’s mostly made of oil and natural gas. And since it isn’t biodegradable, it’ll safely store the carbon – as long as you don’t burn it. So, the Great Pacific garbage patch? That’s carbon storage. Not a particularly popular one though.

A more popular idea is enhanced weathering. For this, one artificially creates certain minerals that, when they come in contact with water, can bind carbon dioxide to them, thereby removing it from the air. The idea is to produce large amounts of these minerals, crush them, and distribute them over large areas of land.

The challenges for this method are: how do you produce large amounts of these minerals, and where do you find enough land to put them on? The supporters of the American weathering project Vesta claim that the cost would be about ten US dollars per ton of carbon dioxide. So that’s a factor of ten less than planting trees.

Then there is direct air capture. The most common method for this is pushing air through filters which absorb carbon dioxide. Several petrol companies, like Chevron, BHP, and Occidental, are currently exploring this technology. The company Carbon Engineering, which is backed by Bill Gates, has a pilot plant in British Columbia that they want to scale up to commercial plants. They claim every such plant will be equivalent in carbon removal to 40 million trees, removing 1 million tons of carbon dioxide per year.

They estimate the cost at between ninety-four and two hundred thirty-two US dollars per ton. That would mean between two and four trillion US dollars per year to eliminate the entire twenty billion tons of carbon dioxide which we need to get rid of. That’s between two point five and five percent of the world’s GDP.
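The same kind of back-of-envelope check works here. The cost range and the twenty billion tons are the figures quoted above; the world GDP of roughly eighty trillion US dollars is my own round assumption for the percentage.

```python
# Checking the direct-air-capture cost range quoted in the text.
# The world GDP figure is an assumed round number, used only to get
# a rough percentage.

excess_co2_tons = 20e9            # tons of CO2 per year (from the text)
cost_low, cost_high = 94, 232     # US dollars per ton (quoted estimate)
world_gdp = 80e12                 # rough world GDP in US dollars (assumed)

low = excess_co2_tons * cost_low      # roughly 1.9 trillion per year
high = excess_co2_tons * cost_high    # roughly 4.6 trillion per year

print(f"{low / 1e12:.1f} to {high / 1e12:.1f} trillion US dollars per year")
print(f"{100 * low / world_gdp:.1f} to {100 * high / world_gdp:.1f} percent of world GDP")
```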

But, since carbon dioxide is taken up by the oceans, one can also try to get rid of it by extracting it from seawater. Indeed, the density of carbon dioxide in seawater is about one hundred twenty five times higher than it is in air. And once you’ve removed it, the water will take up new carbon dioxide from the air, so you can basically use the oceans to suck the carbon dioxide out of the atmosphere. That sounds really neat.

The current cost estimate for carbon extraction from seawater is about 50 dollars per ton, so that’s about half as much as carbon extraction from air. The major challenge for this idea is that the currently known methods for extracting carbon dioxide from water require heating the water to about seventy degrees Celsius which takes up a lot of energy. But maybe there are other, more energy efficient ways, to get carbon dioxide out of water? You might be the person to solve this problem.

Finally, there is carbon capture and storage, which means capturing carbon dioxide right where it’s produced and storing it away before it’s released into the atmosphere.

About twenty-six commercial facilities already use this method, and a few dozen more are planned. In twenty-twenty, about forty million tons of carbon dioxide were captured by this method. The typical cost is between fifty and one hundred US dollars per ton of carbon dioxide, though in particularly lucky cases the cost may go down to about fifteen dollars per ton. The major challenge here is that present technologies for carbon capture and storage require huge amounts of water.

As you can see, an overall problem for these ideas is that they’re expensive. You can therefore score in Musk’s competition by making one of the existing technologies cheaper, or more efficient, or both, or maybe you have an entirely new idea to put forward. I wish you good luck!

Saturday, April 17, 2021

Does the Universe have higher dimensions? Part 2

[This is a transcript of the video embedded below.]


In science fiction, hyper drives allow spaceships to travel faster than light by going through higher dimensions. And physicists have studied the question whether such extra dimensions exist for real in quite some detail. So, what have they found? Are extra dimensions possible? What do they have to do with string theory and black holes at the Large Hadron collider? And if extra dimensions are possible, can we use them for space travel? That’s what we will talk about today.

This video continues the one from last week, in which I talked about the history of extra dimensions. As I explained in the previous video, if one adds 7 dimensions of space to our normal three dimensions, then one can describe all of the fundamental forces of nature geometrically. And that sounds like a really promising idea for a unified theory of physics. Indeed, in the early 1980s, the string theorist Edward Witten thought it was intriguing that seven additional dimensions of space is also the maximum for supergravity.

However, that numerical coincidence turned out to not lead anywhere. This geometric construction of the fundamental forces, which is called Kaluza-Klein theory, suffers from several problems that no one has managed to solve.

One problem is that the radii of these extra dimensions are unstable. So they could grow or shrink away, and that’s not compatible with observation. Another problem is that some of the particles we know come in two different versions, a left-handed and a right-handed one. And these two versions do not behave the same way. This is called chirality. That particles behave this way is an observational fact, but it does not fit with the Kaluza-Klein idea. Witten actually worried about this in his 1981 paper.

Enter string theory. In string theory, the fundamental entities are strings. That the strings are fundamental means they are not made of anything else. They just are. And everything else is made from these strings. Now you can ask how many dimensions does a string need to wiggle in to correctly describe the physics we observe?

The first answer that string theorists got was twenty six. That’s twenty five dimensions of space and one dimension of time. That’s a lot. Turns out though, if you add supersymmetry the number goes down to ten, so, nine dimensions of space and one dimension of time. String theory just does not work properly in fewer dimensions of space.

This creates the same problem that people had with Kaluza-Klein theory a century ago: If these dimensions exist, where are they? And string theorists answered the question the same way: We can’t see them, because they are curled up to small radii.

In string theory, one curls up those extra dimensions to complicated geometrical shapes called “Calabi-Yau manifolds”, but the details aren’t all that important. The important thing is that because of this curling up, the strings have higher harmonics. This is the same thing which happens in Kaluza-Klein theory. And it means, if a string gets enough energy, it can oscillate with certain frequencies that have to match the radius of these extra dimensions.

Therefore, it’s not true that string theory does not make predictions, though I frequently hear people claim that. String theory makes the prediction that these higher harmonics should exist. The problem is that you need really high energies to create them. That’s because we already know that these curled up dimensions have to be small. And small radii means high frequencies, and therefore high energies.

How high does the energy have to be to see these higher harmonics? Ah, here’s the thing. String theory does not tell you. We only know that these extra dimensions have to be so small we haven’t yet seen them. So, in principle, they could be just out of reach, and the next bigger particle collider could create these higher harmonics.

And this… is where the idea comes from that the Large Hadron Collider might create tiny black holes.

To understand how extra dimensions help with creating black holes, you first have to know that Newton’s one over R squared law is geometrical. The gravitational force of a point mass falls with one over R squared because the surface of the sphere grows with R squared, where R is the radius of the sphere. So, if you increase the distance to the mass, the force lines thin out as the surface of the sphere grows. But… here is the important point. Suppose you have additional dimensions of space. Say you don’t have three, but 3+n, where n is a positive integer. Then, the surface of the sphere increases with R to the (2+n).

Consequently, the gravitational force drops with one over R to the (2+n) as you move away from the mass. This means, if space has more than three dimensions, the force drops much faster with distance to the source than normally.
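The geometric argument above can be put into a few lines of code. This is just the distance dependence; masses and Newton’s constant are dropped since only the scaling matters here.

```python
# The force of a point mass falls with 1/R^(2+n) when there are
# n extra (large) dimensions of space. Prefactors are dropped;
# only the distance dependence is illustrated.

def gravitational_falloff(r, n=0):
    """Relative force at distance r, with n extra spatial dimensions."""
    return 1.0 / r ** (2 + n)

# Halving the distance quadruples the force in ordinary 3D space...
ratio_3d = gravitational_falloff(0.5) / gravitational_falloff(1.0)
# ...but with two extra dimensions it grows sixteen-fold:
ratio_5d = gravitational_falloff(0.5, n=2) / gravitational_falloff(1.0, n=2)
print(ratio_3d, ratio_5d)   # → 4.0 16.0
```

So the closer you get to the mass, the more dramatic the difference between three dimensions and 3+n dimensions becomes.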

Of course Newtonian gravity was superseded by Einstein’s theory of General Relativity, but this general geometric consideration about how gravity weakens with distance to the source remains valid. So, in higher dimensions the gravitational force drops faster with distance to the source.

Keep in mind though that the extra dimensions we are concerned with are curled up, because otherwise we’d already have noticed them. This means, into the direction of these extra dimensions, the force lines can only spread out up to a distance that is comparable to the radius of the dimensions. After this, the only directions the force lines can continue to spread out into are the three large directions. This means that on distances much larger than the radius of the extra dimensions, this gives back the usual 1/R^2 law, which we observe.

Now about those black holes. If gravity works as usual in three dimensions of space, we cannot create black holes. That’s because gravity is just too weak. But suppose you have these extra dimensions. Since the gravitational force falls much faster as you go away from the mass, it means that if you get closer to a mass, the force gets much stronger than it would in only 3 dimensions. That makes it much easier to create black holes. Indeed, if the extra dimensions are large enough, you could create black holes at the Large Hadron Collider.

At least in theory. In practice, the Large Hadron Collider did not produce black holes, which means that if the extra dimensions exist, they’re really small. How “small”? Depends on the number of extra dimensions, but roughly speaking below a micrometer.

If they existed, could we travel through them? The brief answer is no, and even if we could it would be pointless. The reason is that while the gravitational force can spread into all of the extra dimensions, matter, like the stuff we are made of, can’t go there. It is bound to a 3-dimensional slice, which string theorists call a “brane”, that’s b r a n e, not b r a i n, and it’s a generalization of membrane. So, basically, we’re stuck on this 3-dimensional brane, which is our universe. But even if that was not the case, what do you want in these extra dimensions anyway? There isn’t anything in there and you can’t travel any faster there than in our universe.

People often think that extra dimensions provide a type of shortcut, because of illustrations like this. The idea is that our universe is kind of like this sheet which is bent, and then you can go into a direction perpendicular to it, to arrive at a seemingly distant point faster. The thing is though, you don’t need extra dimensions for that. What we call the “dimension” in general relativity would be represented in this image by the dimension of the surface, which doesn’t change. Indeed, these things are called wormholes, and you can have them in ordinary general relativity with the ordinary three dimensions of space.

This embedding space here does not actually exist in general relativity. This is also why people get confused about the question what the universe expands into. It doesn’t expand into anything, it just expands. By the way, fun fact, if you want to embed a general 4 dimensional space-time into a higher dimensional flat space you need 10 dimensions, which happens to be the same number of dimensions you need for string theory to make sense. Yet another one of these meaningless numerical coincidences, but I digress.

What does this mean for space travel? Well, it means that traveling through higher dimensions by using hyper drives is scientifically extremely implausible. Therefore, my ultimate ranking for the scientific plausibility of science fiction travel is:

3rd place: Hyper drives because it’s a nice idea, it just makes no scientific sense.

2nd place: Wormholes, because at least they exist mathematically, though no one has any idea how to create them.

And the winner is... Warp drives! Because not only does the mathematics work out, it’s in principle possible to create them, at least as long as you stay below the speed of light limit. How to travel faster than light, I am afraid we still don’t know. But maybe you are the one to figure it out.

Saturday, April 10, 2021

Does the Universe have Higher Dimensions? Part 1

[This is a transcript of the video embedded below.]

Space, the way we experience it, has three dimensions. Left-right, forward-backward, and up-down. But why three? Why not 7? Or 26? The answer is: No one knows. But if no one knows why space has three dimensions, could it be that it actually has more? Just that we haven’t noticed for some reason? That’s what we will talk about today.


The idea that space has more than three dimensions may sound entirely nuts, but it’s a question that physicists have seriously studied for more than a century. And since there’s quite a bit to say about it, this video will have two parts. In this part we will talk about the origins of the idea of extra dimensions, Kaluza-Klein theory and all that. And in the next part, we will talk about more recent work on it, string theory and black holes at the Large Hadron Collider and so on.

Let us start with recalling how we describe space and objects in it. In two dimensions, we can put a grid on a plane, and then each point is a pair of numbers that says how far away from zero you have to go in the horizontal and vertical direction to reach that point. The arrow pointing to that point is called a “vector”.

This construction is not specific to two dimensions. You can add a third direction, and do exactly the same thing. And why stop there? You can no longer *draw a grid for four dimensions of space, but you can certainly write down the vectors. They’re just a row of four numbers. Indeed, you can construct vector spaces in any number of dimensions, even in infinitely many dimensions.

And once you have vectors in these higher dimensions, you can do geometry with them, like constructing higher dimensional planes, or cubes, and calculating volumes, or the shapes of curves, and so on. And while we cannot directly draw these higher dimensional objects, we can draw their projections into lower dimensions. This for example is the projection of a four-dimensional cube into two dimensions.

Now, it might seem entirely obvious today that you can do geometry in any number of dimensions, but it’s actually a fairly recent development. It wasn’t until eighteen forty-three that the British mathematician Arthur Cayley wrote about the “Analytical Geometry of (n) Dimensions”, where n could be any positive integer. Higher dimensional geometry sounds innocent, but it was a big step towards abstract mathematical thinking. It marked the beginning of what is now called “pure mathematics”, that is, mathematics pursued for its own sake, and not necessarily because it has an application.

However, abstract mathematical concepts often turn out to be useful for physics. And these higher dimensional geometries came in really handy for physicists because in physics, we usually do not only deal with things that sit in particular places, but with things that also move in particular directions. If you have a particle, for example, then to describe what it does you need both a position and a momentum, where the momentum tells you the direction into which the particle moves. So, actually each particle is described by a vector in a six dimensional space, with three entries for the position and three entries for the momentum. This six-dimensional space is called phase-space.
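As a minimal illustration of this bookkeeping (with made-up numbers), a single particle’s state really is just a six-component vector:

```python
# One particle's state as a point in six-dimensional phase space:
# three entries for position, three for momentum. Values are made up.

position = (1.0, -2.0, 0.5)    # x, y, z
momentum = (0.0, 3.0, -1.0)    # p_x, p_y, p_z

state = position + momentum    # a single 6-dimensional vector
print(state, len(state))       # → (1.0, -2.0, 0.5, 0.0, 3.0, -1.0) 6
```

And for N particles, the phase space has 6N dimensions, one six-component block per particle.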

By dealing with phase-spaces, physicists became quite used to dealing with higher dimensional geometries. And, naturally, they began to wonder whether the *actual space that we live in could have more dimensions. This idea was first pursued by the Finnish physicist Gunnar Nordström, who, in 1914, tried to use a 4th dimension of space to describe gravity. It didn’t work though. The person to figure out how gravity works was Albert Einstein.

Yes, that guy again. Einstein taught us that gravity does not need an additional dimension of space. Three dimensions of space will do, it’s just that you have to add one dimension of time, and allow all these dimensions to be curved.

But then, if you don’t need extra dimensions for gravity, maybe you can use them for something else.

Theodor Kaluza certainly thought so. In 1921, Kaluza wrote a paper in which he tried to use a fourth dimension of space to describe the electromagnetic force in a very similar way to how Einstein described gravity. But Kaluza used an infinitely large additional dimension and did not really explain why we don’t normally get lost in it.

This problem was solved a few years later by Oskar Klein, who assumed that the 4th dimension of space has to be rolled up to a small radius, so you can’t get lost in it. You just wouldn’t notice if you stepped into it, it’s too small. This idea that electromagnetism is caused by a curled-up 4th dimension of space is now called Kaluza-Klein theory.

I have always found it amazing that this works. You take an additional dimension of space, roll it up, and out comes gravity together with electromagnetism. You can explain both forces entirely geometrically. It is probably because of this that Einstein in his later years became convinced that geometry is the key to a unified theory for the foundations of physics. But at least so far, that idea has not worked out.

Does Kaluza-Klein theory make predictions? Yes, it does. All the electromagnetic fields which go into this 4th dimension have to be periodic so they fit onto the curled-up dimension. In the simplest case, the fields just don’t change when you go into the extra dimension. And that reproduces the normal electromagnetism. But you can also have fields which oscillate once as you go around, then twice, and so on. These are called higher harmonics, like you have in music. So, Kaluza Klein theory makes a prediction which is that all these higher harmonics should also exist.

Why haven’t we seen them? Because you need energy to make this extra dimension wiggle. And the more it wiggles, that is, the higher the harmonics, the more energy you need. Just how much energy? Well, that depends on the radius of the extra dimension. The smaller the radius, the smaller the wavelength, and the higher the frequency. So a smaller radius means you need higher energy to find out if the extra dimension is there. Just how small the radius is, the theory does not tell you, so we don’t know what energy is necessary to probe it. But the short summary is that we have never seen one of these higher harmonics, so the radius must be very small.
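To put rough numbers on this relation between radius and energy: the energy of the first higher harmonic is of order hbar times the speed of light divided by the radius. This is a standard order-of-magnitude relation, with factors of two pi dropped; the radii below are example values, not predictions.

```python
# Order-of-magnitude energy needed to excite the first higher harmonic
# of a curled-up dimension of radius R: E ~ hbar * c / R.
# Factors of 2*pi are dropped in this rough estimate.

HBAR_C_EV_M = 1.97e-7   # hbar * c ≈ 197 eV·nm, expressed in eV·m

def first_harmonic_energy_eV(radius_m):
    return HBAR_C_EV_M / radius_m

# A micrometer-sized extra dimension would take only a fraction of an eV...
e_micro = first_harmonic_energy_eV(1e-6)       # ≈ 0.2 eV
# ...but one near the Planck length needs absurdly high energies:
e_planck = first_harmonic_energy_eV(1.6e-35)   # ≈ 1e28 eV
print(e_micro, e_planck)
```

For comparison, the Large Hadron Collider reaches energies of order ten to the thirteen electron volts, so a Planck-sized dimension is hopelessly out of reach, while a large one would have shown up long ago.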

Oskar Klein himself, by the way, was really modest about his theory. He wrote in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."

("Whether these indications of possibilities are built on reality has of course to be decided by the future.")

But we don’t actually use Kaluza-Klein theory instead of electromagnetism. Why is that? It’s because Kaluza-Klein theory has some serious problems.

The first problem is that while the geometry of the additional dimension correctly gives you electric and magnetic fields, it does not give you charged particles, like electrons. You still have to put those in. The second problem is that the radius of the extra dimension is not stable. If you perturb it, it can begin to increase, and that can have observable consequences which we have not seen. The third problem is that the theory is not quantized, and no one has figured out how to quantize geometry without running into problems. You can however quantize plain old electromagnetism without problems.

We also know today of course that the electromagnetic force actually combines with the weak nuclear force to what is called the electroweak force. That, interestingly enough, turns out to not be a problem for Kaluza-Klein theory. Indeed, it was shown in the 1960s by Ryszard Kerner, that one can do Kaluza-Klein theory not only for electromagnetism, but for any similar force, including the strong and weak nuclear force. You just need to add a few more dimensions.

How many? For the weak nuclear force, you need two more, and for the strong nuclear force another four. So in total, we now have one dimension of time, 3 for gravity, one for electromagnetism, 2 for the weak nuclear force and 4 for the strong nuclear force, which adds up to a total of 11.

In 1981, Edward Witten noticed that 11 happened to be the same number of dimensions which is the maximum for supergravity. What happened after this is what we’ll talk about next week.

Saturday, April 03, 2021

Should Stephen Hawking have won the Nobel Prize?

[This is a transcript of the video embedded below.]


Stephen Hawking, who sadly passed away in 2018, repeatedly joked that he might get a Nobel Prize if the Large Hadron Collider produced tiny black holes. For example, here is a recording of a lecture he gave in 2016:
“Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize. So I might get a Nobel Prize after all.”
The British physicist and science writer Philip Ball, who attended this 2016 lecture, commented:
“I was struck by how unusual it was for a scientist to state publicly that their work warranted a Nobel… [It] gives a clue to the physicist’s elusive character: shamelessly self-promoting to the point of arrogance, and heedless of what others might think.”
I heard Hawking say pretty much exactly the same thing in a public lecture a year earlier in Stockholm. But I had an entirely different reaction. I didn’t think of his comment as arrogant. I thought he was explaining something which few people knew about. And I thought he was right in that, had the Large Hadron Collider seen these tiny black holes decay, he almost certainly would have gotten a Nobel Prize. But I also thought that this was not going to happen. He was much more likely to win a Nobel Prize for something else. And he almost did.

Just exactly what might Hawking have won the Nobel Prize for, and should he have won it? That’s what we will talk about today.

In nineteen-seventy-four, Stephen Hawking published a calculation that showed black holes are not perfectly black, but they emit thermal radiation. This radiation is now called “Hawking radiation”. Hawking’s calculation shows that the temperature of a black hole is inversely proportional to the mass of the black hole. This means, the larger the black hole, the smaller its temperature, and the harder it is to measure the radiation. For the astrophysical black holes that we know of, the temperature is way, way too small to be measurable. So, the chances of him ever winning a Nobel Prize for black hole evaporation seemed very small.
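The inverse relation between temperature and mass can be made concrete with the standard formula T = hbar c³ / (8 π G M kB). The sketch below uses textbook values for the constants; it shows why astrophysical black holes are hopeless candidates for a measurement.

```python
# Hawking temperature, T = hbar * c^3 / (8 * pi * G * M * kB):
# inversely proportional to the black hole's mass.
import math

HBAR = 1.055e-34    # J*s
C = 2.998e8         # m/s
G = 6.674e-11       # m^3 / (kg * s^2)
KB = 1.381e-23      # J/K
M_SUN = 1.989e30    # kg

def hawking_temperature_K(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * KB)

# A solar-mass black hole comes out around sixty nanokelvin, far below
# the 2.7 Kelvin of the cosmic microwave background, hence unmeasurable.
print(hawking_temperature_K(M_SUN))
```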

But, in the late nineteen-nineties, the idea came up that tiny black holes might be produced in particle collisions at the Large Hadron Collider. This is only possible if the universe has additional dimensions of space, so not just the three that we know of, but at least five. These additional dimensions of space would have to be curled up to small radii, because otherwise we would already have seen them.

Curled up extra dimensions. Haven’t we heard that before? Yes, because string theorists talk about curled up dimensions all the time. And indeed, string theory was the major motivation to consider this hypothesis of extra dimensions of space. However, I have to warn you that string theory does NOT tell you these extra dimensions should have a size that the Large Hadron Collider could probe. Even if they exist, they might be much too small for that.

Nevertheless, if you just assume that the extra dimensions have the right size, then the Large Hadron Collider could have produced tiny black holes. And since they would have been so small, they would have been really, really hot. So hot, indeed, they’d decay pretty much immediately. To be precise, they’d decay in a time of about ten to the minus twenty-three seconds, long before they’d reach a detector.

But according to Hawking’s calculation, the decay of these tiny black holes should proceed by a very specific pattern. Most importantly, according to Hawking, black holes can decay into pretty much any other particle. And there is no other particle decay which looks like this. So, it would have been easy to see black hole decays in the data. If they had happened. They did not. But if they had, it would almost certainly have gotten Hawking a Nobel Prize.

However, the idea that the Large Hadron Collider would produce tiny black holes was never very plausible. That’s because there was no reason the extra dimensions, in case they exist to begin with, should have just the right size for this production to be possible. The only reason physicists thought this would be the case was an argument from mathematical beauty called “naturalness”. I have explained the problems with this argument in an earlier video, so check this out for more.

So, yeah, I don’t think tiny black holes at the Large Hadron Collider was Hawking’s best shot at a Nobel Prize.

Are there other ways you could see black holes evaporate? Not really. Without these curled up extra dimensions, which do not seem to exist, we can’t make black holes ourselves. Without extra dimensions, the energy density that we’d have to reach to make black holes is way beyond our technological limitations. And the black holes that are produced in natural processes are too large, and therefore too cold, to observe Hawking radiation.

One thing you *can do, though, is simulate black holes with superfluids. This has been done by the group of Jeff Steinhauer in Israel. The idea is that you can use a superfluid to mimic the horizon of a black hole. If you remember, the horizon of a black hole is a boundary in space, from inside of which light cannot escape. In a superfluid, one does not trap light, but one traps sound waves instead. One can do this because the speed of sound in the superfluid depends on the density of the fluid. And since one can experimentally control this density, one can control the speed of sound.

If one then makes the fluid flow, there’ll be regions from within which the sound waves cannot escape because they’re just too slow. It’s like you’re trying to swim away from a waterfall. There’s a boundary beyond which you just can’t swim fast enough to get away. That boundary is much like a black hole horizon. And the superfluid has such a boundary, not for swimmers, but for sound waves.
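If you want to play with this, here's a toy sketch of such a sonic horizon. The flow profile and all its numbers are made up purely for illustration; the point is just that the horizon sits where the inflow speed reaches the speed of sound.

```python
# Toy one-dimensional model of a sonic horizon: fluid flows toward a drain
# at x = 0 with speed v(x) = A / x, while the speed of sound c_s is constant.
# Sound emitted inside the horizon (where v > c_s) cannot escape outward.
A   = 2.0    # hypothetical flow constant, m^2/s
c_s = 0.5    # hypothetical speed of sound in the fluid, m/s

def flow_speed(x):
    return A / x

# The horizon sits where the inflow speed equals the speed of sound:
x_horizon = A / c_s   # v(x_horizon) = c_s  ->  x = A / c_s = 4.0 m

def sound_escapes(x):
    """Can an outward-moving sound wave at position x make it out?"""
    return flow_speed(x) < c_s

print(x_horizon)          # 4.0
print(sound_escapes(5.0)) # True: outside the horizon, the swimmer gets away
print(sound_escapes(2.0)) # False: trapped, like light behind a horizon
```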

You can also do this with a normal fluid, but you need the superfluid so that the sound has the right quantum properties, as it does in Hawking’s calculation. And in a series of really neat experiments, Steinhauer’s group has shown that these sound waves in the superfluid indeed have the properties that Hawking predicted. That’s because Hawking’s calculation applies to the superfluid in just exactly the same way it applies to real black holes.

Could Hawking have won a Nobel Prize for this? I don’t think so. That’s because mimicking a black hole with a superfluid is cool, but of course it’s not the real thing. These experiments are a type of quantum simulation, which means they demonstrate that Hawking’s calculation is correct. But the measurements on superfluids cannot demonstrate that Hawking’s prediction is correct for real black holes.

So, in all fairness, it never seemed likely Hawking would win a Nobel Prize for Hawking radiation. It’s just too hard to measure. But that wasn’t the only thing Hawking did in his career.

Before he worked on black hole evaporation, Hawking worked with Penrose on the singularity theorems. Penrose’s theorem showed that, in contrast to what most physicists believed at the time, black holes are a pretty much unavoidable consequence of stellar collapse. Before that, physicists thought black holes were mathematical curiosities that would not be produced in reality. It was only because of the singularity theorems that black holes began to be taken seriously. Eventually astronomers looked for them, and now we have solid experimental evidence that black holes exist. Hawking applied the same method to the early universe to show that the Big Bang singularity is likewise unavoidable, unless General Relativity somehow breaks down. And that is an absolutely amazing insight about the origin of our universe.

I made a video about the history of black holes two years ago in which I said that the singularity theorems are worth a Nobel Prize. And indeed, Penrose was one of the recipients of the 2020 Nobel Prize in physics. If Hawking had not died two years earlier, I believe he would have won the Nobel Prize together with Penrose. Or maybe the Nobel Prize committee just waited for him to die, so they wouldn’t have to think about just how to disentangle Hawking’s work from Penrose’s? We’ll never know.

Does it matter that Hawking did not win a Nobel Prize? Personally, I think of the Nobel Prize first and foremost as an opportunity to celebrate scientific discoveries. The people who we think might win this prize are highly deserving with or without an additional medal. And Hawking didn’t need a Nobel Prize, he’ll be remembered without it.

Saturday, March 27, 2021

Is the universe REALLY a hologram?

[This is a transcript of the video embedded below.]


Do we live in a hologram? String theorists think we do. But what does that mean? How do holograms work, and how are they related to string theory? That’s what we will talk about today.

In science fiction movies, holograms are 3-dimensional, moving images. But in reality, the technology for motion holograms hasn’t caught up with imagination. At least so far, holograms are still mostly stills.

The holograms you are most likely to have seen are not like those in the movies. They are not a projection of an object into thin air – however that’s supposed to work. Instead, you normally see a three-dimensional object above or behind a flat film. Small holograms are today frequently used as a security measure on credit cards, ID cards, or even banknotes, because they are easy to see, but difficult to copy.

If you hold such a hologram up to the light, you will see that it seems to have depth, even though it is printed on a flat surface. That’s because in photographs, we are limited to the one perspective from which the picture was taken, and that’s why they look flat. But you can tilt holograms and observe them from different angles, as if you were examining a three-dimensional object.

Now, these holograms on your credit cards, or the ones that you find on postcards or book covers, are not “real” holograms. They are actually composed of several 2-dimensional images and depending on the angle, a different image is reflected back at you, which creates the illusion of a 3-dimensional image.

In a real hologram the image is indeed 3-dimensional. But the market for real holograms is small, so they are hard to come by, even though the technology to produce them is straightforward. A real hologram looks like this.

Real holograms actually encode a three-dimensional object on a flat surface. How is this possible? The answer is interference.

Light is electromagnetic waves, so it has crests and troughs. And a key property of waves is that they can be overlaid and then amplify or wash out each other. If two waves are overlaid so that two crests meet at the same point, that will amplify the wave. This is called constructive interference. But if a crest meets a trough, the waves will cancel. This is called destructive interference.

Now, we don’t normally see light cancelling out other light. That’s because to see interference one needs very regular light, where the crests and troughs are neatly aligned. Sunlight or LED light doesn’t have that property. But laser light has it, and so laser light can be interfered.
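Here's a quick sketch of what interference does to two overlaid waves, just to make the crests-and-troughs argument concrete:

```python
import math

def superpose(phase_shift, n=1000):
    """Maximum amplitude of two overlaid unit waves with a relative phase."""
    return max(abs(math.sin(t) + math.sin(t + phase_shift))
               for t in (2 * math.pi * k / n for k in range(n)))

print(superpose(0.0))      # ~2.0: crests meet crests, constructive interference
print(superpose(math.pi))  # ~0.0: crests meet troughs, the waves cancel
```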

And this interference can be used to create holograms. For this, one first splits a laser beam in two with a semi-transparent glass or crystal, called a beam-splitter, and makes each beam broader with a diverging lens. Then, one aims one half of the beam at the object that one wants to take an image of. The light will not just bounce off the object in one single direction, but it will scatter in many different directions. And the scattered light contains information about the surface of the object. Then, one recombines the two beams and captures the intensity of the light with a light-sensitive screen.

Now, remember that laser light can interfere. This means the intensity on the screen depends on whether the interference was destructive or constructive, which again depends on just where the object was located and how it was shaped. So, the screen has captured the full three-dimensional information. To view the hologram, one develops the film and shines light onto it at the same wavelength as the image was taken, which reproduces the 3-dimensional image.

To understand this in a little more detail, let us look at the image on the screen if one uses a very small point-like object. It looks like this. It’s called a zone plate. The intensity and width of the rings depends on the distance between the point-like object and the screen, and the wavelength of the light. But any object is basically a large number of point-like objects, so the interference image on the screen is generally an overlap of many different zone plates with these concentric rings.
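If you want to compute where those rings sit, the standard small-angle formula puts the n-th ring at a radius of roughly the square root of n times the wavelength times the distance. A little sketch, with made-up but plausible numbers for the laser and the screen distance:

```python
import math

wavelength = 633e-9   # hypothetical: a red helium-neon laser, in meters
distance   = 0.1      # hypothetical point-object-to-screen distance, meters

def zone_radius(n):
    """Radius of the n-th ring of a zone plate, r_n = sqrt(n * wavelength * d),
    valid in the small-angle (paraxial) limit."""
    return math.sqrt(n * wavelength * distance)

for n in range(1, 4):
    print(n, zone_radius(n))  # the rings crowd closer together as n grows
```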

Now here’s the amazing thing about holograms. Every part of the screen receives information from every part of the object. As a consequence, if you develop the image to get the hologram, you can take it apart into pieces, and each piece will still recreate the whole 3-dimensional object. To understand better how this works, look again at the zone plate, the one of a single point-like object. If you have only a small piece that contains part of the rings, you can infer the rest of the pattern, though it gets a little more difficult. If you have a plate that overlaps many zone plates, this is still possible. So, at least mathematically, you can reconstruct the entire object from any part of the holographic plate. In reality, the quality of the image will go down.

So, now that you know how real holograms work, let us talk about the idea that the universe is a hologram.

When string theorists claim that our universe is a hologram, they mean the following. Our universe has a positive cosmological constant. But mathematically, universes with a negative cosmological constant are much easier to work with. So, this is what string theorists usually look at. These universes with a negative cosmological constant are called Anti-de Sitter spaces and into these Anti-de Sitter things they put supersymmetric matter. To best current knowledge, our universe is not Anti-de Sitter and matter is not supersymmetric, but mathematically, you can certainly do that.

For some specific examples, it has then been shown that the gravitational theory in such an Anti de Sitter universe is mathematically equivalent to a different theory on the conformal boundary of that universe. What the heck is the conformal boundary of the universe? Well, our actual universe doesn’t have one. But these Anti-De Sitter spaces do. Just exactly how they are defined isn’t all that important. You only need to know that this conformal boundary has one dimension of space less than the space it is a boundary of.

So, you have an equivalence between two theories in a different number of dimensions of space. A gravitational theory in this anti-De Sitter space with the weird matter. And a different theory on the boundary of that space, which also has weird matter. And just so you have heard the name: The theory on the boundary is what’s called a conformal field theory, and the whole thing is known as the Anti-de Sitter – Conformal Field Theory duality, or AdS/CFT for short.

This duality has been mathematically confirmed for some specific cases, but pretty much all string theorists seem to believe it is much more generally valid. In fact, a lot of them seem to believe it is valid even in our universe, even though there is no evidence for that, neither observational nor mathematical. In this most general form, the duality is simply called the “holographic principle”.

If the holographic principle were correct, it would mean that the information about any volume in our universe is encoded on the boundary of that volume. That’s remarkable because naively, you’d think the amount of information you can store in a volume of space grows much faster than the information you can store on the surface. But according to the holographic principle, the information you can put into the volume somehow isn’t what we think it is. It must have more correlations than we realize. So if the holographic principle were true, that would be very interesting. I talked about this in more detail in an earlier video.
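To get a feel for just how different these two ways of counting are, here's a rough back-of-the-envelope sketch. Counting one bit per Planck volume versus the Bekenstein-Hawking bound of one bit per four Planck areas is of course a crude caricature, but it shows the size of the mismatch:

```python
import math

l_p = 1.616e-35          # Planck length, meters
R   = 1.0                # a sphere of radius one meter

volume = 4 / 3 * math.pi * R**3
area   = 4 * math.pi * R**2

naive_bits  = volume / l_p**3       # one bit per Planck volume, naively
holographic = area / (4 * l_p**2)   # Bekenstein-Hawking bound, A / (4 l_p^2)

print(naive_bits)                # ~1e105
print(holographic)               # ~1e70
print(naive_bits / holographic)  # the naive count overshoots by ~35 orders of magnitude
```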

The holographic principle indeed sounds a little like optical holography. In both cases one encodes information about a volume on a surface with one dimension less. But if you look a little more closely, there are two important differences between the holographic principle and real holography:

First, an optical hologram is not actually captured in two dimensions; the holographic film has a thickness, and you need that thickness to store the information. The holographic principle, on the other hand, is a mathematical abstraction, and the encoding really occurs in one dimension less.

Second, as we saw earlier, in a real hologram, each part contains information about the whole object. But in the mathematics of the holographic universe, this is not the case. If you take only a piece of the boundary, that will not allow you to reproduce what goes on in the entire universe.

This is why I don’t think referring to this idea from string theory as holography is a good analogy. But now you know just exactly what the two types of holography do, and do not have in common.

Saturday, March 20, 2021

Whatever happened to Life on Venus?

[This is a transcript of the video embedded below.]


A few months ago, the headlines screamed that scientists had found signs of life on Venus. But it didn’t take long for other scientists to raise objections. So, just exactly what did they find on Venus? Did they actually find it? And what does it all mean? That’s what we will talk about today.

The discovery that made headlines a few months ago was that an international group of researchers said they’d found traces of a molecule called phosphine in the atmosphere of Venus.

Phosphine is a molecule made of one phosphorus and three hydrogen atoms. On planets like Jupiter and Saturn, pressure and temperature are so high that phosphine can form by coincidental chemical reactions, and indeed phosphine has been observed in the atmosphere of these two planets. On planets like Venus, however, the pressure isn’t remotely large enough to produce phosphine this way.

And the only other known processes to create phosphine are biological. On Earth, for example, which in size and distance to the Sun isn’t all that different to Venus, the only natural production processes for phosphine are certain types of microbes. Lest you think this means that phosphine is somehow “good for life”, I should add that the microbes in question live without oxygen. Indeed, phosphine is toxic for forms of life that use oxygen, which is most of life on earth. In fact, phosphine is used in the agricultural industry to kill rodents and insects.

So, the production of phosphine on Venus at fairly low atmospheric pressure seems to require life in some sense, which is why the claim that there’s phosphine on Venus is BIG. It could mean there’s microbial life on Venus. And just in case microbial life doesn’t excite you all that much, this would be super-interesting because it would give us a clue to what the chances are that life evolves on other planets in general.

So, just exactly what did they find?

The suspicion that phosphine might be present on Venus isn’t entirely new. The researchers first saw something that could be phosphine in two-thousand and seventeen in data from the James Clerk Maxwell Telescope, which is a radio telescope in Hawaii. However, this signal was not particularly good, so they didn’t publish it. Instead they waited for more data from the ALMA telescope in Chile. Then they published a combined analysis of the data from both telescopes in Nature Astronomy.

Here’s what they did. One can look for evidence of molecules by exploiting that each molecule reacts to light at different wave-lengths. To some wave-lengths, a molecule may not react at all, but others it may absorb because they cause the molecule to vibrate or rotate around itself. It’s like each molecule has very specific resonance frequencies, like if you’re in an airplane and the engine’s being turned up and then, at a certain pitch the whole plane shakes? That’s a resonance. For the plane it happens at certain wavelengths of sound. For molecules it happens at certain wave-lengths of light.

So, if light passes through a gas, like the atmosphere of Venus, then just how much light at each wave-length passes through depends on what molecules are in the gas. Each molecule has a very specific signature, and that makes the identification possible.

At least in principle. In practice… it’s difficult. That’s because different molecules can have very similar absorption lines.

For example, the phosphine absorption line which all the debate is about has a frequency of two-hundred sixty-six point nine four four Gigahertz. But sulfur dioxide has an absorption line at two-hundred sixty-six point nine four three GigaHertz, and sulfur dioxide is really common in the atmosphere of Venus. That makes it quite a challenge to find traces of phosphine.
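To see why this one-megahertz separation is such a problem, picture the two absorption lines broadened to widths much larger than their separation. The line width below is a made-up number purely for illustration; the actual widths depend on pressure and altitude in the Venusian atmosphere:

```python
def lorentzian(f, f0, width):
    """A broadened absorption line, modeled as a Lorentzian profile (peak = 1)."""
    return (width / 2)**2 / ((f - f0)**2 + (width / 2)**2)

f_phosphine = 266.944  # GHz, line frequency from the text
f_so2       = 266.943  # GHz, line frequency from the text
width       = 0.1      # GHz -- hypothetical broadening, for illustration only

# With line widths a hundred times larger than the 0.001 GHz separation,
# the two profiles overlap almost completely:
f = 266.9435  # midpoint between the two lines
print(lorentzian(f, f_phosphine, width))  # ~0.9999
print(lorentzian(f, f_so2, width))        # ~0.9999 -- nearly indistinguishable
```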

But challenges are there to be met. The astrophysicists estimated the contribution from sulfur dioxide from other lines which this molecule should also produce.

They found that these other lines were almost invisible. So they concluded that the absorption in the frequency range of interest had to be mostly due to phosphine and they estimated the amount with about seven to twenty parts per billion, so that’s seven to twenty molecules of phosphine per billion molecules of anything.

It’s this discovery which made the big headlines. The results they got for the phosphine amount from the two different telescopes are a little different, and such an inconsistency is somewhat of a red flag. But then, these measurements were made some years apart and the atmosphere of Venus could have undergone changes in that period, so it’s not necessarily a problem.

Unfortunately, after publishing their analysis, the team learned that the data from ALMA had not been processed correctly. It was not their fault, but it meant they had to redo their analysis. With the corrected data, the amount of phosphine they claimed to see fell to something between 1 and 4 parts per billion. Less, but still there.

Of course such an important finding attracted a lot of attention, and it didn’t take long for other researchers to have a close look at the analysis. It was not only the detection of phosphine that was surprising; the near-absence of sulfur dioxide wasn’t normal either. Sulfur dioxide had been detected many times in the atmosphere of Venus, in amounts about 10 times higher than what the phosphine-discovery study claimed it was.

Already in October last year, a paper came out that argued there’s no signal at all in the data, and that said the original study used an overly complicated twelve-parameter fit that fooled them into seeing something where there was nothing. This criticism has since been published in a peer reviewed journal. And by the end of January another team put out two papers in which they pointed out several other problems with the original analysis.

First they used a model of the atmosphere of Venus and calculated that the alleged phosphine absorption comes from altitudes higher than eighty kilometers. Problem is, at such a high altitude, phosphine is incredibly unstable because ultraviolet light from the sun breaks it apart quickly. They estimated it would have a lifetime of under one second! This means for phosphine to be present on Venus in the observed amounts, it would have to be produced at a rate higher than the production of oxygen by photosynthesis on Earth. You’d need a lot of bacteria to get that done.

Second, they claim that the ALMA telescope should not have been able to see the signal at all, or at least a much smaller signal, because of an effect called line dilution. Line dilution can occur if one has a telescope with many separate dishes like ALMA. A signal that’s smeared out over many of the dishes, like the signal from the atmosphere of Venus, can then be affected by interference effects.

According to estimates in the new paper, line dilution should suppress the signal in the ALMA telescope by about a factor 10-20, in which case it would not be visible at all. And indeed they claim that no signal is entirely consistent with the data from the second telescope. This criticism, too, has now passed peer review.

What does it mean?

Well, the authors of the original study might reply to this criticism, and so it will probably take some time until the dust settles. But even if the criticism is correct, this would not mean there’s no phosphine on Venus. As they say, absence of evidence is not evidence of absence. If the criticism is correct, then the observations, exactly because they probe only high altitudes where phosphine is unstable, can neither exclude, nor confirm, the presence of phosphine on Venus. And so, the summary is, as so often in science: More work is needed.

Wednesday, March 17, 2021

Live Seminar about Dark Matter on Friday

I will give an online seminar about dark matter and modified gravity on Friday at 4pm CET. If you want to attend, the link is here:


I'm speaking in English (as you can see, half in American, half in British English, as usual), but the seminar will be live translated to Spanish, for which there's a zoom link somewhere.

Saturday, March 13, 2021

Can we stop hurricanes?

[This is a transcript of the video embedded below.]


Hurricanes are among the most devastating natural disasters. That’s because hurricanes are enormous! A medium-sized hurricane extends over an area about the size of Texas. On a globe they’ll cover 6 to 12 degrees latitude. And as they blow over land, they leave behind wide trails of destruction, caused by strong winds and rain. Damages from hurricanes regularly exceed billions of US dollars. Can’t we do something about that? Can’t we blast hurricanes apart? Redirect them? Or stop them from forming in the first place? What does science say about that? That’s what we’ll talk about today.

Donald Trump, the former president of the United States, has reportedly asked repeatedly whether it’s possible to get rid of hurricanes by dropping nuclear bombs on them. His proposal was swiftly dismissed by scientists and the media alike. Their argument can be summed up with “you can’t” and even if you could “it’d be a bad idea.” Trump then denied he ever said anything, the world forgot about it, and here we are, still wondering whether there’s something we can do to stop hurricanes.

Trump’s idea might sound crazy, but he was not the first to think of nuking a hurricane, and he probably won’t be the last. And I think trying to prevent hurricanes isn’t as crazy as it sounds.

The idea to nuke a hurricane came up right after nuclear weapons were deployed for the first time, in Japan in August 1945. August is in the middle of the hurricane season in Florida. The mayor of Miami Beach, Herbert Frink, made the connection. He asked President Harry Truman about the possibility of using the new weapon to fight hurricanes. And, sure enough, the Americans looked into it.

But they quickly realized that while the energy released by a nuclear bomb was gigantic compared to all other kinds of weapons, it was still nothing compared to the energies that build up in hurricanes. For comparison: The atomic bombs dropped on Japan released an energy of about 20 kilotons each. A typical hurricane releases about 10,000 times as much energy – per hour. The total power of a hurricane is comparable to the entire global power consumption. That’s because hurricanes are enormous!
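You can redo this comparison yourself from the numbers in the text. The figure for global power consumption, roughly eighteen terawatts, is my own rough input:

```python
kiloton = 4.184e12   # joules per kiloton of TNT equivalent

bomb_energy        = 20 * kiloton               # ~8.4e13 J per 1945-era bomb
hurricane_per_hour = 10_000 * bomb_energy       # the 10,000-bombs-per-hour figure

hurricane_power = hurricane_per_hour / 3600     # convert per-hour energy to watts
global_power    = 1.8e13                        # ~18 TW, rough world consumption (assumed)

print(hurricane_power)                 # ~2.3e14 W
print(hurricane_power / global_power)  # within an order of magnitude of global consumption
```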

By the way, hurricanes and typhoons are the same thing. The generic term used by meteorologists is “tropical cyclone”. It refers to “a rotating, organized system of clouds and thunderstorms that originates over tropical or subtropical waters.” If they get large enough, they’re then either called hurricanes or typhoons, or they just remain tropical cyclones. But it’s like the difference between an astronaut and a cosmonaut. The same thing!

But back to the nukes. In 1956 an Air Force meteorologist by the name of Jack W. Reed proposed to launch a megaton nuclear bomb – that is about 50 times the power of the ones in Japan – into a hurricane. Just to see what happened. He argued: “Since a complete theory for the dynamics of hurricanes will probably not be derived by meteorologists for several years, argument pros and con without conclusive foundation will be made over the effects to be expected… Only a full-scale test could prove the results.” In other words, if we don’t do it, we’ll never know just how bad the idea is. As far as the radiation hazard was concerned, Reed claimed it would be negligible: “An airburst would cause no intense fallout,” never mind that a complete theory for the dynamics of hurricanes wasn’t available then and still isn’t.

Reed’s proposal was dismissed by both the military and the scientific community. The test never took place, but the proposal is interesting nevertheless, because Reed went to some length to explain how to go about nuking a hurricane smartly.

To understand what he was trying to get at, let’s briefly talk about how hurricanes form. Hurricanes can form over the ocean when the water temperature is high enough. Trouble begins at around 26 degrees Celsius or 80 degrees Fahrenheit. The warm water evaporates and rises. As it rises it cools and creates clouds. This tower of water-heavy clouds begins to spin because the Coriolis force, which comes from the rotation of planet Earth, acts on the air that’s drawn in, and the more the clouds spin, the better they get at drawing in more air. As the spinning accelerates, the center of the hurricane clears out and leaves behind a mostly calm region that’s usually a few dozen miles in diameter and has very low barometric pressure. This calm center is called the “eye” of the hurricane.

Reed now argued that if one detonates a megaton nuclear weapon directly in the eye of a hurricane, this would blast away the warm air that feeds the cycle, increase the barometric pressure, and prevent the storm from gathering more strength.

Now, the obvious problem with this idea is that even if you succeeded, you’d deposit radioactive debris in clouds that you just blasted all over the globe, congratulations. But even leaving aside the little issue with the radioactivity, it almost certainly wouldn’t work because - hurricanes are enormous.

It’s not only that you’re still up against a power that exceeds that of your nuclear bomb by three orders of magnitude, it’s also that an explosion doesn’t actually move a lot of air from one place to another, which is what Reed envisioned. The blast creates a shock wave – that’s bad news for everything in the way of that shock – but it does little to change the barometric pressure after the shock wave has passed through.

So if nuclear bombs are not the way to deal with hurricanes, can we maybe make them rain off before they make landfall? This technique is called “cloud seeding” and we talked about this in a previous video. If you remember, there are two types of cloud seeding, one that creates snow or ice, and one that creates rain.

The first one, called glaciogenic seeding was indeed tried on hurricanes by Homer Simpson. https://www.youtube.com/watch?v=HMVKksxZgwU No, not this Homer, but a man by the name of Robert Homer Simpson, who in 1962 was the first director of the American Project Stormfury, which had the goal of weakening hurricanes.

The Americans actually *did spray a hurricane with silver iodide and observed afterwards that the hurricane indeed weakened. Hooray! But wait. Further research showed that hurricane clouds contain very little supercooled water droplets, so the method couldn’t work even in theory. Instead, it turned out that hurricanes frequently undergo similar changes without intervention, so the observation was most likely coincidence. Project Stormfury was canceled in 1983.

What about hygroscopic cloud seeding, which works by spraying clouds with particles that absorb water, to make the clouds rain off? The effects of this have been studied to some extent by observing natural phenomena. For example, dust that’s blown up over the Sahara Desert can be transported by winds over long distances. Though much remains to be understood, some observations seem to indicate that interactions with this dust make it easier for the clouds to rain off, which naturally weakens hurricanes.

So why don’t we try something similar? Again, the problem is that hurricanes are enormous! You’d need a whole army of airplanes to spray the clouds, and even then that would almost certainly not make the hurricanes disappear, but merely weaken them.

There’s a long list of other things people have considered to get rid of hurricanes. For example, spraying the upper layers of a hurricane with particles that absorb sunlight to warm up the air, and thereby reduce the updraft. But again, the problem is that hurricanes are enormous! Keep in mind, you’d have to spray an area about the size of Texas.

A similar idea is to prevent the air above the ocean from evaporating and feeding the growth of the hurricane, for example by covering the ocean surface with oil films. The obvious problem with this idea is that, well, now you have all that oil on the ocean. But also, some small-scale experiments have shown that the oil-cover tends to break up, and where it doesn’t break up, it can actually aid the warming of the water, which is exactly what you don’t want.

How about we cool the ocean surface instead? This idea has been pursued for example by Bill Gates, who, in 2009, together with a group of scientists and entrepreneurs patented a pump system that would float in the ocean and pump cool water from deep down to the surface. In 2017 the Norwegian company SINTEF put forward a similar proposal. The problem with this idea is, guess what, hurricanes are enormous! You’d have to get a huge number of these pumps in the right place at the right time.

Another seemingly popular idea is to drag icebergs from the poles to the tropics to cool the water. I leave it to you to figure out the logistics for making this happen.

Yet again other people have argued that one doesn’t actually have to blow apart a hurricane to get rid of it, one merely has to detonate a nuclear bomb strategically so that the hurricane changes direction. The problem with this idea is that no one wants multiple nations to play nuclear billiards on the oceans.

As you have seen, there are lots of ideas, but the key problem is that hurricanes are enormous!

And that means the most promising way to prevent them is to intervene before they get too large. Hurricanes don’t suddenly pop out of nowhere, they take several days to form and usually arise from storms in the tropics which also don’t pop out of nowhere.

What the problem then comes down to is that meteorologists can’t currently predict well enough, and far enough in advance, just which regions will go on to form hurricanes. But, as you have seen, researchers have tried quite a few methods to interfere with the feedback cycle that grows hurricanes, and some of them actually work. So, if we could tell just when and where to interfere, that might actually make a difference.

My conclusion therefore is: If you want to prevent hurricanes, you don’t need larger bombs, you need to invest into better weather forecasts.

Saturday, March 06, 2021

Do Complex Numbers Exist?

[This is a transcript of the video embedded below.]

When the world seems particularly crazy, I like looking into niche-controversies. A case where the nerds argue passionately over something that no one knew was controversial in the first place. In this video, I want to pick up one of these super-niche nerd fights: Are complex numbers necessary to describe the world as we observe it? Do they exist? Or are they just a mathematical convenience? That’s what we’ll talk about today.

So the recent controversy broke out when a paper appeared on the preprint server with the title “Quantum physics needs complex numbers”. The paper contains a proof for the claim in the title, in response to an earlier claim that one can do without the complex numbers.

What happened next is that the computer scientist Scott Aaronson wrote a blogpost in which he called the paper “striking”. But the responses were, well, not very enthusiastic. They ranged from “why fuss about it” to “bullshit” to “it’s missing the point.”

We’ll look at the paper in a moment, but first I will briefly summarize what we’re even talking about, so that no one’s left behind.

The Math of Complex Numbers

You probably remember from school that complex numbers are what you need to solve equations like x squared equals minus 1. You can’t solve that equation with the real numbers that we are used to. Real numbers are numbers that can have infinitely many digits after the decimal point, like the square root of 2 and π, but they also include integers and fractions and so on. You can’t solve this equation with real numbers because any real number squares to a number that is zero or positive. If you want to solve equations like this, you therefore introduce a new number, usually denoted “i”, with the property that it squares to -1.

Interestingly enough, just giving a name to the solution of this one equation and adding it to the set of real numbers turns out to be sufficient to make all algebraic equations solvable. It doesn’t matter how long or how complicated the equation is, you can always write all of its solutions as a+ib, where a and b are real numbers.
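To make this concrete, here’s a small Python sketch. The quadratic x² + 2x + 5 = 0 is just an arbitrary example with no real solutions; the familiar quadratic formula then produces solutions of the promised form a+ib.

```python
import cmath

# Solve x² + 2x + 5 = 0 with the quadratic formula. The discriminant
# b² - 4ac = 4 - 20 = -16 is negative, so there are no real solutions.
a, b, c = 1, 2, 5
disc = cmath.sqrt(b * b - 4 * a * c)  # cmath happily takes the square root of -16

x1 = (-b + disc) / (2 * a)
x2 = (-b - disc) / (2 * a)

# Both solutions have the form a + ib: here -1 ± 2i.
assert abs(x1 - (-1 + 2j)) < 1e-12
assert abs(x2 - (-1 - 2j)) < 1e-12

# And they really do solve the equation.
for x in (x1, x2):
    assert abs(x * x + 2 * x + 5) < 1e-12
```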

Fun fact: This doesn’t work for numbers that have infinitely many digits before the point. Yes, that’s a thing, they’re called p-adic numbers. Maybe we’ll talk about this some other time.

Complex numbers, then, are all numbers of the type a plus i times b, where a and b are real numbers. “a” is called the “real” part, and “b” the “imaginary” part of the complex number. Complex numbers are frequently drawn in a plane, called the complex plane, where the horizontal axis is the real part and the vertical axis is the imaginary part. i itself is by convention in the upper half of the complex plane. But this looks the same as if you draw a map on a grid and name each point with two real numbers. Doesn’t this mean that the complex numbers are just a two-dimensional real vector space?

No, they’re not. And that’s because complex numbers multiply by a particular rule that you can work out by taking into account that the square of i is minus 1. Two complex numbers can be added as if they were vectors, but the multiplication law makes them different. Complex numbers are, to use the mathematical term, a “field”, like the real numbers: they have a rule both for addition AND for multiplication. They are not just like that two-dimensional grid.
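You can write out that multiplication rule for pairs (a, b) of real numbers. A minimal Python sketch (the numbers are arbitrary examples) shows that it’s this rule, not the pair structure itself, which distinguishes complex numbers from plain two-dimensional vectors:

```python
# Store a complex number as a pair (a, b) of real numbers. Addition works
# component-wise, just like for vectors, but multiplication follows from
# i² = -1:  (a + ib)(c + id) = (ac - bd) + i(ad + bc).

def cmul(x, y):
    a, b = x
    c, d = y
    return (a * c - b * d, a * d + b * c)

# Check: i times i is -1.
assert cmul((0, 1), (0, 1)) == (-1, 0)

# The hand-rolled rule agrees with Python's built-in complex type.
z = cmul((2, 3), (4, -5))
assert complex(*z) == (2 + 3j) * (4 - 5j)  # both give 23 + 2j
```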

The Physics of Complex Numbers

We use complex numbers in physics all the time because they’re extremely useful. They’re useful for many reasons, but the major reason is this. If you take any real number, let’s call it α, multiply it by i, and put it into an exponential function, you get exp(iα). In the complex plane, this number, exp(iα), always lies on a circle of radius one around zero. And if you increase α, you’ll go around that circle. Now, if you look only at the real or only at the imaginary part of that circular motion, you’ll get an oscillation. And indeed, this exponential function is a sum of a cosine and i times a sine function.

Here’s the thing. If you multiply two of these complex exponentials, say one with α and one with β, you can just add the exponents. But if you multiply two cosines or a sine with a cosine… that’s a mess. You don’t want to do that. That’s why, in physics, we do the calculation with the complex numbers, and then, at the very end, we take either the real or the imaginary part. Especially when we describe electromagnetic radiation, we have to deal with a lot of oscillations, and complex numbers come in very handy.
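You can check all of this with a few lines of Python (α and β are arbitrary example values):

```python
import cmath
import math

alpha, beta = 0.7, 1.9  # two arbitrary real numbers

# exp(iα) lies on the circle of radius one around zero...
z = cmath.exp(1j * alpha)
assert abs(abs(z) - 1) < 1e-12

# ...and is a sum of a cosine and i times a sine function.
assert abs(z.real - math.cos(alpha)) < 1e-12
assert abs(z.imag - math.sin(alpha)) < 1e-12

# Multiplying two complex exponentials just adds the exponents.
product = cmath.exp(1j * alpha) * cmath.exp(1j * beta)
assert abs(product - cmath.exp(1j * (alpha + beta))) < 1e-12

# The same statement in terms of real functions alone is the messier
# trigonometric identity cos(α+β) = cos α cos β - sin α sin β.
lhs = product.real
rhs = math.cos(alpha) * math.cos(beta) - math.sin(alpha) * math.sin(beta)
assert abs(lhs - rhs) < 1e-12
```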

But we don’t have to use them. In most cases we could do the calculation with only real numbers. It’s just cumbersome. With the exception of quantum mechanics, to which we’ll get in a moment, the complex numbers are not necessary.

And, as I have explained in an earlier video, it’s only if a mathematical structure is actually necessary to describe observations that we can say they “exist” in a scientifically meaningful way. For the complex numbers in non-quantum physics that’s not the case. They’re not necessary.

So, as long as you ignore quantum mechanics, you can think of complex numbers as a mathematical tool, and you have no reason to think they physically exist. Let’s then talk about quantum mechanics.

Complex Numbers in Quantum Mechanics

In quantum mechanics, we work with wave-functions, usually denoted Ψ, which are complex-valued, and the equation that tells us what the wave-function does is the Schrödinger equation. It looks like this. You’ll see immediately that there’s an “i” in this equation, which is why the wave-function has to be complex-valued.

However, you can of course take the wave-function and this equation apart into a real and an imaginary part. Indeed, one often does that if one solves the equation numerically. And I remind you that both the real and the imaginary part of a complex number are real numbers. Now, if we calculate a prediction for a measurement outcome in quantum mechanics, then that measurement outcome will also always be a real number. So, it looks like you can get rid of the complex numbers in quantum mechanics by splitting the equation into a real and an imaginary part, and that’ll never make a difference for the result of the calculation.
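Here is a minimal Python sketch of that decomposition, for a made-up two-level system with a real Hamiltonian and ℏ = 1. It uses plain Euler steps, so it illustrates the splitting rather than being a serious solver; the Hamiltonian entries and step sizes are arbitrary choices.

```python
# A made-up two-level system with a real, symmetric Hamiltonian (hbar = 1).
# We evolve the Schroedinger equation i dpsi/dt = H psi twice: once with
# complex numbers, and once after splitting psi = u + i v into real and
# imaginary parts, which gives the coupled real equations
#   du/dt = H v   and   dv/dt = -H u.

H = [[1.0, 0.5],
     [0.5, -1.0]]

def matvec(M, x):
    # Multiply a 2x2 matrix with a 2-component vector.
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

dt, steps = 0.001, 1000

# Complex version: psi -> psi - i * dt * H psi.
psi = [1 + 0j, 0 + 0j]
for _ in range(steps):
    Hpsi = matvec(H, psi)
    psi = [psi[i] - 1j * dt * Hpsi[i] for i in range(2)]

# Real version: two coupled equations for the real vectors u and v.
u, v = [1.0, 0.0], [0.0, 0.0]
for _ in range(steps):
    Hu, Hv = matvec(H, u), matvec(H, v)
    u = [u[i] + dt * Hv[i] for i in range(2)]
    v = [v[i] - dt * Hu[i] for i in range(2)]

# Both formulations give the same wave-function.
for i in range(2):
    assert abs(psi[i].real - u[i]) < 1e-9
    assert abs(psi[i].imag - v[i]) < 1e-9
```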

This finally brings us to the paper I mentioned in the beginning. What I just said about decomposing the Schrödinger equation is of course correct, but that’s not what they looked at in the paper, that would be rather lame.

Instead they ask what happens with the wave-function if you have a system that is composed of several parts, in the simplest case that would be several particles. In normal quantum mechanics, each of these particles has a wave-function that’s complex-valued, and from these we construct a wave-function for all the particles together, which is also complex-valued. Just what this wave-function looks like depends on which particle is entangled with which. If two particles are entangled, this means their properties are correlated, and we know experimentally that this entanglement-correlation is stronger than what you can do without quantum theory.

The question which they look at in the new paper is then whether there are ways to entangle particles in normal, complex quantum mechanics that you cannot build up from particles that are described entirely by real-valued functions. Previous calculations showed that this could always be done if the particles came from a single source. But in the new paper they look at particles from two independent sources, and claim that there are cases which you cannot reproduce with real numbers only. They also propose a way to experimentally measure this specific type of entanglement.

I have to warn you that this paper has not yet been peer reviewed, so maybe someone finds a flaw in their proof. But assuming their result holds up, this means if the experiment which they propose finds the specific entanglement predicted by complex quantum mechanics, then you know you can’t describe observations with real numbers. It would then be fair to say that complex numbers exist. So, this is why it’s cool. They’ve figured out a way to experimentally test if complex numbers exist!

Well, kind of. Here is the fine print: This conclusion only applies if you want the purely real-valued theory to work the same way as normal quantum mechanics. If you are willing to alter quantum mechanics so that it becomes even more non-local than it already is, then you can still create the necessary entanglement with real numbers only.

Why is it controversial? Well, if you belong to the shut-up and calculate camp, then this finding is entirely irrelevant. Because there’s nothing wrong with complex numbers in the first place. So that’s why you have half of the people saying “what’s the point” or “why all the fuss about it”. If you, on the other hand, are in the camp of people who think there’s something wrong with quantum mechanics because it uses complex numbers that we can never measure, then you are now caught between a rock and a hard place. Either embrace complex numbers, or accept that nature is even more non-local than quantum mechanics.

Or, of course, it might be that the experiment will not agree with the predictions of quantum mechanics, which would be the most exciting of all possible outcomes. Either way, I am sure that this is a topic we will hear about again.

Tuesday, March 02, 2021

[Guest Post] Problems with Eric Weinstein's “Geometric Unity”

[This post is written by Timothy Nguyen, a mathematician and an author of the recently released paper “A Response to Geometric Unity”.]


On April 2, 2020, Eric Weinstein released a video of his 2013 Oxford lecture in which he presents his theory of everything “Geometric Unity” (GU). Since then, Weinstein has appeared in interviews alongside Sabine Hossenfelder, Brian Keating, Lee Smolin, Max Tegmark, and Stephen Wolfram to discuss his theory. 

In these interviews, Weinstein laments that the scientific community is dismissive of GU because he has not released a technical paper, but insists that scientists should be able to understand the substantive content of GU from the lecture alone (see here and here). In fact, Weinstein regards the conventional requirement of writing a paper to be flawed, since he questions the legitimacy of peer review, credit assignment, and institutional recognition (see here, here, here, and here).

Theo, my anonymous physicist coauthor, and I became aware of Weinstein and Geometric Unity through his podcast The Portal. We independently communicated with Weinstein on Discord and we both came to the conclusion that Weinstein was unable to provide an adequate explanation of GU or why it was a compelling theory. 

I also became increasingly skeptical of Weinstein’s claims when I pressed him about his alleged discovery of the Seiberg-Witten equations before Seiberg and Witten (see here, here, here, and here), a set of equations which was the central focus of my PhD thesis and several resultant papers. When I asked Weinstein for certain mathematical details about how he had arrived at the Seiberg-Witten equations, his vague responses led me to doubt his claims. Though Weinstein proposed to host a more in-depth discussion about GU and the requisite math and physics, no such discussion ever materialized.

These difficulties in communicating with Weinstein are what motivated our response paper. Suffice it to say that it was no easy task, as it required repeatedly watching his YouTube lecture and carefully timestamping its content in order to cite the material. These appear as clickable links in our response paper for those who wish to verify that our transcription of Weinstein's presentation is accurate.

Here's the high-level overview of how GU makes a claim towards a Theory of Everything. Essentially, GU asserts that there is a set of equations in 14 dimensions that are to contain the Einstein equations, Dirac equation, and Yang-Mills equations. Because the Einstein equations describe gravity, the Dirac equation accounts for fermions, and the Yang-Mills equations account for gauge-theories describing the strong and electroweak forces, all fundamental forces and particle types are therefore superficially accounted for. It is our understanding that it is in this very limited and weak sense that GU attempts to position itself as a Theory of Everything.

The most glaring deficiency in Weinstein’s presentation is that it does not incorporate any quantum theory. Establishing a consistent quantum theory of gravity alone has defied the efforts of nearly a century’s worth of vigorous research and is part of what makes formulating a Theory of Everything an enormous challenge. For GU to overlook this obstacle means that it has no possible claim on being a Theory of Everything.

Our findings are that even aside from its status as Theory of Everything, GU contains serious technical gaps both mathematical and physical. In summary:
  • GU introduces a “shiab” operator that overlooks a required complexification step. Omitting this step creates a mathematical error but including it precludes having a physically sensible quantum theory. 
  • The choice of gauge group for GU naively leads to a quantum gauge anomaly, thereby rendering the quantum theory inconsistent. Any straightforward attempt to eliminate this anomaly would make the shiab operator impossible to define, compounding the previous objection. 
  • The setup of GU asserts that it will have supersymmetry. In 14 dimensions, adopting supersymmetry is highly restrictive. It implies that the proposed gauge group of GU cannot be correct and that the theory as stated is incomplete. 
  •  Essential technical details of GU are omitted, leaving many of the central claims unverifiable.

Coincidentally, the night before we posted our response paper, Weinstein announced on Lex Fridman’s podcast that he plans on releasing a paper on GU on April 1st. We look forward to seeing Weinstein's response to the problems we have identified.

Saturday, February 27, 2021

Schrödinger’s Cat – Still Not Dead

[This is a transcript of the video embedded below.]


The internet, as we all know, was invented so we can spend our days watching cat videos, which is why this video is about the most famous of all science cats, Schrödinger’s cat. Is it really both dead and alive? If so, what does that mean? And what does recent research have to say about it? That’s what we’ll talk about today.

Quantum mechanics has struck physicists as weird ever since its discovery, more than a century ago. One especially peculiar aspect of quantum mechanics is that it forces you to accept the existence of superpositions. These are systems that can be in two states at the same time, until you make a measurement, which suddenly “collapses” the superposition into one definite measurement outcome.

The system here could be a single particle, like a photon, but it could also be a big object made of many particles. The thing is that in quantum mechanics, if two states exist separately, like an object being here and being there, then the superposition – that is the same object both here and there – must also exist. We know this experimentally, and I explained the mathematics behind this in an earlier video.

Now, you may think that being in a quantum superposition is something that only tiny particles can do. But these superpositions for large objects can’t be easily ignored, because you can take the tiny ones and amplify them to macroscopic size.

This amplification is what Erwin Schrödinger wanted to illustrate with a hypothetical experiment he came up with in 1935. In this experiment, a cat is in a box, together with a vial of poison, a trigger mechanism, and a radioactive atom. The nucleus of the atom has a fifty percent chance of decaying in a certain amount of time. If it decays, the trigger breaks the vial of poison, which kills the cat.

But the decay follows the laws of quantum physics. Before you measure it, the nucleus is both decayed and not decayed, and so, it seems that before one opens the box, the cat is both dead and alive. Or is it?

Well, that depends on your interpretation of quantum mechanics, that is, what you think the mathematics means. In the most widely taught interpretation, the Copenhagen interpretation, the question of what state the cat is in before you measure it is just meaningless. You’re not supposed to ask. The same is the case in all interpretations according to which quantum mechanics is a theory about the knowledge we have about a system, and not about the system itself.

In the many-worlds interpretation, in contrast, each possible measurement outcome happens in a separate universe. So, there’s a universe where the cat lives and one where the cat dies. When someone opens the box, that decides which universe they’re in. But as far as observations are concerned, the result is exactly the same as in the Copenhagen interpretation.

Pilot wave theory, which we talked about earlier, says that the cat is really always in only one state, you just don’t know which one it is until you look. The same is the case for spontaneous collapse models. In these models, the collapse of the wave-function is not merely an update when you open the box, but a physical process.

It’s no secret that I myself am signed up to superdeterminism, which means that the measurement outcome is partly determined by the measurement settings. In this case, the cat may start out in a superposition, but by the time you measure it, it has reached the state which you actually observe. So, there is no sudden collapse in superdeterminism, it’s a smooth, deterministic, and local process.

Now, one cannot experimentally tell apart interpretations that make the same predictions, but collapse models, superdeterminism, and, under certain circumstances, pilot wave theory make different predictions than Copenhagen or many worlds. So, clearly, one wants to do the experiment!

But. As you have undoubtedly noticed, cats are usually either dead or alive, not both. The reason is that even tiny interactions with a quantum system have the same effect as a measurement, and large objects, like cats, just constantly interact with something, like air or the cosmic background radiation. And that’s already sufficient to destroy a quantum superposition of a cat so quickly we’d never observe it. But physicists are trying to push the experimental boundary for bringing large objects into quantum states.

For example, in 2013, a team of physicists from the University of Calgary in Canada amplified a quantum superposition of a single photon. They first fired the photon at a partially silvered mirror, called a beam splitter, so that it became a superposition of two states: it passed through the mirror and also reflected back off it. Then they used one part of this superposition to trigger a laser pulse, which contains a whole lot of photons. Finally, they showed that the pulse was still in a superposition with the single photon. In another 2019 experiment, they amplified both parts of this superposition, and again they found that the quantum effects survived, for up to about 100 million photons.

Now, a group of 100 million photons is not a cat, but it is bigger than your standard quantum particle. So, some headlines referred to this as the “Schrödinger's kitten” experiment.

But just in case you think a laser pulse is a poor approximation for a cat, how about this. In 2017, scientists at the University of Sheffield put bacteria in a cavity between two mirrors and they bounced light between the mirrors. The bacteria absorbed, emitted, and re-absorbed the light multiple times. The researchers could demonstrate that this way, some of the bacteria’s molecules became entangled with the cavity, which is a special case of a quantum superposition.

However, a paper published the following year by scientists at Oxford University argued that the observations on the bacteria could also be explained without quantum effects. Now, this doesn’t mean that this is the correct explanation. Indeed, it doesn’t make much sense because we already know that molecules have quantum effects and they couple to light in certain quantum ways. However, this criticism demonstrates that it can be difficult to prove that something you observe is really a quantum effect, and the bacteria experiment isn’t quite there yet.

Let us then talk about a variant of Schrödinger’s cat that Eugene Wigner came up with in the nineteen-sixties. Imagine that this guy Wigner is outside the laboratory in which his friend just opens the box with the cat. In this case, not only would the cat be both dead and alive before the friend observes it, the friend would also both see a dead cat and see a live cat, until Wigner opens the door to the room where the experiment took place.

This sounds both completely nuts as well as an unnecessary complication, but bear with me for a moment, because this is a really important twist on Schrödinger’s cat experiment. Because if you think that the first measurement, so the friend observing the cat, actually resulted in a definite outcome, just that the friend outside the lab doesn’t know it, then, as long as the door is closed, you effectively have a deterministic hidden variable model for the second measurement. The result is clear already, you just don’t know what it is. But we know that deterministic hidden variable models cannot produce the results of quantum mechanics, unless they are also superdeterministic.

Now, again, of course, you can’t actually do the experiment with cats and friends and so on because their quantum effects would get destroyed too quickly to observe anything. But recently a team at Griffith University in Brisbane, Australia, created a version of this experiment with several devices that measure, or observe, pairs of photons. As anticipated, the measurement result agrees with the predictions of quantum mechanics.

What this means is that one of the following three assumptions must be wrong:

1. No Superdeterminism.
2. Measurements have definite outcomes.
3. No spooky action at a distance.

The absence of superdeterminism is sometimes called “Free choice” or “Free will”, but really it has nothing to do with free will. Needless to say, I think what’s wrong is rejecting superdeterminism. But I am afraid most physicists presently would rather throw out objective reality. Which one are you willing to give up? Let me know in the comments.

As of now, scientists remain hard at work trying to unravel the mysteries of Schrödinger's cat. For example, a promising line of investigation that’s still in its infancy is to measure the heat of a large system to determine whether quantum superpositions can influence its behavior. You find references to that as well as to the other papers that I mentioned in the info below the video. Schrödinger, by the way, didn’t have a cat, but a dog. His name was Burschie.

Wednesday, February 24, 2021

What's up with the Ozone Layer?

[This is a transcript of the video embedded below.]

Without the ozone layer, life, as we know it, would not exist. Scientists therefore closely monitor how the ozone layer is doing. In the past years, two new developments have attracted their attention and concern. What have they found and what does it mean? That’s what we’ll talk about today.
 

First things first, ozone is a molecule made of three oxygen atoms. It’s unstable, and on the surface of Earth it decays quickly, on average within a day or so. For this reason, there’s very little ozone around us, and that’s good, because breathing in ozone is really unhealthy even in small doses.

But ozone is produced when sunlight hits the upper atmosphere, and accumulates far up there in a region called the “stratosphere”. This “ozone layer” then absorbs much of the sun’s ultraviolet light. The protection we get from the ozone layer is super-important, because the energy of ultraviolet light is high enough to break molecular bonds. Ultraviolet light, therefore, can damage cells or their genetic code. This means that with exposure to ultraviolet light, the risk of cancer and other mutations increases significantly. I have explained radiation risk in more detail in an earlier video, so check this out for more.

You have probably all heard of the ozone “hole” that was first discovered in the 1980s. This ozone hole is still with us today. It was caused by human emissions of ozone-depleting substances, notably chlorofluorocarbons – CFCs for short – that were used, among other things, in refrigerators and spray cans. CFCs have since been banned, but it will take at least several more decades for the ozone layer to completely recover. With that background knowledge, let’s now look at the two new developments.

What’s new?

The first news is that last year we have seen a large and pronounced ozone hole over the North Pole, in addition to the “usual” one over the South Pole. This has happened before, but it’s still an unusual event. That’s because the creation of an ozone hole is driven by supercooled droplets of water and nitric acid which are present in polar stratospheric clouds, so clouds that you find on the poles in the stratosphere. But these clouds can only form if it’s cold enough, and I mean really cold, below about −108 °F or −78 °C. Therefore, the major reason that ozone holes form more readily over the South pole than over the North Pole is quite simply that the South Pole is, on average, colder.

Why is the South Pole colder? Loosely speaking it’s because there are fewer high mountains in the Southern hemisphere than in the Northern hemisphere. And because of this, wind circulations around the South Pole tend to be more stable; they can lock in air, which then cools over the dark polar winter months. Air over the North Pole, in contrast, mixes more efficiently with warmer air from the mid latitudes.

On occasion, however, cold air gets locked in over the North Pole as well, which creates conditions similar to those at the South Pole. This is what happened in the Spring of 2020. For five weeks in March and early April, the North Pole saw the biggest arctic ozone hole on record, surrounded by a stable wind circulation called a polar vortex.

Now, we have all witnessed in the past decade that climate change alters wind patterns in the Northern Hemisphere, which gives rise to longer heat waves in the summer. This brings up the question whether climate change was one of the factors contributing to the northern ozone hole and whether we, therefore, must expect it to become a recurring event.

This question was studied in a recent paper by Martin Dameris and coauthors, for the full reference, please check the info below the video. Their conclusion is that, so far, observations of the northern ozone hole are consistent with it just being a coincidence. However, if coincidences pile upon coincidences, they make a trend. And so, researchers are now waiting to see whether the hole will return in the Spring of 2021 or in the coming years.

The second new development is that the ozone layer over the equator isn’t recovering as quickly as scientists expected. Indeed, above the equator, the amount of ozone in the lower parts of the stratosphere seems to be declining, though that trend is, for now, offset by the recovery of ozone in the upper parts of the stratosphere, which proceeds as anticipated.

The scientists who work on this previously considered various possible reasons, from data problems to illegal emissions of ozone-depleting substances. But the data have held up, and while we now know illegal emissions are indeed happening, these do not suffice to explain the observations.

Instead, further analysis indicates that the depletion of ozone in the lower stratosphere over the equator seems to be driven, again, by wind patterns. Earth’s ozone is itself created by sunlight, which is why most of it forms over the equator, where sunlight is the most intense. The ozone is then transported from the equatorial regions towards the poles by a wind cycle – called the “Brewer-Dobson circulation” – in which air rises over the equator and comes down again at mid to high latitudes. With global warming, that circulation may become more intense, so that more ozone is redistributed from the equator to higher latitudes.

Again, though, the strength of this circulation also changes just by random chance. It’s therefore presently unclear whether the observations merely show a temporary fluctuation or are indicative of a trend. However, a recent analysis of different climate-chemistry models by Simone Dietmüller et al shows that human-caused carbon dioxide emissions contribute to the trend of less ozone over the equator and more ozone in the mid-latitudes, and the trend is therefore likely to continue. I have to warn you though that this paper has not yet passed peer review.

Before we talk about what this all means, I want to thank my tier four supporters on Patreon. Your help is greatly appreciated. And you, too, can help us produce videos by supporting us on Patreon. Now let’s talk about what these news from the ozone layer mean.

You may say, ah, so what. Tell the people in the tropics to put on more sun-lotion and those in Europe to take more vitamin D. This is a science channel, and I’ll not tell anyone what they should or shouldn’t worry about, that’s your personal business. But to help you gauge the present situation, let me tell you an interesting bit of history.

The Montreal protocol from 1987, which regulates the phasing out of ozone depleting substances, was passed quickly after the discovery of the first ozone hole. It is often praised as a milestone of environmental protection, the prime example that everyone points to for how to do it right. But I think the Montreal Protocol teaches us a very different lesson.

That’s because scientists knew already in the 1970s, long before the first ozone hole was discovered, that chlorofluorocarbons would deplete the ozone layer. But they thought the effect would be slow and global. When the ozone hole over the South Pole was discovered by the British Antarctic Survey in 1985, that came as a complete surprise.

Indeed, fun fact, it later turned out that American satellites had measured the ozone hole years before the British Survey did, but since the data were so far off the expected value, they were automatically overwritten by software.

The issue was that at the time the effects of polar stratospheric clouds on the ozone layer were poorly understood, and the real situation turned out to be far worse than scientists thought.

So, for me, the lesson from the Montreal Protocol is that we’d be fools to think that we now have all pieces in place to understand our planet’s climate system. We know we’re pushing the planet into regimes that scientists poorly understand and chances are that this will bring more unpleasant surprises.

So what do those changes in the ozone layer mean? They mean we have to pay close attention to what’s happening.