Saturday, April 10, 2021

Does the Universe have Higher Dimensions? Part 1

[This is a transcript of the video embedded below.]

Space, the way we experience it, has three dimensions. Left-right, forward-backward, and up-down. But why three? Why not 7? Or 26? The answer is: No one knows. But if no one knows why space has three dimensions, could it be that it actually has more? Just that we haven’t noticed for some reason? That’s what we will talk about today.

The idea that space has more than three dimensions may sound entirely nuts, but it’s a question that physicists have seriously studied for more than a century. And since there’s quite a bit to say about it, this video will have two parts. In this part we will talk about the origins of the idea of extra dimensions, Kaluza-Klein theory and all that. And in the next part, we will talk about more recent work on it, string theory and black holes at the Large Hadron Collider and so on.

Let us start with recalling how we describe space and objects in it. In two dimensions, we can put a grid on a plane, and then each point is a pair of numbers that says how far away from zero you have to go in the horizontal and vertical direction to reach that point. The arrow pointing to that point is called a “vector”.

This construction is not specific to two dimensions. You can add a third direction, and do exactly the same thing. And why stop there? You can no longer *draw a grid for four dimensions of space, but you can certainly write down the vectors. They’re just a row of four numbers. Indeed, you can construct vector spaces in any number of dimensions, even in infinitely many dimensions.

And once you have vectors in these higher dimensions, you can do geometry with them, like constructing higher dimensional planes, or cubes, and calculating volumes, or the shapes of curves, and so on. And while we cannot directly draw these higher dimensional objects, we can draw their projections into lower dimensions. This for example is the projection of a four-dimensional cube into two dimensions.
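While we cannot draw a four-dimensional cube directly, computing such a projection takes only a few lines. Here is a minimal sketch in Python; the particular projection directions are arbitrary choices, picked just so the image doesn’t degenerate:

```python
import itertools

# The 16 vertices of a 4-dimensional cube: all sign combinations of (+/-1).
vertices = [v for v in itertools.product((-1, 1), repeat=4)]

# A simple linear projection from 4D to 2D: each of the four axes
# is mapped to some direction in the plane (arbitrary choices).
axes_2d = [(1.0, 0.0), (0.0, 1.0), (0.4, 0.3), (-0.3, 0.4)]

def project(v):
    """Project a 4D vertex onto the plane."""
    x = sum(c * a[0] for c, a in zip(v, axes_2d))
    y = sum(c * a[1] for c, a in zip(v, axes_2d))
    return (x, y)

points = [project(v) for v in vertices]

# Edges connect vertices that differ in exactly one coordinate;
# a 4-cube has 32 of them.
edges = [(a, b) for a in vertices for b in vertices
         if a < b and sum(x != y for x, y in zip(a, b)) == 1]

print(len(vertices), "vertices,", len(edges), "edges")
```

Drawing the projected points and connecting the edges gives exactly the kind of two-dimensional picture of a four-dimensional cube referred to above.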

Now, it might seem entirely obvious today that you can do geometry in any number of dimensions, but it’s actually a fairly recent development. It wasn’t until eighteen forty-three that the British mathematician Arthur Cayley wrote about the “Analytical Geometry of (n) Dimensions” where n could be any positive integer. Higher dimensional geometry sounds innocent, but it was a big step towards abstract mathematical thinking. It marked the beginning of what is now called “pure mathematics”, that is mathematics pursued for its own sake, and not necessarily because it has an application.

However, abstract mathematical concepts often turn out to be useful for physics. And these higher dimensional geometries came in really handy for physicists because in physics, we usually do not only deal with things that sit in particular places, but with things that also move in particular directions. If you have a particle, for example, then to describe what it does you need both a position and a momentum, where the momentum tells you the direction into which the particle moves. So, actually each particle is described by a vector in a six dimensional space, with three entries for the position and three entries for the momentum. This six-dimensional space is called phase-space.
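A minimal sketch of this bookkeeping in Python (the numbers are random placeholders, just to show the dimension counting):

```python
import random

def phase_space_point(n_particles):
    """One point in the 6N-dimensional phase space of N particles:
    3 position entries and 3 momentum entries per particle."""
    return [random.gauss(0, 1) for _ in range(6 * n_particles)]

single = phase_space_point(1)  # 6 numbers: (x, y, z, px, py, pz)
gas = phase_space_point(100)   # 600 numbers for a gas of 100 particles

print(len(single), len(gas))
```

The point is simply that one particle already lives in a six-dimensional space, and a system of many particles lives in a space of vastly more dimensions, so physicists had to get comfortable with higher-dimensional geometry early on.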

By dealing with phase-spaces, physicists became quite used to dealing with higher dimensional geometries. And, naturally, they began to wonder whether the *actual space that we live in might have more dimensions. This idea was first pursued by the Finnish physicist Gunnar Nordström, who, in 1914, tried to use a 4th dimension of space to describe gravity. It didn’t work though. The person to figure out how gravity works was Albert Einstein.

Yes, that guy again. Einstein taught us that gravity does not need an additional dimension of space. Three dimensions of space will do, it’s just that you have to add one dimension of time, and allow all these dimensions to be curved.

But then, if you don’t need extra dimensions for gravity, maybe you can use them for something else.

Theodor Kaluza certainly thought so. In 1921, Kaluza wrote a paper in which he tried to use a fourth dimension of space to describe the electromagnetic force in a very similar way to how Einstein described gravity. But Kaluza used an infinitely large additional dimension and did not really explain why we don’t normally get lost in it.

This problem was solved a few years later by Oskar Klein, who assumed that the 4th dimension of space has to be rolled up to a small radius, so you can’t get lost in it. You just wouldn’t notice if you stepped into it, it’s too small. This idea that electromagnetism is caused by a curled-up 4th dimension of space is now called Kaluza-Klein theory.

I have always found it amazing that this works. You take an additional dimension of space, roll it up, and out comes gravity together with electromagnetism. You can explain both forces entirely geometrically. It is probably because of this that Einstein in his later years became convinced that geometry is the key to a unified theory for the foundations of physics. But at least so far, that idea has not worked out.

Does Kaluza-Klein theory make predictions? Yes, it does. All the electromagnetic fields which go into this 4th dimension have to be periodic so they fit onto the curled-up dimension. In the simplest case, the fields just don’t change when you go into the extra dimension. And that reproduces the normal electromagnetism. But you can also have fields which oscillate once as you go around, then twice, and so on. These are called higher harmonics, like you have in music. So, Kaluza-Klein theory makes a prediction which is that all these higher harmonics should also exist.

Why haven’t we seen them? Because you need energy to make this extra dimension wiggle. And the more it wiggles, that is, the higher the harmonics, the more energy you need. Just how much energy? Well, that depends on the radius of the extra dimension. The smaller the radius, the smaller the wavelength, and the higher the frequency. So a smaller radius means you need higher energy to find out if the extra dimension is there. Just how small the radius is, the theory does not tell you, so we don’t know what energy is necessary to probe it. But the short summary is that we have never seen one of these higher harmonics, so the radius must be very small.
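To get a feel for the numbers, here is a small sketch of the standard estimate for the energy of the n-th harmonic, E ≈ n·ħc/R. The radii used below are purely illustrative, since, as said above, the theory does not fix the radius:

```python
# Energy of the n-th Kaluza-Klein harmonic for a curled-up dimension of
# radius R, in the simplest estimate E ≈ n * hbar * c / R.
hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s
eV = 1.602176634e-19    # J per electron volt

def kk_mode_energy_eV(n, radius_m):
    """Energy (in eV) needed to excite the n-th harmonic."""
    return n * hbar * c / radius_m / eV

# A radius of a femtometer puts the first harmonic near 200 MeV;
# a thousand times smaller radius means a thousand times more energy.
print(kk_mode_energy_eV(1, 1e-15))
print(kk_mode_energy_eV(1, 1e-18))
```

This makes the trade-off in the text concrete: halving the radius doubles the energy you need, and since the radius is unknown, so is the energy at which the higher harmonics would show up.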

Oskar Klein himself, by the way, was really modest about his theory. He wrote in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."

("Whether these indications of possibilities are built on reality has of course to be decided by the future.")

But we don’t actually use Kaluza-Klein theory instead of electromagnetism. Why is that? It’s because Kaluza-Klein theory has some serious problems.

The first problem is that while the geometry of the additional dimension correctly gives you electric and magnetic fields, it does not give you charged particles, like electrons. You still have to put those in. The second problem is that the radius of the extra dimension is not stable. If you perturb it, it can begin to increase, and that can have observable consequences which we have not seen. The third problem is that the theory is not quantized, and no one has figured out how to quantize geometry without running into problems. You can however quantize plain old electromagnetism without problems.

We also know today of course that the electromagnetic force actually combines with the weak nuclear force to what is called the electroweak force. That, interestingly enough, turns out to not be a problem for Kaluza-Klein theory. Indeed, it was shown in the 1960s by Ryszard Kerner, that one can do Kaluza-Klein theory not only for electromagnetism, but for any similar force, including the strong and weak nuclear force. You just need to add a few more dimensions.

How many? For the weak nuclear force, you need two more, and for the strong nuclear force another four. So in total, we now have one dimension of time, 3 for gravity, one for electromagnetism, 2 for the weak nuclear force and 4 for the strong nuclear force, which adds up to a total of 11.

In 1981, Edward Witten noticed that 11 is also the maximum number of dimensions in which supergravity can be formulated. What happened after this is what we’ll talk about next week.

Saturday, April 03, 2021

Should Stephen Hawking have won the Nobel Prize?

[This is a transcript of the video embedded below.]

Stephen Hawking, who sadly passed away in 2018, has repeatedly joked that he might get a Nobel Prize if the Large Hadron Collider produces tiny black holes. For example, here is a recording of a lecture he gave in 2016:
“Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize. So I might get a Nobel Prize after all.”
The British physicist and science writer Philip Ball, who attended this 2016 lecture, commented:
“I was struck by how unusual it was for a scientist to state publicly that their work warranted a Nobel… [It] gives a clue to the physicist’s elusive character: shamelessly self-promoting to the point of arrogance, and heedless of what others might think.”
I heard Hawking say pretty much exactly the same thing in a public lecture a year earlier in Stockholm. But I had an entirely different reaction. I didn’t think of his comment as arrogant. I thought he was explaining something which few people knew about. And I thought he was right in that, had the Large Hadron Collider seen these tiny black holes decay, he almost certainly would have gotten a Nobel Prize. But I also thought that this was not going to happen. He was much more likely to win a Nobel Prize for something else. And he almost did.

Just exactly what might Hawking have won the Nobel Prize for, and should he have won it? That’s what we will talk about today.

In nineteen-seventy-four, Stephen Hawking published a calculation that showed black holes are not perfectly black, but they emit thermal radiation. This radiation is now called “Hawking radiation”. Hawking’s calculation shows that the temperature of a black hole is inversely proportional to the mass of the black hole. This means, the larger the black hole, the smaller its temperature, and the harder it is to measure the radiation. For the astrophysical black holes that we know of, the temperature is way, way too small to be measurable. So, the chances of him ever winning a Nobel Prize for black hole evaporation seemed very small.
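Hawking’s formula for the black hole temperature is T = ħc³/(8πGMk_B). Plugging in a solar mass shows just how hopelessly small the temperature is:

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B).
# The temperature falls as the mass grows.
hbar = 1.054571817e-34  # J*s
c = 2.99792458e8        # m/s
G = 6.67430e-11         # m^3 / (kg s^2)
k_B = 1.380649e-23      # J/K
M_sun = 1.989e30        # kg

def hawking_temperature(mass_kg):
    """Hawking temperature in Kelvin for a black hole of the given mass."""
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

# About 6e-8 Kelvin -- tens of millions of times colder than the
# cosmic microwave background at 2.7 K, so the radiation is unmeasurable.
print(hawking_temperature(M_sun))
```

Astrophysical black holes are at least a few solar masses, and heavier means colder still, which is why no one expects to ever measure Hawking radiation from them directly.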

But, in the late nineteen-nineties, the idea came up that tiny black holes might be produced in particle collisions at the Large Hadron Collider. This is only possible if the universe has additional dimensions of space, so not just the three that we know of, but at least five. These additional dimensions of space would have to be curled up to small radii, because otherwise we would already have seen them.

Curled up extra dimensions. Haven’t we heard that before? Yes, because string theorists talk about curled up dimensions all the time. And indeed, string theory was the major motivation to consider this hypothesis of extra dimensions of space. However, I have to warn you that string theory does NOT tell you these extra dimensions should have a size that the Large Hadron Collider could probe. Even if they exist, they might be much too small for that.

Nevertheless, if you just assume that the extra dimensions have the right size, then the Large Hadron Collider could have produced tiny black holes. And since they would have been so small, they would have been really, really hot. So hot, indeed, they’d decay pretty much immediately. To be precise, they’d decay in a time of about ten to the minus twenty-three seconds, long before they’d reach a detector.

But according to Hawking’s calculation, the decay of these tiny black holes should proceed by a very specific pattern. Most importantly, according to Hawking, black holes can decay into pretty much any other particle. And there is no other particle decay which looks like this. So, it would have been easy to see black hole decays in the data. If they had happened. They did not. But if they had, it would almost certainly have gotten Hawking a Nobel Prize.

However, the idea that the Large Hadron Collider would produce tiny black holes was never very plausible. That’s because there was no reason the extra dimensions, in case they exist to begin with, should have just the right size for this production to be possible. The only reason physicists thought this would be the case was an argument from mathematical beauty called “naturalness”. I have explained the problems with this argument in an earlier video, so check this out for more.

So, yeah, I don’t think tiny black holes at the Large Hadron Collider was Hawking’s best shot at a Nobel Prize.

Are there other ways you could see black holes evaporate? Not really. Without these curled up extra dimensions, which do not seem to exist, we can’t make black holes ourselves. Without extra dimensions, the energy density that we’d have to reach to make black holes is way beyond our technological limitations. And the black holes that are produced in natural processes are too large, and then too cold to observe Hawking radiation.

One thing you *can do, though, is simulating black holes with superfluids. This has been done by the group of Jeff Steinhauer in Israel. The idea is that you can use a superfluid to mimic the horizon of a black hole. If you remember, the horizon of a black hole is a boundary in space, from inside of which light cannot escape. In a superfluid, one does not trap light, but one traps sound waves instead. One can do this because the speed of sound in the superfluid depends on the density of the fluid. And since one can experimentally control this density, one can control the speed of sound.

If one then makes the fluid flow, there’ll be regions from within which the sound waves cannot escape because they’re just too slow. It’s like you’re trying to swim away from a waterfall. There’s a boundary beyond which you just can’t swim fast enough to get away. That boundary is much like a black hole horizon. And the superfluid has such a boundary, not for swimmers, but for sound waves.

You can also do this with a normal fluid, but you need the superfluid so that the sound has the right quantum properties, as it does in Hawking’s calculation. And in a series of really neat experiments, Steinhauer’s group has shown that these sound waves in the superfluid indeed have the properties that Hawking predicted. That’s because Hawking’s calculation applies to the superfluid in just exactly the same way it applies to real black holes.

Could Hawking have won a Nobel Prize for this? I don’t think so. That’s because mimicking a black hole with a superfluid is cool, but of course it’s not the real thing. These experiments are a type of quantum simulation, which means they demonstrate that Hawking’s calculation is correct. But the measurements on superfluids cannot demonstrate that Hawking’s prediction is correct for real black holes.

So, in all fairness, it never seemed likely Hawking would win a Nobel Prize for Hawking radiation. It’s just too hard to measure. But that wasn’t the only thing Hawking did in his career.

Before he worked on black hole evaporation, Hawking worked with Penrose on the singularity theorems. Penrose’s theorem showed that, in contrast to what most physicists believed at the time, black holes are a pretty much unavoidable consequence of stellar collapse. Before that, physicists thought black holes are mathematical curiosities that would not be produced in reality. It was only because of the singularity theorems that black holes began to be taken seriously. Eventually astronomers looked for them, and now we have solid experimental evidence that black holes exist. Hawking applied the same method to the early universe to show that the Big Bang singularity is likewise unavoidable, unless General Relativity somehow breaks down. And that is an absolutely amazing insight about the origin of our universe.

I made a video about the history of black holes two years ago in which I said that the singularity theorems are worth a Nobel Prize. And indeed, Penrose was one of the recipients of the 2020 Nobel Prize in physics. If Hawking had not died two years earlier, I believe he would have won the Nobel Prize together with Penrose. Or maybe the Nobel Prize committee just waited for him to die, so they wouldn’t have to think about just how to disentangle Hawking’s work from Penrose’s? We’ll never know.

Does it matter that Hawking did not win a Nobel Prize? Personally, I think of the Nobel Prize first and foremost as an opportunity to celebrate scientific discoveries. The people who we think might win this prize are highly deserving with or without an additional medal. And Hawking didn’t need a Nobel Prize, he’ll be remembered without it.

Saturday, March 27, 2021

Is the universe REALLY a hologram?

[This is a transcript of the video embedded below.]

Do we live in a hologram? String theorists think we do. But what does that mean? How do holograms work, and how are they related to string theory? That’s what we will talk about today.

In science fiction movies, holograms are 3-dimensional, moving images. But in reality, the technology for motion holograms hasn’t caught up with imagination. At least so far, holograms are still mostly stills.

The holograms you are most likely to have seen are not like those in the movies. They are not a projection of an object into thin air – however that’s supposed to work. Instead, you normally see a three-dimensional object above or behind a flat film. Small holograms are today frequently used as a security measure on credit cards, ID cards, or even banknotes, because they are easy to see, but difficult to copy.

If you hold such a hologram up to the light, you will see that it seems to have depth, even though it is printed on a flat surface. That’s because in photographs, we are limited to the one perspective from which the picture was taken, and that’s why they look flat. But you can tilt holograms and observe them from different angles, as if you were examining a three-dimensional object.

Now, these holograms on your credit cards, or the ones that you find on postcards or book covers, are not “real” holograms. They are actually composed of several 2-dimensional images and depending on the angle, a different image is reflected back at you, which creates the illusion of a 3-dimensional image.

In a real hologram the image is indeed 3-dimensional. But the market for real holograms is small, so they are hard to come by, even though the technology to produce them is straightforward. A real hologram looks like this.

Real holograms actually encode a three-dimensional object on a flat surface. How is this possible? The answer is interference.

Light is electromagnetic waves, so it has crests and troughs. And a key property of waves is that they can be overlaid and then amplify or wash out each other. If two waves are overlaid so that two crests meet at the same point, that will amplify the wave. This is called constructive interference. But if a crest meets a trough, the waves will cancel. This is called destructive interference.

Now, we don’t normally see light cancelling out other light. That’s because to see interference one needs very regular light, where the crests and troughs are neatly aligned. Sunlight or LED light doesn’t have that property. But laser light has it, and so laser light can interfere.
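The two cases of interference can be sketched in a few lines of Python, overlaying two waves of the same wavelength with a relative phase shift:

```python
import math

# Overlay two waves of equal wavelength with a relative phase shift.
# Shift 0: crests meet crests -> constructive interference.
# Shift pi: crests meet troughs -> destructive interference.
def peak_amplitude(phase_shift, n=1000):
    """Peak amplitude of the sum of two unit-amplitude sine waves."""
    peak = 0.0
    for i in range(n):
        x = 2 * math.pi * i / n
        peak = max(peak, abs(math.sin(x) + math.sin(x + phase_shift)))
    return peak

print(peak_amplitude(0))        # the waves amplify each other: amplitude ~2
print(peak_amplitude(math.pi))  # the waves cancel: amplitude ~0
```

This is exactly the property that holography exploits: whether the combined wave is amplified or cancelled at a point on the screen records the relative phase, and hence the path length, of the light that arrived there.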

And this interference can be used to create holograms. For this, one first splits a laser beam in two with a semi-transparent glass or crystal, called a beam-splitter, and makes each beam broader with a diverging lens. Then, one aims one half of the beam at the object that one wants to take an image of. The light will not just bounce off the object in one single direction, but it will scatter in many different directions. And the scattered light contains information about the surface of the object. Then, one recombines the two beams and captures the intensity of the light with a light-sensitive screen.

Now, remember that laser light can interfere. This means the intensity on the screen depends on whether the interference was destructive or constructive, which again depends on just where the object was located and how it was shaped. So, the screen has captured the full three-dimensional information. To view the hologram, one develops the film and shines light onto it at the same wavelength as the image was taken, which reproduces the 3-dimensional image.

To understand this in a little more detail, let us look at the image on the screen if one uses a very small point-like object. It looks like this. It’s called a zone plate. The intensity and width of the rings depends on the distance between the point-like object and the screen, and the wavelength of the light. But any object is basically a large number of point-like objects, so the interference image on the screen is generally an overlap of many different zone plates with these concentric rings.
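The ring pattern of a zone plate can be computed with a simplified (paraxial) model: a plane reference wave interferes with the spherical wave from the point object, and the extra path length at radius r on the screen is approximately r²/(2d). This is only a sketch of the geometry, not the full optics:

```python
import math

# Simplified paraxial model of a zone plate: a plane reference wave
# interferes with the wave from a point object at distance d behind
# the screen. Extra path length at radius r is about r^2 / (2*d).
def zone_plate_intensity(r, wavelength, d):
    """Relative intensity at radius r on the screen (between 0 and 4)."""
    phase = 2 * math.pi * (r**2 / (2 * d)) / wavelength
    return 2 + 2 * math.cos(phase)  # bright and dark concentric rings

wavelength = 633e-9  # red laser light, in meters
d = 0.1              # point object 10 cm from the screen

# Scanning outward, the intensity oscillates between bright (4) and
# dark (0), and the rings get narrower with increasing radius.
for r in (0.0, 2e-4, 4e-4, 6e-4):
    print(round(zone_plate_intensity(r, wavelength, d), 2))
```

The center is bright, the first dark ring sits where the extra path length equals half a wavelength, and because the path difference grows with r², the rings crowd together further out, just as in the concentric pattern described above.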

The amazing thing about holograms is now this. Every part of the screen receives information from every part of the object. As a consequence, if you develop the image to get the hologram, you can take it apart into pieces, and each piece will still recreate the whole 3-dimensional object. To understand better how this works, look again at the zone plate, the one of a single point-like object. If you have only a small piece that contains part of the rings, you can infer the rest of the pattern, though it gets a little more difficult. If you have a general plate that overlaps many zone plates, this is still possible. So, at least mathematically, you can reconstruct the entire object from any part of the holographic plate. In reality, the quality of the image will go down.

So, now that you know how real holograms work, let us talk about the idea that the universe is a hologram.

When string theorists claim that our universe is a hologram, they mean the following. Our universe has a positive cosmological constant. But mathematically, universes with a negative cosmological constant are much easier to work with. So, this is what string theorists usually look at. These universes with a negative cosmological constant are called Anti-de Sitter spaces and into these Anti-de Sitter things they put supersymmetric matter. To best current knowledge, our universe is not Anti-de Sitter and matter is not supersymmetric, but mathematically, you can certainly do that.

For some specific examples, it has then been shown that the gravitational theory in such an Anti de Sitter universe is mathematically equivalent to a different theory on the conformal boundary of that universe. What the heck is the conformal boundary of the universe? Well, our actual universe doesn’t have one. But these Anti-De Sitter spaces do. Just exactly how they are defined isn’t all that important. You only need to know that this conformal boundary has one dimension of space less than the space it is a boundary of.

So, you have an equivalence between two theories in a different number of dimensions of space. A gravitational theory in this anti-De Sitter space with the weird matter. And a different theory on the boundary of that space, which also has weird matter. And just so you have heard the name: The theory on the boundary is what’s called a conformal field theory, and the whole thing is known as the Anti-de Sitter – Conformal Field Theory duality, or AdS/CFT for short.

This duality has been mathematically confirmed for some specific cases, but pretty much all string theorists seem to believe it is much more generally valid. In fact, a lot of them seem to believe it is valid even in our universe, even though there is no evidence for that, neither observational nor mathematical. In this most general form, the duality is simply called the “holographic principle”.

If the holographic principle were correct, it would mean that the information about any volume in our universe is encoded on the boundary of that volume. That’s remarkable because naively, you’d think the amount of information you can store in a volume of space grows much faster than the information you can store on the surface. But according to the holographic principle, the information you can put into the volume somehow isn’t what we think it is. It must have more correlations than we realize. So if the holographic principle were true, that would be very interesting. I talked about this in more detail in an earlier video.
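The naive counting versus the holographic counting can be made concrete with a toy calculation. The units here are purely illustrative (one “bit” per unit volume versus one per unit boundary area); the point is only the scaling with the size R of the region:

```python
# Toy comparison of how stored information scales with region size R.
# Naive expectation: one bit per unit volume -> scales like R^3.
# Holographic principle: bits live on the boundary -> scales like R^2.
def naive_bits(R):
    return R**3  # volume scaling

def holographic_bits(R):
    return R**2  # boundary-area scaling

for R in (10, 100, 1000):
    # The gap between the two countings grows linearly with R.
    print(R, naive_bits(R) // holographic_bits(R))
```

The ratio between the two countings grows without bound as the region gets larger, which is why the holographic principle is such a surprising constraint on how much independent information a volume can really contain.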

The holographic principle indeed sounds a little like optical holography. In both cases one encodes information about a volume on a surface with one dimension less. But if you look a little more closely, there are two important differences between the holographic principle and real holography:

First, an optical hologram is not actually captured in two dimensions; the holographic film has a thickness, and you need that thickness to store the information. The holographic principle, on the other hand, is a mathematical abstraction, and the encoding really occurs in one dimension less.

Second, as we saw earlier, in a real hologram, each part contains information about the whole object. But in the mathematics of the holographic universe, this is not the case. If you take only a piece of the boundary, that will not allow you to reproduce what goes on in the entire universe.

This is why I don’t think referring to this idea from string theory as holography is a good analogy. But now you know just exactly what the two types of holography do, and do not have in common.

Saturday, March 20, 2021

Whatever happened to Life on Venus?

[This is a transcript of the video embedded below.]

A few months ago, the headlines screamed that scientists had found signs of life on Venus. But it didn’t take long for other scientists to raise objections. So, just exactly what did they find on Venus? Did they actually find it? And what does it all mean? That’s what we will talk about today.

The discovery that made headlines a few months ago was that an international group of researchers said they’d found traces of a molecule called phosphine in the atmosphere of Venus.

Phosphine is a molecule made of one phosphorus and three hydrogen atoms. On planets like Jupiter and Saturn, pressure and temperature are so high that phosphine can form by coincidental chemical reactions, and indeed phosphine has been observed in the atmosphere of these two planets. On planets like Venus, however, the pressure isn’t remotely large enough to produce phosphine this way.

And the only other known processes to create phosphine are biological. On Earth, for example, which in size and distance to the Sun isn’t all that different to Venus, the only natural production processes for phosphine are certain types of microbes. Lest you think this means that phosphine is somehow “good for life”, I should add that the microbes in question live without oxygen. Indeed, phosphine is toxic for forms of life that use oxygen, which is most of life on earth. In fact, phosphine is used in the agricultural industry to kill rodents and insects.

So, the production of phosphine on Venus at fairly low atmospheric pressure seems to require life in some sense, which is why the claim that there’s phosphine on Venus is BIG. It could mean there’s microbial life on Venus. And just in case microbial life doesn’t excite you all that much, this would be super-interesting because it would give us a clue to what the chances are that life evolves on other planets in general.

So, just exactly what did they find?

The suspicion that phosphine might be present on Venus isn’t entirely new. The researchers first saw something that could be phosphine in two-thousand and seventeen in data from the James Clerk Maxwell Telescope, which is a radio telescope in Hawaii. However, this signal was not particularly good, so they didn’t publish it. Instead they waited for more data from the ALMA telescope in Chile. Then they published a combined analysis of the data from both telescopes in Nature Astronomy.

Here’s what they did. One can look for evidence of molecules by exploiting that each molecule reacts to light at different wave-lengths. To some wave-lengths, a molecule may not react at all, but others it may absorb because they cause the molecule to vibrate or rotate around itself. It’s like each molecule has very specific resonance frequencies, like if you’re in an airplane and the engine’s being turned up and then, at a certain pitch the whole plane shakes? That’s a resonance. For the plane it happens at certain wavelengths of sound. For molecules it happens at certain wave-lengths of light.

So, if light passes through a gas, like the atmosphere of Venus, then just how much light at each wave-length passes through depends on what molecules are in the gas. Each molecule has a very specific signature, and that makes the identification possible.

At least in principle. In practice… it’s difficult. That’s because different molecules can have very similar absorption lines.

For example, the phosphine absorption line which all the debate is about has a frequency of two-hundred sixty-six point nine four four Gigahertz. But sulfur dioxide has an absorption line at two-hundred sixty-six point nine four three Gigahertz, and sulfur dioxide is really common in the atmosphere of Venus. That makes it quite a challenge to find traces of phosphine.

But challenges are there to be met. The astrophysicists estimated the contribution from sulfur dioxide from other lines which this molecule should also produce.

They found that these other lines were almost invisible. So they concluded that the absorption in the frequency range of interest had to be mostly due to phosphine and they estimated the amount with about seven to twenty parts per billion, so that’s seven to twenty molecules of phosphine per billion molecules of anything.

It’s this discovery which made the big headlines. The results they got for the phosphine amount from the two different telescopes are a little different, and such an inconsistency is somewhat of a red flag. But then, these measurements were made some years apart and the atmosphere of Venus could have undergone changes in that period, so it’s not necessarily a problem.

Unfortunately, after publishing their analysis, the team learned that the data from ALMA had not been processed correctly. It was not their fault, but it meant they had to redo their analysis. With the corrected data, the amount of phosphine they claimed to see fell to something between 1 and 4 parts per billion. Less, but still there.

Of course such an important finding attracted a lot of attention, and it didn’t take long for other researchers to have a close look at the analysis. It was not only the claimed detection of phosphine that was surprising; the near-absence of sulfur dioxide was unusual too. Sulfur dioxide had been detected many times in the atmosphere of Venus, in amounts about 10 times higher than what the phosphine-discovery study claimed it was.

Already in October last year, a paper came out that argued there’s no signal at all in the data, and that said the original study used an overly complicated twelve parameter fit that fooled them into seeing something where there was nothing. This criticism has since been published in a peer reviewed journal. And by the end of January another team put out two papers in which they pointed out several other problems with the original analysis.

First they used a model of the atmosphere of Venus and calculated that the alleged phosphine absorption comes from altitudes higher than eighty kilometers. Problem is, at such a high altitude, phosphine is incredibly unstable because ultraviolet light from the sun breaks it apart quickly. They estimated it would have a lifetime of under one second! This means for phosphine to be present on Venus in the observed amounts, it would have to be produced at a rate higher than the production of oxygen by photosynthesis on Earth. You’d need a lot of bacteria to get that done.

Second, they claim that the ALMA telescope should not have been able to see the signal at all, or at most a much smaller one, because of an effect called line dilution. Line dilution can occur in a telescope with many separate dishes, like ALMA. A signal that’s smeared out over many of the dishes, like the signal from the atmosphere of Venus, can then be suppressed by interference effects.

According to estimates in the new paper, line dilution should suppress the signal in the ALMA telescope by about a factor 10-20, in which case it would not be visible at all. And indeed they claim that no signal is entirely consistent with the data from the second telescope. This criticism, too, has now passed peer review.

What does it mean?

Well, the authors of the original study might reply to this criticism, and so it will probably take some time until the dust settles. But even if the criticism is correct, this would not mean there’s no phosphine on Venus. As they say, absence of evidence is not evidence of absence. If the criticism is correct, then the observations, exactly because they probe only high altitudes where phosphine is unstable, can neither exclude, nor confirm, the presence of phosphine on Venus. And so, the summary is, as so often in science: More work is needed.

Wednesday, March 17, 2021

Live Seminar about Dark Matter on Friday

I will give an online seminar about dark matter and modified gravity on Friday at 4pm CET. If you want to attend, the link is here:

I'm speaking in English (as you can see, half in American, half in British English, as usual), but the seminar will be live translated to Spanish, for which there's a zoom link somewhere.

Saturday, March 13, 2021

Can we stop hurricanes?

[This is a transcript of the video embedded below.]

Hurricanes are among the most devastating natural disasters. That’s because hurricanes are enormous! A medium-sized hurricane extends over an area about the size of Texas. On a globe they’ll cover 6 to 12 degrees latitude. And as they blow over land, they leave behind wide trails of destruction, caused by strong winds and rain. Damages from hurricanes regularly exceed billions of US dollars. Can’t we do something about that? Can’t we blast hurricanes apart? Redirect them? Or stop them from forming in the first place? What does science say about that? That’s what we’ll talk about today.

Donald Trump, the former president of the United States, has reportedly asked repeatedly whether it’s possible to get rid of hurricanes by dropping nuclear bombs on them. His proposal was swiftly dismissed by scientists and media alike. Their argument can be summed up with “you can’t,” and even if you could, “it’d be a bad idea.” Trump then denied he ever said anything, the world forgot about it, and here we are, still wondering whether there’s something we can do to stop hurricanes.

Trump’s idea might sound crazy, but he was not the first to think of nuking a hurricane, and he probably won’t be the last. And I think trying to prevent hurricanes isn’t as crazy as it sounds.

The idea of nuking a hurricane came up right after nuclear weapons were deployed for the first time, in Japan in August 1945. August is in the middle of the hurricane season in Florida. The mayor of Miami Beach, Herbert Frink, made the connection. He asked President Harry Truman about the possibility of using the new weapon to fight hurricanes. And, sure enough, the Americans looked into it.

But they quickly realized that while the energy released by a nuclear bomb was gigantic compared to all other kinds of weapons, it was still nothing compared to the energies that build up in hurricanes. For comparison: The atomic bombs dropped on Japan released an energy of about 20 kilotons each. A typical hurricane releases about 10,000 times as much energy – per hour. The total power of a hurricane is comparable to the entire global power consumption. That’s because hurricanes are enormous!
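The mismatch is easy to check with back-of-the-envelope arithmetic, using the rough figures quoted above. The numbers here are order-of-magnitude estimates, not precise measurements:

```python
# Back-of-the-envelope comparison of bomb energy vs. hurricane power,
# using the rough figures quoted in the text.

TNT_KILOTON_J = 4.184e12                   # joules released per kiloton of TNT

bomb_energy = 20 * TNT_KILOTON_J           # ~20 kt per bomb dropped on Japan
hurricane_energy_per_hour = 10_000 * bomb_energy  # ~10,000 bombs' worth, per hour

# Average power in watts (joules per second)
hurricane_power = hurricane_energy_per_hour / 3600

print(f"Hurricane power: {hurricane_power:.1e} W")  # a few times 10^14 watts
```

That comes out to a few hundred terawatts, which is indeed the same ballpark as estimates of humanity’s total rate of energy use.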

By the way, hurricanes and typhoons are the same thing. The generic term used by meteorologists is “tropical cyclone”. It refers to “a rotating, organized system of clouds and thunderstorms that originates over tropical or subtropical waters.” If they get strong enough, they’re called either hurricanes or typhoons, depending on the region, or they just remain tropical cyclones. But it’s like the difference between an astronaut and a cosmonaut. The same thing!

But back to the nukes. In 1956 an Air Force meteorologist by the name of Jack W. Reed proposed to launch a megaton nuclear bomb – about 50 times the power of the ones dropped on Japan – into a hurricane. Just to see what happened. He argued: “Since a complete theory for the dynamics of hurricanes will probably not be derived by meteorologists for several years, argument pros and con without conclusive foundation will be made over the effects to be expected… Only a full-scale test could prove the results.” In other words, if we don’t do it, we’ll never know just how bad the idea is. As far as the radiation hazard was concerned, Reed claimed it would be negligible: “An airburst would cause no intense fallout,” never mind that a complete theory for the dynamics of hurricanes wasn’t available then and still isn’t.

Reed’s proposal was dismissed by both the military and the scientific community. The test never took place, but the proposal is interesting nevertheless, because Reed went to some length to explain how to go about nuking a hurricane smartly.

To understand what he was trying to get at, let’s briefly talk about how hurricanes form. Hurricanes can form over the ocean when the water temperature is high enough. Trouble begins at around 26 degrees Celsius or 80 degrees Fahrenheit. The warm water evaporates and rises. As it rises it cools and creates clouds. This tower of water-heavy clouds begins to spin because the Coriolis force, which comes from the rotation of planet Earth, acts on the air that’s drawn in, and the more the clouds spin, the better they get at drawing in more air. As the spinning accelerates, the center of the hurricane clears out and leaves behind a mostly calm region that’s usually a few dozen miles in diameter and has very low barometric pressure. This calm center is called the “eye” of the hurricane.

Reed now argued that if one detonates a megaton nuclear weapon directly in the eye of a hurricane, this would blast away the warm air that feeds the cycle, increase the barometric pressure, and prevent the storm from gathering more strength.

Now, the obvious problem with this idea is that even if you succeeded, you’d deposit radioactive debris in clouds that you just blasted all over the globe, congratulations. But even leaving aside the little issue with the radioactivity, it almost certainly wouldn’t work because - hurricanes are enormous.

It’s not only that you’re still up against a power that exceeds that of your nuclear bomb by three orders of magnitude, it’s also that an explosion doesn’t actually move a lot of air from one place to another, which is what Reed envisioned. The blast creates a shock wave – that’s bad news for everything in the way of that shock – but it does little to change the barometric pressure after the shock wave has passed through.

So if nuclear bombs are not the way to deal with hurricanes, can we maybe make them rain off before they make landfall? This technique is called “cloud seeding” and we talked about this in a previous video. If you remember, there are two types of cloud seeding, one that creates snow or ice, and one that creates rain.

The first one, called glaciogenic seeding, was indeed tried on hurricanes by Homer Simpson. No, not that Homer, but a man by the name of Robert Homer Simpson, who in 1962 became the first director of the American Project Stormfury, which had the goal of weakening hurricanes.

The Americans actually *did spray a hurricane with silver iodide and observed afterwards that the hurricane indeed weakened. Hooray! But wait. Further research showed that hurricane clouds contain very few supercooled water droplets, so the method couldn’t work even in theory. Instead, it turned out that hurricanes frequently undergo similar changes without intervention, so the observation was most likely a coincidence. Project Stormfury was canceled in 1983.

What about hygroscopic cloud seeding, which works by spraying clouds with particles that absorb water, to make the clouds rain off? The effects of this have been studied to some extent by observing natural phenomena. For example, dust that’s blown up over the Sahara Desert can be transported by winds over long distances. Though much remains to be understood, some observations seem to indicate that interactions with this dust make it easier for the clouds to rain off, which naturally weakens hurricanes.

So why don’t we try something similar? Again, the problem is that hurricanes are enormous! You’d need a whole army of airplanes to spray the clouds, and even then that would almost certainly not make the hurricanes disappear, but merely weaken them.

There’s a long list of other things people have considered to get rid of hurricanes. For example, spraying the upper layers of a hurricane with particles that absorb sunlight to warm up the air, and thereby reduce the updraft. But again, the problem is that hurricanes are enormous! Keep in mind, you’d have to spray an area about the size of Texas.

A similar idea is to prevent the air above the ocean from evaporating and feeding the growth of the hurricane, for example by covering the ocean surface with oil films. The obvious problem with this idea is that, well, now you have all that oil on the ocean. But also, some small-scale experiments have shown that the oil-cover tends to break up, and where it doesn’t break up, it can actually aid the warming of the water, which is exactly what you don’t want.

How about we cool the ocean surface instead? This idea has been pursued for example by Bill Gates, who, in 2009, together with a group of scientists and entrepreneurs patented a pump system that would float in the ocean and pump cool water from deep down to the surface. In 2017 the Norwegian company SINTEF put forward a similar proposal. The problem with this idea is, guess what, hurricanes are enormous! You’d have to get a huge number of these pumps in the right place at the right time.

Another seemingly popular idea is to drag icebergs from the poles to the tropics to cool the water. I leave it to you to figure out the logistics for making this happen.

Yet other people have argued that one doesn’t actually have to blow apart a hurricane to get rid of it, one merely has to detonate a nuclear bomb strategically so that the hurricane changes direction. The problem with this idea is that no one wants multiple nations playing nuclear billiards on the oceans.

As you have seen, there are lots of ideas, but the key problem is that hurricanes are enormous!

And that means the most promising way to prevent them is to intervene before they get too large. Hurricanes don’t suddenly pop out of nowhere, they take several days to form and usually arise from storms in the tropics which also don’t pop out of nowhere.

What the problem then comes down to is that meteorologists can’t presently predict, well enough and long enough in advance, just which regions will go on to form hurricanes. But, as you have seen, researchers have tried quite a few methods to interfere with the feedback cycle that grows hurricanes, and some of them actually work. So, if we could tell just when and where to interfere, that might actually make a difference.

My conclusion therefore is: If you want to prevent hurricanes, you don’t need larger bombs, you need to invest in better weather forecasts.

Saturday, March 06, 2021

Do Complex Numbers Exist?

[This is a transcript of the video embedded below.]

When the world seems particularly crazy, I like looking into niche-controversies. A case where the nerds argue passionately over something that no one knew was controversial in the first place. In this video, I want to pick up one of these super-niche nerd fights: Are complex numbers necessary to describe the world as we observe it? Do they exist? Or are they just a mathematical convenience? That’s what we’ll talk about today.

So the recent controversy broke out when a paper appeared on the preprint server with the title “Quantum physics needs complex numbers”. The paper contains a proof for the claim in the title, in response to an earlier claim that one can do without the complex numbers.

What happened next is that the computer scientist Scott Aaronson wrote a blogpost in which he called the paper “striking”. But the responses were, well, not very enthusiastic. They ranged from “why fuss about it” to “bullshit” to “it’s missing the point.”

We’ll look at the paper in a moment, but first I will briefly summarize what we’re even talking about, so that no one’s left behind.

The Math of Complex Numbers

You probably remember from school that complex numbers are what you need to solve equations like x squared equals minus 1. You can’t solve that equation with the real numbers that we are used to. Real numbers are numbers that can have infinitely many digits after the decimal point, like square root of 2 and π, but they also include integers and fractions and so on. You can’t solve this equation with real numbers because real numbers never square to a negative number. If you want to solve equations like this, you therefore introduce a new number, usually denoted “i”, with the property that it squares to -1.

Interestingly enough, just giving a name to the solution of this one equation and adding it to the set of real numbers turns out to be sufficient to make all algebraic equations solvable. It doesn’t matter how long or how complicated the equation, you can always write all its solutions as a+ib, where a and b are real numbers.
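You can watch this at work numerically. Here’s a small sketch with an arbitrarily chosen fifth-degree polynomial (my example, not from any paper): every one of its roots comes out in the form a+ib.

```python
import numpy as np

# Any polynomial with real coefficients factors completely over the
# complex numbers -- the fundamental theorem of algebra.
# Example polynomial (an arbitrary choice): x^5 - 2x^3 + x - 7 = 0
coeffs = [1, 0, -2, 0, 1, -7]   # highest power first

roots = np.roots(coeffs)
print(roots)  # five roots, each of the form a + ib with a, b real

# Every root really solves the equation (up to floating-point error)
for r in roots:
    assert abs(np.polyval(coeffs, r)) < 1e-6
```

A degree-n polynomial always yields exactly n complex roots, counted with multiplicity; nothing beyond a and b is ever needed.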

Fun fact: This doesn’t work for numbers that have infinitely many digits before the point. Yes, that’s a thing, they’re called p-adic numbers. Maybe we’ll talk about this some other time.

Complex numbers are now all numbers of the type a plus i times b, where a and b are real numbers. “a” is called the “real” part, and “b” the “imaginary” part of the complex number. Complex numbers are frequently drawn in a plane, called the complex plane, where the horizontal axis is the real part and the vertical axis is the imaginary part. i itself is by convention in the upper half of the complex plane. But this looks the same as if you draw a map on a grid and name each point with two real numbers. Doesn’t this mean that the complex numbers are just a two-dimensional real vector space?

No, they’re not. And that’s because complex numbers multiply by a particular rule that you can work out by taking into account that the square of i is minus 1. Two complex numbers can be added as if they were vectors, but the multiplication law makes them different. Complex numbers are, to use the mathematical term, a “field”, like the real numbers. They have a rule both for addition AND for multiplication. They are not just like that two-dimensional grid.
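Here’s that multiplication rule made concrete, using two arbitrary example numbers. Working out (a+ib)(c+id) with i² = -1 gives (ac-bd) + i(ad+bc), and that’s exactly what a programming language’s built-in complex type does:

```python
# Complex numbers add componentwise like 2D vectors, but they also
# multiply -- and the multiplication rule follows from i*i = -1:
#   (a+ib)(c+id) = (ac - bd) + i(ad + bc)

z1 = complex(1, 2)    # 1 + 2i  (arbitrary example values)
z2 = complex(3, -1)   # 3 - 1i

# Addition: componentwise, exactly like vectors
assert z1 + z2 == complex(4, 1)

# Multiplication: NOT componentwise
a, b = z1.real, z1.imag
c, d = z2.real, z2.imag
assert z1 * z2 == complex(a*c - b*d, a*d + b*c)   # = 5 + 5i

# And the defining property itself:
assert complex(0, 1) ** 2 == -1
```

It’s this extra multiplication law that makes the complex numbers a field rather than a mere two-dimensional vector space.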

The Physics of Complex Numbers

We use complex numbers in physics all the time because they’re extremely useful. They’re useful for many reasons, but the major reason is this. If you take any real number, let’s call it α, multiply it by i, and put it into an exponential function, you get exp(iα). In the complex plane, this number, exp(iα), always lies on a circle of radius one around zero. And if you increase α, you go around that circle. Now, if you look only at the real or only at the imaginary part of that circular motion, you get an oscillation. And indeed, this exponential function is a sum of a cosine and i times a sine function.

Here’s the thing. If you multiply two of these complex exponentials, say one with α and one with β, you can just add the exponents. But if you multiply two cosines or a sine with a cosine… that’s a mess. You don’t want to do that. That’s why, in physics, we do the calculation with complex numbers, and then, at the very end, we take either the real or the imaginary part. Especially when we describe electromagnetic radiation, we have to deal with a lot of oscillations, and complex numbers come in very handy.
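A quick numerical sanity check of both claims, with two arbitrarily picked angles: exp(iα) sits on the unit circle with real part cos α and imaginary part sin α, and multiplying two such exponentials just adds the angles, which in the real parts is the clunky product-to-sum trig identity.

```python
import cmath
import math

alpha, beta = 0.7, 1.9   # arbitrary real angles

# exp(i*alpha) lies on the unit circle: real part cos, imaginary part sin
z = cmath.exp(1j * alpha)
assert abs(abs(z) - 1) < 1e-12
assert abs(z.real - math.cos(alpha)) < 1e-12
assert abs(z.imag - math.sin(alpha)) < 1e-12

# Multiplying the exponentials just adds the exponents...
lhs = cmath.exp(1j * alpha) * cmath.exp(1j * beta)
rhs = cmath.exp(1j * (alpha + beta))
assert abs(lhs - rhs) < 1e-12

# ...which, hidden in the real parts, is the messy trig identity
#   cos(a+b) = cos(a)cos(b) - sin(a)sin(b)
assert abs(lhs.real - (math.cos(alpha) * math.cos(beta)
                       - math.sin(alpha) * math.sin(beta))) < 1e-12
```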

But we don’t have to use them. In most cases we could do the calculation with only real numbers. It’s just cumbersome. With the exception of quantum mechanics, to which we’ll get in a moment, the complex numbers are not necessary.

And, as I have explained in an earlier video, it’s only if a mathematical structure is actually necessary to describe observations that we can say they “exist” in a scientifically meaningful way. For the complex numbers in non-quantum physics that’s not the case. They’re not necessary.

So, as long as you ignore quantum mechanics, you can think of complex numbers as a mathematical tool, and you have no reason to think they physically exist. Let’s then talk about quantum mechanics.

Complex Numbers in Quantum Mechanics

In quantum mechanics, we work with wave-functions, usually denoted Ψ, which are complex valued, and the equation that tells us what the wave-function does is the Schrödinger equation. It looks like this. You see immediately that there’s an “i” in this equation, which is why the wave-function has to be complex valued.

However, you can of course take the wave-function and this equation apart into a real and an imaginary part. Indeed, one often does that when one solves the equation numerically. And I remind you that both the real and the imaginary part of a complex number are real numbers. Now, if we calculate a prediction for a measurement outcome in quantum mechanics, then that measurement outcome will also always be a real number. So, it looks like you can get rid of the complex numbers in quantum mechanics by splitting the equation into a real and an imaginary part, and that’ll never make a difference for the result of the calculation.
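To make this decomposition concrete, here is a toy numerical sketch (my own illustration, not anything from the paper): writing ψ = u + iv, the Schrödinger equation i dψ/dt = Hψ splits into the two real equations du/dt = Hv and dv/dt = -Hu. Evolving a random state both ways, with an arbitrary real symmetric matrix standing in for the Hamiltonian, gives the same answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy Hamiltonian: an arbitrary real symmetric (hence Hermitian) 4x4 matrix
H = rng.normal(size=(4, 4))
H = (H + H.T) / 2

# A random normalized complex initial state
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Complex form:  i dpsi/dt = H psi   =>   dpsi/dt = -i H psi
# Split form (psi = u + i v):  du/dt = H v,   dv/dt = -H u
u, v = psi.real.copy(), psi.imag.copy()
psi_c = psi.copy()

dt, steps = 1e-4, 1000   # simple forward-Euler steps, small enough for a demo
for _ in range(steps):
    psi_c = psi_c + dt * (-1j) * (H @ psi_c)
    u, v = u + dt * (H @ v), v - dt * (H @ u)

# The two coupled real equations reproduce the complex evolution
assert np.allclose(psi_c, u + 1j * v)
```

This is exactly the sense in which the “i” in the Schrödinger equation can be traded for a pair of coupled real equations without changing any prediction.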

This finally brings us to the paper I mentioned in the beginning. What I just said about decomposing the Schrödinger equation is of course correct, but that’s not what they looked at in the paper, that would be rather lame.

Instead they ask what happens with the wave-function if you have a system that is composed of several parts, in the simplest case that would be several particles. In normal quantum mechanics, each of these particles has a wave-function that’s complex-valued, and from these we construct a wave-function for all the particles together, which is also complex-valued. Just what this wave-function looks like depends on which particle is entangled with which. If two particles are entangled, this means their properties are correlated, and we know experimentally that this entanglement-correlation is stronger than what you can do without quantum theory.

The question which they look at in the new paper is then whether there are ways to entangle particles in the normal, complex quantum mechanics that you cannot build up from particles that are described entirely by real-valued functions. Previous calculations showed that this could always be done if the particles came from a single source. But in the new paper they look at particles from two independent sources, and claim that there are cases which you cannot reproduce with real numbers only. They also propose a way to experimentally measure this specific entanglement.

I have to warn you that this paper has not yet been peer reviewed, so maybe someone finds a flaw in their proof. But assuming their result holds up, this means if the experiment which they propose finds the specific entanglement predicted by complex quantum mechanics, then you know you can’t describe observations with real numbers. It would then be fair to say that complex numbers exist. So, this is why it’s cool. They’ve figured out a way to experimentally test if complex numbers exist!

Well, kind of. Here is the fineprint: This conclusion only applies if you want the purely real-valued theory to work the same way as normal quantum mechanics. If you are willing to alter quantum mechanics, so that it becomes even more non-local than it already is, then you can still create the necessary entanglement with real valued numbers.

Why is it controversial? Well, if you belong to the shut-up and calculate camp, then this finding is entirely irrelevant. Because there’s nothing wrong with complex numbers in the first place. So that’s why you have half of the people saying “what’s the point” or “why all the fuss about it”. If you, on the other hand, are in the camp of people who think there’s something wrong with quantum mechanics because it uses complex numbers that we can never measure, then you are now caught between a rock and a hard place. Either embrace complex numbers, or accept that nature is even more non-local than quantum mechanics.

Or, of course, it might be that the experiment will not agree with the predictions of quantum mechanics, which would be the most exciting of all possible outcomes. Either way, I am sure this is a topic we will hear about again.

Tuesday, March 02, 2021

[Guest Post] Problems with Eric Weinstein's “Geometric Unity”

[This post is written by Timothy Nguyen, a mathematician and an author of the recently released paper “A Response to Geometric Unity”.]

On April 2, 2020, Eric Weinstein released a video of his 2013 Oxford lecture in which he presents his theory of everything “Geometric Unity” (GU). Since then, Weinstein has appeared in interviews alongside Sabine Hossenfelder, Brian Keating, Lee Smolin, Max Tegmark, and Stephen Wolfram to discuss his theory. 

In these interviews, Weinstein laments that the scientific community is dismissive of GU because he has not released a technical paper, but insists that scientists should be able to understand the substantive content of GU from the lecture alone (see here and here). In fact, Weinstein regards the conventional requirement of writing a paper to be flawed, since he questions the legitimacy of peer review, credit assignment, and institutional recognition (see here, here, here, and here).

Theo, my anonymous physicist coauthor, and I became aware of Weinstein and Geometric Unity through his podcast The Portal. We independently communicated with Weinstein on Discord and we both came to the conclusion that Weinstein was unable to provide an adequate explanation of GU or why it was a compelling theory. 

I also became increasingly skeptical of Weinstein’s claims when I pressed him about his alleged discovery of the Seiberg-Witten equations before Seiberg and Witten (see here, here, here, and here), a set of equations which was the central focus of my PhD thesis and several resultant papers. When I asked Weinstein for certain mathematical details about how he had arrived at the Seiberg-Witten equations, his vague responses led me to doubt his claims. Though Weinstein proposed to host a more in-depth discussion about GU and the requisite math and physics, no such discussion ever materialized.

These difficulties in communicating with Weinstein are what motivated our response paper. Suffice it to say that it was no easy task, as it required repeatedly watching his YouTube lecture and carefully timestamping its content in order to cite the material. These appear as clickable links in our response paper for those who wish to verify that our transcription of Weinstein's presentation is accurate.

Here's the high-level overview of how GU makes a claim towards a Theory of Everything. Essentially, GU asserts that there is a set of equations in 14 dimensions that are to contain the Einstein equations, Dirac equation, and Yang-Mills equations. Because the Einstein equations describe gravity, the Dirac equation accounts for fermions, and the Yang-Mills equations account for gauge-theories describing the strong and electroweak forces, all fundamental forces and particle types are therefore superficially accounted for. It is our understanding that it is in this very limited and weak sense that GU attempts to position itself as a Theory of Everything.

The most glaring deficiency in Weinstein’s presentation is that it does not incorporate any quantum theory. Establishing a consistent quantum theory of gravity alone has defied the efforts of nearly a century’s worth of vigorous research and is part of what makes formulating a Theory of Everything an enormous challenge. For GU to overlook this obstacle means that it has no possible claim on being a Theory of Everything.

Our findings are that even aside from its status as Theory of Everything, GU contains serious technical gaps both mathematical and physical. In summary:
  • GU introduces a “shiab” operator that overlooks a required complexification step. Omitting this step creates a mathematical error but including it precludes having a physically sensible quantum theory. 
  • The choice of gauge group for GU naively leads to a quantum gauge anomaly, thereby rendering the quantum theory inconsistent. Any straightforward attempt to eliminate this anomaly would make the shiab operator impossible to define, compounding the previous objection. 
  • The setup of GU asserts that it will have supersymmetry. In 14 dimensions, adopting supersymmetry is highly restrictive. It implies that the proposed gauge group of GU cannot be correct and that the theory as stated is incomplete. 
  •  Essential technical details of GU are omitted, leaving many of the central claims unverifiable.

Coincidentally, the night before we posted our response paper, Weinstein announced on Lex Fridman’s podcast that he plans on releasing a paper on GU on April 1st. We look forward to seeing Weinstein's response to the problems we have identified.

Saturday, February 27, 2021

Schrödinger’s Cat – Still Not Dead

[This is a transcript of the video embedded below.]

The internet, as we all know, was invented so we can spend our days watching cat videos, which is why this video is about the most famous of all science cats, Schrödinger’s cat. Is it really both dead and alive? If so, what does that mean? And what does recent research have to say about it? That’s what we’ll talk about today.

Quantum mechanics has struck physicists as weird ever since its discovery, more than a century ago. One especially peculiar aspect of quantum mechanics is that it forces you to accept the existence of superpositions. These are systems that can be in two states at the same time, until you make a measurement, which suddenly “collapses” the superposition into one definite measurement outcome.

The system here could be a single particle, like a photon, but it could also be a big object made of many particles. The thing is that in quantum mechanics, if two states exist separately, like an object being here and being there, then the superposition – that is the same object both here and there – must also exist. We know this experimentally, and I explained the mathematics behind this in an earlier video.

Now, you may think that being in a quantum superposition is something that only tiny particles can do. But these superpositions for large objects can’t be easily ignored, because you can take the tiny ones and amplify them to macroscopic size.

This amplification is what Erwin Schrödinger wanted to illustrate with a hypothetical experiment he came up with in 1935. In this experiment, a cat is in a box, together with a vial of poison, a trigger mechanism, and a radioactive atom. The nucleus of the atom has a fifty percent chance of decaying in a certain amount of time. If it decays, the trigger breaks the vial of poison, which kills the cat.

But the decay follows the laws of quantum physics. Before you measure it, the nucleus is both decayed and not decayed, and so, it seems that before one opens the box, the cat is both dead and alive. Or is it?

Well, depends on your interpretation of quantum mechanics, that is, what you think the mathematics means. In the most widely taught interpretation, the Copenhagen interpretation, the question what state the cat is in before you measure it is just meaningless. You’re not supposed to ask. The same is the case in all interpretations according to which quantum mechanics is a theory about the knowledge we have about a system, and not about the system itself.

In the many-worlds interpretation, in contrast, each possible measurement outcome happens in a separate universe. So, there’s a universe where the cat lives and one where the cat dies. When someone opens the box, that decides which universe they’re in. But as far as observations are concerned, the result is exactly the same as in the Copenhagen interpretation.

Pilot wave-theory, which we talked about earlier, says that the cat is really always in only one state, you just don’t know which one it is until you look. The same is the case for spontaneous collapse models. In these models, the collapse of the wave-function is not merely an update when you open the box, but it’s a physical process.

It’s no secret that I myself am signed up to superdeterminism, which means that the measurement outcome is partly determined by the measurement settings. In this case, the cat may start out in a superposition, but by the time you measure it, it has reached the state which you actually observe. So, there is no sudden collapse in superdeterminism, it’s a smooth, deterministic, and local process.

Now, one cannot experimentally tell apart interpretations of the mathematics, but collapse models, superdeterminism, and, under certain circumstances, pilot wave theory, make different predictions than Copenhagen or many worlds. So, clearly, one wants to do the experiment!

But. As you have undoubtedly noticed, cats are usually either dead or alive, not both. The reason is that even tiny interactions with a quantum system have the same effect as a measurement, and large objects, like cats, just constantly interact with something, like air or the cosmic background radiation. And that’s already sufficient to destroy a quantum superposition of a cat so quickly we’d never observe it. But physicists are trying to push the experimental boundary for bringing large objects into quantum states.

For example, in 2013, a team of physicists from the University of Calgary in Canada amplified a quantum superposition of a single photon. They first fired the photon at a partially silvered mirror, called a beam splitter, so that it became a superposition of two states: it passed through the mirror and also reflected back off it. Then they used one part of this superposition to trigger a laser pulse, which contains a whole lot of photons. Finally, they showed that the pulse was still in a superposition with the single photon. In another 2019 experiment, they amplified both parts of this superposition, and again they found that the quantum effects survived, for up to about 100 million photons.

Now, a group of 100 million photons is not a cat, but it is bigger than your standard quantum particle. So, some headlines referred to this as the “Schrödinger's kitten” experiment.

But just in case you think a laser pulse is a poor approximation for a cat, how about this. In 2017, scientists at the University of Sheffield put bacteria in a cavity between two mirrors and bounced light between the mirrors. The bacteria absorbed, emitted, and re-absorbed the light multiple times. The researchers could demonstrate that in this way some of the bacteria’s molecules became entangled with the cavity, which is a special case of a quantum superposition.

However, a paper published the following year by scientists at Oxford University argued that the observations on the bacteria could also be explained without quantum effects. Now, this doesn’t mean that this is the correct explanation. Indeed, it doesn’t make much sense because we already know that molecules have quantum effects and they couple to light in certain quantum ways. However, this criticism demonstrates that it can be difficult to prove that something you observe is really a quantum effect, and the bacteria experiment isn’t quite there yet.

Let us then talk about a variant of Schrödinger’s cat that Eugene Wigner came up with in the nineteen-sixties. Imagine that this guy Wigner is outside the laboratory in which his friend just opens the box with the cat. In this case, not only would the cat be both dead and alive before the friend observes it, the friend would also both see a dead cat and see a live cat, until Wigner opens the door to the room where the experiment took place.

This sounds both completely nuts and like an unnecessary complication, but bear with me for a moment, because this is a really important twist on Schrödinger's cat experiment. If you think that the first measurement, so the friend observing the cat, actually resulted in a definite outcome, just that Wigner, outside the lab, doesn't know it, then, as long as the door is closed, you effectively have a deterministic hidden variable model for the second measurement. The result is already fixed, you just don't know what it is. But we know that deterministic hidden variable models cannot reproduce the results of quantum mechanics, unless they are also superdeterministic.
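The statement that deterministic hidden variable models cannot reproduce quantum mechanics is usually made precise with Bell-type inequalities. Here is a minimal numerical check using the standard CHSH setup; the angles and the singlet-state correlation formula are textbook quantum mechanics, not anything specific to the Wigner's friend experiment:

```python
import math

# CHSH test: for the entangled singlet state, quantum mechanics predicts
# the correlation E(a, b) = -cos(a - b) between measurements at angles a, b.
def E(a, b):
    return -math.cos(a - b)

# Standard angle choices that maximize the quantum violation.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)

# Any local deterministic hidden variable model obeys |S| <= 2.
# Quantum mechanics reaches 2*sqrt(2), about 2.83 -- and experiments agree.
print(abs(S))
```

The Brisbane experiment mentioned below tested an extended version of this logic, with additional "observer" devices standing in for the friends.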

Now, again, of course, you can’t actually do the experiment with cats and friends and so on because their quantum effects would get destroyed too quickly to observe anything. But recently a team at Griffith University in Brisbane, Australia, created a version of this experiment with several devices that measure, or observe, pairs of photons. As anticipated, the measurement result agrees with the predictions of quantum mechanics.

What this means is that one of the following three assumptions must be wrong:

1. No Superdeterminism.
2. Measurements have definite outcomes.
3. No spooky action at a distance.

The absence of superdeterminism is sometimes called “Free choice” or “Free will”, but really it has nothing to do with free will. Needless to say, I think what’s wrong is rejecting superdeterminism. But I am afraid most physicists presently would rather throw out objective reality. Which one are you willing to give up? Let me know in the comments.

As of now, scientists remain hard at work trying to unravel the mysteries of Schrödinger's cat. For example, a promising line of investigation that’s still in its infancy is to measure the heat of a large system to determine whether quantum superpositions can influence its behavior. You find references to that as well as to the other papers that I mentioned in the info below the video. Schrödinger, by the way, didn’t have a cat, but a dog. His name was Burschie.

Wednesday, February 24, 2021

What's up with the Ozone Layer?

[This is a transcript of the video embedded below.]

Without the ozone layer, life, as we know it, would not exist. Scientists therefore closely monitor how the ozone layer is doing. In the past years, two new developments have attracted their attention and concern. What have they found and what does it mean? That’s what we’ll talk about today.

First things first, ozone is a molecule made of three oxygen atoms. It’s unstable, and on the surface of Earth it decays quickly, on average within a day or so. For this reason, there’s very little ozone around us, and that’s good, because breathing in ozone is really unhealthy even in small doses.
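To get a feeling for what that decay time means, here is a quick back-of-the-envelope calculation. The one-day mean lifetime is just the rough figure quoted above; actual lifetimes vary with conditions:

```python
import math

# Exponential decay: fraction of surface ozone remaining after t days,
# assuming a mean lifetime of roughly one day (the rough figure above).
mean_lifetime_days = 1.0

def fraction_remaining(t_days):
    return math.exp(-t_days / mean_lifetime_days)

for t in [0.5, 1, 2, 7]:
    print(f"after {t} day(s): {fraction_remaining(t):.1%} left")
```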

But ozone is produced when sunlight hits the upper atmosphere, and accumulates far up there in a region called the “stratosphere”. This “ozone layer” then absorbs much of the sun’s ultraviolet light. The protection we get from the ozone layer is super-important, because the energy of ultraviolet light is high enough to break molecular bonds. Ultraviolet light, therefore, can damage cells or their genetic code. This means, with exposure to ultraviolet light, the risk of cancer and other mutations increases significantly. I have explained radiation risk in more detail in an earlier video, so check this out for more.

You have probably all heard of the ozone “hole” that was first discovered in the 1980s. This ozone hole is still with us today. It was caused by human emissions of ozone-depleting substances, notably chlorofluorocarbons – CFCs for short – that were used, among other things, in refrigerators and spray cans. CFCs have since been banned, but it will take at least several more decades for the ozone layer to completely recover. With that background knowledge, let’s now look at the two new developments.

What’s new?

The first news is that last year we saw a large and pronounced ozone hole over the North Pole, in addition to the “usual” one over the South Pole. This has happened before, but it’s still an unusual event. That’s because the creation of an ozone hole is driven by supercooled droplets of water and nitric acid which are present in polar stratospheric clouds, that is, clouds found at the poles in the stratosphere. But these clouds can only form if it’s cold enough, and I mean really cold, below about −108 °F or −78 °C. Therefore, the major reason that ozone holes form more readily over the South Pole than over the North Pole is quite simply that the South Pole is, on average, colder.

Why is the South Pole colder? Loosely speaking it’s because there are fewer high mountains in the Southern hemisphere than in the Northern hemisphere. And because of this, wind circulations around the South Pole tend to be more stable; they can lock in air, which then cools over the dark polar winter months. Air over the North Pole, in contrast, mixes more efficiently with warmer air from the mid latitudes.

On occasion, however, cold air gets locked in over the North Pole as well, which creates conditions similar to those at the South Pole. This is what happened in the Spring of 2020. For five weeks in March and early April, the North Pole saw the biggest arctic ozone hole on record, surrounded by a stable wind circulation called a polar vortex.

Now, we have all witnessed in the past decade that climate change alters wind patterns in the Northern Hemisphere, which gives rise to longer heat waves in the summer. This brings up the question whether climate change was one of the factors contributing to the northern ozone hole and whether we, therefore, must expect it to become a recurring event.

This question was studied in a recent paper by Martin Dameris and coauthors, for the full reference, please check the info below the video. Their conclusion is that, so far, observations of the northern ozone hole are consistent with it just being a coincidence. However, if coincidences pile upon coincidences, they make a trend. And so, researchers are now waiting to see whether the hole will return in the Spring of 2021 or in the coming years.

The second new development is that the ozone layer over the equator isn’t recovering as quickly as scientists expected. Indeed, above the equator, the amount of ozone in the lower parts of the stratosphere seems to be declining, though that trend is, for now, offset by the recovery of ozone in the upper parts of the stratosphere, which proceeds as anticipated.

The scientists who work on this previously considered various possible reasons, from data problems to illegal emissions of ozone-depleting substances. But the data have held up, and while we now know illegal emissions are indeed happening, these do not suffice to explain the observations.

Instead, further analysis indicates that the depletion of ozone in the lower stratosphere over the equator seems to be driven, again, by wind patterns. Earth’s ozone is itself created by sunlight, which is why most of it forms over the equator where sunlight is most intense. The ozone is then transported from the equatorial regions towards the poles by a wind cycle – called the “Brewer-Dobson circulation” – in which air rises over the equator and comes down again in mid to high latitudes. With global warming, that circulation may become more intense, so that more ozone is redistributed from the equator to higher latitudes.

Again, though, the strength of this circulation also changes just by random chance. It’s therefore presently unclear whether the observations merely show a temporary fluctuation or are indicative of a trend. However, a recent analysis of different climate-chemistry models by Simone Dietmüller et al shows that human-caused carbon dioxide emissions contribute to the trend of less ozone over the equator and more ozone in the mid-latitudes, and the trend is therefore likely to continue. I have to warn you though that this paper has not yet passed peer review.

Before we talk about what this all means, I want to thank my tier four supporters on Patreon. Your help is greatly appreciated. And you, too, can help us produce videos by supporting us on Patreon. Now let’s talk about what this news from the ozone layer means.

You may say, ah, so what. Tell the people in the tropics to put on more sun-lotion and those in Europe to take more vitamin D. This is a science channel, and I’ll not tell anyone what they should or shouldn’t worry about, that’s your personal business. But to help you gauge the present situation, let me tell you an interesting bit of history.

The Montreal protocol from 1987, which regulates the phasing out of ozone depleting substances, was passed quickly after the discovery of the first ozone hole. It is often praised as a milestone of environmental protection, the prime example that everyone points to for how to do it right. But I think the Montreal Protocol teaches us a very different lesson.

That’s because scientists knew already in the 1970s, long before the first ozone hole was discovered, that chlorofluorocarbons would deplete the ozone layer. But they thought the effect would be slow and global. When the ozone hole over the South Pole was discovered by the British Antarctic Survey in 1985, that came as a complete surprise.

Indeed, fun fact, it later turned out that American satellites had measured the ozone hole years before the British Survey did, but since the data were so far off the expected value, they were automatically overwritten by software.

The issue was that at the time the effects of polar stratospheric clouds on the ozone layer were poorly understood, and the real situation turned out to be far worse than scientists thought.

So, for me, the lesson from the Montreal Protocol is that we’d be fools to think that we now have all pieces in place to understand our planet’s climate system. We know we’re pushing the planet into regimes that scientists poorly understand and chances are that this will bring more unpleasant surprises.

So what do those changes in the ozone layer mean? They mean we have to pay close attention to what’s happening.

Saturday, February 20, 2021

The Science of Making Rain

[This is a transcript of the video embedded below]

Wouldn’t it be great if we could control the weather? I am sure people have thought about this ever since there’ve been people having thoughts. But what are scientists thinking about this today? In this video we’ll look at the best understood case of weather control, that’s making rain by seeding clouds. How is cloud seeding supposed to work? Does it work? And if it works, is it a good idea? That’s what we’ll talk about today.

First things first, what is cloud seeding? Cloud seeding is a method for increasing precipitation, which is a fancy word for water that falls from the sky in any form: rain, snow, hail and so on. One seeds a cloud by spraying small particles into it, which encourages the cloud to shed precipitation. At least that’s the idea. Cloud seeding does not actually create new clouds. It’s just a method to get water out of already existing clouds. So you can’t use it to turn a desert into a forest – the water needs to be in the air already.

Cloud seeding was discovered, as so many things, accidentally. In nineteen-forty-six a man named Vincent Schaefer was studying clouds in a box in his laboratory, but it was too warm for his experiment to work. So he put dry ice into his cloud box, that’s carbon dioxide frozen at about minus eighty degrees Celsius. He then observed that small grains of dry ice would rapidly grow to the size of snowflakes.

Schaefer realized this happened because the water in the clouds was supercooled, that means below freezing point, but still liquid. This is an energetically unstable state. If one introduces tiny amounts of crystals into a supercooled cloud, the water droplets will attach to the crystals immediately and freeze, so the crystals grow quickly until they are heavy enough to fall down. Schaefer saw this happening when sprinkles of solid dry ice fell into his box. He had seeded the first cloud. In the following years he’d go on to test various methods of cloud seeding.

Today scientists distinguish two different ways of seeding clouds, either by growing ice crystals, as Schaefer did, that’s called Glaciogenic seeding. Or by growing water droplets, which is called hygroscopic seeding.

How does it work?

The method that Schaefer used is today more specifically called the “Glaciogenic static mode”, static because it doesn’t rely on circulation within the cloud. There’s also a Glaciogenic dynamic mode which works somewhat differently.

In the dynamic mode, one exploits that the conversion of the supercooled water into ice releases heat, and that heat creates an updraft. This allows the seeds to reach more water droplets, so the cloud grows, and eventually more snow falls. One of the substances commonly used for this is silver iodide, though there are a number of different organic and inorganic substances that have proved to work.

For hygroscopic seeding one uses particles that absorb water and serve as condensation seeds, turning water vapor into large drops that become rain. The substances used for this are typically some type of salt.

How do you do it?

Seeding clouds in a box in the laboratory is one thing, seeding a real cloud another thing entirely. To seed a real cloud, one either uses airplanes that spray the seeding particles directly into the cloud, or targets the cloud with a rocket which gives off the particles, or one uses a ground-based generator that releases the particles slowly mixed with hot air, that rises up into the atmosphere. They do this for example in Colorado, and other winter tourism areas, and claim that it can lead to several inches more snow.

But does it work?

It’s difficult to test if cloud seeding actually works. The issue is, as I said, seeding doesn’t actually create clouds, it just encourages clouds to release snow or rain at a particular time and place. But how do you know if it wouldn’t have rained anyway?

After Schaefer’s original work, the United States launched a research program on cloud seeding in the nineteen-fifties, and so did several other countries including the UK, Canada, India, and Australia. But evidence that cloud seeding works was not forthcoming for a long time, and so, in the late nineteen-eighties, funding for this research area drastically declined. That didn’t deter people from trying to seed clouds though. Despite the absence of evidence, quite a few winter sport areas used cloud seeding in an attempt to increase snowfall.

But beginning around the turn of the millennium, interest in cloud seeding was revived by several well-funded studies in the United States, Australia, Japan, and China, just to name a few. Quite possibly this interest was driven by the increasing risk of drought due to climate change. And today, scientists have much better technology to figure out whether cloud seeding works, and so, the new studies could finally deliver evidence that it does work.

Some of the most convincing studies used radar measurements to detect ice crystals in clouds after a plane went through and distributed the seeds. This was done for example in a 2011 study in Australia and also in a 2018 study in the northern part of the United States.

These radar measurements are a direct signature of seeding, glaciogenic seeding in this case. The researchers can tell that the ice crystals are caused by the seeding because the crystals that appear in the radar signal replicate the trajectory of the seeding plane, downwind.

From the radar measurements they can also tell that the concentration of ice crystals is two to three orders of magnitude larger than those in neighboring, not-seeded areas. And, they know that the newly formed ice-crystals grow, because the amount of radar signal that’s reflected depends on the size of the particle.

This and similar studies also contained several cross checks. For example, they seeded some areas of the clouds with particles that are known to grow ice crystals and others with particles that aren’t expected to do that. And they detected ice formation only for the particles that act as seeds. They also checked that the resulting snowfall is really the one that came from the seeding. One can do this by analyzing the snow for traces of the substance used for seeding.

Besides this, there are also about a dozen studies that evaluated statistically whether there are changes in precipitation from glaciogenic static seeding. These come from research programs in the United States, Australia, and Japan. To get statistics, they monitor the unseeded areas surrounding the seeded region as an estimate of the natural precipitation. It’s not a perfect method of course, but done often enough and for long enough periods, it gives a reasonable assessment of the increase in precipitation due to seeding.

These studies typically found an increase in precipitation of around 15% and estimated the probability that this increase happened just coincidentally at 5%.
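The kind of statistical assessment described above can be sketched with a toy significance test. The numbers below are invented for illustration, not data from the actual studies:

```python
import random

# Hypothetical per-event precipitation ratios (seeded target vs. what the
# surrounding unseeded area suggests) -- made-up numbers, for illustration.
ratios = [1.21, 1.08, 1.17, 0.95, 1.30, 1.12, 1.05, 1.22, 1.18, 1.09]

observed = sum(r - 1.0 for r in ratios) / len(ratios)  # mean excess precipitation

# Sign-flip permutation test: if seeding did nothing, each deviation from
# ratio 1.0 would be equally likely to be positive or negative.
random.seed(0)
trials = 10_000
count = 0
for _ in range(trials):
    fake = sum(random.choice([-1, 1]) * (r - 1.0) for r in ratios) / len(ratios)
    if fake >= observed:
        count += 1

p_value = count / trials
print(f"mean increase {observed:.0%}, p = {p_value:.4f}")
```

A small p-value here means it's unlikely that an excess this large would appear by chance alone; the real studies use more sophisticated target-control comparisons, but the logic is similar.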

So, at least for the seeding of ice crystals, there is now pretty solid evidence that it works better than a rain dance. For the other types of seeding it’s still unclear whether it’s efficient.

Please check the information below the video for references to the papers.

The world’s biggest weather modification program is China’s. The Chinese government employs an estimated 35,000 people to this end already, and in December 2020 they announced they’ll increase investments into their weather modification program five-fold.

Now, as we have seen, cloud seeding isn’t terribly efficient and for it to work, the clouds have to be already there in the first place. Nevertheless, there’s an obvious worry here. If some countries can go and make clouds rain off over their territory, that might leave less water for neighboring countries.

And the bad news is, there aren’t currently any international laws regulating this. Most countries have regulations for what you are allowed to spray into the air or how much, but cloud seeding is mostly legal. There is an international convention, the Environmental Modification Convention, that seventy-eight states have signed, which prohibits “the military and hostile use of environmental modification techniques.” But this can’t in any clear way be applied to cloud seeding.

I think that now that we know cloud seeding does work, we should think about how to regulate it, before someone actually gets good at it. Controlling the weather is an ancient dream, but, thanks to Vincent Schaefer, maybe it won’t remain a dream forever. When he died in 1993, his obituary in the New York Times said “He was hailed as the first person to actually do something about the weather and not just talk about it”.

Tuesday, February 16, 2021

Saturday, February 13, 2021

The Simulation Hypothesis is Pseudoscience

[This is a transcript of the video embedded below.]

I quite like the idea that we live in a computer simulation. It gives me hope that things will be better on the next level. Unfortunately, the idea is unscientific. But why do some people believe in the simulation hypothesis? And just exactly what’s the problem with it? That’s what we’ll talk about today.

According to the simulation hypothesis, everything we experience was coded by an intelligent being, and we are part of that computer code. That we live in some kind of computation in and by itself is not unscientific. For all we currently know, the laws of nature are mathematical, so you could say the universe is really just computing those laws. You may find this terminology a little weird, and I would agree, but it’s not controversial. The controversial bit about the simulation hypothesis is that it assumes there is another level of reality where someone or some thing controls what we believe are the laws of nature, or even interferes with those laws.

The belief in an omniscient being that can interfere with the laws of nature, but for some reason remains hidden from us, is a common element of monotheistic religions. But those who believe in the simulation hypothesis argue they arrived at their belief by reason. The philosopher Nick Bostrom, for example, claims it’s likely that we live in a computer simulation based on an argument that, in a nutshell, goes like this. If there are a) many civilizations, and these civilizations b) build computers that run simulations of conscious beings, then c) there are many more simulated conscious beings than real ones, so you are likely to live in a simulation.

Elon Musk is among those who have bought into it. He too has said “it’s most likely we’re in a simulation.” And even Neil deGrasse Tyson gave the simulation hypothesis “better than 50-50 odds” of being correct.

Maybe you’re now rolling your eyes because, come on, let the nerds have some fun, right? And, sure, some part of this conversation is just intellectual entertainment. But I don’t think popularizing the simulation hypothesis is entirely innocent fun. It’s mixing science with religion, which is generally a bad idea, and, really, I think we have better things to worry about than that someone might pull the plug on us. I dare you!

But before I explain why the simulation hypothesis is not a scientific argument, I have a general comment about the difference between religion and science. Take an example from Christian faith, like Jesus healing the blind and lame. It’s a religious story, but not because it’s impossible to heal blind and lame people. One day we might well be able to do that. It’s a religious story because it doesn’t explain how the healing supposedly happens. The whole point is that the believers take it on faith. In science, in contrast, we require explanations for how something works.

Let us then have a look at Bostrom’s argument. Here it is again. If there are many civilizations that run many simulations of conscious beings, then you are likely to be simulated.

First of all, it could be that one or both of the premises is wrong. Maybe there aren’t any other civilizations, or they aren’t interested in simulations. That wouldn’t make the argument wrong of course, it would just mean that the conclusion can’t be drawn. But I will leave aside the possibility that one of the premises is wrong because really I don’t think we have good evidence for one side or the other.

The point I have seen people criticize most frequently about Bostrom’s argument is that he just assumes it is possible to simulate human-like consciousness. We don’t actually know that this is possible. However, it’s really the opposite assumption, that consciousness cannot be simulated, which would require explanation. That’s because, for all we currently know, consciousness is simply a property of certain systems that process large amounts of information. It doesn’t really matter exactly what physical basis this information processing is based on. Could be neurons or could be transistors, or it could be transistors believing they are neurons. So, I don’t think simulating consciousness is the problematic part.

The problematic part of Bostrom’s argument is that he assumes it is possible to reproduce all our observations using not the natural laws that physicists have confirmed to extremely high precision, but using a different, underlying algorithm, which the programmer is running. I don’t think that’s what Bostrom meant to do, but it’s what he did. He implicitly claimed that it’s easy to reproduce the foundations of physics with something else.

But nobody presently knows how to reproduce General Relativity and the Standard Model of particle physics from a computer algorithm running on some sort of machine. You can approximate the laws that we know with a computer simulation – we do this all the time – but if that was how nature actually worked, we could see the difference. Indeed, physicists have looked for signs that natural laws really proceed step by step, like in a computer code, but their search has come up empty-handed. It’s possible to tell the difference because attempts to algorithmically reproduce natural laws are usually incompatible with the symmetries of Einstein’s theories of special and general relativity. I’ll leave you a reference in the info below the video. The bottom line is, it’s not easy to outdo Einstein.

It also doesn’t help by the way if you assume that the simulation would run on a quantum computer. Quantum computers, as I have explained earlier, are special purpose machines. Nobody currently knows how to put General Relativity on a quantum computer.

A second issue with Bostrom’s argument is that, for it to work, a civilization needs to be able to simulate a lot of conscious beings, and these conscious beings will themselves try to simulate conscious beings, and so on. This means you have to compress the information that we think the universe contains. Bostrom therefore has to assume that it’s somehow possible to not care much about the details in some parts of the world where no one is currently looking, and just fill them in in case someone looks.

Again though, he doesn’t explain how this is supposed to work. What kind of computer code can actually do that? What algorithm can identify conscious subsystems and their intentions and then quickly fill in the required information without ever producing an observable inconsistency? That’s a much more difficult issue than Bostrom seems to appreciate. You cannot in general just throw away physical processes on short distances and still get the long distances right.

Climate models are an excellent example. We don’t currently have the computational capacity to resolve distances below something like 10 kilometers or so. But you can’t just throw away all the physics below this scale. This is a non-linear system, so the information from the short scales propagates up into large scales. If you can’t compute the short-distance physics, you have to suitably replace it with something. Getting this right even approximately is a big headache. And the only reason climate scientists do get it approximately right is that they have observations which they can use to check whether their approximations work. If you only have a simulation, like the programmer in the simulation hypothesis, you can’t do that.
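A toy illustration of why you can't just discard small-scale information in a nonlinear system is the logistic map. This is a standard textbook example of chaos, nothing to do with actual climate models:

```python
# Logistic map at r = 4: a simple chaotic, nonlinear system.
def step(x, r=4.0):
    return r * x * (1 - x)

x_exact = 0.2
x_coarse = 0.2 + 1e-10  # "short-distance" detail changed at the 10th decimal

max_gap = 0.0
for _ in range(60):
    x_exact, x_coarse = step(x_exact), step(x_coarse)
    max_gap = max(max_gap, abs(x_exact - x_coarse))

# The tiny small-scale difference grows roughly by a factor 2 per step,
# so after a few dozen steps the two trajectories are completely different.
print(f"maximum divergence after 60 steps: {max_gap:.2f}")
```

The small-scale information propagates up: throwing away the tenth decimal place eventually changes the answer at order one.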

And that’s my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don’t explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn’t a serious scientific argument. This doesn’t mean it’s wrong, but it means you’d have to believe it because you have faith, not because you have logic on your side.

Saturday, February 06, 2021

Don't Fall for Quantum Hype

[This is a transcript of the video embedded below.]

Quantum technology is presently amazingly popular. The United States and the United Kingdom have made it a “national initiative”, the European Union has a quantum technology “flagship.” India has a “national mission”, and China has announced they’ll put quantum technology into their next five-year plan. What is “quantum technology” and what impact will it have on our lives? That’s what we will talk about today.

The quantum initiatives differ somewhat from nation to nation, but they usually contain research programs on four key topics that I will go through in this video. That’s: quantum computing, the quantum internet, quantum metrology, and quantum simulations.

We’ll start with quantum computing.

Quantum computing is one of the most interesting developments in the foundations of physics right now. I have talked about quantum computing in more detail in an earlier video, so check this out for more. In brief, quantum computers can speed up certain types of calculations dramatically. A quantum computer can do this because it does not work with “bits” that have values of either 0 or 1, but with quantum bits – “qbits” for short – that can be entangled, and can take on any value in between 0 and 1.

It’s not an accident that I say “between” instead of “both”, I think this describes the mathematics more accurately. Either way, of course, these are just attempts to put equations into words and the words will in the best case give you a rough idea of what’s really going on. But the bottom line is that you can process much more information with qbits than with normal bits. The consequence is that quantum computers can do certain calculations much faster than conventional computers. This speed-up only works for certain types of calculations though. So, quantum computers are special purpose machines.
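Here is what that information content looks like in plain linear algebra. This is a sketch of the standard state-vector picture, not of any particular quantum computer:

```python
import numpy as np

# One qbit is a 2-component complex vector; n qbits need 2**n amplitudes.
# A Hadamard gate puts a single qbit into an equal superposition of 0 and 1.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

one_qbit = H @ np.array([1, 0], dtype=complex)  # (|0> + |1>) / sqrt(2)

# Combine 3 such qbits with the tensor (Kronecker) product.
state = one_qbit
for _ in range(2):
    state = np.kron(state, one_qbit)

print(len(state))             # 8 = 2**3 amplitudes for just 3 qbits
print(abs(state[0]) ** 2)     # each basis state has probability 1/8
# A million qbits would mean 2**1000000 amplitudes, which is why such
# machines can't be simulated classically -- and why they're hard to build.
```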

The theory behind quantum computing is well understood and uncontroversial. Quantum computers already exist and so far they work as predicted. The problem with quantum computers is that for them to become commercially useful, you need to be able to bring a large number of qbits into controllable quantum states, and that’s really, really difficult.

Estimates say the number we need to reach is roughly a million; details depend on the quality of qbits and the problem you are trying to solve. The status of research is presently at about 50 qbits. Yes, that’s a good start, but it’s a long way to a million, and there’s no reason to expect anything resembling Moore’s law will help us here, because we’re already working at the limit.

So, the major question for quantum computing is not “does it work”. We know it works. The question is “Will it scale?”

To me the situation for quantum computing today looks similar to the situation for nuclear fusion 50 years ago. 50 years ago, physicists understood how nuclear fusion works just fine, and they had experimentally checked that their theories were correct. The problem was “just” to make the technology large and still efficient enough to actually be useful. And, as you all know, that’s still the problem today.

Now, I am positive that we will eventually use both nuclear fusion and quantum computing in everyday life. But keep in mind that technology enthusiasts tend to be overly optimistic in their predictions for how long it will take for technology to become useful.

The Quantum Internet

The quantum internet refers to information transmitted with quantum effects. This means most importantly, the quantum internet uses quantum cryptography as a security protocol. Quantum cryptography is a method to make information transfer secure by exploiting the fact that in quantum mechanics, a measurement irreversibly changes the state of a quantum particle. This means if you encode a message suitably with quantum particles, you can tell whether it has been intercepted by a hacker, because the hacker’s measurement would change the behavior of the particles. That doesn’t prevent hacking, but it means you’d know when it happens.
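The detection mechanism can be sketched in a few lines. This is a simplified BB84-style simulation with an intercept-and-resend eavesdropper, not a model of any deployed system:

```python
import random

random.seed(1)
n = 2000

# Alice encodes random bits in random bases; Eve intercepts, measures in her
# own random basis, and resends; Bob measures in his own random basis.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]
eve_bases   = [random.randint(0, 1) for _ in range(n)]
bob_bases   = [random.randint(0, 1) for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Same basis: the result is deterministic. Wrong basis: the result is
    # random -- this is the disturbance that betrays the eavesdropper.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

bob_bits = []
for bit, ab, eb, bb in zip(alice_bits, alice_bases, eve_bases, bob_bases):
    eve_bit = measure(bit, ab, eb)         # Eve measures and resends
    bob_bits.append(measure(eve_bit, eb, bb))

# Alice and Bob publicly compare bases and keep only matching positions.
kept = [(a, b) for a, b, ab, bb in
        zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
error_rate = sum(a != b for a, b in kept) / len(kept)

# Without Eve the error rate on these positions would be 0;
# with Eve it jumps to about 25%.
print(f"error rate: {error_rate:.0%}")
```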

I made an entire video about how quantum cryptography works, so check this out if you want to know more. Today I just want to draw your attention to two points that the headlines tend to get wrong.

First, you cannot transfer information faster than the speed of light with the quantum internet or with any other quantum effect. That quantum mechanics respects the speed of light limit is super-basic knowledge that you’d think every science writer knows about. Unfortunately, this is not the case. You see this over and over again in the headlines, that the quantum internet can supposedly beat the speed of light limit. It cannot. That’s just wrong.

And no, this does not depend on your interpretation of quantum mechanics, it’s wrong either way you look at it. No, this is not what Einstein meant with “spooky action at a distance”. It’s really just wrong. Quantum mechanics does not allow you to send information faster than the speed of light.
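That quantum mechanics cannot be used to signal can even be checked numerically: whatever measurement Alice performs on her half of an entangled pair, Bob's local statistics do not change, so no information arrives at his end. Here is a small sketch with NumPy; the choice of a singlet state and of measurement angles in the x-z plane is just my illustration of the general no-signaling property.

```python
import numpy as np

def bob_state_after_alice(theta):
    """Bob's reduced density matrix after Alice measures her half of a
    singlet pair along angle theta (with the outcome unknown to Bob)."""
    # singlet state (|01> - |10>)/sqrt(2)
    psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    # Alice's projectors along direction theta in the x-z plane
    up = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    P0 = np.outer(up, up)
    P1 = np.eye(2) - P0
    rho_b = np.zeros((2, 2), dtype=complex)
    for P in (P0, P1):
        M = np.kron(P, np.eye(2))
        post = M @ rho @ M               # unnormalized post-measurement state
        # trace out Alice's qubit: indices are (a, b, a', b')
        rho_b += post.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)
    return rho_b

# Bob sees the maximally mixed state I/2 no matter what Alice measures:
for theta in (0.0, 0.7, 2.1):
    assert np.allclose(bob_state_after_alice(theta), np.eye(2) / 2)
```

Bob's state is the same for every value of theta, which is exactly why entanglement correlations cannot carry a message.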

This isn’t the major issue I have with the coverage of the quantum internet though, because that’s obviously wrong and, really, what do you expect from the Daily Mail. No, the major issue I have is that almost all of the articles mislead the audience about the relevance of the quantum internet.

It’s not explicitly lying, but it’s lying by omission. Here is a recent example from Don Lincoln, who does exactly this, and pretty much every article you’ll read about the quantum internet goes somewhat like this.

First, they will tell you that quantum computers, if they reach a sufficiently large number of qubits, can quickly break the security protocols that are currently being used on the internet, which is a huge problem for national security and privacy. Second, they will tell you that the quantum internet is safe from hacking by quantum computers.

Now, these two statements separately are entirely correct. But there’s an important piece of information missing between them, which is that we have security protocols that do not require quantum technology but are safe from quantum computers nevertheless. They are just presently not in use. These security protocols that, for all we currently know, cannot be broken even by quantum computers are, somewhat confusingly, called “post-quantum cryptography” or, in somewhat better terminology, quantum-safe cryptography.

This means that we do not need the quantum internet to be safe from quantum computers. We merely need to update the current security protocols, and this update is already under way. For some reason the people who work on quantum things don’t like to draw attention to that.
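To make “quantum-safe without quantum technology” concrete, here is the simplest example I know of, a Lamport one-time signature. It relies only on a hash function being hard to invert, which quantum computers are not known to break. Note this is a textbook toy for illustration, not one of the schemes actually being standardized, which are mostly lattice-based.

```python
import hashlib
import secrets

def H(data):
    return hashlib.sha256(data).digest()

def keygen(n=256):
    # secret key: two random values per message-digest bit;
    # public key: their hashes
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(n)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg):
    d = H(msg)
    return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg, sk):
    # reveal one secret per bit; the key must never be reused
    return [sk[i][bit] for i, bit in enumerate(digest_bits(msg))]

def verify(msg, sig, pk):
    return all(H(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, digest_bits(msg))))

sk, pk = keygen()
sig = sign(b"update the protocols", sk)
assert verify(b"update the protocols", sig, pk)
assert not verify(b"a different message", sig, pk)
```

Each key pair may only sign a single message, since signing reveals half the secrets. Practical hash-based schemes such as XMSS and SPHINCS+ build on this basic idea, and none of it requires any quantum hardware.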

Quantum metrology

Quantum metrology is a collection of techniques to improve measurements with the help of quantum effects. The word “metrology” means that this research is about measurement; it’s got nothing to do with meteorology, which is a different thing entirely. Quantum metrology has recently seen quite a few research developments that I expect to become useful soon in areas like medicine or material science. That’s because one of the major benefits of quantum measurements is that they can make do with very few particles, and that means minimal damage to the sample.
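Part of the context here is how measurement precision scales with the number of particles: with N independent particles the error shrinks like one over the square root of N (the “standard quantum limit”), whereas suitably entangled probes can in principle reach one over N. That’s why quantum techniques can get away with fewer particles for the same precision. The square-root baseline is easy to see even in a purely classical toy estimate; the specific numbers below are my own arbitrary choices.

```python
import random
import statistics

def estimate_error(n_particles, n_trials=2000, p=0.3, rng=None):
    """Standard deviation of estimating a probability p from
    n_particles independent yes/no measurements."""
    rng = rng or random.Random(0)
    estimates = [
        sum(rng.random() < p for _ in range(n_particles)) / n_particles
        for _ in range(n_trials)
    ]
    return statistics.pstdev(estimates)

# The error shrinks like 1/sqrt(N): quadrupling N roughly halves it.
e100, e400 = estimate_error(100), estimate_error(400)
print(e100 / e400)  # ≈ 2
```

Entanglement is what lets quantum strategies beat this scaling, but that part can’t be reproduced with a classical simulation of independent particles like the one above.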

Personally I think quantum metrology is the most promising part of the quantum technology package and the one that we’re most likely to encounter in new applications soon.

I made a video especially about quantum metrology earlier, so check this out for more detail.

Quantum Simulations

Quantum simulations are a scientifically extremely interesting development that I think has been somewhat underappreciated. In a quantum simulation you try to understand a complicated system whose properties you cannot calculate, by reproducing its behavior as well as you can with a different quantum system that you can control better, so you can learn more about it.

This is actually something I have worked on myself for some years, in particular the possibility that you can simulate black holes with superfluids. I will tell you more about this some other time, for today let me just say that I think this is a rather dramatic shift in the foundations of physics because it allows you to take out mathematics as the middleman. Instead of modeling a system with mathematics, either with a pen on paper or with computer code, you model it directly with another system without having to write down equations in one form or another.

Now, quantum simulations are really cool from the perspective of basic research, because they allow you to learn a great deal. You can for example simulate particles similar to the Higgs or certain types of neutrinos, and learn something about their behavior, which you couldn’t do in any other way.

However, quantum simulations are unlikely to have technological impact any time soon, and, what’s worse, they have been oversold by some people in the community. Especially all the talk about simulating wormholes is nonsense. These simulated “wormholes” have nothing in common with actual wormholes that, in case you missed it, we have good reason to think do not exist in the first place. I am highlighting the wormhole myth because to my shock I saw it appear in a White House report. So, quantum simulations are cool for the most part, but if someone starts babbling about wormholes, that is not serious science.

I hope this quick summary helps you make sense of all the quantum stuff in the headlines.