Saturday, May 15, 2021

Quantum Computing: Top Players 2021

[This is a transcript of the video embedded below.]

Quantum computing is currently one of the most exciting emergent technologies, and it’s almost certainly a topic that will continue to make headlines in the coming years. But there are now so many companies working on quantum computing, that it’s become really confusing. Who is working on what? What are the benefits and disadvantages of each technology? And who are the newcomers to watch out for? That’s what we will talk about today.

Quantum computers use units that are called “quantum-bits” or qubits for short. In contrast to normal bits, which can take on two values, like 0 and 1, a qubit can take on an arbitrary combination of two values. The magic of quantum computing happens when you entangle qubits.

Entanglement is a type of correlation, so it ties qubits together, but it’s a correlation that has no equivalent in the non-quantum world. There are a huge number of ways qubits can be entangled and that creates a computational advantage - if you want to solve certain mathematical problems.
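To make the qubit picture concrete, here is a minimal numpy sketch (my illustration, not part of the transcript): a qubit is a normalized 2-component complex vector, two independent qubits combine via the tensor product, and an entangled state like the Bell state cannot be factored into two single-qubit states.

```python
import numpy as np

# A single qubit: alpha|0> + beta|1>, with |alpha|^2 + |beta|^2 = 1.
qubit = np.array([1, 1j]) / np.sqrt(2)

# Two independent qubits combine via the tensor (Kronecker) product.
q0 = np.array([1, 0])            # the state |0>
product_state = np.kron(q0, q0)  # |00>, still factorizable

# The Bell state (|00> + |11>)/sqrt(2) is entangled: it cannot be
# written as kron(a, b) for any single-qubit states a and b.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

def is_product(state):
    # A 2-qubit pure state is a product state exactly when the 2x2
    # matrix of its amplitudes has rank 1.
    return np.linalg.matrix_rank(state.reshape(2, 2)) == 1

print(is_product(product_state))  # True
print(is_product(bell))           # False
```

The rank-1 test is a standard way to detect entanglement for two-qubit pure states; the "huge number of ways qubits can be entangled" grows rapidly with the number of qubits.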

Quantum computers can help, for example, to solve the Schrödinger equation for complicated molecules. One could use that to find out what properties a material has without having to synthetically produce it. Quantum computers can also solve certain logistics problems or optimize financial systems. So there is a real potential for application.

But quantum computing does not help for *all types of calculations; quantum computers are special-purpose machines. They also don’t operate all by themselves: the quantum parts have to be controlled and read out by a conventional computer. You could say that quantum computers are for problem solving what wormholes are for space travel. They might not bring you everywhere you want to go, but *if they can bring you somewhere, you’ll get there really fast.

What makes quantum computing special is also what makes it challenging. To use quantum computers, you have to maintain the entanglement between the qubits long enough to actually do the calculation. And quantum effects are really, really sensitive to even the smallest disturbances. To be reliable, quantum computers therefore need to operate with several copies of the information, together with an error correction protocol. And to do this error correction, you need more qubits. Estimates say that for a quantum computer to do reliable and useful calculations that a conventional computer can’t do, the number of qubits we need to reach is about a million.

The exact number depends on the type of problem you are trying to solve, the algorithm, and the quality of the qubits and so on, but as a rule of thumb, a million is a good benchmark to keep in mind. Below that, quantum computers are mainly of academic interest.
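Quantum error correction cannot simply copy quantum states (the no-cloning theorem forbids it), but the overhead logic is analogous to a classical repetition code, which already shows why redundancy multiplies the number of (qu)bits you need. Here is a toy sketch of that classical analogy; the numbers are illustrative only:

```python
import random

# Classical analogy only: store each bit three times and majority-vote
# on readout. With per-copy error rate p, the voted error rate is
# 3p^2 - 2p^3, which beats p whenever p < 1/2 -- redundancy costs
# extra bits (or qubits) but suppresses errors.

def noisy(bit, p, rng):
    """Flip the bit with probability p."""
    return bit ^ (rng.random() < p)

def read_with_vote(bit, p, rng):
    """Store three noisy copies, return the majority."""
    copies = [noisy(bit, p, rng) for _ in range(3)]
    return int(sum(copies) >= 2)

rng = random.Random(42)
p, trials = 0.05, 100_000
raw_errors = sum(noisy(0, p, rng) for _ in range(trials)) / trials
voted_errors = sum(read_with_vote(0, p, rng) for _ in range(trials)) / trials
print(raw_errors, voted_errors)  # voted rate is near 3p^2 - 2p^3 ~ 0.007
```

Real quantum codes like the surface code are more subtle, but the punchline is the same: protecting one logical qubit takes many physical qubits, which is how the million-qubit estimate arises.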

Having said that, let’s now look at what different types of qubits there are, and how far we are on the way to that million.

1. Superconducting Qubits

Superconducting qubits are by far the most widely used, and most advanced type of qubits. They are basically small currents on a chip. The two states of the qubit can be physically realized either by the distribution of the charge, or by the flux of the current.

The big advantage of superconducting qubits is that they can be produced by the same techniques that the electronics industry has used for the past 5 decades. These qubits are basically microchips, except, here it comes, they have to be cooled to extremely low temperatures, about 10-20 milli Kelvin. One needs these low temperatures to make the circuits superconducting, otherwise you can’t keep them in these neat two qubit states.

Despite the low temperatures, quantum effects in superconducting qubits disappear extremely quickly. This disappearance of quantum effects is measured by the “decoherence time”, which for superconducting qubits is currently a few tens of microseconds.

Superconducting qubits are the technology which is used by Google and IBM and also by a number of smaller companies. In 2019, Google was first to demonstrate “quantum supremacy”, which means they performed a task that a conventional computer could not have done in a reasonable amount of time. The processor they used for this had 53 qubits. I made a video about this topic specifically, so check this out for more. Google’s supremacy claim was later debated by IBM. IBM argued that actually the calculation could have been performed within reasonable time on a conventional super-computer, so Google’s claim was somewhat premature. Maybe it was. Or maybe IBM was just annoyed they weren’t first.

IBM’s quantum computers also use superconducting qubits. Their biggest one currently has 65 qubits, and they recently put out a roadmap that projects 1000 qubits by 2023. IBM’s smaller quantum computers, the ones with 5 and 16 qubits, are free to access in the cloud.

The biggest problem for superconducting qubits is the cooling. Beyond a few thousand or so, it’ll become difficult to put all qubits into one cooling system, so that’s where it’ll become challenging.

2. Photonic quantum computing

In photonic quantum computing, the qubits are properties related to photons. That may be the presence of a photon itself, or the uncertainty in a particular state of the photon. This approach is pursued for example by the company Xanadu in Toronto. It is also the approach that was used a few months ago by a group of Chinese researchers who demonstrated quantum supremacy for photonic quantum computing.

The biggest advantage of using photons is that they can be operated at room temperature, and the quantum effects last much longer than for superconducting qubits, typically some milliseconds but it can go up to some hours in ideal cases. This makes photonic quantum computers much cheaper and easier to handle. The big disadvantage is that the systems become really large really quickly because of the laser guides and optical components. For example, the photonic system of the Chinese group covers a whole tabletop, whereas superconducting circuits are just tiny chips.

The company PsiQuantum however claims they have solved the problem and have found an approach to photonic quantum computing that can be scaled up to a million qubits. Exactly how they want to do that, no one knows, but that’s definitely a development to have an eye on.

3. Ion traps

In ion traps, the qubits are atoms that are missing some electrons and therefore have a net positive charge. You can then trap these ions in electromagnetic fields, and use lasers to move them around and entangle them. Such ion traps are comparable in size to the qubit chips. They also need to be cooled, but not quite as much, “only” to temperatures of a few Kelvin.

The biggest player in trapped ion quantum computing is Honeywell, but the start-up IonQ uses the same approach. The advantages of trapped ion computing are longer coherence times than superconducting qubits – up to a few minutes. The other advantage is that trapped ions can interact with more neighbors than superconducting qubits.

But ion traps also have disadvantages. Notably, they are slower to react than superconducting qubits, and it’s more difficult to put many traps onto a single chip. However, they’ve kept up with superconducting qubits well.

Honeywell claims to have the best quantum computer in the world by quantum volume. What the heck is quantum volume? It’s a metric, originally introduced by IBM, that combines many different factors like errors, crosstalk and connectivity. Honeywell reports a quantum volume of 64, and according to their website, they too are moving to the cloud next year. IonQ’s latest model contains 32 trapped ions sitting in a chain. They also have a roadmap according to which they expect quantum supremacy by 2025 and expect to be able to solve interesting problems by 2028.

4. D-Wave

Now what about D-Wave? D-Wave is so far the only company that sells commercially available quantum computers, and they also use superconducting qubits. Their 2020 model has a stunning 5600 qubits.

However, the D-Wave computers can’t be compared to the approaches pursued by Google and IBM, because D-Wave uses a completely different computation strategy. D-Wave computers can be used for solving certain optimization problems that are defined by the design of the machine, whereas the technology developed by Google and IBM aims to create a programmable computer that can be applied to all kinds of different problems. Both are interesting, but it’s comparing apples and oranges.

5. Topological quantum computing

Topological quantum computing is the wild card. There isn’t currently any workable machine that uses the technique. But the idea is great: In topological quantum computers, information would be stored in conserved properties of “quasi-particles”, which are collective motions of many particles. The great thing about this is that the information would be very robust to decoherence.

According to Microsoft, “the upside is enormous and there is practically no downside.” In 2018, their director of quantum computing business development told the BBC that Microsoft would have a “commercially relevant quantum computer within five years.” However, Microsoft had a big setback in February, when they had to retract a paper that claimed to demonstrate the existence of the quasi-particles they hoped to use. So much for “no downside”.

6. The far field

These were the biggest players, but there are two newcomers that are worth having an eye on.

The first is semi-conducting qubits. They are very similar to the superconducting qubits, but here the qubits are either the spin or charge of single electrons. The advantage is that the temperature doesn’t need to be quite as low. Instead of 10 mK, one “only” has to reach a few Kelvin. This approach is presently pursued by researchers at TU Delft in the Netherlands, supported by Intel.

The second are nitrogen-vacancy systems, where the qubits are defects in a carbon crystal: places where a carbon atom is replaced with nitrogen next to an empty site in the lattice. The great advantage of those is that they’re both small and can be operated at room temperature. This approach is pursued by the Hanson lab at QuTech, some people at MIT, and a startup in Australia called Quantum Brilliance.

So far there hasn’t been any demonstration of quantum computation for these two approaches, but they could become very promising.

So, that’s the status of quantum computing in early 2021, and I hope this video will help you to make sense of the next quantum computing headlines, which are certain to come.

I want to thank Tanuj Kumar for help with this video.

Saturday, May 08, 2021

What did Einstein mean by “spooky action at a distance”?

[This is a transcript of the video embedded below.]

Quantum mechanics is weird – I am sure you’ve read that somewhere. And why is it weird? Oh, it’s because it’s got that “spooky action at a distance”, doesn’t it? Einstein said that. Yes, that guy again. But what is spooky at a distance? What did Einstein really say? And what does it mean? That’s what we’ll talk about today.

The vast majority of sources on the internet claim that Einstein’s “spooky action at a distance” referred to entanglement. Wikipedia for example. And here is an example from Science Magazine. You will also find lots of videos on YouTube that say the same thing: Einstein’s spooky action at a distance was entanglement. But I do not think that’s what Einstein meant.

Let’s look at what Einstein actually said. The origin of the phrase “spooky action at a distance” is a letter that Einstein wrote to Max Born in March 1947. In this letter, Einstein explains to Born why he does not believe that quantum mechanics really describes how the world works.

He begins by assuring Born that he knows perfectly well that quantum mechanics is very successful: “I understand of course that the statistical formalism which you pioneered captures a significant truth.” But then he goes on to explain his problem. Einstein writes:
“I cannot seriously believe [in quantum mechanics] because the theory is incompatible with the requirement that physics should represent reality in space and time without spooky action at a distance...”

There it is, the spooky action at a distance. But just exactly what was Einstein referring to? Before we get into this, I have to quickly remind you how quantum mechanics works.

In quantum mechanics, everything is described by a complex-valued wave-function usually denoted Psi. From the wave-function we calculate probabilities for measurement outcomes, for example the probability to find a particle at a particular place. We do this by taking the absolute square of the wave-function.

But we cannot observe the wave-function itself. We only observe the outcome of the measurement. This means, most importantly, that if we make a measurement for which the outcome was not one hundred percent certain, then we have to suddenly “update” the wave-function. That’s because the moment we measure the particle, we know it’s either there or it isn’t. And this update is instantaneous. It happens at the same time everywhere, seemingly faster than the speed of light. And I think *that’s what Einstein was worried about, because he had explained that already twenty years earlier, in the discussion of the 1927 Solvay conference.
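As a rough numerical illustration of these two steps (my sketch, not part of the transcript): probabilities come from the absolute squares of the wave-function’s amplitudes, and after a measurement the whole distribution is replaced at once by one concentrated on the observed outcome.

```python
import numpy as np

# A toy wave-function over five possible positions (then normalized).
psi = np.array([1 + 0j, 2j, -1, 0.5, 0.5j])
psi = psi / np.linalg.norm(psi)

# Born rule: the probability of finding the particle at position x
# is the absolute square of the amplitude there.
probs = np.abs(psi) ** 2

# Simulate one measurement, then the "update": after the measurement,
# the wave-function is concentrated entirely on the observed spot.
rng = np.random.default_rng(0)
outcome = rng.choice(len(psi), p=probs)
psi_updated = np.zeros_like(psi)
psi_updated[outcome] = 1.0
```

The point Einstein worried about is that this replacement happens everywhere simultaneously, not just at the place where the detector clicked.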

In 1927, Einstein used the following example. Suppose you direct a beam of electrons at a screen with a tiny hole and ask what happens with a single electron. The wave-function of the electron will diffract on the hole, which means it will spread symmetrically into all directions. Then you measure it at a certain distance from the hole. The electron has the same probability to have gone in any direction. But if you measure it, you will suddenly find it in one particular point.

Einstein argues: “The interpretation, according to which [the square of the wave-function] expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen.”

What he is saying is that somehow the wave-function on the left side of the screen must know that the particle was actually detected on the other side of the screen. In 1927 he called this action at a distance “peculiar” rather than “spooky”, but I think he was referring to the same thing.

However, in Einstein’s electron argument it’s rather unclear what is acting on what, because there is only one particle. This is why Einstein, together with Podolsky and Rosen, later looked at the measurement for two particles that are entangled, which led to their famous 1935 EPR paper. So this is why entanglement comes in: because you need at least two particles to show that the measurement on one particle can act on the other particle. But entanglement itself is unproblematic. It’s just a type of correlation, and correlations can be non-local without there being any “action” at a distance.

To see what I mean, forget all about quantum mechanics for a moment. Suppose I have two socks that are identical, except the one is red and the other one blue. I put them in two identical envelopes and ship one to you. The moment you open the envelope and see that your sock is red, you know that my sock is blue. That’s because the information about the color in the envelopes is correlated, and this correlation can span over large distances.

There isn’t any spooky action going on though because that correlation was created locally. Such correlations exist everywhere and are created all the time. Imagine for example you bounce a ball off a wall and it comes back. That transfers momentum to the wall. You can’t see how much, but you know that the total momentum is conserved, so the momentum of the wall is now correlated with that of the ball.

Entanglement is a correlation like this, it’s just that you can only create it with quantum particles. Suppose you have a particle with total spin zero that decays in two particles that can have spin either plus or minus one. One particle goes left, the other one right. You don’t know which particle has which spin, but you know that the total spin is conserved. So either the particle going to the right had spin plus one and the one going left minus one or the other way round.

According to quantum mechanics, before you have measured one of the particles, both possibilities exist. You can then measure the correlations between the spins of both particles with two detectors on the left and right side. It turns out that the entanglement correlations can in certain circumstances be stronger than non-quantum correlations. That’s what makes them so interesting. But there’s no spooky action in the correlation themselves. These correlations were created locally. What Einstein worried about instead is that once you measure the particle on one side, the wave-function for the particle on the other side changes.
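The claim that entanglement correlations can be “stronger than non-quantum correlations” can be made quantitative. For the spin example, quantum mechanics predicts a correlation E(a,b) = −cos(a−b) between spin measurements at detector angles a and b, and the CHSH combination of four such correlations reaches 2√2, above the bound of 2 that any locally pre-decided, “sock-like” explanation must obey. A minimal sketch:

```python
import math

# Singlet-state prediction for the correlation between spin
# measurements at detector angles a and b.
def E(a, b):
    return -math.cos(a - b)

# CHSH combination: any locally pre-decided ("sock-like") assignment
# of outcomes obeys |S| <= 2; quantum mechanics reaches 2*sqrt(2).
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))  # 2.828..., i.e. 2*sqrt(2) > 2
```

These particular angles maximize the violation; they are the standard choice in Bell-test experiments.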

But isn’t this the same with the two socks? Before you open the envelope, the probability was 50-50, and when you open it, it jumps to 100:0. But there’s no spooky action going on there. It’s just that the probability was a statement about what you knew, and not about what really was the case. Really, which sock was in which envelope was already decided at the time I sent them.

Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work. If you think that really it was decided already which spin went into which direction when they were emitted, that will not create sufficiently strong correlations. It’s just incompatible with observations. Einstein did not know that. These experiments were done only after he died. But he knew that using entangled states you can demonstrate whether spooky action is real, or not.

I will admit that I’m a little defensive of good old Albert Einstein, because I feel that a lot of people too cheerfully declare that Einstein was wrong about quantum mechanics. But if you read what Einstein actually wrote, he was exceedingly careful in expressing himself, and yet most physicists dismissed his concerns. In April 1948, he repeats his argument to Born. He writes that a measurement on one part of the wave-function is a “physical intervention” and that “such an intervention cannot immediately influence the physical reality in a distant part of space.” Einstein concludes:
“For this reason I tend to believe that quantum mechanics is an incomplete and indirect description of reality which will later be replaced by a complete and direct one.”

So, Einstein did not think that quantum mechanics was wrong. He thought it was incomplete, that something fundamental was missing in it. And in my reading, the term “spooky action at a distance” referred to the measurement update, not to entanglement.

Saturday, May 01, 2021

Dark Matter: The Situation Has Changed

[This is a transcript of the video embedded below]

Hi everybody. We haven’t talked about dark matter for some time. Which is why today I want to tell you how my opinion about dark matter has changed over the past twenty years or so. In particular, I want to discuss whether dark matter is made of particles or if not, what else it could be. Let’s get started.

First things first, dark matter is the hypothetical stuff that astrophysicists think makes up eighty percent of the matter in the universe, or 24 percent of the combined matter-energy. Dark matter should not be confused with dark energy. These are two entirely different things. Dark energy is what makes the universe expand faster, dark matter is what makes galaxies rotate faster, though that’s not the only thing dark matter does, as we’ll see in a moment.

But what is dark matter? 20 years ago I thought dark matter is most likely made of some kind of particle that we haven’t measured so far. Because, well, I’m a particle physicist by training. And if a particle can explain an observation, why look any further? Also, at the time there were quite a few proposals for new particles that could fit the data, like some supersymmetric particles or axions. So, the idea that dark matter is stuff, made of particles, seemed plausible to me and like the obvious explanation.

That’s why, just among us, I always thought dark matter is not a particularly interesting problem. Sooner or later they’ll find the particle, give it a name, someone will get a Nobel Prize and that’s that.

But, well, that hasn’t happened. Physicists have tried to measure dark matter particles since the mid 1980s. But no one’s ever seen one. There have been a few anomalies in the data, but these have all gone away upon closer inspection. Instead, what’s happened is that some astrophysical observations have become increasingly difficult to explain with the particle hypothesis. Before I get to the observations that particle dark matter doesn’t explain, I’ll first quickly summarize what it does explain, which are the reasons astrophysicists thought it exists in the first place.

Historically, the first evidence for dark matter came from galaxy clusters. Galaxy clusters consist of a few hundred up to a thousand or so galaxies that are held together by their gravitational pull. They move around each other, and how fast they move depends on the total mass of the cluster. The more mass, the faster the galaxies move. Turns out that galaxies in galaxy clusters move way too fast for the mass that we can attribute to the visible matter. So Fritz Zwicky conjectured in the 1930s that there must be more matter in galaxy clusters, just that we can’t see it. He called it “dunkle Materie”, dark matter.

It’s a similar story for galaxies. The velocity of a star which orbits the center of a galaxy depends on the total mass within this orbit. But the stars in the outer parts of galaxies orbit the center much too fast. Their velocity should drop with distance from the center of the galaxy, but it doesn’t. Instead, the velocity of the stars becomes approximately constant at large distance from the galactic center. This gives rise to the so-called “flat rotation curves”. Again you can explain that by saying there’s dark matter in the galaxies.
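To see why flat rotation curves are surprising, a back-of-the-envelope sketch: if essentially all the mass M sits inside a star’s orbit, Newtonian gravity gives an orbital speed v = √(GM/r), which falls like 1/√r. The numbers below use a rough, assumed visible mass, just to show the trend:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_VISIBLE = 1e41  # kg; a rough, assumed stellar mass for a galaxy
KPC = 3.086e19    # meters per kiloparsec

def v_keplerian(r):
    """Orbital speed if all of M_VISIBLE sits inside radius r."""
    return math.sqrt(G * M_VISIBLE / r)

for r_kpc in (5, 10, 20, 40):
    v = v_keplerian(r_kpc * KPC) / 1000  # km/s
    print(f"r = {r_kpc:2d} kpc: v = {v:.0f} km/s")
# The predicted speed drops like 1/sqrt(r): quadrupling the radius
# halves it. Observed curves instead flatten out at large r.
```

The gap between the predicted falloff and the observed plateau is what dark matter (or modified gravity) has to fill.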

Then there is gravitational lensing. Galaxies or galaxy clusters bend light that comes from an object behind them. This object then appears distorted, and from the amount of distortion you can infer the mass of the lens. Again, the visible matter just isn’t enough to explain the observations.

Then there’s the temperature fluctuations in the cosmic microwave background. These fluctuations are what you see in this skymap. All these spots here are deviations from the average temperature, which is about 2.7 Kelvin. The red spots are a little warmer, the blue spots a little colder than that average. Astrophysicists analyze the microwave-background using its power spectrum, where the vertical axis is roughly the number of spots and the horizontal axis is their size, with the larger sizes on the left and increasingly smaller spots to the right. To explain this power spectrum, again you need dark matter.

Finally, there’s the large-scale distribution of galaxies, galaxy clusters, interstellar gas, and so on, as you see in the image from this computer simulation. Normal matter alone just does not produce enough structure on small scales to fit the observations, and again, adding dark matter fixes the problem.

So, you see, dark matter was a simple idea that fit to a lot of observations, which is why it was such a good scientific explanation. But that was the status 20 years ago. And what’s happened since then is that observations have piled up that dark matter cannot explain.

For example, particle dark matter predicts that the density in the cores of small galaxies peaks, whereas according to the observations the distribution should be flat. Dark matter also predicts too many small satellite galaxies; these are small galaxies that fly around a larger host. The Milky Way, for example, should have many hundreds, but actually only has a few dozen. Also, these small satellite galaxies are often aligned in planes. Dark matter does not explain why.

We also know from observations that the mass of a galaxy is correlated with the fourth power of the rotation velocity of the outermost stars. This is called the baryonic Tully-Fisher relation, and it’s just an observational fact. Dark matter does not explain it. It’s a similar issue with Renzo’s rule, which says that for every feature in a galaxy’s luminosity profile, like a wiggle or bump, there is also a corresponding feature in its rotation curve. Again, that’s an observational fact, but it makes absolutely no sense if you think that most of the matter in galaxies is dark matter. The dark matter should remove any correlation between the luminosity and the rotation curves.

Then there are collisions of galaxy clusters at high velocity, like the Bullet cluster or the El Gordo cluster. These are difficult to explain with particle dark matter, because dark matter creates friction, and that makes such high relative velocities incredibly unlikely. Yes, you heard that correctly: the Bullet cluster is a PROBLEM for dark matter, not evidence for it.

And, yes, you can fumble with the computer simulations for dark matter and add more and more parameters to try to get it all right. But that’s no longer a simple explanation, and it’s no longer predictive.

So, if it’s not dark matter then what else could it be? The alternative explanation to particle dark matter is modified gravity. The idea of modified gravity is that we are not missing a source for gravity, but that we have the law of gravity wrong.

Modified gravity solves all the riddles that I just told you about. There’s no friction, so high relative velocities are not a problem. It predicted the Tully-Fisher relation, it explains Renzo’s rule and satellite alignments, it removes the issue with density peaks in galactic cores, and solves the missing satellites problem.

But modified gravity does not do well with the cosmic microwave background and the early universe, and it has some issues with galaxy clusters.

So that looks like a battle between competing hypotheses, and that’s certainly how it’s been portrayed and how most physicists think about it.

But here’s the thing. Purely from the perspective of data, the simplest explanation is that particle dark matter works better in some cases, and modified gravity better in others. A lot of astrophysicists reply to this: well, if you have dark matter anyway, why also have modified gravity? Answer: Because dark matter has difficulties explaining a lot of observations. On its own, it’s no longer parametrically the simplest explanation.

But wait, you may want to say, you can’t just use dark matter for observations a, b, c and modified gravity for observations x, y, z! Well actually, you can totally do that. There is nothing in the scientific method that forbids it.

But more importantly, if you look at the mathematics, modified gravity and particle dark matter are actually very similar. Dark matter adds new particles, and modified gravity adds new fields. But because of quantum mechanics, fields are particles and particles are fields, so it’s the same thing really. The difference is the behavior of these fields or particles. It’s the behavior that changes from the scales of galaxies to clusters to filaments and the early universe. So what we need is a kind of phase transition that explains why and under which circumstances the behavior of these additional fields, or particles, changes, so that we need two different sets of equations.

And once you look at it this way, it’s obvious why we have not made progress on the question of what dark matter is for such a long time. There are just the wrong people working on it. It’s not a problem you can solve with particle physics and general relativity. It’s a problem for condensed matter physics. That’s the physics of gases, fluids, solids, and so on.

So, the conclusion that I have arrived at is that the distinction between dark matter and modified gravity is a false dichotomy. The answer isn’t either – or, it’s both. The question is just how to combine them.

Google talk online now

The major purpose of the talk was to introduce our SciMeter project which I've been working on for a few years now with Tom Price and Tobias Mistele. But I also talk a bit about my PhD topic and particle physics and how my book came about, so maybe it's interesting for some of you.

Saturday, April 24, 2021

Particle Physics Discoveries That Disappeared

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]

I get asked a lot what I think about this or that report of an anomaly in particle physics, like the B-meson anomaly at the large hadron collider which made headlines last month or the muon g-2 that was just all over the news. But I thought instead of just giving you my opinion, which you may or may not trust, I will instead give you some background to gauge the relevance of such headlines yourself. Why are there so many anomalies in particle physics? And how seriously should you take them? That’s what we will talk about today.

The Higgs boson was discovered in nineteen eighty-four. I’m serious. The Crystal Ball Experiment at DESY in Germany saw a particle that fit the expectation already in nineteen eighty-four. It made it into the New York Times with the headline “Physicists report mystery particle”. But the supposed mystery particle turned out to be a data fluctuation. The Higgs boson was actually only discovered in 2012 at the Large Hadron Collider at CERN. And 1984 was quite a year, because supersymmetry was also “observed”, and then disappeared again.

How can this happen? Particle physicists calculate what they expect to see in an experiment using the best theory they have at the time. Currently that’s the standard model of particle physics. In 1984, that’d have been the standard model minus the particles which hadn’t been discovered.

But the theory alone doesn’t tell you what to expect in a measurement. For this you also have to take into account how the experiment is set up, so for example what beam and what luminosity, and how the detector works and how sensitive it is. This together: theory, setup, detector, gives you an expectation for your measurement. What you are then looking for are deviations from that expectation. Such deviations would be evidence for something new.

Here’s the problem. These expectations are always probabilistic. They don’t tell you exactly what you will see. They only tell you a distribution over possible outcomes. That’s partly due to quantum indeterminism but partly just classical uncertainty.

Therefore, it’s possible that you see a signal when there isn’t one. As an example, suppose I randomly distribute one-hundred points on this square. If I divide the square into four pieces of equal size, I expect about twenty-five points in each square. And indeed that turns out to be about correct for this random distribution. Here is another random distribution. Looks reasonable.

Now let’s do this a million times. No, actually, let’s not do this.

I let my computer do this a million times, and here is one of the outcomes. Whoa. That doesn’t look random! It looks like something’s attracting the points to that one square. Maybe it’s new physics!

No, there’s no new physics going on. Keep in mind, this distribution was randomly created. There’s no signal here, it’s all noise. It’s just that every once in a while noise happens to look like a signal.
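A small simulation of the quadrant example (my sketch; the exact numbers in the video may differ): drop 100 uniform points in a square, record the fullest quadrant, and repeat many times. Even though 25 points per quadrant is the expectation, a quadrant with 40 or more shows up in a small but non-negligible fraction of purely random trials.

```python
import random

def max_quadrant_count(n_points=100, rng=random):
    """Drop points uniformly in the unit square; return the number of
    points in the fullest of the four quadrants (expectation: 25)."""
    counts = [0, 0, 0, 0]
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        counts[(x < 0.5) * 2 + (y < 0.5)] += 1
    return max(counts)

random.seed(1)
trials = 20_000
# Count how many purely random trials produce a quadrant with 40 or
# more points -- a "signal-like" excess that is nothing but noise.
extreme = sum(max_quadrant_count() >= 40 for _ in range(trials))
print(extreme, "of", trials, "random trials look signal-like")
```

Run enough searches, and some of them will show an excess by chance alone; that is exactly why particle physicists demand high confidence levels before claiming a discovery.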

This is why particle physicists, like scientists in all other disciplines, give a “confidence level” to their observation, which tells you how “confident” they are that the observation was not a statistical fluctuation. They do this by calculating the probability that the supposed signal could have been created purely by chance. If fluctuations create a signature like the one you are looking for one in twenty times, then the confidence level is 95%. If fluctuations create it one in a hundred times, the confidence level is 99%, and so on. Loosely speaking, the higher the confidence level, the more remarkable the signal.

But exactly at which confidence level you declare a discovery is convention. Since the mid-1990s, particle physicists have used for discoveries a confidence level of 99.99994 percent. That’s roughly a one in three and a half million chance for the signal to have been a random fluctuation. It’s also frequently referred to as 5 σ, where σ is one standard deviation. (Though that relation only holds for the normal distribution.)
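If you want to convert between σ and these odds yourself, it’s just the tail of the normal distribution. Here’s a minimal Python sketch, using the one-sided convention that is common in particle physics; quoted odds differ a little depending on whether one counts upward fluctuations only or fluctuations in both directions:

```python
import math

def sigma_to_pvalue(sigma):
    """One-sided probability that a normal fluctuation exceeds
    the expectation by at least this many standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (3, 4, 5):
    p = sigma_to_pvalue(sigma)
    print(f"{sigma} sigma: about 1 in {round(1 / p):,}")
```

This reproduces the rule of thumb that 3 σ is roughly a one in a thousand effect while 5 σ is in the one in millions range.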

But of course deviations from the expectation attract attention already below the discovery threshold. Here is a little more history. Quarks, for all we currently know, are elementary particles, meaning we haven’t seen any substructure. But a lot of physicists have speculated that quarks might be made up of even smaller things. These smaller particles are often called “preons”. They were supposedly found in 1996. The New York Times reported: “Tiniest Nuclear Building Block May Not Be the Quark”. The significance of the signal was about three sigma, that’s about a one in a thousand chance for it to be coincidence, and about the same as the current B-meson anomaly. But the supposed quark substructure was a statistical fluctuation.

The same year, the Higgs was “discovered” again, this time at the Large Electron Positron collider at CERN. It was an excess of Higgs-like events that made it to almost 4 σ, which is a one in sixteen-thousand chance to be a random fluctuation. Guess what, that signal vanished too.

Then, in 2003, supersymmetry was “discovered” again, this time in the form of a supposed sbottom quark, that’s the hypothetical supersymmetric partner particle of the bottom quark. That signal too was at about 3 σ but then disappeared.

And in 2015, we saw the di-photon anomaly that made it above 4 σ and disappeared again. There have even been some six sigma signals that disappeared again, though these had no known interpretation in terms of new physics.

For example, in 1998 the Tevatron at Fermilab measured some events they dubbed “superjets” at six σ. They were never seen again. In 2004, HERA at DESY saw pentaquarks – that is, particles made of five quarks – with 6 σ significance, but that signal also disappeared. And then there is the muon g-2 anomaly that recently increased from 3.7 to 4.2 σ, but still hasn’t crossed the discovery threshold.

Of course not all discoveries that disappeared in particle physics were due to fluctuations. For example, in 1984, the UA1 experiment at CERN saw eleven particle decays of a certain type when they expected only three point five. The signature fit what was expected for the top quark. The physicists were quite optimistic they had found the top quark, and this news too made it into the New York Times.

It turned out, though, that they had misestimated the expected number of such events. Really there was nothing out of the ordinary. The top quark wasn’t actually discovered until 1995. A similar thing happened in 2011, when the CDF collaboration at Fermilab saw an excess of events at about 4 σ. These were not fluctuations, but they required a better understanding of the background.

And then of course there are possible issues with the data analysis. For example, there are various tricks you can play to increase the supposed significance. This basically doesn’t happen in collaboration papers, but you sometimes see individual researchers who use very, erm, creative methods of analysis. And then there can be systematic problems with the detection, triggers, or filters, and so on.

In summary: Possible reasons why a discovery might disappear are (a) fluctuations (b) miscalculations (c) analysis screw-ups (d) systematics. The most common one, just by looking at history, is fluctuations. And why are there so many fluctuations in particle physics? It’s because they have a lot of data. The more data you have, the more likely you are to find fluctuations that look like signals. That, by the way, is why particle physicists introduced the five sigma standard in the first place. Because otherwise they’d constantly have “discoveries” that disappear.

So what’s with that B-meson anomaly at the LHC that recently made headlines? It’s actually been around since 2015, but recently a new analysis came out and so it was in the news again. It’s currently lingering at 3.1 σ. As we saw, signals of that strength go away all the time, but it’s interesting that this one’s stuck around instead of going away. That makes me think it’s either a systematic problem or indeed a real signal.

Note: I have a longer comment about the recent muon g-2 measurement here.

Wednesday, April 21, 2021

All you need to know about Elon Musk’s Carbon Capture Prize

[This is a transcript of the video embedded below.]

Elon Musk has announced he is sponsoring a competition for the best carbon removal ideas with a fifty million dollar prize for the winner. The competition will open on April twenty-second, twenty-twenty-one. In this video, I will tell you all you need to know about carbon capture to get your brain going, and put you on your way to the fifty million dollar prize.

During the formation of our planet, large amounts of carbon dioxide were stored in the ground, and ended up in coal and oil. By burning these fossil fuels, we have released a lot of that old carbon dioxide really suddenly. It accumulates in the atmosphere and prevents our planet from giving off heat the way it used to. As a consequence, the climate changes, and it changes rapidly.

The best course of action would have been to not pump that much carbon dioxide into the atmosphere to begin with, but at this point reducing future emissions alone might no longer be the best way to proceed. We might have to find ways to actually get carbon dioxide back out of the air. Getting this done is what Elon Musk’s competition is all about.

The problem is, once carbon dioxide is in the atmosphere it stays there for a long time. By natural processes alone, it would take several thousand years for atmospheric carbon dioxide levels to return to pre-industrial values. And the climate reacts slowly to the sudden increase in carbon dioxide, so we haven’t yet seen the full impact of what we have done already. For example, there’s a lot of water on our planet, and warming up this water takes time.

So, even if we were to entirely stop carbon dioxide emissions today, the climate would continue to change for at least several more decades, if not centuries. It’s like you elected someone out of office, and now they’re really pissed off, but they’ve got six weeks left on the job and nothing you can do about that.

Globally, we are presently emitting about forty billion tons of carbon dioxide per year. According to the Intergovernmental Panel on Climate Change, we’d have to get down to twenty billion tons per year to limit warming to one point five degrees Celsius compared to preindustrial levels. These one point five degrees are what’s called the “Paris target.” This means, if we continue emitting at the same level as today, we’ll have to remove twenty billion tons carbon dioxide per year.

But to score in Musk’s competition, you don’t need a plan to remove the full twenty billion tons per year. You merely need “A working carbon removal prototype that can be rigorously validated” that is “capable of removing at least 1 ton per day” and the carbon “should stay locked up for at least one hundred years.” But other than that, pretty much everything goes. According to the website, the “main metric for the competition is cost per ton”.

So, which options do we have to remove carbon dioxide and how much do they cost?

The obvious thing to try is enhancing natural processes which remove carbon dioxide from the atmosphere. You can do that for example by planting trees because trees take up carbon dioxide as they grow. They are what’s called a natural “carbon sink”. This carbon is released again if the trees die and rot, or are burned, so planting trees alone isn’t enough, we’d have to permanently increase their numbers.

By how much? Depends somewhat on the type of forest, but to get rid of the twenty billion tons per year, we’d have to plant about ten million square kilometers of new forests. That’s about the area of the United States and more than the entire remaining Amazon rainforest.

Planting so many trees seems a bit impractical. And it isn’t cheap either. The cost is about 100 US dollars per ton of carbon dioxide. So, to get rid of the 20 billion tons of excess carbon dioxide, that would be a few trillion dollars per year. Trees are clearly part of the solution, but we need to do more than that. And to stop burning the rain forest wouldn’t hurt either.

Humans, by the way, are also a natural carbon sink because we’re eighteen percent carbon. Unfortunately, burying or burning dead people returns that carbon into the environment. Indeed, a single cremation releases about two-hundred-fifty kilograms of carbon dioxide, which could be avoided, for example, by dumping dead people in the deep sea where they won’t rot. So, if we were to do sea burials instead of cremations, that would save up to a million tons carbon dioxide per year. Not a terribly large amount. And probably quite expensive. Yeah, I’m not the person to win that prize.

But there’s a more efficient way that oceans could help remove carbon. If one stimulates the growth of algae, these will take up carbon. When the algae die, they sink to the bottom of the ocean, where the carbon could remain, in principle, for millions of years. This is called “ocean fertilization”.

It’s a good idea in theory, but in practice it’s presently unclear how efficient it is. There’s no good data for how many of the algae sink and how many of them get eaten, in which case the carbon might be released, and no one knows what else such fertilization might do to the oceans. So, a lot of research remains to be done here. It’s also unclear how much it would cost. Estimates range from two to four hundred fifty US dollars per ton of carbon dioxide.

Besides enhancing natural carbon sinks, there are a variety of technologies for removing carbon permanently.

For example, if one burns agricultural waste or wood in the absence of oxygen, this will not release all the carbon dioxide but produce a substance called biochar. The biochar keeps about half of the carbon, and not only is it stable for thousands of years, it can also improve the quality of soil.

The major problem with this idea is that there’s only so much agricultural waste to burn. Still, by some optimistic estimates one could remove up to one point eight billion tons carbon dioxide per year this way. Cost estimates are between thirty and one hundred twenty US dollars per ton of carbon dioxide.

By the way, plastic is about eighty percent carbon. That’s because it’s mostly made of oil and natural gas. And since it isn’t biodegradable, it’ll safely store the carbon – as long as you don’t burn it. So, the Great Pacific garbage patch? That’s carbon storage. Not a particularly popular one though.

A more popular idea is enhanced weathering. For this, one artificially creates certain minerals that, when they come in contact with water, can bind carbon dioxide to them, thereby removing it from the air. The idea is to produce large amounts of these minerals, crush them, and distribute them over large areas of land.

The challenges for this method are: how do you produce large amounts of these minerals, and where do you find enough land to put them on? The supporters of the American weathering project Vesta claim that the cost would be about ten US dollars per ton of carbon dioxide. So that’s a factor ten less than planting trees.

Then there is direct air capture. The most common method for this is pushing air through filters which absorb carbon dioxide. Several petrol companies, like Chevron, BHP, and Occidental, are currently exploring this technology. The company Carbon Engineering, which is backed by Bill Gates, has a pilot plant in British Columbia that they want to scale up to commercial plants. They claim every such plant will be equivalent in carbon removal to 40 million trees, removing 1 million tons of carbon dioxide per year.

They estimate the cost between ninety-four and 232 US dollars per ton. That would mean between two and four trillion US dollars per year to eliminate the entire twenty billion tons of carbon dioxide which we need to get rid of. That’s between two point five and five percent of the world’s GDP.
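These totals are simple multiplications you can check yourself. Here’s a back-of-envelope sketch in Python; the world-GDP figure is my own rough assumption (approximately the 2020 value), so the percentages are ballpark numbers only:

```python
# Back-of-envelope check of the direct-air-capture cost figures.
TONS_PER_YEAR = 20e9   # excess CO2 to remove, in tons per year
WORLD_GDP = 84e12      # in US dollars; an assumption, roughly the 2020 value

def yearly_cost(cost_per_ton):
    """Total removal cost per year, and its share of world GDP in percent."""
    total = cost_per_ton * TONS_PER_YEAR
    return total, 100 * total / WORLD_GDP

for cost_per_ton in (94, 232):  # the two direct-air-capture estimates
    total, share = yearly_cost(cost_per_ton)
    print(f"${cost_per_ton}/ton: ${total / 1e12:.1f} trillion per year, "
          f"{share:.1f}% of world GDP")
```

At the low end this gives about 1.9 trillion dollars per year and roughly two percent of GDP, at the high end about 4.6 trillion and five and a half percent, so roughly the range quoted above.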

But, since carbon dioxide is taken up by the oceans, one can also try to get rid of it by extracting it from seawater. Indeed, the density of carbon dioxide in seawater is about one hundred twenty five times higher than it is in air. And once you’ve removed it, the water will take up new carbon dioxide from the air, so you can basically use the oceans to suck the carbon dioxide out of the atmosphere. That sounds really neat.

The current cost estimate for carbon extraction from seawater is about 50 dollars per ton, so that’s about half as much as carbon extraction from air. The major challenge for this idea is that the currently known methods for extracting carbon dioxide from water require heating the water to about seventy degrees Celsius, which takes a lot of energy. But maybe there are other, more energy-efficient ways to get carbon dioxide out of water? You might be the person to solve this problem.

Finally, there is carbon capture and storage, which means capturing carbon dioxide right where it’s produced and storing it away before it’s released into the atmosphere.

About twenty-six commercial facilities already use this method, and a few dozen more are planned. In twenty-twenty, about forty million tons of carbon dioxide were captured by this method. The typical cost is between 50 and 100 US dollars per ton of carbon dioxide, though in particularly lucky cases the cost may go down to about 15 dollars per ton. The major challenge here is that present technologies for carbon capture and storage require huge amounts of water.

As you can see, an overall problem for these ideas is that they’re expensive. You can therefore score in Musk’s competition by making one of the existing technologies cheaper, or more efficient, or both, or maybe you have an entirely new idea to put forward. I wish you good luck!

Saturday, April 17, 2021

Does the Universe have higher dimensions? Part 2

[This is a transcript of the video embedded below.]

In science fiction, hyper drives allow spaceships to travel faster than light by going through higher dimensions. And physicists have studied the question whether such extra dimensions exist for real in quite some detail. So, what have they found? Are extra dimensions possible? What do they have to do with string theory and black holes at the Large Hadron collider? And if extra dimensions are possible, can we use them for space travel? That’s what we will talk about today.

This video continues last week’s, in which I talked about the history of extra dimensions. As I explained in the previous video, if one adds 7 dimensions of space to our normal three dimensions, then one can describe all of the fundamental forces of nature geometrically. And that sounds like a really promising idea for a unified theory of physics. Indeed, in the early 1980s, the string theorist Edward Witten thought it was intriguing that seven additional dimensions of space is also the maximum for supergravity.

However, that numerical coincidence turned out to not lead anywhere. This geometric construction of fundamental forces, which is called Kaluza-Klein theory, suffers from several problems that no one has managed to solve.

One problem is that the radii of these extra dimensions are unstable. So they could grow or shrink away, and that’s not compatible with observation. Another problem is that some of the particles we know come in two different versions, a left-handed and a right-handed one. And these two versions do not behave the same way. This is called chirality. That particles behave this way is an observational fact, but it does not fit with the Kaluza-Klein idea. Witten actually worried about this in his 1981 paper.

Enter string theory. In string theory, the fundamental entities are strings. That the strings are fundamental means they are not made of anything else. They just are. And everything else is made from these strings. Now you can ask: how many dimensions does a string need to wiggle in to correctly describe the physics we observe?

The first answer that string theorists got was twenty-six. That’s twenty-five dimensions of space and one dimension of time. That’s a lot. Turns out though, if you add supersymmetry the number goes down to ten, so, nine dimensions of space and one dimension of time. String theory just does not work properly in fewer dimensions of space.

This creates the same problem that people had with Kaluza-Klein theory a century ago: If these dimensions exist, where are they? And string theorists answered the question the same way: We can’t see them, because they are curled up to small radii.

In string theory, one curls up those extra dimensions to complicated geometrical shapes called “Calabi-Yau manifolds”, but the details aren’t all that important. The important thing is that because of this curling up, the strings have higher harmonics. This is the same thing which happens in Kaluza-Klein theory. And it means, if a string gets enough energy, it can oscillate with certain frequencies that have to match the radius of these extra dimensions.

Therefore, it’s not true that string theory does not make predictions, though I frequently hear people claim that. String theory makes the prediction that these higher harmonics should exist. The problem is that you need really high energies to create them. That’s because we already know that these curled up dimensions have to be small. And small radii means high frequencies, and therefore high energies.

How high does the energy have to be to see these higher harmonics? Ah, here’s the thing. String theory does not tell you. We only know that these extra dimensions have to be so small we haven’t yet seen them. So, in principle, they could be just out of reach, and the next bigger particle collider could create these higher harmonics.

And this… is where the idea comes from that the Large Hadron Collider might create tiny black holes.

To understand how extra dimensions help with creating black holes, you first have to know that Newton’s one over R squared law is geometrical. The gravitational force of a point mass falls with one over R squared because the surface of the sphere grows with R squared, where R is the radius of the sphere. So, if you increase the distance to the mass, the force lines thin out as the surface of the sphere grows. But… here is the important point. Suppose you have additional dimensions of space. Say you don’t have three, but 3+n, where n is a positive integer. Then, the surface of the sphere increases with R to the (2+n).

Consequently, the gravitational force drops with one over R to the (2+n) as you move away from the mass. This means, if space has more than three dimensions, the force drops much faster with distance to the source than normally.

Of course Newtonian gravity was superseded by Einstein’s theory of General Relativity, but this general geometric consideration about how gravity weakens with distance to the source remains valid. So, in higher dimensions the gravitational force drops faster with distance to the source.

Keep in mind though that the extra dimensions we are concerned with are curled up, because otherwise we’d already have noticed them. This means, into the direction of these extra dimensions, the force lines can only spread out up to a distance that is comparable to the radius of the dimensions. After this, the only directions the force lines can continue to spread out into are the three large directions. This means that on distances much larger than the radius of the extra dimensions, this gives back the usual 1/R^2 law, which we observe.
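Here’s a toy sketch of this matching in Python. It drops all constants and uses arbitrary units, and it assumes the crossover between the two power laws happens right at the compactification radius; it only illustrates how the force interpolates between the two regimes:

```python
def gravity_falloff(r, n_extra, radius):
    """Toy scaling of the force of a point mass with distance r when
    n_extra dimensions are curled up with the given radius: below the
    compactification radius the force lines spread into all 3 + n_extra
    dimensions, above it only into the usual three."""
    if r < radius:
        return 1.0 / r ** (2 + n_extra)
    # matched at r = radius so the two regimes join continuously
    return 1.0 / (radius ** n_extra * r ** 2)

# At a tenth of the compactification radius, two extra dimensions
# make the force a factor (radius/r)^n = 100 stronger than the
# plain 1/r^2 law (n_extra = 0) would predict.
boost = gravity_falloff(0.1, 2, 1.0) / gravity_falloff(0.1, 0, 1.0)
print(boost)
```

Far outside the compactification radius the function reduces to the familiar inverse-square law, which is why we don’t notice the extra dimensions in everyday gravity.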

Now about those black holes. If gravity works as usual in three dimensions of space, we cannot create black holes. That’s because gravity is just too weak. But consider you have these extra dimensions. Since the gravitational force falls much faster as you go away from the mass, it means that if you get closer to a mass, the force gets much stronger than it would in only 3 dimensions. That makes it much easier to create black holes. Indeed, if the extra dimensions are large enough, you could create black holes at the Large Hadron Collider.

At least in theory. In practice, the Large Hadron Collider did not produce black holes, which means that if the extra dimensions exist, they’re really small. How “small”? Depends on the number of extra dimensions, but roughly speaking below a micrometer.

If they existed, could we travel through them? The brief answer is no, and even if we could it would be pointless. The reason is that while the gravitational force can spread into all of the extra dimensions, matter, like the stuff we are made of, can’t go there. It is bound to a 3-dimensional slice, which string theorists call a “brane”, that’s b r a n e, not b r a i n, and it’s a generalization of membrane. So, basically, we’re stuck on this 3-dimensional brane, which is our universe. But even if that was not the case, what do you want in these extra dimensions anyway? There isn’t anything in there and you can’t travel any faster there than in our universe.

People often think that extra dimensions provide a type of shortcut, because of illustrations like this. The idea is that our universe is kind of like this sheet which is bent, and then you can go into a direction perpendicular to it, to arrive at a seemingly distant point faster. The thing is though, you don’t need extra dimensions for that. What we call the “dimension” in general relativity would be represented in this image by the dimension of the surface, which doesn’t change. Indeed, these things are called wormholes, and you can have them in ordinary general relativity with the ordinary three dimensions of space.

This embedding space here does not actually exist in general relativity. This is also why people get confused about the question what the universe expands into. It doesn’t expand into anything, it just expands. By the way, fun fact, if you want to embed a general 4 dimensional space-time into a higher dimensional flat space you need 10 dimensions, which happens to be the same number of dimensions you need for string theory to make sense. Yet another one of these meaningless numerical coincidences, but I digress.

What does this mean for space travel? Well, it means that traveling through higher dimensions by using hyper drives is scientifically extremely implausible. Therefore, my ultimate ranking for the scientific plausibility of science fiction travel is:

3rd place: Hyper drives because it’s a nice idea, it just makes no scientific sense.

2nd place: Wormholes, because at least they exist mathematically, though no one has any idea how to create them.

And the winner is... Warp drives! Because not only does the mathematics work out, it’s in principle possible to create them, at least as long as you stay below the speed of light limit. How to travel faster than light, I am afraid we still don’t know. But maybe you are the one to figure it out.

Saturday, April 10, 2021

Does the Universe have Higher Dimensions? Part 1

[This is a transcript of the video embedded below.]

Space, the way we experience it, has three dimensions. Left-right, forward backward, and up-down. But why three? Why not 7? Or 26? The answer is: No one knows. But if no one knows why space has three dimensions, could it be that it actually has more? Just that we haven’t noticed for some reason? That’s what we will talk about today.

The idea that space has more than three dimensions may sound entirely nuts, but it’s a question that physicists have seriously studied for more than a century. And since there’s quite a bit to say about it, this video will have two parts. In this part we will talk about the origins of the idea of extra dimensions, Kaluza-Klein theory and all that. And in the next part, we will talk about more recent work on it, string theory and black holes at the Large Hadron Collider and so on.

Let us start with recalling how we describe space and objects in it. In two dimensions, we can put a grid on a plane, and then each point is a pair of numbers that says how far away from zero you have to go in the horizontal and vertical direction to reach that point. The arrow pointing to that point is called a “vector”.

This construction is not specific to two dimensions. You can add a third direction, and do exactly the same thing. And why stop there? You can no longer *draw a grid for four dimensions of space, but you can certainly write down the vectors. They’re just a row of four numbers. Indeed, you can construct vector spaces in any number of dimensions, even in infinitely many dimensions.

And once you have vectors in these higher dimensions, you can do geometry with them, like constructing higher dimensional planes, or cubes, and calculating volumes, or the shapes of curves, and so on. And while we cannot directly draw these higher dimensional objects, we can draw their projections into lower dimensions. This for example is the projection of a four-dimensional cube into two dimensions.

Now, it might seem entirely obvious today that you can do geometry in any number of dimensions, but it’s actually a fairly recent development. It wasn’t until eighteen forty-three that the British mathematician Arthur Cayley wrote about the “Analytical Geometry of (n) Dimensions”, where n could be any positive integer. Higher dimensional geometry sounds innocent, but it was a big step towards abstract mathematical thinking. It marked the beginning of what is now called “pure mathematics”, that is, mathematics pursued for its own sake, and not necessarily because it has an application.

However, abstract mathematical concepts often turn out to be useful for physics. And these higher dimensional geometries came in really handy for physicists because in physics, we usually do not only deal with things that sit in particular places, but with things that also move in particular directions. If you have a particle, for example, then to describe what it does you need both a position and a momentum, where the momentum tells you the direction into which the particle moves. So, actually, each particle is described by a vector in a six-dimensional space, with three entries for the position and three entries for the momentum. This six-dimensional space is called phase-space.

By dealing with phase-spaces, physicists became quite used to dealing with higher dimensional geometries. And, naturally, they began to wonder whether the *actual space that we live in could have more dimensions. This idea was first pursued by the Finnish physicist Gunnar Nordström, who, in 1914, tried to use a 4th dimension of space to describe gravity. It didn’t work though. The person to figure out how gravity works was Albert Einstein.

Yes, that guy again. Einstein taught us that gravity does not need an additional dimension of space. Three dimensions of space will do, it’s just that you have to add one dimension of time, and allow all these dimensions to be curved.

But then, if you don’t need extra dimensions for gravity, maybe you can use them for something else.

Theodor Kaluza certainly thought so. In 1921, Kaluza wrote a paper in which he tried to use a fourth dimension of space to describe the electromagnetic force in a very similar way to how Einstein described gravity. But Kaluza used an infinitely large additional dimension and did not really explain why we don’t normally get lost in it.

This problem was solved a few years later by Oskar Klein, who assumed that the 4th dimension of space has to be rolled up to a small radius, so you can’t get lost in it. You just wouldn’t notice if you stepped into it, it’s too small. This idea that electromagnetism is caused by a curled-up 4th dimension of space is now called Kaluza-Klein theory.

I have always found it amazing that this works. You take an additional dimension of space, roll it up, and out comes gravity together with electromagnetism. You can explain both forces entirely geometrically. It is probably because of this that Einstein in his later years became convinced that geometry is the key to a unified theory for the foundations of physics. But at least so far, that idea has not worked out.

Does Kaluza-Klein theory make predictions? Yes, it does. All the electromagnetic fields which go into this 4th dimension have to be periodic so they fit onto the curled-up dimension. In the simplest case, the fields just don’t change when you go into the extra dimension. And that reproduces the normal electromagnetism. But you can also have fields which oscillate once as you go around, then twice, and so on. These are called higher harmonics, like you have in music. So, Kaluza-Klein theory makes a prediction, which is that all these higher harmonics should also exist.

Why haven’t we seen them? Because you need energy to make this extra dimension wiggle. And the more it wiggles, that is, the higher the harmonics, the more energy you need. Just how much energy? Well, that depends on the radius of the extra dimension. The smaller the radius, the smaller the wavelength, and the higher the frequency. So a smaller radius means you need higher energy to find out if the extra dimension is there. Just how small the radius is, the theory does not tell you, so we don’t know what energy is necessary to probe it. But the short summary is that we have never seen one of these higher harmonics, so the radius must be very small.
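To get a feeling for what “small radius means high energy” translates to in numbers, here’s a rough order-of-magnitude estimate in Python. It uses E ≈ ħc/R and drops numerical factors like 2π, so take the outputs as ballpark values only:

```python
HBAR_C_EV_M = 1.9732705e-7  # hbar * c, in eV * meters

def first_harmonic_energy_gev(radius_m):
    """Ballpark energy of the lowest higher harmonic of a dimension
    curled up with the given radius: E ~ hbar*c / R, with factors
    of order one (like 2*pi) dropped."""
    return HBAR_C_EV_M / radius_m / 1e9

print(first_harmonic_energy_gev(1e-18))    # ~200 GeV: collider territory
print(first_harmonic_energy_gev(1.6e-35))  # ~1e19 GeV: a Planck-length radius
```

So a radius of about a billionth of a billionth of a meter would put the first harmonic within reach of current colliders, while a Planck-length radius puts it hopelessly far beyond any conceivable experiment.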

Oskar Klein himself, by the way, was really modest about his theory. He wrote in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."

("Whether these indications of possibilities are built on reality has of course to be decided by the future.")

But we don’t actually use Kaluza-Klein theory instead of electromagnetism. Why is that? It’s because Kaluza-Klein theory has some serious problems.

The first problem is that while the geometry of the additional dimension correctly gives you electric and magnetic fields, it does not give you charged particles, like electrons. You still have to put those in. The second problem is that the radius of the extra dimension is not stable. If you perturb it, it can begin to increase, and that can have observable consequences which we have not seen. The third problem is that the theory is not quantized, and no one has figured out how to quantize geometry without running into problems. You can however quantize plain old electromagnetism without problems.

We also know today of course that the electromagnetic force actually combines with the weak nuclear force to what is called the electroweak force. That, interestingly enough, turns out to not be a problem for Kaluza-Klein theory. Indeed, it was shown in the 1960s by Ryszard Kerner, that one can do Kaluza-Klein theory not only for electromagnetism, but for any similar force, including the strong and weak nuclear force. You just need to add a few more dimensions.

How many? For the weak nuclear force, you need two more, and for the strong nuclear force another four. So in total, we now have one dimension of time, 3 for gravity, one for electromagnetism, 2 for the weak nuclear force and 4 for the strong nuclear force, which adds up to a total of 11.

In 1981, Edward Witten noticed that 11 happened to be the same number of dimensions which is the maximum for supergravity. What happened after this is what we’ll talk about next week.

Saturday, April 03, 2021

Should Stephen Hawking have won the Nobel Prize?

[This is a transcript of the video embedded below.]

Stephen Hawking, who sadly passed away in 2018, has repeatedly joked that he might get a Nobel Prize if the Large Hadron Collider produces tiny black holes. For example, here is a recording of a lecture he gave in 2016:
“Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize. So I might get a Nobel Prize after all.”
The British physicist and science writer Philip Ball, who attended this 2016 lecture, commented:
“I was struck by how unusual it was for a scientist to state publicly that their work warranted a Nobel… [It] gives a clue to the physicist’s elusive character: shamelessly self-promoting to the point of arrogance, and heedless of what others might think.”
I heard Hawking say pretty much exactly the same thing in a public lecture a year earlier in Stockholm. But I had an entirely different reaction. I didn’t think of his comment as arrogant. I thought he was explaining something which few people knew about. And I thought he was right in that, had the Large Hadron Collider seen these tiny black holes decay, he would almost certainly have gotten a Nobel Prize. But I also thought that this was not going to happen. He was much more likely to win a Nobel Prize for something else. And he almost did.

Just exactly what might Hawking have won the Nobel Prize for, and should he have won it? That’s what we will talk about today.

In nineteen-seventy-four, Stephen Hawking published a calculation that showed black holes are not perfectly black, but they emit thermal radiation. This radiation is now called “Hawking radiation”. Hawking’s calculation shows that the temperature of a black hole is inversely proportional to the mass of the black hole. This means the larger the black hole, the lower its temperature, and the harder it is to measure the radiation. For the astrophysical black holes that we know of, the temperature is way, way too small to be measurable. So, the chances of him ever winning a Nobel Prize for black hole evaporation seemed very small.
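Hawking’s formula for the black hole temperature, T = ħc³/(8πGMk_B), can be evaluated numerically to see just how cold astrophysical black holes are. Here’s a minimal sketch using standard SI values for the constants:

```python
import math

# Hawking temperature T = ħ c³ / (8 π G M k_B), inversely proportional
# to the black hole mass M. Constants in SI units.
hbar = 1.054571817e-34   # reduced Planck constant, J·s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m³/(kg·s²)
k_B = 1.380649e-23       # Boltzmann constant, J/K

def hawking_temperature(mass_kg):
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

M_sun = 1.989e30  # kg, mass of the Sun
print(hawking_temperature(M_sun))       # ~6e-8 Kelvin
print(hawking_temperature(10 * M_sun))  # ten times the mass, a tenth the temperature
```

For a solar-mass black hole this gives about sixty billionths of a Kelvin, far below even the 2.7 Kelvin of the cosmic microwave background, which is why the radiation is unmeasurable in practice.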

But, in the late nineteen-nineties, the idea came up that tiny black holes might be produced in particle collisions at the Large Hadron Collider. This is only possible if the universe has additional dimensions of space, so not just the three that we know of, but at least five. These additional dimensions of space would have to be curled up to small radii, because otherwise we would already have seen them.

Curled up extra dimensions. Haven’t we heard that before? Yes, because string theorists talk about curled up dimensions all the time. And indeed, string theory was the major motivation to consider this hypothesis of extra dimensions of space. However, I have to warn you that string theory does NOT tell you these extra dimensions should have a size that the Large Hadron Collider could probe. Even if they exist, they might be much too small for that.

Nevertheless, if you just assume that the extra dimensions have the right size, then the Large Hadron Collider could have produced tiny black holes. And since they would have been so small, they would have been really, really hot. So hot, indeed, they’d decay pretty much immediately. To be precise, they’d decay in a time of about ten to the minus twenty-three seconds, long before they’d reach a detector.

But according to Hawking’s calculation, the decay of these tiny black holes should proceed by a very specific pattern. Most importantly, according to Hawking, black holes can decay into pretty much any other particle. And there is no other particle decay which looks like this. So, it would have been easy to see black hole decays in the data. If they had happened. They did not. But if they had, it would almost certainly have gotten Hawking a Nobel Prize.

However, the idea that the Large Hadron Collider would produce tiny black holes was never very plausible. That’s because there was no reason the extra dimensions, in case they exist to begin with, should have just the right size for this production to be possible. The only reason physicists thought this would be the case was an argument from mathematical beauty called “naturalness”. I have explained the problems with this argument in an earlier video, so check this out for more.

So, yeah, I don’t think tiny black holes at the Large Hadron Collider was Hawking’s best shot at a Nobel Prize.

Are there other ways you could see black holes evaporate? Not really. Without these curled up extra dimensions, which do not seem to exist, we can’t make black holes ourselves. Without extra dimensions, the energy density that we’d have to reach to make black holes is way beyond our technological limitations. And the black holes that are produced in natural processes are too large, and then too cold to observe Hawking radiation.

One thing you *can do, though, is simulating black holes with superfluids. This has been done by the group of Jeff Steinhauer in Israel. The idea is that you can use a superfluid to mimic the horizon of a black hole. If you remember, the horizon of a black hole is a boundary in space, from inside of which light cannot escape. In a superfluid, one does not trap light, but one traps sound waves instead. One can do this because the speed of sound in the superfluid depends on the density of the fluid. And since one can experimentally control this density, one can control the speed of sound.

If one then makes the fluid flow, there’ll be regions from within which the sound waves cannot escape because they’re just too slow. It’s like you’re trying to swim away from a waterfall. There’s a boundary beyond which you just can’t swim fast enough to get away. That boundary is much like a black hole horizon. And the superfluid has such a boundary, not for swimmers, but for sound waves.
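This can be made concrete with a toy calculation. The flow profile and all numbers here are invented purely for illustration (they are not from Steinhauer’s experiments): the sonic horizon sits where the flow speed drops below the speed of sound.

```python
# Toy model of a sonic horizon. The flow profile and numbers are invented
# for illustration; they are not taken from the Steinhauer experiments.
c_sound = 1.0  # speed of sound in the fluid (arbitrary units)

def flow_speed(x):
    """Flow speed of the fluid at distance x from the 'drain'."""
    return 2.0 / x

# Scan outward from the drain: the horizon is where the flow speed
# drops below the sound speed, so sound can escape only beyond it.
horizon = None
x = 0.1
while x < 5.0:
    if flow_speed(x) < c_sound:
        horizon = x
        break
    x += 0.001
print(horizon)  # ~2.0: inside this radius, sound waves cannot escape
```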

You can also do this with a normal fluid, but you need the superfluid so that the sound has the right quantum properties, as it does in Hawking’s calculation. And in a series of really neat experiments, Steinhauer’s group has shown that these sound waves in the superfluid indeed have the properties that Hawking predicted. That’s because Hawking’s calculation applies to the superfluid in just exactly the same way it applies to real black holes.

Could Hawking have won a Nobel Prize for this? I don’t think so. That’s because mimicking a black hole with a superfluid is cool, but of course it’s not the real thing. These experiments are a type of quantum simulation, which means they demonstrate that Hawking’s calculation is correct. But the measurements on superfluids cannot demonstrate that Hawking’s prediction is correct for real black holes.

So, in all fairness, it never seemed likely Hawking would win a Nobel Prize for Hawking radiation. It’s just too hard to measure. But that wasn’t the only thing Hawking did in his career.

Before he worked on black hole evaporation, Hawking worked with Penrose on the singularity theorems. Penrose’s theorem showed that, in contrast to what most physicists believed at the time, black holes are a pretty much unavoidable consequence of stellar collapse. Before that, physicists thought black holes were mathematical curiosities that would not be produced in reality. It was only because of the singularity theorems that black holes began to be taken seriously. Eventually astronomers looked for them, and now we have solid experimental evidence that black holes exist. Hawking applied the same method to the early universe to show that the Big Bang singularity is likewise unavoidable, unless General Relativity somehow breaks down. And that is an absolutely amazing insight about the origin of our universe.

I made a video about the history of black holes two years ago in which I said that the singularity theorems are worth a Nobel Prize. And indeed, Penrose was one of the recipients of the 2020 Nobel Prize in physics. If Hawking had not died two years earlier, I believe he would have won the Nobel Prize together with Penrose. Or maybe the Nobel Prize committee just waited for him to die, so they wouldn’t have to think about just how to disentangle Hawking’s work from Penrose’s? We’ll never know.

Does it matter that Hawking did not win a Nobel Prize? Personally, I think of the Nobel Prize first and foremost as an opportunity to celebrate scientific discoveries. The people who we think might win this prize are highly deserving with or without an additional medal. And Hawking didn’t need a Nobel Prize, he’ll be remembered without it.

Saturday, March 27, 2021

Is the universe REALLY a hologram?

[This is a transcript of the video embedded below.]

Do we live in a hologram? String theorists think we do. But what does that mean? How do holograms work, and how are they related to string theory? That’s what we will talk about today.

In science fiction movies, holograms are 3-dimensional, moving images. But in reality, the technology for motion holograms hasn’t caught up with imagination. At least so far, holograms are still mostly stills.

The holograms you are most likely to have seen are not like those in the movies. They are not a projection of an object into thin air – however that’s supposed to work. Instead, you normally see a three-dimensional object above or behind a flat film. Small holograms are today frequently used as a security measure on credit cards, ID cards, or even banknotes, because they are easy to see, but difficult to copy.

If you hold such a hologram up to the light, you will see that it seems to have depth, even though it is printed on a flat surface. That’s because in photographs, we are limited to the one perspective from which the picture was taken, and that’s why they look flat. But you can tilt holograms and observe them from different angles, as if you were examining a three-dimensional object.

Now, these holograms on your credit cards, or the ones that you find on postcards or book covers, are not “real” holograms. They are actually composed of several 2-dimensional images and depending on the angle, a different image is reflected back at you, which creates the illusion of a 3-dimensional image.

In a real hologram the image is indeed 3-dimensional. But the market for real holograms is small, so they are hard to come by, even though the technology to produce them is straightforward. A real hologram looks like this.

Real holograms actually encode a three-dimensional object on a flat surface. How is this possible? The answer is interference.

Light is electromagnetic waves, so it has crests and troughs. And a key property of waves is that they can be overlaid and then amplify or wash out each other. If two waves are overlaid so that two crests meet at the same point, that will amplify the wave. This is called constructive interference. But if a crest meets a trough, the waves will cancel. This is called destructive interference.

Now, we don’t normally see light cancelling out other light. That’s because to see interference one needs very regular light, where the crests and troughs are neatly aligned. Sunlight or LED light doesn’t have that property. But laser light has it, and so laser light can be made to interfere.

And this interference can be used to create holograms. For this, one first splits a laser beam in two with a semi-transparent glass or crystal, called a beam-splitter, and makes each beam broader with a diverging lens. Then, one aims one half of the beam at the object that one wants to take an image of. The light will not just bounce off the object in one single direction, but it will scatter in many different directions. And the scattered light contains information about the surface of the object. Then, one recombines the two beams and captures the intensity of the light with a light-sensitive screen.

Now, remember that laser light can interfere. This means, how large the intensity on the screen is, depends on whether the interference was destructive or constructive, which again depends on just where the object was located and how it was shaped. So, the screen has captured the full three-dimensional information. To view the hologram, one develops the film and shines light onto it at the same wavelength as the image was taken, which reproduces the 3-dimensional image.

To understand this in a little more detail, let us look at the image on the screen if one uses a very small point-like object. It looks like this. It’s called a zone plate. The intensity and width of the rings depends on the distance between the point-like object and the screen, and the wavelength of the light. But any object is basically a large number of point-like objects, so the interference image on the screen is generally an overlap of many different zone plates with these concentric rings.
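The ring pattern can be computed directly. For a point object at distance d from the screen, the extra path length to a ring of radius r is approximately r²/(2d), so bright rings appear where that extra path is a whole number of wavelengths, r_n ≈ √(2nλd). The numbers below are illustrative (a red helium-neon laser, a 10 cm object-to-screen distance), not from any particular experiment:

```python
import math

# Bright rings of a zone plate: the interference pattern a point-like
# object at distance d creates on the screen. Constructive interference
# occurs where the extra path length r**2 / (2*d) equals a whole number
# of wavelengths, giving r_n = sqrt(2 * n * wavelength * d).
wavelength = 633e-9  # m, red helium-neon laser light (illustrative)
d = 0.10             # m, object-to-screen distance (illustrative)

radii = [math.sqrt(2 * n * wavelength * d) for n in range(1, 6)]
for n, r in enumerate(radii, start=1):
    print(f"bright ring {n}: radius {r * 1e3:.3f} mm")
```

Note how the spacing between consecutive rings shrinks as n grows, which is why the pattern looks like concentric rings that bunch together toward the edge.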

The amazing thing about holograms is now this. Every part of the screen receives information from every part of the object. As a consequence, if you develop the image to get the hologram, you can take it apart into pieces, and each piece will still recreate the whole 3-dimensional object. To understand better how this works, look again at the zone plate, the one of a single point-like object. If you have only a small piece that contains part of the rings, you can infer the rest of the pattern, though it gets a little more difficult. If you have a general plate that overlaps many zone plates, this is still possible. So, at least mathematically, you can reconstruct the entire object from any part of the holographic plate. In reality, the quality of the image will go down.

So, now that you know how real holograms work, let us talk about the idea that the universe is a hologram.

When string theorists claim that our universe is a hologram, they mean the following. Our universe has a positive cosmological constant. But mathematically, universes with a negative cosmological constant are much easier to work with. So, this is what string theorists usually look at. These universes with a negative cosmological constant are called Anti-de Sitter spaces and into these Anti-de Sitter things they put supersymmetric matter. To the best of current knowledge, our universe is not Anti-de Sitter and matter is not supersymmetric, but mathematically, you can certainly do that.

For some specific examples, it has then been shown that the gravitational theory in such an Anti de Sitter universe is mathematically equivalent to a different theory on the conformal boundary of that universe. What the heck is the conformal boundary of the universe? Well, our actual universe doesn’t have one. But these Anti-De Sitter spaces do. Just exactly how they are defined isn’t all that important. You only need to know that this conformal boundary has one dimension of space less than the space it is a boundary of.

So, you have an equivalence between two theories in a different number of dimensions of space. A gravitational theory in this anti-De Sitter space with the weird matter. And a different theory on the boundary of that space, which also has weird matter. And just so you have heard the name: The theory on the boundary is what’s called a conformal field theory, and the whole thing is known as the Anti-de Sitter – Conformal Field Theory duality, or AdS/CFT for short.

This duality has been mathematically confirmed for some specific cases, but pretty much all string theorists seem to believe it is much more generally valid. In fact, a lot of them seem to believe it is valid even in our universe, even though there is no evidence for that, neither observational nor mathematical. In this most general form, the duality is simply called the “holographic principle”.

If the holographic principle was correct, it would mean that the information about any volume in our universe is encoded on the boundary of that volume. That’s remarkable because naively, you’d think the amount of information you can store in a volume of space grows much faster than the information you can store on the surface. But according to the holographic principle, the information you can put into the volume somehow isn’t what we think it is. It must have more correlations than we realize. So if the holographic principle were true, that would be very interesting. I talked about this in more detail in an earlier video.

The holographic principle indeed sounds a little like optical holography. In both cases one encodes information about a volume on a surface with one dimension less. But if you look a little more closely, there are two important differences between the holographic principle and real holography:

First, an optical hologram is not actually captured in two dimensions; the holographic film has a thickness, and you need that thickness to store the information. The holographic principle, on the other hand, is a mathematical abstraction, and the encoding really occurs in one dimension less.

Second, as we saw earlier, in a real hologram, each part contains information about the whole object. But in the mathematics of the holographic universe, this is not the case. If you take only a piece of the boundary, that will not allow you to reproduce what goes on in the entire universe.

This is why I don’t think referring to this idea from string theory as holography is a good analogy. But now you know just exactly what the two types of holography do, and do not have in common.

Saturday, March 20, 2021

Whatever happened to Life on Venus?

[This is a transcript of the video embedded below.]

A few months ago, the headlines screamed that scientists had found signs of life on Venus. But it didn’t take long for other scientists to raise objections. So, just exactly what did they find on Venus? Did they actually find it? And what does it all mean? That’s what we will talk about today.

The discovery that made headlines a few months ago was that an international group of researchers said they’d found traces of a molecule called phosphine in the atmosphere of Venus.

Phosphine is a molecule made of one phosphorus and three hydrogen atoms. On planets like Jupiter and Saturn, pressure and temperature are so high that phosphine can form by coincidental chemical reactions, and indeed phosphine has been observed in the atmosphere of these two planets. On planets like Venus, however, the pressure isn’t remotely large enough to produce phosphine this way.

And the only other known processes to create phosphine are biological. On Earth, for example, which in size and distance to the Sun isn’t all that different from Venus, the only natural production processes for phosphine are certain types of microbes. Lest you think this means that phosphine is somehow “good for life”, I should add that the microbes in question live without oxygen. Indeed, phosphine is toxic for forms of life that use oxygen, which is most of life on Earth. In fact, phosphine is used in the agricultural industry to kill rodents and insects.

So, the production of phosphine on Venus at fairly low atmospheric pressure seems to require life in some sense, which is why the claim that there’s phosphine on Venus is BIG. It could mean there’s microbial life on Venus. And just in case microbial life doesn’t excite you all that much, this would be super-interesting because it would give us a clue to what the chances are that life evolves on other planets in general.

So, just exactly what did they find?

The suspicion that phosphine might be present on Venus isn’t entirely new. The researchers first saw something that could be phosphine in two-thousand and seventeen in data from the James Clerk Maxwell Telescope, which is a radio telescope in Hawaii. However, this signal was not particularly good, so they didn’t publish it. Instead they waited for more data from the ALMA telescope in Chile. Then they published a combined analysis of the data from both telescopes in Nature Astronomy.

Here’s what they did. One can look for evidence of molecules by exploiting that each molecule reacts to light at different wavelengths. To some wavelengths, a molecule may not react at all, but others it may absorb because they cause the molecule to vibrate or rotate around itself. It’s like each molecule has very specific resonance frequencies. You know how, if you’re in an airplane and the engine’s being turned up, then at a certain pitch the whole plane shakes? That’s a resonance. For the plane it happens at certain wavelengths of sound. For molecules it happens at certain wavelengths of light.

So, if light passes through a gas, like the atmosphere of Venus, then just how much light at each wave-length passes through depends on what molecules are in the gas. Each molecule has a very specific signature, and that makes the identification possible.

At least in principle. In practice… it’s difficult. That’s because different molecules can have very similar absorption lines.

For example, the phosphine absorption line which all the debate is about has a frequency of two-hundred sixty-six point nine four four Gigahertz. But sulfur dioxide has an absorption line at two-hundred sixty-six point nine four three Gigahertz, and sulfur dioxide is really common in the atmosphere of Venus. That makes it quite a challenge to find traces of phosphine.
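To see just how close these two lines lie, one can convert their separation into the equivalent Doppler velocity, that is, how fast the gas would have to move to shift one line onto the other. A quick sketch using the frequencies quoted above:

```python
# Separation of the two absorption lines quoted above, expressed as a
# fractional shift and as the equivalent Doppler velocity.
c = 299_792.458             # speed of light in km/s
f_phosphine = 266.944       # GHz, phosphine absorption line
f_sulfur_dioxide = 266.943  # GHz, nearby sulfur dioxide absorption line

delta_f = f_phosphine - f_sulfur_dioxide
fractional = delta_f / f_phosphine
velocity = c * fractional   # km/s

print(fractional)  # ~3.7e-6, a few parts per million
print(velocity)    # ~1.1 km/s
```

A relative velocity of only about one kilometer per second between the gas and the observer would shift one line onto the other, which gives a feeling for how carefully such measurements have to be calibrated.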

But challenges are there to be met. The astrophysicists estimated the contribution from sulfur dioxide from other lines which this molecule should also produce.

They found that these other lines were almost invisible. So they concluded that the absorption in the frequency range of interest had to be mostly due to phosphine, and they estimated the amount at about seven to twenty parts per billion, so that’s seven to twenty molecules of phosphine per billion molecules of anything.

It’s this discovery which made the big headlines. The results they got for the phosphine amount from the two different telescopes are a little different, and such an inconsistency is somewhat of a red flag. But then, these measurements were made some years apart and the atmosphere of Venus could have undergone changes in that period, so it’s not necessarily a problem.

Unfortunately, after publishing their analysis, the team learned that the data from ALMA had not been processed correctly. It was not their fault, but it meant they had to redo their analysis. With the corrected data, the amount of phosphine they claimed to see fell to something between 1 and 4 parts per billion. Less, but still there.

Of course such an important finding attracted a lot of attention, and it didn’t take long for other researchers to have a close look at the analysis. It was not only the claimed detection of phosphine that was surprising; the near-absence of sulfur dioxide was unusual too. Sulfur dioxide had been detected many times in the atmosphere of Venus, in amounts about 10 times higher than what the phosphine-discovery study claimed.

Already in October last year, a paper came out that argued there’s no signal at all in the data, and that said the original study used an overly complicated twelve parameter fit that fooled them into seeing something where there was nothing. This criticism has since been published in a peer reviewed journal. And by the end of January another team put out two papers in which they pointed out several other problems with the original analysis.

First they used a model of the atmosphere of Venus and calculated that the alleged phosphine absorption comes from altitudes higher than eighty kilometers. Problem is, at such a high altitude, phosphine is incredibly unstable because ultraviolet light from the sun breaks it apart quickly. They estimated it would have a lifetime of under one second! This means for phosphine to be present on Venus in the observed amounts, it would have to be produced at a rate higher than the production of oxygen by photosynthesis on Earth. You’d need a lot of bacteria to get that done.

Second, they claim that the ALMA telescope should not have been able to see the signal at all, or should at most have seen a much weaker one, because of an effect called line dilution. Line dilution can occur if one has a telescope with many separate dishes like ALMA. A signal that’s smeared out over many of the dishes, like the signal from the atmosphere of Venus, can then be affected by interference effects.

According to estimates in the new paper, line dilution should suppress the signal in the ALMA telescope by about a factor 10-20, in which case it would not be visible at all. And indeed they claim that no signal is entirely consistent with the data from the second telescope. This criticism, too, has now passed peer review.

What does it mean?

Well, the authors of the original study might reply to this criticism, and so it will probably take some time until the dust settles. But even if the criticism is correct, this would not mean there’s no phosphine on Venus. As they say, absence of evidence is not evidence of absence. If the criticism is correct, then the observations, exactly because they probe only high altitudes where phosphine is unstable, can neither exclude, nor confirm, the presence of phosphine on Venus. And so, the summary is, as so often in science: More work is needed.

Wednesday, March 17, 2021

Live Seminar about Dark Matter on Friday

I will give an online seminar about dark matter and modified gravity on Friday at 4pm CET. If you want to attend, the link is here:

I'm speaking in English (as you can see, half in American, half in British English, as usual), but the seminar will be live translated to Spanish, for which there's a zoom link somewhere.

Saturday, March 13, 2021

Can we stop hurricanes?

[This is a transcript of the video embedded below.]

Hurricanes are among the most devastating natural disasters. That’s because hurricanes are enormous! A medium-sized hurricane extends over an area about the size of Texas. On a globe they’ll cover 6 to 12 degrees latitude. And as they blow over land, they leave behind wide trails of destruction, caused by strong winds and rain. Damages from hurricanes regularly exceed billions of US dollars. Can’t we do something about that? Can’t we blast hurricanes apart? Redirect them? Or stop them from forming in the first place? What does science say about that? That’s what we’ll talk about today.

Donald Trump, the former president of the United States, has reportedly asked repeatedly whether it’s possible to get rid of hurricanes by dropping nuclear bombs on them. His proposal was swiftly dismissed by scientists and the media alike. Their argument can be summed up with “you can’t” and even if you could “it’d be a bad idea.” Trump then denied he ever said anything, the world forgot about it, and here we are, still wondering whether there’s something we can do to stop hurricanes.

Trump’s idea might sound crazy, but he was not the first to think of nuking a hurricane, and he probably won’t be the last. And I think trying to prevent hurricanes isn’t as crazy as it sounds.

The idea to nuke a hurricane came up right after nuclear weapons were deployed for the first time, in Japan in August 1945. August is in the middle of the hurricane season in Florida. The mayor of Miami Beach, Herbert Frink, made the connection. He asked President Harry Truman about the possibility of using the new weapon to fight hurricanes. And, sure enough, the Americans looked into it.

But they quickly realized that while the energy released by a nuclear bomb was gigantic compared to all other kinds of weapons, it was still nothing compared to the energies that build up in hurricanes. For comparison: The atomic bombs dropped on Japan released an energy of about 20 kilotons each. A typical hurricane releases about 10,000 times as much energy – per hour. The total power of a hurricane is comparable to the entire global power consumption. That’s because hurricanes are enormous!
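A back-of-the-envelope check of this comparison, using the standard convention that one kiloton of TNT-equivalent corresponds to 4.184 × 10¹² Joules:

```python
# Energy of the bombs dropped on Japan versus the hourly energy release
# of a typical hurricane, using the figures quoted above.
KILOTON_TNT = 4.184e12                       # J per kiloton of TNT-equivalent
bomb_energy = 20 * KILOTON_TNT               # ~20 kilotons per bomb
hurricane_energy_per_hour = 10_000 * bomb_energy

print(bomb_energy)                       # ~8.4e13 J
print(hurricane_energy_per_hour)         # ~8.4e17 J released per hour
print(hurricane_energy_per_hour / 3600)  # ~2.3e14 W of sustained power
```

That sustained power is in the hundreds of terawatts, which is why a single explosion, however large, barely registers against it.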

By the way, hurricanes and typhoons are the same thing. The generic term used by meteorologists is “tropical cyclone”. It refers to “a rotating, organized system of clouds and thunderstorms that originates over tropical or subtropical waters.” If they get large enough, they’re then either called hurricanes or typhoons, or they just remain tropical cyclones. But it’s like the difference between an astronaut and a cosmonaut. The same thing!

But back to the nukes. In 1956 an Air Force meteorologist by the name of Jack W. Reed proposed to launch a megaton nuclear bomb – that is, about 50 times the power of the ones dropped on Japan – into a hurricane. Just to see what happened. He argued: “Since a complete theory for the dynamics of hurricanes will probably not be derived by meteorologists for several years, argument pros and con without conclusive foundation will be made over the effects to be expected… Only a full-scale test could prove the results.” In other words, if we don’t do it, we’ll never know just how bad the idea is. As far as the radiation hazard was concerned, Reed claimed it would be negligible: “An airburst would cause no intense fallout,” never mind that a complete theory for the dynamics of hurricanes wasn’t available then and still isn’t.

Reed’s proposal was dismissed by both the military and the scientific community. The test never took place, but the proposal is interesting nevertheless, because Reed went to some length to explain how to go about nuking a hurricane smartly.

To understand what he was trying to get at, let’s briefly talk about how hurricanes form. Hurricanes can form over the ocean when the water temperature is high enough. Trouble begins at around 26 degrees Celsius or 80 degrees Fahrenheit. The warm water evaporates and rises. As it rises it cools and creates clouds. This tower of water-heavy clouds begins to spin because the Coriolis force, which comes from the rotation of planet Earth, acts on the air that’s drawn in, and the more the clouds spin, the better they get at drawing in more air. As the spinning accelerates, the center of the hurricane clears out and leaves behind a mostly calm region that’s usually a few dozen miles in diameter and has very low barometric pressure. This calm center is called the “eye” of the hurricane.

Reed now argued that if one detonates a megaton nuclear weapon directly in the eye of a hurricane, this would blast away the warm air that feeds the cycle, increase the barometric pressure, and prevent the storm from gathering more strength.

Now, the obvious problem with this idea is that even if you succeeded, you’d deposit radioactive debris in clouds that you just blasted all over the globe, congratulations. But even leaving aside the little issue with the radioactivity, it almost certainly wouldn’t work because - hurricanes are enormous.

It’s not only that you’re still up against a power that exceeds that of your nuclear bomb by three orders of magnitude, it’s also that an explosion doesn’t actually move a lot of air from one place to another, which is what Reed envisioned. The blast creates a shock wave – that’s bad news for everything in the way of that shock – but it does little to change the barometric pressure after the shock wave has passed through.

So if nuclear bombs are not the way to deal with hurricanes, can we maybe make them rain off before they make landfall? This technique is called “cloud seeding” and we talked about this in a previous video. If you remember, there are two types of cloud seeding, one that creates snow or ice, and one that creates rain.

The first one, called glaciogenic seeding, was indeed tried on hurricanes by Homer Simpson. No, not this Homer, but a man by the name of Robert Homer Simpson, who in 1962 was the first director of the American Project Stormfury, which had the goal of weakening hurricanes.

The Americans actually *did spray a hurricane with silver iodide and observed afterwards that the hurricane indeed weakened. Hooray! But wait. Further research showed that hurricane clouds contain very few supercooled water droplets, so the method couldn’t work even in theory. Instead, it turned out that hurricanes frequently undergo similar changes without intervention, so the observation was most likely a coincidence. Project Stormfury was canceled in 1983.

What about hygroscopic cloud seeding, which works by spraying clouds with particles that absorb water, to make the clouds rain off? The effects of this have been studied to some extent by observing natural phenomena. For example, dust that’s blown up over the Sahara Desert can be transported by winds over long distances. Though much remains to be understood, some observations seem to indicate that interactions with this dust make it easier for the clouds to rain off, which naturally weakens hurricanes.

So why don’t we try something similar? Again, the problem is that hurricanes are enormous! You’d need a whole army of airplanes to spray the clouds, and even then that would almost certainly not make the hurricanes disappear, but merely weaken them.

There’s a long list of other things people have considered to get rid of hurricanes. For example, spraying the upper layers of a hurricane with particles that absorb sunlight to warm up the air, and thereby reduce the updraft. But again, the problem is that hurricanes are enormous! Keep in mind, you’d have to spray an area about the size of Texas.
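Just to illustrate that scale: here is a hypothetical estimate of how many flights it would take to spray an area the size of Texas even once. The swath width and flight length are invented round numbers, chosen only to show the order of magnitude:

```python
# Illustrative estimate of seeding flights needed to cover an area the
# size of Texas once. Swath width and flight length are hypothetical
# assumptions, picked only to show the scale of the problem.
texas_area_km2 = 696_000        # approximate area of Texas
swath_width_km = 2              # assumed width treated per pass
flight_length_km = 1_000        # assumed length of one seeding flight

area_per_flight_km2 = swath_width_km * flight_length_km
flights_needed = texas_area_km2 / area_per_flight_km2
print(f"Roughly {flights_needed:.0f} flights to cover the area once")
```

And that’s before accounting for the fact that the storm moves, the particles disperse, and you’d have to do all of this in the few days before landfall.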

A similar idea is to prevent the water at the ocean surface from evaporating and feeding the growth of the hurricane, for example by covering the ocean surface with oil films. The obvious problem with this idea is that, well, now you have all that oil on the ocean. But also, some small-scale experiments have shown that the oil cover tends to break up, and where it doesn’t break up, it can actually aid the warming of the water, which is exactly what you don’t want.

How about we cool the ocean surface instead? This idea has been pursued for example by Bill Gates, who, in 2009, together with a group of scientists and entrepreneurs patented a pump system that would float in the ocean and pump cool water from deep down to the surface. In 2017 the Norwegian company SINTEF put forward a similar proposal. The problem with this idea is, guess what, hurricanes are enormous! You’d have to get a huge number of these pumps in the right place at the right time.

Another seemingly popular idea is to drag icebergs from the poles to the tropics to cool the water. I leave it to you to figure out the logistics for making this happen.

Yet other people have argued that one doesn’t actually have to blow apart a hurricane to get rid of it, one merely has to detonate a nuclear bomb strategically so that the hurricane changes direction. The problem with this idea is that no one wants multiple nations to play nuclear billiards on the oceans.

As you have seen, there are lots of ideas, but the key problem is that hurricanes are enormous!

And that means the most promising way to prevent them is to intervene before they get too large. Hurricanes don’t suddenly pop out of nowhere, they take several days to form and usually arise from storms in the tropics which also don’t pop out of nowhere.

What the problem then comes down to is that meteorologists can’t presently predict well enough, and long enough in advance, just which storms will go on to form hurricanes. But, as you have seen, researchers have tried quite a few methods to interfere with the feedback cycle that grows hurricanes, and some of them actually work. So, if we could tell just when and where to interfere, that might actually make a difference.

My conclusion therefore is: If you want to prevent hurricanes, you don’t need larger bombs, you need to invest in better weather forecasts.