Saturday, October 30, 2021

The delayed choice quantum eraser, debunked

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


A lot of you have asked me to do a video about the delayed choice quantum eraser, an experiment that supposedly rewrites the past. I haven’t done that, simply because there are already lots of videos about it, for example from Matt at PBS Space Time, the always amazing Joe Scott, and recently also Don Lincoln from Fermilab. And how many videos do you really need about the same thing, if that thing isn’t a kitten in a box? However, having watched all those gentlemen’s videos about quantum erasing, I think they’re all wrong. The quantum eraser isn’t remotely as weird as you think, doesn’t actually erase anything, and certainly doesn’t rewrite the past. And that’s what we’ll talk about today.

Let’s start with a puzzle that has nothing to do with quantum mechanics. Peter is forty-six years old and he’s captain of a container ship. He ships goods between two places that are 100 kilometers apart, let’s call them A and B. He starts his round trip at A with the ship only half full. Three-quarters of the way to B he adds more containers to fill the ship, which slows him down by a factor of two. On the return trip, his ship is empty. How old is the captain?

If you don’t know the answer, let’s rewind this question to the beginning.

Peter is forty-six years old. The answer’s right there. Everything I told you after that was completely unnecessary and just there to confuse you. The quantum eraser is a puzzle just like this.

The quantum eraser is an experiment that combines two quantum effects, interference and entanglement. Interference of quantum particles can itself be tested by the double slit experiment. For the double slit experiment you shoot a coherent beam of particles at a plate with two thin openings, that’s the double slit. On the screen behind it, you then observe several lines, usually five or seven, but not two. This is an interference pattern created by overlapping waves. When a crest meets a trough, the waves cancel and that makes a dark spot on the screen. When crest meets crest they add up and that makes a bright spot.
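
If you want to play with this yourself, here is a minimal Python sketch of the idealized pattern. I’m using the standard Fraunhofer textbook formula, and all the numbers are made up for illustration; they don’t describe any particular experiment.

```python
import numpy as np

x = np.linspace(-0.05, 0.05, 2001)  # positions on the screen, in meters
wavelength = 500e-9                 # green light (assumed)
a = 20e-6                           # width of each slit (assumed)
d = 100e-6                          # distance between the slit centers (assumed)
L = 1.0                             # distance from the slits to the screen

beta = np.pi * a * x / (wavelength * L)
delta = np.pi * d * x / (wavelength * L)

envelope = np.sinc(beta / np.pi) ** 2      # one-slit diffraction: a blurry blob
intensity = envelope * np.cos(delta) ** 2  # two slits: the blob, cut into fringes
# dark lines sit where cos(delta) = 0, that's where crest meets trough
```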

The amazing thing about the double slit is that you get this pattern even if you let only one particle at a time pass through the slits. This means that even single particles act like waves. We therefore describe quantum particles with a wave-function, usually denoted psi. The interesting thing about the double-slit experiment is that if you measure which slit the particles go through, the interference pattern disappears. Instead the particles behave like particles again and you get two blobs, one from each of the slits.

Well, actually you don’t. Though you’ve almost certainly seen that elsewhere. Just because you know which slit the wave-function goes through doesn’t mean it stops being a wave-function. It’s just no longer a wave-function going through two slits. It’s now a wave-function going through only one slit, so you get a one-slit diffraction pattern. What’s that? That’s also an interference pattern, but a fuzzier one, and indeed it looks mostly like a blob. But a very blurry blob. And if you add the blobs from the two individual slits, they’ll overlap and still pretty much look like one blob. Not, as you see in many videos, two cleanly separated ones.
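
Here is the same toy model again, now for the two slits taken separately. Each slit alone produces a diffraction blob centered behind that slit, but with these numbers the blob is hundreds of times wider than the slit separation, so the two blobs land almost exactly on top of each other:

```python
import numpy as np

x = np.linspace(-0.05, 0.05, 2001)
wavelength, a, d, L = 500e-9, 20e-6, 100e-6, 1.0  # same assumed numbers as above

def one_slit(center):
    """Single-slit diffraction blob centered behind a slit at `center`."""
    beta = np.pi * a * (x - center) / (wavelength * L)
    return np.sinc(beta / np.pi) ** 2

both_blobs = one_slit(-d / 2) + one_slit(+d / 2)
# each blob is ~ wavelength * L / a = 25 mm wide; the slits are d = 0.1 mm apart:
# the sum looks like one blurry blob, not two cleanly separated ones
```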

You may think this is nitpicking, but it’ll be relevant to understanding the quantum eraser, so keep this in mind. It’s not so relevant for the double slit experiment, because regardless of whether you think it’s one blob or two, the sum of the images from both separate slits is not the image you get from both slits together. The double slit experiment therefore shows that in quantum mechanics, the result of a measurement depends on what you measure. Yes, that’s weird.

The other ingredient that you need for the quantum eraser is entanglement. I have talked about entanglement several times previously, so let me just briefly remind you: entangled particles share some information, but you don’t know which particle has which share until you measure it. It could be for example that you know the particles have a total spin of zero, but you don’t know the spin of each individual particle. Entangled particles are handy because they allow you to measure quantum effects over large distances which makes them super extra weird.

Okay, now to the quantum eraser. You take your beam of particles, usually photons, and direct it at the double slit. After the double slit you place a crystal that converts each single photon into a pair of entangled photons. From each pair you take one and direct it onto a screen. There you measure whether they interfere. I have drawn the photons which come from the two different places in the crystal with two different colors. But this is just so it’s easier to see what’s going on, these photons actually have the same color.

If you create these entangled pairs after the double slit, then the wave-function of the photon depends on which slit the photons went through. This information comes from the location where the pairs were created and is usually called the “which way information”. Because of this which-way information, the photons on the screen can’t create an interference pattern.

What about the other side of the entangled pairs? That’s where things get tricky. On the other side, you measure the particles in two different ways. In the first case, you measure the which-way information directly, so you have two detectors, let’s call them D1 and D2. The first detector is on the path of the photons from the left slit, the second detector on the path of the photons from the right slit. If you measure the photons with detectors D1 and D2, you see no interference pattern.

But alternatively you can turn off the first two detectors, and instead combine the two beams in two different ways. These two white bars are mirrors and just redirect the beam. The semi-transparent one is a beam splitter. This means half of the photons go through, and the other half is reflected. This looks a little confusing but the point is just that you combine the two beams so that you no longer know which way the photon came. This is the “erasure” of the “which way information”. And then you measure those combined beams in detectors D3 and D4. A measurement on one of those two detectors does not tell you which slit the photon went through.

Finally, you measure the distribution of photons on the screen that are entangled partners of those photons that went to D3. These photons create an interference pattern. You can alternatively measure the distribution of photons on the screen that are partner particles of those photons that went to D4. Those will also create an interference pattern.

This is the “quantum erasure”. It seems you’ve managed to get rid of the which-way information by combining those paths, and that restores the interference pattern. In the delayed choice quantum eraser experiment, the erasure happens well after the entangled partner particle has hit the screen. This is fairly easy to do, just by making the paths of those photons long enough.

If you watch the other videos about this experiment on YouTube, they’ll now go on to explain that this seems to imply that the choice of what you measure on the one side of the experiment decides what happened on the other side before you even made that choice. Because the photons must have known whether to interfere or not before you decided whether to erase the which-way information. But this is clearly nonsense. Because, let’s rewind this explanation to the beginning.

The photons on the screen can’t create an interference pattern. Everything I told you after this is completely irrelevant. It doesn’t matter at all what you do on the other side of the experiment. The photons on the screen will always create the same pattern. And it’ll never be an interference pattern.

Wait. Didn’t I just tell you that you do get an interference pattern if you use detectors D3 and D4? Indeed. But I’ve omitted a crucial part of the information which is missing in those other YouTube videos. It’s that those interference patterns are not the same. And if you add them, you get exactly the same as you get from detectors 1 and 2. Namely these two overlapping blurry blobs. This is why it matters that you know the combined pattern of two single slits doesn’t give you two separate blobs, as they normally show you.
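
You can see how this works in the toy model from before. In the usual idealization, the pattern conditioned on D3 clicks is the envelope times cosine-squared, the one conditioned on D4 clicks is the envelope times sine-squared, and the factor one half just splits the photons between the two detectors. That’s an assumption about the idealized setup, not data from the actual experiment:

```python
import numpy as np

x = np.linspace(-0.05, 0.05, 2001)
wavelength, a, d, L = 500e-9, 20e-6, 100e-6, 1.0
beta = np.pi * a * x / (wavelength * L)
delta = np.pi * d * x / (wavelength * L)
envelope = np.sinc(beta / np.pi) ** 2

pattern_D3 = 0.5 * envelope * np.cos(delta) ** 2  # fringes, given a D3 click
pattern_D4 = 0.5 * envelope * np.sin(delta) ** 2  # anti-fringes, given a D4 click

screen_total = pattern_D3 + pattern_D4
assert np.allclose(screen_total, 0.5 * envelope)  # the fringes cancel: one blob
```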

What you actually do in the eraser experiment is sort the photon pairs into two groups. And you can do that in two different ways. If you use detectors 1 and 2, you sort them so that the entangled partners on the screen do not create an interference pattern for either detector separately. If you use detectors 3 and 4, each group separately creates an interference pattern, but together they don’t.

This means that the interference pattern really comes from selectively disregarding some of the particles. That this is possible has nothing to do with quantum mechanics. I could throw coins on the floor and then later decide to disregard some of those and create any kind of pattern. Clearly this doesn’t rewrite the past.
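
Here is that coin version as a little simulation. Start from completely patternless random data and sort it into two groups, and each group shows fringes, even though both groups together are as featureless as before. The fringe frequency of 40 is of course an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
hits = rng.uniform(-1, 1, 100_000)  # "coins on the floor": no pattern at all

# sort each hit into group A with probability cos^2, otherwise into group B
in_A = rng.uniform(size=hits.size) < np.cos(40 * hits) ** 2
group_A = hits[in_A]    # histogram this and you see fringes
group_B = hits[~in_A]   # histogram this and you see anti-fringes
# group_A and group_B together are still the original patternless data
```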

This by the way has nothing to do with the particular realization of the quantum eraser experiment that I’ve discussed. This experiment has been done in a number of different ways, but what I just told you is generally true, these interference patterns will always combine to give the original non-interference pattern.

This is not to say that there is nothing weird going on in this experiment. But what’s weird about it is the same thing that’s weird already about the normal double slit experiment. Namely, the wave-function of a single particle is spread out in space. Yet when you measure it, the particle is suddenly in one particular place, and the outcome must be correlated throughout space and fit together with the measurement setting. I actually think the bomb experiment is far weirder than the quantum eraser. Check out my earlier video for more on that.

When I was working on this video I thought certainly someone must have explained this before. But the only person I could find who’d done that is… Sean Carroll in a blogpost two years ago. Yes, you can trust Sean with the quantum stuff. I’ll leave you a link to Sean’s piece in the info.

Wednesday, October 27, 2021

Comments now on Patreon

Many of you have sent me notes asking what happened to the comments. Comments are permanently off on this blog. I just don't have the time to deal with it. In all honesty, since I have turned them off my daily routine has considerably improved, so they'll remain off. If you've witnessed the misery in my comment sections, you probably saw this coming.

This problem has been caused not so much by the commenters themselves as by Google's miserable commenting platform, which doesn't allow blocking or managing problematic people in any way. Add to this that the threaded comments are terrible to read, and that you have to know to click on "LOAD MORE" after 200 comments to even see all replies. It's a remarkably shitty piece of coding.

I am genuinely sorry about this development because over the years I have come to value the feedback from many of you and I feel like I've lost some friends now. At some point I want to move this blog to a different platform and also write some other stuff again, rather than just posting transcripts. But at the moment I don't have the time.

Having said that, I will from now on cross-post transcripts of my videos on Patreon, where you can interact with me and other patrons for as little as 2 Euro a month. Hope to see you there.

Saturday, October 23, 2021

Does Captain Kirk die when he goes through the transporter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Does Captain Kirk die when he goes through the transporter? This question has kept me up at night for decades. I’m not kidding. And I still don’t have an answer. So this video isn’t going to answer the question, but I will explain why it’s more difficult than you may think. If you haven’t thought about this before, maybe pause the video for a moment and try to make up your mind. Do you think Kirk dies when he goes through the transporter? Let me know if at the end of this video you’ve changed your mind.

So how does the transporter work? The idea is that the person who enters a transporter is converted into an energy pattern that contains all the information. That energy can be sent or “beamed” at the speed of light. And once it’s arrived at its final destination, it can be converted back-into-the-person.

Now of course energy isn’t something in and of itself. Energy, like momentum or velocity, is a property of something. This means the beam has to be made of something. But that doesn’t really matter for the purpose of transportation; it only matters that the beam can contain all the information about the person, and that it can be sent much faster and much more easily than you could send the person in their material form.

Current technology is far, far away from being able to read out all the information that’s necessary to build up a human being from elementary particles. And even if we could do that, it’d take ridiculously long to send that information anywhere. According to a glorious paper by a group of students from the University of Leicester, assuming a bandwidth of about 30 gigahertz, just sending the information of a single cell would take more than 10^15 years, and that’s not counting travel time. Just for comparison, the age of the universe is about 10^10 years. So even if you increased the bandwidth by a quadrillion, it’d still take at least a year just to move a cell one meter to the left.
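
If you want to check the arithmetic, here is the back-of-envelope version. I’m reading the “30 gigahertz bandwidth” as roughly 3 times 10^10 bits per second, which is my assumption, not a number taken from the paper:

```python
SECONDS_PER_YEAR = 3.15e7
bit_rate = 30e9                          # ~3e10 bits per second (assumed)
transfer_time = 1e15 * SECONDS_PER_YEAR  # "more than 10^15 years", in seconds

bits_per_cell = bit_rate * transfer_time
print(f"implied information in one cell: ~{bits_per_cell:.0e} bits")  # ~1e33

speedup = 1e15  # "increase the bandwidth by a quadrillion"
print(f"remaining time: {transfer_time / speedup / SECONDS_PER_YEAR:.0f} year(s)")
```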

Clearly, building a transporter isn’t going to happen any time soon, but from the perspective of physics there’s no reason why it should not be possible. I mean, what makes you you is not a particular collection of elementary particles. Elementary particles are identical to each other. What makes you you is the particular arrangement of those particles. So why not just send that information instead of all the particles? That should be possible.

And according to the best theories that we currently have, that information is entirely contained in the configuration of the particles at any one moment in time. That’s just how the laws of nature seem to work. Once we know the exact state of a system at one moment, say the position and velocity of an apple, then we can calculate what happens at any later time, say, where the apple will fall. I talked about this in more detail in my video about differential equations, so check this out for more.
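
As a toy version of this: if you know the position and velocity of the apple at one moment, a few lines of code give you the entire rest of its fall, and nothing else about its past is needed. The numbers are made up:

```python
g, dt = 9.81, 1e-4       # gravitational acceleration; a small time step
x, v, t = 2.0, 0.0, 0.0  # the full state at t = 0: dropped from 2 m at rest

while x > 0.0:           # step the state forward until the apple lands
    x += v * dt
    v -= g * dt
    t += dt

print(f"lands after {t:.2f} s at {abs(v):.1f} m/s")  # ~0.64 s, ~6.3 m/s
```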

For the purposes of this video, you just need to know that this idea is correct: all the information about a person is contained in their exact configuration at one moment in time. This is also true in quantum mechanics, though quantum mechanics brings in a subtlety that I will get to in a moment.

So, what happens in the transporter is “just” that you get converted into a different medium, all cell and brain processes are put on pause, and then you’re reassembled and all those processes continue exactly as before. For you, no time has passed, you just find yourself elsewhere. At first sight it seems Kirk doesn’t die when he goes through the transporter, it’s just a conversion.

But. There’s no reason why you have to convert the person into something else when you read out the information. You can well imagine that you just read out the information, send it elsewhere, and then build a person out of that information. And then, after you’ve done that, you blast the original person into pieces. The result is exactly the same. It’s just that now there’s a time delay between reading out the information and converting the person into something else. Suddenly it looks like Kirk dies and the person on the other end is a copy. Let’s call this the “Copy Argument”.

It might be that this isn’t possible though. For one, the exact state of a system at any one moment in time doesn’t only tell you what the system will do in the future, it also tells you what it’s done in the past. This means, strictly speaking, that the only way to copy a system elsewhere would require you to also reproduce its entire past, which isn’t possible.

However, you could say that the details of the past don’t matter. Think of a pool table. Balls are rolling around and bouncing off each other. Now imagine that at one particular moment, you record the exact positions and velocities of those balls. Then you can place other balls on another pool table at the right places and give them the correct kick. This should produce the same motion as on the original table, in principle exactly. And that’s even though the past of the copied table isn’t the same because the velocities of the balls came about differently. It’s just that this difference doesn’t matter for the motion of the balls.

Can one do the same for elementary particles? I don’t think so. But maybe you can do it for atoms, or at least for molecules, and that might be enough.

But there’s another reason you might not be able to read out the information of a person without annihilating them in the process, namely that quantum mechanics says this isn’t possible. You just can’t copy an arbitrary quantum state exactly; that’s the content of the no-cloning theorem. However, it’s somewhat questionable whether this matters for people, because quantum effects don’t seem to be hugely relevant in the human body. But if you think that those quantum effects are relevant, then you simply cannot copy the information of a person without destroying the original. So in that case the Copy Argument doesn’t work and we’re back to Kirk lives. Let’s call this the No-Copy Argument.

However… there’s another problem. The receiving side of the transporter is basically a machine that builds humans out of information. Now, if you don’t have the information that makes up a particular person, it’s incredibly unlikely you will correctly assemble them. But it’s not impossible. Indeed, if such machines are possible at all and the universe is infinitely large, or if there are other universes, then somewhere there will be a machine that will coincidentally assemble you. Even though the information was never beamed there in the first place. Indeed, this would happen infinitely often.

So you can ask what happens with Kirk in this case. He goes into the transporter and disappears. But copies of him appear elsewhere, coincidentally, even though the information of the original was never read out. You can conclude from this that it doesn’t really matter whether you actually read out the information in the first place. The No-Copy Argument fails, and it looks again like the Kirk we care about dies.

There are various ways people have tried to make sense of this conundrum. The most common one is abandoning our intuitive idea of what it means to be yourself. We have this idea that our experience is continuous and if you go into the transporter there has to be an answer to what you experience next. Do you find yourself elsewhere? Or is that the end of your story and someone else finds themselves elsewhere? It seems that there has to be a difference between these two cases. But if there is no observable difference, then this just means we’re wrong in thinking that being yourself is continuous to begin with.

The other way to deal with the problem is to take our experience seriously and conclude that there is something wrong with physics. That the information about yourself is not contained in any one particular moment. Instead, what makes you you is the entire story of all moments, or at least some stretch of time. In that case, it would be clear that if you convert a person into some other physical medium and then reassemble it, that person’s experience remains intact. Whereas if you break that person’s story in space-time apart, by blasting them away at one place and assembling a copy elsewhere, that would not result in a continuous experience.

At least for me, this seems to make intuitively more sense. But this conflicts with the laws of nature that we currently have. And human intuition is not a good guide to understanding the fundamental laws of nature; quantum mechanics is exhibit A. Philosophers, by the way, are evenly divided between the possible answers to the question. In a survey, about a third voted for “death,” another third for “survival,” and yet another third for “other.” What do you think? And did this video change your mind? Let me know in the comments.

Saturday, October 16, 2021

Terraforming Mars in 3 Simple Steps

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


We have made great progress screwing up the climate on this planet, so the time is right to look for a new home on which to continue the successes of the human race. What better place could there be than our neighbor planet Mars? It’s a little cold and a little dusty and it takes seven months to get there, but otherwise it’s a lovely place, and with only 3 simple steps it can be turned into an Earthlike place, or be “terraformed” as they say. Just like magic. And that’s what we’ll talk about today.

First things first, Mars is about one hundred million kilometers farther away from the Sun than Earth. Its average temperature is minus 60 degrees Celsius or minus 80 Fahrenheit. Its atmosphere is very thin and doesn’t contain oxygen. That doesn’t sound very hospitable to life as we know it, but scientists have come up with a solution for our imminent move to Mars.

We’ll start with the atmosphere, which is actually two issues: the atmosphere of Mars is very thin, and it contains basically no oxygen. Instead, it’s mostly carbon dioxide and nitrogen.

One reason the atmosphere is so thin is that Mars is smaller than Earth and its mass is only a tenth that of Earth. That’d make for interesting Olympic games, but it also makes it easier for gas to escape. This by the way is why I strongly recommend you don’t play with your anti-gravity device. You don’t want the atmosphere of Earth to escape, do you?

But that Mars is lighter than Earth is a minor problem. The bigger problem with the atmosphere of Mars is that Mars doesn’t have a magnetic field, or at least it doesn’t have one any more. The magnetic field of a planet, like the one we have here on Earth, is important because it redirects the charged particles which the sun constantly emits, the so-called solar wind. Without that protection, the solar wind can rip off the atmosphere. That’s not good. Check out my earlier video about solar storms for more about how dangerous they can be.

That the solar wind rips off the atmosphere when the protection from the magnetic field fades away is what happened to Mars. Indeed, it’s still happening. In 2015, NASA’s MAVEN spacecraft measured the slow loss of atmosphere from Mars. They estimate it to be 100 grams per second. This constant loss is balanced by the evaporation of gas from the crust of Mars, so that the pressure has stabilized at a few millibar. The atmospheric pressure on the surface of Earth is approximately one bar.
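
For a sense of scale, here is a rough estimate of how slow that loss is. Mars’ radius and surface gravity are standard values that I’ve plugged in myself; only the 100 grams per second come from the MAVEN measurement:

```python
import math

pressure = 600.0  # "a few millibar", taken here as 600 pascal (assumed)
g_mars = 3.7      # surface gravity of Mars in m/s^2
r_mars = 3.39e6   # radius of Mars in meters

area = 4 * math.pi * r_mars ** 2
atmosphere_mass = pressure * area / g_mars  # column weight balances the pressure
loss_rate = 0.1                             # 100 grams per second, in kg/s

years = atmosphere_mass / loss_rate / 3.15e7
print(f"atmosphere mass: ~{atmosphere_mass:.0e} kg")    # ~2e16 kg
print(f"time to strip it at 100 g/s: ~{years:.0e} yr")  # some billion years
```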

Therefore, before we try to create an atmosphere on Mars, we first have to create a magnetic field, because otherwise the atmosphere would just be wiped away again. How do you create a magnetic field for a planet? Well, physicists figured out magnetic fields two centuries ago, and it’s really straightforward.

In a paper that was just published in April in the International Journal of Astrobiology, two physicists explain that all you have to do is put a superconducting wire around Mars. Simple enough, isn’t it? The circle would have to have a radius of about 3400 kilometers, but the diameter of the collected wires only needs to be about five centimeters. Well, okay, you need insulation and a refrigeration system to keep it superconducting. And you need a power station to generate a current. But other than that, no fancy technology required.

That superconducting wire would have a weight of about one million tons which is only about 100 times the total weight of the Eiffel tower. The researchers propose to make it of bismuth strontium calcium copper oxide (BSCCO). Where do you get so much bismuth from? Asteroid Mining. Piece of cake.
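
Here is a quick sanity check on that number. The density of BSCCO is my assumption, and the bare conductor is all I’m counting, so insulation, refrigeration and support structure would come on top:

```python
import math

loop_radius = 3.4e6   # 3400 km, from the text
wire_diameter = 0.05  # 5 cm, from the text
density = 6500.0      # kg/m^3, a typical value for BSCCO (assumed)

circumference = 2 * math.pi * loop_radius
cross_section = math.pi * (wire_diameter / 2) ** 2
mass = circumference * cross_section * density

print(f"loop length: ~{circumference / 1e3:.0f} km")   # ~21,000 km of cable
print(f"bare conductor mass: ~{mass / 1e3:.0e} tons")  # ~3e5 tons, the same
# ballpark as the quoted million tons once the rest of the system is included
```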

Meanwhile on Earth. Will Cutbill from the UK earned an entry into the Guinness Book of World Records by stacking five M&M’s on top of each other.

Back to Mars. With the magnetic field in place, we can move to step 2 of terraforming Mars, creating an atmosphere. This can be done by releasing the remaining carbon dioxide that’s stored in frozen caps on the poles and in the rocks. In 2018, a group of American researchers published a paper in Nature in which they estimate that using the most wildly optimistic assumptions this would get us to about twenty percent of the atmospheric pressure on earth.

Leaving aside that no one knows how to release the gas, if we did release it, this would lead to a moderate greenhouse effect. It would increase the average temperature on Mars by about 10 Kelvin, to a balmy minus 50 Celsius. That still seems a little chilly, but I hear that fusion power is almost there, so I guess we can heat with that.

Meanwhile on Earth. Visitors to London can now enjoy a new tourist attraction: a man-made hill, 30 meters high, from which you have a great view of… construction sites.

Back to Mars. Okay, so we have a magnetic field and created some kind of atmosphere by releasing carbon dioxide, with the added benefit of increasing the average temperature by a few degrees. The remaining problem is that we can’t breathe carbon dioxide. I mean, we can, but not for very long. So step 3 of terraforming Mars is converting the carbon dioxide into oxygen. The only thing we need to do for this is grow a sufficient amount of plants.

There’s the issue that plants tend not to flourish at minus fifty degrees, but that’s easy to fix with a little genetic engineering. Plants as we know them also need a range of nutrients they normally get from soil, most importantly nitrogen, phosphorus, and potassium. Luckily, those are present on Mars. The bigger problem may be that the soil on Mars is too thin and too hard, which makes it difficult for plants to grow roots. It also retains water very poorly, so you have to water the plants very often. How do you water plants at minus 50 degrees? Good question!

Meanwhile on Earth you can buy fake Mars soil and try your luck growing plants on it yourself!

Ok, so I admit that the last bit with the plants was a tiny bit sketchy. But there might be a better way to do it. In July 2019 researchers from JPL, Harvard and Edinburgh University published a paper in Nature in which they proposed to cover patches of Mars with a thin layer of aerogel.

An aerogel is a synthetic material which contains a lot of gas. It is super light and has an extremely low thermal conductivity, which means it could keep the surface of Mars warm. The gel would be transparent to visible light but somewhat opaque in the infrared, so it could create an enhanced greenhouse effect directly on the surface. That would heat up the surface, which would release more carbon dioxide. The carbon dioxide would accumulate under the gel, and then plants should be able to grow in that space. So we’re not talking about oaks, but more like algae or something that covers the ground.

In their paper, the researchers estimate that a layer of about 3 centimeters aerogel could raise the surface temperature of Mars by about 45 Kelvin. With that the average temperature on Mars would still be below the freezing point of water, but in some places it might rise above it. Sounds great! Except that the atmospheric pressure is so low that the liquid water would start boiling as soon as it melts.
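
You can estimate where water boils at that pressure from the Clausius-Clapeyron relation. The constants below are the standard ones for water, and treating the heat of vaporization as constant is the usual rough approximation:

```python
import math

L_vap = 40660.0            # J/mol, heat of vaporization of water
R = 8.314                  # J/(mol K), gas constant
T0, P0 = 373.15, 101325.0  # water boils at 373 K at one atmosphere

def boiling_point(pressure):
    """Clausius-Clapeyron estimate of the boiling temperature in kelvin."""
    return 1.0 / (1.0 / T0 - (R / L_vap) * math.log(pressure / P0))

print(f"boiling point at 6 millibar: {boiling_point(600.0) - 273.15:.0f} C")
# about minus 5 C, below the melting point: liquid water can't be stable
```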

So as you see, our move to Mars is well under way. Better pack your bags, see you there!

Saturday, October 09, 2021

How I learned to love pseudoscience

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


On this channel, I try to separate the good science from the bad science, the pseudoscience. And I used to think that we’d be better off without pseudoscience, that this would prevent confusion and make our lives easier. But now I think that pseudoscience is actually good for us. And that’s what we’ll talk about today.

Philosophers can’t agree on just what defines “pseudoscience,” but in this episode I will take it to mean theories that are in conflict with evidence but that their promoters believe in anyway, either by denying the evidence, or by denying the scientific method, or maybe just because they have no idea what either of those is.

But what we call pseudoscience today might once have been science. Astrology, for example, the idea that the constellations of the stars influence human affairs, was once a respectable discipline. Every king and queen had a personal astrologer to give them advice. And many early medical practices weren’t just pseudoscience, they were often fatal. The literal snake oil, obtained by boiling snakes in oil, was at least both useless and harmless. However, physicians also prescribed tapeworms for weight loss. Though in all fairness, that might actually work, if you survive it.

And sometimes, theories accused of being pseudoscientific turned out to be right, for example the idea that the continents on Earth today broke apart from one large landmass. That was considered pseudoscience until evidence confirmed it. And the hypothesis of atoms was at first decried as pseudoscience because one could not, at the time, observe atoms.

So the first lesson we can take away is that pseudoscience is a natural byproduct of normal science. You can’t have one without the other. If we learn something new about nature, some fraction of people will cling on to falsified theories longer than reasonable. And some crazy ideas in the end turn out to be correct.

But pseudoscience isn’t just a necessary evil. It’s actually useful to advance science because it forces scientists to improve their methods.

Single-blind trials, for example, were invented in the 18th century to debunk the practice of Mesmerism. At that time, scientists had already begun to study and apply electromagnetism. But many people were understandably mystified by the first batteries and electrically powered devices. Franz Mesmer exploited their confusion.

Mesmer was a German physician who claimed he’d discovered a very thin fluid that penetrated the entire universe, including the human body. When this fluid was blocked from flowing, he argued, the result was that people fell ill.

Fortunately, Mesmer said, it was possible to control the flow of the fluid and cure people. And he knew how to do it. The fluid was supposedly magnetic, and entered the body through “poles”. The north pole was on your head and that’s where the fluid came in from the stars, and the south pole was at your feet where it connected with the magnetic field of earth.

Mesmer claimed that the flow of the fluid could be unblocked by “magnetizing” people. Here is how the historian Lopez described what happened after Mesmer moved to Paris in 1778:
“Thirty or more persons could be magnetized simultaneously around a covered tub, a case made of oak, about one foot high, filled with a layer of powdered glass and iron filings... The lid was pierced with holes through which passed jointed iron branches, to be held by the patients. In subdued light, absolutely silent, they sat in concentric rows, bound to one another by a cord. Then Mesmer, wearing a coat of lilac silk and carrying a long iron wand, walked up and down the crowd, touching the diseased parts of the patients’ bodies. He was a tall, handsome, imposing man.”
After being “magnetized” by Mesmer, patients frequently reported feeling significantly better. This, by the way, is the origin of the word mesmerizing.

Scientists of the time, Benjamin Franklin and Antoine Lavoisier among them, set out to debunk Mesmer’s claims. For this, they blindfolded a group of patients. Some of them were told they’d get a treatment but then received none; others received a treatment without their knowledge.

Franklin and his people found that the supposed effects of mesmerism were not related to the actual treatment, but to the belief of whether one received a treatment. This isn’t to say there were no effects at all. Quite possibly some patients actually did feel better just believing they’d been treated. But it’s a psychological benefit, not a physical one.

In this case the patients didn’t know whether they received an actual treatment, but those conducting the study did. Such trials can be improved by randomly assigning people to one of the two groups so that neither the people leading the study nor those participating in it know who received an actual treatment. This is now called a “double blind trial,” and that too was invented to debunk pseudoscience, namely homeopathy.

Homeopathy was invented by another German, Samuel Hahnemann. It’s based on the belief that diluting a natural substance makes it more effective in treating illness. In eighteen thirty-five, Friedrich Wilhelm von Hoven, a public health official in Nuremberg, got into a public dispute with the dedicated homeopath Johann Jacob Reuter. Reuter claimed that dissolving a single grain of salt in 100 drops of water, and then diluting it 30 times by a factor of 100 would produce “extraordinary sensations” if you drank it. Von Hoven wouldn’t have it. He proposed and then conducted the following experiment.

He prepared 50 samples of homeopathic salt-water following Reuter’s recipe, and 50 samples of plain water. Today, we’d call the plain water samples a “placebo.” The samples were numbered and randomly assigned to trial participants by repeated shuffling. Here is how they explained this in the original paper from 1835:
“100 vials… are labeled consecutively… then mixed well among each other and placed, 50 per table, on two tables. Those on the table at the right are filled with the potentiation, those on the table at the left are filled with pure distilled snow water. Dr. Löhner enters the number of each bottle, indicating its contents, in a list, seals the latter and hands it over to the committee… The filled bottles are then brought to the large table in the middle, are once more mixed among each other and thereupon submitted to the committee for the purpose of distribution.”
The assignments were kept secret on a list in a sealed envelope. Neither von Hoven nor the patients knew who got what.

They found 50 people to participate in the trial. For three weeks von Hoven collected reports from the study participants, after which he opened the sealed envelope to see who had received what. It turned out that only eight participants had experienced anything unusual. Five of those had received the homeopathic dilution, three had received water. Using today’s language you’d say the effect wasn’t statistically significant.
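
Here is what that means concretely. If the contents of the vials made no difference, each of the eight reports is equally likely to come from either group, like a fair coin toss; the test below is the modern version of the argument:

```python
from scipy.stats import binomtest

# 8 participants reported unusual sensations; 5 of them had the dilution.
# Null hypothesis: each report is equally likely to come from either group.
result = binomtest(k=5, n=8, p=0.5, alternative="greater")
print(f"probability of at least 5 of 8 by chance: {result.pvalue:.2f}")  # ~0.36
# far above the usual 0.05 threshold: not statistically significant
```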

Von Hoven wasn’t alone with his debunking passion. He was a member of the “society of truth-loving men”. That was one of the skeptical societies that had popped up to counter the spread of quackery and fraud in the 19th century. The society of truth-loving men no longer exists. But the oldest such society that still exists today was founded as far back as 1881 in the Netherlands. It’s called the Vereniging tegen de Kwakzalverij, literally the “Society Against Quackery”. This society gave out an annual prize, called the Master Charlatan Prize, to discourage the spread of quackery. They still do this today.

Thanks to this Dutch anti-quackery society, the Netherlands became one of the first countries with governmental drug regulation. In case you wonder, the first country to have such a regulation was the United Kingdom with the 1868 Pharmacy Act. The word “skeptical” has suffered somewhat in recent years because a lot of science deniers now claim to be skeptics. But historically, the task of skeptic societies was to fight pseudoscience and to provide scientific information to the public.

And there are more examples where fighting pseudoscience resulted in scientific and societal progress, for example the efforts to debunk telepathy in the late nineteenth century. At the time, some prominent people believed in it, for example the Nobel Prize winners Lord Rayleigh and Charles Richet. Richet proposed to test telepathy by having one person draw a playing card at random and concentrate on it for a while. Then another person had to guess the card. The results were then compared against random chance. This is basically how we today calculate statistical significance.

And if you remember, Karl Popper came up with his demarcation criterion of falsification because he wanted to show that Marxism and Freud’s psychoanalysis weren’t proper science. Now, of course, we know today that falsification is not the best way to go about it, but Popper’s work was arguably instrumental to the entire discipline of the philosophy of science. Again, that came out of the desire to fight pseudoscience.

And this fight isn’t over. We’re still today fighting pseudoscience and in that process scientists constantly have to update their methods. For example, all this research we see in the foundations of physics on multiverses and unobservable particles doesn’t contribute to scientific progress. I am pretty sure in fifty years or so that’ll go down as pseudoscience. And of course there’s still loads of quackery in medicine, just think of all the supposed COVID remedies that we’ve seen come and go in the past year.

The fight against pseudoscience today is very much a fight to get relevant information to those who need it. And again I’d say that in the process scientists are forced to get better and stronger. They develop new methods to quickly identify fake studies, to explain why some results can’t be trusted, and to improve their communication skills.

In case this video inspired you to attempt self-experiments with homeopathic remedies, please keep in mind that not everything that’s labeled “homeopathic” is necessarily strongly diluted. Some homeopathic remedies contain barely diluted active ingredients of plants that can be dangerous when overdosed. Before you assume it’s just water or sugar, please check the label carefully.

If you want to learn more about the history of pseudoscience, I can recommend Michael Gordin’s recent book “On the Fringe”.

Saturday, October 02, 2021

How close is nuclear fusion power?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Today I want to talk about nuclear fusion. I’ve been struggling with this video for some while. This is because I am really supportive of nuclear fusion research and development. However, the potential benefits of current research on nuclear fusion have been incorrectly communicated for a long time. Scientists are confusing the public and policy makers in a way that makes their research appear more promising than it really is. And that’s what we’ll talk about today.

There is a lot to say about nuclear fusion, but today I want to focus on its most important aspect, how much energy goes into a fusion reactor, and how much comes out. Scientists quantify this with the energy gain, that’s the ratio of what comes out over what goes in and is usually denoted Q. If the energy gain is larger than 1 you create net energy. The point where Q reaches 1 is called “Break Even”.

The record for energy gain was just recently broken. You may have seen the headlines. An experiment at the National Ignition Facility in the United States reported they’d managed to get out seventy percent of the energy they put in, so a Q of 0.7. The previous record was 0.67. It was set in nineteen ninety-seven by the Joint European Torus, JET for short.

The most prominent fusion experiment that’s currently being built is ITER. You will find plenty of articles repeating that ITER, when completed, will produce ten times as much energy as goes in, so a gain of 10. Here is an example from a 2019 article in the Guardian by Philip Ball, who writes
“[The Iter project] hopes to conduct its first experimental runs in 2025, and eventually to produce 500 megawatts (MW) of power – 10 times as much as is needed to operate it.”

Here is another example from Science Magazine where you can read “[ITER] is predicted to produce at least 500 megawatts of power from a 50 megawatt input.”

So this looks like we’re close to actually creating energy from fusion right? No, wrong.

Remember that nuclear fusion is the process by which the sun creates power. The sun forces nuclei into each other with the gravitational force created by its huge mass. We can’t do this on Earth, so we have to find some other way. The currently most widely used technology for nuclear fusion is confining the fuel with strong magnetic fields and heating it until it becomes a plasma. The temperature that must be reached is about 150 million Kelvin. The other popular option is shooting at a fuel pellet with lasers. There are some other methods, but they haven’t gotten very far in research and development.

The confusion you find in pretty much all popular science writing about nuclear fusion is that the energy gain they quote refers only to the energy that goes into the plasma and comes out of the plasma.

In the technical literature, this quantity is normally not just called Q but more specifically Q-plasma. This is not the ratio of the entire energy that comes out of the fusion reactor over that which goes into the reactor, which we can call Q-total. If you want to build a power plant, and that’s what we’re after in the end, it’s the Q-total that matters, not the Q-plasma. 

 Here’s the problem. Fusion reactors take a lot of energy to run, and most of that energy never goes into the plasma. If you keep the plasma confined with a magnetic field in a vacuum, you need to run giant magnets and cool them and maintain that. And pumping a laser isn’t energy efficient either. These energies never appear in the energy gain that is normally quoted.

The Q-plasma also doesn’t take into account that if you want to operate a power plant, the heat that is created by the plasma would still have to be converted into electric energy, and that can only be done with a limited efficiency, optimistically maybe fifty percent. As a consequence, the Q-total is much lower than the Q-plasma.

If you didn’t know this, you’re not alone. I didn’t know this until a few years ago either. How can such a confusion even happen? I mean, this isn’t rocket science. The total energy that goes into the reactor is more than the energy that goes into the plasma. And yet, science writers and journalists constantly get this wrong. They get the most basic fact wrong on a matter that affects tens of billions in research funding.

It’s not like we are the first to point out that this is a problem. I want to read you some words from a 1988 report from the European Parliament, more specifically from the Committee for Scientific and Technological Options Assessment. They were tasked with establishing criteria for the assessment of European fusion research.

In 1988, they already warned explicitly of this very misunderstanding.
“The use of the term `Break-even’ as defining the present programme to achieve an energy balance in the Hydrogen-Deuterium plasma reaction is open to misunderstanding. IN OUR VIEW 'BREAK-EVEN' SHOULD BE USED AS DESCRIPTIVE OF THE STAGE WHEN THERE IS AN ENERGY BREAKEVEN IN THE SYSTEM AS A WHOLE. IT IS THIS ACHIEVEMENT WHICH WILL OPEN THE WAY FOR FUSION POWER TO BE USED FOR ELECTRICITY GENERATION.”
They then point out the risk:
“In our view the correct scientific criterion must dominate the programme from the earliest stages. The danger of not doing this could be that the entire programme is dedicated to pursuing performance parameters which are simply not relevant to the eventual goal. The result of doing this could, in the very worst scenario be the enormous waste of resources on a program that is simply not scientifically feasible.”
So where are we today? Well, we’re spending lots of money on increasing Q-plasma instead of increasing the relevant quantity Q-total. How big is the difference? Let us look at ITER as an example.

You have seen in the earlier quotes about ITER that the energy input is normally said to be 50 megawatts. But according to the head of the Electrical Engineering Division of the ITER Project, Ivone Benfatto, ITER will consume about 440 megawatts while it produces fusion power. That gives us an estimate for the total energy that goes in.

Though that is misleading already because 120 of those 440 megawatts are consumed whether or not there’s any plasma in the reactor, so using this number assumes the thing would be running permanently. But okay, let’s leave this aside.

The plan is that ITER will generate 500 megawatts of fusion power in heat. If we assume a 50% efficiency for converting this heat into electricity, ITER will produce about 250 megawatts of electric power.

That gives us a Q-total of about 0.57. That’s less than a tenth of the normally stated Q-plasma of 10. Even optimistically, ITER will still consume roughly twice the power it generates. What about the earlier claim of a Q of 0.67 for the JET experiment? Same thing.

If you look at the total energy, JET consumed more than 700 megawatts of electricity to get its sixteen megawatts of fusion power, and that’s heat, not electricity. So if you again assume 50 percent efficiency in the heat-to-electricity conversion, you get a Q-total of about 0.01, and not the claimed 0.67.
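
Putting the numbers from above together; the fifty percent heat-to-electricity conversion is the optimistic assumption from earlier, not a measured value:

```python
def q_total(fusion_heat_mw, input_electric_mw, efficiency=0.5):
    """Electric power out over electric power in, for the whole facility."""
    return fusion_heat_mw * efficiency / input_electric_mw

print(f"ITER: Q_total ~ {q_total(500, 440):.2f}")  # ~0.57, vs. a Q_plasma of 10
print(f"JET:  Q_total ~ {q_total(16, 700):.2f}")   # ~0.01, vs. a Q_plasma of 0.67
```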

And those recent headlines about the NIF success? Same thing again. It’s the Q-plasma that is 0.7. That’s calculated with the energy that the laser delivers to the plasma. But how much energy do you need to fire the laser? I don’t know for sure, but NIF is a fairly old facility, so a rough estimate would be 100 times as much. If they upgraded their lasers, maybe 10 times as much. Either way, the Q-total of this experiment is almost certainly well below 0.1.

Of course the people who work on this know the distinction perfectly well. But I can’t shake the impression they quite like the confusion between the two Qs. Here is for example a quote from Holtkamp who at the time was the project construction leader of ITER. He said in an interview in 2006:
“ITER will be the first fusion reactor to create more energy than it uses. Scientists measure this in terms of a simple factor—they call it Q. If ITER meets all the scientific objectives, it will create 10 times more energy than it is supplied with.”
Here is Nick Walkden from JET in a TED talk referring to ITER “ITER will produce ten times the power out from fusion energy than we put into the machine.” and “Now JET holds the record for fusion power. In 1997 it got 67 percent of the power out that we put in. Not 1 not 10 but still getting close.”

But okay, you may say, no one expects accuracy in a TED talk. Then listen to ITER Director General Dr. Bigot speaking to the House of Representatives in April 2016:

[Rep]: I look forward to learning more about the progress that ITER has made under Doctor Bigot’s leadership to address previously identified management deficiencies and to establish a more reliable path forward for the project.

[Bigot]: Okay, so ITER will have delivered in that full demonstration that we could have okay 500 Megawatt coming out of the 50 Megawatt we will put in.
What are we to make of all this?

Nuclear fusion power is a worthy research project. It could have a huge payoff for the future of our civilization. But we need to be smart about just what research to invest into, because we have limited resources. For this, it is super important that we focus on the relevant question: will it output energy into the grid?

There seem to be a lot of people in fusion research who want you to remain confused about just what the total energy gain is. I only recently read a new book about nuclear fusion, “The Star Builders”, which does the same thing again (review here). It only briefly mentions the total energy gain, and never gives you a number. This misinformation has to stop.

If you come across any popular science article or interview or video that does not clearly spell out what the total energy gain is, please call them out on it. Thanks for watching, see you next week.