Saturday, October 16, 2021

Terraforming Mars in 3 Simple Steps

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


We have made great progress screwing up the climate on this planet, so the time is right to look for a new home on which to continue the successes of the human race. What better place could there be than our neighbor planet Mars? It’s a little cold and a little dusty and it takes seven months to get there, but otherwise it’s a lovely place, and with only 3 simple steps it can be turned into an Earthlike place, or be “terraformed,” as they say. Just like magic. And that’s what we’ll talk about today.

First things first, Mars is about one hundred million kilometers farther away from the Sun than Earth. Its average temperature is minus 60 degrees Celsius or minus 80 Fahrenheit. Its atmosphere is very thin and doesn’t contain oxygen. That doesn’t sound very hospitable to life as we know it, but scientists have come up with a solution for our imminent move to Mars.

We’ll start with the atmosphere, which actually poses two issues: the atmosphere of Mars is very thin, and it contains basically no oxygen. Instead, it’s mostly carbon dioxide and nitrogen.

One reason the atmosphere is so thin is that Mars is smaller than Earth and its mass is only a tenth that of Earth. That’d make for interesting Olympic games, but it also makes it easier for gas to escape. This by the way is why I strongly recommend you don’t play with your anti-gravity device. You don’t want the atmosphere of Earth to escape, do you?

But that Mars is lighter than Earth is a minor problem. The bigger problem with the atmosphere of Mars is that Mars doesn’t have a magnetic field, or at least it doesn’t have one anymore. The magnetic field of a planet, like the one we have here on Earth, is important because it redirects the charged particles which the Sun constantly emits, the so-called solar wind. Without that protection, the solar wind can rip off the atmosphere. That’s not good. Check out my earlier video about solar storms for more about how dangerous they can be.

That’s what happened to Mars: once the protection of the magnetic field faded away, the solar wind ripped off the atmosphere. Indeed, it’s still happening. In 2015, NASA’s MAVEN spacecraft measured the ongoing loss of atmosphere from Mars and estimated it at about 100 grams per second. This constant loss is balanced by the evaporation of gas from the crust of Mars, so the pressure has stabilized at a few millibar. The atmospheric pressure on the surface of Earth is approximately one bar.

Therefore, before we try to create an atmosphere on Mars we first have to create a magnetic field, because otherwise the atmosphere would just be wiped away again. How do you create a magnetic field for a planet? Well, physicists figured out magnetic fields two centuries ago, and it’s really straightforward.

In a paper that was just published in April in the International Journal of Astrobiology, two physicists explain that all you have to do is put a superconducting wire around Mars. Simple enough, isn’t it? The circle would have to have a radius of about 3400 kilometers, but the diameter of the bundled wires only needs to be about five centimeters. Well, okay, you need insulation and a refrigeration system to keep it superconducting. And you need a power station to generate a current. But other than that, no fancy technology required.

That superconducting wire would have a weight of about one million tons, which is only about 100 times the total weight of the Eiffel Tower. The researchers propose to make it out of bismuth strontium calcium copper oxide (BSCCO). Where do you get so much bismuth from? Asteroid mining. Piece of cake.
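
To get a rough sense of the numbers, here is a little back-of-the-envelope calculation in Python. It estimates the current that a single loop with a radius of about 3400 kilometers would have to carry to produce a field of a few microtesla at its center. The target field strength is an illustrative guess of mine, not a number from the paper.

```python
import math

MU_0 = 4 * math.pi * 1e-7   # vacuum permeability in T*m/A

def loop_current_for_center_field(b_target, radius):
    """Current (A) a circular loop of given radius (m) must carry
    so that the magnetic field at its center reaches b_target (T).
    Uses B = mu_0 * I / (2 * R) for a single circular loop."""
    return 2 * b_target * radius / MU_0

# Illustrative assumption: a few microtesla at the loop center,
# roughly a tenth of Earth's surface field of some 30-60 microtesla.
b_target = 3e-6          # tesla
radius = 3.4e6           # meters, comparable to the radius of Mars

current = loop_current_for_center_field(b_target, radius)
print(f"Required current: {current/1e6:.0f} million amperes")
# -> on the order of 10 million amperes, which is why you'd want a
#    superconductor rather than an ordinary copper cable.
```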

Meanwhile on Earth. Will Cutbill from the UK earned an entry into the Guinness Book of World Records by stacking five M&M’s on top of each other.

Back to Mars. With the magnetic field in place, we can move to step 2 of terraforming Mars: creating an atmosphere. This can be done by releasing the remaining carbon dioxide that’s stored in the frozen polar caps and in the rocks. In 2018, a group of American researchers published a paper in Nature in which they estimate that, using the most wildly optimistic assumptions, this would get us to about twenty percent of the atmospheric pressure on Earth.

Leaving aside that no one knows how to release the gas, doing so would lead to a moderate greenhouse effect. It would increase the average temperature on Mars by about 10 Kelvin, to a balmy minus 50 Celsius. That still seems a little chilly, but I hear that fusion power is almost there, so I guess we can heat with that.

Meanwhile on Earth. Visitors to London can now enjoy a new tourist attraction: a man-made hill, 30 meters high, from which you have a great view of… construction sites.

Back to Mars. Okay, so we have a magnetic field and created some kind of atmosphere by releasing carbon dioxide, with the added benefit of increasing the average temperature by a few degrees. The remaining problem is that we can’t breathe carbon dioxide. I mean, we can, but not for very long. So step 3 of terraforming Mars is converting carbon dioxide into oxygen. The only thing we need to do for this is grow a sufficient number of plants.

There’s the issue that plants tend to not flourish at minus fifty degrees, but that’s easy to fix with a little genetic engineering. Plants as we know them also need a range of nutrients they normally get from soil, most importantly nitrogen, phosphorus, and potassium. Luckily, those are present on Mars. The bigger problem may be that the soil on Mars is too thin and too hard, which makes it difficult for plants to grow roots. It also retains water very poorly, so you have to water the plants very often. How do you water plants at minus 50 degrees? Good question!

Meanwhile on Earth you can buy fake Mars soil and try your luck growing plants on it yourself!

Ok, so I admit that the last bit with the plants was a tiny bit sketchy. But there might be a better way to do it. In July 2019 researchers from JPL, Harvard and Edinburgh University published a paper in Nature in which they proposed to cover patches of Mars with a thin layer of aerogel.

An aerogel is a synthetic material which contains a lot of gas. It is super light and has an extremely low thermal conductivity, which means it could keep the surface of Mars warm. The gel would be transparent to visible light but somewhat opaque in the infrared, so it could create an enhanced greenhouse effect directly on the surface. That would heat up the surface, which would release more carbon dioxide. The carbon dioxide would accumulate under the gel, and then plants should be able to grow in that space. So, we’re not talking about oaks but more like algae or something that covers the ground.

In their paper, the researchers estimate that a layer of about 3 centimeters of aerogel could raise the surface temperature of Mars by about 45 Kelvin. With that, the average temperature on Mars would still be below the freezing point of water, but in some places it might rise above it. Sounds great! Except that the atmospheric pressure is so low that the liquid water would start boiling as soon as it melts.
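
To see where the boiling problem comes from, here is a small sketch that estimates the boiling point of water at low pressure with the Clausius-Clapeyron relation. The present-day Mars surface pressure of about 6 millibar is an approximate value, and the calculation is a textbook approximation, not something taken from the aerogel paper.

```python
import math

# Clausius-Clapeyron estimate of the boiling point of water at a given pressure.
L_VAP = 40.7e3      # J/mol, latent heat of vaporization of water (approximate)
R_GAS = 8.314       # J/(mol*K), gas constant
P_REF = 101325.0    # Pa, reference pressure (1 atm)
T_REF = 373.15      # K, boiling point at the reference pressure

def boiling_point(pressure_pa):
    """Boiling temperature (K) at a given pressure, from the integrated
    Clausius-Clapeyron relation with constant latent heat."""
    return 1.0 / (1.0 / T_REF - (R_GAS / L_VAP) * math.log(pressure_pa / P_REF))

for label, p in [("Earth, 1 bar", 101325.0),
                 ("Mars today, ~6 mbar", 600.0)]:
    t = boiling_point(p)
    print(f"{label:>20}: water boils at {t - 273.15:6.1f} deg C")
# On present-day Mars the boiling point comes out at or below 0 deg C,
# so melting ice turns almost directly into vapor.
```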

So as you see, our move to Mars is well under way. Better pack your bags, see you there!

Saturday, October 09, 2021

How I learned to love pseudoscience

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


On this channel, I try to separate the good science from the bad science, the pseudoscience. And I used to think that we’d be better off without pseudoscience, that this would prevent confusion and make our lives easier. But now I think that pseudoscience is actually good for us. And that’s what we’ll talk about today.

Philosophers can’t agree on just what defines “pseudoscience,” but in this episode I will take it to mean theories that are in conflict with evidence but whose promoters believe in them anyway, either by denying the evidence, or denying the scientific method, or maybe just because they have no idea what either the evidence or the scientific method is.

But what we call pseudoscience today might once have been science. Astrology, for example, the idea that the constellations of the stars influence human affairs, was once a respectable discipline. Every king and queen had a personal astrologer to give them advice. And many early medical practices weren’t just pseudoscience, they were often fatal. The literal snake oil, obtained by boiling snakes in oil, was at least both useless and harmless. But physicians also prescribed tapeworms for weight loss. Though in all fairness, that might actually work, if you survive it.

And sometimes, theories accused of being pseudoscientific turned out to be right, for example the idea that the continents on Earth today broke apart from one large landmass. That was considered pseudoscience until evidence confirmed it. And the hypothesis of atoms was at first decried as pseudoscience because one could not, at the time, observe atoms.

So the first lesson we can take away is that pseudoscience is a natural byproduct of normal science. You can’t have one without the other. If we learn something new about nature, some fraction of people will cling on to falsified theories longer than reasonable. And some crazy ideas in the end turn out to be correct.

But pseudoscience isn’t just a necessary evil. It’s actually useful to advance science because it forces scientists to improve their methods.

Single-blind trials, for example, were invented in the 18th century to debunk the practice of Mesmerism. At that time, scientists had already begun to study and apply electromagnetism. But many people were understandably mystified by the first batteries and electrically powered devices. Franz Mesmer exploited their confusion.

Mesmer was a German physician who claimed he’d discovered a very thin fluid that penetrated the entire universe, including the human body. When this fluid was blocked from flowing, he argued, the result was that people fell ill.

Fortunately, Mesmer said, it was possible to control the flow of the fluid and cure people. And he knew how to do it. The fluid was supposedly magnetic, and entered the body through “poles”. The north pole was on your head and that’s where the fluid came in from the stars, and the south pole was at your feet where it connected with the magnetic field of earth.

Mesmer claimed that the flow of the fluid could be unblocked by “magnetizing” people. Here is how the historian Lopez described what happened after Mesmer moved to Paris in 1778:
“Thirty or more persons could be magnetized simultaneously around a covered tub, a case made of oak, about one foot high, filled with a layer of powdered glass and iron filings... The lid was pierced with holes through which passed jointed iron branches, to be held by the patients. In subdued light, absolutely silent, they sat in concentric rows, bound to one another by a cord. Then Mesmer, wearing a coat of lilac silk and carrying a long iron wand, walked up and down the crowd, touching the diseased parts of the patients’ bodies. He was a tall, handsome, imposing man.”
After being “magnetized” by Mesmer, patients frequently reported feeling significantly better. This, by the way, is the origin of the word mesmerizing.

Scientists of the time, Benjamin Franklin and Antoine Lavoisier among them, set out to debunk Mesmer’s claims. For this, they blindfolded a group of patients. Some of them were told they’d get a treatment but then received none, and others were given a treatment without their knowledge.

Franklin and his people found that the supposed effects of mesmerism were not related to the actual treatment, but to the belief of whether one received a treatment. This isn’t to say there were no effects at all. Quite possibly some patients actually did feel better just believing they’d been treated. But it’s a psychological benefit, not a physical one.

In this case the patients didn’t know whether they received an actual treatment, but those conducting the study did. Such trials can be improved by randomly assigning people to one of the two groups so that neither the people leading the study nor those participating in it know who received an actual treatment. This is now called a “double blind trial,” and that too was invented to debunk pseudoscience, namely homeopathy.

Homeopathy was invented by another German, Samuel Hahnemann. It’s based on the belief that diluting a natural substance makes it more effective in treating illness. In eighteen thirty-five, Friedrich Wilhelm von Hoven, a public health official in Nuremberg, got into a public dispute with the dedicated homeopath Johann Jacob Reuter. Reuter claimed that dissolving a single grain of salt in 100 drops of water, and then diluting it 30 times by a factor of 100 would produce “extraordinary sensations” if you drank it. Von Hoven wouldn’t have it. He proposed and then conducted the following experiment.
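
Just to put Reuter’s recipe in perspective, here is a quick estimate of how many salt molecules you’d expect to find in such a dilution. The mass of a grain of salt is my assumption; the conclusion doesn’t depend on it.

```python
# Rough estimate of how many salt molecules survive Reuter's recipe:
# one grain of salt diluted 30 times by a factor of 100 (a "C30" dilution).
AVOGADRO = 6.022e23
GRAIN_MASS_G = 0.065        # assumed mass of one grain of salt, about 65 mg
MOLAR_MASS_NACL = 58.44     # g/mol

molecules_start = GRAIN_MASS_G / MOLAR_MASS_NACL * AVOGADRO
dilution_factor = 100 ** 30

molecules_left = molecules_start / dilution_factor
print(f"Starting molecules: {molecules_start:.1e}")            # ~6.7e20
print(f"Expected molecules per dose: {molecules_left:.1e}")    # ~7e-40, effectively none
```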

He prepared 50 samples of homeopathic salt-water following Reuter’s recipe, and 50 samples of plain water. Today, we’d call the plain water samples a “placebo.” The samples were numbered and randomly assigned to trial participants by repeated shuffling. Here is how they explained this in the original paper from 1835:
“100 vials… are labeled consecutively… then mixed well among each other and placed, 50 per table, on two tables. Those on the table at the right are filled with the potentiation, those on the table at the left are filled with pure distilled snow water. Dr. Löhner enters the number of each bottle, indicating its contents, in a list, seals the latter and hands it over to the committee… The filled bottles are then brought to the large table in the middle, are once more mixed among each other and thereupon submitted to the committee for the purpose of distribution.”
The assignments were kept secret on a list in a sealed envelope. Neither von Hoven nor the patients knew who got what.

They found 50 people to participate in the trial. For three weeks von Hoven collected reports from the study participants, after which he opened the sealed envelope to see who had received what. It turned out that only eight participants had experienced anything unusual. Five of those had received the homeopathic dilution, three had received water. Using today’s language you’d say the effect wasn’t statistically significant.
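
In today’s language, this is a simple statistical test. The little sketch below looks only at the eight participants who reported something unusual and asks how likely a 5-to-3 split, or a more extreme one, would be if the vials had no effect at all, so that each reporter was equally likely to have received either vial. The even-odds assumption is my simplification of the trial design.

```python
from math import comb

# 8 participants reported "extraordinary sensations":
# 5 had the homeopathic dilution, 3 had plain water.
reports_homeopathic = 5
reports_total = 8

# Null hypothesis: the vials do nothing, so each of the 8 reporters was
# equally likely to have received either kind of vial (p = 0.5).
# One-sided p-value: probability of 5 or more "hits" out of 8 by chance.
p_value = sum(comb(reports_total, k)
              for k in range(reports_homeopathic, reports_total + 1)) / 2**reports_total
print(f"One-sided p-value: {p_value:.2f}")   # ~0.36, nowhere near significance
```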

Von Hoven wasn’t alone with his debunking passion. He was a member of the “society of truth-loving men”. That was one of the skeptical societies that had popped up to counter the spread of quackery and fraud in the 19th century. The society of truth-loving men no longer exists. But the oldest such society that still exists today was founded as far back as 1881 in the Netherlands. It’s called the Vereniging tegen de Kwakzalverij, literally the “Society Against Quackery”. This society gave out an annual prize called the Master Charlatan Prize to discourage the spread of quackery. They still do this today.

Thanks to this Dutch anti-quackery society, the Netherlands became one of the first countries with governmental drug regulation. In case you wonder, the first country to have such a regulation was the United Kingdom with the 1868 Pharmacy Act. The word “skeptical” has suffered somewhat in recent years because a lot of science deniers now claim to be skeptics. But historically, the task of skeptic societies was to fight pseudoscience and to provide scientific information to the public.

And there are more examples where fighting pseudoscience resulted in scientific and societal progress, for example the efforts to debunk telepathy in the late nineteenth century. At the time, some prominent people believed in it, for example Nobel Prize winners Lord Rayleigh and Charles Richet. Richet proposed to test telepathy by having one person draw a playing card at random and concentrate on it for a while. Then another person had to guess the card. The results were then compared against random chance. This is basically how we calculate statistical significance today.
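
To illustrate the idea, here is a small sketch that computes how surprising a given number of correct guesses would be if the guesser simply picked cards at random. The number of trials is made up for illustration; it is not from Richet’s experiments.

```python
from math import comb

def binomial_tail(hits, trials, p):
    """Probability of getting at least `hits` successes in `trials`
    independent guesses, each correct with probability p."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(hits, trials + 1))

# Illustrative numbers: 100 draws from a 52-card deck, guessed blindly.
p_chance = 1 / 52
for hits in (2, 5, 8):
    print(f"{hits:2d} correct out of 100: p = {binomial_tail(hits, 100, p_chance):.4f}")
# A couple of correct guesses is unremarkable; many more than the expected ~2
# would be evidence that something other than chance is going on.
```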

And if you remember, Karl Popper came up with his demarcation criterion of falsification because he wanted to show that Marxism and Freud’s psychoanalysis weren’t proper science. Now, of course we know today that falsification is not the best way to go about it, but Popper’s work was arguably instrumental to the entire discipline of the philosophy of science. Again, that came out of the desire to fight pseudoscience.

And this fight isn’t over. We’re still today fighting pseudoscience and in that process scientists constantly have to update their methods. For example, all this research we see in the foundations of physics on multiverses and unobservable particles doesn’t contribute to scientific progress. I am pretty sure in fifty years or so that’ll go down as pseudoscience. And of course there’s still loads of quackery in medicine, just think of all the supposed COVID remedies that we’ve seen come and go in the past year.

The fight against pseudoscience today is very much a fight to get relevant information to those who need it. And again I’d say that in the process scientists are forced to get better and stronger. They develop new methods to quickly identify fake studies, to explain why some results can’t be trusted, and to improve their communication skills.

In case this video inspired you to attempt self-experiments with homeopathic remedies, please keep in mind that not everything that’s labeled “homeopathic” is necessarily strongly diluted. Some homeopathic remedies contain barely diluted active ingredients of plants that can be dangerous when overdosed. Before you assume it’s just water or sugar, please check the label carefully.

If you want to learn more about the history of pseudoscience, I can recommend Michael Gordin’s recent book “On the Fringe”.

Saturday, October 02, 2021

How close is nuclear fusion power?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Today I want to talk about nuclear fusion. I’ve been struggling with this video for some while. This is because I am really supportive of nuclear fusion research and development. However, the potential benefits of current research on nuclear fusion have been incorrectly communicated for a long time. Scientists are confusing the public and policy makers in a way that makes their research appear more promising than it really is. And that’s what we’ll talk about today.

There is a lot to say about nuclear fusion, but today I want to focus on its most important aspect, how much energy goes into a fusion reactor, and how much comes out. Scientists quantify this with the energy gain, that’s the ratio of what comes out over what goes in and is usually denoted Q. If the energy gain is larger than 1 you create net energy. The point where Q reaches 1 is called “Break Even”.

The record for energy gain was just recently broken. You may have seen the headlines. An experiment at the National Ignition Facility in the United States reported they’d managed to get out seventy percent of the energy they put in, so a Q of 0.7. The previous record was 0.67. It was set in nineteen ninety-seven by the Joint European Torus, JET for short.

The most prominent fusion experiment that’s currently being built is ITER. You will find plenty of articles repeating that ITER, when completed, will produce ten times as much energy as goes in, so a gain of 10. Here is an example from a 2019 article in the Guardian by Philip Ball, who writes
“[The Iter project] hopes to conduct its first experimental runs in 2025, and eventually to produce 500 megawatts (MW) of power – 10 times as much as is needed to operate it.”

Here is another example from Science Magazine where you can read “[ITER] is predicted to produce at least 500 megawatts of power from a 50 megawatt input.”

So this looks like we’re close to actually creating energy from fusion, right? No, wrong.

Remember that nuclear fusion is the process by which the Sun creates power. The Sun forces nuclei into each other with the gravitational force created by its huge mass. We can’t do this on Earth, so we have to find some other way. The currently most widely used technology for nuclear fusion is heating the fuel in strong magnetic fields until it becomes a plasma. The temperature that must be reached is about 150 million Kelvin. The other popular option is shooting at a fuel pellet with lasers. There are some other methods, but they haven’t gotten very far in research and development.

The confusion which you find in pretty much all popular science writing about nuclear fusion is that the energy gain they quote refers only to the energy that goes into the plasma and comes out of the plasma.

In the technical literature, this quantity is normally not just called Q but more specifically Q-plasma. This is not the ratio of the entire energy that comes out of the fusion reactor over that which goes into the reactor, which we can call Q-total. If you want to build a power plant, and that’s what we’re after in the end, it’s the Q-total that matters, not the Q-plasma. 

 Here’s the problem. Fusion reactors take a lot of energy to run, and most of that energy never goes into the plasma. If you keep the plasma confined with a magnetic field in a vacuum, you need to run giant magnets and cool them and maintain that. And pumping a laser isn’t energy efficient either. These energies never appear in the energy gain that is normally quoted.

The Q-plasma also doesn’t take into account that if you want to operate a power plant, the heat that is created by the plasma would still have to be converted into electric energy, and that can only be done with a limited efficiency, optimistically maybe fifty percent. As a consequence, the Q-total is much lower than the Q-plasma.
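
The difference between the two quantities is easy to write down. Here is a minimal sketch; the fifty percent heat-to-electricity efficiency is the optimistic assumption mentioned above, and the example numbers are just placeholders.

```python
def q_plasma(fusion_power, plasma_heating_power):
    """Gain as usually quoted: fusion power out of the plasma
    over the heating power delivered into the plasma."""
    return fusion_power / plasma_heating_power

def q_total(fusion_power, total_input_power, heat_to_electric=0.5):
    """Gain that matters for a power plant: electric power you could sell
    over the total electric power the whole facility consumes."""
    return heat_to_electric * fusion_power / total_input_power

# Placeholder numbers, just to show how different the two gains can be:
# 100 MW of fusion heat, 10 MW heating the plasma, 300 MW running the facility.
print(q_plasma(100, 10))    # 10.0  -> looks like a big success
print(q_total(100, 300))    # ~0.17 -> the facility still consumes far more than it makes
```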

If you didn’t know this, you’re not alone. I didn’t know this until a few years ago either. How can such a confusion even happen? I mean, this isn’t rocket science. The total energy that goes into the reactor is more than the energy that goes into the plasma. And yet, science writers and journalists constantly get this wrong. They get the most basic fact wrong on a matter that affects tens of billions in research funding.

It’s not like we are the first to point out that this is a problem. I want to read you some words from a 1988 report from the European Parliament, more specifically from the Committee for Scientific and Technological Options Assessment. They were tasked with establishing criteria for the assessment of European fusion research.

In 1988, they already warned explicitly of this very misunderstanding.
“The use of the term `Break-even’ as defining the present programme to achieve an energy balance in the Hydrogen-Deuterium plasma reaction is open to misunderstanding. IN OUR VIEW 'BREAK-EVEN' SHOULD BE USED AS DESCRIPTIVE OF THE STAGE WHEN THERE IS AN ENERGY BREAKEVEN IN THE SYSTEM AS A WHOLE. IT IS THIS ACHIEVEMENT WHICH WILL OPEN THE WAY FOR FUSION POWER TO BE USED FOR ELECTRICITY GENERATION.”
They then point out the risk:
“In our view the correct scientific criterion must dominate the programme from the earliest stages. The danger of not doing this could be that the entire programme is dedicated to pursuing performance parameters which are simply not relevant to the eventual goal. The result of doing this could, in the very worst scenario be the enormous waste of resources on a program that is simply not scientifically feasible.”
So where are we today? Well, we’re spending lots of money on increasing Q-plasma instead of increasing the relevant quantity Q-total. How big is the difference? Let us look at ITER as an example.

You have seen in the earlier quotes about ITER that the energy input is normally said to be 50 MegaWatts. But according to the head of the Electrical Engineering Division of the ITER Project, Ivone Benfatto, ITER will consume about 440 MegaWatts while it produces fusion power. That gives us an estimate for the total energy that goes in.

Though that is misleading already because 120 of those 440 MegaWatts are consumed whether or not there’s any plasma in the reactor, so using this number assumes the thing would be running permanently. But okay, let’s leave this aside.

The plan is that ITER will generate 500 MegaWatts of fusion power in heat. If we assume a 50% efficiency for converting this heat into electricity, ITER will produce about 250 MegaWatts of electric power.

That gives us a Q-total of about 0.57. That’s less than a tenth of the normally stated Q-plasma of 10. Even optimistically, ITER will still consume roughly twice the power it generates. What’s with the earlier claim of a Q of 0.67 for the JET experiment? Same thing.

If you look at the total energy, JET consumed more than 700 MegaWatts of electricity to get its sixteen MegaWatts of fusion power, and that’s heat, not electricity. So if you again assume 50 percent efficiency in the heat-to-electricity conversion, you get a Q-total of about 0.01, and not the claimed 0.67.

And those recent headlines about the NIF success? Same thing again. It’s the Q-plasma that is 0.7. That’s calculated with the energy that the laser delivers to the plasma. But how much energy do you need to fire the laser? I don’t know for sure, but NIF is a fairly old facility, so a rough estimate would be 100 times as much. If they upgraded their lasers, maybe 10 times as much. Either way, the Q-total of this experiment is almost certainly well below 0.1.
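
Putting the numbers from above together, here is a short sketch that reproduces these rough Q-total estimates for ITER, JET, and NIF. The fifty percent conversion efficiency and the factor of roughly 100 for firing NIF’s lasers are the same assumptions I made above, so these are order-of-magnitude estimates, not official figures.

```python
HEAT_TO_ELECTRIC = 0.5   # optimistic heat-to-electricity conversion efficiency

def q_total(fusion_heat_mw, total_input_mw, efficiency=HEAT_TO_ELECTRIC):
    """Net gain of the whole facility: sellable electric power
    over the total electric power it consumes."""
    return efficiency * fusion_heat_mw / total_input_mw

# Numbers as quoted in the text (rough, order-of-magnitude values).
print(f"ITER (planned): {q_total(500, 440):.2f}")    # ~0.57
print(f"JET  (1997):    {q_total(16, 700):.2f}")     # ~0.01
# NIF: Q-plasma of 0.7, but firing the laser takes very roughly
# 100 times the energy the laser delivers to the target.
print(f"NIF  (2021):    {0.5 * 0.7 / 100:.3f}")      # well below 0.1
```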

Of course the people who work on this know the distinction perfectly well. But I can’t shake the impression they quite like the confusion between the two Qs. Here is, for example, a quote from Holtkamp, who at the time was the project construction leader of ITER. He said in an interview in 2006:
“ITER will be the first fusion reactor to create more energy than it uses. Scientists measure this in terms of a simple factor—they call it Q. If ITER meets all the scientific objectives, it will create 10 times more energy than it is supplied with.”
Here is Nick Walkden from JET in a TED talk referring to ITER: “ITER will produce ten times the power out from fusion energy than we put into the machine.” and “Now JET holds the record for fusion power. In 1997 it got 67 percent of the power out that we put in. Not 1 not 10 but still getting close.”

But okay, you may say, no one expects accuracy in a TED talk. Then listen to ITER Director General Dr. Bigot speaking to the House of Representatives in April 2016:

[Rep]: I look forward to learning more about the progress that ITER has made under Doctor Bigot’s leadership to address previously identified management deficiencies and to establish a more reliable path forward for the project.

[Bigot]: Okay, so ITER will have delivered in that full demonstration that we could have okay 500 Megawatt coming out of the 50 Megawatt we will put in.
What are we to make of all this?

Nuclear fusion power is a worthy research project. It could have a huge payoff for the future of our civilization. But we need to be smart about just what research to invest into because we have limited resources. For this, it is super important that we focus on the relevant question: will it deliver net energy to the grid?

There seem to be a lot of people in fusion research who want you to remain confused about just what the total energy gain is. I only recently read a new book about nuclear fusion, “The Star Builders,” which does the same thing again (review here). It only briefly mentions the total energy gain, and never gives you a number. This misinformation has to stop.

If you come across any popular science article or interview or video that does not clearly spell out what the total energy gain is, please call them out on it. Thanks for watching, see you next week.