
Saturday, April 30, 2022

Did the W-boson just "break the standard model"?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Hey there’s yet another anomaly in particle physics. You have probably seen the headlines, something with the mass of one of those particles called a W-boson. And supersymmetry is once again the alleged explanation. How seriously should you take this? And why are particle physicists constantly talking about supersymmetry, hasn’t that been ruled out? That’s what we’ll talk about today.

The last time I talked about an anomaly in particle physics was a few months ago and, would you know it, two weeks later it disappeared. Yes, it disappeared. If you remember, there was something weird going on with the neutrino oscillations in an experiment called LSND, then a follow-up experiment called MiniBooNE confirmed this, and then they improved the accuracy of the follow-up experiment and the anomaly was gone. Poof, end of story. No more neutrino anomaly.

You’d think this would’ve taught me not to get excited about anomalies but, ha, you know me better. Now there’s another experimental group that claims to have found an anomaly, and of course we have to talk about this. This one actually isn’t a new experiment, it’s a new analysis of data from an experiment that was discontinued more than 10 years ago, a particle collider called the Tevatron at Fermilab in the United States. It reached collision energies of about a tera electron volt, TeV for short, hence the name.

The data were collected from 2002 to 2011 by the CDF collaboration. During that time they measured about 4 million events that contained a particle called the W-boson.

The W-boson is one of the particles in the standard model; it’s one of those that mediate the weak nuclear force. So it’s similar to the photon, but it has a mass and it’s extremely short-lived. It really only shows up in particle colliders. The value of the mass of the W-boson is related to other parameters in the standard model which have also been measured, so it isn’t an independent parameter, it has to fit with the others.

The mass of the W-boson has been measured a few times previously, you can see a summary of those measurements in this figure. On the horizontal axis you have the mass of the W-boson. The grey line is the expectation if the standard model is correct. The red dots with the error bars are the results from different experiments. The one at the bottom is the result from the new analysis.

One thing that jumps out right away is that the mean value of the new measurement isn’t so different from earlier data analyses. The striking thing about this new analysis is the small error bar. That the error bar is so small is the reason why this result has such a high statistical significance. They quote a disagreement with the standard model at 6.9 sigma. That’s well above the discovery threshold in particle physics, which is often somewhat arbitrarily put at 5 sigma.
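If you want a feel for what those sigma values mean, here is a small sketch, using nothing but standard statistics (this is not from the paper), that converts a significance quoted in sigma into a one-sided p-value:

```python
# Convert a significance in standard deviations ("sigma") into a one-sided
# p-value, i.e. the probability of a statistical fluctuation at least that large.
from scipy.stats import norm

for sigma in (3, 5, 6.9):
    p = norm.sf(sigma)  # survival function = 1 - CDF
    print(f"{sigma} sigma  ->  p = {p:.1e}")

# 5 sigma corresponds to p of about 3e-7, the usual discovery threshold;
# 6.9 sigma corresponds to p of about 3e-12, far beyond it -- provided,
# of course, that the error bar itself is right.
```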

What did they do to get the error bar so small? Well for one thing they have a lot of data. But they also did a lot of calibration cross-checks with other measurements, which basically means they know very precisely how to extract the physical parameters from the raw data, or at least they think they do. Is this reasonable? Yes. Is it correct? I don’t know. It could be. But in all honesty, I am very skeptical that this result will hold up. More likely, they have underestimated the error and their result is actually compatible with the other measurements.

But if it does hold up, what does it mean? It would mean that the standard model is wrong, because there’d be a measurement that doesn’t fit together with the predictions of the theory. Then what? Well, then we’d have to improve the standard model. Theoretical particle physicists have made many suggestions for how to do that; the most popular one has for a long time been supersymmetry. It’s also one of the possible explanations for the new anomaly that the authors of the paper discuss.

What is supersymmetry? Supersymmetry isn’t a theory, it’s a property of a class of models. And that class of models is very large. These models all have in common that they introduce a new partner particle for each particle in the standard model. And then there are usually some more new particles. So, in a nutshell, it’s a lot more particles.

What the predictions of a supersymmetric model are depends strongly on the masses of those new particles and how they decay and interact. In practice this means whatever anomaly you measure, you can probably find some supersymmetric model that will “explain” it. I am scare quoting “explain” because if you can explain everything you really explain nothing.

This is why supersymmetry is mentioned in one breath with every anomaly that you hear of: because you can use it to explain pretty much everything if you only try hard enough. For example, you may remember the 4.2 sigma deviation from the standard model in the magnetic moment of the muon. Could it be supersymmetry? Sure. Or what’s with this B-meson anomaly, that lingers around at 3 sigma and makes headlines once or twice a year. Could that be supersymmetry? Sure.

Do we in any of these cases actually *know* that it has to be supersymmetry? No. There are many other models you could cobble together that would also fit the bill. In fact, the new CDF paper about the mass of the W-boson also mentions a few other possible explanations: additional scalar fields, a second Higgs, dark photons, composite Higgs, and so on.

There are literally thousands of those models, none of which has any evidence going in its favor. And immediately after the new results appeared, particle physicists began cooking up new “explanations”. Here are just a few examples of those. By the time this video appears there’ll probably be a few dozen more.

But wait, you may wonder now, hasn’t the Large Hadron Collider ruled out supersymmetry? Good point. Before the Large Hadron Collider turned on, particle physicists claimed that it would either confirm or rule out supersymmetry. Supersymmetry was allegedly an easy-to-find signal. If supersymmetric particles existed, they should have shown up pretty much immediately in the first collisions. That didn’t happen. What did particle physicists do? Oh, suddenly they claimed that of course this didn’t rule out supersymmetry. It had just ruled out certain supersymmetric models. So which version is correct? Did or didn’t the LHC rule out supersymmetry?

The answer is that the LHC indeed did not rule out supersymmetry, it never could. As I said, supersymmetry isn’t a theory. It’s a huge class of models that can be made to fit anything. Those physicists who said otherwise were either incompetent or lying or both, the rest knew it but almost all of them kept their mouth shut, and now they hope you’ll forget about this and give them money for a bigger collider.

As you can probably tell, I am very much not amused that the particle physics community never came clean about this. They never admitted to having made false statements, accidentally or deliberately, and they never gave us any reason to think it wouldn’t happen again. I quite simply don’t trust them.

Didn’t supersymmetry have something to do with string theory? Yes, indeed. So what does this all mean for string theory? The brief answer is: nothing whatsoever. String theory requires supersymmetry, but the opposite is not true, supersymmetry doesn’t necessarily require string theory. So even in the unlikely event that we would find evidence for supersymmetry, this wouldn’t tell us whether string theory is correct. It would certainly boost confidence in string theory but ultimately wouldn’t help much because string theorists never managed to get the standard model out of their theory, despite the occasional claim to the contrary.

I’m afraid all of this sounds rather negative. Well. There’s a reason I left particle physics. Particle physics has degenerated into a paper production enterprise that is of virtually no relevance for societal progress or for progress in any other discipline of science. The only reason we still hear so much about it is that a lot of funding goes into it and so a lot of people still work on it; most of them don’t like me. But the disciplines where the foundations of physics currently make progress are cosmology and astrophysics, and everything quantum: quantum information, quantum computing, quantum metrology, and so on, which is why that’s what I mostly talk about these days.

The LHC has just been upgraded and started operating again a few days ago. In the coming years, they will collect a lot more data than they have so far and this could lead to new discoveries. But when the headlines come in, keep in mind that the more data you collect, the more anomalies you’ll see, so it’s almost guaranteed they will see a lot of bumps at low significance “that could break the standard model” but then go away. It’s possible of course that one of those is the real thing, but to borrow a German idiom, don’t eat the headlines as hot as they’re cooked.

Saturday, April 23, 2022

I stopped working on black hole information loss. Here’s why.

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


It occurred to me the other day that I’ve never told you what I did before I ended up in the basement in front of a green screen. So today I want to tell you why I, like many other physicists, was fascinated by the black hole paradox that Stephen Hawking discovered before I was even born. And why I, like many other physicists, tried to solve it. But also why I, in the end, unlike many other physicists, decided that it’s a waste of time. What’s the black hole information paradox? Has it been solved, and if not, will it ever be solved? What, if anything, is new about those recent headlines? That’s what we’ll talk about today.

First things first, what’s the black hole information loss paradox? Imagine you have a book and you throw it into a black hole. The book disappears behind the horizon, the black hole emits some gravitational waves, and then you have a black hole with a somewhat higher mass. And that’s it.

This is what Einstein’s theory of general relativity says. Yes, that guy again. In Einstein’s theory of general relativity black holes are extremely simple. They are completely described by only three properties: their mass, angular momentum, and electric charge. This is called the “no hair” theorem. Black holes are bald and featureless, and you can mathematically prove it.

But that doesn’t fit together with quantum mechanics. In quantum mechanics, everything that happens is reversible so long as you don’t make a measurement. This doesn’t mean processes look the same forward and backward in time; that would be called time-reversal “invariance”. It merely means that if you start with some initial state and wait for it to develop into a final state, then you can tell from the final state what the initial state was. In this sense, information cannot get lost. And this time-reversibility is a mathematical property of quantum mechanics which is experimentally extremely well confirmed.
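In more formal terms, the time evolution in quantum mechanics (without measurements) is unitary, and a unitary evolution can always be undone:

$$ |\psi_{\text{final}}\rangle = U\,|\psi_{\text{initial}}\rangle, \qquad U^\dagger U = \mathbb{1} \quad\Longrightarrow\quad |\psi_{\text{initial}}\rangle = U^\dagger\,|\psi_{\text{final}}\rangle. $$

So in principle the initial state can always be reconstructed from the final one.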

However, in practice, reversing a process is possible only in really small systems. Processes in large systems become for all practical purposes irreversible extremely quickly. If you burn your book, for example, then for all practical purposes the information in it was destroyed. However, in principle, if we could only measure the properties of the smoke and ashes well enough, we could calculate what the letters in the book once were.

But when you throw the book into a black hole that’s different. You throw it in, the black hole settles into its hairless state, and the only difference between the initial and final state is the total mass. The process seems irreversible. There just isn’t enough information in the hairless black hole to tell what was in the book. The black hole doesn’t fit together with quantum mechanics. And note that making a measurement isn’t necessary to arrive at this conclusion.

You may remember that I said the black hole emits some gravitational waves. And those indeed contain some information, but so long as general relativity is correct, they don’t contain enough information to encode everything that’s in the book.

Physicists have known about this puzzle since the 1960s or so, but initially they didn’t take it seriously. At the time, they just said, well, it’s only when we look at the black hole from the outside that we don’t know how to reverse this process. Maybe the missing information is inside. And we don’t really know what’s inside a black hole because Einstein’s theory breaks down there. So maybe not a problem after all.

But then along came Stephen Hawking. Hawking showed in the early 1970s that actually black holes don’t just sit there forever. They emit radiation, which is now called Hawking radiation. This radiation is thermal which means it’s random except for its temperature, and the temperature is inversely proportional to the mass of the black hole.
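The inverse relation between mass and temperature can be written down explicitly. Hawking’s formula for the temperature is

$$ T_{\rm H} = \frac{\hbar c^3}{8\pi G M k_{\rm B}}, $$

which for a black hole of one solar mass works out to roughly 60 nanokelvin, far below anything we could hope to measure.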

This means two things. First, there’s no new information which comes out in the Hawking radiation. And second, as the black hole radiates, its mass shrinks because E=mc^2 and energy is conserved, and that means the black hole temperature increases as it evaporates. As a consequence, the evaporation of a black hole speeds up. Eventually the black hole is gone. All you have left is this thermal radiation which contains no information.

And now you have a real problem. Because you can no longer say that maybe the information is inside the black hole. If a black hole forms, for example, in the collapse of a star, then after it’s evaporated, all the information about that initial star, and everything that fell into the black hole later, is completely gone. And that’s inconsistent with quantum mechanics.

This is the black hole information loss paradox. You take quantum mechanics and general relativity, combine them, and the result doesn’t fit together with quantum mechanics.

There are many different ways physicists have tried to solve this problem and every couple of months you see yet another headline claiming that it’s been solved. Here is the most recent iteration of this cycle, which is about a paper by Steve Hsu and Xavier Calmet. The authors claim that the information does get out. Not in gravitational waves, but in gravitons that are quanta of the gravitational field. Those are not included in Hawking’s original calculation. These gravitons add variety to black holes, so now they have hair. This hair can store information and release it with the radiation.

This is a possibility that I thought about at some point myself, as I am sure many others in the field have too. I eventually came to the conclusion that it doesn’t work. So I am somewhat skeptical that their proposal actually solves the problem. But maybe I was wrong and they are right. Gerard ‘t Hooft, by the way, also thinks the information comes out in gravitons, though in a different way than Hsu and Calmet. So this is not an outlandish idea.

I went through different solutions to the black hole information paradox in an earlier video and will not repeat them all here, but I want to instead give you a general idea for what is happening. In brief, the issue is that there are many possible solutions.

Schematically, the way that the black hole information loss paradox comes about is that you take Einstein’s general relativity and combine it with quantum mechanics. Each has its set of assumptions. If you combine them, you have to make some further assumptions about how you do this. The black hole information paradox then states that all those assumptions together are inconsistent. This means you can take some of them, combine them, and obtain a statement which contradicts another assumption. A simple example of what I mean by “inconsistent”: the assumption x < 0 is inconsistent with the assumption x > 1.

If you want to resolve an inconsistency in a set of assumptions, you can remove some of the assumptions. If you remove sufficiently many, the inconsistency will eventually vanish. But then the predictions of your theory become ambiguous because you miss details on how to do calculations. So you have to put in new assumptions to replace the ones that you have thrown out. And then you show that this new set of assumptions is no longer inconsistent. This is what physicists mean when they say they “solved the problem”.

But. There are many different ways to resolve an inconsistency because there are many different assumptions you can throw out. And this means there are many possible solutions to the problem which are mathematically correct. But only one of them will be correct in the sense of describing what indeed happens in nature. Physics isn’t math. Mathematics is a great tool, but in the end you have to make an actual measurement to see what happens in reality.

And that’s the problem with the black hole information loss paradox. The temperature of the black holes that we can observe today is way too small for us to measure the Hawking radiation. Remember that the larger the black hole, the smaller its temperature. The temperature of astrophysical black holes is below the temperature of the CMB. And even if that wasn’t the case, what do you want to do? Sit around for 100 billion years to catch all the radiation and see if you can figure out what fell into the black hole? It’s not going to happen.

What’s going to happen with this new solution? Most likely, someone’s going to find a problem with it, and everyone will continue working on their own solution. Indeed, there’s a good chance that by the time this video appears this has already happened. For me, the real paradox is why they keep doing it. I guess they do it because they have been told so often this is a big problem that they believe if they solve it they’ll be considered geniuses. But of course their colleagues will never agree that they solved the problem to begin with. So by all chances, half a year from now you’ll see another headline claiming that the problem has been solved.

And that’s why I stopped working on the black hole information loss paradox. Not because it’s unsolvable. But because you can’t solve this problem with mathematics alone, and experiments are not possible, not now and probably not in the next 10000 years.

Why am I telling you this? I am not talking about this because I want to change the mind of my colleagues in physics. They have grown up thinking this is an important research question and I don’t think they’ll change their mind. But I want you to know that you can safely ignore headlines about black hole information loss. You’re not missing anything if you don’t read those articles. Because no one can tell which solution is correct in the sense that it actually describes nature, and physicists will not agree on one anyway. Because if they did, they’d have to stop writing papers about it.

Saturday, April 16, 2022

How serious is antibiotic resistance?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Antibiotics save lives. But more and more bacteria are becoming resistant to antibiotics. As a result, some infections can simply no longer be treated. Just a few weeks ago an international team of scientists led by researchers at the University of Washington published a report in the Lancet, according to which antibiotic resistance now kills more than a million people worldwide each year. And the numbers are rising.

How serious is the situation? What are scientists doing to develop new antibiotics? Did you know that bacteria are not the most abundant organisms on Earth? And what do rotten eggplants have to do with all of that? That’s what we will talk about today.

First things first, what are antibiotics? Literally the word means “against life” which doesn’t sound particularly healthy. But “antibiotic” just refers to any type of substance that kills bacteria (bactericidal) or inhibits their growth (bacteriostatic). Antibiotics are roughly categorized either as “broad spectrum”, which target many types of bacteria, or “narrow spectrum” which target very specific bacteria.

The big challenge for antibiotics is that you want them to work in or on the body of an infected person, without killing the patient along with the bacteria. That’s what makes things difficult.

There are various ways antibiotics work, and most of them target some difference between bacteria and our own cells, so that the antibiotic harms the bacteria but not the cells.

For example, our cells have membranes, but they don’t have cell walls, which are rigid protective layers that cover the membrane. But bacteria do have cell walls. So one way that antibiotics work is to destabilize the cell wall. Penicillin for example does that.

Another thing you can do is to prevent bacterial cells from producing certain enzymes that the bacteria need for replication, or inhibit their synthesis of folic acid which they need to grow.

As you can see, antibiotics work in a number of entirely different ways. And each of them can fight some bacteria but not others. You also have to take into account where the bacterial infection is, because not all antibiotics reach all parts of the body equally well. This is why you need a prescription for antibiotics – they have to fit the infection you’re dealing with, otherwise they’re in the best case useless. In the worst case you may breed yourself a tough strain that will resist further treatment.

This problem was pointed out already by the Scottish physician Alexander Fleming, who discovered the first antibiotic, penicillin, in 1928. Penicillin is still used today, for example to treat scarlet fever. According to some estimates, it has saved about 200 million lives so far.

But already in 1945, Fleming warned the world of what would happen next, namely that bacteria would adapt to the antibiotics and learn to survive them. They become “resistant”. Fleming wrote
“The greatest possibility of evil in self-medication is the use of too-small doses, so that, instead of clearing up infection, the microbes are educated to resist penicillin and a host of penicillin-fast organisms is bred out which can be passed on to other individuals.”
To some extent antibiotic resistance is unavoidable – it’s just how natural selection works. But the problem becomes significantly worse if one doesn’t see an antibiotic treatment through at the full dose, because then bacteria will develop resistance much faster.

The world didn’t listen to Fleming’s warning. One big reason was that in the 1940s, scientists discovered that antibiotics were good for something else: They made farm animals grow faster, regardless of whether those animals were ill.

On average, livestock that were fed antibiotic growth promoters grew 3-11% faster. So farmers began feeding antibiotics to chickens, pigs, and cattle because that way they would have more meat to sell.

Things were pretty crazy at the time. By the 1950s the US industry was “painting” steaks with antibiotics to extend their shelf life. They were washing spinach with antibiotics. Sometimes they even mixed antibiotics into ground meat. You could buy antibiotic soap. The stuff leaked everywhere. Studies at the time found penicillin even in milk, and some people promptly developed an allergy to it.

It wasn’t until 1971 that the UK banned the use of some antibiotics for animal farming. But it’s only since 2006 that the use of antibiotics as growth promoters in animals has been generally forbidden in the European Union. In the USA it took until 2017 for a similar ban to come into effect.

Using antibiotics for meat production isn’t the only problem. Another problem is over-prescription. According to the US Centers for Disease Control and Prevention, about 30 percent of prescriptions for antibiotics in the USA are unnecessary or useless, in most cases because they are mistakenly prescribed against respiratory infections that are caused by viruses, against which antibiotics do nothing.

A 2018 paper found that the global consumption of antibiotics per person has increased by 39% from 2000 to 2015 and it’s probably still increasing, though the increase is largely driven by low and middle income countries which are catching up. And with that, antibiotic resistance is on the rise.

Already in 2019, the World Health Organization (WHO) declared that antimicrobial resistance (which includes antibiotic resistance) is currently one of the top 10 global public health threats. They say that “antibiotics are becoming increasingly ineffective as drug-resistance spreads globally leading to more difficult to treat infections and death”.

According to the recent study from the Lancet which I mentioned in the introduction, the number of people who die from treatment-resistant bacterial infections is currently about 1.27 million per year. That’s about twice as many as die from malaria. They also estimate that antibiotic resistance indirectly contributes to as many as 4.95 million deaths each year. The Lancet article also found that young children below 5 years are at the highest risk.

So the situation is not looking good. What are scientists doing?

First there are a couple of obvious ideas, like bringing back old antibiotics that have gone out of use, because bacteria may have lost their resistance to them, and keeping an eye out for new inspiration in nature. For example, in 2016, a group of researchers from Denmark reported they’d found that leaf-cutting ants use natural antibiotics. The next one you probably guessed: Artificial Intelligence to the rescue.

Two years ago, researchers from MIT published a paper in the journal Cell in which they explain how they used deep learning to find new antibiotics. They first trained their software on 2500 molecules whose antibiotic functions are known and also taught it to recognize structures that are known to be toxic.

Then they rated 6000 other molecules with scores from 0 to 1 for how likely the molecules were to make good antibiotics. Among the molecules with high scores they focused on those whose structure was different from that of the known antibiotics because they were hoping to find something really new.
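Their actual model was a graph-based deep neural network trained directly on molecular structures, so the following is only a toy sketch of the general workflow – train a classifier on molecules with known activity, then rank an unscreened library by predicted score. The feature arrays below are hypothetical placeholders, not real chemical data:

```python
# Toy illustration of "train on labelled molecules, then score a candidate library".
# The random arrays stand in for some numerical encoding of each molecule.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
train_features = rng.normal(size=(2500, 64))    # 2500 molecules with known activity
train_labels = rng.integers(0, 2, size=2500)    # 1 = inhibits bacterial growth
library_features = rng.normal(size=(6000, 64))  # unscreened candidate molecules

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(train_features, train_labels)

# Score every candidate between 0 and 1 and look at the highest-ranked ones.
scores = model.predict_proba(library_features)[:, 1]
top_candidates = np.argsort(scores)[::-1][:10]
print(top_candidates, scores[top_candidates])
```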

They found one molecule that fit the bill: halicin. Halicin is not a new drug, they just renamed what was previously known under the catchy name c-Jun N-terminal kinase inhibitor SU3327. They called it halicin after HAL from 2001: A Space Odyssey, I am guessing because their artificial intelligence is exploring a big “chemical space”, or otherwise I’m too dumb to get it.

They did an experiment and found that halicin indeed worked against some multiresistant bacteria, both in a petri dish and in mice. Then they repeated the process, but with a much bigger library of more than ten million molecules. They identified some promising candidates for new antibiotics and are now doing further tests.

It’s a long way from the petri dish to the market, but this seems really promising, though it has the usual limitations of artificial intelligence: software can only learn if there’s something to train on, so this is unlikely to discover entirely new pathways of knocking out bacteria.

Another avenue that researchers are pursuing is the revival of phage therapy. Phages are viruses that attack bacteria. They are about 100 times smaller than bacteria and are the most abundant organisms on the planet. There are an estimated 10 million trillion trillion of them around us, that’s ten to the 31. And phages are everywhere: on surfaces, in soil, on our skin, even inside our body. They enter a bacterium and replicate inside of it, until the bacterium bursts and dies in the process. You can see the potential: breed phages that infect the right bacteria and you’ve solved the problem.

One great benefit of phages is that they target very specific bacteria, so they spare the beneficial bacteria in our body. The question is, where do you get the right phage for an infection? The first successful phage treatment was done in 1919. However, the method was never widely adopted, because breeding the right phages is slow and cumbersome, and when antibiotics were discovered they were just vastly more convenient.

However, with antibiotic resistance on the rise, phage treatments are getting new attention. Researchers now hope that genetic engineering will make it faster and easier to breed the right phages. The first successful treatment with genetically modified phages was reported in 2019 in Nature Medicine by a group of researchers from the United States and the UK. They bred a cocktail of three phages, one of which they found on a rotting eggplant from South Africa.

The group around Dr. Strathdee at the University of California San Diego hopes that one day we will have an open-source library for genetically engineered phages which is accessible to everyone, and she’s currently raising funds for that. Strathdee and her team don’t think that phage therapy will ever replace antibiotics altogether, but that it will be an important contribution for particularly hopeless cases.

Another new method to fight bacteria was proposed in 2019 by researchers from Texas. They have found a way to kill bacteria while they are passive, so while they are not replicating. This can’t be done with normal antibiotics that usually target growth or replication. But the researchers have found substances that open a particularly large channel in the membrane on the surface of the bacterium. The bacterium then basically leaks out and dies. Another good thing about this method is that even if it doesn’t kill a bacterium it can make it easier for antibiotics to enter. They have tested this in a petri dish and seen good results.

To name one final line of research that scientists are pursuing: Several groups are looking for new ways to use antimicrobial peptides. Peptides are part of our innate immune system. They are natural broad spectrum antibiotics and earlier studies have shown that they’re effective even against bacteria that resist antibiotics.

Problem is, peptides break down quickly when they come into contact with bodily fluids, such as blood. But researchers from Italy and Spain have found a way to make peptides more stable by attaching them to nanoparticles that fight off certain enzymes which would otherwise break down the peptides. These peptide nanoparticles can for example be inhaled to treat lung infections. They tested it successfully in mice and rats and published their results in a 2020 paper. And just last year, researchers from Sweden have developed a hydrogel that contains these peptides and that can be put on top of skin wounds.

It is hard to overstate just how dramatically antibiotics have changed our lives. Typhus, tuberculosis, the plague, cholera, leprosy. These are all bacterial infections, and before we had antibiotics they regularly killed people, especially children. During World War I more people died from bacterial infections than from the fighting.

As you have seen, bacterial resistance is a real problem and it’ll probably get worse for some more time. But scientists are on the case, and some recent research looks quite promising.

Saturday, April 09, 2022

Is Nuclear Power Green?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


A lot of people have asked me to do a video about nuclear power. But that turned out to be really difficult. You won’t be surprised to hear that opinions about nuclear power are extremely polarized and every source seems to have an agenda to push. Will nuclear power help us save the environment and ourselves, or is it too dangerous and too expensive? Do thorium reactors or the small modular ones change the outlook? Is nuclear power green? That’s what we’ll talk about today.

I want to do this video a little differently so you know where I’m coming from. I’ll first tell you what I thought about nuclear power before I began working on this video. Then we’ll look at the numbers, and in the end, I’ll tell you if I’ve changed my mind.

When the accident in Chernobyl happened I was 9 years old. I didn’t know anything about nuclear power or radioactivity. But I was really scared because I saw that the adults were scared. We were just told, you can’t see it but it’ll kill you.

Later, when I understood that this had been an unnecessary scare, I was somewhat pissed off at adults in general and my teachers in particular. Yes, radioactive pollution is dangerous, but in contrast to pretty much any other type of pollution it’s easy to measure. That doesn’t make it go away but at least we know if it’s there. Today, I worry much more about pollution from the chemical industry which you won’t find unless you know exactly what you’re looking for and also have a complete chemistry lab in the basement. And I worry about climate change.

So, I’ve been in favor of nuclear power as a replacement for fossil fuels since I was in high school. In 2008, I over-optimistically predicted the return of nuclear power. Then of course in 2011, the Fukushima accident happened, after which the German government decided to phase out nuclear power, but continued digging up coal, buying gas from Russia, and importing nuclear power from France.

However, in all fairness I haven’t looked at the numbers for more than 20 years. So that’s what we’ll do next, and then we’ll talk again later.

Fossil fuels presently make up almost two thirds of global electric power production. Hydropower makes up about 16 percent, and all other renewables together about 10 percent. Power from nuclear fission makes up the rest, also about 10 percent.

Nuclear power is “green” in the sense that it doesn’t directly produce carbon dioxide. I say “directly” because even though the clouds coming out of nuclear power plants are only water vapor, power plants don’t grow on trees. They have to be built from something by someone, and the materials, their transport, and the construction itself have a carbon footprint. But then, so does pretty much everything else. I mean, even breathing has a carbon footprint. So one really has to look at those numbers in comparison.

A good comparison comes from the 2014 IPCC report. This table summarizes several dozens of studies with a minimum, maximum, and median value. All the following numbers are in grams of carbon dioxide per kilowatt hour and they are average values for the entire lifecycle of those technologies, so including the production.

For coal, the median that the IPCC quotes is 820, gas is a bit lower with 490, solar is a factor 10 lower than gas, with about 40. Wind is even better than solar with a median of about 11. And the median for nuclear is 12 grams per kilowatt-hour, so comparable to that of wind, but there is a huge gap to the maximum value which according to some sources is as high as 110, so about twice as high as solar.

An estimate that’s a little bit higher than even the highest value the IPCC quotes comes from the World Information Service on Energy, WISE, which is based in the Netherlands. They calculated that nuclear plants produce 117 grams of carbon dioxide per kilowatt-hour.

It’s not entirely irrelevant to mention that the mission of WISE is to “fight nuclear” according to their own website. That doesn’t make their number wrong, but they clearly have an agenda and may not be the most reliable source. 

But these estimates differ not so much because someone is stupid or lying, at least not always, but because there are uncertainties in these numbers that affect the outcome. That’s things like the quality of uranium resources, how far they need to be transported, different methods of mining or fuel production, and their technological progress, and so on. In the scientific literature, the value that is typically used is somewhat higher than the IPCC median, about 60-70 grams of carbon dioxide per kilowatt-hour. And the numbers for renewables should also be taken with a grain of salt because they need to come with energy storage, which will also have a carbon footprint.

I think the message we can take away here is that either way you look at it, the carbon footprint of nuclear power is dramatically lower than that of fossil fuels, and roughly comparable to some renewables, exact numbers are hard to come by.

So that’s one thing nuclear has going in its favor: it has a small carbon footprint. Another advantage is that compared to wind and solar, it doesn’t require much space. Nuclear power is therefore also “green” in the sense that it doesn’t get in the way of forests or agriculture. And yet another advantage is that it generates power on demand, and not just when the wind blows or the sun shines.

Let us then talk about what is maybe the biggest disadvantage of nuclear power. It’s not renewable. The vast majority of nuclear power plants which are currently in operation work with Uranium 235.

At the moment, we use about 60 thousand tons per year. The world resources are estimated to be about 8 million tons. This means if we were to increase nuclear power production by a factor of ten, then within 15 to 20 years uranium mining would become too expensive to make economic sense.
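Just to see where that timescale comes from, here is the crude back-of-the-envelope version of the argument (my own rounding, not the paper’s calculation):

```python
# Back-of-the-envelope check of that statement (rounded numbers, not a forecast;
# it ignores new discoveries, price effects, and the time to build new plants).
resources_tons = 8e6             # estimated world uranium resources
current_use_tons_per_year = 6e4  # current consumption, about 60 thousand tons/year
scale_up = 10                    # "increase nuclear power production by a factor of ten"

years = resources_tons / (current_use_tons_per_year * scale_up)
print(f"about {years:.0f} years")  # roughly 13 years, same ballpark as the 15-20 quoted
```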

This was pretty much the conclusion of a paper that was published a few months ago by a group of researchers from Austria. They estimate that optimistically nuclear power from uranium-235 would save about 2 percent of global carbon dioxide emissions by 2040. That’s not nothing, but it isn’t going to fix climate change – there just isn’t enough uranium on this planet.

The second big problem with nuclear power is that it’s expensive. A medium sized nuclear power plant currently costs about 5-10 billion US dollars, though large ones can cost up to 20 billion.

Have a look at this figure from the World Nuclear Energy Status Report 2021 (page 293). It shows what’s called the levelized cost of energy in US dollars per megawatt-hour, that’s basically how much it costs to produce power over the entire lifetime of some technology, so not just the running cost but including the construction. As you can see, nuclear is the most expensive. It’s even more expensive than coal, and at the moment roughly 5 times more expensive than solar or wind. If the current trend continues, the gap is going to get even wider.
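For reference, the levelized cost of energy is, roughly speaking, the discounted lifetime cost divided by the discounted lifetime energy output,

$$ \mathrm{LCOE} = \frac{\sum_t \left(I_t + M_t + F_t\right)/(1+r)^t}{\sum_t E_t/(1+r)^t}, $$

where I_t, M_t and F_t are the investment, operation and fuel costs in year t, E_t is the energy produced in that year, and r is the discount rate.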


On top of this, insurance for nuclear power plants is mandatory, the premiums are high, and the plant owners pass those expenses on through the electricity price. So at the moment nuclear power just doesn’t make a lot of economic sense. Of course this might change with new technologies, but before we get to those we have to talk about the biggest problem that nuclear power has. People are afraid of it.

Accidents in nuclear power plants are a nightmare because radioactive contamination can make regions uninhabitable for decades, and tragic accidents like Chernobyl and Fukushima have arguably been bad publicity. However, the data say that nuclear power has historically been much safer than fossil fuels, it’s just that the death toll from fossil fuels is less visible.

In 2013, researchers from the NASA Goddard Institute for Space Studies and Columbia University calculated the fatalities caused by coal, gas and nuclear, and summarized their findings in units of deaths per terawatt-hour. They found that coal kills more than a hundred times more people than nuclear power, the vast majority through air pollution. They also calculated that since the world began using nuclear power instead of coal and gas, nuclear power has prevented more than 1.8 million deaths.

Another study in 2016 found a death rate for nuclear that was even lower, about a factor 5 less. The authors of this paper also compared the risk of nuclear to hydro and wind and found that these renewables actually have a slightly higher death rate, though in terms of economic damage, nuclear is far worse.

I am guessing now you all want to know just how exactly people die from renewables. Well, since you ask. For wind it’s stuff like “a bus collided with a truck transporting a turbine tower”, an aircraft crashing into a wind turbine, or workers falling off the platform of an offshore wind farm. For solar, it’s accidents in manufacturing sites, electric shocks from improper wiring, or falls from roofs.

The number for hydropower is dominated by a single accident when a dam broke in China in 1975. The water flooded several villages and killed more than 170 thousand people.

The Chernobyl accident, in comparison, killed fewer than 40 people directly. The World Health Organization estimates long-term deaths from cancer as a consequence of the Chernobyl accident to be 4000-9000. There is a group of researchers which claims it’s at least a factor 10 higher, but this claim has remained highly controversial.

The number of direct fatalities from the Fukushima accident is zero. One worker died 7 years later from lung cancer, almost certainly a consequence of radiation exposure. About 500 died from the evacuation, mostly elderly and ill people whose care was interrupted. And this number is unlikely to change much in the long run.

According to the WHO, the radiation exposure of the Fukushima accident was low except for the direct vicinity of the power plant which was evacuated. They do not expect the cancer risk for the general population to significantly rise. The tsunami which caused the accident to begin with killed considerably more people, at least 15 thousand.

I don’t want to trivialize accidents in the nuclear industry, of course they are tragic. But there’s no doubt that they pale in comparison to fossil fuels, which cause pollution that, according to some estimates, kills as many as a million people per year. Also, fun fact, coal contains traces of radioactive minerals that are released when you burn it. Indeed, radioactivity levels are typically *higher* near coal plants than near nuclear power plants.

Again, you see, there are some differences in the details but pretty much everyone who has ever seriously looked at the numbers agrees that nuclear is one of the safest power sources we know of.

Okay, so we have seen that the two biggest disadvantages of nuclear power are that it’s not renewable and that it’s expensive. But this is for the conventional nuclear power plants that use uranium-235, which is only 0.7 percent of all the uranium we find on Earth.

Another option is to use fast breeder reactors, which work with the other 99.3 percent of uranium on Earth, that’s the isotope uranium-238.

A fast breeder transmutes uranium-238 into plutonium-239, which can then be used as reactor fuel like uranium-235. And this process continues running with the neutrons that are produced in the reaction itself, so the reactor “breeds” its own fuel, hence the name.
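Schematically, the breeding chain is

$$ {}^{238}\mathrm{U} + n \;\to\; {}^{239}\mathrm{U} \;\xrightarrow{\;\beta^-\;}\; {}^{239}\mathrm{Np} \;\xrightarrow{\;\beta^-\;}\; {}^{239}\mathrm{Pu}, $$

where the two beta decays take a few days in total, and the plutonium-239 is the fissile fuel.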

Fast breeders are not new; they have been used here and there since the 1940s. But they turned out to be expensive, unreliable, and troublesome. The major problems are that they are cooled with sodium, which is very reactive, and that they can’t be shut down as quickly as conventional nuclear power plants. To make a long story short, they didn’t catch on, and I don’t think they ever will.

But technology in the nuclear industry has much advanced in the past decades. The most important innovations are molten salt reactors, thorium reactors, and small modular reactors.

Molten salt reactors work by mixing the fuel into some type of molten salt. The big benefit of doing this is that it’s much safer. That’s partly because molten salt reactors operate at lower pressure, but mostly because the reaction has a “negative temperature coefficient”. That’s a complicated way of saying that the energy-production slows down when the reactor overheats, so you don’t get a runaway effect.

Molten salt reactors have their own problems though. The biggest one is that the molten salt fuel is highly corrosive and quickly degrades the material meant to contain it.  How much of a problem this is in practice is currently unclear.

Molten salt reactors can be run with a number of different fuels, one of them is thorium. Thorium is about 4 times more abundant than uranium, however, fewer resources are known, so in practice this isn't going to make a big difference in the short run.

The real advantage is that these reactors can use essentially all of the thorium, not just a small fraction of it, as is the case with the normal uranium reactors. This means thorium reactors produce more energy from the same amount of fuel and, as a consequence, thorium could last for thousands of years. Thorium is also a waste product of the rare-earth mining industry, so trying to put it to use is a good idea.

However, the problem is still that the technology is expensive. There is currently only one molten salt thorium reactor in operation, and that’s in China. It started operating in September 2021.

It’s just a test facility that will generate only 2 megawatts, but if they are happy with the test the Chinese have plans for a bigger reactor with 373 megawatts for the next decade, though that is still fairly small for a power plant. It’ll be very interesting to see what comes out of this.

And the biggest hope of the nuclear industry is currently small modular reactors. The idea is that instead of building big and expensive power plants, you build reactors that are small enough to be transported. Mass-producing them in a factory could bring down the cost dramatically.

A conventional plant typically generates a few gigawatts of electric power. The small modular reactors are comparable in size to a small house and have an energy output of some tens of megawatts. For comparison, that’s about ten times as much as a wind turbine on a good day. That they are modular means they are designed to work together, so one can build up power plants gradually to the desired capacity.

Several projects for small modular reactors are at an advanced stage in the USA, Russia, China, Canada, the UK, and South Korea. Most of the current projects use uranium as fuel, partly in the molten salt design.

But the big question is, will the economics work out in the end? This isn’t at all clear, because making the reactors smaller may make them cheaper to manufacture, but they’ll also produce less energy during their lifetime. Certainly at this early stage, small modular reactors aren’t any cheaper than big ones.

A cautionary example comes from the American company NuScale. They sit in Utah and have been in business since 2007. They were planning to build twelve small reactors with 60 megawatts each by 2027. Except for being small, they are basically conventional reactors that work with enriched uranium.

Each of those reactors is a big cylinder, about 3 meters in diameter and 20 feet tall. Their original cost estimate was about 4.2 billion dollars. However, last year they announced that they had to revise their estimate to 6.2 billion dollars and said they’d need three years longer.

In terms of cost per energy, that’s even more expensive than conventional nuclear power plants. The project is subsidized by the Department of Energy with 1.4 billion dollars, but several funders backed out after the announcement that the cost had significantly increased.

Ok, so that concludes my rundown of the numbers. Let’s see what we’ve learned.

What speaks in favor of nuclear energy is that it’s climate friendly, has a small land use, and creates power on demand. What speaks against it is that it’s expensive and ultimately not renewable. The disadvantages could be alleviated with new technologies, but it’s unclear whether that will work, and even if it works, it almost certainly won’t have a significant impact on climate change in the next 20 years.

It also speaks against nuclear power that people are afraid of it. Even if these fears are not rational that doesn’t mean they don’t exist. If someone isn’t comfortable near a nuclear power plant, that affects their quality of life, and that can’t just be dismissed.

There are two points I didn’t discuss which you may have expected me to mention. One is nuclear proliferation and the risk posed by nuclear power plants during war times. This is certainly an important factor, but it’s more political than scientific, and that would be an entirely different discussion.  

The other point I didn’t mention is nuclear waste. That’s because I think it’s a red herring which some activist groups are using in an attempt to scare people. As far as I am concerned, burying the stuff in a safe place solves the problem just fine. It’s true that there aren’t any final disposal sites at the moment, but Finland is expected to open one next year and several other countries will follow. And no, provided adequate safety standards, I wouldn’t have a problem with a nuclear waste deposit in my vicinity.

So, what did I learn from this? I learned that nuclear power has become economically even more unappealing than it already was 20 years ago, and it’s not clear this will ever change. Personally I would say that this development can be left to the market. I am not in favor of regulation that makes it even more difficult for us to reduce carbon emissions; to me this just seems insane. In all fairness, it looks like nuclear won’t help much, but then again, every little bit helps.

Having said that, I think part of the reason the topic is so controversial is that what you think is the best strategy depends on local conditions. There is no globally “right” decision. If your country has abundant solar and wind power, it might not make sense to invest in nuclear. Though you might want to keep in mind that climate change can affect wind and precipitation patterns in the long run.

If your country is at a high risk of earthquakes, then maybe nuclear power just poses too high a risk. If on the other hand renewables are unreliable in your region of the world, you don’t have a lot of space, and basically never see earthquakes, nuclear power might make a lot of sense.

In the end I am afraid my answer to the question “Is nuclear power green?” is “It’s complicated.”


Saturday, April 02, 2022

How close is wireless power technology?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


“Battery’s almost empty. Connect to a power source.” Why do we still not have wireless power? That’s what we’ll talk about today.

Wireless power would have many benefits. First, it’d get rid of all those different plugs. We’d also no longer have to crawl around on airport floors and fight over the only two available outlets. And if you don’t have to plug and unplug your things constantly, that’s one piece less that can break. It could power devices that can’t easily be reached, such as medical implants or robots used in contaminated areas. It’d also be much easier to make devices waterproof, though I’m afraid even with wireless power it’ll still be difficult to dry your hair underwater.

So why don’t we have wireless power already? The problem is fairly basic. Power is energy transferred per time, and energy travels locally. This means if you want to get energy from one place to another, it has to travel at least one path between these two places. If you have a cable, then the energy goes along that path, except for some unavoidable losses mostly due to heat. If you don’t have a cable, well, then you have to find some other way to get the energy where you want it to go. That leaves you with two options.

You either generously splatter energy into all directions and hope that some of it arrives at the intended destination. That’s like filling up your car tank not with a hose, but instead by parking it next to a gasoline sprinkler. Sure, it’ll work if you wait long enough, but it’s extremely wasteful.

The other thing you can do is to focus the energy at a particular target. This will be much more efficient, but now you have a focused beam that can deposit energy into anything that gets in the way, like walls, trees, or humans. That isn’t good. Maybe we could put some kind of shield around that energy so that nothing gets in the way. Like – a cable?

But, but, you may say, we already have wireless chargers. Indeed, wireless charging exists, but it currently works only over short distances. This is called “near-field” wireless power transfer. In these systems the power does not travel freely through space but can be extracted near a source, though the power you can extract that way drops quickly with the distance to the source.

There are a variety of ways this can be done with alternating currents, that is the stuff that comes out of your wall outlet. The simplest and also the oldest method is inductive power transfer.

Inductive power transfer uses two coils – one transmitter and one receiver – placed very close to each other, usually millimeters to centimeters apart. If an alternating current goes through the transmitter coil, it creates an oscillating magnetic field. The same field then also passes through the receiving coil, where it induces another alternating current. So you’ve moved some power from the one coil to the other.

This is currently the most common method of wireless power transfer. It’s used for example to recharge electric toothbrushes and shavers and things like that. With this method, the power transferred increases with frequency but decreases with distance between the coils. That’s because the larger the distance, the less of the magnetic field from the transmitter coil reaches the receiving coil. In practice this means that the two coils have to be right next to each other.
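The frequency and distance dependence follow directly from Faraday’s law of induction. If the transmitter current oscillates as I_1(t) = I_0 \sin(\omega t), the voltage induced in the receiver coil is

$$ V_2(t) = M\,\frac{dI_1}{dt} = M\,\omega\,I_0 \cos(\omega t), $$

where M is the mutual inductance of the two coils. The induced voltage, and with it the power you can transfer, grows with the frequency \omega, while M falls off rapidly as you pull the coils apart.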

There are several variations of this method, for example magnetic resonant power transfer. In this case you add capacitors to both circuits. They then have a preferred resonance frequency at which the power transfer is more efficient. This method also works over somewhat larger distances than the inductive transfer, up to a few times the size of the coils. This method is often used for wireless phone chargers.
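The resonance frequency is just the usual one of an LC circuit,

$$ f_0 = \frac{1}{2\pi\sqrt{LC}}, $$

and the transfer works best when transmitter and receiver are tuned to the same f_0.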

But let’s be honest this isn’t what we’re looking for. We want our phone to magically recharge while we’re queuing for coffee, right? And for this we need far-field wireless power transfer.

 
The most obvious way to get this done is by sending energy with electromagnetic waves that travel freely. It’s like radio, except you don’t want to transmit the sound signal, but the energy.

This too is an old technology. Already in 1964, the American engineer William Brown did a series of experiments in which he powered a small helicopter for hours using microwaves.

Microwaves, guess what, are also used in microwave ovens. Those ovens are shielded for a reason, the reason being that you don’t want this energy to bounce around in your kitchen. Before microwave ovens became consumer products, they were used in scientific laboratories and often didn’t have shields. And that had some interesting consequences, which I learned from Tom Scott’s video about unfreezing hamsters. Yes, hamsters. Here’s how James Lovelock recalls these first microwave ovens.

"In the course of the experiments, while we were building it, the thing was running open and the radiation was bouncing all around the room. And the light bulbs would light up without warning. The filaments just had the same wavelength as the radiation and it would absorb it and light up. And the pound notes were the funniest ones, they’d catch fire because the metal strip inside was just about the wavelength of the magnetron."

If you haven’t watched Tom’s video, you should, I swear you won’t regret it. But don’t forget to come back. 

So, yes, you can use electromagnetic radiation for wireless power transfer, but most of us don’t want things in our vicinity to catch fire every now and then. This means this technology can really only be used at small power. But at small power it is being used already.

Samsung, for example, has produced a TV remote that is powered by energy extracted from your home wireless network. Another example that you have probably all seen are RFID security tags. The abbreviation stands for radio frequency identification. They are powered by another device that serves as a reader. The power transferred in those examples is typically a microwatt. This isn’t remotely enough to charge your phone, which needs some million times more.

At this point you may be wondering, why not just extract energy from all that radiation which is in the air already? Indeed, there’s a group at Georgia Tech which has developed a receiver that grabs energy from the 5G network. Again, that’s not much, some microwatts, but at distances of up to 180 meters from a network antenna.

If you want to transfer more power without setting things on fire, you have to find a way to focus power transfer smartly. This isn’t all that easy with electromagnetic waves. They tend to spread out into many directions, and that reduces the amount of energy that arrives at the target. You could use lasers, but again there’s the issue that things might get in the way. Also, lasers are themselves not exactly power efficient.

However, the intensity of the radiation drops with 1/R^2 in the far field only if you’re far away from all antennas, not if you are close by, or actually inside, an array of antennas. Then the intensity may drop far less.
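For an idealized isotropic source of power P, the far-field intensity at distance R is simply the power spread over a sphere,

$$ I(R) = \frac{P}{4\pi R^2}, $$

which is why unfocused far-field transfer wastes almost all of the power. An array of many small antennas, by contrast, can be driven so that the emitted waves interfere constructively at a chosen spot, and close to or inside the array the drop-off can be much gentler.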

The start-up Ossia makes use of this. Their system is called Cota. It’s an array about 60 by 60 centimeters in size with a few hundred tiny antennas. They emit radiation in the frequency range of several gigahertz, so that’s somewhat higher than the frequency used in a typical home wireless network. The system can detect the position of your phone or other devices and then focus a signal on only that position, instead of wasting energy by emitting it everywhere.

The company claims that their system should be able to deliver 2 to 3 watts at one meter distance, 1 watt at 2 meters, and 10 to 50 milliwatts across 10 meters.

For comparison, a USB-3 port, which you most likely currently charge your phone with, delivers up to 4.5 Watts. So this isn’t so bad, but still really only good to power small devices like maybe alarm clocks and other things you don’t use. Also, the machine consumes between 40 and 60 Watts, which is roughly half the power consumption of a typical fridge, so not exactly energy efficient.
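Taking the company’s own numbers at face value, the overall efficiency is easy to estimate (my rounding):

```python
# Rough efficiency estimate from the figures quoted above (company numbers,
# so take them with a grain of salt).
delivered_watts = 2.5   # mid-range of "2 to 3 Watts at one meter"
consumed_watts = 50.0   # mid-range of "between 40 and 60 Watts"

efficiency = delivered_watts / consumed_watts
print(f"{efficiency:.0%}")  # about 5 percent at one meter, and less farther away
```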

This technology was approved for sale and use in the USA already in 2019, and a few months ago it was also approved in the UK and the EU. The company says that Walmart is piloting the system in some distribution centers for inventory tracking and asset management, and Toyota is testing its viability to replace some wiring in cars, where it could power sensors and make it easier to replace them. These are all cases where you need really small amounts of power.

There are now a number of other companies with similar products, for example Energous, whose system is called WattUp, and GuRu. The GuRu system operates at somewhat higher frequencies, at 24 GHz, which they claim makes it easier to miniaturize the device.

I had a run-in with some guys who worked on wireless power about 15 years ago because they claimed in their product description that the electromagnetic energy “tunnels” from one place to another. If you’ve followed my blog for a really, really long time, you may remember this.

The reason I picked on this “tunneling” description is that I was afraid it may raise the impression you can somehow avoid the need for energy to go from one place to another with quantum something. This is not the case. Energy doesn’t just jump from one place to another, and quantum mechanics doesn’t change anything about this. For all I know that start-up no longer exists.

But referring to quantum something to attract customers hasn’t gone entirely out of fashion. There is a new wireless power company now, technovatar, which says on their website that they use neither electric nor magnetic fields but instead “transfer energy through energy quantization” which is “based on the creation of energy structures in space”. I have no idea what that means. They also claim their system does not use “any of existing methods of energy transfer”.  So I guess this means they sell a non-existing method of energy transfer.

Some of you may also remember that a few years ago there was a lot of buzz around wireless power transfer by ultrasound. The idea is that you convert electric energy into sound, transmit that sound, and then convert it back. You *can* do that and it *does* work, but of course this too doesn’t solve the problem that the energy has to somehow get from one place to another, and if you get in the way, some of that energy might be deposited into your body.

This company was called “uBeam” and at the time it made quite a few headlines. But in 2016, the entire engineering team left the company, and a former employee wrote a series of blog posts explaining that the technology did not work. He wrote: “While in theory [uBeam] may be possible in limited cases, the safety, efficiency, and economics of it mean it is not even remotely practical.” The company still exists, though it’s been renamed to SonicEnergy. It seems like wireless power by ultrasound isn’t going to become reality any time soon, maybe not ever.

There are however several technologies under development that might soon improve the efficiency of wireless power transfer and make the transfer more stable.

For example, in 2018 a group of researchers published a paper in PRL in which they proposed a way to improve the absorption of wireless power by a method of self-tuning called “coherently enhanced wireless power transfer”. It’s basically a feedback loop that allows the system to adapt to reflections in the environment. A further step forward was made in 2020 by researchers from Stanford University. They developed a circuit that can adapt wireless power transfer to a moving source in less than a millisecond. This drastically reduces loss and improves efficiency.

And near-field wireless power systems can be improved with metamaterials. Metamaterials are engineered to have desired responses to electromagnetic fields, sometimes also in the optical part of the spectrum. They can therefore much enhance the efficiency of the system. There’s a lot more to say about metamaterials. Let me know in the comments if you would be interested in a video on metamaterials specifically.

It looks like wireless power technology slowly gets going, but there’s already a new problem on the horizon. Wireless power might do away with the plugs, but the receiver has to be embedded inside the device. And unless everybody agrees on some standard receiver, which seems incredibly unlikely going by past experience, we’re going to see a lot of devices that won’t work in a room with another sender.

Personally I clearly think the way to go is tiny flying robots that deliver charge to your phone by crawling inside. Now I just need a catchy name and a website and soon I’ll be rich and famous. Until this happens, please don’t forget to like this video and subscribe.