
Saturday, October 29, 2022

What Do Longtermists Want?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Have you ever put away a bag of chips because they say it isn’t healthy? That makes sense. Have you ever put away a bag of chips because you want to increase your chances of having more children so we can populate the entire galaxy in a billion years? That makes… That makes you a longtermist. Longtermism is a currently popular philosophy among rich people like Elon Musk, Peter Thiel, and Jaan Tallinn. What do they believe and how crazy is it? That’s what we’ll talk about today.

The first time I heard of longtermism I thought it was about terms of agreement that get longer and longer. But no. Longtermism is the philosophical idea that the long-term future of humanity is way more important than the present and that those alive today, so you, presumably, should make sacrifices for the good of all the generations to come.

Longtermism has its roots in the effective altruism movement, whose followers try to be smart about donating money so that it has the biggest impact, for example by telling everyone how smart they are about donating money. Longtermists are concerned with what our future will look like some billion years from now or longer. Their goal is to make sure that we don’t go extinct. So stop being selfish, put away that junk food and make babies.

The key argument of longtermists is that our planet will remain habitable for a few billion years, which means that most people who’ll ever be alive are yet to be born.

Here’s a visual illustration of this. Each grain of sand in this hourglass represents 10 million people. The red grains are those who lived in the past, about 110 billion. The green ones are those alive today, that’s about 8 billion more. But that is just a tiny part of all the lives that are yet to come.

A conservative estimate is to assume that our planet will be populated by at least a billion people for at least a billion years, so that’s a billion billion human life years. With today’s typical life span of 100 years, that’d be about 10 to the 16 human lives. If we go on to populate the galaxy or maybe even other galaxies, this number explodes into billions and billions and billions.
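
If you want to check those powers of ten yourself, here is the back-of-the-envelope arithmetic as a few lines of Python. The inputs are just the assumptions stated above, and the variable names are mine.

    people_alive_at_any_time = 1e9    # at least a billion people at any given time
    habitable_years = 1e9             # for at least a billion years
    years_per_life = 100              # today's typical life span, as stated above

    human_life_years = people_alive_at_any_time * habitable_years   # 1e18 person-years
    future_lives = human_life_years / years_per_life                # 1e16 lives
    print(f"{future_lives:.0e}")                                     # prints 1e+16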

Unless. We go extinct. Therefore, the first and foremost priority of longtermists is to minimize “existential risks.” This includes events that could lead to human extinction, like an asteroid hitting our planet, a nuclear world war, or stuffing the trash so tightly into the bin that it collapses to a black hole. Unlike effective altruists, longtermists don’t really care about famines or floods because those won’t lead to extinction.

One person who has been pushing longtermism is the philosopher Nick Bostrom. Yes, that’s the same Bostrom who believes we live in a computer simulation because his maths told him so. The first time I heard him give a talk was in 2008 and he was discussing the existential risk that the programmer might pull the plug on that simulation we supposedly live in. In 2009 he wrote a paper arguing:

“A non-existential disaster causing the breakdown of global civilization is, from the perspective of humanity as a whole, a potentially recoverable setback: a giant massacre for man, a small misstep for mankind”

Yeah, breakdown of global civilization is exactly what I would call a small misstep. But Bostrom wasn’t done. By 2013 he was calculating the value of human lives: “We find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives [in the present]. One might consequently argue that even the tiniest reduction of existential risk has an expected value greater than that of the definite provision of any ‘ordinary’ good, such as the direct benefit of saving 1 billion lives ”
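
To unpack the powers of ten in that quote, here is a rough reconstruction of the arithmetic in Python. The variable names are mine, and the implied number of future lives is simply what you get by taking the quote at face value, not a figure from Bostrom’s paper.

    risk_reduction = 1e-9 * 1e-9 * 1e-2     # one billionth of one billionth of one percentage point
    value_in_lives = 100e9 * 1e9            # "a hundred billion times as much as a billion human lives"

    # If extinction wipes out N future lives, reducing the risk by a fraction f is "worth" f * N lives,
    # so taking the quote at face value, it assumes about this many future lives:
    implied_future_lives = value_in_lives / risk_reduction
    print(f"{implied_future_lives:.0e}")    # 1e+40, vastly more than the 1e16 from the hourglass estimate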

Hey, maths doesn’t lie, so I guess that means it’s okay to sacrifice a billion people or so. Unless possibly you’re one of them. Which Bostrom probably isn’t particularly worried about because he is now director of the Future of Humanity Institute in Oxford, where he makes a living from multiplying powers of ten. But I don’t want to be unfair, Bostrom’s magnificent paper also has a figure to support his argument that I don’t want to withhold from you, here we go, I hope that explains it all.

By the way, this nice graphic we saw earlier comes from Our World in Data which is also located in Oxford. Certainly complete coincidence. Another person who has been promoting longtermism is William MacAskill. He is a professor for philosophy at, guess what, the University of Oxford. MacAskill recently published a book titled “What We Owe The Future”.

I didn’t read the book because if the future thinks I owe it, I’ll wait until it sends an invoice. But I did read a paper that MacAskill wrote in 2019 with his colleague Hilary Greaves titled “The case for strong longtermism”. Hilary Greaves is a philosopher and director of the Global Priorities Institute which is located in, surprise, Oxford. In their paper they discuss a case of longtermism in which decision makers choose “the option whose effects on the very long-run future are best,” while ignoring the short term. In their own words:

“The idea is then that for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects.”

So in the next 100 years, anything goes so long as we don’t go extinct. Interestingly enough, the above passage was later removed from their paper and can no longer be found in the 2021 version.

In case you think this is an exclusively Oxford endeavour, the Americans have a similar think tank in Cambridge, Massachusetts, called The Future of Life Institute. It’s supported among others by billionaires Peter Thiel, Elon Musk, and Jaan Tallinn, who have expressed their sympathy for longtermist thinking. Musk for example recently commented that MacAskill’s book “is a close match for [his] philosophy”. So in a nutshell, longtermists say that the conditions we currently live in don’t play a big role and a few million deaths are acceptable, so long as we don’t go extinct.

Not everyone is a fan of longtermism. I can’t think of a reason why. I mean, the last time a self-declared intellectual elite said it’s okay to sacrifice a few million people for the greater good, the only thing that happened was a world war, just a “small misstep for mankind.”

But some people have criticized longtermists. For example, the Australian philosopher Peter Singer. He is one of the founders of the effective altruism movement, and he isn’t pleased that his followers are flocking over to longtermism. In a 2015 book titled The Most Good You Can Do, he writes:

“To refer to donating to help the global poor or reduce animal suffering as a “feel-good project” on which resources are “frittered away” is harsh language. It no doubt reflects Bostrom’s frustration that existential risk reduction is not receiving the attention it should have, on the basis of its expected utility. Using such language is nevertheless likely to be counterproductive. We need to encourage more people to be effective altruists, and causes like helping the global poor are more likely to draw people toward thinking and acting as effective altruists than the cause of reducing existential risk.”

Basically Singer wants Bostrom and his like to shut up because he’s afraid people will just use longtermism as an excuse to stop donating to Africa without any benefit to existential risk reduction. And that might well be true, but it’s not a particularly convincing argument if the people you’re dealing with have a net worth of several hundred billion dollars. Or if their “expected utility” of “existential risk reduction” is that their institute gets more money.

Singer’s second argument is that it’s kind of tragic if people die. He writes that longtermism “overlooks what is really so tragic about premature death: that it cuts short the lives of specific living persons whose plans and goals are thwarted.”

No shit. But then he goes on to make an important point: “just how bad the extinction of intelligent life on our planet would be depends crucially on how we value lives that have not yet begun and perhaps never will begin.” Yes, indeed, the entire argument for longtermism depends crucially on how much value you put on future lives. I’ll say more about this in a minute, but first let’s look at some other criticism.

The cognitive scientist Steven Pinker, after reading MacAskill’s What We Owe The Future, shared a similar reaction on Twitter in which he complained about “Certainty about matters on which we’re ignorant, since the future is a garden of exponentially forking paths; stipulating correct answers to unsolvable philosophical conundrums [and] blithe confidence in tech advances played out in the imagination that may never happen.”

The media also doesn’t take kindly to longtermism. Some, like Singer, complain that longtermism draws followers away from the effective altruism movement. Others argue that the technocratic vision of longtermists is also anti-democratic. For example, Time Magazine wrote that Elon Musk has “sold the fantasy that faith in the combined power of technology and the market could change the world without needing a role for the government”.

Christine Emba, in an opinion piece for the Washington Post, argued that “the turn to longtermism appears to be a projection of a hubris common to those in tech and finance, based on an unwarranted confidence in its adherents’ ability to predict the future and shape it to their liking” and that “longtermism seems tailor-made to allow tech, finance and philosophy elites to indulge their anti-humanistic tendencies while patting themselves on the back for their intelligence and superior IQs. The future becomes a clean slate onto which longtermists can project their moral certitude and pursue their techno-utopian fantasies, while flattering themselves that they are still “doing good.””

Okay, so now that we have seen what either side says, what are we to make of this?

The logic of longtermists hinges on the question of what the value of a life in the future is compared to ours, while factoring in the uncertainty of this estimate. There are two elements which go into this evaluation. One is an uncertainty estimate for the future projection. The second is a moral value: how much future lives matter to you compared to ours. This moral value is not something you can calculate. That’s why longtermism is a philosophical stance, not a scientific one. Longtermists try to sweep this under the rug by blinding the reader with numbers that look kind of sciencey.

To see how difficult these arguments are, it’s useful to look at a thought experiment known as Pascal’s mugging. Imagine you’re in a dark alley. A stranger steps in front of you and says “Excuse me, I’ve forgotten my knife but I’m a mugger, so please give me your wallet.” Do you give him your money? Probably not.

But then he offers to pay back double the money in your wallet next month. Do you give him your money? Hell no, he’s almost certainly lying. But what if he offers a hundred times more? Or a million times? Going by economic logic, at some point the offer becomes worth the risk of losing your money, because you can’t be entirely sure he’s lying. Say you consider the chances of him being honest 1 in 10,000. If he offered to return 100 thousand times the amount of money in your wallet, the expected return would be larger than the expected loss.
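
Here is that expected-value arithmetic spelled out as a short Python sketch. The amount in the wallet is made up, and it cancels out of the comparison anyway.

    p_honest = 1 / 10_000          # your estimate that the mugger will actually pay
    payout_multiplier = 100_000    # he promises to return 100,000 times what's in your wallet
    wallet = 50.0                  # dollars; made up, and it drops out of the comparison

    expected_gain = p_honest * payout_multiplier * wallet   # 500.0 dollars
    expected_loss = (1 - p_honest) * wallet                 # about 50 dollars
    print(expected_gain > expected_loss)                    # True: "economic logic" says hand over the wallet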

But most people wouldn’t use that logic. They wouldn’t give the guy their money no matter what he promises. If you disagree, I have a friend who is a prince in Nigeria, if you send him 100 dollars, he’ll send back a billion, just leave your email in the comments and we’ll get in touch.

The point of this thought experiment is that there’s a second logical way to react to the mugger. Rather than calculating the expected wins and losses, you note that if this logic makes you hand over your wallet for some promised amount, then anyone can use the same strategy to take literally everything from you. Because as long as your risk assessment is finite, there’s always some promise big enough to make the deal look worth the risk. But then you’d lose all your money and property and quite possibly also your life, just because someone made a big enough promise. This doesn’t make any sense, so it’s reasonable to refuse to give money to the mugger. I’m sure you’re glad to hear that.

What’s the relation to longtermism? In both cases the problem is how to assign a probability to unlikely future events. For Pascal’s mugger that’s the unlikely event that the mugger will actually do what he promised. For longtermism the unlikely events are the existential threats. In both cases our intuitive reaction is to entirely disregard them, because if we didn’t, the logical conclusion seems to be that we’d have to spend as much as we can on these unlikely events about which we know the least. And this is basically why longtermists think people who are currently alive are expendable.

However, when you’re arguing about the value of human lives you are inevitably making a moral argument that can’t be derived from logic alone. There’s nothing irrational about saying you don’t care about starving children in Africa. There’s also nothing irrational about saying you don’t care about people who may or may not live on Mars in a billion years. It’s a question of what your moral values are.

Personally I think it’s good to have longterm strategies. Not just for the next 10 or 20 years. But also for the next 10 thousand or 10 billion years. So I really appreciate the longtermists’ focus on the prevention of existential risks. However, I also think they underestimate just how much technological progress depends on the reliability and sustainability of our current political, economic, and ecological systems. Progress needs ideas, and ideas come from brains which have to be fed both with food and with knowledge. So you know what, I would say, grab a bag of chips and watch a few more videos.

Saturday, October 22, 2022

What If the Effect Comes Before the Cause?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


The cause comes before the effect. And I’m glad it does because it’d be awkward if my emails arrived before I’d written them. If that ever happens, it’ll be a case of “retrocausality.” What does that mean? What’s it got to do with quantum mechanics and what is the “transactional interpretation”? That’s what we’ll talk about today.

Causality is a relation between events in space and time, so I’ll be using space-time diagrams again. In such a diagram, the vertical axis is time and the horizontal axis is one dimension of space. Anything that moves with constant velocity is a straight line at some angle. By convention, a 45 degree angle depicts the speed of light.

According to Einstein, yes that guy again, the speed of light is an upper limit for the transmission of information. This means if you take any one point in space-time, then you can only send or receive information from points within cones of less than 45 degrees to the vertical through the point. The boundaries of those areas are called the forward light cone and the backward light cone.

Every philosopher in existence has had something to say about causality, so there are many different definitions, but luckily today we’ll only need two. The first one is space-time causality. Suppose you have two events that are causally related, then each must be inside the other’s light cone, and the one in the past is the cause. It’s as simple as that. This is the notion of causality that’s used in General Relativity. One direction is forward, that’s the future, one direction is backward, that’s the past.
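
In units where the speed of light is one, this notion of causality boils down to a one-line check. Here is a small Python sketch with two made-up example events; it is only meant to illustrate the definition.

    # Space-time causality in units where the speed of light is 1: two events can be
    # causally related only if each lies inside the other's light cone.
    def inside_light_cone(event_a, event_b):
        """Events are (t, x) pairs; True if a signal at or below light speed can connect them."""
        (t_a, x_a), (t_b, x_b) = event_a, event_b
        return abs(x_b - x_a) <= abs(t_b - t_a)

    print(inside_light_cone((0, 0), (3, 1)))   # True: causally connectable; the earlier event is the cause
    print(inside_light_cone((0, 0), (1, 3)))   # False: connecting them would need faster-than-light signalling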

But it turns out that not all space-times allow you to tell apart past from future. This is much like on a Moebius strip you can’t tell the front from the back because they’re the same! In some space-times you can’t tell the past from the future because they’re the same.

Normally this doesn’t happen because you can’t turn around in time. If you wanted to do that, you’d have to go faster than light. But some space-times allow you to go back in time and visit your own past without moving faster than light. The path you’d travel on is called a closed time-like curve. That it’s time-like means you can travel along it below the speed of light, and that it’s closed means it’s a loop.

The simplest example of this is a space-time with a wormhole. Let’s say if you enter the wormhole it transports you from this point A to this point B. From where you started, the wormhole entrance is in your future. But it’s also in your past.

This is weird and it creates some causality problems that we’ll talk about in a bit. However, for all we know, closed time-like curves don’t exist in our space-time, which is a little bit disappointing because I know you really want to go back and revisit all your Latin exams. This is why I first want to talk about another notion of causality which makes going back in time a little easier. It’s called “interventionist causality”. It’s all about what you can “intervene” with, and it works without a time order.

To understand interventionist causality you ask which event depends on another one. Think back to the example of writing an email and someone receiving it. If I intervene with you writing the email, for example by distracting you with jokes about retrocausality, then the other person won’t receive it. So neither of the two events happens. But if I intervene with them receiving the email, for example by boring them to death with jokes about retrocausality, you’ll still write the email. According to interventionist causality, the event that you can intervene with to stop both from happening is the cause. In general you don’t have to actually prevent an event from happening, you look at what affects its probability.
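
To make the asymmetry explicit, here is a toy Python simulation of the email example. The probabilities are invented for illustration and nothing hangs on their exact values.

    import random

    # Toy model of the email example: writing causes receiving. All probabilities are invented.
    def simulate(do_write=None, do_receive=None, n=100_000):
        wrote = received = 0
        for _ in range(n):
            write = (random.random() < 0.5) if do_write is None else do_write
            receive = (write and random.random() < 0.9) if do_receive is None else do_receive
            wrote += write
            received += receive
        return wrote / n, received / n

    print(simulate())                  # about (0.5, 0.45): the undisturbed correlation
    print(simulate(do_write=False))    # (0.0, 0.0): intervening on the writing also stops the receiving
    print(simulate(do_receive=False))  # about (0.5, 0.0): intervening on the receiving leaves the writing unchanged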

Normally the causal order you get from the interventionist approach agrees with the order you get from the space-time approach. So then why use it at all? It’s because in practice we often have correlations in data, but we don’t know the time order, so we need another method to infer causality. We encountered many examples of this in our recent video on obesity. What came first, the obesity or the change in the microbiome? Interventionist causality is really popular in the life sciences, where longitudinal studies are rare, because it gives you an alternative way to analyze data to infer causality.

But back to physics. This notion of interventionist causality is implicitly based on entropy increase. Let me illustrate that with another example. Suppose you flip a switch and the light turns on. If I stop you from flipping the switch, the light doesn’t turn on. But if I stop the light from turning on that doesn’t stop you from flipping the switch. Interventionist causality then tells us that flipping the switch is the cause and the light turning on the effect.

However, from a purely mathematical perspective, there exists some configuration of atoms and photons in which the light stays off in just exactly the right way so that you don’t flip the switch. But such an “intervention” would have to place photons so that they go back into the bulb and the electric signal back into the cable and all the neural signals in your brain go in reverse. And this may be mathematically possible, but it would require an enormous amount of entropy decrease, so it’s not practically possible.

This is why the two notions of causality usually agree. Forward in space-time is the same direction as the arrow of time from entropy increase. But what if that wasn’t the case? What if there were places in the universe where the arrow of time pointed in the other direction than it does here? Then you could have effects coming before their causes, even without wormholes or some other weird space-time geometry. You’d have retrocausality.

As you have probably noticed, emails don’t arrive before you’ve written them, and lights don’t normally cause people to flip switches. So, if retrocausality exists, it’s subtle or it’s rare or it’s elsewhere.

In addition, most physicists don’t like retrocausality because going back in time can create inconsistencies, that is, situations where something both happens and doesn’t happen. The common example is the grandfather paradox, in which you go back in time and kill your own grandfather, accidentally we hope, so you are never born and can’t go back in time to kill him. I guess we could call that a retrocasualty.

The movie industry deals with those causal paradoxes in one of three ways. The first one is that if you go back in time, you don’t go back into your own time, but into a parallel universe which has a similar but slightly different history. Then there’s no inconsistency, but you have the problem that you don’t know how to get back to where you came from. This happens for example in the movie “The Butterfly Effect” in which the protagonist repeatedly travels to the past only to end up in a future that is less and less like what he was hoping to achieve.

Another way of dealing with time-travel paradoxes is that you allow temporary inconsistencies, they just have to be fixed so that everything works out in the end. An example of this is Back to the Future, where Marty accidentally prevents his parents from meeting. He has to somehow fix that issue before he can go back into his own future.

These two ways of dealing with time travel inconsistencies are good for story-telling, but they’re hard to make sense of scientifically. We don’t have any theoretical framework that includes multiple pasts that may join back to the same future.

The option that’s the easiest to make sense of scientifically is to just pick a story that’s consistent in the first place. An example of this is The Time Traveler’s Wife, in which the time traveler meets his future wife for the first time when she is a child but he is already an adult, and then he meets her again when they’re both about the same age. It makes for a really depressing story though.

But even if time travel is consistent, it can still have funny consequences.

Imagine for example you open your microwave and find a notebook in it. The notebook contains instructions for how to turn your microwave into a time machine. But it takes you ten years to figure out how to make it work, and by the time you’re finished the notebook is really worn out. So, you copy its content into a new notebook, put it into the microwave, and send it back in time to your younger self. Where did the notebook come from?

It's often called the “bootstrap paradox” but there’s nothing paradoxical about it in the sense that nothing is inconsistent. We’re just surprised there was nothing before the appearance of the notebook that could have given rise to it, but it can have consequences later on. For starters, you’ll have to buy a new microwave. This means that in the presence of causal loops the past no longer determines the future.

Hmm, indeterminism. Where have we heard that before? In quantum mechanics, right! Could retrocausality have something to do with quantum mechanics?

Indeed, that retrocausality could explain the seemingly strange features of quantum mechanics was proposed by John Cramer in the 1980s. It’s called the Transactional Interpretation. It was further developed by Ruth Kastner but Cramer seems to not be particularly enchanted by Kastner’s version. In a 2015 paper he called it “not incorrect, but we consider it to be unnecessarily abstract.” That’s academia. Nothing quite like being dissed in the 1st person plural.

Cramer’s idea goes back to Wheeler and Feynman, who were trying to find a new way to think about electrodynamics. Suppose you have light going from a sender to a receiver. If we draw this into a space-time diagram, we just get this line at a 45 degree angle from the event of emission to the event of absorption.

Those waves of the light oscillate in the direction of the two dimensions that we didn’t draw. That’s because I can’t afford a graphic designer who works in 4 dimensions, so we’ll just draw another graph down here that shows the phase of the wave as a function of the coordinate. This is how we normally think of light being emitted and absorbed.

But in this case we have to tell the emitter what is the forward direction of time. Wheeler and Feynman didn’t like this. They wanted a version that would treat the future and past the same way. So, they said, suppose the wave that comes from the emitter actually goes into both directions in time but the wave that goes backward in time has the opposite phase. When it arrives at the absorber, it sends back an answer wave. In the range between the emitter and absorber, the answer wave has the same phase as the one that came from the emitter. But it has the opposite phase going forward in time. Because of this, there’s constructive interference between the event of emission and that of absorption, but destructive interference before the emission and after the absorption.
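
If it helps, here is a deliberately cartoonish Python sketch of that bookkeeping, which just adds up the two half-amplitude contributions region by region. It is not the actual field theory, only the interference pattern the argument relies on, and the numbers are schematic.

    import numpy as np

    # Cartoon of the Wheeler-Feynman bookkeeping along the light ray, not the actual field equations.
    # s < 0: before the emission, 0 <= s <= 1: between emitter and absorber, s > 1: after the absorption.
    s = np.linspace(-1, 2, 7)
    emitter_wave   = 0.5 * np.ones_like(s)                     # half-amplitude wave from the emitter
    absorber_reply = np.where((s >= 0) & (s <= 1), 0.5, -0.5)  # same phase in between, opposite phase outside

    print(emitter_wave + absorber_reply)   # 1 between emission and absorption, 0 before and after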

The result looks exactly the same as the normal version of electrodynamics where the wave just starts at the emitter and ends at the absorber. Indeed, it turned out that Wheeler and Feynman’s reinterpretation of electrodynamics was identical to the normal version and they didn’t pursue it any further. However, I already want to draw your attention to an odd feature of this interpretation. It’s that it suggests a second notion of time which doesn’t exist in the physics.

When we say something like “when the wave from the emitter arrives at the absorber, the absorber returns a wave,” that doesn’t play out in time, because time is the axis on this diagram. If you illustrate the physical process, then both the emission and the absorption are in this diagram in the final version, period. They don’t get drawn into it one after the other; that would require a second notion of time.

That said, let’s look at Cramer’s Transactional Interpretation. In this case, we use wave-functions instead of electromagnetic waves, and there isn’t one absorber, but several different ones. The several different absorbers are different possible measurement outcomes.

Suppose for example you emit a single quantum of light, a photon, from a source. You know where the photon came from but you don’t know where it’s going. That’s not because the photon has lost its internet connection and now can’t find a tube entrance, it’s because of the uncertainty principle. This means that its wave-function spreads in all directions. If you then measure the photon at one particular place, the wave-function instantaneously collapses, everywhere. This brings up the question: How did the wave-function on one side know about the measurement on the other side? That’s what Einstein referred to as “spooky action at a distance,” which I talked about in my earlier video.

Let us draw this into our space-time diagram. We have only one direction of space, so the photon wave-function goes left and right with probability ½. If you measure it on one side, say the right side, the probability there jumps to 1 and that on the other side to 0.
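
In vector notation, that textbook collapse is just a projection followed by renormalization. Here is a minimal Python sketch of it, so you can see exactly which step the transactional interpretation wants to reinterpret.

    import numpy as np

    # The photon's state over the two directions |left> and |right>, each with probability 1/2:
    psi = np.array([1.0, 1.0]) / np.sqrt(2)
    print(np.abs(psi)**2)                    # [0.5 0.5]

    # Measurement finds it on the right: project onto |right> and renormalize.
    projected = psi * np.array([0.0, 1.0])
    psi_after = projected / np.linalg.norm(projected)
    print(np.abs(psi_after)**2)              # [0. 1.] -- the collapse that Cramer's picture reinterprets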

Cramer’s transactional interpretation now says that this isn’t what happens. Instead, what happens is this. The source sends out an offer wave, both forward and backward in time. In the forward direction, that approaches the detectors. Again down here we have drawn the phase of that wave. It’s now a probability amplitude rather than the amplitude of an electromagnetic field.

When the offer wave arrives at the detectors, they both send back a confirmation wave. When those waves arrive at the source, the source randomly picks one. Then the waves between the source and that one detector enter a back-and-forth echoing process, until the probability for that outcome is 1 and that for the other possible outcomes is zero. That reproduces the collapse of the wave-function.

Cramer calls this a “transaction” between the source and the detector. He claims it makes more sense than the usual Copenhagen Interpretation with the collapse, because in the transactional interpretation all causes propagate locally and in agreement with Einstein’s speed of light limit. You “just” have to accept that some of those causes go back in time.

Take for example the bomb experiment in which you want to find out whether a bomb is live or a dud, but without exploding it because last time that happened, Ken spilled his coffee, and it was a mess. If the bomb is live, a single photon will blow it up. If it’s a dud, the photon just goes through. What you do is that you put the bomb into an interferometer. You send a single photon through and measure it down here. If you measure the photon in this detector, you can tell the bomb is live even though the photon didn’t interact with the bomb because otherwise it’d have blown up. For a more detailed explanation, please watch my earlier video on the bomb experiment.
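
For those who want to see the numbers, here is a minimal amplitude calculation for a Mach-Zehnder version of the bomb experiment in Python. The beam-splitter convention is a common textbook choice, and nothing in the sketch is specific to Cramer’s interpretation.

    import numpy as np

    # Minimal amplitude bookkeeping for the bomb experiment in a Mach-Zehnder interferometer.
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)    # 50/50 beam splitter acting on the two path amplitudes
    photon_in = np.array([1, 0])             # the photon enters through one port

    # Dud: both paths are open and the photon interferes with itself.
    out_dud = BS @ BS @ photon_in
    print(np.abs(out_dud)**2)                # [0. 1.]: every photon ends up in the "bright" detector

    # Live bomb: it removes the amplitude in its arm (and explodes with 50 percent probability).
    mid = BS @ photon_in
    blocked = np.array([mid[0], 0])          # the arm with the bomb drops out of the superposition
    out_live = BS @ blocked
    print(np.abs(out_live)**2)               # [0.25 0.25]: the "dark" detector now clicks sometimes,
                                             # and such a click tells you the bomb is live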

In Cramer’s interpretation of the bomb experiment an offer wave goes over both paths. But if there’s a live bomb on that path, the offer wave is aborted and can’t go through. The other offer wave still reaches the detector. The detector sends answer waves, and again the answer wave that goes along the bomb path doesn’t pass through. This means the only transaction that can happen is along the other path. This is the same as in quantum mechanics. But in the transactional interpretation the path with the bomb is probed by both the offer wave and the answer wave, so that’s why the measurement can contain information about it.

Great! So quantum mechanics is just a little bit of reverse causality. Does that finally explain it? Not quite. The issue with Cramer’s interpretation is the same as with the Wheeler-Feynman idea. This notion of time with the wave propagating this way and back seems to be a second time, internal to the wave, that has no physical relevance. And the outcome in the end is just the same as in normal quantum mechanics. Indeed, if you use the interventionist notion of causality, then emitting the particle is still the cause of its eventual detection and not the other way round.

Personally, I don’t really see why one should bother with all that sending back and forth if it walks and talks like the usual Copenhagen Interpretation. But if it makes you feel better, I think it’s a consistent way to think of quantum mechanics.

In conclusion, I’m afraid I have to report that physicists have not found a way to travel into the past, at least not yet. But watch out for that notebook in the microwave.

Wednesday, October 19, 2022

Science News Oct 19

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Welcome everyone to this week’s science news. Today we’ll talk about the Quantum Internet Alliance, wormholes in the New York Times, a black hole that doesn’t like its meal, energy worries at CERN, species loss, and the robot which holds the world record for running 100 meters.

The European Quantum Internet Alliance announced on Friday that they have started their 7-year plan to “build a global quantum internet made in Europe.” Dr. Stephan Ritter from the executive team of the Alliance said “Our goal is to create quantum internet innovation that will ultimately be available to everyone.” The quantum internet isn’t only booming in Europe. The US government has a “Blueprint for the Quantum Internet”, the Chinese are working on it too, and many other countries have similar plans.

What is the quantum internet and, most importantly, will it make YouTube better? I’m afraid not.

The quantum internet would be a network with hubs, cables, relay stations and maybe one day also with satellites. It’d make it possible to exchange particles which maintain their quantum behaviour, like being in two states at once. Why would you want to do that? Because quantum particles have this funny property that if you measure them, then their state suddenly “collapses” from two states at once to one state in particular. This is Einstein’s “spooky action at a distance”.

It does not allow you to communicate faster than light, but you’ll notice if someone tries to intercept your line. Because that’ll collapse the wave-function. For this reason, data transferred through the quantum internet would be secure in the sense that if the receiver doesn’t get the data, then no one else gets it either.

The problem with the quantum internet is that those quantum states are fragile. Using them over macroscopic distances to transmit any interesting amount of data is possible, but impractical and I doubt it’ll ever make financial sense. It's like the soup tube start-up of this woman’s boyfriend on reddit. Put soup pipes under the ground, connect apartment buildings to soup kitchens, then sell soup subscription. That’s certainly possible. But it’s an expensive solution to a problem which doesn’t exist. Just like the quantum internet.

Don’t get me wrong guys, I am totally in favour of the quantum internet. Because it’ll be great for research in quantum foundations which I work on myself. But I’d really like to know how so many governments and CEOs got talked into believing they need a quantum internet. Because if I knew, I’d like to talk to them about my global soup network.

Mr President.

Sure. Chicken soup, tomato soup, noodle soup.

Cheese soup? You’re my kind of president.

According to several headlines you may have seen this week, a black hole has spewed out material, or burped it out, or puked it out. I guess we’re lucky the headlines haven’t used metaphors for worse digestive problems, at least not yet. But of course, black holes don’t puke out stuff. Nothing escapes from black holes, that’s why they’re called black holes. It’s just that when a black hole attracts a star and tears it into pieces, some of the remains may escape or they may orbit around the black hole for a long time without falling in.

Both cases have previously been observed. What’s new is that a black hole trapped a star and ejected the remains with a delay of three years. These observations were just published in The Astrophysical Journal by scientists at Harvard.

The black hole they observed is about six hundred sixty-five million light years away. It was seen sucking a star into its vicinity in 2018, then it was silent until June last year, when it suddenly, and rather unexpectedly, began to reject stuff. Some of the rejected material was seen moving away at half the speed of light, that’s five times faster than what’s been found in previous observations. It’s a remarkably strange and violent event that’ll give astrophysicists a lot to think about. But please. Black holes don’t puke. They just sometimes don’t clear their plate.

But speaking of black holes and puking, a few days ago, the New York Times published a new article by Dennis Overbye (Over-bee) titled “Black Holes May Hide a Mind-Bending Secret About Our Universe”. You may have hoped the mind-bending secret was the last digit of pi or maybe how the Brits managed to find a prime minister even more incompetent than Boris Johnson. But no, it’s a speculation about the relation between wormholes and entanglement.

One problem with the article is that it refers to “entanglement” as “spooky action at a distance”, a mistake that Overbye could have avoided if he’d watch this channel because I just complained about this last week. Einstein wasn’t referring to entanglement when he used that phrase. He was referring to the collapse of the wave-function. But the bigger problem is that Overbye doesn’t explain how little there is to learn from the allegedly “mind bending secret”. So let me fill you in.

The speculation that entanglement might be related to wormholes is based on another speculation, which is that the universe is a hologram. The idea is not supported by evidence. What would it mean if it was correct? It’d mean that you could encode the information inside our universe on its surface, more precisely its conformal surface.

Unfortunately, for all we know, our universe doesn’t have such a surface. The idea that the universe is a hologram only works if the cosmological constant is negative. The cosmological constant in our universe is positive. And even if that wasn’t so, the idea that the universe is a hologram would still not be supported by evidence. What does any of that have to do with real holograms? Nothing. I talked about this in an earlier video.

This means we are dealing with a mathematical speculation that has no connection to observation. It’s for all practical purposes untestable. That entanglement is kind of like being linked by a wormhole is then an interpretation of the mathematics which describes a universe that we don’t inhabit. It’s got nothing to do with real wormholes in the real universe.

It also isn’t a new idea. What’s new is that the physicists who work on this stuff, and that’s mostly former string theorists, want to get a share of all the money that’s now going into quantum technologies. So they’ve converted their wormhole interpretation of the holographic speculation into an algorithm that can be run on a quantum computer. This is then called a quantum simulation. Of course, the quantum computers to do that don’t actually exist, so it’s a hypothetical simulation.

In summary, a more accurate headline of Overbye’s piece would have been “Physicists want to simulate an interpretation of a speculation on a device which doesn’t exist”. This is why they don’t let me write my own headlines.

Liz.

No I don’t want to talk to you.

Because you’re changing laws faster than I can make jokes about them.

Okay, go ahead.

How many u-turns can you make in a ten-dimensional space before tying yourself into a knot? I don’t know, but I can get you in touch with a few string theorists, they can teach you some tricks. Sure thing, bye.

NASA confirmed a few days ago that their attempt to change the course of an asteroid was successful. Calculations based on the so far available data indicate that the asteroid’s orbit was shortened by 32 minutes. According to a NASA spokesperson who requested anonymity, the biggest challenge of the mission has been to endure the Bruce Willis jokes.

The European Organization for Nuclear Research, CERN, announced that they will take energy-reduction measures. This step comes after energy prices have sky-rocketed in Europe due to lacking gas-supply from Russia. The main energy consumption at CERN comes from its flagship machine, the Large Hadron Collider, which averages about 1 point 3 terawatt hours a year, that’s about half the energy consumption of the nearby city of Geneva.

According to a press release, CERN will push their year-end technical stop forward by two weeks to November 28th. Next year, operations will be reduced by 20 percent, which will delay many of the planned experiments. I think it’s the right thing to do in the current situation.

Yes, even particle physicists have noticed that those aren’t good times for building power-consuming mega machines for no particular purpose, so they, too, are hoping to get some money out of the Quantum Tech windfall. Which is why CERN has a Quantum Technology Initiative, like everyone else besides me, it seems. What do they want to do with it? Well, look for supersymmetry for example. Some things don’t change.

But they also want to contribute to societal progress. According to a press release from last week, “CERN has joined a coalition of science and industry partners proposing the creation of an Open Quantum Institute” in Geneva. CERN Director General Fabiola Gianotti said that the members of the institute “will work to ensure that quantum technologies have a positive impact for all of society.” Though possibly the most positive impact for all of society might be to spend the money on something more useful.

Initial tests of a nasal-spray vaccine by Oxford University researchers and AstraZeneca yielded poor results, stunting the potential release of a globally-accepted non-traditional vaccine. According to a press release from Oxford University, the immune response among trial participants was significantly weaker than that from a shot-in-the-arm vaccination. While no safety issues were found, the “weak and inconsistent” results mean that the nasal-spray vaccine will not be rolled out any time soon and will instead go back to the research stage.

The results are disappointing to many after China and India both approved COVID vaccines that can be delivered by the nose or mouth in September. But at least we won’t have to decide whether to call it a nosiccine or an immunosation.

Now we come to the depressing part. The new 2022 Living Planet Index was released by the World Wildlife Fund last week. According to the report, about 32 thousand monitored populations of vertebrate species have declined by an average of 69 percent between nineteen-seventy and twenty-eighteen.

This percentage is not weighted by species population, so it’s difficult to interpret. Here’s an example. Suppose you have two YouTube channels. One channel sees its subscribers increase from one million to two million in a year. That’s a one hundred percent increase. The other had only ten subscribers to begin with and it lost them all. That’s a 100 percent decrease. Taken together you might call this a remarkable success. According to the World Wildlife Fund, that’s an average of zero percent.
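
Here is that made-up example as a few lines of Python, once averaged per channel the way the example describes and once weighted by the actual number of subscribers, which is the distinction that makes the index hard to interpret.

    # The made-up YouTube example from above:
    before = [1_000_000, 10]
    after  = [2_000_000, 0]

    per_channel_change = [(a - b) / b * 100 for b, a in zip(before, after)]
    print(per_channel_change)                                   # [100.0, -100.0]
    print(sum(per_channel_change) / len(per_channel_change))    # 0.0 percent "on average"

    # Weighted by the actual number of subscribers, the picture is completely different:
    print((sum(after) - sum(before)) / sum(before) * 100)       # roughly +100 percent overall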

So the sixty-nine percent decrease doesn’t mean animal populations have decreased by that much. It also doesn’t mean that 69 percent of animal species are in decline. Indeed, according to a paper published in Nature two years ago, the index is driven by just three percent of all vertebrate species. If those are removed from the sample, the trend switches to an increase. What does it mean? I don’t know, but I’d say if a number is that difficult to interpret, don’t put it into headlines.

A better report that was recently published but didn’t make as many headlines comes from Bird Life International. They found that 49 percent of bird species are in decline, and one in eight is at risk of extinction. The major factors for the decline of bird populations are linked to humans, including agricultural expansion and intensification, unsustainable logging, invasive species, exploitation, and climate change.

About 5 point three billion mobile phones will be thrown away this year, according to the international Waste Electrical and Electronic Equipment (WEEE) Forum. They just released the results of a survey they conducted earlier this year in several European countries. The survey was not just about mobile devices, but about small electronic devices in general, including hair dryers and toasters. Many of those contain raw materials that are valuable and could be reused, such as copper, gold, or palladium. However, many people keep devices that they no longer use, rather than recycling them. According to the survey results, Europeans hoard on average 15 percent of those devices. The biggest hoarders are by far the Italians, following a long tradition that goes back to the Roman empire.

Yes.

The electronic waste forum, oh really.

My phone? It’s about as old as I am, 21 years, give or take.

No, I don’t want to recycle it. I’m preserving the natural diversity of the technological ecosystem. Thank you.

That was the depressing news for today, let’s finish on something more cheerful. Cassie the robot has broken the world record for the fastest 100 meter run by a bipedal robot.

Cassie was developed by engineers at Oregon State University and has been in development since 2017. The latest achievement became possible thanks to intensive training with advanced machine learning. Cassie was able to complete the 100 meters in 24 point seven three seconds, which earned it an entry into the Guinness Book of World Records. It also demonstrated that the best way to reduce your marathon time is taking off your upper body.

Saturday, October 15, 2022

Why are we getting fatter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Today I want to talk about personal energy storage. I don’t mean that drawer we all have that’s full of batteries in the wrong size, I mean our expanding waistlines that store energy in form of fat. What are the main causes of obesity, how much of a problem is it, and what are you to make of the recent claims that obesity is caused by plastic? That’s what we’ll talk about today.

Obesity is common and becoming more common, so common in fact that the World Health Organization says “the issue has grown to epidemic proportions.” The proportions of the issue are, I guess, approaching those of a sphere, and the word “epidemic” just means it’s a widely spread health problem. Though, interestingly enough, obesity is in some sense infectious. Not because it’s spread by a virus, but because eating habits spread in social networks. If your friends are obese, you’re more likely to be obese too.

In the past decades, the fraction of people who are obese has steadily increased everywhere. In the United States, more than a third of adults are now obese. Canada, the UK, and Germany are not far behind. The United States, by the way, is not the world leader in obesity; that title goes to a Pacific island by the name of Nauru, where 60 percent of adults are obese. I have no idea what is going on there, but I want to move there when I retire.

Jokes aside, the economic burden of obesity is staggering. According to an estimate from the Milken Institute for the United States alone, obesity and overweight account for direct health care costs of at least four-hundred eighty billion dollars per year. The indirect costs due to lost economic productivity are even higher, about 1 point two four trillion dollars per year. Together that’s more than 9 percent of the entire American GDP.

But what, exactly, is obesity? The World Health Organization defines obesity as an “abnormal or excessive fat accumulation that may impair health”. I believe that an “excessive fat accumulation” does not mean stuffing your fridge with so much cheese that it hits you in the face when you open the door. They probably mean an accumulation of fat in the human body. This is why obesity is most commonly measured with the body mass index, BMI for short, that’s your weight in kilograms divided by the square of your height in meters. For adults, a BMI over 25 kilograms per square meter is overweight, and over 30 it’s obese.
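
Since the definition is just a formula, here it is as a tiny Python function. The thresholds are the WHO cutoffs quoted above, and the example person is made up.

    # BMI as defined above: weight in kilograms divided by the square of height in meters.
    def bmi(weight_kg, height_m):
        return weight_kg / height_m**2

    def who_category(b):
        # WHO adult cutoffs quoted above: 25 and over is overweight, 30 and over is obese.
        if b >= 30:
            return "obese"
        if b >= 25:
            return "overweight"
        return "not overweight"

    value = bmi(85, 1.75)                          # a made-up example person
    print(round(value, 1), who_category(value))    # 27.8 overweight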

The BMI is widely used because it’s easy to measure, but it has several shortcomings. The biggest issue is that it doesn’t take into account how much of your body weight is fat. You can put on muscle rather than fat and your BMI goes up. A more reliable measure is the amount of body fat.

There are no accepted definitions for obesity based on the percent of body fat, because academia wouldn’t be academia if people could just agree on definitions and move on. But a commonly used indicator for overweight is more than 25 percent fat for men and more than 35 percent for women. Turns out that about 12 percent of men who have a BMI above 25 have a normal amount of body fat, so probably just a lot of muscle. And on the flip side, about 15 percent of women with a BMI below 25 have an abnormally high percentage of body fat.

But using body fat as a measure for obesity is also overly simplistic because the problem isn’t actually the fat, it’s where you store it.

You see, the body removes fat from the blood and then tries to store it in fat cells. But in adults the number of fat cells doesn’t increase, it’s just the size of the cells that increases. And there’s only so much fat that a single cell can store. The problem begins when all the fat cells are full. Because then the body has to store the fat in places where it doesn’t belong and it does that notably around the liver and in muscle tissue. That’s right, the fat isn’t the problem, it’s the lack of fat cells that’s the problem. I would like to add that for the same reason books aren’t the problem, it’s the lack of bookshelves that’s the problem.

The out-of-place storage of fat seems to be the major reason for most of the health problems associated with obesity. Among others, that’s an increased risk for heart attacks and strokes, type 2 diabetes, breathing problems, knee, hip, and back problems, and an increased risk for certain types of cancer. But since there are individual differences in how much fat the body can store in the available fat cells, the onset of disease doesn’t happen at any particular amount of fat. About 20 percent of people with a BMI above thirty appear to be metabolically healthy.

This is why researchers have tried to figure out specifically where too much fat is a problem. But it’s not as easy as saying, if your waist circumference is more than so-and-so then that’s too much. To begin with, as you have undoubtedly noticed, men and women store fat in different places. Men store it mostly at the belly, women store it primarily at the breasts, thighs, and bottom, which is why you never get to see my bottom. It doesn’t fit on the screen. So, if you use a measure like waist circumference, at the very least you have to use a different one for men and women. To make matters more complicated, the onset of metabolic disease with waist circumference seems to depend on the ethnic group.

To return to the definition from the WHO, we see that it’s not all that easy to figure out when “an accumulation of fat impairs health”. It’s clearly not just the number on your scale that matters and it’s true that some obese people are healthy. Still, after all is said and done, the BMI is very strongly correlated with the ill effects of obesity.

So what are the possible causes of this obesity epidemic? Yes, the universe also expands, but it’s the space between galaxies that expands, not planets, or people on planets. So I’m afraid we can’t blame Einstein for this one. But if not Einstein, then what?

The first possible cause of obesity is too much food. Bet you didn’t think of this one.

Food has become more easily accessible to almost everyone on the planet, but especially in the developed world. It doesn’t help that food companies have an incentive to make you want to eat more. In a familiar pattern, the sugar industry has tried to play down evidence that sugar contributes to coronary heart disease for decades, though the evidence is now so overwhelming that they’ve given up denying it.

But just saying obesity is caused by easy access to food is a poor explanation because not everyone who has food in abundance also gets fat. Let’s therefore look at the second possible cause, the wrong type of food.

Modern food is nothing like the food our ancestors ate. A lot of the stuff we eat today is highly processed, which means for example that meat is shredded to small pieces, mixed with saturated fats and preservatives, and then formed to shapes. Processed food also often lacks fiber and protein and is instead rich in salt, sugar and fat, possibly with chemical compositions that don’t occur in nature.

The issue with processed food is that we may say we’re “burning energy” but, and I’m just a physicist here, the human body doesn’t literally burn food. It’s rather that we have to take apart the molecules to extract energy from them. And taking the food apart itself requires energy, so the net energy gain depends on how easy the food is to digest. Processed food is easy to digest, so we get more energy out of it, and put on weight faster.

The correlation between processed food and overweight is well-established. For example, in 2020 a study looked at a sample of about 6000 adults from the UK. They found that the highest consumption of processed food was associated with 90 percent higher odds of being obese. Another paper from just a few weeks ago analyzed data from adolescents who had participated in a Health and Nutrition Survey in the United States. They too found that the highest consumption of ultra-processed food was associated with up to 60 percent higher odds of being obese.

A particular problem is trans fats, which are fats that don’t identify with the hydrogen bonds they were assigned at birth. Trans fats are chemically modified so that they can be produced in solid or semi-solid form, which makes them handy for the food industry. The consumption of trans fats is positively correlated not only with obesity but also with cardiovascular diseases, disorders of the nervous system, and certain types of cancer, among others. Forty countries have banned or are in the process of banning trans fats. I’m not a doctor but I guess that means they really aren’t healthy.

Let’s then look at the third possible cause, lack of exercise. Just last year, a group of European researchers published a review of reviews on the topic, so I guess that’s a meta-meta-review. They found that exercise led to a significant weight loss in obese people. But when they say “significant” they mean statistically significant, not that it’s a lot of weight. If you look at the numbers, they are referring to a fat loss of about 2 kilograms on average, a difference that most obese people would hardly notice. Part of the reason may be that exercise has a rebound effect. If people start exercising, they also eat more.

Don’t get me wrong, exercise has a lot of health benefits in and by itself, so it’s a good thing to do, but it seems that its impact on reducing obesity is limited.

The next cause we’ll look at is your genes. Obesity has a strong genetic component. Twin, family, and adoption studies show that the chance that obesity is passed down the family line lies between 40 and 70 percent.

The genetic influence on obesity comes in two types, monogenic and polygenic. The monogenic type is caused by only one of a number of genes that affect the regulation of food intake. Monogenic obesity is rare and affects fewer than 5 percent of obese people. Those who are affected usually gain weight excessively already as infants.

Polygenic obesity has contributions from many different genes, though some of them stand out. For example, the so-called FTO gene affects the production of a hormone called ghrelin. Ghrelin is often called the “hunger hormone” because it regulates appetite. The ghrelin level is high if you’re hungry and decreases after you’ve eaten. A study in 2013 found that people with the obesity variant of the FTO gene showed less reduction of ghrelin levels after they’d eaten. So they’ll stay hungry longer, and you don’t need to be Einstein to see how this can lead to weight gain.

How many obese people are obese because of their genes? No one really knows. For one, not all genes that contribute to obesity are known, not all people who carry those genes are obese, and the prevalence of certain genes depends strongly on the population you look at. For example, the variation of the FTO gene that’s correlated with an increase of BMI is present in about 42 percent of people with European ancestry, but only 12 percent of those with African ancestry. So, it’s complicated, but that’s why you’re here, so let’s make things more complicated and look at the next possible cause of obesity, the microbiome.

The microbiome, that’s all those bacteria, viruses, and fungi that live in your gut. They strongly affect what you can digest and how well. Several studies have shown that obese people tend to have a somewhat different microbiome. However, some other studies seem to contradict that, and it’s also again unclear whether this is a cause or an effect of obesity. So my conclusion on this one is basically, more work is needed. The next suspect is your circadian rhythm.

Your inner clock regulates metabolic processes, and that includes digestion. If you mess with it and eat or sleep at the wrong time, you might extract more or less energy from food. A 2018 review paper reported strong evidence that disrupted sleep and circadian misalignment, such as working night shifts, contribute to obesity.

However, while the correlation is again statistically significant, the effect isn’t particularly large. A survey by the American Cancer Society found that in women, missing out on sleep is associated with a BMI that’s greater by 1.39 kilograms per square meter, while in men it’s a difference of up to 0.57 kilograms per square meter. For typical heights, that converts to just a few kilograms of weight.

Another possible cause of obesity is stress. For example, a 2019 paper looked at a group of about 3000 adult Americans and their exposure to a wide range of psychosocial stressors, such as financial strain, relationship trouble, people who believe in the many worlds interpretation of quantum mechanics, etc. They found that stress increased the risk of obesity by 15 to 25 percent.

But correlation doesn’t mean causation. For one thing, stressed people don’t sleep well, which we’ve seen also makes obesity more likely. Or maybe they are stressed because they’re obese to begin with?

Yet another possible cause of obesity that researchers have looked at is vitamin and mineral deficiencies. These are indeed common among overweight and obese individuals; that’s been shown by a number of independent studies. But in this case, too, the direction of causation remains unclear, because excess body weight alters how well those nutrients are absorbed and distributed.

Another option to explain why you’re fat is to blame your mother. Indeed, there’s quite convincing evidence that if your mother smoked while pregnant, you’re more likely to be obese. For example, a 2016 meta-review from a group in the UK found that children born to smoking mothers had a 55 percent increased risk of being obese. A similar meta-analysis from 2020 came to a similar conclusion. However, these meta-reviews are difficult to interpret because there are many factors that have to be controlled for. Maybe your mother smoked because she’s stressed and that’s why you didn’t get enough sleep and now you’re fat. This is all getting very confusing, so let’s talk about the two newest ideas that scientists have come up with to explain the obesity epidemic, viruses and plastic.

Did I say obesity is not caused by a virus, oops!

Turns out that a number of viruses are known to cause obesity in chickens, mice, and rats. And it’s not just animals. A meta-analysis from 2013 showed that a previous Adenovirus 36 infection is correlated with an increased risk of obesity of about 60 percent. A few other viruses are suspects, too, but Adenovirus 36 seems to be the biggest offender. An infection with Adenovirus 36 has the symptoms of a common cold but can also lead to eye infections.

A review published in 2021 in the International Journal of Obesity found that 31 out of 37 studies reported a positive correlation between Adenovirus 36 antibodies and weight gain, obesity, or metabolic changes. However, this virus is incredibly common. About every second person has antibodies against it, yet not all of them are obese, and not all obese people have had the infection. So this might play a role for some people, but it’s unclear at the moment how relevant it is. In any case, face masks also protect from obesity, just don’t take them off.

This finally brings us to the headlines you may have seen some weeks ago claiming that plastic makes us fat. These headlines were triggered by three review papers that appeared simultaneously in the same journal about the role of obesogens in the obesity epidemic.

“Obesogen” is a term coined in 2006 by two researchers from UC Irvine. Obesogens are a type of “endocrine disrupting chemicals” that resemble natural hormones and can interfere with normal bodily functions. About a thousand chemicals are currently known or suspected to have endocrine disrupting effects. About 50 of those are believed to be obesogens that affect the metabolic rate, the composition of the microbiome, or the hormones that influence eating behavior.

Obesogens are in cosmetics, preservatives, sun lotion, furniture, electronics, plastics, and the list goes on. From there they drift into the environment. They have also been found in dust, water, and even in medication and processed foods.

The review papers report fairly convincing evidence that obesogens affect the development of fat and muscle cells in a petri dish, not to be confused with a peach tree disk. There is also some evidence that obesogens affect the development of mice and other animals. However, the evidence that they play a significant role for obesity is not quite as convincing.

The majority of studies on the topic show a positive correlation between obesogen exposure and an elevated BMI, especially when exposure occurs during pregnancy or in childhood. However, as we saw earlier, just because a correlation is statistically significant doesn’t mean the effect is large. How large the effect may be is at present guesswork. In a recent interview with The Guardian, the lead author of one of the reviews, Robert Lustig, said “If I had to guess, based on all the work and reading I’ve done, I would say obesogens will account for about 15 to 20 percent of the obesity epidemic. But that’s a lot.”

Maybe. If it were correct. But maybe asking the lead author isn’t the most objective way to judge the relevance of a study. There are also some studies which didn’t find correlations between obesogen exposure and obesity risk, and again, the results are difficult to interpret, because exposure to certain chemicals depends on living conditions that are correlated with all kinds of demographic factors.

Despite that, the researchers claim that “epidemiological studies substantiate a causal link between obesogen exposure and human obesity”. They don’t stop there but put forward a new hypothesis for the cause of obesity.

In their own words: “This alternative hypothesis states that obesity is a growth disorder, in effect, caused by hormonal/enzymatic defects triggered by exposures to environmental chemicals and specific foods in our diet.” You can see how this would make big headlines. It’s such a convenient explanation, all those damned chemicals are making us fat. However, until there’s data on how big the effect is, I will remain skeptical of this hypothesis.

So, let’s wrap up. The best evidence for factors that make obesity more likely are genetics and the consumption of processed food. Factors that may make the situation worse are stress, lack of sleep, lack of exercise, vitamin and mineral deficiencies, and an abnormal microbiome, but in all those cases it’s difficult to tell apart cause and effect. When it comes to viruses and exposure to certain chemicals, the headlines are bigger than the evidence.

If you’re interested in a video on possible treatment options for obesity, let us know in the comments.

Saturday, October 08, 2022

Cold Fusion is Back (there's just one problem)

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Cold fusion could save the world. It’d be a basically unlimited, clean, source of energy. It sounds great. There’s just one problem: it’s not working. Indeed, most physicists think it can’t work even in theory. And yet, the research is making a comeback. So, what’s going on? What do we know about cold fusion? Is it the real deal, or is it pseudoscience? What’s cold fusion to begin with? That’s what we’ll talk about today.

If you push two small atomic nuclei together, they will form a heavier one. This nuclear fusion releases an enormous amount of energy. There’s just one problem: Atomic nuclei all have a positive electric charge, so they repel each other. And they do so very strongly. The closer they are, the stronger the repulsion. It’s called the Coulomb barrier, and it prevents fusion until you get the nuclei so close together that the strong nuclear force takes over. Then the nuclei merge, and boom.
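
To get a feel for the numbers, here’s a minimal back-of-the-envelope sketch. The separation at which the strong force takes over is an assumed round value of a few femtometers; the point is just that the barrier sits at a few hundred kilo electron volts, enormous compared to thermal energies at room temperature.

```python
# Rough size of the Coulomb barrier for two deuterons. The separation r_fuse
# at which the strong force is assumed to take over is a round illustrative
# value; only the order of magnitude matters here.
E2_COULOMB = 1.44        # e^2 / (4*pi*eps0) in MeV*fm
r_fuse_fm  = 4.0         # assumed separation in femtometers

barrier_MeV = E2_COULOMB / r_fuse_fm      # Coulomb energy of two unit charges at r_fuse
kT_room_MeV = 8.617e-11 * 300             # Boltzmann constant (MeV/K) times room temperature

print(f"Coulomb barrier ~ {barrier_MeV:.2f} MeV")     # ~0.36 MeV
print(f"k_B T at 300 K  ~ {kT_room_MeV:.1e} MeV")     # ~2.6e-8 MeV
```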

The sun does nuclear fusion with its enormous gravitational pressure. On earth, we can do it by heating a soup of nuclei to enormous temperatures, or by slamming the nuclei into each other with lasers. This is called “hot nuclear fusion”. And that indeed works. There’s just one problem: At least so far, hot fusion eats up more energy than it releases. We talked about the problems with hot nuclear fusion in this earlier video.

But nuclear fusion is possible at far lower energy, and then it’s called cold fusion. The reason this works is that atomic nuclei don’t normally float around alone but have electrons sitting in shells around the nucleus. These electrons shield the positive charges of the nuclei from each other and that makes it easier for the nuclei to approach each other.

There’s just one problem: If the atoms float around freely, the electron shells are really large compared to the size of the nucleus. If you bring these nuclei close together, then their electron shells will be much farther apart than the nuclei. So the electron shells don’t help with the fusion if the nuclei just float around.

One thing you can do is strip off the electrons and replace them with muons. Muons are basically heavier versions of electrons, and since they are heavier, their shells are closer to the nucleus. This shields the electric fields of the nuclei better from each other and makes nuclear fusion easier. It’s called “muon catalyzed fusion”.

Muon catalyzed fusion was theoretically predicted already in the 1940s and successfully done in experiments in the 1950s. It’s cold fusion that actually works. There’s just one problem: muons are unstable. They must be produced with particle accelerators and those take up a lot of energy. The muons then get mostly lost in the first fusion reaction so you can’t reuse them. There’s a lot more to say about muon catalyzed fusion, but we’ll save this for another time.

There’s another type of “cold fusion” that we know works, which is actually a method for neutron production. For this you send a beam of deuterium ions into a metal, for example titanium. Deuterium is a heavy isotope of hydrogen. Its nucleus is a proton with one neutron. At first, the beam just deposits a lot of deuterium in the metal. But when the metal is full of deuterium, some of those nuclei fuse. These devices can be pretty small. The piece of metal where the fusion happens may just be a few millimeters in size. Here is an example of such a device from Sandia Labs which they call the “neutristor”.

The major reason scientists do this is because the fusion releases neutrons, and they want the neutrons. It’s not just because lab life is lonely, and neutrons are better than no company. Neutrons can also be used for treating materials to make them more durable, or for making radioactive waste decay faster.

But the production of the neutrons is quite an amazing process, because the beam of deuterium ions which you send into this metal typically has an energy of only 5 to 20 kilo electron volts. The neutrons you get out, however, have almost a thousand times more energy, in the range of a few mega electron volts. It’s often called “beam-target fusion” or “solid-state fusion”. It’s a type of cold fusion, and again we know it works.
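
Where do those mega electron volts come from? From the fusion reaction itself, not from the beam. Here’s a minimal sketch of the energy split in the reaction D + D → helium-3 + neutron, assuming the reacting nuclei are roughly at rest and rounding the masses to whole atomic mass units; momentum conservation then hands most of the reaction energy to the lighter neutron.

```python
# Energy split in D + D -> He-3 + n, treating the initial nuclei as at rest.
# Momentum conservation gives the lighter particle the larger share of Q.
Q_MeV = 3.27    # energy released by the reaction
m_n   = 1.0     # neutron mass, rounded, in atomic mass units
m_he3 = 3.0     # helium-3 mass, rounded, in atomic mass units

E_neutron = Q_MeV * m_he3 / (m_n + m_he3)
E_he3     = Q_MeV * m_n   / (m_n + m_he3)
print(f"neutron: {E_neutron:.2f} MeV, He-3: {E_he3:.2f} MeV")   # ~2.45 and ~0.82 MeV
```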

There’s just one problem: The yield of this method is really, really low. It’s only about one in a million deuterium nuclei that fuse, and the total energy you get out is far less than what you put in with the beam. So, it’s a good method to produce neutrons, but it won’t save the world.

However, when physicists studied this process of neutron production, they made a surprising discovery. When you lower the energy of the incoming particles, the fusion rates are higher than theoretically expected. Why is that? The currently accepted explanation is that the lattice of the metal helps shield the charges of the deuterium nuclei from each other. So, it lowers the Coulomb barrier, and that makes it more likely that the nuclei fuse when they’re inside the metal. This isn’t news, physicists have known about this since the 1980s.

But if putting the deuterium into metal reduces the Coulomb barrier, maybe we can find some material in which it’s lowered even further? Maybe we can lower it so far that we create energy with it? This idea had been brought up already in the 1920s by researchers in the US and Germany. And it’s what Pons and Fleischmann claimed to have achieved in their experiment that made headlines in 1989.

Pons and Fleischmann used a metal called palladium. The metal was inside a tank of heavy water, that’s water where the normal hydrogen is replaced with deuterium. Pons and Fleischmann then applied a current through the palladium and the heavy water. They claimed this created excess heat, so more than what you’d get from the current alone. They also said they’d seen some decay products of fusion reactions, notably neutrons and tritium. Everyone was very excited.

There was just one problem... Other laboratories were unable to reproduce the claims. It probably didn’t help that Pons and Fleischmann were both chemists, while nuclear fusion has traditionally been the territory of physicists. And physicists largely think that chemical reactions simply cannot cause nuclear fusion because the typical energies that are involved in chemical processes are far too low.

A few groups said they’d seen something similar to Pons and Fleischmann, but the findings were inconsistent, and it remained unclear why it would sometimes work and sometimes not. By the early nineties, the Pons and Fleischmann claim was largely considered debunked. Soon enough, no scientist wanted to touch cold fusion because they were afraid it would damage their reputation. The philosopher Huw Price calls it the “reputation trap”. In fact, while I was working on this video, I was warned that I, too, would be damaging my reputation.

Of course not everyone just stopped working on cold fusion. After all, it might save the world! Some carried on, and a few tried to capitalize on the hope.

One such case is that of Andrea Rossi, who already in the 1970s said he knew how to build a cold fusion device. In the mid 1990s, Rossi moved to the USA; in 1998, the Italian government shut down his company on charges of tax fraud and dumping toxic waste into the environment. By 2011, Rossi claimed to have a working cold fusion device that produced excess heat.

He tried to patent it, but the international patent office rejected the application arguing that the device goes “against the generally accepted laws of physics and established theories”. A rich Australian guy offered $1 million to Rossi if he could prove that the device produces net power. Rossi didn’t take up the offer and that’s the last we heard from him. There’s more than one problem with that.

In 2019, Google did a research project on cold fusion and they found that the observed fusion rate was 100 times higher than theoretically expected. But it wasn’t enough to create excess heat.

The allure of cold fusion hasn’t entirely gone away. For example, there are two companies in Japan, Technova Inc. and Clean Planet Inc, which claim to have produced excess heat. Clean Planet Inc has a very impressive roadmap on their website, according to which they’ll complete a model reactor for commercial application next year. There’s just one problem: No one has seen the world-saving machine, and no one has reproduced the results.

The people who still work on cold fusion have renamed it to “Low Energy Nuclear Reactions”, LENR for short. Part of the reason is that “cold” isn’t particularly descriptive. I mean, these devices may be cold compared to the interior of the sun, but they can heat up to some hundred degrees Celsius, and maybe that’s not everybody’s idea of cold. But no doubt the major reason for the rebranding is to get out of the reputation trap. So make no mistake, LENR is cold fusion reborn.

I admit that this doesn’t sound particularly convincing. But I think it’s worth looking a little closer at the details. First of all, there are two separate measurements that cold fusion folks usually look at. That’s the production of decay products from the nuclear fusion, and the production of excess heat.

An experiment that tried to shed light on what might be going on comes from a 2010 paper by a group in the United States. They used a setup very similar to that from Fleischmann and Pons but in addition they directed a pulsed laser at the palladium with specific frequencies. They claimed to see excess power generation for specific pulse frequencies, which suggests that phonon excitations have something to do with it. There’s just one problem: a follow-up experiment failed to replicate the result.

Edmund Storms, who has been working on this for decades, published a paper in 2016 claiming to have measured excess heat in a device that’s very similar to the original Pons and Fleischmann setup. In this figure you see how the deuterium builds up in the palladium, that’s the red dots, and the amount of power that Storms says he measured.

He claims that the reason these experiments are difficult to reproduce is that the nuclear reactions happen at appreciable rates only in some regions of the palladium which have specific defects that he calls nano-cracks. These could be caused by the treatment of the metal, so some samples have them and others don’t, and this is why the experiments sometimes seem to work and sometimes not. At least according to Storms. There’s just one problem: No one’s been able to replicate his findings.

There is also a 2020 paper from the Japanese company Clean Planet Inc, which I already mentioned. They use a somewhat different setup with nanoparticles of certain metals that are surrounded by a gas that contains deuterium. The whole thing is put under pressure and heated. They claim that the resulting temperature increase is higher than you’d expect and that their device generates net power. In this figure you see the measured temperature increase in their experiment with helium gas and with a gas that contains deuterium. The helium gas serves as a control. As you see, there’s more heating with the deuterium. There’s just one problem: No one’s been able to replicate this finding.

The issue with these heat measurements is that they’re incredibly difficult to verify. For this reason it’s much better to look at the decay products. Those are in and of themselves mysterious. In a typical nuclear fusion reaction, there is a very specific amount of energy that’s released, and so the energy distribution of the decay products is very sharply peaked. In deuterium fusion, the neutrons in particular should have an energy of 2.45 MeV. In those cold fusion reactions, however, they see a fairly broad distribution of neutron energies, and at higher energies than expected.

Here is an example. The red bars show the number of deuterium ions as a function of energy, the black ones are the background. As you can see, the spectrum looks nothing like the expected peak at about 2.5 MeV. Something is going on and we don’t know what. Forget saving the world for a moment; it’s much simpler than that: there’s an observation that we don’t understand.

In a recent paper, a group from MIT has put forward two different hypotheses that could explain why nuclear fusion happens more readily in certain metals than you’d naively assume. One is that there are some unknown nuclear resonances which can become excited and make fusion easier. The other one is that the lattice of the metal facilitates an energy transfer from the deuterium to some of the palladium nuclei. So then you have excited Palladium nuclei and those decay. Since the Palladium nuclei have more decay channels than are typical for fusion outputs, this can explain why the energy distribution looks so weird. There’s just one problem: We don’t know that that’s actually correct.

What are we to make of this? The major reason cold fusion has been discarded as pseudoscience is that most physicists think it can’t possibly be that chemical processes cause nuclear reactions. But I think they overestimate how much we know, both about nuclear physics and about chemistry.

Nuclear physics is dominated by the strong nuclear force which holds quarks and gluons together so that they form neutrons and protons. The strong nuclear force has the peculiar property that it becomes weaker at high energies. This is called asymptotic freedom. Arvin Ash recently did a great video about the strong nuclear force, so check this out for more details.

The Large Hadron Collider pumps a lot of energy into proton collisions. This is why understanding the strong nuclear force in LHC collisions is quite simple, by which I mean a PhD in particle physics will do. The difficult part comes after the collisions, when the quarks and gluons recombine to protons, neutrons, and other bound states such as pions and rhos and so on. It’s called hadronization, and physicists don’t know how to calculate this. They just extract the properties of these processes from data and parameterize them.

I am telling you this to illustrate that just because we understand the properties of the constituents of atomic nuclei doesn’t mean we understand atoms. We can’t even calculate how quarks and gluons hold together.

Another big gap in our understanding is material properties, because we often can’t calculate electron bands. That’s especially true for materials with irregularities which, according to Storms, are relevant for cold fusion. Indeed, if you remember, calculating material properties is one of those questions that physicists want to put on a quantum computer, exactly because we can’t currently do the calculation. So, is it possible that there is something going on with the nuclei or electron bands in those metals that we haven’t yet figured out? I think that’s totally possible.

But, let me be honest, I find it somewhat suspicious that the power production in cold fusion experiments always just so happens to be very close to the power that goes in. I mean, there isn’t a priori any reason why this should be the case. If there is nuclear fusion going on efficiently, why doesn’t it just blow up the lab and settle the case once and for all?

So, well, I am extremely skeptical that we’ll see a working cold fusion device in the next couple of years. But it seems to me there’s quite convincing evidence that something odd is going on in these devices that deserves further study.

I’m not the only one who thinks so. In the past couple of years, research into cold fusion has received a big funding boost, and that’s already showing results. For example, in 1991, a small group of researchers proposed a method to produce palladium samples that generate excess heat more reliably. And, I hope you’re sitting down: research groups at NASA and at the US Navy have recently been able to reproduce those results.

A project at the University of Michigan is trying to reproduce the findings by the Japanese companies. The Department of Energy in the United States just put out a call for research projects on low energy nuclear reactions, and the European Research Council has also been caught in the act of supporting some cold fusion projects.

I think this is a good development. Cold fusion experiments are small and relatively inexpensive and given the enormous potential, it’s worth the investment. It’s a topic that we’ll certainly talk about again, so if you want to stay up to date, don’t forget to subscribe. Many thanks to Florian Metzler for helping with this video.

Wednesday, October 05, 2022

The First Ever Episode of Science News Without the Gobbledygook

One thing I miss about the blogging days is the ability to comment on current events on short notice. It's much harder with video than in writing. This is why on my YouTube channel, we now have a weekly Science News episode.


I hope we will all have some fun with this :o)

I have 6 other people involved in the production of this weekly news show (it's more difficult than you might think). We will only be able to continue with this if we find sponsors, and we will only find sponsors if we have sufficiently many views. That is to say, if you like our science news and would like it to continue, please help us spread the word!

Saturday, October 01, 2022

Can we make flying "Green"?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



“Flight shaming” is a social movement that originated in Sweden a few years ago. Its aim is to discourage people from flying because it’s bad for the environment. But let’s be honest, we’re not going to give up flying just because some Swedes think we should. I mean, we already shop at IKEA, isn’t that Swedish enough?

But seriously, maybe the flight shamers have a point. If aliens come to visit us one day, how are we supposed to explain this mess? Maybe we should indeed try to do something about airplane emissions. What are airlines doing anyway, isn’t it their job? What are the technological options, and will any of them give you a plausible excuse if flight shamers come for you? That’s what we’ll talk about today.

Flying may be good for watching four movies in a row, but it really isn’t good for the planet. It’s the third biggest contribution to carbon emissions from individuals, after having children and driving a car. Altogether, flying currently accounts for around 2.5 percent of global carbon dioxide emissions, that’s about a billion tons each year. 81 percent of this comes from passenger flights, and 60 percent of that, so about half of the total, comes from international flights.

Most of the flights, not so surprisingly, come from high-income countries. If flying were a country itself, it would rank sixth in carbon dioxide emissions, and it would congratulate the new British Prime Minister by reminding her that “The closest emergency exit may be behind you.”

The total carbon dioxide emissions from flying have been increasing steeply in the past decades, but the relative contribution has remained stable at 2.5 percent. This is partly because everybody is emitting more with everything, but also because planes have become way more fuel efficient. Planes today consume about half as much fuel as they did in the mid-1960s.

Carbon dioxide emissions are not the only way that flying contributes to climate change. It also adds some other greenhouse gases, and it creates clouds at high altitude that trap heat. But in this video, we’ll focus on the carbon dioxide emissions because that’s the biggest issue, right after the length of this video.

There are four ways that airlines are currently trying to reduce their carbon emissions, that’s electric planes, hydrogen, biofuel, and synthetic fuel. We’ll talk about each of those starting with electric planes.

1. Electric planes

The idea of electric planes is pretty obvious: charge a battery with a carbon neutral source of energy, use the battery to drive a propeller, try not to fall out of the sky. Then you can partly recharge the battery when you’re landing.

And at first sight it does sound like a good idea. In 2016, the Swiss aircraft Solar Impulse 2 completed its first trip around the world. Its wings are covered with solar cells, and its wingspan is comparable to that of a Boeing 747.

A Boeing 747 typically flies at about one thousand kilometers per hour and carries 500 or so people. The Solar Impulse carries two, and it flies at about 70 kilometers per hour. At that speed it would take about 4 days to get from Frankfurt to New York, which requires more in-flight entertainment than the CDC recommends.
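
If you want to check the “about 4 days” yourself, here’s the quick arithmetic; the Frankfurt–New York distance is an assumed round number of about 6,200 kilometers.

```python
# Travel time at Solar Impulse speed; the distance is an assumed round figure.
distance_km = 6200     # roughly Frankfurt to New York
speed_kmh   = 70       # cruise speed quoted above

hours = distance_km / speed_kmh
print(f"~{hours:.0f} hours, that's ~{hours / 24:.1f} days")   # ~89 hours, ~3.7 days
```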

You might think the issue is the solar panels, but the bigger problem is that electric batteries are heavy, and you don’t need to be Albert Einstein to understand that something that’s supposed to fly better not be heavy.

One way to see the problem is to compare batteries to kerosene, which is illustrated nicely in this figure. On the vertical axis you have the energy per mass and on the horizontal axis the energy per volume. Watch out, both axes are logarithmic.

You want energy sources that are as far in the top right corner as possible. Kerosene is up here, and lithium-ion batteries down here. You can see that kerosene has 18 times more energy in the same volume as a typical lithium-ion battery, and sixty times more energy in the same mass. This means it’s difficult to pack power onto an aircraft in the form of electric batteries. Consequently, electric planes are slow and don’t get far.
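
The ratios follow directly from the energy densities. Here’s a rough check with ballpark values I’ve assumed for illustration: kerosene at about 43 megajoules per kilogram and 35 megajoules per liter, a typical lithium-ion battery at about 0.7 megajoules per kilogram and 2 megajoules per liter.

```python
# Ballpark energy densities, assumed for illustration (MJ per kg and MJ per liter).
kerosene = {"per_kg": 43.0, "per_L": 35.0}
li_ion   = {"per_kg": 0.72, "per_L": 2.0}    # roughly 200 Wh/kg and 550 Wh/L

print(f"per mass:   kerosene ~{kerosene['per_kg'] / li_ion['per_kg']:.0f}x")   # ~60x
print(f"per volume: kerosene ~{kerosene['per_L'] / li_ion['per_L']:.0f}x")     # ~18x
```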

For example, in 2020, the Slovenian aircraft company Pipistrel brought the first electric aircraft onto the market. It’s powered by lithium-ion batteries, can carry up to 200 kilograms, and flies for up to 50 minutes at a speed of about 100 kilometers per hour. It’s called Velis Electro, which sounds like a good name for a superhero. And indeed, carrying 200 kilograms at 100 kilometers per hour is great if you want to rescue the new British Prime Minister from an oncoming truck, I mean, lorry. Though there isn’t much risk of that happening because the lorries are stuck at the French border. Which, incidentally, is farther away from London than this plane can even fly.

Nevertheless, some other companies are working on electric planes too. The Swedish start-up Heart Aerospace plans to build the first electric commercial aircraft by 2026. They ambitiously want to reach 400 kilometers of range and hope it’ll carry up to 19 passengers. Presumably that’s 19 average Swedes, which is about the same weight as 2 average Germans.

So, unless there’s a really big breakthrough in battery technology, electric planes aren’t going to replace kerosene powered ones for any sizeable share of the market, though they might be used, for example, to train pilots. Train them to fly, that is, not to rescue prime ministers.

A plus point of electric planes, however, is that they are more energy-efficient than kerosene powered ones. An electric system has an efficiency of up to ninety percent, but kerosene engines only reach about fifty percent efficiency. To this you must add other inefficiencies in the gearbox and the mechanics of the propeller or fan and so on. The total efficiency is then around 70 to 75 percent for electric engines and between thirty and forty percent for kerosene engines.
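
The totals are just the product of the efficiencies of the individual stages. Here’s a minimal sketch; the gearbox-and-propeller figures are assumed values chosen to land in the ranges quoted above.

```python
# Chained efficiencies: total = product of the stages.
def total_efficiency(*stages):
    result = 1.0
    for eta in stages:
        result *= eta
    return result

electric = total_efficiency(0.90, 0.82)   # battery/motor system x propeller/gearbox (assumed)
kerosene = total_efficiency(0.50, 0.70)   # engine x propeller/gearbox and other losses (assumed)
print(f"electric: ~{electric:.0%}, kerosene: ~{kerosene:.0%}")   # ~74% vs ~35%
```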

The technological developments that are going to have the biggest impact on electric planes are new types of batteries that are either lighter or more efficient or, ideally, both. Lithium-sulfur and lithium-oxygen batteries are two examples that are currently attracting attention. They pack three to ten times more energy into the same mass than lithium-ion batteries.

2. Hydrogen

Let’s then talk about hydrogen. No, we don’t want to bring the Zeppelin back. We are talking about planes powered by liquid hydrogen. If we look back at this handy figure, you can see that liquid hydrogen really packs a lot of energy into a small amount of mass. For the same amount of energy, however, it still takes up a fairly large volume compared to kerosene. And volume on an airplane means you must make the plane bigger, which makes it heavier, so the weight issue creeps back in.

Also, hydrogen usually isn’t liquid, so you have to either cool it or keep it under pressure. Cooling requires a lot of energy, which is bad for efficiency. But keeping hydrogen under pressure requires thick tanks which are heavy. It’s like there’s a reason fossil fuels became popular.

Another downside of hydrogen is its low efficiency, which is only around 45 percent. Still, together with the high energy density, that’s not bad, and given that burning hydrogen doesn’t create carbon dioxide, it’s worth a try.

Hydrogen powered airplanes aren’t new. Hybrid airplanes that used hydrogen were being tested already in the 1950s. Airbus is now among the companies that are developing this technology further. Just a few months ago, they presented what they call the ZEROe demonstrator. It’s a hydrogen powered engine that will be tested both on the ground and in flight, though in the test phase the plane will still be carried by standard engines.

They recently did a 4-hour test flight for the hydrogen engine. The plane they used was an A380, that’s a two-deck plane that can transport up to eight hundred passengers or so. They used this large plane because it has plenty of room for the hydrogen tanks plus the measurement equipment plus a group of engineers plus their emotional support turkeys. However, the intended use of the hydrogen engine is a somewhat smaller plane, the A350. Airbus wants to build the world’s first zero-emission commercial aircraft by 2035.

The Airbus competitor Boeing is not quite so enthusiastic about hydrogen. Their website explains that hydrogen “introduces certification, infrastructure, and operational challenges”. And just to clarify the technical terms, “challenge” is American English for “problem”. Because of those challenges, Boeing focuses on sustainable fuels. According to its CEO, sustainable fuels are “the only answer between now and 2050”. So let’s look at those next.

3. Biofuel

Biofuels are usually made from plants. And when I say plants, I mean recently deceased plants, not ones that have been underground for a hundred million years. Biofuels still create carbon dioxide when burned, but the idea is that it’s carbon dioxide you took out of the air when you grew the plants, so the carbon just goes around in a cycle. This means that, unlike regular jet fuel, which releases carbon dioxide that was long stored underground in oil, biofuels don’t increase the net amount of carbon dioxide in the atmosphere.

The most common biofuel is ethanol, which can be made, for example, from corn. It can be and is being used for cars. But ethanol isn’t a good choice for airplanes because it’s not energy dense enough; basically, it’s too heavy. In the figure we already looked at earlier, it’s up here.

Another issue with bio-fuel is that, to be used for aircraft, it must fulfil a lot of requirements, in particular it must continue to flow well at low temperatures. I’m not much of an engineer but even I can see that if the fuel freezes midflight that might be a challenge.

A bio jet fuel which fits the bill is synthetic paraffinic kerosene, which can be made from vegetable oils or animal fats, but also from sugar or corn. Paraffinic kerosene is in some sense better than fossil kerosene. For example, it generates less carbon dioxide and less sulfur.

The International Air Transport Association considers bio jet fuel a key element to get off fossil fuels. Indeed, some airlines are already using biofuels. The Brazilian company Azul Airlines has been using biofuel from sugarcane on some of their flights for a decade. British Airways has partnered with the fuel company LanzaJet to develop biofuels that are suitable for aircraft. The American airline United is also investing in biofuels. And the Scandinavian airline SAS has the goal of using 17 percent biofuel by 2025.

The problem, I mean challenge, is that bio jet fuels still cost three to six times more than conventional jet fuel. Moreover, researchers from the UK and the Netherlands have estimated that the start-up cost for a commercial bio jet fuel plant is upwards of 100 million dollars, which is a barrier to get things going.

But making the production of bio jet fuel easier and more affordable is presently a very active research area. An approach that’s attracting a lot of attention is using microalgae. They produce a lot of biomass, and they do so quickly. Microalgae reach about three to eight percent efficiency in transforming solar energy to chemical energy, while conventional biofuel crops stay below 1 percent. Take that, conventional biofuel crops!

Algae also generate more oil than most plants, and genetic engineering can further improve the yield. A few years ago, ExxonMobil partnered with Craig Venter’s Synthetic Genomics and they developed a new strain of algae using the gene-editing tool CRISPR. The gene-engineered algae had a fat content of 40 to 55 percent, compared to only 20 percent in the naturally occurring strain.

But biofuels from algae also have a downside. They have a high nitrogen content, so the fuel produced from them will release nitrogen oxides. If you remember, we talked about this earlier in our video on pollution from diesel engines. Then again, you can try to filter this out, at least to some extent.

Another way to make biofuels more affordable is to let the customer pay for it. SAS, for example, says if you pay more for the ticket, they’ll put more biofuel into the jet. So, it’s either more legroom or saving the planet; tough times for tall people.

4. Synthetic jet fuel

Finally, you can go for entirely synthetic jet fuel. For this, you take the carbon from the atmosphere using carbon capture, so you get rid of the plants in the production process. Instead, you use a renewable energy source to produce a kerosene-like chemical from the carbon dioxide and water.

The resulting fuels are not completely carbon neutral because of the production process, but compared to fossil fuels they have a small carbon footprint. According to some sources, it’s about 70 to 80 percent less than fossil fuels, though those numbers are at present not very reliable.

Synthetic kerosene is already in use. Since 2009, it can be blended with conventional jet fuel. The maximum blending ratio depends on the properties of the synthetic component, but it can be up to fifty percent. This restriction is just a precautionary requirement and it’s likely to be relaxed in the future. The problem, I mean challenge, is that at the moment synthetic kerosene is about 4 to 5 times more expensive than fossil kerosene.

Nevertheless, a lot of airlines have expressed interest in synthetic kerosene. For example, last October, Lufthansa agreed on an annual purchase of at least 25 thousand liters for at least five years. That isn’t a terribly large amount. Just for comparison, a single A380 holds up to three hundred twenty thousand liters. But it’s a first step to test if the synthetic stuff works. Qantas announced a few months ago that they’ll invest 35 million dollars in research and development for synthetic jet fuel. They hope to start using it in the early 2030s.

But let me give you some numbers to illustrate the… challenge. In 2020, the market for commercial jet fuel was about 106 billion gallons globally, twenty-one billion gallons in the US alone. According to the US Energy Information Administration, it is expected to grow to 230 billion gallons globally by 2050.

Currently, the global production of synthetic kerosene is about 33 million gallons per year. That’s less than a tenth of a percent of the total jet fuel. Still, the International Air Transport Association is tentatively hopeful. They recently issued a report according to which current investments will expand the annual production of synthetic kerosene to 1.3 billion gallons by 2025. They say that production could reach eight billion gallons by 2030 with effective government incentives, by which they probably mean subsidies.
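
Here’s the arithmetic behind those fractions, using the figures quoted above; the jet fuel demand I’ve assumed for 2030 is a rough interpolation between the 2020 and 2050 numbers.

```python
# Synthetic kerosene as a fraction of jet fuel demand (gallons per year).
jet_fuel_2020  = 106e9   # global demand in 2020
synthetic_now  = 33e6    # current synthetic kerosene production
synthetic_2030 = 8e9     # optimistic 2030 production with government incentives
jet_fuel_2030  = 160e9   # assumed demand, interpolated toward 230e9 in 2050

print(f"today: {synthetic_now / jet_fuel_2020:.2%} of jet fuel")   # ~0.03%
print(f"2030, optimistic: {synthetic_2030 / jet_fuel_2030:.0%}")   # ~5%
```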

So, even if we’re wildly optimistic and pour a lot of money into it, we might be able to replace 5 percent of jet fuel with synthetic fuel by 2030. It isn’t going to save the planet. But maybe it’s enough to push transatlantic flights down on the sin-list below eating meat, so the Swedes can move on from flight-shaming to meat-shaming.