Saturday, November 27, 2021

Does Anti-Gravity Explain Dark Energy?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

One of the lesser known facts about me is that I’m one of the few world experts on anti-gravity. That’s because 20 years ago I was convinced that repulsive gravity could explain some of the puzzling observations astrophysicists have made which they normally attribute to dark matter and dark energy. In today’s video I’ll tell you why that didn’t work, what I learned from that, and also why anti-matter doesn’t fall up.

Newton’s law of gravity says that the gravitational force between two masses is the product of the masses, divided by the square of the distance between them. And then there’s a constant that tells you how strong the force is. For the electric force between two charges, we have Coulomb’s law, that says the force is the product of the charges, divided by the square of the distance between them. And again there’s a constant that tells you how strong the force is.
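The parallel between the two force laws is easy to see in symbols. Here is a minimal sketch in Python; the constants are the standard SI values, and the example masses and charges are just illustrative:

```python
# Newton's law and Coulomb's law have the same 1/r^2 form;
# only the "charge" (mass vs. electric charge) and the constant differ.

G = 6.674e-11   # gravitational constant, N m^2 / kg^2
K = 8.988e9     # Coulomb constant, N m^2 / C^2

def newton_force(m1, m2, r):
    """Gravitational force magnitude between two masses (N)."""
    return G * m1 * m2 / r**2

def coulomb_force(q1, q2, r):
    """Coulomb force between two charges (N); negative means attractive."""
    return K * q1 * q2 / r**2

# Two 1 kg masses, 1 m apart: a tiny force of about 6.7e-11 N.
print(newton_force(1.0, 1.0, 1.0))
```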

These two force laws look pretty much the same. But the electric force can be both repulsive and attractive, depending on whether you have two negative or two positive charges, or a positive and a negative one. The gravitational force, on the other hand, is always attractive because we don’t have any negative masses. But why not?

Well, we’ve never seen anything fall up, right? Then again, if there was any anti-gravitating matter, it would be repelled by our planet. So maybe it’s not so surprising that we don’t see any anti-gravitating matter here. But it could be out there somewhere. Why aren’t physicists looking for it?

One argument that you may have heard physicists bring up is that negative masses can’t exist because that would make the vacuum decay. That’s because, if negative masses exist, then so do negative energies. Because, E equals m c squared and so on. Yes, that guy again.

And if we had negative energies, then you could create pairs of particles with negative and positive energy from nothing and particle pairs would spontaneously pop up all around us. A theory with negative masses would therefore predict that the universe doesn’t exist, which is in conflict with evidence. I’ve heard that argument many times. Unfortunately it doesn’t work.

This argument doesn’t work because it confuses two different types of mass. If you remember, Einstein’s theory of general relativity is based on the Equivalence Principle, that’s the idea that gravitational mass equals inertial mass. The gravitational mass is the mass that appears in the law of gravity. The inertial mass is the mass that resists acceleration. But if we had anti-gravitating matter, only its gravitational mass would be negative. The inertial mass always remains positive. And since the energy-equivalent of inertial mass is as usual conserved, you can’t make gravitating and anti-gravitating particles out of nothing.

Some physicists may argue that you can’t make anti-gravity compatible with general relativity because particles in Einstein’s theory will always obey the equivalence principle. But this is wrong. Of course you can’t do it in general relativity as it is. But I wrote a paper many years ago in which I show how general relativity can be extended to include anti-gravitating matter, so that the equivalence principle only holds up to a sign. That means, gravitational mass is either plus or minus inertial mass. So, in theory that’s possible. The real problem is, well, we don’t see any anti-gravitating matter.

Is it maybe that anti-matter anti-gravitates? Anti-matter is made of anti-particles. Anti-particles are particles which have the opposite electric charge to normal particles. The anti-particle of an electron, for example, is the same as the electron just with a positive electric charge. It’s called the positron. We don’t normally see anti-particles around us because they annihilate when they come in contact with normal matter. Then they disappear and leave behind a flash of light or, in other words, a bunch of photons. And it’s difficult to avoid contact with normal matter on a planet made of normal matter. This is why we observe anti-matter only in cosmic radiation or if it’s created in particle colliders.

But if there is so little anti-matter around us and it lasts only for such short amounts of time, how do we know it falls down and not up? We know this because anti-matter particles are among the particles that hold together the quarks inside neutrons and protons.

Inside a neutron and proton there aren’t just three quarks. There’s really a soup of particles that holds the quarks together, and some of the particles in the soup are anti-particles. Why don’t those anti-particles annihilate? They do. They are created and annihilate all the time. We therefore call them “virtual particles.” But they still make a substantial contribution to the gravitational mass of neutrons and protons. That means, crazy as it sounds, the masses of anti-particles make a contribution to the total mass of everything around us. So, if anti-matter had a negative gravitational mass, the equivalence principle would be violated. It isn’t. This is why we know anti-matter doesn’t anti-gravitate.

But that’s just theory, you may say. Maybe it’s possible to find another theory in which anti-particles only anti-gravitate sometimes, so that the masses of neutrons and protons aren’t affected. I don’t know any way to do this consistently, but even so, three experiments at CERN are measuring the gravitational behavior of anti-matter.

Those experiments have been running for several years but so far the results are not very restrictive. The ALPHA experiment has ruled out that anti-particles have anti-gravitating masses, but only if the absolute value of the mass is much larger than the mass of the corresponding normal particle. This means so far they ruled out something one wouldn’t expect in the first place. However, give it a few more years and they’ll get there. I don’t expect surprises from this experiment. That’s not to say that I think it shouldn’t be done. Just that I think the theoretical arguments for why anti-matter can’t anti-gravitate are solid.

Okay, so anti-matter almost certainly doesn’t anti-gravitate. But maybe there’s another type of matter out there, something new entirely, and that anti-gravitates. If that was the case, how would it behave? For example, if anti-gravitating matter repels normal matter, then does it also repel among itself, like electrons repel among themselves? Or does it attract its own type?

This question, interestingly enough, is pretty easy to answer with a little maths. Forces are mediated by fields and those fields have a spin which is a positive integer, so, 0, 1, 2, etc.

For gravity, the gravitational mass plays the role of a charge. And the force between two charges is always proportional to the product of those charges times minus one to the power of the spin.

For a spin zero field, the force is attractive between like charges. But electromagnetism is mediated by a spin-1 field, that’s electromagnetic radiation or photons if you quantize it. And this is why, for electromagnetism, the force between like charges is repulsive but unlike charges attract. Gravity is mediated by a spin-2 field, that’s gravitational radiation or gravitons if you quantize it. And so for gravity it’s just the other way round again. Like charges attract and unlike charges repel. Keep in mind that for gravity the charge is the gravitational mass.
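The sign rule can be written down compactly. Here is a small sketch; the encoding of “attractive” as a boolean is my own convention for illustration, not standard notation:

```python
def is_attractive(spin, charge1, charge2):
    """Force between two charges mediated by a field of the given spin.

    The force is proportional to charge1 * charge2 * (-1)**spin;
    we call it attractive when (-1)**spin * charge1 * charge2 > 0.
    """
    return (-1)**spin * charge1 * charge2 > 0

# Spin 0: like charges attract.
print(is_attractive(0, +1, +1))   # True
# Spin 1 (electromagnetism): like charges repel, unlike charges attract.
print(is_attractive(1, +1, +1))   # False
print(is_attractive(1, +1, -1))   # True
# Spin 2 (gravity): like "charges" (masses) attract, unlike repel.
print(is_attractive(2, +1, +1))   # True
print(is_attractive(2, +1, -1))   # False
```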

This means, if there is anti-gravitating matter it would be repelled by the stuff we are made of, but clump among itself. Indeed, it could form planets and galaxies just like ours. The only way we would know about it is through its gravitational effect. That sounds kind of like dark matter and dark energy, right?

Indeed, that’s why I thought it would be interesting. Because I had this idea that anti-gravitating matter could surround normal galaxies and push in on them. Which would create an additional force that looks much like dark matter. Normally the excess force we observe is believed to be caused by more positive mass inside and around the galaxies. But aren’t those situations very similar? More positive mass inside, or negative mass outside pushing in? And if you remember, the important thing about dark energy is that it has negative pressure. Certainly if you have negative energy you can also get negative pressure somehow.

So using anti-gravitating matter to explain dark matter and dark energy sounds good at first sight. But at second sight neither of those ideas work. The idea that galaxies would be surrounded by anti-gravitating matter doesn’t work because such an arrangement would be dramatically unstable. Remember the anti-gravitating stuff wants to clump just like normal matter. It wouldn’t enclose galaxies of normal matter, it would just form its own galaxies. So getting anti-gravity to explain dark matter doesn’t work even for galaxies, and that’s leaving aside all the other evidence for dark matter.

And dark energy? Well, the reason that dark energy makes the expansion of the universe speed up is actually NOT that it has negative pressure. It’s that the ratio of the energy density over the pressure is negative. And for anti-gravitating matter, they both turn negative so that the ratio is the same. Contrary to what you might expect, that does not speed up the expansion of the universe.
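The relevant quantity is the equation-of-state parameter, the ratio of pressure to energy density. A quick sketch of the point made above (the value of -1 for dark energy corresponds to a cosmological constant; the small positive pressure for matter is just an illustrative number):

```python
def eos_parameter(energy_density, pressure):
    """Equation-of-state parameter w = pressure / energy density."""
    return pressure / energy_density

# Normal matter with a little pressure.
w_matter = eos_parameter(1.0, 0.1)

# Anti-gravitating matter: energy density and pressure both flip sign,
# so the ratio, which governs the expansion, is unchanged.
w_anti = eos_parameter(-1.0, -0.1)

# Dark energy (cosmological constant): pressure = -energy density.
w_dark = eos_parameter(1.0, -1.0)

print(w_matter, w_anti, w_dark)
```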

Another way to see this is by noting that anti-gravitating matter is still matter and behaves like matter. Dark energy on the contrary does not behave like matter, regardless of what type of matter. This is why I get a little annoyed when people claim that dark energy is kind of like anti-gravity. It isn’t.

So in the end I developed this beautiful theory with a new symmetry between gravity and anti-gravity. And it turned out to be entirely useless. What did I learn from this? Well, that I wasted a considerable amount of my time on this was one of the reasons I began thinking about more promising ways to develop new theories. Clearly just guessing something because it’s pretty is not a good strategy. In the end, I wrote an entire book about this. Today I try to listen to my own advice, at least some of the time. I don’t always listen to myself, but sometimes it’s worth the effort.

Saturday, November 20, 2021

The 3 Best Explanations for the Havana Syndrome

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

In late 2016, United States diplomats working in Cuba began reporting health problems: persistent headaches, vertigo, blurred vision. They were dizzy. They heard sounds coming from nowhere. The affected diplomats were questioned and examined by doctors but the symptoms didn’t fit any known disease. They called it the “Havana Syndrome”.

More cases were later reported from China and Russia, Germany and Austria, even from near the White House. A CIA agent in Moscow was allegedly so badly affected that he had to retire. And just a few weeks ago another case made headlines: a CIA officer fell ill during a visit to India. What explanations have doctors put forward for those incidents? What could the Havana Syndrome be? That’s what we’ll talk about today.

Before we talk about sounds from nowhere, I want to briefly thank our supporters on Patreon. Your support makes it so much easier to keep this channel going. Special thanks to our biggest supporters in tier four. We couldn’t do it without you. And you too can help us. Go check out our Patreon page, or support us right here on YouTube by clicking on the “join” button just below this video. Now let’s look at the Havana Syndrome.

The “Havana” Syndrome got its name from the place where it was first reported in 2016. But since then it has appeared in many other countries. For this reason, a spokesperson from the US State Department told Newsweek “We refer to these incidents as “unexplained health incidents” or “UHIs.””

A common report among the affected people is that they hear recurring sounds but can’t identify a source. The Associated Press obtained a recording of what is allegedly one of those mysterious sounds. It was recorded in a private home of a diplomat in Cuba. Here is how that sounds.

Hmm. But not all the affected people in Cuba heard sounds, and it’s not clear that those who did hear them heard exactly the same thing. Doctors have focused on three different explanations: (a) mass hysteria, (b) microwaves, and (c) ultrasound. We’ll go through these one by one.

(a) mass hysteria

Are those people just imagining they’ve been targeted by some secret weapon and are making themselves ill by worrying about their health? Are they maybe just stressed or bored? Well, in Cuba, the affected diplomats were examined by a military doctor who found most of the patients had suffered inner-ear damage, apparently from an external force. The problem is though that the patients’ health records from before the incident are spotty, so it’s difficult to pinpoint when that damage happened, if it happened.

In the United States, the affected government personnel were also thoroughly examined. Unfortunately, a 2018 paper about their symptoms was widely discredited, but in 2019 a group of neurologists published another paper in the Journal of the American Medical Association and they did find quite compelling evidence for neurological problems among the affected people.

They compared 40 members of the US government who had reported suffering from the Havana Syndrome with 48 control patients with similar demographics, so similar age, gender and educational attainment. They scanned all these people’s brains using magnetic resonance imaging and found the following:

First, no significant difference between groups in the brain gray matter, and no significant difference in the so-called executive control subnetwork, that’s the part of the brain involved in thinking and planning.

But, they did find significant between-group differences for the brain white matter that contains the connective tissue between the neurons. The patients’ volume of white matter was on the average twenty-seven cubic centimeters smaller. That means they’ve lost about five percent of the entire white matter.
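As a sanity check on those numbers: if twenty-seven cubic centimeters corresponds to about five percent, the implied total white matter volume is in the typical range for adults. The five percent figure is taken from the paper as quoted above; the rest is simple arithmetic:

```python
# The paper reports the patients' white matter was on average
# 27 cm^3 smaller, described as about 5% of the total.
deficit_cm3 = 27.0
deficit_fraction = 0.05

# Implied total white-matter volume, roughly 540 cm^3,
# which is a plausible adult value.
implied_total = deficit_cm3 / deficit_fraction
print(round(implied_total))
```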

This finding has a p-value below 0.001. As a reminder, the p-value tells you how statistically significant a finding is. The smaller, the more significant. The typical threshold is 0.05, so this finding meets the criterion of statistical significance.

They also found that patients had a significantly lower mean diffusivity in the connection between two hemispheres of the brain. Just exactly what consequences this has is somewhat unclear, but this difference too has a p-value of below 0.001.

Then there’s a significantly lower mean functional connectivity in the auditory subnetwork that you need for hearing and orientation with a p-value of about 0.003, and a lowered mean functional connectivity in the part of the brain necessary for spatial orientation, with a p-value of 0.002.

The lead author of the paper told the New York Times that this means a wholly psychogenic or psychosomatic cause is very unlikely. In other words, they probably didn’t imagine it.

Case settled? Of course not. A caveat of this study is that the patients had done exercises to improve their physical and cognitive health already before the examination, so the differences to the control group may have been affected by that. However, seeing those p-values I am willing to believe that something strange is going on.

There are other reasons to think that purely psychosomatic reasons don’t explain what’s happened. For example, the first cases in Cuba were treated confidentially and didn’t appear in the news until six months later. And yet there were several different people suddenly seeing doctors for similar symptoms at almost the same time. Those symptoms came on rather suddenly and were reportedly accompanied by strange sounds. The affected people described those sounds as sharp, disorienting, or oddly focused.

Let’s then talk about the second explanation, microwaves.

Microwave troubles in embassies aren’t entirely new. During the cold war, the US embassy in Moscow was permanently irradiated with microwaves, presumably by the Soviets. No one knows exactly why, but the speculation is that it was for surveillance or counter-surveillance and not designed to cause health damage.

But in the 1970s the US ambassador to the Soviet Union, Walter Stoessel, fell dramatically ill. Besides nausea and anemia, one of his symptoms was that his eyes were bleeding. Ugh.

In a now declassified nineteen seventy-five phone call, Henry Kissinger linked Stoessel’s illness to microwaves, admitting “we are trying to keep the thing quiet.” Stoessel died of leukemia at the age of sixty-six, about ten years after he first fell ill.

So, microwaves have been the main suspect because they have history. Could they maybe have caused those mysterious sounds? But how could that possibly be? Microwaves are electromagnetic waves, not sound waves. Certainly our ears don’t detect microwaves.

Well, actually. Let me introduce you to Allan Frey. Frey was an American neuroscientist. In 1960, a radar technician told Frey he could hear microwave pulses. This didn’t make any sense to Frey but he tried it himself and heard it too! He then did a series of experiments in which he exposed people to pulses of microwave radiation at low power, well within the safe regime. He found that not only did they generally hear the pulses, much weirder: deaf people could hear them too. It’s a real thing and is now called the “Frey effect.”

Frey explained that this works as follows. First, the electromagnetic energy from the radiation is absorbed by neural tissue near the surface of the skull. This creates tiny periodic temperature changes. It’s only about five millionths of a degree Celsius but these temperature changes further cause a periodic thermal expansion and contraction of the tissues. And this oscillating tissue creates a pressure wave that propagates and excites the cochlea in the inner ear. This is why we interpret it as a sound.

The frequency of the induced sound, interestingly enough, does not depend on the frequency of the microwaves. It’s a kind of resonance effect and the frequency you hear depends on the acoustic properties of brain tissue and… the size of your head. So, could microwaves lead to mystery sounds? Totally.
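To get a feeling for why head size sets the pitch, one can make a very crude resonance estimate: treat the head as a cavity of tissue in which sound travels at roughly the speed of sound in water. The specific numbers here (1500 m/s, 17 cm) are rough assumptions of mine for illustration, not values from Frey’s papers:

```python
# Crude toy estimate of the perceived pitch in the Frey effect:
# fundamental acoustic resonance of a cavity, f = c / (2 L).
speed_of_sound_tissue = 1500.0   # m/s, roughly that of water (assumption)
head_diameter = 0.17             # m, a typical head size (assumption)

frequency = speed_of_sound_tissue / (2 * head_diameter)
print(round(frequency))  # a few kHz, in the range of reported clicks and buzzes
```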

Microwave pulses have also been tested as weapons by various nations and are known to cause a variety of symptoms like headaches, dizziness, or nausea. There is also the work of professor James Lin, an American electrical engineer who subjected himself to microwaves in his laboratory during the 1970s. He has written a book on the subject of auditory effects of microwaves and continues publishing papers on the subject. His descriptions match those of the people affected by the Havana syndrome quite well.

The authors of the most detailed paper on the cases in Havana also concluded that microwaves were the most likely explanation. And more anecdotally, there’s the report of Beatrice Golomb, a professor at the University of California, San Diego.

Golomb has long researched the health effects of microwaves and offered help to the diplomats affected in China. She claims that family members of personnel tried to measure if there were microwaves by using commercially available equipment. She told the BBC: “The needle went off the top of the available readings.” Then again one person’s story about how someone else tried to measure something isn’t exactly the most reliable evidence.

Still, microwaves seem plausible. A recent piece in the New York Times claimed that microwave weapons are too large to target people in secret. However, several experts have argued that it’s entirely possible to put such a weapon into a van and this way bring it into the vicinity of an embassy. Of course this makes you wonder why the heck someone would want to expose diplomats around the world to microwaves with no particular purpose or outcome.

Let’s then talk about option (c) Ultrasound.

Depending on the intensity, exposure to sound, even if we can’t hear it, can cause temporary discomfort, nausea, or even permanent damage of the eardrum. In some countries, for example the United States and Germany, the police sometimes use sonic weapons to disperse crowds. But last year, the US Academy of Doctors of Audiology released a statement warning that these devices sometimes cause permanent loss of hearing, problems with orientation and balance, tinnitus, and injury to the ear. That doesn’t sound so different from the symptoms of the Havana syndrome.

The advantage of this hypothesis is that there’s a possible answer to the “why” question. In 2018, researchers from the University of Michigan proposed the effects could have been caused by improperly placed Cuban spy gear. If two or more surveillance devices that use ultrasound are placed too closely together, they can interfere and create an audible sound. Then again, if you want to explain all the reported cases that way, you’d need a lot of incompetent spies.
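The interference mechanism is easy to illustrate: two ultrasound tones that are individually inaudible can produce an audible tone at their difference frequency. The specific frequencies below are illustrative choices of mine, not numbers from the Michigan proposal:

```python
AUDIBLE_MIN, AUDIBLE_MAX = 20.0, 20_000.0  # rough human hearing range, Hz

def audible(freq_hz):
    """True if a frequency falls in the rough human hearing range."""
    return AUDIBLE_MIN <= freq_hz <= AUDIBLE_MAX

# Two hypothetical ultrasound emitters placed too close together:
f1 = 32_000.0  # Hz, inaudible on its own
f2 = 25_000.0  # Hz, inaudible on its own

# Intermodulation produces a tone at the difference frequency.
beat = abs(f1 - f2)
print(audible(f1), audible(f2), audible(beat))  # False False True
```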

So, well. Let’s hear that recording from the Associated Press again. Hmm. What does it sound like to you? When Fernando Montealegre heard the sound it reminded him of the crickets he collected as a child. Montealegre is a professor of sensory biology at the University of Lincoln in the UK. Together with a colleague, he searched a database of insect sounds to see if any matched the tape. The researchers found that the recording from Cuba perfectly matches the call of the Indies short-tailed cricket.

As you see, this is a really difficult story and no one presently has a good explanation for what has happened. Most importantly I think we must keep in mind that there could actually be a number of different reasons for why those people fell ill. While it seems unlikely that the first cases in Cuba spread by mass hysteria, the cases in China only began after those in Cuba had made headlines, so that’s an entirely different situation.

There are also of course a lot of conspiracy theories circulating around the Havana syndrome. Is it a coincidence that the cases in Cuba began right after Trump’s election? Is it a coincidence that Fidel Castro died around the same time? Is it a coincidence that only a few weeks later Russia and Cuba signed a defense cooperation agreement? I don’t have any insights into this, but let me know what you think in the comments.

Saturday, November 13, 2021

Why can elementary particles decay?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Physicists have so far discovered twenty-five elementary particles that, for all we currently know, aren’t made up of anything else. Most of those particles are unstable, and they’ll decay to lighter particles within fractions of a second. But how can it possibly be that a particle which decays is elementary? If it decays, doesn’t this mean it was made up of something else? And why do particles decay in the first place? At the end of this video, you’ll know the answers.

The standard model of particle physics contains 25 particles. But the matter around us is almost entirely made up of only half of them. First, there’s the electron. Then there are the constituents of atomic nuclei, the neutrons and protons, which are made of different combinations of up and down quarks. That’s 3. And those particles are held together by photons and the 8 gluons of the strong nuclear force. So that’s twelve.

What about the other particles? Let’s take for example the tau. The tau is very similar to the electron, except it’s heavier by about a factor 4000. It’s unstable and has a lifetime of only three times ten to the minus thirteen seconds. It then decays, for example into an electron, a tau-neutrino and an electron anti-neutrino. So is the tau maybe just made up of those three particles? And when it decays, they just fly apart?
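To put that lifetime in perspective, here is a small sketch of the exponential decay law. The tau lifetime is the value quoted above; the one-picosecond time is just an example:

```python
import math

TAU_LIFETIME = 2.9e-13  # seconds, roughly the tau's mean lifetime

def survival_probability(t, lifetime=TAU_LIFETIME):
    """Probability that an unstable particle has not yet decayed at time t."""
    return math.exp(-t / lifetime)

# After just one picosecond, only about 3% of a sample of taus would remain.
print(survival_probability(1e-12))
```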

But no, the tau isn’t made up of anything, at least not according to all the observations that we currently have. There are several reasons physicists know this.

First, if the tau was made up of those other particles, you’d have to find a way to hold them together. This would require a new force. But we have no evidence for such a force. For more about this, check out my video about fifth forces.

Second, even if you’d come up with a new force, that wouldn’t help you because the tau can decay in many different ways. Instead of decaying into an electron, a tau-neutrino and an electron anti-neutrino, it could for example decay into a muon, a tau-neutrino and a muon anti-neutrino. Or it could decay into a tau-neutrino and a pion. The pion is made up of two quarks. Or it could decay into a tau-neutrino and a rho. The rho is also made up of two quarks, but different ones than the pion. And there are many other possible decay channels for the tau.
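The relative frequencies of those channels are measured quantities. The branching fractions below are rounded values as I remember them from the Particle Data Group listings, so treat them as approximate:

```python
# Approximate branching fractions for some tau decay channels.
tau_branching = {
    "e- nu_tau anti-nu_e":   0.178,
    "mu- nu_tau anti-nu_mu": 0.174,
    "pi- nu_tau":            0.108,
    "rho- nu_tau":           0.255,  # i.e. pi- pi0 nu_tau via the rho
}

# These four channels alone account for over 70% of all tau decays;
# the rest is spread over many multi-hadron channels.
print(sum(tau_branching.values()))
```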

So if you’d want the tau to be made up of the particles it decays into, at the very least there’d have to be different tau particles, depending on what they’re made up of. But we know that this can’t be. The taus are exactly identical. We know this because if they weren’t, they’d themselves be produced in larger numbers in particle collisions than we observe. The idea that there are different versions of taus is therefore just incompatible with observation.

This, by the way, is also why elementary particles can’t be conscious. It’s because we know they do not have internal states. Elementary particles are called elementary because they are simple. The only way you can assign any additional property to them, call that property “consciousness” or whatever you like, is to make that property entirely featureless and unobservable. This is why panpsychism which assigns consciousness to everything, including elementary particles, is either bluntly wrong – that’s if the consciousness of elementary particles is actually observable, because, well, we don’t observe it – or entirely useless – because if that thing you call consciousness isn’t observable it doesn’t explain anything.

But back to the question why elementary particles can decay. A decay is really just a type of interaction. This also means that all these decays in principle can happen in different orders. Let’s stick with the tau because you’ve already made friends with it. That the tau can decay into the two neutrinos and an electron just means that those four particles interact. They actually interact through another particle, which is one of the vector bosons of the weak interaction. But this isn’t so important. Important is that this interaction could happen in other orders. If an electron with high enough energy runs into a tau neutrino, that could for example produce a tau and an electron neutrino. In that case what would you think any of those particles are “made of”? This idea just doesn’t make any sense if you look at all the processes that we know of that taus are involved in.

Everything that I just told you about the tau works similarly for all of the other unstable particles in the standard model. So the brief answer to the question why elementary particles can decay is that decay doesn’t mean the decay products must’ve been in the original particle. A decay’s just a particular type of interaction. And we’ve no observations that’d indicate elementary particles are made up of something else; they have no substructure. That’s why we call them elementary.

But this brings up another question, why do those particles decay to begin with? I often come across the explanation that they do this to reach the state of lowest energy because the decay products are lighter than the original. But that doesn’t make any sense because energy is conserved in the decay. Indeed, the reason those particles decay has nothing to do with energy, it has all to do with entropy.

Heavy particles decay simply because they can and because that’s likely to happen. As Einstein told us, mass is a type of energy. Yes, that guy again. So a heavy particle can decay into several lighter particles because it has enough energy. And the rest of the energy that doesn’t go into the masses of the new particles goes into the kinetic energy of the new particles. But for the opposite process to happen, those light particles would have to meet in the right spot with a sufficiently high energy. This is possible, but it’s very unlikely to happen coincidentally. It would be a spontaneous increase of order, so it would be an entropy decrease. That’s why we don’t normally see it happening, just like we don’t normally see eggs unbreak. To sum it up: Decay is likely. Undecay unlikely.
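One can check the energy bookkeeping explicitly for the decay into an electron and two neutrinos, worked in the tau’s rest frame. The masses below are approximate values in MeV; neutrino masses are negligible on this scale:

```python
# Energy conservation in tau -> electron + nu_tau + anti-nu_e.
# Masses in MeV (approximate).
M_TAU = 1776.9
M_ELECTRON = 0.511
M_NEUTRINO = 0.0  # negligible here

total_energy_before = M_TAU  # rest energy of the tau
rest_energy_after = M_ELECTRON + 2 * M_NEUTRINO

# The difference doesn't disappear: it becomes the kinetic energy
# of the decay products, so total energy is conserved.
kinetic_energy = total_energy_before - rest_energy_after
print(round(kinetic_energy, 1))
```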

It is worth emphasizing though that the reverse of all those particle-decay processes indeed exists and it can happen in principle. Mathematically, you can reverse all those processes, which means the laws of nature are time-reversible. Like a movie, you can run them forwards and backwards. It’s just that some of those processes are very unlikely to occur in the world we actually inhabit, which is why we experience our life with a clear forward direction of time that points towards more wrinkles.

Friday, November 12, 2021

New book now available for pre-order

In the past years I have worked on a new book, which is now available for pre-order here (paid link). My editors decided on the title "Existential Physics: A Scientist's Guide to Life's Biggest Questions" which, I agree, is more descriptive than my original title "More Than This". My title was trying to express that physics is about more than just balls rolling down inclined planes and particles bumping into each other. It's a way to make sense of life.

In "Existential Physics" each chapter is the answer to a question. I have also integrated interviews with Tim Palmer, David Deutsch, Roger Penrose, and Zeeya Merali, so you don't only get to hear my opinion. I'll show you a table of contents when the page proofs are in. I want to remind you that comments have moved over to my Patreon page.

Saturday, November 06, 2021

How bad is plastic?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Plastic is everywhere, and we have all heard it’s bad for the environment because it takes a long time to biodegrade. But is this actually true? If I look at our outside furniture, that seems to biodegrade beautifully. How much should we really worry about all that plastic? Did you know that most bioplastics aren’t biodegradable? And will we end up driving cars made of soybeans? That’s what we will talk about today.

Pens, bags, cups, trays, toys, shoe soles and wrappers for everything – it’s all plastic. Those contact lenses that I’m wearing? Yeah, that’s plastic too.

The first plastic was invented in nineteen-oh-seven by the chemist Leo Baekeland. Today we use dozens of different plastics. They’re synthetic materials, molecules that just didn’t exist before humans began producing them. Plastics usually have names starting with “poly” like polyethylene, polypropylene, or polyvinyl chloride. The poly is sometimes hidden in abbreviations like PVC or PET.

You probably know the prefix “poly” from “polymer”. It means “many” and tells you that the molecules in plastic are long, repetitive chains. These long chains are the reason why plastics can be easily molded. And because plastics can be quickly mass-produced in custom shapes, they’ve become hugely popular. Today, more than twenty thousand plastic bottles are produced – each second. That’s almost two billion a day! Chewing gum by the way also contains plastic.
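The “almost two billion a day” figure follows directly from the per-second rate:

```python
bottles_per_second = 20_000
seconds_per_day = 24 * 60 * 60  # 86,400

# About 1.73 billion bottles per day, indeed almost two billion.
bottles_per_day = bottles_per_second * seconds_per_day
print(bottles_per_day)
```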

Those long molecular chains are also the reason why plastic is so durable, because bacteria that evolved to break down organic materials can’t digest plastic. So how long does plastic last? Well, we can do our own research, so let’s ask Google. Actually we don’t even have to do that ourselves, because just a year ago, a group of American scientists searched for public information on plastic lifetime and wrote a report for the NAS about it.

For some cases, like Styrofoam, they found lifetimes varying from one year to one thousand years to forever. For fishing lines, all thirty-seven websites they found said they last 600 years, probably because they all copied from each other. If those websites list a source at all, it’s usually the website of some governmental or educational institution. The one named most often is NOAA, the National Oceanic and Atmospheric Administration in the United States. When the researchers contacted NOAA, they learned that the numbers on its website are estimates and not based on peer-reviewed science.

Fact is, no one has any good idea how long plastics last in the environment. The studies that have been done often don’t list crucial information such as exposure to sunlight, temperature, or the size and shape of the sample, so it’s unclear what those numbers mean in real life. Scientists don’t even have an agreed-upon standard for what “degradation of plastic” means.

If anything, recent peer-reviewed literature suggests that plastic in the environment may degrade faster than previously recognized, not because of microbes but because of sunlight. For example, a paper published by a group from Massachusetts found that polystyrene, one of the world’s most ubiquitous plastics, may degrade in a couple of centuries when exposed to sunlight, rather than in thousands of years as previously thought. That plastic isn’t as durable as once believed is also rapidly becoming a problem for museums, which see artworks of the last century crumbling away.

But why do we worry about the longevity of plastic to begin with? Plastics are made from natural gas or oil, which is why burning them is an excellent source of energy, but has the same problem as burning oil and gas – it releases carbon dioxide which has recently become somewhat unpopular. Plastic can in principle be recycled by shredding and re-molding it, but if you mix different types of plastics the quality degrades rapidly, and in practice the different types are hard to separate.

And so, a lot of plastic trash ends up in landfills or in places where it doesn’t belong, and makes its way into rivers and, eventually, into the sea. According to a study by the Ellen MacArthur Foundation, there are more than 150 million tons of plastic trash in the oceans already, and we add about 10 million tons each year. Most of that plastic sinks to the seafloor, but more than 250,000 tons keep floating on the surface.

The result is that a lot of wildlife, birds and fish in particular, gets trapped in plastic trash or swallows it. According to a 2015 estimate from researchers in Australia and the UK, over ninety percent of seabirds now have plastic in their guts. That’s bad. Swallowing plastic can not only physically block parts of the digestive system; a lot of plastics also contain chemicals to keep them soft and stable. Many of those chemicals are toxic, and they leak into the animals.

Okay, you may say, who cares about seabirds and fish. But the thing is, once you have a substance in the food chain, it’ll spread through the entire ecosystem. As it spreads, the plastic gets broken down into smaller and smaller pieces, eventually down to below micrometer size. Those are the so-called microplastics. From animals, they make their way into supermarkets, and from there back into the sewage system and on into other parts of the environment, from where they return to us, and so on. Several independent studies have shown that most of us now quite literally shit plastic.

What are the consequences? No one really knows.

We do know that microplastics are fertile ground for pathogenic bacteria, which isn’t exactly what you want to eat. But of course other microparticles, for example those stemming from leaves or rocks, have that problem too, and we probably eat some of those as well. Indeed, in 2019 a group of Chinese researchers studied bacteria on different microparticles, and they found that the amount of bacteria on microplastics was less than that on microparticles from leaves. That’s because leaves are organic and deteriorate faster, which provides more nutrients for the bacteria. It’s presently unclear whether eating microplastics is a health hazard.

But some of those microplastics are so small they circulate in the air together with other dust and we regularly breathe them in. Studies have found that at least in cell-cultures, those particles are small enough to make it into the lymphatic and circulatory system. But how much this happens in real life and to what extent this may lead to health problems hasn’t been sorted out. Though we know from several occupational studies that workers processing plastic fibers, who probably breathe in microplastics quite regularly, are more likely to have respiratory problems than the general population. The problems include a reduced lung capacity and coughing. The data for lung cancer induced by breathing microplastics is inconclusive.

Basically we’ve introduced an entirely new substance into the environment and are now finding out what consequences this has.

That problem isn’t new. As Jordi Busque has pointed out, planet Earth had this problem before, namely when all that coal formed which we’re now digging back up. This happened during a period called the Carboniferous, which lasted from about 360 to 300 million years ago. It began when natural selection “invented” for the first time wood trunks with bark, which requires a molecule called lignin. But no bug, bacterium, or fungus around at that time knew how to digest lignin. So when trees died, their trunks just piled up in the forests and, over millions of years, were covered by sediments and turned into coal. The Carboniferous ended when evolution created fungi that were able to eat and biodegrade lignin.

Now, the Carboniferous lasted some 60 million years, but maybe we can speed up evolution a bit by growing bacteria that can digest plastics. Why not? There’s nothing particularly special about plastics that would make this impossible.

Indeed, there are already bacteria that have learned to digest plastic. In 2016 a group of Japanese scientists published a paper in Science magazine in which they reported the discovery of a bacterium, Ideonella sakaiensis, that degrades PET, the material most plastic bottles are made of. They found it while analyzing sediment samples from near a plastic recycling facility. They also identified the enzyme that enables the bacterium to digest plastic and called it PETase.

The researchers found that, thanks to PETase, the bacterium converts PET into two environmentally benign components. Moreover, 75 percent of the resulting products are further transformed by other microorganisms into organic matter, plus carbon dioxide. As I said in my earlier video about carbon capture, plastics are basically carbon storage, so maybe we should actually be glad that they don’t biodegrade?

But in 2018, a British team accidentally modified PETase, making it twenty percent faster at degrading PET, and by 2020 scientists from the University of Portsmouth had found a way to speed up PET digestion by a factor of six. Just this year, researchers from Germany, France, and Ireland used another enzyme, found in a compost pile, to degrade PET.

And the French startup Carbios has developed an enzyme that can almost completely digest old plastic bottles in just a few hours. They are building a demonstration factory that will use the enzymes to take plastic polymers apart into monomers, which can then be polymerized again to make new bottles. The company says it will open a full-scale factory in 2024 with the goal of producing the ingredients for forty thousand tons of recycled plastic each year.

The problem with this idea is that the PET used in bottles is highly crystalline and very resistant to enzymatic degradation. So if you want the enzymes to do their work, you first have to melt and extrude the plastic, which requires a lot of energy. For this reason, bacterial PET digestion currently makes little sense, either economically or ecologically. But it demonstrates that it’s a real possibility that plastics will simply become biodegradable because bacteria evolve to degrade them, naturally or by design.

What about bioplastics? Unfortunately, bioplastics look mostly like hype to me.

Bioplastics are plastics produced from biomass. This isn’t a new idea. For example, celluloid, the material of old films, was made from cellulose, an organic material. And in 1941 Ford built a plastic car made from soybeans. Yes, soybeans. Today we have bags made from potatoes or corn. That certainly sounds very bio, but unfortunately, according to a review by scientists from Georgia Southern University that came out just this April, about half of bioplastics are not biodegradable.

How can it possibly be that potatoes and corn aren’t biodegradable? Well, the potato or the corn itself is biodegradable. But to make the bioplastics, one uses the potatoes or the corn to produce bioethanol, and from the bioethanol you produce plastic in pretty much the same way you always do. The result is that the so-called bioplastics are chemically pretty much the same as normal plastics.

So about half of bioplastics aren’t biodegradable. And most of the ones that are biodegrade only under certain conditions. This means they have to be sent to industrial composting facilities that have the right temperature and pressure. If you just trash them, they will end up in landfills or migrate into the sea like any other plastic. A paper by researchers from Michigan State University found no difference in degradation when they compared normal plastics with these supposedly biodegradable ones.

So the word “bioplastic” is very misleading. But there are some biodegradable bioplastics. For example Mexican scientists have produced a plastic out of certain types of cacti. It naturally degrades in a matter of months. Unfortunately, there just aren’t enough of those cacti to replace plastic that way.

More promising are PHAs, polyhydroxyalkanoates, a family of molecules that evolved for certain biological functions and that can be used to produce plastics that actually do biodegrade. Several companies are working on this, for example Anoxkaldnes, Micromidas, and Mango Materials. Mango Materials. Seriously?

Researchers from the University of Queensland in Australia have estimated that a bottle of PHA in the ocean would degrade in one and a half to three and a half years, and a thin film would need 1 to 2 months. Sounds good! But at present PHA is difficult to produce and therefore 2 to 4 times more expensive than normal plastic. And let’s not forget that the faster a material biodegrades the faster it returns its carbon dioxide into the atmosphere. So what you think is “green” might not be what I think is “green”.

Isn’t there something else we can do with all that plastic trash? Yes, for example make steel. If you remember, steel is made from iron and carbon. The carbon usually comes from coal, but you can instead use old plastic, since the stuff’s made of oil. In a paper that appeared in Nature Catalysis last year, a group of researchers from the UK explained how that could work: use microwaves to convert the plastic into hydrogen and carbon, use the hydrogen to convert iron oxides into iron, and then combine the iron with the carbon to get steel.
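To get a sense of scale, here is a back-of-the-envelope sketch using textbook stoichiometry for the reduction step, Fe2O3 + 3 H2 → 2 Fe + 3 H2O. The numbers are my own illustration, not figures from the Nature Catalysis paper, and they assume all the hydrogen in the plastic can be recovered:

```python
# Rough stoichiometry sketch: how much hydrogen does a tonne of iron need?
# Reaction: Fe2O3 + 3 H2 -> 2 Fe + 3 H2O   (molar masses in g/mol)
M_FE, M_H2 = 55.85, 2.016

iron_kg = 1000.0
mol_fe = iron_kg * 1000 / M_FE          # ~17,900 mol of iron
mol_h2 = mol_fe * 3 / 2                 # 3 H2 per 2 Fe
h2_kg = mol_h2 * M_H2 / 1000
print(f"{h2_kg:.0f} kg of H2 per tonne of iron")   # ~54 kg

# Polyethylene is roughly (CH2)n, i.e. about 2.016/14.03 hydrogen by mass,
# so with perfect hydrogen recovery you'd need on the order of
plastic_kg = h2_kg / (2.016 / 14.03)
print(f"~{plastic_kg:.0f} kg of polyethylene")     # ~377 kg
```

A few hundred kilograms of plastic per tonne of iron, so the supply of plastic trash is not the bottleneck.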

Personally, I’d prefer steel from plastic over cars of non-biodegradable so-called bioplastics, but maybe that’s just me. Let me know in the comments what you think, I’m curious. Don’t forget to like this video and subscribe if you haven’t already, that’s the easiest way to support us. See you next week.

Saturday, October 30, 2021

The delayed choice quantum eraser, debunked

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

A lot of you have asked me to do a video about the delayed choice quantum eraser, an experiment that supposedly rewrites the past. I haven’t done that simply because there are already lots of videos about it, for example by Matt from PBS Space-time, the always amazing Joe Scott, and recently also Don Lincoln from Fermilab. And how many videos do you really need about the same thing, if that thing isn’t a kitten in a box? However, having watched all those gentlemen’s videos about quantum erasing, I think they’re all wrong. The quantum eraser isn’t remotely as weird as you think, doesn’t actually erase anything, and certainly doesn’t rewrite the past. And that’s what we’ll talk about today.

Let’s start with a puzzle that has nothing to do with quantum mechanics. Peter is forty-six years old and he’s captain of a container ship. He ships goods between two places that are 100 kilometers apart, let’s call them A and B. He starts his round trip at A with the ship only half full. Three-quarters of the way to B he adds more containers to fill the ship, which slows him down by a factor of two. On the return trip, his ship is empty. How old is the captain?

If you don’t know the answer, let’s rewind this question to the beginning.

Peter is forty-six years old. The answer’s right there. Everything I told you after that was completely unnecessary and just there to confuse you. The quantum eraser is a puzzle just like this.

The quantum eraser is an experiment that combines two quantum effects, interference and entanglement. Interference of quantum particles can itself be tested by the double slit experiment. For the double slit experiment you shoot a coherent beam of particles at a plate with two thin openings, that’s the double slit. On the screen behind it, you then observe several lines, usually five or seven, but not two. This is an interference pattern created by overlapping waves. When a crest meets a trough, the waves cancel and that makes a dark spot on the screen. When crest meets crest they add up and that makes a bright spot.

The amazing thing about the double slit is that you get this pattern even if you let only one particle at a time pass through the slits. This means that even single particles act like waves. We therefore describe quantum particles with a wave-function, usually denoted psi. The interesting thing about the double-slit experiment is that if you measure which slit the particles go through, the interference pattern disappears. Instead the particles behave like particles again and you get two blobs, one from each of the slits.

Well, actually you don’t. Though you’ve almost certainly seen that elsewhere. Just because you know which slit the wave-function goes through doesn’t mean it stops being a wave-function. It’s just no longer a wave-function going through two slits; it’s now a wave-function going through only one slit, so you get a single-slit diffraction pattern. What’s that? That’s also an interference pattern, but a fuzzier one, and it indeed looks mostly like a blob. A very blurry blob. And if you add the blobs from the two individual slits, they’ll overlap and still pretty much look like one blob. Not, as you see in many videos, two cleanly separated ones.

You may think this is nitpicking, but it’ll be relevant to understanding the quantum eraser, so keep this in mind. It’s not so relevant for the double slit experiment, because regardless of whether you think it’s one blob or two, the sum of the images from both separate slits is not the image you get from both slits together. The double slit experiment therefore shows that in quantum mechanics, the result of a measurement depends on what you measure. Yes, that’s weird.
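This is easy to check numerically. The following is a minimal sketch in the far-field (Fraunhofer) approximation; the slit width and separation are arbitrary illustration values, not parameters of any real experiment:

```python
import numpy as np

# Fraunhofer diffraction from one slit of width w, offset by c:
# amplitude ~ sinc(w*x), and the offset contributes only a phase.
def slit_amplitude(x, width, center):
    return np.sinc(width * x) * np.exp(1j * 2 * np.pi * center * x)

x = np.linspace(-3, 3, 2001)   # position on the screen, arbitrary units
w, d = 1.0, 4.0                # slit width and slit separation (made up)

left  = slit_amplitude(x, w, -d / 2)
right = slit_amplitude(x, w, +d / 2)

blob_sum  = np.abs(left)**2 + np.abs(right)**2   # one slit at a time: add intensities
two_slits = np.abs(left + right)**2              # both slits open: add amplitudes

# The one-slit-at-a-time sum is a single smooth blob; both slits open adds
# an oscillating interference term on top of that blob.
fringes = two_slits - blob_sum
print(fringes.min() < 0 < fringes.max())         # True: the fringes go both ways
```

Plot `blob_sum` and `two_slits` and you see exactly the point: one blurry blob versus fringes inside the same envelope.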

The other ingredient that you need for the quantum eraser is entanglement. I have talked about entanglement several times previously, so let me just briefly remind you: entangled particles share some information, but you don’t know which particle has which share until you measure it. It could be for example that you know the particles have a total spin of zero, but you don’t know the spin of each individual particle. Entangled particles are handy because they allow you to measure quantum effects over large distances which makes them super extra weird.

Okay, now to the quantum eraser. You take your beam of particles, usually photons, and direct it at the double slit. After the double slit you place a crystal that converts each single photon into a pair of entangled photons. From each pair you take one and direct it onto a screen. There you measure whether they interfere. I have drawn the photons which come from the two different places in the crystal with two different colors. But this is just so it’s easier to see what’s going on, these photons actually have the same color.

If you create these entangled pairs after the double slit, then the wave-function of the photon depends on which slit the photons went through. This information comes from the location where the pairs were created and is usually called the “which way information”. Because of this which-way information, the photons on the screen can’t create an interference pattern.

What about the other side of the entangled particles? That’s where things get tricky. On the other side, you measure the particles in two different ways. In the first case, you measure the which-way information directly, so you have two detectors, let’s call them D1 and D2. The first detector is on the path of the photons from the left slit, the second detector on the path of the photons from the right slit. If you measure the photons with detectors D1 and D2, you see no interference pattern.

But alternatively you can turn off the first two detectors, and instead combine the two beams in two different ways. These two white bars are mirrors and just redirect the beam. The semi-transparent one is a beam splitter. This means half of the photons go through, and the other half is reflected. This looks a little confusing but the point is just that you combine the two beams so that you no longer know which way the photon came. This is the “erasure” of the “which way information”. And then you measure those combined beams in detectors D3 and D4. A measurement on one of those two detectors does not tell you which slit the photon went through.

Finally, you measure the distribution of photons on the screen that are entangled partners of those photons that went to D3. These photons create an interference pattern. You can alternatively measure the distribution of photons on the screen that are partner particles of those photons that went to D4. Those will also create an interference pattern.

This is the “quantum erasure”. It seems you’ve managed to get rid of the which way information by combining those paths, and that restores the interference pattern. In the delayed choice quantum eraser experiment, the erasure happens well after the entangled partner particle hit the screen. This is fairly easy to do just by making the paths of those photons long enough.

If you watch the other videos about this experiment on YouTube, they’ll now go on to explain that this seems to imply that the choice of what you measure on the one side of the experiment decides what happened on the other side before you even made that choice. Because the photons must have known whether to interfere or not before you decided whether to erase the which-way information. But this is clearly nonsense. Because, let’s rewind this explanation to the beginning.

The photons on the screen can’t create an interference pattern. Everything I told you after this is completely irrelevant. It doesn’t matter at all what you do on the other side of the experiment. The photons on the screen will always create the same pattern. And it’ll never be an interference pattern.

Wait. Didn’t I just tell you that you do get an interference pattern if you use detectors D3 and D4? Indeed. But I’ve omitted a crucial part of the information which is missing in those other YouTube videos. It’s that those interference patterns are not the same. And if you add them, you get exactly the same as you get from detectors 1 and 2. Namely these two overlapping blurry blobs. This is why it matters that you know the combined pattern of two single slits doesn’t give you two separate blobs, as they normally show you.

What you actually do in the eraser experiment, is that you sample the photon pairs in two groups. And you do that in two different ways. If you use detector 1 and 2 you sample them so that the entangled partners on the screen do not create an interference pattern for each detector separately. If you use detector 3 and 4, they each separately create an interference pattern but together they don’t.
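That the two "erased" patterns must add back up to the no-fringe blob is simple bookkeeping, and you can verify it in a toy model. In this far-field sketch, L and R stand for the screen amplitudes correlated with "came from the left slit" and "came from the right slit"; the slit width and separation are made-up illustration values, and this is the statistics only, not the full optics of the experiment:

```python
import numpy as np

x = np.linspace(-3, 3, 2001)
w, d = 1.0, 4.0
L = np.sinc(w * x) * np.exp(1j * 2 * np.pi * (-d / 2) * x)
R = np.sinc(w * x) * np.exp(1j * 2 * np.pi * (+d / 2) * x)

# Which-way detectors D1/D2 pick out one slit each: no fringes in either group.
p_d1, p_d2 = np.abs(L)**2, np.abs(R)**2

# The beam splitter sends the combinations (L±R)/sqrt(2) to D3/D4 instead.
p_d3 = np.abs((L + R) / np.sqrt(2))**2   # fringes
p_d4 = np.abs((L - R) / np.sqrt(2))**2   # anti-fringes, shifted by half a period

# The two fringe patterns add up to exactly the no-fringe distribution.
print(np.allclose(p_d3 + p_d4, p_d1 + p_d2))  # True
```

The identity |L+R|²/2 + |L−R|²/2 = |L|² + |R|² holds for any amplitudes, which is why no choice of measurement on the far side can change what appears on the screen as a whole.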

This means that the interference pattern really comes from selectively disregarding some of the particles. That this is possible has nothing to do with quantum mechanics. I could throw coins on the floor and then later decide to disregard some of those and create any kind of pattern. Clearly this doesn’t rewrite the past.

This by the way has nothing to do with the particular realization of the quantum eraser experiment that I’ve discussed. This experiment has been done in a number of different ways, but what I just told you is generally true, these interference patterns will always combine to give the original non-interference pattern.

This is not to say that there is nothing weird going on in this experiment. But what’s weird about it is the same thing that’s weird already about the normal double slit experiment. Namely, if you look at the wave-function of a single particle, then that distributes in space. Yet when you measure it, the particle is suddenly in one particular place, and the result must be correlated throughout space and fit to the measurement setting. I actually think the bomb experiment is far weirder than the quantum eraser. Check out my earlier video for more on that.

When I was working on this video I thought certainly someone must have explained this before. But the only person I could find who’d done that is… Sean Carroll in a blogpost two years ago. Yes, you can trust Sean with the quantum stuff. I’ll leave you a link to Sean’s piece in the info.

Wednesday, October 27, 2021

Comments now on Patreon

Many of you have sent me notes asking what happened to the comments. Comments are permanently off on this blog. I just don't have the time to deal with it. In all honesty, since I have turned them off my daily routine has considerably improved, so they'll remain off. If you've witnessed the misery in my comment sections, you probably saw this coming.

This problem has been caused not so much by the commenters themselves as by Google's miserable commenting platform, which doesn't allow blocking or managing problematic people in any way. Add to this that the threaded comments are terrible to read, and that you have to know to click on "LOAD MORE" after 200 comments to even see all replies is a remarkably shitty piece of coding.

I am genuinely sorry about this development because over the years I have come to value the feedback from many of you and I feel like I've lost some friends now. At some point I want to move this blog to a different platform and also write some other stuff again, rather than just posting transcripts. But at the moment I don't have the time.

Having said that, I will from now on cross-post transcripts of my videos at Patreon, where you can interact with me and other Patreons for as little as 2 Euro a month. Hope to see you there.

Saturday, October 23, 2021

Does Captain Kirk die when he goes through the transporter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Does Captain Kirk die when he goes through the transporter? This question has kept me up at night for decades. I’m not kidding. And I still don’t have an answer. So this video isn’t going to answer the question, but I will explain why it’s more difficult than you may think. If you haven’t thought about this before, maybe pause the video for a moment and try to make up your mind. Do you think Kirk dies when he goes through the transporter? Let me know if at the end of this video you’ve changed your mind.

So how does the transporter work? The idea is that the person who enters a transporter is converted into an energy pattern that contains all the information. That energy can be sent or “beamed” at the speed of light. And once it’s arrived at its final destination, it can be converted back into the person.

Now of course energy isn’t something in and of itself. Energy, like momentum or velocity, is a property of something. This means the beam has to be made of something. But that doesn’t really matter for the purpose of transportation; it only matters that the beam can contain all the information about the person and can be sent much faster and much more easily than you could send the person in material form.

Current technology is far, far away from being able to read out all the information that’s necessary to build up a human being from elementary particles. And even if we could do that, it’d take ridiculously long to send that information anywhere. According to a glorious paper by a group of students from the University of Leicester, assuming a bandwidth of about 30 gigahertz, just sending the information of a single cell would take more than 10^15 years, and that’s not counting travel time. Just for comparison, the age of the universe is about 10^10 years. So even if you increased the bandwidth by a quadrillion, it’d still take at least a year just to move a cell one meter to the left.
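For what it’s worth, the arithmetic is easy to reproduce. Note that the bit count below is back-calculated from the quoted numbers, assuming one bit per hertz of bandwidth; it is not a figure taken from the Leicester paper:

```python
# Back-of-the-envelope check of the quoted transfer time.
# Assume the channel moves about 3e10 bits per second (one bit per hertz
# of the ~30 GHz bandwidth -- an assumption, not a figure from the paper).
bits_per_second = 3e10
seconds_per_year = 3.156e7

# For the quoted ~1e15 years to hold, a cell must encode roughly:
bits_per_cell = 1e15 * seconds_per_year * bits_per_second
print(f"{bits_per_cell:.1e} bits")   # ~9.5e+32 bits

# Speeding the channel up by a quadrillion (1e15) brings it down to:
years = bits_per_cell / (bits_per_second * 1e15) / seconds_per_year
print(f"{years:.1f} years")          # ~1 year, as stated
```

So "a quadrillion times faster still needs a year per cell" is just the quoted time divided by the quoted speed-up.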

Clearly we’re not going to build a transporter any time soon, but from the perspective of physics there’s no reason why it should not be possible. I mean, what makes you you is not a particular collection of elementary particles. Elementary particles are identical to each other. What makes you you is the particular arrangement of those particles. So why not just send that information instead of all the particles? That should be possible.

And according to the best theories that we currently have, that information is entirely contained in the configuration of the particles at any one moment in time. That’s just how the laws of nature seem to work. Once we know the exact state of a system at one moment, say the position and velocity of an apple, then we can calculate what happens at any later time, say, where the apple will fall. I talked about this in more detail in my video about differential equations, so check this out for more.

For the purposes of this video you just need to know that the idea that all the information about a person is contained in the exact configuration at one moment in time is correct. This is also true in quantum mechanics, though quantum mechanics brings in a subtlety that I will get to in a moment.

So, what happens in the transporter is “just” that you get converted into a different medium, all cell and brain processes are put on pause, and then you’re reassembled back and all those processes continue exactly as before. For you, no time has passed, you just find yourself elsewhere. At first sight it seems, Kirk doesn’t die when he goes through the transporter, it’s just a conversion.

But. There’s no reason why you have to convert the person into something else when you read out the information. You can well imagine that you just read out the information, send it elsewhere, and then build a person out of that information. And then, after you’ve done that, you blast the original person into pieces. The result is exactly the same. It’s just that now there’s a time delay between reading out the information and converting the person into something else. Suddenly it looks like Kirk dies and the person on the other end is a copy. Let’s call this the “Copy Argument”.

It might be that this isn’t possible, though. For one, reading out the exact state of a system at one moment in time doesn’t only tell you what the system will do in the future; it also tells you what it’s done in the past. Strictly speaking, this means that copying a system elsewhere would require you to also reproduce its entire past, which isn’t possible.

However, you could say that the details of the past don’t matter. Think of a pool table. Balls are rolling around and bouncing off each other. Now imagine that at one particular moment, you record the exact positions and velocities of those balls. Then you can place other balls on another pool table at the right places and give them the correct kick. This should produce the same motion as on the original table, in principle exactly. And that’s even though the past of the copied table isn’t the same because the velocities of the balls came about differently. It’s just that this difference doesn’t matter for the motion of the balls.
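The point that the snapshot alone fixes all the future motion is easy to illustrate with a toy simulation. This sketch uses two balls bouncing between walls in one dimension rather than a real pool table; all the numbers are made up:

```python
import numpy as np

# Two balls on a frictionless 1D "table" of length 1, bouncing off the walls.
def step(pos, vel, dt=0.01):
    pos = pos + vel * dt
    vel = np.where((pos < 0) | (pos > 1), -vel, vel)  # reflect at the walls
    pos = np.clip(pos, 0, 1)
    return pos, vel

# Run the original table for a while...
pos, vel = np.array([0.2, 0.7]), np.array([0.31, -0.17])
for _ in range(500):
    pos, vel = step(pos, vel)

# ...record the state at this one moment...
snapshot = (pos.copy(), vel.copy())

# ...then keep running the original, and also a fresh table started
# from the snapshot, which never experienced the first 500 steps.
pos2, vel2 = snapshot
for _ in range(500):
    pos, vel = step(pos, vel)
    pos2, vel2 = step(pos2, vel2)

# The copy moves identically from the snapshot onward.
print(np.allclose(pos, pos2), np.allclose(vel, vel2))  # True True
```

The copied table has a different past, but since the dynamics only ever reads the current state, the difference never shows up in the motion.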

Can one do the same for elementary particles? I don’t think so. But maybe you can do it for atoms, or at least for molecules, and that might be enough.

But there’s another reason you might not be able to read out the information of a person without annihilating them in that process, namely that quantum mechanics says that this isn’t possible. You just can’t copy an arbitrary quantum state exactly. However, it’s somewhat questionable whether this matters for people because quantum effects don’t seem to be hugely relevant in the human body. But if you think that those quantum effects are relevant, then you simply cannot copy the information of a person without destroying the original. So in that case the Copy Argument doesn’t work and we’re back to Kirk lives. Let’s call this the No-Copy Argument.

However… there’s another problem. The receiving side of the transporter is basically a machine that builds humans out of information. Now, if you don’t have the information that makes up a particular person, it’s incredibly unlikely you will correctly assemble them. But it’s not impossible. Indeed, if such machines are possible at all and the universe is infinitely large, or if there are other universes, then somewhere there will be a machine that will coincidentally assemble you. Even though the information was never beamed there in the first place. Indeed, this would happen infinitely often.

So you can ask what happens with Kirk in this case. He goes into the transporter and disappears. But copies of him appear elsewhere, coincidentally, even though the information of the original was never read out. You can conclude from this that it doesn’t really matter whether you actually read out the information in the first place. The No-Copy Argument fails, and it looks again like the Kirk we care about dies.

There are various ways people have tried to make sense of this conundrum. The most common one is abandoning our intuitive idea of what it means to be yourself. We have this idea that our experience is continuous and if you go into the transporter there has to be an answer to what you experience next. Do you find yourself elsewhere? Or is that the end of your story and someone else finds themselves elsewhere? It seems that there has to be a difference between these two cases. But if there is no observable difference, then this just means we’re wrong in thinking that being yourself is continuous to begin with.

The other way to deal with the problem is to take our experience seriously and conclude that there is something wrong with physics. That the information about yourself is not contained in any one particular moment. Instead, what makes you you is the entire story of all moments, or at least some stretch of time. In that case, it would be clear that if you convert a person into some other physical medium and then reassemble it, that person’s experience remains intact. Whereas if you break that person’s story in space-time apart, by blasting them away at one place and assembling a copy elsewhere, that would not result in a continuous experience.

At least for me, this seems to make more sense intuitively. But it conflicts with the laws of nature that we currently have. And human intuition is not a good guide to understanding the fundamental laws of nature; quantum mechanics is exhibit A. Philosophers, by the way, are evenly divided between the possible answers to the question. In a survey, about a third voted for “death,” another third for “survival,” and yet another third for “other.” What do you think? And did this video change your mind? Let me know in the comments.

Saturday, October 16, 2021

Terraforming Mars in 3 Simple Steps

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

We have made great progress screwing up the climate on this planet, so the time is right to look for a new home on which to continue the successes of the human race. What better place could there be than our neighbor planet Mars? It’s a little cold and a little dusty, and it takes seven months to get there, but otherwise it’s a lovely place. And with only 3 simple steps, it can be turned into an Earthlike place, or be “terraformed,” as they say. Just like magic. And that’s what we’ll talk about today.

First things first, Mars is about one hundred million kilometers farther away from the Sun than Earth. Its average temperature is minus 60 degrees Celsius or minus 80 Fahrenheit. Its atmosphere is very thin and doesn’t contain oxygen. That doesn’t sound very hospitable to life as we know it, but scientists have come up with a solution for our imminent move to Mars.

We’ll start with the atmosphere, which actually poses two issues: the atmosphere of Mars is very thin, and it contains basically no oxygen. Instead, it’s mostly carbon dioxide and nitrogen.

One reason the atmosphere is so thin is that Mars is smaller than Earth; its mass is only about a tenth of Earth’s. That’d make for interesting Olympic games, but it also makes it easier for gas to escape. This, by the way, is why I strongly recommend you don’t play with your anti-gravity device. You don’t want the atmosphere of Earth to escape, do you?

But that Mars is lighter than Earth is a minor problem. The bigger problem is that Mars doesn’t have a magnetic field, or at least it doesn’t have one anymore. The magnetic field of a planet, like the one we have here on Earth, is important because it deflects the charged particles which the sun constantly emits, the so-called solar wind. Without that protection, the solar wind can strip away the atmosphere. That’s not good. Check out my earlier video about solar storms for more about how dangerous they can be.

That the solar wind strips away the atmosphere once the protection of the magnetic field fades is exactly what happened to Mars. Indeed, it’s still happening. In 2015, NASA’s MAVEN spacecraft measured the slow loss of atmosphere from Mars; they estimate it to be about 100 grams per second. This constant loss is balanced by the evaporation of gas from the crust of Mars, so the pressure has stabilized at a few millibar. The atmospheric pressure on the surface of Earth is approximately one bar.

Therefore, before we try to create an atmosphere on Mars, we first have to create a magnetic field, because otherwise the atmosphere would just be stripped away again. How do you create a magnetic field for a planet? Well, physicists figured out magnetic fields two centuries ago, and it’s really straightforward.

In a paper just published in April in the International Journal of Astrobiology, two physicists explain that all you have to do is put a superconducting wire around Mars. Simple enough, isn’t it? The loop would have to have a radius of about 3400 kilometers, but the diameter of the bundled wires only needs to be about five centimeters. Well, okay, you need insulation and a refrigeration system to keep it superconducting. And you need a power station to generate a current. But other than that, no fancy technology required.
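As a quick sanity check on these figures, one can estimate the length and mass of the bare conductor. The 3400-kilometer radius and five-centimeter bundle diameter are from the paper; the density of BSCCO is my assumed value, not theirs:

```python
import math

# Figures from the paper: loop radius ~3400 km, wire bundle diameter ~5 cm.
# The BSCCO density (~6500 kg/m^3) is an assumption for this estimate.
loop_radius_m = 3.4e6
wire_diameter_m = 0.05
bscco_density_kg_m3 = 6500  # assumed

length_m = 2 * math.pi * loop_radius_m               # circumference of the loop
cross_section_m2 = math.pi * (wire_diameter_m / 2) ** 2
mass_kg = length_m * cross_section_m2 * bscco_density_kg_m3

print(f"wire length: {length_m / 1e3:.0f} km")       # roughly 21,000 km of wire
print(f"conductor mass: {mass_kg / 1e6:.0f} kilotons")
```

The bare superconductor alone comes out at a few hundred thousand tonnes; with insulation, coolant, and support structure on top, the quoted figure of about one million tons looks like the right order of magnitude.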

That superconducting wire would have a weight of about one million tons, which is only about 100 times the total weight of the Eiffel Tower. The researchers propose to make it of bismuth strontium calcium copper oxide (BSCCO). Where do you get so much bismuth from? Asteroid mining. Piece of cake.

Meanwhile on Earth. Will Cutbill from the UK earned an entry in the Guinness Book of World Records by stacking five M&M’s on top of each other.

Back to Mars. With the magnetic field in place, we can move to step 2 of terraforming Mars: creating an atmosphere. This can be done by releasing the remaining carbon dioxide that’s stored in the frozen polar caps and in the rocks. In 2018, a group of American researchers published a paper in Nature in which they estimate that, under the most wildly optimistic assumptions, this would get us to about twenty percent of the atmospheric pressure on Earth.

Leaving aside that no one knows how to release the gas, if we did release it, this would lead to a moderate greenhouse effect. It would increase the average temperature on Mars by about 10 Kelvin, to a balmy minus 50 Celsius. That still seems a little chilly, but I hear that fusion power is almost there, so I guess we can heat with that.

Meanwhile on Earth. Visitors to London can now enjoy a new tourist attraction: a man-made hill, 30 meters high, from which you have a great view of… construction sites.

Back to Mars. Okay, so we have a magnetic field and have created some kind of atmosphere by releasing carbon dioxide, with the added benefit of increasing the average temperature by a few degrees. The remaining problem is that we can’t breathe carbon dioxide. I mean, we can, but not for very long. So step 3 of terraforming Mars is converting carbon dioxide into oxygen. The only thing we need to do for this is grow a sufficient amount of plants.

There’s the issue that plants tend not to flourish at minus fifty degrees, but that’s easy to fix with a little genetic engineering. Plants as we know them also need a range of nutrients they normally get from soil, most importantly nitrogen, phosphorus, and potassium. Luckily, those are present on Mars. The bigger problem may be that the soil on Mars is too thin and too hard, which makes it difficult for plants to grow roots. It also retains water very poorly, so you have to water the plants very often. How do you water plants at minus 50 degrees? Good question!

Meanwhile on Earth, you can buy fake Mars soil and try your luck growing plants in it yourself!

Ok, so I admit that the last bit with the plants was a tiny bit sketchy. But there might be a better way to do it. In July 2019 researchers from JPL, Harvard and Edinburgh University published a paper in Nature in which they proposed to cover patches of Mars with a thin layer of aerogel.

An aerogel is a synthetic material that contains a lot of gas. It is super light and has an extremely low thermal conductivity, which means it could keep the surface of Mars warm. The gel is transparent to visible light but can be made somewhat opaque in the infrared, so it could create an enhanced greenhouse effect directly on the surface. That would heat up the surface, which would release more carbon dioxide. The carbon dioxide would accumulate under the gel, and then plants should be able to grow in that space. So we’re not talking about oaks, but more like algae or something that covers the ground.

In their paper, the researchers estimate that a layer of about 3 centimeters of aerogel could raise the surface temperature of Mars by about 45 Kelvin. The average temperature on Mars would then still be below the freezing point of water, but in some places it might rise above it. Sounds great! Except that the atmospheric pressure is so low that liquid water would start boiling as soon as it melts.
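That melting water immediately boils can be checked with the Antoine equation for water’s vapor pressure. The coefficients below are the standard ones for roughly the 1–100 °C range (extrapolating slightly below 0 °C here); the Mars-like pressure of 6 millibar is my illustrative choice, consistent with the “few millibar” mentioned earlier:

```python
import math

# Antoine equation for water: log10(P[mmHg]) = A - B / (C + T[degC]),
# with coefficients valid roughly between 1 and 100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_celsius(pressure_mbar):
    """Temperature at which water's vapor pressure equals the ambient pressure."""
    p_mmhg = pressure_mbar * 0.750062  # 1 mbar = 0.750062 mmHg
    return B / (A - math.log10(p_mmhg)) - C

print(boiling_point_celsius(1013))  # ~100 degC at Earth's surface pressure
print(boiling_point_celsius(6))     # ~0 degC at a Mars-like 6 mbar
```

At a few millibar, the boiling point sits right at the melting point, which is why liquid water can’t persist: it’s essentially the triple point of water.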

So, as you see, our move to Mars is well under way. Better pack your bags, see you there!

Saturday, October 09, 2021

How I learned to love pseudoscience

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

On this channel, I try to separate the good science from the bad science, the pseudoscience. And I used to think that we’d be better off without pseudoscience, that this would prevent confusion and make our lives easier. But now I think that pseudoscience is actually good for us. And that’s what we’ll talk about today.

Philosophers can’t agree on just what defines “pseudoscience” but in this episode I will take it to mean theories that are in conflict with evidence, but that promoters believe in, either by denying the evidence, or denying the scientific method, or maybe just because they have no idea what either the evidence or the scientific method is.

But what we call pseudoscience today might once have been science. Astrology, for example, the idea that the constellations of the stars influence human affairs, was once a respectable discipline. Every king and queen had a personal astrologer to give them advice. And many early medical practices weren’t just pseudoscience, they were often fatal. Literal snake oil, obtained by boiling snakes in oil, was at least both useless and harmless. However, physicians of the time also prescribed tapeworms for weight loss. Though, in all fairness, that might actually work, if you survive it.

And sometimes, theories accused of being pseudoscientific turned out to be right, for example the idea that the continents on Earth today broke apart from one large supercontinent. That was considered pseudoscience until evidence confirmed it. And the hypothesis of atoms was at first decried as pseudoscience because one could not, at the time, observe atoms.

So the first lesson we can take away is that pseudoscience is a natural byproduct of normal science. You can’t have one without the other. If we learn something new about nature, some fraction of people will cling to falsified theories longer than is reasonable. And some crazy ideas turn out, in the end, to be correct.

But pseudoscience isn’t just a necessary evil. It’s actually useful to advance science because it forces scientists to improve their methods.

Single-blind trials, for example, were invented in the 18th century to debunk the practice of Mesmerism. At that time, scientists had already begun to study and apply electromagnetism. But many people were understandably mystified by the first batteries and electrically powered devices. Franz Mesmer exploited their confusion.

Mesmer was a German physician who claimed he’d discovered a very thin fluid that penetrated the entire universe, including the human body. When this fluid was blocked from flowing, he argued, the result was that people fell ill.

Fortunately, Mesmer said, it was possible to control the flow of the fluid and cure people. And he knew how to do it. The fluid was supposedly magnetic, and entered the body through “poles”. The north pole was on your head and that’s where the fluid came in from the stars, and the south pole was at your feet where it connected with the magnetic field of earth.

Mesmer claimed that the flow of the fluid could be unblocked by “magnetizing” people. Here is how the historian Lopez described what happened after Mesmer moved to Paris in 1778:
“Thirty or more persons could be magnetized simultaneously around a covered tub, a case made of oak, about one foot high, filled with a layer of powdered glass and iron filings... The lid was pierced with holes through which passed jointed iron branches, to be held by the patients. In subdued light, absolutely silent, they sat in concentric rows, bound to one another by a cord. Then Mesmer, wearing a coat of lilac silk and carrying a long iron wand, walked up and down the crowd, touching the diseased parts of the patients’ bodies. He was a tall, handsome, imposing man.”
After being “magnetized” by Mesmer, patients frequently reported feeling significantly better. This, by the way, is the origin of the word mesmerizing.

Scientists of the time, Benjamin Franklin and Antoine Lavoisier among them, set out to debunk Mesmer’s claims. For this, they blindfolded a group of patients. Some were told they’d get a treatment but then weren’t given one, while others were given a treatment without their knowledge.

Franklin and his people found that the supposed effects of mesmerism were not related to the actual treatment, but to the belief of whether one received a treatment. This isn’t to say there were no effects at all. Quite possibly some patients actually did feel better just believing they’d been treated. But it’s a psychological benefit, not a physical one.

In this case the patients didn’t know whether they received an actual treatment, but those conducting the study did. Such trials can be improved by randomly assigning people to one of the two groups, so that neither the people leading the study nor those participating in it know who received an actual treatment. This is now called a “double-blind trial,” and that too was invented to debunk pseudoscience, namely homeopathy.

Homeopathy was invented by another German, Samuel Hahnemann. It’s based on the belief that diluting a natural substance makes it more effective in treating illness. In 1835, Friedrich Wilhelm von Hoven, a public health official in Nuremberg, got into a public dispute with the dedicated homeopath Johann Jacob Reuter. Reuter claimed that dissolving a single grain of salt in 100 drops of water, and then diluting it 30 times by a factor of 100, would produce “extraordinary sensations” if you drank it. Von Hoven wouldn’t have it. He proposed, and then conducted, the following experiment.

He prepared 50 samples of homeopathic salt-water following Reuter’s recipe, and 50 samples of plain water. Today, we’d call the plain water samples a “placebo.” The samples were numbered and randomly assigned to trial participants by repeated shuffling. Here is how they explained this in the original paper from 1835:
“100 vials… are labeled consecutively… then mixed well among each other and placed, 50 per table, on two tables. Those on the table at the right are filled with the potentiation, those on the table at the left are filled with pure distilled snow water. Dr. Löhner enters the number of each bottle, indicating its contents, in a list, seals the latter and hands it over to the committee… The filled bottles are then brought to the large table in the middle, are once more mixed among each other and thereupon submitted to the committee for the purpose of distribution.”
The assignments were kept secret on a list in a sealed envelope. Neither von Hoven nor the patients knew who got what.

They found 50 people to participate in the trial. For three weeks von Hoven collected reports from the study participants, after which he opened the sealed envelope to see who had received what. It turned out that only eight participants had experienced anything unusual. Five of those had received the homeopathic dilution, three had received water. Using today’s language you’d say the effect wasn’t statistically significant.
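In modern terms, one can redo von Hoven’s tally as an exact test. Here is a minimal sketch; the vial and report counts are from the text, and treating the eight reports as a random draw from all 100 vials (50 of each kind) is my simplifying assumption:

```python
from math import comb

# 100 vials: 50 homeopathic dilution, 50 plain water (the 1835 trial setup).
# 8 reports of unusual sensations: 5 from the dilution group, 3 from water.
N, K, n, observed = 100, 50, 8, 5

def hypergeom_pmf(k):
    """Probability that exactly k of the n reports come from the dilution group,
    assuming the reports have nothing to do with what was in the vial."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# One-sided p-value: how often chance alone gives 5 or more from the dilution group.
p_value = sum(hypergeom_pmf(k) for k in range(observed, n + 1))
print(f"p = {p_value:.2f}")  # ~0.36, nowhere near statistical significance
```

A 5-to-3 split among eight responders happens by pure chance more than a third of the time, which is exactly why we’d say today that the effect wasn’t statistically significant.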

Von Hoven wasn’t alone in his debunking passion. He was a member of the “Society of Truth-Loving Men,” one of the skeptical societies that popped up to counter the spread of quackery and fraud in the 19th century. The Society of Truth-Loving Men no longer exists. But the oldest such society still in existence was founded as far back as 1881 in the Netherlands. It’s called the Vereniging tegen de Kwakzalverij, literally the “Society Against Quackery.” This society gives out an annual prize, the Master Charlatan Prize, to discourage the spread of quackery. They still do this today.

Thanks to this Dutch anti-quackery society, the Netherlands became one of the first countries with governmental drug regulation. In case you wonder, the first country to have such a regulation was the United Kingdom with the 1868 Pharmacy Act. The word “skeptical” has suffered somewhat in recent years because a lot of science deniers now claim to be skeptics. But historically, the task of skeptic societies was to fight pseudoscience and to provide scientific information to the public.

And there are more examples where fighting pseudoscience resulted in scientific and societal progress, for example the efforts to debunk telepathy in the late nineteenth century. At the time, some prominent people believed in it, for example the Nobel Prize winners Lord Rayleigh and Charles Richet. Richet proposed to test telepathy by having one person draw a playing card at random and concentrate on it for a while. Then another person had to guess the card. The results were then compared against random chance. This is basically how we calculate statistical significance today.
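Richet’s procedure amounts to comparing guess counts against a binomial null hypothesis. A minimal sketch; the 1-in-52 chance rate follows from the setup, but the 100 trials and 4 correct guesses are hypothetical numbers of my own, just for illustration:

```python
from math import comb

def binomial_p_value(n_trials, n_hits, p_chance):
    """One-sided probability of getting at least n_hits successes by pure chance."""
    return sum(
        comb(n_trials, k) * p_chance**k * (1 - p_chance) ** (n_trials - k)
        for k in range(n_hits, n_trials + 1)
    )

# Hypothetical session: 100 guesses of a card drawn from a 52-card deck, 4 correct.
p = binomial_p_value(100, 4, 1 / 52)
print(f"p = {p:.2f}")  # ~0.13: four hits in a hundred guesses is unremarkable
```

The comparison of an observed count against what chance alone would produce is the core of significance testing as we still practice it.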

And if you remember, Karl Popper came up with his demarcation criterion of falsification because he wanted to show that Marxism and Freud’s psychoanalysis weren’t proper science. Now, of course, we know today that falsification is not the best way to go about it, but Popper’s work was arguably instrumental to the entire discipline of the philosophy of science. Again, that came out of the desire to fight pseudoscience.

And this fight isn’t over. We’re still today fighting pseudoscience and in that process scientists constantly have to update their methods. For example, all this research we see in the foundations of physics on multiverses and unobservable particles doesn’t contribute to scientific progress. I am pretty sure in fifty years or so that’ll go down as pseudoscience. And of course there’s still loads of quackery in medicine, just think of all the supposed COVID remedies that we’ve seen come and go in the past year.

The fight against pseudoscience today is very much a fight to get relevant information to those who need it. And again I’d say that in the process scientists are forced to get better and stronger. They develop new methods to quickly identify fake studies, to explain why some results can’t be trusted, and to improve their communication skills.

In case this video inspired you to attempt self-experiments with homeopathic remedies, please keep in mind that not everything that’s labeled “homeopathic” is necessarily strongly diluted. Some homeopathic remedies contain barely diluted active ingredients of plants that can be dangerous when overdosed. Before you assume it’s just water or sugar, please check the label carefully.

If you want to learn more about the history of pseudoscience, I can recommend Michael Gordin’s recent book “On the Fringe”.

Saturday, October 02, 2021

How close is nuclear fusion power?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Today I want to talk about nuclear fusion. I’ve been struggling with this video for a while, because I am really supportive of nuclear fusion research and development. However, the potential benefits of current research on nuclear fusion have been incorrectly communicated for a long time. Scientists are confusing the public and policy makers in a way that makes their research appear more promising than it really is. And that’s what we’ll talk about today.

There is a lot to say about nuclear fusion, but today I want to focus on its most important aspect: how much energy goes into a fusion reactor, and how much comes out. Scientists quantify this with the energy gain, the ratio of what comes out over what goes in, usually denoted Q. If the energy gain is larger than 1, you create net energy. The point where Q reaches 1 is called “break-even.”

The record for energy gain was just recently broken. You may have seen the headlines. An experiment at the National Ignition Facility in the United States reported they’d managed to get out seventy percent of the energy they put in, so a Q of 0.7. The previous record of 0.67 was set in 1997 by the Joint European Torus, JET for short.

The most prominent fusion experiment currently being built is ITER. You will find plenty of articles repeating that ITER, when completed, will produce ten times as much energy as goes in, so a gain of 10. Here is an example from a 2019 article in the Guardian by Philip Ball, who writes:
“[The Iter project] hopes to conduct its first experimental runs in 2025, and eventually to produce 500 megawatts (MW) of power – 10 times as much as is needed to operate it.”

Here is another example from Science Magazine where you can read “[ITER] is predicted to produce at least 500 megawatts of power from a 50 megawatt input.”

So this looks like we’re close to actually creating energy from fusion right? No, wrong.

Remember that nuclear fusion is the process by which the sun creates power. The sun forces nuclei into each other with the gravitational force created by its huge mass. We can’t do this on Earth, so we have to find some other way. The currently most widely used technology for nuclear fusion is heating the fuel in strong magnetic fields until it becomes a plasma; the temperature that must be reached is about 150 million Kelvin. The other popular option is shooting at a fuel pellet with lasers. There are other methods, but they haven’t gotten very far in research and development.

The confusion you find in pretty much all popular science writing about nuclear fusion is that the energy gain quoted is the gain for the energy that goes into, and comes out of, the plasma alone.

In the technical literature, this quantity is normally not just called Q but more specifically Q-plasma. This is not the ratio of the entire energy that comes out of the fusion reactor over that which goes into the reactor, which we can call Q-total. If you want to build a power plant, and that’s what we’re after in the end, it’s the Q-total that matters, not the Q-plasma. 

Here’s the problem. Fusion reactors take a lot of energy to run, and most of that energy never goes into the plasma. If you keep the plasma confined with a magnetic field in a vacuum, you need to power the giant magnets, cool them, and maintain the whole system. And pumping a laser isn’t energy efficient either. These energies never appear in the energy gain that is normally quoted.

The Q-plasma also doesn’t take into account that if you want to operate a power plant, the heat that is created by the plasma would still have to be converted into electric energy, and that can only be done with a limited efficiency, optimistically maybe fifty percent. As a consequence, the Q total is much lower than the Q plasma.

If you didn’t know this, you’re not alone; I didn’t know it until a few years ago either. How can such a confusion even happen? I mean, this isn’t rocket science: the total energy that goes into the reactor is more than the energy that goes into the plasma. And yet, science writers and journalists constantly get this wrong. They get the most basic fact wrong on a matter that affects tens of billions in research funding.

It’s not like we are the first to point out that this is a problem. I want to read you some words from a 1988 report from the European Parliament, more specifically from the Committee for Scientific and Technological Options Assessment. They were tasked with establishing criteria for the assessment of European fusion research.

In 1988, they already warned explicitly of this very misunderstanding.
“The use of the term `Break-even’ as defining the present programme to achieve an energy balance in the Hydrogen-Deuterium plasma reaction is open to misunderstanding. IN OUR VIEW 'BREAK-EVEN' SHOULD BE USED AS DESCRIPTIVE OF THE STAGE WHEN THERE IS AN ENERGY BREAKEVEN IN THE SYSTEM AS A WHOLE. IT IS THIS ACHIEVEMENT WHICH WILL OPEN THE WAY FOR FUSION POWER TO BE USED FOR ELECTRICITY GENERATION.”
They then point out the risk:
“In our view the correct scientific criterion must dominate the programme from the earliest stages. The danger of not doing this could be that the entire programme is dedicated to pursuing performance parameters which are simply not relevant to the eventual goal. The result of doing this could, in the very worst scenario be the enormous waste of resources on a program that is simply not scientifically feasible.”
So where are we today? Well, we’re spending lots of money on increasing Q-plasma instead of increasing the relevant quantity Q-total. How big is the difference? Let us look at ITER as an example.

You have seen in the earlier quotes about ITER that the energy input is normally said to be 50 megawatts. But according to the head of the Electrical Engineering Division of the ITER Project, Ivone Benfatto, ITER will consume about 440 megawatts while it produces fusion power. That gives us an estimate for the total energy that goes in.

Though that number is already misleading, because 120 of those 440 megawatts are consumed whether or not there’s any plasma in the reactor, so using it assumes the thing runs permanently. But okay, let’s leave this aside.

The plan is that ITER will generate 500 megawatts of fusion power in heat. If we assume a 50% efficiency for converting this heat into electricity, ITER will produce about 250 megawatts of electric power.

That gives us a Q-total of about 0.57. That’s less than a tenth of the normally stated Q-plasma of 10. Even optimistically, ITER will still consume roughly twice the power it generates. What about the earlier claim of a Q of 0.67 for the JET experiment? Same thing.

If you look at the total energy, JET consumed more than 700 megawatts of electricity to get its sixteen megawatts of fusion power, and that’s heat, not electricity. So if you again assume 50 percent efficiency in the heat-to-electricity conversion, you get a Q-total of about 0.01, not the claimed 0.67.
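The arithmetic here is simple enough to put in a few lines. The input powers, fusion powers, and the 50 percent heat-to-electricity efficiency are the figures quoted in this text:

```python
def q_total(fusion_heat_mw, input_electric_mw, heat_to_electric_eff=0.5):
    """Total gain: electric power out over total electric power in."""
    return fusion_heat_mw * heat_to_electric_eff / input_electric_mw

# ITER (planned): 500 MW fusion heat, ~440 MW total consumption while running.
print(f"ITER: {q_total(500, 440):.2f}")  # ~0.57, despite a Q-plasma of 10

# JET (1997): 16 MW fusion heat, ~700 MW total electricity consumption.
print(f"JET:  {q_total(16, 700):.3f}")   # ~0.011, despite a Q-plasma of 0.67
```

The gap between the two Qs is not a rounding issue: depending on the machine, Q-total can be one to two orders of magnitude below the Q-plasma that makes the headlines.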

And those recent headlines about the NIF success? Same thing again. The 0.7 is the Q-plasma, calculated with the energy that the laser delivers to the plasma. But how much energy do you need to fire the laser? I don’t know for sure, but NIF is a fairly old facility, so a rough estimate would be 100 times as much; with upgraded lasers, maybe 10 times as much. Either way, the Q-total of this experiment is almost certainly well below 0.1.

Of course the people who work on this know the distinction perfectly well. But I can’t shake the impression they quite like the confusion between the two Qs. Here is for example a quote from Holtkamp who at the time was the project construction leader of ITER. He said in an interview in 2006:
“ITER will be the first fusion reactor to create more energy than it uses. Scientists measure this in terms of a simple factor—they call it Q. If ITER meets all the scientific objectives, it will create 10 times more energy than it is supplied with.”
Here is Nick Walkden from JET in a TED talk referring to ITER: “ITER will produce ten times the power out from fusion energy than we put into the machine,” and: “Now JET holds the record for fusion power. In 1997 it got 67 percent of the power out that we put in. Not 1, not 10, but still getting close.”

But okay, you may say, no one expects accuracy in a TED talk. Then listen to ITER Director General Dr. Bigot speaking to the House of Representatives in April 2016:

[Rep]: I look forward to learning more about the progress that ITER has made under Doctor Bigot’s leadership to address previously identified management deficiencies and to establish a more reliable path forward for the project.

[Bigot]: Okay, so ITER will have delivered in that full demonstration that we could have, okay, 500 Megawatt coming out of the 50 Megawatt we will put in.

What are we to make of all this?

Nuclear fusion power is a worthy research project. It could have a huge payoff for the future of our civilization. But we need to be smart about just what research to invest in, because we have limited resources. For this, it is super important that we focus on the relevant question: Will it deliver energy to the grid?

There seem to be a lot of people in fusion research who want you to remain confused about just what the total energy gain is. I only recently read a new book about nuclear fusion, “The Star Builders,” which does the same thing again (review here). It only briefly mentions the total energy gain, and never gives you a number. This misinformation has to stop.

If you come across any popular science article or interview or video that does not clearly spell out what the total energy gain is, please call them out on it. Thanks for watching, see you next week.

Wednesday, September 29, 2021

[Guest Post] Brian Keating: How to Think Like a Nobel Prize Winner

[The following is an excerpt from Think Like a Nobel Prize Winner, Brian Keating’s newest book based on his interviews with 9 Nobel Prize winning physicists. The book isn’t a physics text, nor even a memoir like Keating’s first book Losing the Nobel Prize. Instead, it’s a self-help guide for technically minded individuals seeking to ‘level-up’ their lives and careers.]

When 2017 Nobel Prize winner Barry Barish told me he had suffered from the imposter syndrome, the hair stood up on the back of my neck. I couldn’t believe that one of the most influential figures in my life and career—as a scientist, as a father, and as a human—is mortal. He sometimes feels insecure, just like I do. Every time I’m teaching, in the back of my head, I am thinking, who am I to do this? I always struggled with math, and physics never came naturally to me. I got where I am because of my passion and curiosity, not my SAT scores. Society venerates the genius. Maybe that’s you, but it’s certainly not me.

I’ve always suffered from the imposter syndrome. Discovering that Barish did too, even after winning a Nobel Prize—the highest regard in our field and in society itself—immensely comforted me. If he was insecure about how he compared to Einstein, I wanted to comfort him: Einstein was in awe of Isaac Newton, saying Newton “... determined the course of Western thought, research, and practice like no one else before or since.” And compared to whom did Newton feel inadequate? Jesus Christ almighty!

The truth is, the imposter syndrome is just a normal, even healthy, dose of inadequacy. As such, we can never overcome or defeat it, nor should we try to. But we can manage it through understanding and acceptance. Hearing about Barry’s experience allowed me to do exactly that, and I hoped sharing that message would also help others manage better. This was the moment I decided to create this book.

This isn’t a physics book. These pages are not for aspiring Nobel Prize winners, mathematicians, or any of my fellow geeks, dweebs, or nerds. In fact, I wrote it specifically for nonscientists—for those who, because of the quotidian demands of everyday life, sometimes lose sight of the biggest-picture topics humans are capable of learning about and contributing to. Most of all, I hope that by humanizing science, by showing the craft of science as performed by its master practitioners, you, my reader, will see common themes emerge that will boost your creativity, stoke your imagination, and, most of all, help you overcome barriers like the imposter syndrome, thereby unlocking your full potential for out-of-this-universe success.

Though I didn’t write it for physicists, it’s appropriate to consider why the subjects of this book—who are all physicists—are good role models. Physicists are mental Swiss Army knives, or a cerebral SEAL Team Six. We dwell in uncertainty. We exist to solve problems.

We are not the best mathematicians (just ask a real mathematician). We’re not the best engineers. We also aren’t the best writers, speakers, or communicators—but no single group can simultaneously do all of these disparate tasks so well as the physicists I’ve compiled here. That’s what makes them worth listening to and learning from. I sure have.

The individuals in this book have balanced collaboration with competition. All scientists stand on the proverbial shoulders of giants, past and present. Yet some of the most profound moments of inspiration still breathe magic into the work of a single individual at one unique moment. There is a skill in knowing when to listen and when to talk, for you can’t do both at the same time. These scientists have also navigated the challenging waters between focus and diversity, balancing intellectual breadth with depth—challenges we all face. Whether you’re a scientist or a salesman, you must “niche down” to solve problems. (Imagine trying to sell every car model made!)

I wrote this book for everyone who struggles to balance the mundane with the sublime—who is attending to the day-to-day hard work and labor of whatever craft they are in while also trying to achieve something greater in their profession or in life. I wanted to deconstruct the mental habits and tactics of some of society’s best and brightest minds in order to share their wisdom with readers—and also to show readers that they’re just like us. They struggle with compromise. They wrestle with perfection. And they aspire always to do something great. We can too.

By studying the habits and tactics of the world’s brightest, you can recognize common themes that apply to your own life—even if the subject matter itself is as far removed from your daily life as a black hole is from a quark. Honestly, even though I am a physicist, the work done by most of the subjects in this book is no more similar to my daily work than it is to yours, and yet I learned much from them about the challenges we share. These pages hold enduring life lessons for anyone eager to acquire the true keys to success!


A theme pops up throughout these interviews regarding the connection between teaching and learning. In the Russian language, the word for “scientist” translates into “one who was taught.” That is an awesome responsibility with many implications. If we were taught, we have an obligation to teach. But the paradox is this: To be a good teacher, you must also be a good student. You must study how people learn in order to teach effectively. And to learn, you must not only study but also teach. In that way, I also have a selfish motivation behind this book: I wanted to share everything I learned from these laureates in order to learn it even more durably. Mostly, however, I see this book as an extension of my duty as an educator. That’s also how the podcast Into the Impossible began.

I’ve always had an insatiable curiosity about learning and education, combined with the recognition that life is short and I want to extract as much wisdom as I can while I can.

As a college professor, I think of teachers as shortcuts in this endeavor. Teachers act as a sort of hack to reduce the amount of time otherwise required to learn something on one’s own, compressing and making the learning process as efficient as possible—but no more so. In other words, there is a value in wrestling with material that cannot be hacked away.

As part of my duty as an educator, I wanted to cultivate a dream faculty composed of minds I wish I had encountered in my own life. The next best thing to having them as my actual teachers is to learn from their interviews in a way that distills their knowledge, philosophy, struggles, tactics, and habits.

I started doing just that at UC San Diego in 2018 and realized I was extremely privileged to have access to some of the greatest minds in human history, ranging from Pulitzer Prize winners and authors to CEOs, artists, and astronauts. As the codirector of the Arthur C. Clarke Center for Human Imagination, I had access to a wide variety of writers, thinkers, and inventors from all walks of life, courtesy of our guest-speaker series. The list of invited speakers is not at all limited to the sciences. The common denominator is conversations about human curiosity, imagination, and communication from a variety of vantage points.

I realized it would be a missed opportunity if only those people who attended our live events benefited from these world-class intellects. So we supplemented their visiting lectures with podcast interviews, during which we explored topics in more detail. I started referring to the podcast as the “university I wish I’d attended, where you can wear your pajamas and don’t incur student-loan debt.”

The goal of the podcast is to interview the greatest minds for the greatest number of people. My very first guest was the esteemed physicist Freeman Dyson. I next interviewed science-fiction authors, such as Andy Weir and Kim Stanley Robinson; poets and artists, including Herbert Sigüenza and Rae Armantrout; astronauts, such as Jessica Meir and Nicole Stott; and many others. Along the way, I also started to collect a curated subset of interviews with Nobel Prize–winning physicists.

Then in February 2020, my friend Freeman Dyson died. Dyson was the prototype of the truly deserving physicist overlooked by the Nobel committee. His contributions to our understanding of the fundamentals of matter and energy cannot be overstated, yet he was passed over for the Nobel Prize he surely deserved. I was honored to host him during his winter visits to enjoy La Jolla’s sublime weather.

Freeman’s passing lent an incredible sense of urgency to my pursuits, forcing me to acknowledge that most prize-winning physicists are getting on in years. I don’t know how to say this any other way, but I started to feel sick to my stomach, thinking that I might miss an opportunity to talk to some of the most brilliant minds in history who, because of winning the Nobel Prize, have had an outsized influence on society and culture. So in 2020, I started reaching out to them. Most said yes, although sadly, both of the living female Nobel laureate physicists declined to be interviewed. I’m incredibly disappointed not to have female voices in this book, but it’s due to the reality of the situation and not for lack of trying.

A year later, I had this incredible collection of legacy interviews with some of the most celebrated minds on the planet. T.S. Eliot once said, “The Nobel is a ticket to one’s own funeral. No one has ever done anything after he got it.” No one proves that idea more wrong than the physicists in this book. It’s a rarefied group of individuals to learn from—especially when the focus is on life lessons instead of their research. It would be a dereliction of my intellectual duty not to preserve and share them.


These chapters are not transcripts. From the lengthy interviews I conducted with each laureate, I pulled all of the bits exemplifying traits worthy of emulation. Then, after each exchange, I added context or shared how I have been affected by that quote or idea. I have also edited for clarity, since spoken communication doesn’t always translate directly to the page.

All in all, I have done my best to maintain the authenticity of my exchanges with my guests. For example, you’ll notice that my questions don’t always relate to the take-away. Conversations often go in unexpected directions. I could’ve rephrased the questions for this book so they more accurately represented the laureates’ responses, but I didn’t want to misrepresent context. Still, any mistakes accidentally introduced are definitely mine, not theirs.

Each chapter contains a small box briefly explaining the laureate’s Prize-winning work—not because there will be a test at the end, but because it’s interesting context, and because I know many of my readers will want to learn a bit of the fascinating science in these pages, considering the folks from whom you’ll be learning. Perhaps their work will ignite further curiosity in you. If that’s not you, feel free to skip these boxes. If you’re looking for more, I refer you to the laureates’ Nobel lectures. There, you will find their knowledge. But here, you will find examples of their wisdom—distilled and compressed into concentrated, actionable form.

Each interview ends with a handful of lightning-round questions designed to probe more deeply, to give you insight into what these laureates are like as human beings. Many of these questions recur from laureate to laureate.

Further, you’ll find several recurring themes from interview to interview, including the power of curiosity, the importance of listening to your critics, and why it’s paramount to pursue goals that are “useless.” I truly hope you’ll enjoy this journey out of the Universe and the benefits it will bring to your life and career!

Buy your copy of Think Like a Nobel Prize Winner here!