Saturday, December 25, 2021

We wish you a nerdy Xmas!

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Happy holidays everybody, today we’re celebrating Isaac Newton’s birthday with a hand-selected collection of nerdy Christmas facts that you can put to good use on every appropriate and inappropriate occasion.

You have probably noticed that in recent years worshipping Newton on Christmas has become somewhat of a fad on social media. People are wishing each other a happy Newtonmas rather than Christmas because December 25th is also Newton’s birthday. But did you know that this fad is more than a century old?

In 1891, The Japan Daily Mail reported that a society of Newton worshippers had sprung up at the University of Tokyo. It was founded, no surprise, by mathematicians and physicists. It was basically a social club for nerds, with Newton’s picture presiding over meetings. The members were expected to give speeches and make technical jokes that only other members would get. So kind of like physics conferences basically.

The Japan Daily Mail also detailed what the nerds considered funny. For example, on Christmas, excuse me, Newtonmas, they’d have a lottery in which everyone drew a paper with a scientist’s name and then got a matching gift. So if you drew Newton you’d get an apple, if you drew Franklin a kite, Archimedes got you a naked doll, and Kant-Laplace would get you a puff of tobacco smoke in your face. That was supposed to represent the Nebular Hypothesis. What’s that? That’s the idea that solar systems form from gas clouds, and yes, that was first proposed by Immanuel Kant. No, it doesn’t rhyme with pissant, sorry.

Newton worship may not have caught on, but nebular hypotheses certainly have.

By the way, did you know that Xmas isn’t an atheist term for Christmas? The word “Christ” in Greek is Christos written like this (Χριστός.) That first letter is called /kaɪ/ and in the Roman alphabet it becomes an X. It’s been used as an abbreviation for Christ since at least the 15th century.

However, in the 20th century the abbreviation has become somewhat controversial among Christians because the “X” is now more commonly associated with a big unknown. So, yeah, use at your own risk. Or maybe stick with Happy Newtonmas after all?

Well that is controversial too because it’s not at all clear that Newton’s birthday is actually December 25th. Isaac Newton was born on December 25, 1642 in England.

But. At that time, the English still used the Julian calendar. That is already confusing because the new, Gregorian calendar was introduced by Pope Gregory in 1582, well before Newton’s birth. It replaced the older, Julian calendar, which didn’t properly match the months to the orbit of the Earth around the Sun.

Yet, when Pope Gregory introduced the new calendar, the British were mostly Anglicans and they weren’t going to have some pope tell them what to do. So for over a hundred years, people in Great Britain celebrated Christmas 10 or 11 days later than most of Europe. Newton was born during that time. Great Britain eventually caved in and adopted the Gregorian calendar, passing a law in 1751 that, in September 1752, moved all dates forward by 11 days overnight. So now Newton would have celebrated his birthday on January 4th, except by that time he was dead.

However, it gets more difficult because these two calendars keep drifting apart, so if you ran the old Julian calendar forward until today, then December 25th according to the old calendar would now actually be January 7th. So, yeah, I think sorting this out will greatly enrich your conversation over Christmas lunch. By the way, Greece didn’t adopt the Gregorian calendar until 1923. Except for the Monastic Republic of Mount Athos, of course, which still uses the Julian calendar.
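If you want to redo the calendar arithmetic yourself, a common closed form for the Julian-Gregorian gap (ignoring the exact within-year switchover date; valid for recent centuries) reproduces all the numbers above:

```python
def julian_gregorian_gap(year):
    # Days the Julian calendar trails the Gregorian in a given century:
    # every century year not divisible by 400 adds one day to the gap.
    return year // 100 - year // 400 - 2

gap_newton = julian_gregorian_gap(1642)  # 10: Julian Dec 25 -> Gregorian Jan 4
gap_reform = julian_gregorian_gap(1752)  # 11: Britain skipped 11 days in 1752
gap_today  = julian_gregorian_gap(2021)  # 13: Julian Dec 25 -> Gregorian Jan 7
```

So the gap was 10 days when Newton was born, 11 days when Britain switched, and is 13 days today, which is why "Julian Christmas" now falls on January 7th.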

Regardless of exactly which day you think Newton was born, there’s no doubt he changed the course of science and with that the course of the world. But Newton was also very religious. He spent a lot of time studying the Bible looking for numerological patterns. On one occasion he argued, I hope you’re sitting, that the Pope is the anti-Christ, based in part on the appearance of the number 666 in scripture. Yeah, the Brits really didn’t like the Catholics, did they.

Newton also, at the age of 19 or 20, had a notebook in which he kept a list of sins he had committed such as eating an apple at the church, making pies on Sunday night, “Robbing my mother’s box of plums and sugar” and “Using Wilford’s towel to spare my own”. Bad boy. Maybe more interesting is that Newton recorded his secret confessions in a cryptic code that was only deciphered in 1964. There are still four words that nobody has been able to crack. If you get bored over Christmas, you can give it a try yourself, link’s in the info below.

Newton may now be most famous for inventing calculus and for Newton’s laws and Newtonian gravity, all of which sound like he was a pen on paper person. But he did some wild self-experiments that you can put to good use for your Christmas conversations. Merry Christmas, did you know that Newton once poked a needle into his eye? I think this will go really well.

Not a joke. In 1666, when he was 23, Newton, according to his own records, poked his eye with a bodkin, which is more or less a blunt stitching needle. In his own words “I took a bodkine and put it between my eye and the bone as near to the backside of my eye as I could: and pressing my eye with the end of it… there appeared several white dark and coloured circles.”

If this was not crazy enough, in the same year, he also stared at the Sun taking great care to first spend some time in a dark room so his pupils would be wide open when he stepped outside. Here is how he described this in a letter to John Locke 30 years later:
“in a few hours’ time I had brought my eyes to such a pass that I could look upon no bright object with either eye but I saw the sun before me, so that I could neither write nor read... I began in three or four days to have some use of my eyes again & by forbearing a few days longer to look upon bright objects recovered them pretty well.”
Don’t do this at home.

Since we’re already talking about needles, did you know that pine needles are edible? Yes, they are edible and some people say they taste like vanilla, so you can make ice cream with them. Indeed, they are a good source of vitamin C and were once used by sailors to treat and prevent scurvy.

By some estimates, scurvy killed more than 2 million sailors between the 16th and 18th centuries. On a long trip it was common to lose about half of the crew, but in extreme cases it could be worse. On his first trip to India, from 1497 to 1499, Vasco da Gama reportedly lost 116 of 170 men, almost all to scurvy.

But in 1536, the crew of the French explorer Jacques Cartier was miraculously healed from scurvy upon arrival in what is now Québec. The miracle cure was a drink that the Iroquois prepared by boiling the leaves and bark of an evergreen tree; the brew was rich in vitamin C.

So, if you’ve run out of emphatic sounds to make in response to Aunt Emma, just take a few bites off the Christmas tree, I’m sure that’ll lighten the mood a bit.

Speaking of lights. Christmas lights were invented by none other than Thomas Edison. According to the Library of Congress, Edison created the first strand of electric lights in 1880, and he hung them outside his laboratory in New Jersey during Christmastime. Two years later, his business partner Edward Johnson had the idea to wrap a strand of hand-wired red, white, and blue bulbs around a Christmas tree. So maybe take a break from worshipping Newton and spare a thought for Edison.

But watch out when you put the lights on the tree. According to the United States Consumer Product Safety Commission, in 2018, 17,500 people sought treatment at a hospital for injuries sustained while decorating for the holiday.

And this isn’t the only health risk on Christmas. In 2004 researchers in the United States found that people are much more likely to die from heart problems than expected, both on Christmas and on New Year’s. A 2018 study from Sweden made a similar finding. The authors of the 2004 study speculate that the reason may be that people delay seeking treatment during the holidays. So if you feel unwell, don’t put off seeing a doctor, even if it’s Christmas.

And since we’re already handing out the cheerful news, couples are significantly more likely to break up in the weeks before Christmas. This finding comes from a 2008 paper by British researchers who analyzed Facebook status updates. Makes you wonder: do people break up because they can’t agree on which day Newton was born, or do they just not want to see their in-laws? Let me know what you think in the comments.

Saturday, December 18, 2021

Does Superdeterminism save Quantum Mechanics? Or Does It Kill Free Will and Destroy Science?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Superdeterminism is a way to make sense of quantum mechanics. But some physicists and philosophers have argued that if one were to allow it, it would destroy science. Seriously. How does superdeterminism work, what is it good for, and why does it allegedly destroy science? That’s what we’ll talk about today.

First things first, what is superdeterminism? Above all, it’s a terrible nomenclature because it suggests something more deterministic than deterministic and how is that supposed to work? Well, that’s just not how it works. Superdeterminism is exactly as deterministic as plain old vanilla determinism. Think Newton’s laws. If you know the initial position and velocity of an arrow, you can calculate where it will land, at least in principle. That’s determinism: Everything that happens follows from what happened earlier. But in quantum mechanics we can only predict probabilities for measurement outcomes, rather than the measurement outcomes themselves. The outcomes are not determined, so quantum mechanics is indeterministic.
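To see what plain determinism means in practice, here is a minimal Python sketch of the arrow example (no air resistance, flat ground; the numbers are illustrative assumptions, not anything from the video):

```python
import math

# Plain-vanilla determinism, Newton style: given the arrow's initial
# speed and launch angle, the landing point follows uniquely.
g = 9.81  # gravitational acceleration in m/s^2

def landing_distance(speed, angle_deg):
    angle = math.radians(angle_deg)
    # Range formula R = v^2 * sin(2*theta) / g, obtained by solving
    # x(t) = v*cos(theta)*t, y(t) = v*sin(theta)*t - g*t^2/2 for y(T) = 0.
    return speed**2 * math.sin(2 * angle) / g

distance = landing_distance(50, 45)  # about 255 m
```

Run it twice, or a thousand times: identical initial conditions give the identical landing point. That, and nothing more, is determinism.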

Superdeterminism returns us to determinism. According to superdeterminism, the reason we can’t predict the outcome of a quantum measurement is that we are missing information. This missing information is usually referred to as the “hidden variables”. I’ll tell you more about those later. But didn’t this guy what’s his name Bell prove that hidden variables are wrong?

No, he didn’t, though this is a very common misunderstanding, depressingly, even among physicists. Bell proved that a hidden variables theory which is (a) local and (b) fulfills an obscure assumption called “statistical independence” must obey an inequality, now called Bell’s inequality. We know experimentally that this inequality is violated. It follows that any local hidden variables theory that fits our observations has to violate statistical independence.
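To make the inequality concrete, here is a small Python check using the CHSH form of Bell’s theorem: a local hidden variables theory obeying statistical independence must give |S| ≤ 2, while quantum mechanics predicts the correlation E(a, b) = −cos(a − b) for spin measurements on a singlet pair. The settings below are the standard ones that maximize the quantum value.

```python
import math

# Quantum prediction for the singlet-state correlation between
# detectors at angles a and b.
def E(a, b):
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two detector settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two detector settings

# CHSH combination: bounded by 2 for local hidden variables theories
# that satisfy statistical independence.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# |S| = 2*sqrt(2) ~ 2.83 > 2: the inequality is violated, as experiments confirm.
```

That violation is exactly the experimental fact the text refers to: something in the list (locality, statistical independence) has to give.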

If statistical independence is violated, this means that what a quantum particle does depends on what you measure. And that’s how superdeterminism works: what a quantum particle does depends on what you measure. I’ll give you an example in a moment. But first let me tell you where the name superdeterminism comes from and why physicists get so upset if you mention it.

Bell didn’t like the conclusion which followed from his own mathematics. Like so many before and after him, Bell wanted to prove Einstein wrong. If you remember, Einstein had said that quantum mechanics can’t be complete because it has a spooky action at a distance. That’s why Einstein thought quantum mechanics is just an average description for a hidden variables theory. Bell in contrast wanted physicists to accept this spooky action. So he had to somehow convince them that this weird extra assumption, statistical independence, makes sense. In a 1983 BBC interview he said the following:
“There is a way to escape the inference of superluminal speeds and spooky action at a distance. But it involves absolute determinism in the universe, the complete absence of free will. Suppose the world is super-deterministic, with not just inanimate nature running on behind-the-scenes clockwork, but with our behavior, including our belief that we are free to choose to do one experiment rather than another, absolutely predetermined, including the “decision” by the experimenter to carry out one set of measurements rather than another, the difficulty disappears.”
This is where the word “superdeterminism” comes from. Bell called a violation of statistical independence “superdeterminism” and claimed that it would require giving up free will. He argued that there are only two options: either accept spooky action and keep free will which would mean that Bell was right, or reject spooky action but give up free will which would mean that Einstein was right. Bell won. Einstein lost.

Now you all know that I think free will is logically incoherent nonsense. But even if you don’t share my opinion, Bell’s argument just doesn’t work. Spooky action at a distance doesn’t make any difference for free will because the indeterministic processes in quantum mechanics are not influenced by anything, so they are not influenced by your “free will,” whatever that may be. And in any case, throwing out determinism just because you don’t like its consequences is really bad science.

Nevertheless, the mathematical assumption of “statistical independence” has since widely been called the “free will” assumption, or the “free choice” assumption. And physicists stopped questioning it to the point that today most of them don’t know that Bell’s theorem even requires this additional assumption.

This is not a joke. All the alleged strangeness of quantum mechanics has its origin in nomenclature. It was forced on us by physicists who called a mathematical statement the “free will assumption”, never mind that it’s got nothing to do with free will, and then argued that one must believe in it because one must believe in free will.

If you find this hard to believe, I can’t blame you, but let me read you a quote from a book by Nicolas Gisin, who is Professor for Physics in Geneva and works on quantum information theory.
“This hypothesis of superdeterminism hardly deserves mention and appears here only to illustrate the extent to which many physicists, even among specialists in quantum physics, are driven almost to despair by the true randomness and nonlocality of quantum physics. But for me, the situation is very clear: not only does free will exist, but it is a prerequisite for science, philosophy, and our very ability to think rationally in a meaningful way. Without free will, there could be no rational thought. As a consequence, it is quite simply impossible for science and philosophy to deny free will.”
Keep in mind that superdeterminism just means statistical independence is violated which has nothing to do with free will. However, even leaving that aside, fact is, the majority of philosophers either believe that free will is compatible with determinism, about 60% of them, or they agree with me that free will doesn’t exist anyway, about 10% of them.

But in case you’re still not convinced that physicists actually bought Bell’s free will argument, here is another quote from a book by Anton Zeilinger, one of the probably most famous physicists alive. Zeilinger doesn’t use the word superdeterminism in his book, but it is clear from the context that he is justifying the assumption of statistical independence. He writes:
“[W]e always implicitly assume the freedom of the experimentalist. This is the assumption of free will… This fundamental assumption is essential to doing science.”
So he too bought Bell’s claim that you have to pick between spooky action and free will. At this point you must be wondering just what this scary mathematical expression is that supposedly eradicates free will. I am about to reveal it, brace yourself. Here we go.
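For readers without the video animation, the condition in question is simply the statement that the distribution of the hidden variables is not independent of the detector settings (a minimal rendering; the symbols are explained later in the transcript):

```latex
\rho(\lambda \mid a, b) \neq \rho(\lambda)
```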

I assume you are shivering in fear of being robbed of your free will if one ever were to allow this. And not only would it rob you of free will, it would destroy science. Indeed, already in 1976, Shimony, Horne, and Clauser argued that doubting statistical independence must be verboten. They wrote: “skepticism of this sort will essentially dismiss all results of scientific experimentation”. And here is one final quote about superdeterminism from the philosopher Tim Maudlin: “besides being insane, [it] would undercut scientific method.”

As you can see, we have no shortage of men who have strong opinions about things they know very little about, but not like this is news. So now let me tell you how superdeterminism actually works, using the double slit experiment as an example.

In the double slit experiment, you send a coherent beam of light at a plate with two thin openings, that’s the double slit. On the screen behind the slit you then see an interference pattern. The interference isn’t in and by itself a quantum effect, you can do this with any type of wave, water waves or sound waves for example.

The quantum effects only become apparent when you let a single quantum of light go through the slits at a time. Each of those particles makes a dot on the screen. But the dots build up… to an interference pattern. What this tells you is that even single particles act like waves. This is why we describe them with wave-functions usually denoted psi. From the wave-function we can calculate the probability of measuring the particle in a particular place, but we can’t calculate the actual place.

Here’s the weird bit. If you measure which slit the particles go through, the interference pattern vanishes. Why? Well, remember that the wave-function – even that of a single particle – describes probabilities for measurement outcomes. In this case the wave-function would first tell you the particle goes through the left and right slit with 50% probability each. But once you measure the particle you know 100% where it is.

So when you measure at which slit the particle is you have to “update” the wave-function. And after that, there is nothing coming from the other slit to interfere with. You’ve destroyed the interference pattern by finding out what the wave did. This update of the wave-function is sometimes also called the collapse or the reduction of the wave-function. Different words, same thing.
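Here is a small Python sketch of that logic (the numbers are illustrative assumptions; it treats the slits as two point sources in the small-angle approximation). Adding the amplitudes from the two slits keeps the cross term that makes fringes; updating to a definite slit means adding probabilities instead, which gives a flat pattern.

```python
import cmath
import math

# Toy double slit: two point sources, far-field screen.
wavelength = 500e-9  # green-ish light, 500 nm
d = 10e-6            # slit separation
L = 1.0              # slit-to-screen distance

def intensity(x, slit_measured):
    # Small-angle phases from slits at +d/2 and -d/2 to screen position x.
    phi1 = 2 * math.pi / wavelength * x * (+d / 2) / L
    phi2 = 2 * math.pi / wavelength * x * (-d / 2) / L
    a1, a2 = cmath.exp(1j * phi1), cmath.exp(1j * phi2)
    if slit_measured:
        return abs(a1)**2 + abs(a2)**2  # probabilities add: flat pattern
    return abs(a1 + a2)**2              # amplitudes add: interference fringes

bright = intensity(0.0, False)               # central bright fringe: 4
dark = intensity(wavelength * L / (2 * d), False)  # first dark fringe: 0
flat = intensity(wavelength * L / (2 * d), True)   # which-slit measured: 2
```

With the which-slit measurement the intensity is the same everywhere; without it, the cross term swings the intensity between 0 and 4.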

The collapse of the wave-function doesn’t make sense as a physical process because it happens instantaneously, and that violates the speed of light limit. Somehow the part of the wave-function at the one slit needs to know that a measurement happened at the other slit. That’s Einstein’s “spooky action at a distance.”

Physicists commonly deal with this spooky action by denying that wave-function collapse is a physical process. Instead, they argue it’s just an update of information. But information about… what? In quantum mechanics there isn’t any further information beyond the wave-function. Interpreting the collapse as an information update really only makes sense in a hidden variables theory. In that case, a measurement tells you more about the possible values of the hidden variables.

Think about the hidden variables as labels for the possible paths that the particle could take. Say the labels 1, 2, 3 go to the left slit, the labels 4, 5, 6 go to the right slit, and the labels 7 to 12 go through both. The particle really has only one of those hidden variables, but we don’t know which. Then, if we measure the particle at the left slit, that simply tells us that the hidden variable was in the 1, 2, 3 batch; if we measure it at the right slit, it was in the 4, 5, 6 batch; and if we measure it on the screen, it was in the 7 to 12 batch. No mystery, no instantaneous collapse, no non-locality. But it means that the particle’s path depends on what measurement will take place, because the particles must already have known when they set off whether to pick one of the two slits or go through both. This is just what observations tell us.
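The bookkeeping in this toy example is so simple you can write it in a few lines of Python (labels and batches exactly as in the text; this is an illustration of the information update, not a physical model):

```python
# Hidden variables as labels for the possible paths:
# labels 1-3 go through the left slit, 4-6 through the right, 7-12 through both.
paths = {1: "left", 2: "left", 3: "left",
         4: "right", 5: "right", 6: "right",
         **{k: "both" for k in range(7, 13)}}

def measure(knowledge, outcome):
    # A "measurement" is just an information update: keep only the labels
    # consistent with the outcome. Nothing physical collapses anywhere.
    return {k for k in knowledge if paths[k] == outcome}

prior = set(paths)                  # before measuring: any label is possible
posterior = measure(prior, "left")  # detected at the left slit -> {1, 2, 3}
```

The “collapse” is nothing but discarding labels that are inconsistent with what you found, which is local and instantaneous in the same harmless sense that learning a card’s suit is.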

And that’s what superdeterminism is. It takes our observations seriously. What the quantum particle does depends on what measurement will take place. Now you may say uhm drawing lines on YouTube isn’t proper science and I would agree. If you’d rather see equations, you’re most welcome to look at my papers instead.

Let us then connect this with what Bell and Zeilinger were talking about. Here is again the condition that statistical independence is violated. The lambda here stands for the hidden variables, and rho is the probability distribution of the hidden variables. This distribution tells you how likely it is that the quantum particle will do any one particular thing. In Bell’s theorem, a and b are the measurement settings of two different detectors at the time of measurement. And this bar here means you’re looking at a conditional probability, so that’s the probability for lambda given a particular combination of settings. When statistical independence is violated, this means that the probability for a quantum particle to do a particular thing depends on the detector settings at the time of measurement.

Since this is a point that people often get confused about, let me stress that it doesn’t matter what the setting is at any earlier or later time. This never appears in Bell’s theorem. You only need to know what’s the measurement that actually happens. It also doesn’t matter how one chooses the detector settings, that never makes any appearance either. And contrary to what Bell and Zeilinger argued, this relation does not restrict the freedom of the experimentalist. Why would it? The experimentalist can measure whatever they like, it’s just that what the particle does depends on what they measure.

And of course this won’t affect the scientific method. What these people were worrying about is that random control trials would be impossible if choosing a control group could depend on what you later measure.

Suppose you randomly assign people into two groups to test whether a vaccine is effective. People in one group get the vaccine, people in the other group a placebo. The group assignment is the “hidden variable.” If someone falls ill, you do a series of tests to find out what they have, so that’s the measurement. If you think that what happens to people depends on what measurement you will do on them, then you can’t draw conclusions about the efficacy of the vaccine. Alrighty. But you know what, people aren’t quantum particles. And believing that superdeterminism plays a role for vaccine trials is like believing Schrödinger’s cat is really dead and alive.

The correlation between the detector settings and the behavior of a quantum particle which is the hallmark of superdeterminism only occurs when quantum mechanics would predict a non-local collapse of the wave-function. Remember that’s what we need superdeterminism for: that there is no spooky action at a distance. But once you have measured the quantum state, that’s the end of those violations of statistical independence.

I should probably add that a “measurement” in quantum mechanics doesn’t actually require a measurement device. What we call a measurement in quantum mechanics is really any sufficiently strong or frequent interaction with an environment. That’s why we don’t see dead and alive cats. Because there’s always some environment, like air, or the cosmic microwave background. And that’s also why we don’t see superdeterministic correlations in people.

Okay, so I hope I’ve convinced you that superdeterminism doesn’t limit anyone’s free will and doesn’t kill science, now let’s see what it’s good for.

Once you understand what’s going on with the double slit, all the other quantum effects that are allegedly mysterious or strange also make sense. Take for example a delayed choice experiment. In such an experiment, it’s only after the particle started its path that you decide whether to measure which slit it went through. And that gives the same result as the usual double slit experiment.

Well, that’s entirely unsurprising. If you considered measuring something but eventually didn’t, that’s just irrelevant. The only relevant thing is what you actually measure. The path of the particle has to be consistent with that. Or take the bomb experiment that I talked about earlier. Totally unsurprising, the photon’s path just depends on what you measure. Or the quantum eraser. Of course the path of the particle depends on what you measure. That’s exactly what superdeterminism tells you!

So, in my eyes, all those experiments have been screaming in our faces for half a century that what a quantum particle does depends on the measurement setting, and that’s superdeterminism. The good thing about superdeterminism is that since it’s local it can easily be combined with general relativity, so it can help us find a theory of quantum gravity.

Let me finally talk about something less abstract, namely how one can test it. You can’t test superdeterminism by measuring violations of Bell’s inequality because it doesn’t fulfil the assumptions of Bell’s theorem, and so doesn’t have to obey the inequality. But superdeterminism generically predicts that measurement outcomes in quantum mechanics are actually determined, and not random.

Now, any theory that solves the measurement problem has to be non-linear, so the reason we haven’t noticed superdeterminism is almost certainly that all our measurements so far have been well in the chaotic regime. In that case trying to make a prediction for a measurement outcome is like trying to make a weather forecast for next year. The best you can do is calculate average values. That’s what quantum mechanics gives us.
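A standard toy example of this, not specific to superdeterminism, is the logistic map: fully deterministic, yet after a few dozen steps a change in the twelfth decimal of the initial condition wipes out all predictability, while long-run averages stay rock solid.

```python
# Deterministic chaos in one line of dynamics: x -> 4x(1-x).
def orbit(x, n):
    xs = []
    for _ in range(n):
        x = 4 * x * (1 - x)
        xs.append(x)
    return xs

a = orbit(0.2, 60)
b = orbit(0.2 + 1e-12, 60)  # shift the initial condition by one part in 10^12
# After ~40 steps the two deterministic trajectories bear no resemblance:
spread = max(abs(x - y) for x, y in zip(a[40:], b[40:]))
# Yet the long-run average is perfectly stable (~0.5 for this map):
mean = sum(orbit(0.2, 100_000)) / 100_000
```

In a superdeterministic theory, quantum randomness would plausibly be of exactly this kind: determined, but in practice only averages are computable.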

But if you want to find out whether measurement outcomes are actually determined, you have to get out of the chaotic regime. This means looking at small systems at low temperatures and measurements in a short sequence, ideally on the same particle. Those measurements are currently just not being done. However, there is a huge amount of progress in quantum technologies at the moment, especially in combination with AI which is really good for finding new patterns. And this makes me think that at some point it’ll just become obvious that measurement outcomes are actually much more predictable than quantum mechanics says. Indeed, maybe someone already has the data, they just haven’t analyzed it the right way.

I know it’s somewhat boring coming from a German but I think Einstein was right about quantum mechanics. Call me crazy if you want but to me it’s obvious that superdeterminism is the correct explanation for our observations. I just hope I’ll live long enough to see that all those men who said otherwise will be really embarrassed.

Thursday, December 16, 2021

Public Event in Canada coming up in April

Yes, I have taken up traveling again and optimistically agreed to a public event in Vancouver on April 14, together with Lawrence Krauss and Chris Hadfield. If you're in the area, it would be lovely to see you there! Don't miss the trailer video.

Tickets will be on sale from Jan 1st on this website.

Saturday, December 11, 2021

Is the Hyperloop just Hype?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

A few weeks ago I talked about hypersonic flight and why that doesn’t make sense to me. A lot of you asked what’s with Elon Musk’s hyperloop. Does it make any more sense to push high speed trains through vacuum tubes? Can we maybe replace flights with hyperloops? And what’s a hyperloop in the first place? That’s what we’ll talk about today.

As I told you in my previous video, several companies have serious plans to build airplanes that fly more than five times the speed of sound. But physics gets in the way. At such high speed, air resistance rises rapidly. Even if you manage to prevent the plane from melting or simply falling into pieces, you still need a lot of fuel to counter the pressure of the atmosphere. You could instead try flying so high up that the atmosphere is incredibly thin. But you have to get there in the first place, and that too consumes a lot of fuel.

So why don’t we instead build airtight tubes, pump as much air out of them as possible, and then accelerate passenger capsules inside until they exceed the speed of sound? That’s the idea of the “hyperloop” which Elon Musk would like to see become reality. He is a busy man, however, so he made his take on the idea available open source and hopes someone else does it.

The idea of transporting things by pushing them through tubes isn’t exactly new. It’s been used since the nineteenth century to transport small goods and mail. You still find those tube systems today in old office buildings or in hospitals.

In 1908, Joseph Stoetzel, an inventor from Chicago, sent his own child through such a tube to prove it was safe. Yeah, I’m not sure what ethics committees would say about that today.

The idea to create a vacuum in a tube and put a train inside is also not new. It was proposed as early as 1904 by the engineer and physicist Robert Goddard; the concept later came to be known as the “vactrain”.

The quality of a vacuum can be measured either in pressure or in percent. A zero percent vacuum is no vacuum, so just standard atmospheric pressure. A one hundred percent vacuum would be no air at all. An interest group in Switzerland has outlined a plan to build a network of high speed trains that would use tunnels with a 93 percent vacuum in which trains could reach about 430 kilometers per hour. That’s about 270 miles per hour if you’re American or about one point four times 10 to the 9 hands per fortnight if you’re British.
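If you want to check the unit conversions yourself (hand = 4 inches = 0.1016 m, fortnight = 14 days):

```python
# Sanity-check the speed conversions from the text.
kmh = 430                    # proposed Swiss vactrain speed in km/h
mph = kmh / 1.609344         # ~267, i.e. "about 270 miles per hour"

# 430 km/h in hands per fortnight: meters per fortnight divided by
# the length of a hand in meters.
hands_per_fortnight = kmh * 1000 * 24 * 14 / 0.1016  # ~1.4e9
```

So the joke checks out: 430 km/h really is about one point four times 10 to the 9 hands per fortnight.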

It doesn’t look like the Swiss plan has much of a chance to become reality, but about 10 years ago Elon Musk put forward his plan for the hyperloop. Its first version should reach about 790 miles per hour, which is just barely above the speed of sound. But you should think of it as a proof of concept. If it works for reaching the speed of sound, you can probably go above that too. Once you’ve removed the air, speed is really far less of a problem.

Hyperloop is not the name of a company, but the name for the conceptual idea, so there are now a number of different companies trying to turn the idea into reality. The first test for the hyperloop with passengers took place last year with technology from the company Virgin Hyperloop. But there are other companies working on it too, for example Hyperloop Transportation Technologies which is based in California, or TransPod which is based in Canada.

Why the thing’s called the hyperloop to begin with is somewhat of a mystery, probably not because it’s hype going around in a loop. More likely because it should one day reach hypersonic speeds and go in a loop, maybe around the entire planet. Who knows.

Elon did his first research on the hyperloop using a well-known rich man’s recipe: let others do the job for free. From 2015 to 2019, Musk’s company SpaceX sponsored a competition in which teams presented their prototypes to be tested in a one-kilometer tube. All competitions were won by the Technical University of Munich, and their design served as the basis for further developments.

So what are the details of the hyperloop? In 2013 Elon Musk published a white paper called “Hyperloop Alpha” in which he proposed that the capsules would carry 28 passengers each through a 99.9 percent vacuum, that’s about 100 Pascal, and they would be levitated by air-cushions. The idea was that you suck in the remaining air in the tunnel from the front of the capsule and blow it out at the bottom.

This sounds good at first, but that’s where the technical problems begin. If you crunch the numbers, then the gap which the air-cushion creates between the bottom of the capsule and the tube is about one millimeter. This means if there’s any bump or wiggle or two people stand up to go to the loo at the same time, the thing’s going to run into the ground. That’s not good. This is why the companies working on the hyperloop have abandoned the air cushion idea and instead go for magnetic levitation. The best way to achieve the strong fields necessary for magnetic levitation is to use superconducting magnets.

The downside is that they need to be cooled with expensive cryogenic systems, but magnetic levitation can create a gap of about ten centimeters between the passenger capsule and the tunnel which should be enough to comfortably float over all bumps and wiggles.

But there are lots of other technical problems to solve and they’re all interconnected. This figure from Virgin Hyperloop explains it all in one simple diagram. Just in case that didn’t explain it, let me mention some of the biggest problems.

First, you need to maintain the vacuum in the tube, possibly over hundreds of kilometers, and the tube needs to have exits, both regular ones at the stations and emergency exits in between. If you put the tube in a tunnel, you have to cope with geological stress. But putting the tube on pillars over ground has its own problems.

A group of researchers from the UK showed last year that at such high speeds as the hyperloop is supposed to go, the risk of resonance catastrophes significantly increases. In a nutshell this means that the pillars would have to be much stronger than usual and have extra vibration dampers.
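To get a feel for why high speed makes resonance more likely, here is a rough sketch with made-up numbers (my own illustration, not taken from the UK study): a capsule crossing evenly spaced pillars excites the track periodically, at a frequency of speed divided by spacing, and trouble starts when that frequency approaches the structure’s natural frequency.

```python
# Rough illustration of the resonance concern (hypothetical numbers).
# A capsule passing evenly spaced pillars drives the track periodically.
def excitation_frequency_hz(speed_m_s: float, pillar_spacing_m: float) -> float:
    return speed_m_s / pillar_spacing_m

speed = 1000 / 3.6   # 1000 km/h in m/s, roughly the target speed
spacing = 25.0       # assumed pillar spacing in meters

f_drive = excitation_frequency_hz(speed, spacing)
print(f"Excitation frequency: {f_drive:.1f} Hz")  # ~11 Hz
# If the pillars' natural bending frequency sits anywhere near this,
# amplitudes grow with each pass -- hence stiffer pillars and extra dampers.
```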

The other problem with putting the tube over ground is that temperature changes will create stress on the tube by expansion and contraction. That’s a bigger problem than you may expect because the vacuum slows down the equilibration of temperature changes in the tube. Since temperature changes tend to be larger over ground, digging a tunnel seems the way to go. Unfortunately, digging tunnels is really expensive, so there’s a lot of upfront investment.

This brings me to the second problem. To keep costs low you want to keep the tunnel small, but if the space between the capsule and the tunnel wall is too small you can’t reach high speeds despite near vacuum.

The issue is that even though the air pressure is so low, there’s still air in that tunnel which needs to go around the capsule. If the air can’t go around the capsule, it’ll be pushed ahead of the capsule, limiting its speed. This is known as the Kantrowitz limit. Exactly when this happens is difficult to calculate because the capsules trigger acoustic waves that go back and forth through the tunnels.
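The Kantrowitz limit comes out of standard compressible-flow theory. The sketch below uses only the isentropic choking condition, ignoring shock losses, so it is a simplification of the full limit: for a given Mach number it estimates the minimum fraction of the tube’s cross-section that must stay open around the capsule.

```python
# Simplified Kantrowitz-style estimate via the isentropic area-Mach relation:
# A/A* = (1/M) * [ (2/(g+1)) * (1 + (g-1)/2 * M^2) ] ** ((g+1)/(2*(g-1)))
# The bypass area around the capsule must be at least the choking area A*,
# otherwise air piles up in front of the capsule.
GAMMA = 1.4  # ratio of specific heats for air

def min_bypass_fraction(mach: float, g: float = GAMMA) -> float:
    """Minimum open fraction of the tube cross-section at a given Mach number."""
    exponent = (g + 1) / (2 * (g - 1))
    area_ratio = (1 / mach) * ((2 / (g + 1)) * (1 + (g - 1) / 2 * mach**2)) ** exponent
    return 1 / area_ratio  # A*/A: fraction of the tube that must stay open

print(f"At Mach 0.5: {min_bypass_fraction(0.5):.2f}")  # ~0.75
```

In other words, already at half the speed of sound only about a quarter of the tube’s cross-section can be filled by the capsule, which is why small tubes and high speeds don’t go together.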

The third problem is that you don’t want the passengers to stick flat to the walls each time the capsule changes direction. But the forces coming from the change of direction increase with the square of the velocity. They also go down inversely with the increase of the radius of curvature though. The radius of curvature is loosely speaking the radius of a circle you can match to a stretch of a curve, in this case to a stretch of the hyperloop track. To keep the acceleration inside the capsule manageable, if you double the speed you have to increase the radius of curvature by a factor of four. This means basically that the hyperloop has to go in almost perfectly straight lines, or slow down dramatically to change direction.
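The scaling in this paragraph is just the centripetal acceleration formula, a = v²/r. A quick sketch with assumed comfort numbers (my own illustration):

```python
# Centripetal acceleration: a = v^2 / r.  Doubling v at fixed a needs 4x the radius.
G = 9.81  # m/s^2

def required_radius_m(speed_m_s: float, max_lateral_g: float) -> float:
    return speed_m_s**2 / (max_lateral_g * G)

v = 1000 / 3.6                      # 1000 km/h in m/s
r1 = required_radius_m(v, 0.5)      # allow half a g of lateral acceleration
r2 = required_radius_m(2 * v, 0.5)  # double the speed

print(f"Radius at v:  {r1/1000:.1f} km")  # ~15.7 km
print(f"Radius at 2v: {r2/1000:.1f} km")  # ~62.9 km, i.e. 4x larger
```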

And this brings me to the fourth problem. The thing shakes, it shakes a lot, and it’s not clear how to solve the problem. Take a look at the footage of the Virgin Hyperloop test and pay attention to the vibration.

It’s noticeable, but you may say it’s not too bad. Then again, they reached a velocity of merely 100 miles per hour. Passengers may be willing to accept the risk of dying from leaks in a capsule surrounded by near vacuum. But only as long as they’re comfortable before they die. I don’t think they’ll accept having their teeth shaken out along the way.

So the hyperloop is without doubt facing a lot of engineering challenges that will take time to sort out. However, I don’t really see a physical obstacle to making the hyperloop economically viable in the long run. Also, in the short run it doesn’t even have to be profitable. Some governments may want to build one just to show off their technological capital. Indeed, small scale hyperloops are planned for the near future in China, Abu Dhabi and India, though none of those will reach the speed of sound, and they’re basically just magnetically levitated trains in tubes.

What do governments think? In 2017, the Science Advisory Council of the Department of Transport in the UK looked at Musk’s 2013 white paper. They concluded that “because of the scale of the technical challenges involved, an operational Hyperloop system is likely to be at least a couple of decades away.” A few months ago they reasserted this position and stated that they still favor high speed rail. To me this assessment sounds reasonable for the time being.

In summary, the hyperloop isn’t just hype, it may one day become a real alternative to airplanes. But it’s probably not going to happen in the next two decades.

Saturday, December 04, 2021

Where is the antimatter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Is it ant-ee-matter or ant-ai-matter? What do ants think about it and why isn’t aunt pronounced ant. These are all good questions that we’ll not talk about today. Instead I want to talk about why the universe contains so little antimatter, why that’s not a good question, and if there might be stars made of antimatter in our galaxy. Welcome to another episode of science without the gobbledygook.

Last year, I took part in a panel debate on the question why there’s so little anti-matter in the universe. It was organized by the Institute of Art and Ideas in the UK and I was on the panel together with Lee Smolin from Perimeter Institute and Tara Shears who’s a professor for particle physics in Liverpool. This debate was introduced with the following description:

“Antimatter has fascinated since it was proposed by Dirac in the 1920s and confirmed with the discovery of the positron a few years later. Heisenberg - the father of modern physics - referred to its discovery as “the biggest jumps of all the big jumps in physics”. But there’s a fundamental problem. The theory predicts the disappearance of the universe within moments of its inception as matter and antimatter destroy each other in a huge cataclysm.”

Unfortunately, that’s wrong, and I don’t just mean that Heisenberg wasn’t the father of modern physics, I mean it’s wrong that Dirac’s theory predicts the universe shouldn’t exist. I mean, if it did, it would have been falsified.

When I did the debate I found this misunderstanding a little odd… but I have since noticed that it’s far more common than I realized. And it’s yet another one of those convenient misunderstandings that physicists don’t make a lot of effort clearing up. In this case it’s convenient because they want you to believe that there is this big mystery and to solve it, we need some expensive experiments.

So let’s have a look at what Dirac actually discovered. Dirac was bothered by the early versions of quantum mechanics because they were incompatible with Einstein’s theory of special relativity. In a nutshell this is because, if you just quickly look at the equation which they used at the time, the Schrödinger equation, you have one time derivative but two space derivatives. So space and time are not treated the same way which, according to Einstein, they should be.

Dirac found a way to remedy this problem, and his remedy is what’s now called the Dirac equation. You can see right away that it treats space and time derivatives the same way. And so it’s neatly compatible with Einstein’s Special Relativity.
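For the record, here is the contrast in symbols (standard textbook forms, not shown in the transcript): the Schrödinger equation is first order in time but second order in space, while the Dirac equation treats all four derivatives on the same footing.

```latex
% Schrödinger equation: one time derivative, two space derivatives
i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi + V\psi

% Dirac equation: first order in both time and space derivatives
\left(i\gamma^\mu \partial_\mu - m\right)\psi = 0, \quad \mu = 0,1,2,3
```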

But. Here’s the thing. He also found that the solutions to this equation always come in pairs. And, after some back and forth, it turned out that those pairs are particles of the same mass but of opposite electric charge. So, every particle has a partner particle with opposite charge, which is called its “anti-particle”. Though in some cases the particle and anti-particle are identical. That’s the case for example for the photon, which has no electric charge.

The first anti-particle was detected only a few years after Dirac’s derivation. In my mind this is one of the most impressive predictions in the history of science. Dirac solved a mathematical problem and from that he correctly predicted a new type of particle. But what Dirac’s equation tells you is really just that those particles exist. It tells you nothing about how many of them are around in the universe. Dirac’s equation doesn’t say any more about the amount of anti-matter in the universe than Newton’s law of gravity tells you about the number of apples on earth.

The number of particles of any kind in the universe is an initial condition, which means you have to specify this number at some moment in time, usually early in the universe, and then you can use Dirac’s and other equations to calculate what happens later. This means that the amount of particles is just a number that you must enter into the model for the universe. This number can’t be calculated, so one just extracts it from observations. It can’t be calculated because all our current theories work with differential equations. And those equations need the initial conditions to work. The only way you can explain an initial condition is with an even earlier initial condition. You’ll never get rid of it. I talked more about differential equations in an earlier video, so check this out for more.
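To see what “the equations need an initial condition” means in practice, here is a minimal toy example (my own illustration, nothing to do with Dirac’s equation specifically): the same differential equation evolved from two different initial values gives two different answers later, and nothing in the equation itself picks one.

```python
# A differential equation predicts how a quantity changes, not what it starts at.
# Toy example: dN/dt = -k * N, integrated with a simple Euler scheme.
def evolve(n_initial: float, rate: float, t_end: float, dt: float = 1e-3) -> float:
    n = n_initial
    for _ in range(int(round(t_end / dt))):
        n += -rate * n * dt  # the "law of motion"
    return n

# Same equation, same rate -- only the initial condition differs:
print(evolve(n_initial=1.0, rate=0.5, t_end=2.0))  # ~0.368
print(evolve(n_initial=2.0, rate=0.5, t_end=2.0))  # ~0.736
# The equation can't tell you which initial value describes our universe;
# that number has to come from observation.
```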

The supposed problem with the amount of antimatter is often called the matter anti-matter asymmetry or the baryon asymmetry problem. Those terms refer to the same issue. The argument is that if matter and anti-matter had been present in the early universe in exactly the same amounts, then they’d have annihilated and just left behind a lot of radiation. So how come we have all this stuff around?

Well, was it maybe because there wasn’t an equal amount of matter and anti-matter in the early universe? Indeed, that solves the problem.

Case closed? Of course not. Because physicists make a living from solving problems, so they have an incentive to create problems where there aren’t any. For anti-matter this works as follows. You can calculate that to correctly obtain the amount of radiation and matter we see today, the early universe must have contained just a tiny little bit more matter than anti-matter. A tiny little bit means a ratio of about 1.0000000001.
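What that ratio implies is simple arithmetic (a rough illustration; the full calculation involves the photon-to-baryon ratio): for every ten billion antiparticles there was roughly one extra particle, and that leftover sliver is everything we see today.

```python
# Rough arithmetic behind the "tiny little bit more matter" statement.
ratio = 1.0000000001   # matter / antimatter in the early universe (approx.)

antimatter = 1.0       # arbitrary units
matter = ratio * antimatter

leftover = matter - antimatter  # survives annihilation; everything else
                                # pairs up and turns into radiation
leftover_fraction = leftover / (matter + antimatter)

print(f"Leftover fraction: {leftover_fraction:.1e}")  # ~5e-11
```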

If it had been exactly one, there’d be only radiation left. But it wasn’t exactly one, so today there’s us.

Particle physicists now claim that the ratio should have been 1 exactly. That’s because for some reason they believe that this number is somehow better than the number which actually describes our observations. Why? I don’t know. Remember that none of our theories can actually predict this number one way or another. But once you insist that the ratio was actually one, you have to come up with a mechanism for how it ended up not being one. And then you can publish papers with all kinds of complicated solutions to the problem which you just created.

To see why I say this is a fabricated problem, let us imagine for a moment that a matter anti-matter ratio of exactly 1 did describe our universe. It doesn’t, of course, but just for the sake of the argument imagine the theory was such that 1 was indeed compatible with observation. Remember that this is the value that physicists argue the number should have. What would they say if it was actually correct? They would probably remember that Dirac’s theory actually did not predict that this number must have been exactly one. So then they’d ask why it is equal to one, just like they now ask why it’s 1.0000000001. As I said, it doesn’t matter what the number is, we can’t explain it one way or another.

You sometimes hear particle physicists claim that you can shed light on this alleged problem with particle colliders. They say this because you can use particle colliders to test for certain interactions that would shift the matter anti-matter ratio. However, these shifts are too small to bring us from 1 exactly to the observed ratio. This means not only is there no problem to begin with, even if you think there is a problem, particle colliders won’t solve it.

The brief summary is that the matter antimatter asymmetry is a pseudo-problem. It can be solved by using an initial value that agrees with observations, and that’s that. Of course it would be nice to have a deeper explanation for that initial value. But within the framework of the theories that we currently have, such an explanation is not possible. You always have to choose an initial state, and you do that just because it explains what we observe. If a physicist tries to tell you otherwise, ask them where they get their initial state from.

You may now wonder though how well we actually know how much anti-matter there is in the universe. If Dirac’s theory doesn’t predict how much it is, maybe we’re underestimating how much there is? Indeed, it isn’t entirely impossible that big chunks of antimatter float around somewhere in the universe. Weirder still, if you remember, anti-matter is identical to normal matter except for its electric charge.

So for all we know you can make stars and planets out of anti-matter, and they would work exactly like ours. Such “anti-stars” could survive in the present universe for quite a long time because there is very little matter in outer space, so they would annihilate only very slowly. But when the particles floating around in outer space come into contact with such an anti-star, the annihilation would create an unusual glow.

Astrophysicists can and have looked for such a glow that might indicate stars made of antimatter. Earlier this year, a group of researchers from Toulouse in France analyzed data from the Fermi telescope. They identified fourteen candidates for anti-stars in our galactic neighborhood which they are now investigating more closely. They also used this to put a bound on the overall fraction of anti-stars, which is about 2 per million in galactic environments similar to ours.

While such anti-stars could in principle exist, it’s very difficult to understand how they would have escaped annihilation during the formation of our galaxy. So it is a very speculative idea which is a polite way of saying I think it’s nonsense. But, well, when Dirac predicted anti-matter his colleagues also thought that was nonsense, so let’s wait and see what further observations show.

Saturday, November 27, 2021

Does Anti-Gravity Explain Dark Energy?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

One of the lesser known facts about me is that I’m one of the few world experts on anti-gravity. That’s because 20 years ago I was convinced that repulsive gravity could explain some of the puzzling observations astrophysicists have made which they normally attribute to dark matter and dark energy. In today’s video I’ll tell you why that didn’t work, what I learned from that, and also why anti-matter doesn’t fall up.

Newton’s law of gravity says that the gravitational force between two masses is the product of the masses, divided by the square of the distance between them. And then there’s a constant that tells you how strong the force is. For the electric force between two charges, we have Coulomb’s law, that says the force is the product of the charges, divided by the square of the distance between them. And again there’s a constant that tells you how strong the force is.
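Written out side by side (both in their standard textbook form), the parallel between the two laws is hard to miss:

```latex
% Newton's law of gravity
F_{\text{grav}} = G \,\frac{m_1 m_2}{r^2}

% Coulomb's law for the electric force
F_{\text{el}} = k_e \,\frac{q_1 q_2}{r^2}
```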

These two force laws look pretty much the same. But the electric force can be both repulsive and attractive, depending on whether you have two negative or two positive charges, or a positive and a negative one. The gravitational force, on the other hand, is always attractive because we don’t have any negative masses. But why not?

Well, we’ve never seen anything fall up, right? Then again, if there was any anti-gravitating matter, it would be repelled by our planet. So maybe it’s not so surprising that we don’t see any anti-gravitating matter here. But it could be out there somewhere. Why aren’t physicists looking for it?

One argument that you may have heard physicists bring up is that negative masses can’t exist because that would make the vacuum decay. That’s because, if negative masses exist, then so do negative energies. Because, E equals m c squared and so on. Yes, that guy again.

And if we had negative energies, then you could create pairs of particles with negative and positive energy from nothing and particle pairs would spontaneously pop up all around us. A theory with negative masses would therefore predict that the universe doesn’t exist, which is in conflict with evidence. I’ve heard that argument many times. Unfortunately it doesn’t work.

This argument doesn’t work because it confuses two different types of mass. If you remember, Einstein’s theory of general relativity is based on the Equivalence Principle, that’s the idea that gravitational mass equals inertial mass. The gravitational mass is the mass that appears in the law of gravity. The inertial mass is the mass that resists acceleration. But if we had anti-gravitating matter, only its gravitational mass would be negative. The inertial mass always remains positive. And since the energy-equivalent of inertial mass is as usual conserved, you can’t make gravitating and anti-gravitating particles out of nothing.

Some physicists may argue that you can’t make anti-gravity compatible with general relativity because particles in Einstein’s theory will always obey the equivalence principle. But this is wrong. Of course you can’t do it in general relativity as it is. But I wrote a paper many years ago in which I show how general relativity can be extended to include anti-gravitating matter, so that the equivalence principle only holds up to a sign. That means, gravitational mass is either plus or minus inertial mass. So, in theory that’s possible. The real problem is, well, we don’t see any anti-gravitating matter.

Could it maybe be that anti-matter anti-gravitates? Anti-matter is made of anti-particles. Anti-particles are particles which have the opposite electric charge to normal particles. The anti-particle of an electron, for example, is the same as the electron just with a positive electric charge. It’s called the positron. We don’t normally see anti-particles around us because they annihilate when they come in contact with normal matter. Then they disappear and leave behind a flash of light or, in other words, a bunch of photons. And it’s difficult to avoid contact with normal matter on a planet made of normal matter. This is why we observe anti-matter only in cosmic radiation or if it’s created in particle colliders.

But if there is so little anti-matter around us and it lasts only for such short amounts of time, how do we know it falls down and not up? We know this because both matter and anti-matter particles hold together the quarks that make up neutrons and protons.

Inside a neutron and proton there aren’t just three quarks. There’s really a soup of particles that holds the quarks together, and some of the particles in the soup are anti-particles. Why don’t those anti-particles annihilate? They do. They are created and annihilate all the time. We therefore call them “virtual particles.” But they still make a substantial contribution to the gravitational mass of neutrons and protons. That means, crazy as it sounds, the masses of anti-particles make a contribution to the total mass of everything around us. So, if anti-matter had a negative gravitational mass, the equivalence principle would be violated. It isn’t. This is why we know anti-matter doesn’t anti-gravitate.

But that’s just theory, you may say. Maybe it’s possible to find another theory in which anti-particles only anti-gravitate sometimes, so that the masses of neutrons and protons aren’t affected. I don’t know any way to do this consistently, but even so, three experiments at CERN are measuring the gravitational behavior of anti-matter.

Those experiments have been running for several years but so far the results are not very restrictive. The ALPHA experiment has ruled out that anti-particles have anti-gravitating masses, but only if the absolute value of the mass is much larger than the mass of the corresponding normal particle. This means so far they ruled out something one wouldn’t expect in the first place. However, give it a few more years and they’ll get there. I don’t expect surprises from this experiment. That’s not to say that I think it shouldn’t be done. Just that I think the theoretical arguments for why anti-matter can’t anti-gravitate are solid.

Okay, so anti-matter almost certainly doesn’t anti-gravitate. But maybe there’s another type of matter out there, something new entirely, and that anti-gravitates. If that was the case, how would it behave? For example, if anti-gravitating matter repels normal matter, then does it also repel among itself, like electrons repel among themselves? Or does it attract its own type?

This question, interestingly enough, is pretty easy to answer with a little maths. Forces are mediated by fields and those fields have a spin which is a positive integer, so, 0, 1, 2, etc.

For gravity, the gravitational mass plays the role of a charge. And the force between two charges is always proportional to the product of those charges times minus one to the power of the spin.

For a spin zero field, the force is attractive between like charges. But electromagnetism is mediated by a spin-1 field, that’s electromagnetic radiation or photons if you quantize it. And this is why, for electromagnetism, the force between like charges is repulsive but unlike charges attract. Gravity is mediated by a spin-2 field, that’s gravitational radiation or gravitons if you quantize it. And so for gravity it’s just the other way round again. Like charges attract and unlike charges repel. Keep in mind that for gravity the charge is the gravitational mass.
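The sign rule stated above fits in a few lines (my own toy encoding of the rule, not a derivation): the interaction between two charges goes like minus one to the power of the mediating field’s spin, times the product of the charges; with this convention a positive result means attraction.

```python
# Toy encoding of the sign rule: interaction ~ (-1)**spin * charge1 * charge2.
# Convention here: positive result means attraction, negative means repulsion.
def interaction(spin: int, q1: float, q2: float) -> str:
    sign = (-1) ** spin * q1 * q2
    return "attract" if sign > 0 else "repel"

print(interaction(spin=1, q1=+1, q2=+1))  # electromagnetism, like charges: repel
print(interaction(spin=1, q1=+1, q2=-1))  # unlike charges: attract
print(interaction(spin=2, q1=+1, q2=+1))  # gravity, like masses: attract
print(interaction(spin=2, q1=+1, q2=-1))  # positive and negative mass: repel
```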

This means, if there is anti-gravitating matter it would be repelled by the stuff we are made of, but clump among itself. Indeed, it could form planets and galaxies just like ours. The only way we would know about it is its gravitational effect. That sounds kind of like dark matter and dark energy, right?

Indeed, that’s why I thought it would be interesting. Because I had this idea that anti-gravitating matter could surround normal galaxies and push in on them. Which would create an additional force that looks much like dark matter. Normally the excess force we observe is believed to be caused by more positive mass inside and around the galaxies. But aren’t those situations very similar? More positive mass inside, or negative mass outside pushing in? And if you remember, the important thing about dark energy is that it has negative pressure. Certainly if you have negative energy you can also get negative pressure somehow.

So using anti-gravitating matter to explain dark matter and dark energy sounds good at first sight. But at second sight neither of those ideas work. The idea that galaxies would be surrounded by anti-gravitating matter doesn’t work because such an arrangement would be dramatically unstable. Remember the anti-gravitating stuff wants to clump just like normal matter. It wouldn’t enclose galaxies of normal matter, it would just form its own galaxies. So getting anti-gravity to explain dark matter doesn’t work even for galaxies, and that’s leaving aside all the other evidence for dark matter.

And dark energy? Well, the reason that dark energy makes the expansion of the universe speed up is actually NOT that it has negative pressure. It’s that the ratio of the energy density over the pressure is negative. And for anti-gravitating matter, they both turn negative so that the ratio is the same. Contrary to what you expect, that does not speed up the expansion of the universe.

Another way to see this is by noting that anti-gravitating matter is still matter and behaves like matter. Dark energy on the contrary does not behave like matter, regardless of what type of matter. This is why I get a little annoyed when people claim that dark energy is kind of like anti-gravity. It isn’t.

So in the end I developed this beautiful theory with a new symmetry between gravity and anti-gravity. And it turned out to be entirely useless. What did I learn from this? Well, that I wasted a considerable amount of my time on this was one of the reasons I began thinking about more promising ways to develop new theories. Clearly just guessing something because it’s pretty is not a good strategy. In the end, I wrote an entire book about this. Today I try to listen to my own advice, at least some of the time. I don’t always listen to myself, but sometimes it’s worth the effort.

Saturday, November 20, 2021

The 3 Best Explanations for the Havana Syndrome

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

In late 2016, United States diplomats working in Cuba began reporting health problems: persistent headaches, vertigo, blurred vision. They were dizzy. They heard sounds coming from nowhere. The affected diplomats were questioned and examined by doctors but the symptoms didn’t fit any known disease. They called it the “Havana Syndrome”.

More cases were later reported from China and Russia, Germany and Austria, even from near the White House. A CIA agent in Moscow was allegedly so badly affected that he had to retire. And just a few weeks ago another case made headlines: a CIA officer fell ill during a visit to India. What explanations have doctors put forward for those incidents? What could the Havana Syndrome be? That’s what we’ll talk about today.

Before we talk about sounds from nowhere, I want to briefly thank our supporters on Patreon. Your support makes it so much easier to keep this channel going. Special thanks to our biggest supporters in tier four. We couldn’t do it without you. And you too can help us. Go check out our Patreon page, or support us right here on YouTube by clicking on the “join” button just below this video. Now let’s look at the Havana Syndrome.

The “Havana” Syndrome got its name from the place where it was first reported in 2016. But since then it has appeared in many other countries. For this reason, a spokesperson from the US State Department told Newsweek “We refer to these incidents as ‘unexplained health incidents’ or ‘UHIs.’”

A common report among the affected people is that they hear recurring sounds but can’t identify a source. The Associated Press obtained a recording of what is allegedly one of those mysterious sounds. It was recorded in a private home of a diplomat in Cuba. Here is how that sounds.

Hmm. But not all the affected people in Cuba heard sounds, and it’s not clear that those who *did* hear sounds all heard exactly the same thing. Doctors have focused on three different explanations (a) mass hysteria (b) microwaves and (c) ultrasound. We’ll go through these one by one.

(a) mass hysteria

Are those people just imagining they’ve been targeted by some secret weapon and are making themselves ill by worrying about their health? Are they maybe just stressed or bored? Well, in Cuba, the affected diplomats were examined by a military doctor who found most of the patients had suffered inner-ear damage, apparently from an external force. The problem is though that the patients’ health records from before the incident are spotty, so it’s difficult to pinpoint when that damage happened, if it happened.

In the United States, the affected government personnel were also thoroughly examined. Unfortunately, a 2018 paper about their symptoms was widely discredited, but in 2019 a group of neurologists published another paper in the Journal of the American Medical Association and they did find quite compelling evidence for neurological problems among the affected people.

They compared 40 members of the US government who had reported suffering from the Havana Syndrome with 48 control patients with similar demographics, so similar age, gender and educational attainment. They scanned all these people’s brains using magnetic resonance imaging and found the following:

First, no significant difference between groups in the brain gray matter, and no significant difference in the so-called executive control subnetwork, that’s the part of the brain involved in thinking and planning.

But, they did find significant between-group differences for the brain white matter that contains the connective tissue between the neurons. The patients’ volume of white matter was on the average twenty-seven cubic centimeters smaller. That means they’ve lost about five percent of the entire white matter.
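A quick consistency check on those two numbers (my own arithmetic, not from the paper):

```python
# If a 27 cubic centimeter deficit is about five percent of the white matter,
# the total white matter volume this implies is:
deficit_cm3 = 27
fraction = 0.05
total_cm3 = deficit_cm3 / fraction

print(f"{total_cm3:.0f} cm^3")  # 540 cm^3, roughly in line with typical adult values
```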

This finding has a p-value below 0.001. As a reminder, the p-value tells you how statistically significant a finding is. The smaller, the more significant. The typical threshold is 0.05, so this finding meets the criterion of statistical significance.
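For readers who want to see where a p-value comes from, here is a minimal permutation test on invented numbers (nothing to do with the actual study data, which used different statistics): shuffle the group labels many times and ask how often a difference at least as large as the observed one appears by chance.

```python
import random

# Minimal permutation test on invented data (not the study's data).
def permutation_p_value(group_a, group_b, n_permutations=10_000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # pretend the group labels carry no information
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            extreme += 1
    return extreme / n_permutations  # fraction of shuffles at least as extreme

# Clearly separated groups give a small p-value, i.e. the difference is
# unlikely to be a fluke:
print(permutation_p_value([480, 490, 500, 510], [530, 540, 550, 560]))
```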

They also found that patients had a significantly lower mean diffusivity in the connection between two hemispheres of the brain. Just exactly what consequences this has is somewhat unclear, but this difference too has a p-value of below 0.001.

Then there’s a significantly lower mean functional connectivity in the auditory subnetwork that you need for hearing and orientation with a p-value of about 0.003, and a lowered mean functional connectivity in the part of the brain necessary for spatial orientation, with a p-value of 0.002.

The lead author of the paper told the New York Times that this means a wholly psychogenic or psychosomatic cause is very unlikely. In other words, they probably didn’t imagine it.

Case settled? Of course not. A caveat of this study is that the patients had done exercises to improve their physical and cognitive health already before the examination, so the differences to the control group may have been affected by that. However, seeing those p-values I am willing to believe that something strange is going on.

There are other reasons to think that purely psychosomatic reasons don’t explain what’s happened. For example, the first cases in Cuba were treated confidentially and didn’t appear in the news until six months later. And yet there were several different people suddenly seeing doctors for similar symptoms at almost the same time. Those symptoms came on rather suddenly and were reportedly accompanied by strange sounds. The affected people described those sounds as sharp, disorienting, or oddly focused.

Let’s then talk about the second explanation, microwaves.

Microwave troubles in embassies aren’t entirely new. During the cold war, the US embassy in Moscow was permanently radiated by microwaves, presumably by the Soviets. No one knows exactly why, but the speculation is that it was for surveillance or counter-surveillance and not designed to cause health damage.

But in the 1970s the US ambassador to the Soviet Union, Walter Stoessel, fell dramatically ill, besides nausea and anemia, one of his symptoms was that his eyes were bleeding. Ugh.

In a now declassified nineteen seventy-five phone call, Henry Kissinger linked Stoessel’s illness to microwaves, admitting “we are trying to keep the thing quiet.” Stoessel died of leukemia at the age of sixty-six, about ten years after he first fell ill.

So, microwaves have been the main suspect because they have history. Could they maybe have caused those mysterious sounds? But how could that possibly be? Microwaves are electromagnetic waves, not sound waves. Certainly our ears don’t detect microwaves.

Well, actually. Let me introduce you to Allan Frey. Frey was an American neuroscientist. In 1960, a radar technician told Frey he could hear microwave pulses. This didn’t make any sense to Frey but he tried it himself and heard it too! He then did a series of experiments in which he exposed people to pulses of microwave radiation at low power, well within the safe regime. He found that not only did they generally hear the pulses, much weirder: deaf people could hear them too. It’s a real thing and is now called the “Frey effect.”

Frey explained that this works as follows. First, the electromagnetic energy from the radiation is absorbed by neural tissue near the surface of the skull. This creates tiny periodic temperature changes. It’s only about five millionths of a degree Celsius but these temperature changes further cause a periodic thermal expansion and contraction of the tissues. And this oscillating tissue creates a pressure wave that propagates and excites the cochlea in the inner ear. This is why we interpret it as a sound.

The frequency of the induced sound, interestingly enough, does not depend on the frequency of the microwaves. It’s a kind of resonance effect and the frequency you hear depends on the acoustic properties of brain tissue and… the size of your head. So, could microwaves lead to mystery sounds? Totally.
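If you want a feeling for the numbers, here’s a back-of-the-envelope sketch of that resonance. Everything in it is an illustrative assumption of mine, not a number from Frey’s papers: the speed of sound in brain tissue (roughly 1500 meters per second), the head diameters, and the function name.

```python
# Rough estimate of the pitch of microwave-induced "sound" (Frey
# effect), assuming the perceived frequency is set by an acoustic
# standing wave across the skull: f ~ c / (2 * L), where c is the
# speed of sound in brain tissue and L is the head diameter.

def frey_pitch_hz(head_diameter_m, sound_speed_m_s=1500.0):
    """Fundamental acoustic resonance frequency across the head."""
    return sound_speed_m_s / (2.0 * head_diameter_m)

# plausible head diameters in meters
for d in (0.14, 0.16, 0.18):
    print(f"diameter {d:.2f} m -> about {frey_pitch_hz(d) / 1000:.1f} kHz")
```

That lands in the low kilohertz range, and the pitch shifts with head size rather than with the microwave frequency, just as described above.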

Microwave pulses have also been tested as weapons by various nations and are known to cause a variety of symptoms like headaches, dizziness, or nausea. There is also the work of James Lin, an American electrical engineer and professor who subjected himself to microwaves in his laboratory during the 1970s. He has written a book on the auditory effects of microwaves and continues publishing papers on the subject. His descriptions match those of the people affected by the Havana syndrome quite well.

The authors of the most detailed paper on the cases in Havana also concluded that microwaves were the most likely explanation. And more anecdotally, there’s the report of Beatrice Golomb, a professor at the University of California, San Diego.

Golomb has long researched the health effects of microwaves and offered help to the diplomats affected in China. She claims that family members of personnel tried to measure if there were microwaves by using commercially available equipment. She told the BBC: “The needle went off the top of the available readings.” Then again one person’s story about how someone else tried to measure something isn’t exactly the most reliable evidence.

Still, microwaves seem plausible. A recent piece in the New York Times claimed that microwave weapons are too large to target people in secret. However, several experts have argued that it’s entirely possible to put such a weapon into a van and in this way bring it into the vicinity of an embassy. Of course this makes you wonder why the heck someone would want to expose diplomats around the world to microwaves with no particular purpose or outcome.

Let’s then talk about option (c) Ultrasound.

Depending on the intensity, exposure to sound, even if we can’t hear it, can cause temporary discomfort, nausea, or even permanent damage to the eardrum. In some countries, for example the United States and Germany, the police sometimes use sonic weapons to disperse crowds. But last year, the US Academy of Doctors of Audiology released a statement warning that these devices sometimes cause permanent loss of hearing, problems with orientation and balance, tinnitus, and injury to the ear. That doesn’t sound so different from the symptoms of the Havana syndrome.

The advantage of this hypothesis is that there’s a possible answer to the “why” question. In 2018, researchers from the University of Michigan proposed the effects could have been caused by improperly placed Cuban spy gear. If two or more surveillance devices that use ultrasound are placed too closely together, they can interfere and create an audible sound. Then again, if you want to explain all the reported cases that way, you’d need a lot of incompetent spies.
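The mixing mechanism is easy to see numerically. In the sketch below, the two ultrasonic frequencies and the quadratic nonlinearity are stand-ins I made up; real surveillance gear would differ in detail, but any nonlinearity produces a component at the difference frequency, which can fall into the audible range.

```python
import numpy as np

# Two inaudible ultrasonic tones passing through a weak nonlinearity
# (here a simple quadratic term) mix and create an audible tone at
# the difference frequency |f2 - f1|.

fs = 200_000                     # sample rate in Hz
t = np.arange(20_000) / fs       # 100 ms of signal
f1, f2 = 25_000, 32_000          # tones above the ~20 kHz hearing limit
clean = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
distorted = clean + 0.3 * clean**2   # weak quadratic nonlinearity

spectrum = np.abs(np.fft.rfft(distorted))
freqs = np.fft.rfftfreq(len(distorted), 1 / fs)

# strongest non-DC component below 20 kHz
audible = (freqs > 100) & (freqs < 20_000)
peak = freqs[audible][np.argmax(spectrum[audible])]
print(f"audible peak near {peak:.0f} Hz")  # the 7 kHz difference frequency
```

The clean signal has no energy below 20 kilohertz at all; only the distorted one does.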

So, well. Let’s hear that recording from the Associated Press again. Hmm. What does it sound like to you? When Fernando Montealegre heard the sound it reminded him of the crickets he collected as a child. Montealegre is a professor of sensory biology at the University of Lincoln in the UK. Together with a colleague, he searched a database of insect sounds to see if any matched the tape. The researchers found that the recording from Cuba matches the call of the Indies short-tailed cricket perfectly.

As you see, this is a really difficult story and no one presently has a good explanation for what has happened. Most importantly I think we must keep in mind that there could actually be a number of different reasons for why those people fell ill. While it seems unlikely that the first cases in Cuba spread by mass hysteria, the cases in China only began after those in Cuba had made headlines, so that’s an entirely different situation.

There are also, of course, a lot of conspiracy theories circulating around the Havana syndrome. Is it a coincidence that the cases in Cuba began right after Trump’s election? Is it a coincidence that Fidel Castro died around the same time? Is it a coincidence that only a few weeks later Russia and Cuba signed a defense cooperation agreement? I don’t have any insights into this, but let me know what you think in the comments.

Saturday, November 13, 2021

Why can elementary particles decay?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Physicists have so far discovered twenty-five elementary particles that, for all we currently know, aren’t made up of anything else. Most of those particles are unstable, and they’ll decay to lighter particles within fractions of a second. But how can it possibly be that a particle which decays is elementary? If it decays, doesn’t this mean it was made up of something else? And why do particles decay in the first place? At the end of this video, you’ll know the answers.

The standard model of particle physics contains 25 particles. But the matter around us is almost entirely made up of only half of them. First, there’s the electron. Then there are the constituents of atomic nuclei, the neutrons and protons, which are made of different combinations of up and down quarks. That’s 3. And those particles are held together by photons and the 8 gluons of the strong nuclear force. So that’s twelve.

What’s with the other particles? Let’s take for example the tau. The tau is very similar to the electron, except it’s heavier by about a factor of 3500. It’s unstable and has a lifetime of only three times ten to the minus thirteen seconds. It then decays, for example into an electron, a tau-neutrino and an electron anti-neutrino. So is the tau maybe just made up of those three particles, and when it decays they just fly apart?
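To get a feeling for what such a lifetime means, you can plug it into the exponential decay law. A small sketch; the Lorentz boost factor below is an arbitrary assumption of mine for illustration.

```python
import math

# N(t) = N0 * exp(-t / tau): fraction of taus surviving after time t
# (measured in the tau's own rest frame).

TAU_LIFETIME_S = 2.9e-13   # roughly three times ten to the minus thirteen seconds
C_M_S = 3.0e8              # speed of light in meters per second

def surviving_fraction(t_s, lifetime_s=TAU_LIFETIME_S):
    return math.exp(-t_s / lifetime_s)

# After a single picosecond, almost all taus are gone:
print(f"after 1 ps: {surviving_fraction(1e-12):.3f} survive")

# Mean flight distance before decay for a boosted tau, ~ gamma * c * tau:
gamma = 20  # assumed Lorentz factor, e.g. for a tau produced at a collider
print(f"mean decay length: {gamma * C_M_S * TAU_LIFETIME_S * 1000:.1f} mm")
```

Even strongly boosted taus travel only millimeters before decaying, which is why detectors see the decay products rather than the tau itself.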

But no, the tau isn’t made up of anything, at least not according to all the observations that we currently have. There are several reasons physicists know this.

First, if the tau was made up of those other particles, you’d have to find a way to hold them together. This would require a new force. But we have no evidence for such a force. For more about this, check out my video about fifth forces.

Second, even if you’d come up with a new force, that wouldn’t help you because the tau can decay in many different ways. Instead of decaying into an electron, a tau-neutrino and an electron anti-neutrino, it could for example decay into a muon, a tau-neutrino and a muon anti-neutrino. Or it could decay into a tau-neutrino and a pion. The pion is made up of two quarks. Or it could decay into a tau-neutrino and a rho. The rho is also made up of two quarks, but different ones than the pion. And there are many other possible decay channels for the tau.

So if you’d want the tau to be made up of the particles it decays into, at the very least there’d have to be different tau particles, depending on what they’re made up of. But we know that this can’t be. The taus are exactly identical. We know this because if they weren’t, they’d themselves be produced in larger numbers in particle collisions than we observe. The idea that there are different versions of taus is therefore just incompatible with observation.

This, by the way, is also why elementary particles can’t be conscious. It’s because we know they do not have internal states. Elementary particles are called elementary because they are simple. The only way you can assign any additional property to them, call that property “consciousness” or whatever you like, is to make that property entirely featureless and unobservable. This is why panpsychism which assigns consciousness to everything, including elementary particles, is either bluntly wrong – that’s if the consciousness of elementary particles is actually observable, because, well, we don’t observe it – or entirely useless – because if that thing you call consciousness isn’t observable it doesn’t explain anything.

But back to the question why elementary particles can decay. A decay is really just a type of interaction. This also means that all these decays can in principle happen in different orders. Let’s stick with the tau because you’ve already made friends with it. That the tau can decay into the two neutrinos and an electron just means that those four particles interact. They actually interact through another particle, which is one of the vector bosons of the weak interaction. But this isn’t so important. The important point is that this interaction could happen in other orders. If an electron with high enough energy runs into a tau neutrino, that could for example produce a tau and an electron neutrino. In that case, what would you say any of those particles are “made of”? This idea just doesn’t make any sense if you look at all the processes that we know of that taus are involved in.

Everything that I just told you about the tau works similarly for all of the other unstable particles in the standard model. So the brief answer to the question why elementary particles can decay is that decay doesn’t mean the decay products must’ve been in the original particle. A decay’s just a particular type of interaction. And we’ve no observations that’d indicate elementary particles are made up of something else; they have no substructure. That’s why we call them elementary.

But this brings up another question, why do those particles decay to begin with? I often come across the explanation that they do this to reach the state of lowest energy because the decay products are lighter than the original. But that doesn’t make any sense because energy is conserved in the decay. Indeed, the reason those particles decay has nothing to do with energy, it has all to do with entropy.

Heavy particles decay simply because they can and because that’s likely to happen. As Einstein told us, mass is a type of energy. Yes, that guy again. So a heavy particle can decay into several lighter particles because it has enough energy. And the rest of the energy that doesn’t go into the masses of the new particles goes into the kinetic energy of the new particles. But for the opposite process to happen, those light particles would have to meet in the right spot with a sufficiently high energy. This is possible, but it’s very unlikely to happen coincidentally. It would be a spontaneous increase of order, so it would be an entropy decrease. That’s why we don’t normally see it happening, just like we don’t normally see eggs unbreak. To sum it up: Decay is likely. Undecay unlikely.
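The energy bookkeeping in that paragraph can be made concrete with a small sketch. The masses are approximate published values in MeV; the dictionary and function are my own illustrative construction.

```python
# A decay is energetically allowed when the parent's rest energy
# exceeds the summed rest energies of the products; the surplus, the
# Q-value, becomes kinetic energy of the products. Energy is conserved
# throughout; nothing is "lost" in the decay.

MASS_MEV = {
    "tau": 1776.9,
    "muon": 105.7,
    "electron": 0.511,
    "neutrino": 0.0,   # neutrino masses are tiny; approximated as zero
}

def kinetic_energy_release(parent, products):
    """Q-value: parent rest energy minus product rest energies, in MeV."""
    return MASS_MEV[parent] - sum(MASS_MEV[p] for p in products)

# tau -> electron + tau-neutrino + electron anti-neutrino
q = kinetic_energy_release("tau", ["electron", "neutrino", "neutrino"])
print(f"kinetic energy shared by the decay products: about {q:.0f} MeV")
```

The reverse process conserves energy just as well; it’s merely entropically disfavored, because the products would have to meet again with exactly the right energies and momenta.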

It is worth emphasizing though that the reverse of all those particle-decay processes indeed exists and can happen in principle. Mathematically, you can reverse all those processes, which means the laws of nature are time-reversible. Like a movie, you can run them forwards and backwards. It’s just that some of those processes are very unlikely to occur in the world we actually inhabit, which is why we experience our life with a clear forward direction of time that points towards more wrinkles.

Friday, November 12, 2021

New book now available for pre-order

In the past years I have worked on a new book, which is now available for pre-order here (paid link). My editors decided on the title "Existential Physics: A Scientist's Guide to Life's Biggest Questions" which, I agree, is more descriptive than my original title "More Than This". My title was trying to express that physics is about more than just balls rolling down inclined planes and particles bumping into each other. It's a way to make sense of life.

In "Existential Physics" each chapter is the answer to a question. I have also integrated interviews with Tim Palmer, David Deutsch, Roger Penrose, and Zeeya Merali, so you don't only get to hear my opinion. I'll show you a table of contents when the page proofs are in. I want to remind you that comments have moved over to my Patreon page.

Saturday, November 06, 2021

How bad is plastic?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

Plastic is everywhere, and we have all heard it’s bad for the environment because it takes a long time to biodegrade. But is this actually true? If I look at our outside furniture, that seems to biodegrade beautifully. How much should we really worry about all that plastic? Did you know that most bioplastics aren’t biodegradable? And will we end up driving cars made of soybeans? That’s what we will talk about today.

Pens, bags, cups, trays, toys, shoe soles and wrappers for everything – it’s all plastic. Those contact lenses that I’m wearing? Yeah, that’s plastic too.

The first plastic was invented in nineteen-oh-seven by the chemist Leo Baekeland. Today we use dozens of different plastics. They’re synthetic materials, molecules that just didn’t exist before humans began producing them. Plastics usually have names starting with “poly” like polyethylene, polypropylene, or polyvinyl chloride. The poly is sometimes hidden in abbreviations like PVC or PET.

You probably know the prefix “poly” from “polymer”. It means “many” and tells you that the molecules in plastic are long, repetitive chains. These long chains are the reason why plastics can be easily molded. And because plastics can be quickly mass-produced in custom shapes, they’ve become hugely popular. Today, more than twenty thousand plastic bottles are produced – each second. That’s almost two billion a day! Chewing gum by the way also contains plastic.
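As a quick sanity check on those production numbers:

```python
# Twenty thousand bottles per second, converted to a daily figure.
bottles_per_second = 20_000
seconds_per_day = 24 * 60 * 60   # 86,400
bottles_per_day = bottles_per_second * seconds_per_day
print(f"{bottles_per_day:,} bottles per day")  # about 1.7 billion
```

That’s just over 1.7 billion bottles a day, which is indeed “almost two billion”.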

Those long molecular chains are also the reason why plastic is so durable, because bacteria that evolved to break down organic materials can’t digest plastic. So how long does plastic last? Well, we can do our own research, so let’s ask Google. Actually we don’t even have to do that ourselves, because just a year ago, a group of American scientists searched for public information on plastic lifetime and wrote a report for the NAS about it.

For some cases, like Styrofoam, they found lifetimes varying from one year to one thousand years to forever. For fishing lines, all thirty-seven websites they found said it lasts six-hundred years, probably because they all copied from each other. If those websites list a source at all, it’s usually a website of some governmental or educational institution. The most often named one is NOAA, the National Oceanic and Atmospheric Administration in the United States. When the researchers contacted NOAA they learned that the numbers on their website are estimates and not based on peer-reviewed science.

Fact is, no one has any good idea how long plastics last in the environment. The studies that have been done often don’t list crucial information such as exposure to sunlight, temperature, or size and shape of the sample, so it’s unclear what those numbers mean in real life. Scientists don’t even have an agreed-upon standard for what “degradation of plastic” is.

If anything, recent peer-reviewed literature suggests that plastic in the environment may degrade faster than previously recognized, not because of microbes but because of sunlight. For example, a paper published by a group from Massachusetts found that polystyrene, one of the world’s most ubiquitous plastics, may degrade in a couple of centuries when exposed to sunlight, rather than thousands of years as previously thought. That plastic isn’t as durable as once believed is also rapidly becoming a problem for museums who see artworks of the last century crumbling away.

But why do we worry about the longevity of plastic to begin with? Plastics are made from natural gas or oil, which is why burning them is an excellent source of energy, but has the same problem as burning oil and gas – it releases carbon dioxide which has recently become somewhat unpopular. Plastic can in principle be recycled by shredding and re-molding it, but if you mix different types of plastics the quality degrades rapidly, and in practice the different types are hard to separate.

And so, a lot of plastic trash ends up in landfills or in places where it doesn’t belong, makes its way into rivers and, eventually, into the sea. According to a study by the Ellen Macarthur Foundation, there are more than one-hundred fifty million tons of plastic trash in the oceans already, and we add about 10 million tons each year. Most of that plastic sinks to the ground, but more than 250,000 tons keep floating on the surface.
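Taking those two numbers at face value, a crude extrapolation is straightforward. The constant inflow rate is of course an assumption of mine that ignores degradation, cleanup, and any change in production.

```python
# Naive projection of ocean plastic, in million tons, assuming a
# constant inflow and no removal.
stock_mt = 150            # million tons in the oceans today
inflow_mt_per_year = 10   # million tons added per year

for years in (10, 25, 50):
    total = stock_mt + inflow_mt_per_year * years
    print(f"in {years} years: about {total} million tons")
```

Even on these naive assumptions, the amount already in the oceans would double within fifteen years.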

The result is that a lot of wildlife, birds and fish in particular, gets trapped in plastic trash or swallows it. According to a 2015 estimate from researchers in Australia and the UK, over ninety percent of seabirds now have plastic in their guts. That’s bad. Swallowing plastic can not only physically block parts of the digestive system; a lot of plastics also contain chemicals to keep them soft and stable. Many of those chemicals are toxic and they’ll leak into the animals.

Okay you may say who cares about seabirds and fish. But the thing is, once you have a substance in the food chain, it’ll spread through the entire ecosystem. As it spreads, the plastic gets broken down into smaller and smaller pieces, eventually down to below micrometer size. Those are the so-called microplastics. From animals, they make their way into supermarkets, and from there back into the sewage system and on into other parts of the environment, from where they return to us, and so on. Several independent studies have shown that most of us now quite literally shit plastic.

What are the consequences? No one really knows.

We do know that microplastics are fertile ground for pathogenic bacteria, which isn’t exactly what you want to eat. But of course other microparticles, for example those stemming from leaves or rocks, have that problem too, and we probably eat some of those as well. Indeed, in 2019 a group of Chinese researchers studied bacteria on different microparticles, and they found that the amount of bacteria on microplastics was less than that on microparticles from leaves. That’s because leaves are organic and deteriorate faster, which provides more nutrients for the bacteria. It’s presently unclear whether eating microplastics is a health hazard.

But some of those microplastics are so small they circulate in the air together with other dust and we regularly breathe them in. Studies have found that at least in cell-cultures, those particles are small enough to make it into the lymphatic and circulatory system. But how much this happens in real life and to what extent this may lead to health problems hasn’t been sorted out. Though we know from several occupational studies that workers processing plastic fibers, who probably breathe in microplastics quite regularly, are more likely to have respiratory problems than the general population. The problems include a reduced lung capacity and coughing. The data for lung cancer induced by breathing microplastics is inconclusive.

Basically we’ve introduced an entirely new substance into the environment and are now finding out what consequences this has.

That problem isn’t new. As Jordi Busque has pointed out, planet Earth had this problem before, namely, when all that coal formed which we’re now digging back up. This happened during a period called the carboniferous, which lasted from about three-hundred sixty to three-hundred million years ago. It began when natural selection “invented” for the first time wood trunks with bark, which requires a molecule called lignin. But no bug, bacteria, or fungus around at that time knew how to digest lignin. So, when trees died, their trunks just piled up in the forests and, over millions of years, they were covered by sediments and turned into coal. The carboniferous ended when evolution created fungi that were able to eat and biodegrade lignin.

Now, the carboniferous lasted some sixty million years, but maybe we can speed up evolution a bit by growing bacteria that can digest plastics. Why not? There’s nothing particularly special about plastics that would make this impossible.

Indeed, there are already bacteria which have learned to digest plastic. In twenty-sixteen a group of Japanese scientists published a paper in Science magazine, in which they reported the discovery of a bacterium that degrades PET, which is the material most plastic bottles are made of. They found it while analyzing sediment samples from near a plastic recycling facility. They also identified the enzyme that enables the bacteria to digest plastic and called it PETase.

The researchers found that thanks to PETase, the bacterium converts PET into two environmentally benign components. Moreover 75 percent of the resulting products are further transformed into organic matter by other microorganisms. That, plus carbon-dioxide. As I said in my earlier video about carbon capture, plastics are basically carbon storage, so maybe we should actually be glad that they don’t biodegrade?

But in 2018, a British team accidentally modified PETase, making it twenty percent faster at degrading PET, and by 2020 scientists from the University of Portsmouth had found a way to speed up the PET digestion by a factor of six. Just this year, researchers from Germany, France and Ireland used another enzyme, which they had found in a compost pile, to degrade PET.

And the French startup Carbios has developed another bacterium that can almost completely digest old plastic bottles in just a few hours. They are building a demonstration factory that will use the enzymes to take plastic polymers apart into monomers, which can then be polymerized again to make new bottles. The company says it will open a full-scale factory in twenty-twenty-four with a goal of producing the ingredients for forty thousand tons of recycled plastic each year.

The problem with this idea is that the PET used in bottles is highly crystalline and very resistant to enzyme degradation. So if you want the enzymes to do their work, you first have to melt the plastic and extrude it. That requires a lot of energy. For this reason, bacterial PET digestion doesn’t currently make a lot of sense, either economically or ecologically. But it demonstrates that it’s a real possibility that plastics will just become biodegradable because bacteria evolve to degrade them, naturally or by design.

What’s with bioplastics? Unfortunately, bioplastics look mostly like hype to me.

Bioplastics are plastics produced from biomass. This isn’t a new idea. For example, celluloid, the material of old films, was made from cellulose, an organic material. And in nineteen forty-one Ford built a plastic car made from soybeans. Yes, soybeans. Today we have bags made from potatoes or corn. That certainly sounds very bio, but unfortunately, according to a review by scientists from Georgia Southern University that came out just in April, about half of bioplastics are not biodegradable.

How can it possibly be that potato and corn aren’t biodegradable? Well, the potato or corn is biodegradable. But to make the bioplastics, one uses the potatoes or the corn to produce bioethanol, and from the bioethanol you produce plastic in pretty much the same way you always do. The result is that the so-called bioplastics are chemically pretty much the same as normal plastics.

So about half of bioplastics aren’t biodegradable. And most of the ones that are, biodegrade only in certain conditions. This means they have to be sent to industrial compost facilities that have the right conditions of temperature and pressure. If you just trash them they will end up in landfill or migrate into the sea like any other plastic. A paper by researchers from Michigan State University found no difference in degradation when they compared normal plastics with these supposedly biodegradable ones.

So the word “bioplastic” is very misleading. But there are some biodegradable bioplastics. For example Mexican scientists have produced a plastic out of certain types of cacti. It naturally degrades in a matter of months. Unfortunately, there just aren’t enough of those cacti to replace plastic that way.

More promising are PHAs, a family of molecules that evolved for certain biological functions and that can be used to produce plastics that actually do biodegrade. Several companies are working on this, for example Anoxkaldnes, Micromidas, and Mango Materials. Mango Materials. Seriously?

Researchers from the University of Queensland in Australia have estimated that a bottle of PHA in the ocean would degrade in one and a half to three and a half years, and a thin film would need 1 to 2 months. Sounds good! But at present PHA is difficult to produce and therefore 2 to 4 times more expensive than normal plastic. And let’s not forget that the faster a material biodegrades the faster it returns its carbon dioxide into the atmosphere. So what you think is “green” might not be what I think is “green”.

Isn’t there something else we can do with all that plastic trash? Yes, for example make steel. If you remember, steel is made from iron and carbon. The carbon usually comes from coal. But you can instead use old plastic, remember the stuff’s made of oil. In a paper that appeared in Nature Catalysis last year, a group of researchers from the UK explained how that could work. Use microwaves to convert the plastic into hydrogen and carbon. Use the hydrogen to convert iron oxides into iron, and then combine it with the carbon to get steel.
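Here’s a hedged back-of-the-envelope version of that chemistry. I’m assuming the plastic is pure polyethylene, that pyrolysis converts every CH2 unit into carbon plus one molecule of hydrogen, and that the reduction runs to completion; real yields, as reported in the paper, will be lower.

```python
# Stoichiometry sketch: hydrogen from polyethylene, (CH2)n, reducing
# iron oxide to iron via  Fe2O3 + 3 H2 -> 2 Fe + 3 H2O.

M_CH2 = 14.0     # g/mol per CH2 unit (12 + 2*1)
M_FE = 55.8      # g/mol of iron

def iron_from_polyethylene_kg(kg_plastic):
    """Iron (kg) reducible with the hydrogen from kg_plastic of (CH2)n."""
    mol_h2 = kg_plastic * 1000 / M_CH2   # one H2 per CH2 unit
    mol_fe = 2 * (mol_h2 / 3)            # 3 H2 free 2 Fe
    return mol_fe * M_FE / 1000

print(f"1 kg of polyethylene can reduce roughly "
      f"{iron_from_polyethylene_kg(1.0):.1f} kg of iron")
```

The leftover carbon is then what you alloy into the iron to make steel.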

Personally I’d prefer steel from plastic over cars of non-biodegradable so-called bioplastics, but maybe that’s just me. Let me know in the comments what you think, I’m curious. Don’t forget to like this video and subscribe if you haven’t already, that’s the easiest way to support us. See you next week.

Saturday, October 30, 2021

The delayed choice quantum eraser, debunked

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

A lot of you have asked me to do a video about the delayed choice quantum eraser, an experiment that supposedly rewrites the past. I haven’t done that simply because there are already lots of videos about it, for example Matt from PBS Space-time, the always amazing Joe Scott, and recently also Don Lincoln from Fermilab. And how many videos do you really need about the same thing if that thing isn’t a kitten in a box. However, having watched all those gentlemen’s videos about quantum erasing, I think they’re all wrong. The quantum eraser isn’t remotely as weird as you think, doesn’t actually erase anything, and certainly doesn’t rewrite the past. And that’s what we’ll talk about today.

Let’s start with a puzzle that has nothing to do with quantum mechanics. Peter is forty-six years old and he’s captain of a container ship. He ships goods between two places that are 100 kilometers apart, let’s call them A and B. He starts his round trip at A with the ship only half full. Three-quarters of the way to B he adds more containers to fill the ship, which slows him down by a factor of two. On the return trip, his ship is empty. How old is the captain?

If you don’t know the answer, let’s rewind this question to the beginning.

Peter is forty-six years old. The answer’s right there. Everything I told you after that was completely unnecessary and just there to confuse you. The quantum eraser is a puzzle just like this.

The quantum eraser is an experiment that combines two quantum effects, interference and entanglement. Interference of quantum particles can itself be tested by the double slit experiment. For the double slit experiment you shoot a coherent beam of particles at a plate with two thin openings, that’s the double slit. On the screen behind it, you then observe several lines, usually five or seven, but not two. This is an interference pattern created by overlapping waves. When a crest meets a trough, the waves cancel and that makes a dark spot on the screen. When crest meets crest they add up and that makes a bright spot.

The amazing thing about the double slit is that you get this pattern even if you let only one particle at a time pass through the slits. This means that even single particles act like waves. We therefore describe quantum particles with a wave-function, usually denoted psi. The interesting thing about the double-slit experiment is that if you measure which slit the particles go through, the interference pattern disappears. Instead the particles behave like particles again and you get two blobs, one from each of the slits.

Well, actually you don’t. Though you’ve almost certainly seen that elsewhere. Just because you know which slit the wave-function goes through doesn’t mean it stops being a wave-function. It’s just no longer a wave-function going through two slits. It’s now a wave-function going through only one slit, so you get a one-slit diffraction pattern. What’s that? That’s also an interference pattern, but a fuzzier one that indeed looks mostly like a blob. But a very blurry blob. And if you add the blobs from the two individual slits, they’ll overlap and still pretty much look like one blob. Not, as you see in many videos, two cleanly separated ones.

You may think this is nitpicking, but it’ll be relevant to understanding the quantum eraser, so keep this in mind. It’s not so relevant for the double slit experiment, because regardless of whether you think it’s one blob or two, the sum of the images from both separate slits is not the image you get from both slits together. The double slit experiment therefore shows that in quantum mechanics, the result of a measurement depends on what you measure. Yes, that’s weird.
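You can see this in the standard Fraunhofer formulas. The slit width, separation, and wavelength below are arbitrary illustrative values of mine; the point is only that the double-slit pattern has dark fringes where the sum of two single-slit patterns does not.

```python
import numpy as np

wavelength = 500e-9       # 500 nm light
slit_width = 20e-6        # 20 micrometer slits
slit_separation = 100e-6  # 100 micrometers between slit centers

def single_slit(theta):
    """One-slit diffraction intensity (the blurry 'blob')."""
    beta = np.pi * slit_width * np.sin(theta) / wavelength
    return np.sinc(beta / np.pi) ** 2   # np.sinc(x) = sin(pi x)/(pi x)

def double_slit(theta):
    """Two slits: single-slit envelope times an interference term."""
    alpha = np.pi * slit_separation * np.sin(theta) / wavelength
    return 4 * single_slit(theta) * np.cos(alpha) ** 2  # amplitudes add, hence the 4

# Angle of the first dark interference fringe: alpha = pi/2
theta_dark = np.arcsin(wavelength / (2 * slit_separation))

print("double slit at the dark fringe:", double_slit(theta_dark))
print("sum of two single slits there: ", 2 * single_slit(theta_dark))
```

At that angle the double slit gives essentially zero intensity while the summed single-slit blobs are near their maximum, which is exactly why the two-slit image is not the sum of the two one-slit images.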

The other ingredient that you need for the quantum eraser is entanglement. I have talked about entanglement several times previously, so let me just briefly remind you: entangled particles share some information, but you don’t know which particle has which share until you measure it. It could be for example that you know the particles have a total spin of zero, but you don’t know the spin of each individual particle. Entangled particles are handy because they allow you to measure quantum effects over large distances which makes them super extra weird.

Okay, now to the quantum eraser. You take your beam of particles, usually photons, and direct it at the double slit. After the double slit you place a crystal that converts each single photon into a pair of entangled photons. From each pair you take one and direct it onto a screen. There you measure whether they interfere. I have drawn the photons which come from the two different places in the crystal with two different colors. But this is just so it’s easier to see what’s going on, these photons actually have the same color.

If you create these entangled pairs after the double slit, then the wave-function of the photon depends on which slit the photons went through. This information comes from the location where the pairs were created and is usually called the “which way information”. Because of this which-way information, the photons on the screen can’t create an interference pattern.

What’s with the other side of the entangled particles? That’s where things get tricky. On the other side, you measure the particles in two different ways. In the first case, you measure the which-way information directly, so you have two detectors, let’s call them D1 and D2. The first detector is on the path of the photons from the left slit, the second detector on the path of the photons from the right slit. If you measure the photons with detectors D1 and D2, you see no interference pattern.

But alternatively you can turn off the first two detectors, and instead combine the two beams in two different ways. These two white bars are mirrors and just redirect the beam. The semi-transparent one is a beam splitter. This means half of the photons go through, and the other half is reflected. This looks a little confusing but the point is just that you combine the two beams so that you no longer know which way the photon came. This is the “erasure” of the “which way information”. And then you measure those combined beams in detectors D3 and D4. A measurement on one of those two detectors does not tell you which slit the photon went through.
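The “erasure” at the beam splitter can be sketched in a couple of lines. In this toy model (my own, using one common beam-splitter convention, not taken from the video) either which-way input state ends up in both output ports with equal probability, so a click in D3 or D4 reveals nothing about the slit:

```python
import numpy as np

# Which-way basis states: |L> = came via the left slit, |R> = via the right.
L = np.array([1.0, 0.0], dtype=complex)
R = np.array([0.0, 1.0], dtype=complex)

# A 50/50 beam splitter in one common convention (reflection picks up a
# factor of i); the two output ports lead to detectors D3 and D4.
BS = np.array([[1, 1j],
               [1j, 1]], dtype=complex) / np.sqrt(2)

for state in (L, R):
    probs = np.abs(BS @ state) ** 2
    # Either input reaches D3 and D4 with probability 1/2 each, so the
    # detector click no longer carries any which-way information.
    assert np.allclose(probs, [0.5, 0.5])
```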

Finally, you measure the distribution of photons on the screen that are entangled partners of those photons that went to D3. These photons create an interference pattern. You can alternatively measure the distribution of photons on the screen that are partner particles of those photons that went to D4. Those will also create an interference pattern.

This is the “quantum erasure”. It seems you’ve managed to get rid of the which way information by combining those paths, and that restores the interference pattern. In the delayed choice quantum eraser experiment, the erasure happens well after the entangled partner particle hit the screen. This is fairly easy to do just by making the paths of those photons long enough.

If you watch the other videos about this experiment on YouTube, they’ll now go on to explain that this seems to imply that the choice of what you measure on the one side of the experiment decides what happened on the other side before you even made that choice. Because the photons must have known whether to interfere or not before you decided whether to erase the which-way information. But this is clearly nonsense. Because, let’s rewind this explanation to the beginning.

The photons on the screen can’t create an interference pattern. Everything I told you after this is completely irrelevant. It doesn’t matter at all what you do on the other side of the experiment. The photons on the screen will always create the same pattern. And it’ll never be an interference pattern.

Wait. Didn’t I just tell you that you do get an interference pattern if you use detectors D3 and D4? Indeed. But I’ve omitted a crucial piece of information, one which is missing in those other YouTube videos: those interference patterns are not the same. And if you add them, you get exactly the same as you get from detectors D1 and D2, namely these two overlapping blurry blobs. This is why it matters that you know the combined pattern of two single slits doesn’t give you two separate blobs, as they normally show you.

What you actually do in the eraser experiment is that you sort the photon pairs into two groups, and you do that in two different ways. If you use detectors D1 and D2, you sort them so that the entangled partners on the screen do not create an interference pattern for either detector separately. If you use detectors D3 and D4, they each separately create an interference pattern, but together they don’t.
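To see this quantitatively, here is a toy calculation (my own numbers and fringe spacing, not those of any actual experiment): the pattern conditioned on D3 and the pattern conditioned on D4 are fringe patterns shifted against each other by half a period, and adding them gives back the two overlapping blurry blobs with no fringes at all.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 2001)              # position on the screen

# Two overlapping blurry blobs: the no-interference pattern (toy widths).
blobs = np.exp(-(x - 0.15)**2 / 0.1) + np.exp(-(x + 0.15)**2 / 0.1)

phi = 2 * np.pi * x / 0.08                    # path phase difference (toy)

p_D3 = blobs * np.cos(phi / 2)**2             # fringes, conditioned on D3
p_D4 = blobs * np.sin(phi / 2)**2             # anti-phased fringes, on D4

# Each conditional pattern shows interference, but since
# cos^2 + sin^2 = 1, their sum shows none:
assert np.allclose(p_D3 + p_D4, blobs)
```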

This means that the interference pattern really comes from selectively disregarding some of the particles. That this is possible has nothing to do with quantum mechanics. I could throw coins on the floor and then later decide to disregard some of those and create any kind of pattern. Clearly this doesn’t rewrite the past.
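The coin analogy is easy to simulate. In this sketch (my own, and purely classical) the full set of throws is uniformly distributed, but sorting them after the fact into two groups produces a striped pattern in each group, without anything rewriting the past:

```python
import random

random.seed(1)
# Throw 10000 coins at random positions along the floor.
throws = [random.random() for _ in range(10000)]

# AFTER the throws, sort them by a rule of my choosing: group A gets the
# coins lying in even-numbered stripes, group B the odd-numbered ones.
group_a = [p for p in throws if int(p * 10) % 2 == 0]
group_b = [p for p in throws if int(p * 10) % 2 == 1]

# Each group now shows a striped pattern the full set never had, and the
# two "patterns" combine back to the original uniform distribution.
assert all(int(p * 10) % 2 == 0 for p in group_a)
assert all(int(p * 10) % 2 == 1 for p in group_b)
assert len(group_a) + len(group_b) == len(throws)
```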

This, by the way, has nothing to do with the particular realization of the quantum eraser experiment that I’ve discussed. The experiment has been done in a number of different ways, but what I just told you is generally true: the interference patterns will always combine to give the original non-interference pattern.

This is not to say that there is nothing weird going on in this experiment. But what’s weird about it is the same thing that’s already weird about the normal double slit experiment. Namely, the wave-function of a single particle is spread out in space, yet when you measure it, the particle is suddenly in one particular place, and the outcomes must be correlated throughout space in a way that fits the measurement setting. I actually think the bomb experiment is far weirder than the quantum eraser. Check out my earlier video for more on that.

When I was working on this video I thought certainly someone must have explained this before. But the only person I could find who’d done that is… Sean Carroll in a blogpost two years ago. Yes, you can trust Sean with the quantum stuff. I’ll leave you a link to Sean’s piece in the info.