Saturday, October 23, 2021

Does Captain Kirk die when he goes through the transporter?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Does Captain Kirk die when he goes through the transporter? This question has kept me up at night for decades. I’m not kidding. And I still don’t have an answer. So this video isn’t going to answer the question, but I will explain why it’s more difficult than you may think. If you haven’t thought about this before, maybe pause the video for a moment and try to make up your mind. Do you think Kirk dies when he goes through the transporter? Let me know if at the end of this video you’ve changed your mind.

So how does the transporter work? The idea is that the person who enters a transporter is converted into an energy pattern that contains all the information. That energy can be sent or “beamed” at the speed of light. And once it’s arrived at its final destination, it can be converted back into the person.

Now of course energy isn’t something in and by itself. Energy, like momentum or velocity, is a property of something. This means the beam has to be made of something. But that doesn’t really matter for the purpose of transportation; it only matters that the beam can contain all the information about the person and can be sent much faster and more easily than you could send the person in material form.

Current technology is far, far away from being able to read out all the information that’s necessary to build up a human being from elementary particles. And even if we could do that, it’d take ridiculously long to send that information anywhere. According to a glorious paper by a group of students from the University of Leicester, assuming a bandwidth of about 30 gigahertz, just sending the information of a single cell would take more than 10^15 years, and that’s not counting travel time. Just for comparison, the age of the universe is about 10^10 years. So, even if you increase the bandwidth by a quadrillion, it’d still take at least a year just to move a cell one meter to the left.
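
To get a feel for the arithmetic, here is a minimal sketch in Python. The bit count is not the number from the Leicester paper; it’s a hypothetical placeholder backed out so that the result lands on the timescale quoted above, and I’m treating one hertz of bandwidth as roughly one bit per second.

```python
# Back-of-the-envelope: time to transmit a given number of bits at a given bandwidth.
# The bit count below is a hypothetical placeholder, chosen to reproduce the quoted timescale.

SECONDS_PER_YEAR = 3.15e7

def transmission_time_years(bits, bits_per_second):
    """Time in years to send `bits` at the given rate."""
    return bits / bits_per_second / SECONDS_PER_YEAR

bandwidth = 30e9   # ~30 GHz, taken as ~30 Gbit/s
cell_bits = 1e33   # hypothetical data volume for one cell

print(f"{transmission_time_years(cell_bits, bandwidth):.1e} years")
# ~1.1e15 years, about 100,000 times the age of the universe (~1e10 years)
```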

Clearly, building a transporter isn’t going to happen any time soon, but from the perspective of physics there’s no reason why it should not be possible. I mean, what makes you you is not a particular collection of elementary particles. Elementary particles are identical to each other. What makes you you is the particular arrangement of those particles. So why not just send that information instead of all the particles? That should be possible.

And according to the best theories that we currently have, that information is entirely contained in the configuration of the particles at any one moment in time. That’s just how the laws of nature seem to work. Once we know the exact state of a system at one moment, say the position and velocity of an apple, then we can calculate what happens at any later time, say, where the apple will fall. I talked about this in more detail in my video about differential equations, so check this out for more.
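
As a toy illustration of that point, here is a minimal sketch (my own example, not one from the video): given an apple’s height and velocity at one moment, you can compute when and where it lands, assuming constant gravitational acceleration and no air resistance.

```python
# Toy example: the state (position, velocity) at one moment fixes the future.
# Assumes constant gravitational acceleration and no air resistance.

g = 9.81  # m/s^2

def impact(height_m, v_up_m_s, v_horizontal_m_s):
    """Predict when and how far away the apple hits the ground, given its state now."""
    # Positive root of: height + v_up*t - 0.5*g*t^2 = 0
    t = (v_up_m_s + (v_up_m_s**2 + 2 * g * height_m) ** 0.5) / g
    return t, v_horizontal_m_s * t

t, x = impact(height_m=2.0, v_up_m_s=0.0, v_horizontal_m_s=0.5)
print(f"lands after {t:.2f} s, {x:.2f} m away")  # lands after 0.64 s, 0.32 m away
```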

For the purposes of this video you just need to know that the idea that all the information about a person is contained in the exact configuration at one moment in time is correct. This is also true in quantum mechanics, though quantum mechanics brings in a subtlety that I will get to in a moment.

So, what happens in the transporter is “just” that you get converted into a different medium, all cell and brain processes are put on pause, and then you’re reassembled back and all those processes continue exactly as before. For you, no time has passed, you just find yourself elsewhere. At first sight it seems, Kirk doesn’t die when he goes through the transporter, it’s just a conversion.

But. There’s no reason why you have to convert the person into something else when you read out the information. You can well imagine that you just read out the information, send it elsewhere, and then build a person out of that information. And then, after you’ve done that, you blast the original person into pieces. The result is exactly the same. It’s just that now there’s a time delay between reading out the information and converting the person into something else. Suddenly it looks like Kirk dies and the person on the other end is a copy. Let’s call this the “Copy Argument”.

It might be that this isn’t possible though. For one, the exact state of a system at any one moment in time doesn’t only tell you what the system will do in the future, it also tells you what it’s done in the past. This means that, strictly speaking, copying a system elsewhere would require you to also reproduce its entire past, which isn’t possible.

However, you could say that the details of the past don’t matter. Think of a pool table. Balls are rolling around and bouncing off each other. Now imagine that at one particular moment, you record the exact positions and velocities of those balls. Then you can place other balls on another pool table at the right places and give them the correct kick. This should produce the same motion as on the original table, in principle exactly. And that’s even though the past of the copied table isn’t the same because the velocities of the balls came about differently. It’s just that this difference doesn’t matter for the motion of the balls.

Can one do the same for elementary particles? I don’t think so. But maybe you can do it for atoms, or at least for molecules, and that might be enough.

But there’s another reason you might not be able to read out the information of a person without annihilating them in that process, namely that quantum mechanics says that this isn’t possible. You just can’t copy an arbitrary quantum state exactly. However, it’s somewhat questionable whether this matters for people because quantum effects don’t seem to be hugely relevant in the human body. But if you think that those quantum effects are relevant, then you simply cannot copy the information of a person without destroying the original. So in that case the Copy Argument doesn’t work and we’re back to Kirk lives. Let’s call this the No-Copy Argument.

However… there’s another problem. The receiving side of the transporter is basically a machine that builds humans out of information. Now, if you don’t have the information that makes up a particular person, it’s incredibly unlikely you will correctly assemble them. But it’s not impossible. Indeed, if such machines are possible at all and the universe is infinitely large, or if there are other universes, then somewhere there will be a machine that will coincidentally assemble you. Even though the information was never beamed there in the first place. Indeed, this would happen infinitely often.

So you can ask what happens with Kirk in this case. He goes into the transporter, disappears. But copies of him appear elsewhere, coincidentally, even though the information of the original was never read out. You can conclude from this that it doesn’t really matter whether you actually read out the information in the first place. The No-Copy Argument fails and it looks again like the Kirk we care about dies.

There are various ways people have tried to make sense of this conundrum. The most common one is abandoning our intuitive idea of what it means to be yourself. We have this idea that our experience is continuous and if you go into the transporter there has to be an answer to what you experience next. Do you find yourself elsewhere? Or is that the end of your story and someone else finds themselves elsewhere? It seems that there has to be a difference between these two cases. But if there is no observable difference, then this just means we’re wrong in thinking that being yourself is continuous to begin with.

The other way to deal with the problem is to take our experience seriously and conclude that there is something wrong with physics. That the information about yourself is not contained in any one particular moment. Instead, what makes you you is the entire story of all moments, or at least some stretch of time. In that case, it would be clear that if you convert a person into some other physical medium and then reassemble it, that person’s experience remains intact. Whereas if you break that person’s story in space-time apart, by blasting them away at one place and assembling a copy elsewhere, that would not result in a continuous experience.

At least for me, this seems to make intuitively more sense. But it conflicts with the laws of nature that we currently have. And human intuition is not a good guide to understanding the fundamental laws of nature; quantum mechanics is exhibit A. Philosophers, by the way, are evenly divided between the possible answers to the question. In a survey, about a third voted for “death”, another third for “survival”, and yet another third for “other”. What do you think? And did this video change your mind? Let me know in the comments.

Saturday, October 16, 2021

Terraforming Mars in 3 Simple Steps

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


We have made great progress screwing up the climate on this planet, so the time is right to look for a new home on which to continue the successes of the human race. What better place could there be than our neighbor planet Mars? It’s a little cold and a little dusty and it takes seven months to get there, but otherwise it’s a lovely place, and with only 3 simple steps, it can be turned into an Earthlike place, or be “terraformed” as they say. Just like magic. And that’s what we’ll talk about today.

First things first, Mars is about one hundred million kilometers farther away from the Sun than Earth. Its average temperature is minus 60 degrees Celsius or minus 80 Fahrenheit. Its atmosphere is very thin and doesn’t contain oxygen. That doesn’t sound very hospitable to life as we know it, but scientists have come up with a solution for our imminent move to Mars.

We’ll start with the atmosphere, which is actually two issues: the atmosphere of Mars is very thin, and it contains basically no oxygen. Instead, it’s mostly carbon dioxide and nitrogen.

One reason the atmosphere is so thin is that Mars is smaller than Earth and its mass is only a tenth that of Earth. That’d make for interesting Olympic games, but it also makes it easier for gas to escape. This by the way is why I strongly recommend you don’t play with your anti-gravity device. You don’t want the atmosphere of Earth to escape, do you?

But that Mars is lighter than Earth is a minor problem. The bigger problem with the atmosphere of Mars is that Mars doesn’t have a magnetic field, or at least it doesn’t have one any more. The magnetic field of a planet, like the one we have here on Earth, is important because it redirects the charged particles which the sun constantly emits, the so-called solar wind. Without that protection, the solar wind can rip off the atmosphere. That’s not good. Check out my earlier video about solar storms for more about how dangerous they can be.

This is what happened to Mars: when the protection from the magnetic field faded away, the solar wind ripped off the atmosphere. Indeed, it’s still happening. In 2015, NASA’s MAVEN spacecraft measured the slow loss of atmosphere from Mars. They estimate it to be 100 grams per second. This constant loss is balanced by the evaporation of gas from the crust of Mars, so that the pressure has stabilized at a few millibar. The atmospheric pressure on the surface of Earth is approximately one bar.
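
To put that loss rate in perspective, here is a rough estimate of how long it would take to strip the present Martian atmosphere at 100 grams per second. The atmosphere mass of roughly 2.5 × 10^16 kilograms is my assumed round number, not a figure from the MAVEN measurement.

```python
# How long to strip the current Martian atmosphere at the measured loss rate?
# The atmosphere mass is an assumed round number (~2.5e16 kg).

SECONDS_PER_YEAR = 3.15e7

atmosphere_mass_kg = 2.5e16   # rough mass of the present Martian atmosphere (assumption)
loss_rate_kg_per_s = 0.1      # ~100 grams per second, the MAVEN estimate quoted above

years = atmosphere_mass_kg / loss_rate_kg_per_s / SECONDS_PER_YEAR
print(f"{years:.1e} years")   # ~8e9 years: slow, but it adds up over the age of the solar system
```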

Therefore, before we try to create an atmosphere on Mars we first have to create a magnetic field, because otherwise the atmosphere would just be wiped away again. How do you create a magnetic field for a planet? Well, physicists understood magnetic fields two centuries ago, and it’s really straightforward.

In a paper that was just published in April in the International Journal of Astrobiology, two physicists explain that all you have to do is put a superconducting wire around Mars. Simple enough, isn’t it? The circle would have to have a radius of about 3400 kilometers, but the diameter of the collected wires only needs to be about five centimeters. Well, okay, you need insulation and a refrigeration system to keep it superconducting. And you need a power station to generate a current. But other than that, no fancy technology required.

That superconducting wire would have a weight of about one million tons which is only about 100 times the total weight of the Eiffel tower. The researchers propose to make it of bismuth strontium calcium copper oxide (BSCCO). Where do you get so much bismuth from? Asteroid Mining. Piece of cake.
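
Here is a back-of-the-envelope cross-check of that mass, assuming the quoted loop radius of 3400 kilometers, a 5-centimeter wire bundle, and a BSCCO density of about 6.5 grams per cubic centimeter (the density is my assumption). Depending on the exact assumptions, this lands within a factor of a few of the quoted one million tons.

```python
import math

# Rough mass of a superconducting loop around Mars.
# Radius and wire diameter are the quoted values; the BSCCO density is my assumption.

loop_radius_m = 3.4e6        # ~3400 km
wire_diameter_m = 0.05       # ~5 cm bundle of wires
density_kg_per_m3 = 6500.0   # assumed density of BSCCO

cross_section_m2 = math.pi * (wire_diameter_m / 2) ** 2
length_m = 2 * math.pi * loop_radius_m
mass_tonnes = cross_section_m2 * length_m * density_kg_per_m3 / 1000.0

print(f"{mass_tonnes:.1e} tonnes")   # ~3e5 tonnes, the same ballpark as the quoted million tons
```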

Meanwhile on Earth. Will Cutbill from the UK earned an entry into the Guinness Book of World Records by stacking 5 M&Ms on top of each other.

Back to Mars. With the magnetic field in place, we can move to step 2 of terraforming Mars, creating an atmosphere. This can be done by releasing the remaining carbon dioxide that’s stored in frozen caps on the poles and in the rocks. In 2018, a group of American researchers published a paper in Nature in which they estimate that using the most wildly optimistic assumptions this would get us to about twenty percent of the atmospheric pressure on earth.

Leaving aside that no one knows how to release the gas, if we did release it, this would lead to a moderate greenhouse effect. It would increase the average temperature on Mars by about 10 Kelvin to a balmy minus 50 Celsius. That still seems a little chilly, but I hear that fusion power is almost there, so I guess we can heat with that.

Meanwhile on Earth. Visitors to London can now enjoy a new tourist attraction, a man-made hill, 30 meters high, from which you have a great view of… construction sites.

Back to Mars. Okay, so we have a magnetic field and created some kind of atmosphere by releasing carbon dioxide, with the added benefit of increasing the average temperature by a few degrees. The remaining problem is that we can’t breathe carbon dioxide. I mean, we can, but not for very long. So step 3 of terraforming Mars is converting carbon dioxide to dioxygen. The only thing we need to do for this is grow a sufficient amount of plants.

There’s the issue that plants tend to not flourish at minus fifty degrees, but that’s easy to fix with a little genetic engineering. Plants as we know them also need a range of nutrients they normally get from soil, most importantly nitrogen, phosphorus, and potassium. Luckily, those are present on Mars. The bigger problem may be that the soil on Mars is too thin and too hard, which makes it difficult for plants to grow roots. It also retains water very poorly, so you have to water the plants very often. How do you water plants at -50 degrees? Good question!

Meanwhile on Earth you can buy fake Mars soil and try your luck growing plants on it yourself!

Ok, so I admit that the last bit with the plants was a tiny bit sketchy. But there might be a better way to do it. In July 2019 researchers from JPL, Harvard and Edinburgh University published a paper in Nature in which they proposed to cover patches of Mars with a thin layer of aerogel.

An aerogel is a synthetic material which contains a lot of gas. It is super light and has an extremely low thermal conductivity, which means it could keep the surface of Mars warm. The gel would be transparent to visible light but somewhat opaque in the infrared, so this could create an enhanced greenhouse effect directly on the surface. That would heat up the surface, which would release more carbon dioxide. The carbon dioxide would accumulate under the gel, and then plants should be able to grow in that space. So, we’re not talking about oaks but more like algae or something that covers the ground.

In their paper, the researchers estimate that a layer of about 3 centimeters aerogel could raise the surface temperature of Mars by about 45 Kelvin. With that the average temperature on Mars would still be below the freezing point of water, but in some places it might rise above it. Sounds great! Except that the atmospheric pressure is so low that the liquid water would start boiling as soon as it melts.
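
To see why the melting water would immediately boil, here is a rough Clausius-Clapeyron estimate of the boiling temperature of water at a Mars-like surface pressure. The pressure of a few millibar and the constant heat of vaporization are my simplifying assumptions.

```python
import math

# Boiling point of water at low pressure, via the Clausius-Clapeyron relation.
# Assumes a constant molar heat of vaporization, which is a simplification.

R = 8.314                   # J/(mol K)
L_vap = 40.7e3              # J/mol, molar heat of vaporization of water (assumed constant)
P0, T0 = 101325.0, 373.15   # reference point: water boils at 100 C at 1 atm

def boiling_point_kelvin(pressure_pa):
    return 1.0 / (1.0 / T0 - R * math.log(pressure_pa / P0) / L_vap)

print(f"{boiling_point_kelvin(600.0) - 273.15:.0f} C")
# roughly -5 C: at a few millibar, ice that melts is essentially already boiling
```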

So as you see, our move to Mars is well under way. Better pack your bags, see you there!

Saturday, October 09, 2021

How I learned to love pseudoscience

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


On this channel, I try to separate the good science from the bad science, the pseudoscience. And I used to think that we’d be better off without pseudoscience, that this would prevent confusion and make our lives easier. But now I think that pseudoscience is actually good for us. And that’s what we’ll talk about today.

Philosophers can’t agree on just what defines “pseudoscience”, but in this episode I will take it to mean theories that are in conflict with evidence, but that their promoters believe in anyway, either by denying the evidence, or denying the scientific method, or maybe just because they have no idea what either the evidence or the scientific method is.

But what we call pseudoscience today might once have been science. Astrology, for example, the idea that the constellations of the stars influence human affairs, was once a respectable discipline. Every king and queen had a personal astrologer to give them advice. And many early medical practices weren’t just pseudoscience, they were often fatal. The literal snake oil, obtained by boiling snakes in oil, was at least both useless and harmless. However, doctors also prescribed tapeworms for weight loss. Though in all fairness, that might actually work, if you survive it.

And sometimes, theories accused of being pseudoscientific turned out to be right, for example the idea that the continents on Earth today broke apart from one large landmass. That was considered pseudoscience until evidence confirmed it. And the hypothesis of atoms was at first decried as pseudoscience because one could not, at the time, observe atoms.

So the first lesson we can take away is that pseudoscience is a natural byproduct of normal science. You can’t have one without the other. If we learn something new about nature, some fraction of people will cling on to falsified theories longer than reasonable. And some crazy ideas in the end turn out to be correct.

But pseudoscience isn’t just a necessary evil. It’s actually useful to advance science because it forces scientists to improve their methods.

Single-blind trials, for example, were invented in the 18th century to debunk the practice of Mesmerism. At that time, scientists had already begun to study and apply electromagnetism. But many people were understandably mystified by the first batteries and electrically powered devices. Franz Mesmer exploited their confusion.

Mesmer was a German physician who claimed he’d discovered a very thin fluid that penetrated the entire universe, including the human body. When this fluid was blocked from flowing, he argued, the result was that people fell ill.

Fortunately, Mesmer said, it was possible to control the flow of the fluid and cure people. And he knew how to do it. The fluid was supposedly magnetic, and entered the body through “poles”. The north pole was on your head and that’s where the fluid came in from the stars, and the south pole was at your feet where it connected with the magnetic field of earth.

Mesmer claimed that the flow of the fluid could be unblocked by “magnetizing” people. Here is how the historian Lopez described what happened after Mesmer moved to Paris in 1778:
“Thirty or more persons could be magnetized simultaneously around a covered tub, a case made of oak, about one foot high, filled with a layer of powdered glass and iron filings... The lid was pierced with holes through which passed jointed iron branches, to be held by the patients. In subdued light, absolutely silent, they sat in concentric rows, bound to one another by a cord. Then Mesmer, wearing a coat of lilac silk and carrying a long iron wand, walked up and down the crowd, touching the diseased parts of the patients’ bodies. He was a tall, handsome, imposing man.”
After being “magnetized” by Mesmer, patients frequently reported feeling significantly better. This, by the way, is the origin of the word mesmerizing.

Scientists of the time, Benjamin Franklin and Antoine Lavoisier among them, set out to debunk Mesmer’s claims. For this, they blindfolded a group of patients. Some of them were told they’d get a treatment but then got none, and others were given a treatment without their knowledge.

Franklin and his people found that the supposed effects of mesmerism were not related to the actual treatment, but to the belief of whether one received a treatment. This isn’t to say there were no effects at all. Quite possibly some patients actually did feel better just believing they’d been treated. But it’s a psychological benefit, not a physical one.

In this case the patients didn’t know whether they received an actual treatment, but those conducting the study did. Such trials can be improved by randomly assigning people to one of the two groups so that neither the people leading the study nor those participating in it know who received an actual treatment. This is now called a “double blind trial,” and that too was invented to debunk pseudoscience, namely homeopathy.

Homeopathy was invented by another German, Samuel Hahnemann. It’s based on the belief that diluting a natural substance makes it more effective in treating illness. In eighteen thirty-five, Friedrich Wilhelm von Hoven, a public health official in Nuremberg, got into a public dispute with the dedicated homeopath Johann Jacob Reuter. Reuter claimed that dissolving a single grain of salt in 100 drops of water, and then diluting it 30 times by a factor of 100 would produce “extraordinary sensations” if you drank it. Von Hoven wouldn’t have it. He proposed and then conducted the following experiment.

He prepared 50 samples of homeopathic salt-water following Reuter’s recipe, and 50 samples of plain water. Today, we’d call the plain water samples a “placebo.” The samples were numbered and randomly assigned to trial participants by repeated shuffling. Here is how they explained this in the original paper from 1835:
“100 vials… are labeled consecutively… then mixed well among each other and placed, 50 per table, on two tables. Those on the table at the right are filled with the potentiation, those on the table at the left are filled with pure distilled snow water. Dr. Löhner enters the number of each bottle, indicating its contents, in a list, seals the latter and hands it over to the committee… The filled bottles are then brought to the large table in the middle, are once more mixed among each other and thereupon submitted to the committee for the purpose of distribution.”
The assignments were kept secret on a list in a sealed envelope. Neither von Hoven nor the patients knew who got what.

They found 50 people to participate in the trial. For three weeks von Hoven collected reports from the study participants, after which he opened the sealed envelope to see who had received what. It turned out that only eight participants had experienced anything unusual. Five of those had received the homeopathic dilution, three had received water. Using today’s language you’d say the effect wasn’t statistically significant.
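
Just to illustrate what “not statistically significant” means here, a quick back-of-the-envelope re-analysis with today’s tools (my own check, not a calculation from the original paper): if the dilution does nothing, each of the eight responders is about equally likely to have received either vial, so we can ask how surprising a split of five to three is.

```python
from math import comb

# Of the 8 people who reported unusual sensations, 5 had the dilution and 3 had water.
# Under the null hypothesis that the dilution does nothing, each responder is roughly
# equally likely to be in either group. How likely is a split of 5 or more by chance?
p_value = sum(comb(8, k) for k in range(5, 9)) / 2**8
print(f"p = {p_value:.2f}")   # ~0.36, nowhere near statistical significance
```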

Von Hoven wasn’t alone with his debunking passion. He was a member of the “society of truth-loving men”. That was one of the skeptical societies that had popped up to counter the spread of quackery and fraud in the 19th century. The society of truth-loving men no longer exists. But the oldest such society that still exists today was founded as far back as 1881 in the Netherlands. It’s called the Vereniging tegen de Kwakzalverij, literally the “Society Against Quackery”. This society gave out an annual prize called the Master Charlatan Prize to discourage the spread of quackery. They still do this today.

Thanks to this Dutch anti-quackery society, the Netherlands became one of the first countries with governmental drug regulation. In case you wonder, the first country to have such a regulation was the United Kingdom with the 1868 Pharmacy Act. The word “skeptical” has suffered somewhat in recent years because a lot of science deniers now claim to be skeptics. But historically, the task of skeptic societies was to fight pseudoscience and to provide scientific information to the public.

And there are more examples where fighting pseudoscience resulted in scientific and societal progress. Take, for example, the efforts to debunk telepathy in the late nineteenth century. At the time, some prominent people believed in it, for example the Nobel Prize winners Lord Rayleigh and Charles Richet. Richet proposed to test telepathy by having one person draw a playing card at random and concentrate on it for a while. Then another person had to guess the card. The results were then compared against random chance. This is basically how we today calculate statistical significance.
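
Richet’s card test translates directly into what we would now call a significance test. Here is a small Monte Carlo sketch of the idea; the deck size, the number of draws, and the number of hits are hypothetical examples, not numbers from Richet’s experiments.

```python
import random

# Monte Carlo version of the card-guessing test: how often does pure chance
# do at least as well as an alleged telepath? All numbers here are hypothetical.

def chance_of_doing_as_well(hits, draws, deck_size=52, simulations=20_000):
    """Fraction of simulated chance-guessers who score at least `hits` out of `draws`."""
    at_least = 0
    for _ in range(simulations):
        score = sum(random.randrange(deck_size) == 0 for _ in range(draws))
        if score >= hits:
            at_least += 1
    return at_least / simulations

# Four correct guesses in 100 draws sounds impressive, but chance alone gives about two:
print(chance_of_doing_as_well(hits=4, draws=100))   # roughly 0.13, not significant
```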

And if you remember, Karl Popper came up with his demarcation criterion of falsification because he wanted to show that Marxism and Freud’s psychoanalysis weren’t proper science. Now, of course we know today that falsification is not the best way to go about it, but Popper’s work was arguably instrumental to the entire discipline of the philosophy of science. Again, that came out of the desire to fight pseudoscience.

And this fight isn’t over. We’re still today fighting pseudoscience and in that process scientists constantly have to update their methods. For example, all this research we see in the foundations of physics on multiverses and unobservable particles doesn’t contribute to scientific progress. I am pretty sure in fifty years or so that’ll go down as pseudoscience. And of course there’s still loads of quackery in medicine, just think of all the supposed COVID remedies that we’ve seen come and go in the past year.

The fight against pseudoscience today is very much a fight to get relevant information to those who need it. And again I’d say that in the process scientists are forced to get better and stronger. They develop new methods to quickly identify fake studies, to explain why some results can’t be trusted, and to improve their communication skills.

In case this video inspired you to attempt self-experiments with homeopathic remedies, please keep in mind that not everything that’s labeled “homeopathic” is necessarily strongly diluted. Some homeopathic remedies contain barely diluted active ingredients of plants that can be dangerous when overdosed. Before you assume it’s just water or sugar, please check the label carefully.

If you want to learn more about the history of pseudoscience, I can recommend Michael Gordin’s recent book “On the Fringe”.

Saturday, October 02, 2021

How close is nuclear fusion power?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Today I want to talk about nuclear fusion. I’ve been struggling with this video for some while. This is because I am really supportive of nuclear fusion research and development. However, the potential benefits of current research on nuclear fusion have been incorrectly communicated for a long time. Scientists are confusing the public and policy makers in a way that makes their research appear more promising than it really is. And that’s what we’ll talk about today.

There is a lot to say about nuclear fusion, but today I want to focus on its most important aspect, how much energy goes into a fusion reactor, and how much comes out. Scientists quantify this with the energy gain, that’s the ratio of what comes out over what goes in and is usually denoted Q. If the energy gain is larger than 1 you create net energy. The point where Q reaches 1 is called “Break Even”.

The record for energy gain was just recently broken. You may have seen the headlines. An experiment at the National Ignition Facility in the United States reported they’d managed to get out seventy percent of the energy they put in, so a Q of 0.7. The previous record was 0.67. It was set in nineteen ninety-seven by the Joint European Torus, JET for short.

The most prominent fusion experiment that’s currently being built is ITER. You will find plenty of articles repeating that ITER, when completed, will produce ten times as much energy as goes in, so a gain of 10. Here is an example from a 2019 article in the Guardian by Philip Ball, who writes
“[The Iter project] hopes to conduct its first experimental runs in 2025, and eventually to produce 500 megawatts (MW) of power – 10 times as much as is needed to operate it.”

Here is another example from Science Magazine where you can read “[ITER] is predicted to produce at least 500 megawatts of power from a 50 megawatt input.”

So this looks like we’re close to actually creating energy from fusion right? No, wrong.

Remember that nuclear fusion is the process by which the sun creates power. The sun forces nuclei into each other with the gravitational force created by its huge mass. We can’t do this on earth so we have to find some other way. The currently most widely used technology for nuclear fusion is heating the fuel in strong magnetic fields until it becomes a plasma. The temperature that must be reached is about 150 million Kelvin. The other popular option is shooting at a fuel pellet with lasers. There are some other methods but they haven’t gotten very far in research and development.

The confusion which you find in pretty much all popular science writing about nuclear fusion is that the energy gain they quote is the gain for the energy that goes into the plasma and comes out of the plasma.

In the technical literature, this quantity is normally not just called Q but more specifically Q-plasma. This is not the ratio of the entire energy that comes out of the fusion reactor over that which goes into the reactor, which we can call Q-total. If you want to build a power plant, and that’s what we’re after in the end, it’s the Q-total that matters, not the Q-plasma. 

 Here’s the problem. Fusion reactors take a lot of energy to run, and most of that energy never goes into the plasma. If you keep the plasma confined with a magnetic field in a vacuum, you need to run giant magnets and cool them and maintain that. And pumping a laser isn’t energy efficient either. These energies never appear in the energy gain that is normally quoted.

The Q-plasma also doesn’t take into account that if you want to operate a power plant, the heat that is created by the plasma would still have to be converted into electric energy, and that can only be done with a limited efficiency, optimistically maybe fifty percent. As a consequence, the Q total is much lower than the Q plasma.

If you didn’t know this, you’re not alone. I didn’t know this until a few years ago either. How can such a confusion even happen? I mean, this isn’t rocket science. The total energy that goes into the reactor is more than the energy that goes into the plasma. And yet, science writers and journalists constantly get this wrong. They get the most basic fact wrong on a matter that affects tens of billions of research funding.

It’s not like we are the first to point out that this is a problem. I want to read you some words from a 1988 report from the European Parliament, more specifically from the Committee for Scientific and Technological Options Assessment. They were tasked with establishing criteria for the assessment of European fusion research.

In 1988, they already warned explicitly of this very misunderstanding.
“The use of the term `Break-even’ as defining the present programme to achieve an energy balance in the Hydrogen-Deuterium plasma reaction is open to misunderstanding. IN OUR VIEW 'BREAK-EVEN' SHOULD BE USED AS DESCRIPTIVE OF THE STAGE WHEN THERE IS AN ENERGY BREAKEVEN IN THE SYSTEM AS A WHOLE. IT IS THIS ACHIEVEMENT WHICH WILL OPEN THE WAY FOR FUSION POWER TO BE USED FOR ELECTRICITY GENERATION.”
They then point out the risk:
“In our view the correct scientific criterion must dominate the programme from the earliest stages. The danger of not doing this could be that the entire programme is dedicated to pursuing performance parameters which are simply not relevant to the eventual goal. The result of doing this could, in the very worst scenario be the enormous waste of resources on a program that is simply not scientifically feasible.”
So where are we today? Well, we’re spending lots of money on increasing Q-plasma instead of increasing the relevant quantity Q-total. How big is the difference? Let us look at ITER as an example.

You have seen in the earlier quotes about ITER that the energy input is normally said to be 50 MegaWatts. But according to the head of the Electrical Engineering Division of the ITER Project, Ivone Benfatto, ITER will consume about 440 MegaWatts while it produces fusion power. That gives us an estimate for the total energy that goes in.

Though that is misleading already because 120 of those 440 MegaWatts are consumed whether or not there’s any plasma in the reactor, so using this number assumes the thing would be running permanently. But okay, let’s leave this aside.

The plan is that ITER will generate 500 MegaWatts of fusion power in heat. If we assume a 50% efficiency for converting this heat into electricity, ITER will produce about 250 MegaWatts of electric power.

That gives us a Q total of about 0.57. That’s less than a tenth of the normally stated Q plasma of 10. Even optimistically, ITER will still consume roughly twice the power it generates. What’s with the earlier claim of a Q of 0.67 for the JET experiment? Same thing. 

If you look at the total energy, JET consumed more than 700 MegaWatts of electricity to get its sixteen MegaWatts of fusion power, that’s heat not electric. So if you again assume 50 percent efficiency in the heat to electricity conversion you get a Q-total of about 0.01 and not the claimed 0.67. 
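
Here is the arithmetic behind these two estimates in one place, using the numbers quoted above and the assumed 50 percent heat-to-electricity conversion.

```python
# Total energy gain: electric power out over electric power drawn by the whole facility,
# using the numbers quoted above and an assumed 50% heat-to-electricity conversion.

def q_total(fusion_heat_mw, facility_input_mw, conversion_efficiency=0.5):
    return fusion_heat_mw * conversion_efficiency / facility_input_mw

print(f"ITER (planned): {q_total(500, 440):.2f}")   # ~0.57, versus the usually quoted Q-plasma of 10
print(f"JET (1997):     {q_total(16, 700):.3f}")    # ~0.011, versus the quoted Q-plasma of 0.67
```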

 And those recent headlines about the NIF success? Same thing again. It’s the Q-plasma that is 0.7. That’s calculated with the energy that the laser delivers to the plasma. But how much energy do you need to fire the laser? I don’t know for sure, but NIF is a fairly old facility, so a rough estimate would be 100 times as much. If they’d upgrade their lasers, maybe 10 times as much. Either way, the Q-total of this experiment is almost certainly well below 0.1. 

Of course the people who work on this know the distinction perfectly well. But I can’t shake the impression they quite like the confusion between the two Qs. Here is for example a quote from Holtkamp who at the time was the project construction leader of ITER. He said in an interview in 2006:
“ITER will be the first fusion reactor to create more energy than it uses. Scientists measure this in terms of a simple factor—they call it Q. If ITER meets all the scientific objectives, it will create 10 times more energy than it is supplied with.”
Here is Nick Walkden from JET in a TED talk referring to ITER “ITER will produce ten times the power out from fusion energy than we put into the machine.” and “Now JET holds the record for fusion power. In 1997 it got 67 percent of the power out that we put in. Not 1 not 10 but still getting close.”

But okay, you may say, no one expects accuracy in a TED talk. Then listen to ITER Director General Dr. Bigot speaking to the House of Representatives in April 2016:

[Rep]: I look forward to learning more about the progress that ITER has made under Doctor Bigot’s leadership to address previously identified management deficiencies and to establish a more reliable path forward for the project.

[Bigot]:Okay, so ITER will have delivered in that full demonstration that we could have okay 500 Megawatt coming out of the 50 Megawatt we will put in.
What are we to make of all this?

Nuclear fusion power is a worthy research project. It could have a huge payoff for the future of our civilization. But we need to be smart about just what research to invest into because we have limited resources. For this, it is super important that we focus on the relevant question: Will it output energy into the grid?

There seem to be a lot of people in fusion research who want you to remain confused about just what the total energy gain is. I only recently read a new book about nuclear fusion, “The Star Builders”, which does the same thing again (review here). It only briefly mentions the total energy gain, and never gives you a number. This misinformation has to stop.

If you come across any popular science article or interview or video that does not clearly spell out what the total energy gain is, please call them out on it. Thanks for watching, see you next week.

Wednesday, September 29, 2021

[Guest Post] Brian Keating: How to Think Like a Nobel Prize Winner

[The following is an excerpt from Think Like a Nobel Prize Winner, Brian Keating’s newest book based on his interviews with 9 Nobel Prize winning physicists. The book isn’t a physics text, nor even a memoir like Keating’s first book Losing the Nobel Prize. Instead, it’s a self-help guide for technically minded individuals seeking to ‘level-up’ their lives and careers.]

When 2017 Nobel Prize winner Barry Barish told me he had suffered from the imposter syndrome, the hair stood up on the back of my neck. I couldn’t believe that one of the most influential figures in my life and career—as a scientist, as a father, and as a human—is mortal. He sometimes feels insecure, just like I do. Every time I’m teaching, in the back of my head, I am thinking, who am I to do this? I always struggled with math, and physics never came naturally to me. I got where I am because of my passion and curiosity, not my SAT scores. Society venerates the genius. Maybe that’s you, but it’s certainly not me.

I’ve always suffered from the imposter syndrome. Discovering that Barish did too, even after winning a Nobel Prize—the highest regard in our field and in society itself—immensely comforted me. If he was insecure about how he compared to Einstein, I wanted to comfort him: Einstein was in awe of Isaac Newton, saying Newton “... determined the course of Western thought, research, and practice like no one else before or since.” And compared to whom did Newton feel inadequate? Jesus Christ almighty!

The truth is, the imposter syndrome is just a normal, even healthy, dose of inadequacy. As such, we can never overcome or defeat it, nor should we try to. But we can manage it through understanding and acceptance. Hearing about Barry’s experience allowed me to do exactly that, and I hoped sharing that message would also help others manage better. This was the moment I decided to create this book.

This isn’t a physics book. These pages are not for aspiring Nobel Prize winners, mathematicians, or any of my fellow geeks, dweebs, or nerds. In fact, I wrote it specifically for nonscientists—for those who, because of the quotidian demands of everyday life, sometimes lose sight of the biggest-picture topics humans are capable of learning about and contributing to. Most of all, I hope by humanizing science, by showing the craft of science as performed by its master practitioners, you my reader will see common themes emerge that will boost your creativity, stoke your imagination, and most of all, help overcome barriers like the imposter syndrome, thereby unlocking your full potential for out-of-this-universe success.

Though I didn’t write it for physicists, it’s appropriate to consider why the subjects of this book—who are all physicists—are good role models. Physicists are mental Swiss Army knives, or a cerebral SEAL Team Six. We dwell in uncertainty. We exist to solve problems.

We are not the best mathematicians (just ask a real mathematician). We’re not the best engineers. We also aren’t the best writers, speakers, or communicators—but no single group can simultaneously do all of these disparate tasks so well as the physicists I’ve compiled here. That’s what makes them worth listening to and learning from. I sure have.

The individuals in this book have balanced collaboration with competition. All scientists stand on the proverbial shoulders of giants of the past and present. Yet some of the most profound moments of inspiration do breathe magic into the equation of a single individual one unique time. There is a skill to know when to listen and when to talk, for you can’t do both at the same time. These scientists have navigated the challenging waters between focus and diversity, balancing intellectual breadth with depth, which are challenges we all face. Whether you’re a scientist or a salesman, you must “niche down” to solve problems. (Imagine trying to sell every car model made!)

I wrote this book for everyone who struggles to balance the mundane with the sublime—who is attending to the day-to-day hard work and labor of whatever craft they are in while also trying to achieve something greater in their profession or in life. I wanted to deconstruct the mental habits and tactics of some of society’s best and brightest minds in order to share their wisdom with readers—and also to show readers that they’re just like us. They struggle with compromise. They wrestle with perfection. And they aspire always to do something great. We can too.

By studying the habits and tactics of the world’s brightest, you can recognize common themes that apply to your life— even if the subject matter itself is as far removed from your daily life as a black hole is from a quark. Honestly, even though I am a physicist, the work done by most of the subjects in this book is no more similar to my daily work than it is to yours, and yet I learned much from them about issues common between us. These pages include enduring life lessons applicable to anyone eager to acquire the true keys to success!

HOW IT ALL BEGAN

A theme pops up throughout these interviews regarding the connection between teaching and learning. In the Russian language, the word for “scientist” translates into “one who was taught.” That is an awesome responsibility with many implications. If we were taught, we have an obligation to teach. But the paradox is this: To be a good teacher, you must also be a good student. You must study how people learn in order to teach effectively. And to learn, you must not only study but also teach. In that way, I also have a selfish motivation behind this book: I wanted to share everything I learned from these laureates in order to learn it even more durably. Mostly, however, I see this book as an extension of my duty as an educator. That’s also how the podcast Into the Impossible began.

I’ve always had an insatiable curiosity about learning and education, combined with the recognition that life is short and I want to extract as much wisdom as I can while I can.

As a college professor, I think of teachers as shortcuts in this endeavor. Teachers act as a sort of hack to reduce the amount of time otherwise required to learn something on one’s own, compressing and making the learning process as efficient as possible—but no more so. In other words, there is a value in wrestling with material that cannot be hacked away.

As part of my duty as an educator, I wanted to cultivate a collection of dream faculty comprised of minds I wish I had encountered in my life. The next best thing to having them as my actual teachers is to learn from their interviews in a way that distills their knowledge, philosophy, struggles, tactics, and habits.

I started doing just that at UC San Diego in 2018 and realized I was extremely privileged to have access to some of the greatest minds in human history, ranging from Pulitzer Prize winners and authors to CEOs, artists, and astronauts. As the codirector of the Arthur C. Clarke Center for Human Imagination, I had access to a wide variety of writers, thinkers, and inventors from all walks of life, courtesy of our guest-speaker series. The list of invited speakers is not at all limited to the sciences. The common denominator is conversations about human curiosity, imagination, and communication from a variety of vantage points.

I realized it would be a missed opportunity if only those people who attended our live events benefited from these world-class intellects. So we supplemented their visiting lectures with podcast interviews, during which we explored topics in more detail. I started referring to the podcast as the “university I wish I’d attended where you can wear your pajamas and don’t incur student-loan debt.”

The goal of the podcast is to interview the greatest minds for the greatest number of people. My very first guest was the esteemed physicist Freeman Dyson. I next interviewed science-fiction authors, such as Andy Weir and Kim Stanley Robinson; poets and artists, including Herbert Sigüenza and Rae Armantrout; astronauts, such as Jessica Meir and Nicole Stott; and many others. Along the way, I also started to collect a curated subset of interviews with Nobel Prize–winning physicists.

Then in February 2020, my friend Freeman Dyson died. Dyson was the prototype of a truly overlooked Nobel laureate. His contributions to our understanding of the fundamentals of matter and energy cannot be overstated, yet he was bypassed for the Nobel Prize he surely deserved. I was honored to host him for his winter visits to enjoy La Jolla’s sublime weather.

Freeman’s passing lent an incredible sense of urgency to my pursuits, forcing me to acknowledge that most prize-winning physicists are getting on in years. I don’t know how to say this any other way, but I started to feel sick to my stomach, thinking that I might miss an opportunity to talk to some of the most brilliant minds in history who, because of winning the Nobel Prize, have had an outsized influence on society and culture. So in 2020, I started reaching out to them. Most said yes, although sadly, both of the living female Nobel laureate physicists declined to be interviewed. I’m incredibly disappointed not to have female voices in this book, but it’s due to the reality of the situation and not for lack of trying.

A year later, I had this incredible collection of legacy interviews with some of the most celebrated minds on the planet. T.S. Eliot once said, “The Nobel is a ticket to one’s own funeral. No one has ever done anything after he got it.” No one proves that idea more wrong than the physicists in this book. It’s a rarefied group of individuals to learn from—especially when the focus is on life lessons instead of their research. It would be a dereliction of my intellectual duty not to preserve and share them.

HOW TO APPROACH THIS BOOK

These chapters are not transcripts. From the lengthy interviews I conducted with each laureate, I pulled all of the bits exemplifying traits worthy of emulation. Then, after each exchange, I added context or shared how I have been affected by that quote or idea. I have also edited for clarity, since spoken communication doesn’t always translate directly to the page.

All in all, I have done my best to maintain the authenticity of my exchanges with my guests. For example, you’ll notice that my questions don’t always relate to the take-away. Conversations often go in unexpected directions. I could’ve rephrased the questions for this book so they more accurately represented the laureates’ responses, but I didn’t want to misrepresent context. Still, any mistakes accidentally introduced are definitely mine, not theirs.

Each chapter contains a small box briefly explaining the laureate’s Prize-winning work—not because there will be a test at the end, but because it’s interesting context, and further, I know a lot of my readers will want to learn a bit of the fascinating science in these pages, considering the folks from whom you’ll be learning. Perhaps their work will ignite further curiosity in you. If that’s not you, feel free to skip these boxes. If you’re looking for more, I refer you to the laureates’ Nobel lectures at nobelprize.org. There, you will find their knowledge. But here, you will find examples of their wisdom—distilled and compressed into concentrated, actionable form.

Each interview ends with a handful of lightning-round questions designed to investigate more deeply, to provide you with insight into what these laureates are like as human beings. Often these questions reoccur.

Further, you’ll find several recurrent themes from interview to interview, including the power of curiosity, the importance of listening to your critics, and why it’s paramount to pursue goals that are “useless.” I truly hope you’ll enjoy going out of this Universe and the benefits it will accrue to your life and career!

Buy your copy of Think Like A Nobel Prize Winner here!

Saturday, September 25, 2021

Where did the Big Bang happen?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



The universe started with a Big Bang and it’s expanded ever since. You probably know this. You probably also know that the universe doesn’t have a center. But where did the big bang happen, if not in the center of the universe? And if the universe expands, doesn’t that mean that matter on the average doesn’t move, contrary to what Einstein said, that absolute rest doesn’t exist? I get these questions a lot. And at the end of this video, you’ll know the answers.

First of all, what’s the Big Bang? The Big Bang is what you get if you take Einstein’s equations and extrapolate the present state of the universe back in time. The universe presently expands, so if you go back in time it contracts, and the matter in it becomes more and more compressed. The equations say that when you’ve gone back about thirteen point seven billion years you run into a singularity at which the density of matter must have been infinitely large. This moment is what we call the “Big Bang”.

There are two warnings I have to add when it comes to the “Big Bang”. First, I don’t know anybody who actually believes that this singularity is physically real. It probably just means that Einstein’s equations break down and must be replaced by something else. For this reason, physicists use the term “Big Bang” to refer to whatever it is that replaces the singularity to within a Planck time or so. A Planck time is about ten to the minus forty-four seconds.

Second, we don’t actually know that this extrapolation all the way back to the Big Bang is correct because we have no observations dating back to before roughly the creation of atomic nuclei. It could be that Einstein’s equations actually aren’t the right ones for the very early universe. So instead of a Big Bang it could also be that an earlier universe collapsed and then expanded again which is called a Big Bounce. Or there could have been an infinitely long time in which not much happened after which expansion suddenly began. That would also look much like a big bang. We just don’t know which one’s right. The “Big Bang” is just the simplest scenario you get when you naively extrapolate the equations back in time.

But if the Big Bang did happen, where did it happen? It seems that if the universe expands, it must have come out of some place, right? Well, no. Like so many popular science confusions, this one is created by the attempt to visualize what can’t be visualized.

To begin with, as I explained in an earlier video, the universe doesn’t expand into anything. So the image of an inflating balloon is very misleading. When we say that the universe expands, we’re talking about what happens inside the universe.

Therefore, that the universe expands is not a statement about the size of the universe as a whole. That wouldn’t make sense because in Einstein’s theory, the universe is infinitely large. It is infinitely large now and has always been infinitely large. That the universe expands means that the distances between locations in the universe increase. And that can happen even though the size is infinite.

Suppose you have an elastic strap with buttons on it, and imagine the strap is space and the buttons are galaxy clusters. If you stretch the strap, the distances between the buttons increase. That’s what it means for the universe to expand. It’s the intergalactic space that expands. Now just imagine the strap is 3-dimensional and infinitely large.

Okay, easier said than done, I know, but this is how the mathematics works. If you go back in time to the Big Bang, all distances, areas, and volumes go to zero. But this happens at every point in space. And the size of the universe is still infinite. How can the size of the universe possibly be infinite if all distances go to zero? Well, have a look at this line. That’s a stretch of the real numbers from zero to 1. That’s a set of infinitely many points, each of which has size zero. And yet the line doesn’t have length zero. Infinity is weird. If you add up infinitely many zeros you can get anything, including infinity. I talked more about infinity in an earlier video.
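
Here is a tiny sketch of the button-and-strap picture in terms of a scale factor: the distance between any two “buttons” is the scale factor times a fixed comoving separation, so shrinking the scale factor shrinks every distance everywhere at once, while the grid of buttons itself never acquires an edge or a center. The numbers are arbitrary.

```python
# Toy picture of expansion: proper distance = scale_factor * comoving separation.
# Shrinking the scale factor toward zero shrinks every distance, everywhere at once.

comoving_positions = [-2.0, -1.0, 0.0, 1.0, 2.0]   # arbitrary "buttons on the strap"

def proper_distance(x1, x2, scale_factor):
    return scale_factor * abs(x2 - x1)

for a in (1.0, 0.1, 0.001):
    d = proper_distance(comoving_positions[0], comoving_positions[1], a)
    print(f"scale factor {a}: neighboring buttons are {d} apart")
# All pairwise distances go to zero with the scale factor, but the grid has no edge
# and no center, and it could just as well extend forever.
```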

But in all honesty, I also find it somewhat hard to interpret the Big Bang in terms of distances. That’s why I prefer to think of it as the moment when the density of matter in the universe goes to infinity – everywhere.

But wait, didn’t you hear someone say that the universe was the size of a grapefruit at the Big Bang? They were referring only to the part of the universe that we can see today. The part that we can see has a finite size because light had only those 13.7 billion years to travel, so anything farther away from us than that, we can’t see. We are in the middle of the part we can see just because light travels at the same speed in all directions. The mass in the visible part of the universe is finite. And, yes, if there really was a Big Bang then all that mass was once compressed into a volume similar to that of a grapefruit, or really whatever fruit you want. But the Big Bang still happened everywhere in that grapefruit.

Okay, but that brings up another problem. If the universe expands the same everywhere, then doesn’t this define a frame of absolute rest? Think back to that elastic strap again. If you sit on one of the buttons, then you move “with the expansion of the universe” in some sense. It seems fair to say that this would correspond to zero velocity. But didn’t Einstein say that velocities are relative, and that you’re not supposed to talk about absolute velocities? I mean, that’s why it’s called “relativity”, right? Well, yes and no.

If you remember, Einstein really had two theories, first special relativity and then general relativity. Special relativity is the theory in which there is no such thing as absolute rest and you can only talk about relative velocities. But this theory does not contain gravity, which Einstein described as the curvature of space and time. If you want to describe gravity and the expansion of the universe, then you need to use general relativity.

In general relativity, matter, or all kinds of energy really, affect the geometry of space and time. And so, in the presence of matter the universe indeed gets a preferred direction of expansion. And you can be at rest with the universe. This state of rest is usually called the “co-moving frame”, so that’s the reference frame that moves with the universe. This doesn’t disagree with Einstein at all.

What is the co-moving frame of the universe? It’s normally assumed to be the same as the rest frame of the cosmic microwave background, or at least very similar to it. So what you can do is measure the radiation of the cosmic microwave background that is coming at us from all directions. If we were at rest with the cosmic microwave background, the energy in that radiation should be the same in all directions. This isn’t the case though; instead we see that the radiation has somewhat more energy in one particular direction and less energy in the exact opposite direction. This can be attributed to our motion through the rest frame of the universe.

How fast do we move? Well, we move in many ways, because the earth is spinning and orbiting around the sun which is orbiting around the center of the milky way. So really our direction constantly changes. But the Milky Way itself moves at about 630 kilometers per second relative to the cosmic microwave background. That’s about a million miles per hour. Where are we going? We’re moving towards something called “the great attractor” and no one has any idea what that is or why we’re going there.
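
To leading order, moving with velocity v relative to the cosmic microwave background shows up as a temperature dipole of size roughly T times v/c. Here is a minimal sketch with the velocity just mentioned; note that the dipole actually measured from Earth corresponds to the solar system’s somewhat slower motion of roughly 370 kilometers per second.

```python
# Leading-order CMB dipole: delta_T ~ T_cmb * v / c for motion at velocity v.

T_CMB = 2.725     # K, average temperature of the cosmic microwave background
C = 299_792.458   # km/s, speed of light

def dipole_millikelvin(v_km_s):
    return T_CMB * v_km_s / C * 1000.0

print(f"{dipole_millikelvin(630):.1f} mK")   # ~5.7 mK for the Milky Way's 630 km/s
print(f"{dipole_millikelvin(370):.1f} mK")   # ~3.4 mK for the solar system's ~370 km/s, close to what is measured
```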

Saturday, September 18, 2021

The physics anomaly no one talks about: What’s up with those neutrinos?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



In the past months we’ve talked a lot about topics that receive more attention than they deserve. Today I want to talk about a topic that doesn’t receive the attention it deserves. That’s a 20-year-old anomaly in neutrino physics which has been above the discovery threshold since 2018, but chances are you’ve never even heard of it. So what are neutrinos, what’s going on with them, and what does it mean? That’s what we’ll talk about today.

I really don’t understand why some science results make headlines and others don’t. For example, we’ve seen loads of headlines about the anomaly in the measurement of the muon g-2 and the lepton anomaly at the Large Hadron Collider. In both of these cases the observations don’t agree with the prediction but neither is statistically significant enough to count as a new discovery, and in both cases there are reasons to doubt it’s actually new physics.

But in 2018, the MiniBooNE neutrino experiment at Fermilab confirmed an earlier anomaly from an experiment called LSND at the Los Alamos National Laboratory. The statistical significance of that anomaly is now at 6 σ. And in this case it’s really difficult to find an explanation that does not involve new physics. So why didn’t this make big headlines? I don’t know. Maybe people just don’t like neutrinos?

But there are lots of reasons to like neutrinos. Neutrinos are elementary particles in the standard model of particle physics. That they are elementary means they aren’t made of anything else, at least not for all we currently know. In the standard model, we have three neutrinos. Each of them is a partner-particle of a charged lepton. The charged leptons are the electron, muon, and tau. So we have an electron-neutrino, a muon-neutrino, and a tau-neutrino. Physicists call the types of neutrinos the neutrino “flavor”. The standard model neutrinos each have a flavor, have spin ½ and no electric charge.

So far, so boring. But neutrinos are decidedly weird for a number of reasons. First, they are the only particles that interact solely through the weak nuclear force. All the other particles we know interact with the electromagnetic force or the strong nuclear force or both. And the weak nuclear force is weak. Which is why neutrinos rarely interact with anything at all. They mostly just pass through matter without leaving a trace. This is why they are often called “ghostly”. While you’ve listened to this sentence, about 10^15 neutrinos have passed through you.

This isn’t the only reason neutrinos are weird. What’s even weirder is that the three types of neutrino-flavors mix into each other. That means, if you start with, say, only electron-neutrinos, they’ll convert into muon-neutrinos as they travel. And then they’ll convert back into electron neutrinos. So, depending on what distance from a source you make a measurement, you’ll get more electron neutrinos or more muon neutrinos. Crazy! But it’s true. We have a lot of evidence that this actually happens and indeed a Nobel Prize was awarded for this in 2015.

Now, to be fair, neutrino-mixing in and by itself isn’t all that weird. Indeed, quarks also mix, it’s just that they don’t mix as much. That neutrinos mix is weird because neutrinos can only mix if they have masses. But we don’t know how they get masses.

You see, the way that other elementary particles get masses is that they couple to the Higgs-boson. But the way this works is that we need a left-handed and a right-handed version of the particle, and the Higgs needs to couple to both of them together. That works for all particles except the neutrinos. Because no one has ever seen a right-handed neutrino, we only ever measure left-handed ones. So, the neutrinos mix, which means they must have masses, but we don’t know how they get these masses.

There are two ways to fix this problem. Either the right-handed neutrinos exist but are very heavy, so we haven’t seen them yet because creating them would take a lot of energy. Or the neutrinos are different from all the other spin ½ particles in that their left- and right-handed versions are just the same. This is called a Majorana particle. But either way, something is missing from our understanding of neutrinos.

And the weirdest bit is the anomaly that I mentioned. As I said, we have three flavors of neutrinos and these mix into each other as they travel. This has been confirmed by a large number of observations on neutrinos from different sources. There are natural sources like the sun, and neutrinos that are created in the upper atmosphere when cosmic rays hit it. And then there are neutrinos from manmade sources, particle accelerators and nuclear power plants. In all of these cases, you know how many neutrinos of which type are created at what energy. And then after some distance you measure them and see what you get.

What physicists then do is try to find parameters for the neutrino mixing that fit all the data. This is called a global fit and you can look up the current status online. The parameters you need to fit are the differences in the masses squared, which determine the wavelength of the mixing, and the mixing angles, which determine how much the neutrinos mix.
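To give a concrete sense of what these parameters do, here is a minimal sketch of the standard two-flavor oscillation formula. The real global fits use all three flavors; the baseline, energy, and parameter values below are just illustrative numbers, not any particular experiment’s.

    import numpy as np

    def oscillation_probability(L_km, E_GeV, delta_m2_eV2, sin2_2theta):
        # Two-flavor approximation: P = sin^2(2*theta) * sin^2(1.27 * dm^2 * L / E),
        # with the mass-squared difference in eV^2, baseline L in km, energy E in GeV
        return sin2_2theta * np.sin(1.27 * delta_m2_eV2 * L_km / E_GeV) ** 2

    # Illustrative values: 295 km baseline, 0.6 GeV neutrinos, atmospheric-scale parameters
    print(oscillation_probability(295, 0.6, 2.5e-3, 1.0))

The mass-squared difference sets how fast the probability oscillates with distance over energy, and the mixing angle sets how large the swings are.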

By 2005 or so physicists had pretty much pinned down all the parameters. Except. There was one experiment which didn’t make sense. That was the Liquid Scintillator Neutrino Detector, LSND for short, which ran from 1993 to 98. The LSND data just wouldn’t fit together with all the other data. It’s normally just excluded from the global fit.

In this figure, you see the LSND results from back then. The red and green is what you expect. The dots with the crosses are the data. The blue is the fit to the data. This excess has a statistical significance of 3.8 σ. As a quick reminder, 1 σ is a standard deviation. The farther away from the expectation the data is, measured in standard deviations, the less likely the deviation is to have come about coincidentally. So, the more σ, the more impressive the anomaly. In particle physics, the discovery threshold is 5 σ. The 3.8 σ of the LSND anomaly wasn’t enough to get excited about, but too much to just ignore.
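For the record, here is roughly what those sigma values translate to in probability, using one-sided Gaussian tail probabilities; a quick sketch with scipy:

    from scipy.stats import norm

    # One-sided probability that a pure fluctuation reaches at least n standard deviations
    for n in (3.8, 5.0, 6.0):
        print(f"{n} sigma -> p = {norm.sf(n):.1e}")

That gives about 7 in 100,000 for 3.8 σ, about 3 in 10 million for 5 σ, and about 1 in a billion for 6 σ.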

15 years ago, I worked on neutrino mixing for a while, and my impression back then was that most physicists thought the LSND data was just wrong and wouldn’t be reproduced. That’s because this experiment was a little different from the others for several reasons. It detected only anti-neutrinos created by a particle accelerator, and it had a very short baseline of only 30 meters, shorter than all the other experiments.

Still, a new experiment was commissioned to check on this. This was the MiniBooNE experiment at Fermilab. That’s the Mini Booster Neutrino Experiment, and it’s been running since 2003. As you can tell, by then the trend of cooking up funky acronyms had taken hold in physics. MiniBooNE is basically a big tank full of mineral oil surrounded by photo-detectors, which you see in this photo. The tank waits for neutrinos from the nearby Booster accelerator, which you see in this photo.

For the first data analysis in 2007, MiniBooNE didn’t have a lot of data and the result seemed to disagree with LSND. This was what everyone expected. Look at this headline from 2007, for example. But then in 2018, with more data, MiniBooNE confirmed the LSND result. Yes, you heard that right. They confirmed it with 4.7 σ, and the combined significance is 6 σ.

What does that mean? You can’t fit this observation by tweaking the other neutrino mixing parameters. There just aren’t sufficiently many parameters to tweak. The observation is simply incompatible with the standard model. So you have to introduce something new. Some ideas that physicists have put forward are symmetry violations, or new neutrino interactions that aren’t in the standard model. There is also of course still the possibility that physicists misunderstand something about the experiment itself, but given that this is an independent reproduction of an earlier experiment, I find this unlikely. The most popular idea, which is also the easiest, is what’s called “sterile neutrinos”.

A sterile neutrino is one that doesn’t have a lepton associated with it; it doesn’t have a flavor. So we wouldn’t have seen it produced in particle collisions. Sterile neutrinos can however still mix into the other neutrinos. Indeed, that would be the only way sterile neutrinos could interact with the standard model particles, and so the only way we can measure them. One sterile neutrino alone doesn’t explain the MiniBooNE/LSND data though. You need at least two, or something else in addition. Interestingly enough, sterile neutrinos could also make up dark matter.

When will we find out? Indeed, seeing that the result is from 2018, why don’t we know already? Well, it’s because neutrinos… interact very rarely. This means it takes a really long time to detect sufficiently many of them to come to any conclusions.

Just to give you an idea, the MiniBooNE experiment collected data from 2002 to 2017. During that time they saw an excess of about 500 events. 500 events in 15 years. So I think we’re onto something here. But glaciers now move faster than particle physics.

This isn’t a mystery that will resolve quickly but I’ll keep you up to date, so don’t forget to subscribe.

Saturday, September 11, 2021

The Second Quantum Revolution

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Quantum mechanics is more than a hundred years old. That sounds like it’s the stuff of dusty textbooks, but research on quantum mechanics is more active now than a century ago. That’s because many rather elementary facts about quantum mechanics couldn’t be experimentally settled until the 1980s. But then, by the year 2000 or so, experimental progress had totally transformed the field. Today it’s developing faster than ever. How did this happen, why does it matter, and what’s quantum teleportation? That’s what we’ll talk about today.

Albert Einstein was famously skeptical of quantum mechanics. Hi Albert. He thought quantum mechanics couldn’t possibly be a complete description of nature and he argued that something was missing from it.

You see, in quantum mechanics we can’t predict the outcome of a measurement. We can only predict the probability for getting a particular outcome. Without quantum mechanics, I could say if I shoot my particle cannon, then the particles will land right there. With quantum mechanics I can only say they’ll land right there 50% of the time but have a small chance to land anywhere, really.

Einstein didn’t like this at all. He thought that actually the outcome of a measurement is determined, it’s just that it’s determined by “hidden variables” which we don’t have in quantum mechanics. If that was so, the outcome would look random just because we didn’t have enough information to predict it.

To make this point, in 1935 Einstein wrote a famous paper with Boris Podolsky and Nathan Rosen, now called the EPR paper. In this paper they argued that quantum mechanics is incomplete, it can’t be how nature works. They were the first to realize how essential “entangled” particles are to understand quantum mechanics. This would become super-important and lead to odd technologies such as quantum teleportation which I’ll tell you about in a moment.

Entangled particles share some property, but you only know that property for both particles together. It’s not determined for the individual particles. You may know for example that the spin of two particles must add up to zero even though you don’t know which particle has which spin. But if you measure one of the particles, quantum mechanics says that the spin of the other particle is suddenly determined. Regardless of how far away it is. This is what Einstein called “spooky action at a distance” and it’s what he, together with Podolsky and Rosen, tried to argue can’t possibly be real.

But Einstein or not, physicists didn’t pay much attention to the EPR paper. Have a look at this graph which shows the number of citations that the paper got over the years. There’s basically nothing until the mid 1960s. What happened in the 1960s? That’s when John Bell got on the case.

Bell was a particle physicist who worked at CERN. The EPR paper had got him thinking about whether a theory with hidden variables can always give the same results as quantum mechanics. The answer he arrived at was “no”. Given certain assumptions, any hidden variable theory will obey an inequality, now called “Bell’s inequality”, that quantum mechanics does not have to fulfil.
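To get a rough feeling for what such an inequality says, here’s a small sketch using the CHSH variant of Bell’s inequality (a later, commonly tested form). For two entangled spins measured along directions at angles a and b, any local hidden-variable theory keeps a particular combination of correlations at or below 2, while the quantum prediction for the singlet state reaches 2√2. The angles below are the standard choice that maximizes the quantum value.

    import numpy as np

    def E(a, b):
        # Quantum prediction for the spin correlation of a singlet pair
        # measured along directions at angles a and b
        return -np.cos(a - b)

    # A standard choice of measurement angles for the CHSH combination
    a1, a2, b1, b2 = 0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S))   # about 2.83 = 2*sqrt(2); local hidden variables can't exceed 2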

Great. But that was just maths. The question was now, can we make a measurement in which quantum mechanics will actually violate Bell’s inequality and prove that hidden variables are wrong? Or will the measurements always remain compatible with a hidden variable explanation, thereby ruling out quantum mechanics?

The first experiment to find out was done in 1972 by Stuart Freedman and John Clauser at the University of California at Berkeley. They found that Bell’s inequality was indeed violated and the predictions of quantum theory were confirmed. For a while this result remained somewhat controversial because it didn’t have a huge statistical significance and it left a couple of “loopholes” by which you could make hidden variables compatible with observations. For example if one detector had time to influence the other, then you wouldn’t need any “spooky action” to explain correlations in the measurement outcomes.

But in the late 1970s physicists found out how to generate and detect single photons, the quanta of light. This made things much easier and beginning in the 1980s a number of experiments, notably those by Alain Aspect and his group, closed the remaining loopholes and improved the statistical significance.

For most physicists, that settled the case: Einstein was wrong. Hidden variables can’t work. There is one loophole in Bell’s theorem, called the “free will” loophole, that cannot be closed with this type of experiment. This is something I’ve been working on myself. I’ll tell you more about this some other time, but today let me just tell you what came out of all this.

These experiments did much more than just confirm quantum mechanics. By pushing the experimental limits, physicists understood how super-useful entangled particles are. They’re just something entirely different from anything they had dealt with before. And you can entangle not only two particles but actually arbitrarily many. And the more of them you entangle, the more pronounced the quantum effects become.

This has given rise to all kinds of applications, for example quantum cryptography. This is a method to safely transmit messages with quantum particles. The information is safe because quantum particles have this odd behavior that merely measuring their properties changes them. Because of this, if you use quantum particles to encrypt a message you can tell if someone intercepts it. I made a video about this specifically earlier, so check this out for more.

You can also use entangled particles to make more precise measurements, for example to study materials or measure gravitational or magnetic fields. This is called quantum metrology, I also have a video about this specifically.

But maybe the oddest thing to have come out of this is quantum teleportation. Quantum teleportation allows you to send quantum information with entangled states, even if you don’t yourself know the quantum information. It roughly works like this. First you generate an entangled state and you give one half to the sender, let’s call her Alice, and the other half to the receiver, Bob. Alice takes her quantum information, whatever that is; it’s just another quantum state. She mixes it together with her end of the entangled state, which entangles her information with the state that is entangled with Bob, and then she makes a measurement. The important thing is that this measurement only partly tells her what state the mixed system is in. So it’s still partly a quantum thing after the measurement.

But now remember, in quantum mechanics making a measurement on one end of an entangled state will suddenly determine the state on the other end. This means Alice has pushed the quantum information from her state into her end of the entangled state and then over to Bob. But how does Bob get this information back out? For this he needs to know the outcome of Alice’s measurement. If he doesn’t have that, his end of the entangled state isn’t useful. So, Alice lets Bob know about her measurement outcome. This tells him what operation he needs to do to recreate the quantum information that Alice wanted to send.

So, Alice put the information into her end of the entangled state, tied the two together, sent information about the tie to Bob, who can then untie it on his end. In that process, the information gets destroyed on Alice’s end, but Bob can exactly recreate it on his end. It does not break the speed of light limit because Alice has to send information about her measurement outcome, but it’s an entirely new method of information transfer.
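For those who like to see the bookkeeping, here is a minimal sketch of the standard protocol as a state-vector calculation. The amplitudes of Alice’s qubit and the particular measurement outcome below are chosen arbitrarily for illustration.

    import numpy as np

    I = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.array([[1, 0], [0, -1]])

    # Alice's unknown qubit (q0) and a Bell pair shared by Alice (q1) and Bob (q2)
    a, b = 0.6, 0.8                                   # any normalized amplitudes
    psi = np.array([a, b], dtype=complex)
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(psi, bell)                        # 8 amplitudes, bit order q0 q1 q2

    # Alice mixes her qubit with her half of the pair: CNOT(q0 -> q1), then H on q0
    CNOT01 = np.zeros((8, 8))
    for i in range(8):
        q0, q1, q2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
        CNOT01[(q0 << 2) | ((q1 ^ q0) << 1) | q2, i] = 1
    state = np.kron(np.kron(H, I), I) @ (CNOT01 @ state)

    # Alice measures q0 and q1; here we pick one possible outcome and project onto it
    m0, m1 = 1, 0
    keep = [i for i in range(8) if ((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1]
    bob = state[keep]
    bob = bob / np.linalg.norm(bob)                   # Bob's qubit before corrections

    # Bob "unties" his end using Alice's two classical bits
    if m1:
        bob = X @ bob
    if m0:
        bob = Z @ bob
    print(bob)                                        # recovers (a, b) up to a global phase

Note that without the two classical bits m0 and m1, Bob’s qubit is useless to him, which is why nothing here travels faster than light.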

Quantum teleportation was successfully done first in 1997 by the groups of Sandu Popescu and Anton Zeilinger. By now they do it IN SPACE… I’m not kidding. Look at the citations to the EPR paper again. They’re higher now than ever before.

Quantum technologies have a lot of potential that we’re only now beginning to explore. And this isn’t the only reason this research matters. It also matters because it’s pushing the boundaries of our knowledge. It’s an avenue to discovering fundamentally new properties of nature. Because maybe Einstein was right after all, and quantum mechanics isn’t the last word.

Today research on quantum mechanics is developing so rapidly it’s impossible to keep up. There’s quantum information, quantum optics, quantum computation, quantum cryptography, quantum simulations, quantum metrology, quantum everything. It’s even brought the big philosophical questions about the foundations of quantum mechanics back on the table.

I think a Nobel prize for the second quantum revolution is overdue. The people whose names are most associated with it are Anton Zeilinger, John Clauser, and Alain Aspect. They’ve been on the list for a Nobel Prize for quite some while and I hope that this year they’ll get lucky.

Saturday, September 04, 2021

New Evidence against the Standard Model of Cosmology

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Physicists believe they understand quite well how the universe works on large scales. There’s dark matter and there’s dark energy, and there’s the expansion of the universe that allows matter to cool and clump and form galaxies. The key assumption of this model for the universe is the cosmological principle, according to which the universe is approximately the same everywhere. But more and more observations show that the universe just isn’t the same everywhere. What are those observations? Why are they a problem? And what does it mean? That’s what we’ll talk about today.

Let’s begin with the cosmological principle, the idea that the universe looks the same everywhere. Well. Of course the universe does not look the same everywhere. There’s more matter under your feet than above your head and more matter in the Milky way than in intergalactic space, and so on. Physicists have noticed that too, so the cosmological principle more precisely says that matter in the universe is equally distributed when you average over sufficiently large distances.

To see what this means, forget about matter for a moment and suppose you have a row of detectors that measure, say, temperature. Each detector gives you a somewhat different temperature, but you can average over those detectors by taking a few of them at a time, let’s say 5, calculating the average value from the readings of those five detectors, and replacing the values of the individual detectors with that average value. You can then ask how far away this averaged distribution is from one that’s the same everywhere. In this example it’s pretty close.

But suppose you have a different distribution, for example this one. If you average over sets of 5 detectors again, the result still does not look the same everywhere. Now, if you average over all detectors, then of course the average is the same everywhere. So if you want to know how close a distribution is to being uniform, you average it over increasingly large distances and ask from what distance on it’s very similar to just being the same everywhere.
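As a toy version of this averaging procedure, here’s a short sketch; the “detector readings” are just random numbers chosen for illustration.

    import numpy as np

    def coarse_grain(values, block):
        # Average consecutive blocks of detectors and broadcast the mean back
        values = np.asarray(values, dtype=float)
        n = len(values) // block * block              # drop a possible remainder
        means = values[:n].reshape(-1, block).mean(axis=1)
        return np.repeat(means, block)

    rng = np.random.default_rng(1)
    readings = 20 + rng.normal(0, 0.5, size=100)      # 100 noisy "temperature" readings
    for block in (5, 25, 100):
        smoothed = coarse_grain(readings, block)
        print(block, smoothed.max() - smoothed.min()) # spread shrinks as the block grows

The larger the averaging blocks, the smaller the remaining spread, which is exactly the question of from what scale on a distribution looks uniform.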

In cosmology we don’t want to average over temperatures, but we want to average over the density of matter. On short scales, which for cosmologists is something like the size of the milky way, matter clearly is not uniformly distributed. If we average over the whole universe, then the average is uniform, but that’s uninteresting. What we want to know is, if we average over increasingly large distances, at what distance does the distribution of matter become uniform to good accuracy?

Yes, good question. One can calculate this distance using the concordance model, which is the currently accepted standard model of cosmology. It’s also often called ΛCDM, where Λ is the cosmological constant and CDM stands for cold dark matter. The distance at which the cosmological principle should be a good approximation to the real distribution of matter was calculated from the concordance model in a 2010 paper by Hunt and Sarkar.

They found that the deviations from a uniform distribution fall below one part in a hundred from an averaging distance of about 200-300 Mpc on. 300 Megaparsecs are about 1 billion light years. And just to give you a sense of scale, our distance to the closest large galaxy, Andromeda, is about two and a half million light years. A billion light years is huge. But from that distance on at the latest, the cosmological principle should be fulfilled to good accuracy – if the concordance model is correct.
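Just to check that conversion: one parsec is about 3.26 light years, so

    PC_IN_LY = 3.26                  # one parsec in light years (approximate)
    print(300e6 * PC_IN_LY / 1e9)    # 300 Mpc is about 0.98 billion light years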

One problem with the cosmological principle is that astrophysicists have on occasion assumed it is valid already on shorter distances, down to about 100 Megaparsec. This is an unjustified assumption, but it has for example entered the analysis of supernovae data from which the existence of dark energy was inferred. And yes, that’s what the Nobel Prize in physics was awarded for in 2011.

Two years ago, I told you about a paper by Subir Sarkar and his colleagues, that showed if one analyses the supernovae data correctly, without assuming that the cosmological principle holds on too short distances, then the evidence for dark energy disappears. That paper has been almost entirely ignored by other scientists. Check out my earlier video for more about that.

Today I want to tell you about another problem with the cosmological principle. As I said, one can calculate the scale from which on it should be valid from the standard model of cosmology. Beyond that scale, the universe should look pretty much the same everywhere. This means in particular there shouldn’t be any clumps of matter on scales larger than about a billion light years. But. Astrophysicists keep on finding those.

Already in 1991 they found the Clowes-Campusano quasar group, a collection of 34 quasars about 9.5 billion light years away from us that extends over 2 billion light years, clearly too large to be compatible with the prediction from the concordance model.

Since 2003 astrophysicists have known about the “Great Wall”, a collection of galaxies about a billion light years away from us that extends over 1.5 billion light years. That, too, is larger than it should be.

Then there’s the “Huge Quasar Group”, which is… huge. It spans a whopping 4 billion light years. And just in July, Alexia Lopez discovered the “Giant Arc”, a collection of galaxies, galaxy clusters, gas and dust that spans 3 billion light years.

Theoretically, these structures shouldn’t exist. It can happen that such clumps appear coincidentally in the concordance model. That’s because this model uses an initial distribution of matter in the early universe with random fluctuations. So it could happen you end up with a big clump somewhere just by chance. But you can calculate the probability for that to happen. The Giant Arc alone has a probability of less than one in a hundred-thousand to have come about by chance. And that doesn’t factor in all the other big structures.

What does it mean? It means the evidence is mounting that the cosmological principle is a bad assumption to develop a model for the entire universe and it probably has to go. It increasingly looks like we live in a region in the universe that happens to have a significantly lower density than the average in the visible universe. This area of underdensity which we live in has been called the “local hole”, and it has a diameter of at least 600 million light years. This is the finding of a recent paper by a group of astrophysicists from Durham in the UK.

They also point out that if we live in a local hole then this means that the local value of the Hubble rate must be corrected down. This would be good news because currently measurements for the local value of the Hubble rate are in conflict with the value from the early universe. And that discrepancy has been one of the biggest headaches in cosmology in the past years. Giving up the cosmological principle could solve that problem.

However, the finding in that paper from the Durham group is only a mild tension with the concordance model, at about 3 σ, which is not highly statistically significant. But Sarkar and his group had another paper recently in which they do a consistency check on the concordance model and find a conflict at 4.9 σ, that is, less than a one-in-a-million chance for it to be a coincidence.

This works as follows. If we measure the temperature of the cosmic microwave background, it appears slightly hotter in the direction we are moving towards and slightly cooler in the opposite direction. This gives rise to the so-called CMB dipole. You can measure this dipole. You can also measure the dipole by inferring our motion from observations of quasars. If the concordance model was right, the direction and magnitude of the two dipoles should be the same. But they are not. You see this in this figure from Sarkar’s paper. The star is the location of the CMB dipole, the triangle that of the quasar dipole. In this figure you see how far away from the CMB expectation the quasar result is.

These recent developments make me think that in the next ten years or so, we will see a major paradigm shift in cosmology, where the current standard model will be replaced with another one. Just what the new model will be, and if it will still have dark energy, I don’t know. But I’ll keep you up to date. So don’t forget to subscribe, see you next week.

Saturday, August 28, 2021

Why is quantum mechanics weird? The bomb experiment.

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



I have done quite a few videos in which I have tried to demystify quantum mechanics. Because many things people say are weird about quantum mechanics aren’t weird. Like superpositions or entanglement. Not weird. No, really, they’re not weird, just a little unintuitive. But now I feel like I may accidentally have left you with the impression that quantum mechanics is not weird at all. But of course it is. And that’s what we’ll talk about today.

Before we talk about quantum mechanics, big thanks to our tier four supporters on patreon. Your help is greatly appreciated. And you too can help us, go check out our page on Patreon or click on the join button, right below this video. Now let’s talk about quantum weirdness.

First I’ll remind you what’s not weird about quantum mechanics, though you may have been told it is. In quantum mechanics we describe everything by a wave-function, usually denoted with the Greek letter Psi. The wave-function itself cannot be observed. We just use it to calculate the probabilities of measurement outcomes, for example the probability that the particle hits a screen at a particular place. Some people say it’s weird that you can’t observe the wave-function. But I don’t see anything weird with that. You see, the wave-function describes probabilities. It’s like the average person. You never see The Average Person. It’s a math thing that we use to describe probabilities. The wave-function is like that.

Another thing that people seem to think is weird is that in quantum mechanics, the outcome of a measurement is not determined. Calculating the probability for the outcome is the best you can do. That is maybe somewhat disappointing, but there is nothing intrinsically weird about it. People just think it’s weird because they have beliefs about how nature should be.

Then there are the dead-and-alive cats. A lot of people seem to think those are weird. I would agree. But of course we don’t see dead and alive cats.

But then what’s with particles that are in two places at the same time, or have two different spins? We do see those, right? Well, no. We actually don’t see them. When we “see” a particle, when we measure it, it does have definite properties, not two “at the same time”.

So what do physicists mean when they say that particles “can be at two places at the same time”? It means they have a certain mathematical expression, called a superposition, from which they calculate the probability of what they observe. A superposition is just a sum of wavefunctions for particles that are in two definite states. Yes, it’s just a sum. The math is easy, it’s just hard to interpret. What does it mean that you have a sum of a particle that’s here and a particle that’s there? Well, I don’t know. I don’t even know what could possibly answer this question. But I don’t need to know what it means to do a calculation with it. And I don’t think there’s anything weird with superpositions. They’re just sums. You add things. Like, you know, two plus two.

Okay, so superpositions, or particles which “are in two places” are just a flowery way to talk about sums. But what’s with entanglement? That’s nonlocal, right? And isn’t that weird?

Well, no. Entanglement is a type of correlation. Nonlocal correlations are all over the place; they’re not specific to quantum mechanics, and there is nothing weird about nonlocal correlations because they are locally created. See, if I rip a photo in two and ship one half to New York, then the two parts of the photo are non-locally correlated. They share information. But that correlation was created locally, so nothing weird about that.

Entanglement is also locally created. Suppose I have a particle with a conserved quantity that has value zero. It decays into two particles. Now all I know is that the shares of the conserved quantity for both particles have to add to zero. So if I call the one share x, then the other share is minus x, but I don’t know what x is. This means these particles are now entangled. They are non-locally correlated, but the correlation was locally created.

Now, entanglement is in a quantifiable sense a stronger correlation than what you can do with non-quantum particles, and that’s cool and is what makes quantum computers run, but it’s just a property of how quantum states combine. Entanglement is useful, but not weird. And it’s also not what Einstein meant by “spooky action at a distance”, check out my earlier video for more about that.

So then what is weird about quantum mechanics? What’s weird about quantum mechanics is best illustrated by the bomb experiment.

The bomb experiment was first proposed by Elitzur and Vaidman in 1993, and goes as follows.

Suppose you have a bomb that can be triggered by a single quantum of light. The bomb could either be live or a dud, you don’t know. If it’s a dud, then the photon doesn’t do anything to it, if it’s live, boom. Can you find out whether the bomb is live without blowing it up? Seems impossible. But quantum mechanics makes it possible. That’s where things get really weird.

Here’s what you do. You take a source that can produce single photons. Then you send those photons through a beam splitter. The beam splitter creates a superposition, so, a sum of the two possible paths that the photon could take. To make things simpler, I’ll assume that the weights of the two paths are the same, so it’s 50/50.

Along each possible path there’s a mirror, so that the paths meet again. And where they meet there’s another beam splitter. If nothing else happens, that second beam splitter will just reverse the effect of the first, so the photon continues in the same direction as before. The reason is that the two paths of the photon interfere like sound waves interfere. In one direction they interfere destructively, so they cancel each other out. In the other direction they add up to 100 percent. We place a detector where we expect the photon to go, and call that detector A. And because we’ll need it later, we put another detector up here, where the destructive interference is, and call that detector B. In this setup, no photon ever goes into detector B.

But now, now we place the bomb into one of those paths. What happens?

If the bomb’s a dud, that’s easy. In this case nothing happens. The photon splits, takes both paths, recombines, and goes into detector A, as previously.

What happens if the bomb’s live? If the bomb’s live, it acts like a detector. So there’s a 50 percent chance that it goes boom because you detected the photon in the lower path. So far, so clear. But here’s the interesting thing.

If the bomb is live but doesn’t go boom, you know the photon’s in the upper path. And now there’s nothing coming from the lower path to interfere with.

So then the second beam splitter has nothing to recombine, and the same thing happens there as at the first beam splitter: the photon takes both output paths with equal probability. It is then detected either at A or B.

The probability for this is 25% each because it’s half of the half of cases when the photon took the upper path.

In summary, if the bomb’s live, it blows up 50% of the time, 25% of the time the photon goes into detector A, 25% of the time it goes into detector B. If the photon is detected at A, you don’t know if the bomb’s live or a dud because that’s the same result. But, here’s the thing, if the photon goes to detector B, that can only happen if the bomb is live AND it didn’t explode.
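These numbers are easy to check with a small amplitude calculation. Here is a sketch that treats the interferometer as two modes, the upper and the lower path, with a standard 50/50 beam splitter; a live bomb acts as a measurement on the lower path. Which output port we label “A” is just a convention.

    import numpy as np

    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)      # 50/50 beam splitter on (upper, lower)

    photon_in = np.array([1, 0], dtype=complex)    # photon enters on the upper path

    # Dud: nothing measures the paths, so the two beam splitters act one after the other
    out_dud = BS @ (BS @ photon_in)
    print(np.abs(out_dud) ** 2)                # [0, 1]: every photon ends up at one detector (A)

    # Live bomb: after the first beam splitter it "measures" the lower path
    mid = BS @ photon_in
    p_boom = np.abs(mid[1]) ** 2               # 0.5: the bomb absorbs the photon and explodes
    survived = np.array([mid[0], 0])           # no boom: the photon is on the upper path only
    survived = survived / np.linalg.norm(survived)
    p_detect = (1 - p_boom) * np.abs(BS @ survived) ** 2
    print(p_boom, p_detect)                    # 0.5 boom, then 0.25 at A and 0.25 at B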

That means, quantum mechanics tells you something about the path that the photon didn’t take. That’s the sense in which quantum mechanics is truly non-local and weird. Not because you can’t observe the wave-function. And not because of entanglement. But because it can tell you something about events that didn’t happen.

You may think that this can’t possibly be right, but it is. This experiment has actually been done, not with bombs, but with detectors, and the result is exactly as quantum mechanics predicts.

Saturday, August 21, 2021

Everything vibrates. It really does.

[This is a transcript of the video embedded below.]



I’ve noticed that “everything vibrates” is quite a popular idea among alternative medicine gurus and holistic healers and so on. As with most of the scientific ideas that pseudoscientists borrow, there’s a grain of truth to it. So in just which way is it true that everything vibrates? That’s what we’ll talk about today.

Today’s video was inspired by these two lovely ladies.

    We don't have the vibrational frequency to host that virus.  
    And I taught her that. 
    So if you don't have that vibrational frequency right here you're not going to get it. 
    We don't have the vibrational frequency to get COVID? 
    Correct. Do you know that everything in this universe vibrates. And is alive. There is life with that. That's what I'm talking about. I don't put life into COVID. I'm not going to wear a mask. 
    I'm not going to wear a mask either. I never wear a mask. Ever.

Now. There’s so much wrong with that, it’s hard to decide where to even begin. I guess the first thing to talk about is what we mean by vibration. As we’ve already seen a few times, definitions in science aren’t remotely as clear-cut as you might think, but roughly what we mean by vibration is a periodic deformation in a medium.

The typical example is a gong. So, some kind of metal that can slightly deform but has a restoring force. If you hit it, it’ll vibrate until air resistance damps the motion. Another example is that the sound waves created by the gong will make your eardrum vibrate. The earth itself also vibrates, because it’s not perfectly rigid and small earthquakes constantly make it ring. Indeed, the earth has what’s called a “breathing mode”, an isotropic expansion and contraction. So the radius of the earth expands and shrinks regularly with a period of about 20.5 minutes.

But. We also use the word vibration for a number of more general periodic motions, for example the vibration of your phone that’s caused by a small electric motor, or vibrations in vehicles that are caused by resonance.

What all these vibrations have in common is that they are also oscillations, where an oscillation is just any kind of periodic behavior. If you ask the internet, “vibrations” are a specific type of “mechanical” oscillation. But that doesn’t make sense because material properties, like those of the gong, are consequences of the atomic energy levels of electrons, so that’s electromagnetism and quantum mechanics, not mechanics. And we also talk of vibrational modes of molecules. Just where to draw the line between vibration and oscillation is somewhat unclear. You wouldn’t say electromagnetic waves vibrate, you’d say they oscillate, but just why, I don’t know.

For this reason, I think it’s better to talk about oscillations than vibrations, because it’s clearer what it means. An oscillation is a regularly recurring change. In a water-wave for example, the height of the water oscillates around a mean value. Swings oscillate. Hormone levels oscillate. Traffic flow oscillates, and humans, yeah, humans can also oscillate.

With this hopefully transparent shift from the vague term vibration to oscillation, I’ll now try to convince you that everything oscillates. The reason is that everything is made of particles, and according to quantum mechanics, particles are also waves, and waves, well, oscillate.

Indeed, every massive particle has a wave-length, the so-called Compton wave-length, that’s inversely proportional to the mass of the particle: λ = h/(mc). So here, lambda is the Compton wave-length, h is Planck’s constant, and c is the speed of light. The frequency of this oscillation is the speed of light divided by the wave-length. But just what is it that oscillates? Well, it’s this thing that we call the wave-function of the particle, usually denoted Psi. I have talked about Psi a lot in my earlier videos. The brief summary is that physicists don’t agree on what it is, but they agree that Psi gives us the probability to observe the particle in one place or another, or with one velocity or another, or with one spin or another, and so on.

For an electron, the wave-function oscillates about 10^20 times per second. This means the particle carries its own internal clock with it. And all particles do this. The heavier ones, like protons or atoms, oscillate even faster than electrons because the frequency is proportional to the mass.
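For the electron, you can check that number directly from the formula above; here’s a quick sketch using scipy’s physical constants:

    from scipy import constants as const

    lambda_C = const.h / (const.m_e * const.c)   # Compton wavelength of the electron, ~2.4e-12 m
    f = const.c / lambda_C                        # = m_e c^2 / h
    print(f"{f:.2e} Hz")                          # ~1.2e20 oscillations per second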

Neutrinos, which are lighter than electrons, don’t just oscillate by themselves, they actually oscillate into each other. This is called neutrino-mixing. There are three different types of neutrinos, and as they travel, the fraction between them periodically changes. If you start out with neutrinos of one particular type, after some while you have all three types of them. This can only happen if neutrinos have masses, so the neutrino oscillations tell us neutrinos are not massless, and a Nobel Prize was awarded for this discovery in 2015.

Photons, the particles that make up light, are, for all we know, massless. This means they do not have an internal clock in this sense, but they also oscillate, it’s just that their oscillation frequency depends on their energy.

Okay, so we have seen that all particles oscillate constantly, thanks to quantum mechanics. But, you may say, particles alone don’t make up the universe, what about space and time? Well, unless you’ve been living under a rock you probably know that space-time can wiggle; that’s the so-called gravitational waves, which were first detected in 2015 by the LIGO gravitational wave interferometer.

The gravitational waves that we can presently measure come from events in which space-time gets particularly strongly bent and curved, for example black holes colliding or a black hole eating up a neutron star, or something like that. But it’s not that this is the only thing that makes space-time wiggle. It’s just that normally the wiggles are way, way too small to measure. Strictly speaking though, every time you move, you make gravitational waves. Tiny ripples of space-time. So, yes, space-time also vibrates. Really, everything vibrates, kind of, all the time. It’s actually correct. But it doesn’t help against COVID.