Saturday, September 18, 2021

The physics anomaly no one talks about: What’s up with those neutrinos?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



In the past months we’ve talked a lot about topics that receive more attention than they deserve. Today I want to talk about a topic that doesn’t receive the attention it deserves. That’s a 20-year-old anomaly in neutrino physics which has been above the discovery threshold since 2018, but chances are you’ve never even heard of it. So what are neutrinos, what’s going on with them, and what does it mean? That’s what we’ll talk about today.

I really don’t understand why some science results make headlines and others don’t. For example, we’ve seen loads of headlines about the anomaly in the measurement of the muon g-2 and the lepton anomaly at the Large Hadron Collider. In both of these cases the observations don’t agree with the prediction but neither is statistically significant enough to count as a new discovery, and in both cases there are reasons to doubt it’s actually new physics.

But in 2018, the MiniBooNE neutrino experiment at Fermilab confirmed an earlier anomaly from an experiment called LSND at the Los Alamos National Laboratory. The statistical significance of that anomaly is now at 6 σ. And in this case it’s really difficult to find an explanation that does not involve new physics. So why didn’t this make big headlines? I don’t know. Maybe people just don’t like neutrinos?

But there are lots of reasons to like neutrinos. Neutrinos are elementary particles in the standard model of particle physics. That they are elementary means they aren’t made of anything else, at least not as far as we currently know. In the standard model, we have three neutrinos. Each of them is a partner-particle of a charged lepton. The charged leptons are the electron, muon, and tau. So we have an electron-neutrino, a muon-neutrino, and a tau-neutrino. Physicists call the type of a neutrino its “flavor”. The standard model neutrinos each have a flavor, have spin ½, and no electric charge.

So far, so boring. But neutrinos are decidedly weird for a number of reasons. First, they are the only particles that interact only with the weak nuclear force. All the other particles we know either interact with the electromagnetic force or the strong nuclear force or both. And the weak nuclear force is weak. Which is why neutrinos rarely interact with anything at all. They mostly just pass through matter without leaving a trace. This is why they are often called “ghostly”. While you’ve listened to this sentence, about 10^15 neutrinos have passed through you.
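
Just as a sanity check on that number, here is a rough back-of-envelope estimate. The flux is the well-measured flux of solar neutrinos at Earth; the body cross-section and the length of the sentence are loose assumptions of mine.

    flux = 6e10    # solar neutrinos per cm^2 per second at Earth (well measured)
    area = 5e3     # cm^2, very rough cross-section of a human body (assumption)
    seconds = 5    # roughly how long the sentence takes to say (assumption)
    print(f"{flux * area * seconds:.0e} neutrinos")   # ~2e15, so about 10^15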

This isn’t the only reason neutrinos are weird. What’s even weirder is that the three types of neutrino-flavors mix into each other. That means, if you start with, say, only electron-neutrinos, they’ll convert into muon-neutrinos as they travel. And then they’ll convert back into electron neutrinos. So, depending on what distance from a source you make a measurement, you’ll get more electron neutrinos or more muon neutrinos. Crazy! But it’s true. We have a lot of evidence that this actually happens and indeed a Nobel Prize was awarded for this in 2015.

Now, to be fair, neutrino-mixing in and by itself isn’t all that weird. Indeed, quarks also do this mixing, it’s just that they don’t mix as much. That neutrinos mix is weird because neutrinos can only mix if they have masses. But we don’t know how they get masses.

You see, the way that other elementary particles get masses is that they couple to the Higgs-boson. But the way this works is that we need a left-handed and a right-handed version of the particle, and the Higgs needs to couple to both of them together. That works for all particles except the neutrinos. Because no one has ever seen a right-handed neutrino, we only ever measure left-handed ones. So, the neutrinos mix, which means they must have masses, but we don’t know how they get these masses.

There are two ways to fix this problem. Either the right-handed neutrinos exist but are very heavy, so we haven’t seen them yet because creating them would take a lot of energy. Or the neutrinos are different from all the other spin ½ particles in that their left- and right-handed versions are just the same. This is called a Majorana particle. But either way, something is missing from our understanding of neutrinos.

And the weirdest bit is the anomaly that I mentioned. As I said we have three flavors of neutrinos and these mix into each other as they travel. This has been confirmed by a large number of observations on neutrinos from different sources. There are natural sources like the sun, and neutrinos that are created in the upper atmosphere when cosmic rays hit. And then there are neutrinos from manmade sources, particle accelerators and nuclear power plants. In all of these cases, you know how many neutrinos are created of which type at what energy. And then after some distance you measure them and see what you get.

What physicists then do is that they try to find parameters for the neutrino mixing that fit all the data. This is called a global fit and you can look up the current status online. The parameters you need to fit are the differences in the squared masses, which determine the wavelength of the mixing, and the mixing angles, which determine how much the neutrinos mix.
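
To give you a sense of what enters such a fit, here is a minimal sketch of the standard two-flavor oscillation formula in Python. The parameter values below are purely illustrative, not the actual global-fit results.

    import numpy as np

    def survival_probability(L_km, E_GeV, delta_m2_eV2, theta):
        # Two-flavor survival probability: P = 1 - sin^2(2θ) sin^2(1.27 Δm² L / E),
        # with Δm² in eV², L in km, E in GeV; the 1.27 collects the unit factors.
        return 1 - np.sin(2 * theta)**2 * np.sin(1.27 * delta_m2_eV2 * L_km / E_GeV)**2

    # How the probability oscillates with distance at fixed energy:
    for L in [0, 200, 400, 600, 800]:
        print(L, round(survival_probability(L, E_GeV=1.0, delta_m2_eV2=2.5e-3, theta=0.7), 3))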

By 2005 or so physicists had pretty much pinned down all the parameters. Except. There was one experiment which didn’t make sense. That was the Liquid Scintillator Neutrino Detector, LSND for short, which ran from 1993 to 1998. The LSND data just wouldn’t fit together with all the other data. It’s normally just excluded from the global fit.

In this figure, you see the LSND results from back then. The red and green is what you expect. The dots with the crosses are the data. The blue is the fit to the data. This excess has a statistical significance of 3.8 σ. As a quick reminder, 1 σ is one standard deviation. The more standard deviations the data is away from the expectation, the less likely the deviation is to have come about coincidentally. So, the more σ, the more impressive the anomaly. In particle physics, the discovery threshold is 5 σ. The 3.8 σ of the LSND anomaly wasn’t enough to get excited, but too much to just ignore.
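
In case you want to translate those σ’s into probabilities yourself, here is a small sketch using scipy; the one-sided tail probability is the usual convention in particle physics.

    from scipy.stats import norm

    # Probability of the data fluctuating up this far by chance:
    for sigma in [3.8, 4.7, 5.0, 6.0]:
        print(f"{sigma} σ  ->  p ≈ {norm.sf(sigma):.1e}")   # sf = 1 - cdf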

15 years ago, I worked on neutrino mixing for a while, and my impression back then was that most physicists thought the LSND data was just wrong and wouldn’t be reproduced. That’s because this experiment was a little different from the others, for several reasons. It detected only anti-neutrinos created by a particle accelerator, and it had a very short baseline of only 30 meters, shorter than all the other experiments.

Still, a new experiment was commissioned to check on this. This was the MiniBooNE experiment at Fermilab. That’s the Mini Booster Neutrino Experiment, and it’s been running since 2003. As you can tell, by then the trend of cooking up funky acronyms had taken hold in physics. MiniBooNE is basically a big tank full of mineral oil surrounded by photo-detectors, which you see in this photo. The tank waits for neutrinos from the nearby Booster accelerator, which you see in this photo.

For the first data analysis in 2007, MiniBooNE didn’t have a lot of data and the result seemed to disagree with LSND. This was what everyone expected. Look at this headline from 2007, for example. But then in 2018, with more data, MiniBooNE confirmed the LSND result. Yes, you heard that right. They confirmed it with 4.7 σ, and the combined significance is 6 σ.

What does that mean? You can’t fit this observation by tweaking the other neutrino mixing parameters. There just aren’t sufficiently many parameters to tweak. The observation is just incompatible with the standard model. So you have to introduce something new. Some ideas that physicists have put forward are symmetry violations, or new neutrino-interactions that aren’t in the standard model. There is also of course still the possibility that physicists misunderstand something about the experiment itself, but given that this is an independent reproduction of an earlier experiment, I find this unlikely. The most popular idea, which is also the easiest, is what’s called “sterile neutrinos”.

A sterile neutrino is one that doesn’t have a lepton associated with it; it doesn’t have a flavor. So we wouldn’t have seen it produced in particle collisions. Sterile neutrinos can however still mix into the other neutrinos. Indeed, that would be the only way sterile neutrinos could interact with the standard model particles, and so the only way we can measure them. One sterile neutrino alone doesn’t explain the MiniBooNE/LSND data though. You need at least two, or something else in addition. Interestingly enough, sterile neutrinos could also make up dark matter.

When will we find out? Indeed, seeing that the result is from 2018, why don’t we know already? Well, it’s because neutrinos… interact very rarely. This means it takes a really long time to detect sufficiently many of them to come to any conclusions.

Just to give you an idea, the MiniBooNE experiment collected data from 2002 to 2017. During that time they saw an excess of about 500 events. 500 events in 15 years. So I think we’re onto something here. But glaciers now move faster than particle physics.

This isn’t a mystery that will resolve quickly but I’ll keep you up to date, so don’t forget to subscribe.

Saturday, September 11, 2021

The Second Quantum Revolution

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Quantum mechanics is more than a hundred years old. That sounds like it’s the stuff of dusty textbooks, but research on quantum mechanics is more active now than a century ago. That’s because many rather elementary facts about quantum mechanics couldn’t be experimentally settled until the 1980s. But then, by the year 2000 or so, experimental progress had totally transformed the field. Today it’s developing faster than ever. How did this happen, why does it matter, and what’s quantum teleportation? That’s what we’ll talk about today.

Albert Einstein was famously skeptical of quantum mechanics. Hi Albert. He thought quantum mechanics couldn’t possibly be a complete description of nature and he argued that something was missing from it.

You see, in quantum mechanics we can’t predict the outcome of a measurement. We can only predict the probability of getting a particular outcome. Without quantum mechanics, I could say if I shoot my particle cannon, then the particles will land right there. With quantum mechanics I can only say they’ll land right there 50% of the time but have a small chance to be everywhere, really.

Einstein didn’t like this at all. He thought that actually the outcome of a measurement is determined, it’s just that it’s determined by “hidden variables” which we don’t have in quantum mechanics. If that was so, the outcome would look random just because we didn’t have enough information to predict it.

To make this point, in 1935 Einstein wrote a famous paper with Boris Podolsky and Nathan Rosen, now called the EPR paper. In this paper they argued that quantum mechanics is incomplete, it can’t be how nature works. They were the first to realize how essential “entangled” particles are to understand quantum mechanics. This would become super-important and lead to odd technologies such as quantum teleportation which I’ll tell you about in a moment.

Entangled particles share some property, but you only know that property for both particles together. It’s not determined for the individual particles. You may know for example that the spin of two particles must add up to zero even though you don’t know which particle has which spin. But if you measure one of the particles, quantum mechanics says that the spin of the other particle is suddenly determined. Regardless of how far away it is. This is what Einstein called “spooky action at a distance” and it’s what he, together with Podolsky and Rosen, tried to argue can’t possibly be real.

But Einstein or not, physicists didn’t pay much attention to the EPR paper. Have a look at this graph which shows the number of citations that the paper got over the years. There’s basically nothing until the mid 1960s. What happened in the 1960s? That’s when John Bell got on the case.

Bell was a particle physicist who worked at CERN. The EPR paper had got him to think about whether a theory with hidden variables can always give the same results as quantum mechanics. The answer he arrived at was “no”. Given certain assumptions, any hidden variable theory will obey an inequality, now called “Bell’s inequality” that quantum mechanics does not have to fulfil.
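
To make Bell’s result concrete, here is a minimal sketch of the CHSH variant of his inequality (a slightly later formulation), evaluated with the correlation quantum mechanics predicts for a spin singlet. The angle choices below are the textbook ones that maximize the violation.

    import numpy as np

    # Local hidden variables require |S| <= 2, where
    # S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
    # For a spin singlet, quantum mechanics predicts E(a,b) = -cos(a - b).
    def E(a, b):
        return -np.cos(a - b)

    a, a_prime = 0.0, np.pi / 2
    b, b_prime = np.pi / 4, 3 * np.pi / 4

    S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
    print(abs(S))   # 2*sqrt(2) ≈ 2.83, above the classical bound of 2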

Great. But that was just maths. The question was now, can we make a measurement in which quantum mechanics will actually violate Bell’s inequality and prove that hidden variables are wrong? Or will the measurements always remain compatible with a hidden variable explanation, thereby ruling out quantum mechanics?

The first experiment to find out was done in 1972 by Stuart Freedman and John Clauser at the University of California at Berkeley. They found that Bell’s inequality was indeed violated and the predictions of quantum theory were confirmed. For a while this result remained somewhat controversial because it didn’t have a huge statistical significance and it left a couple of “loopholes” by which you could make hidden variables compatible with observations. For example if one detector had time to influence the other, then you wouldn’t need any “spooky action” to explain correlations in the measurement outcomes.

But in the late 1970s physicists found out how to generate and detect single photons, the quanta of light. This made things much easier and beginning in the 1980s a number of experiments, notably those by Alain Aspect and his group, closed the remaining loopholes and improved the statistical significance.

For most physicists, that settled the case: Einstein was wrong. Hidden variables can’t work. There is one loophole in Bell’s theorem, called the “free will” loophole that cannot be closed with this type of experiment. This is something I’ve been working on myself. I’ll tell you more about this some other time but today let me just tell you what came out of all this.

These experiments did much more than just confirm quantum mechanics. By pushing the experimental limits, physicists understood how super-useful entangled particles are. They’re just something entirely different from anything they had dealt with before. And you can entangle not only two particles but actually arbitrarily many. And the more of them you entangle, the more pronounced the quantum effects become.

This has given rise to all kinds of applications, for example quantum cryptography. This is a method to safely submit messages with quantum particles. The information is safe because quantum particles have this odd behavior that if you measure just what their properties are, that changes them. Because of this, if you use quantum particles to encrypt a message you can tell if someone intercepts it. I made a video about this specifically earlier, so check this out for more.

You can also use entangled particles to make more precise measurements, for example to study materials or measure gravitational or magnetic fields. This is called quantum metrology, I also have a video about this specifically.

But the maybe oddest thing to have come out of this is quantum teleportation. Quantum teleportation allows you to send quantum information with entangled states, even if you don’t yourself know the quantum information. It roughly works like this. First you generate an entangled state and you give one half to the sender, let’s call her Alice, and the other half to the receiver, Bob. Alice takes her quantum information, whatever that is; it’s just another quantum state. She mixes it together with her end of the entangled state, which entangles her information with the state that is entangled with Bob, and then she makes a measurement. The important thing is that this measurement only partly tells her what state the mixed system is in. So it’s still partly a quantum thing after the measurement.

But now remember, in quantum mechanics making a measurement on one end of an entangled state will suddenly determine the state on the other end. This means Alice has pushed the quantum information from her state into her end of the entangled state and then over to Bob. But how does Bob get this information back out? For this he needs to know the outcome of Alice’s measurement. If he doesn’t have that, his end of the entangled state isn’t useful. So, Alice lets Bob know about her measurement outcome. This tells him what measurement he needs to do to recreate the quantum information that Alice wanted to send.

So, Alice put the information into her end of the entangled state, tied the two together, sent information about the tie to Bob, who can then untie it on his end. In that process, the information gets destroyed on Alice’s end, but Bob can exactly recreate it on his end. It does not break the speed of light limit because Alice has to send information about her measurement outcome, but it’s an entirely new method of information transfer.
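
Because this protocol sounds like magic, it can help to see the bookkeeping in numbers. Below is a minimal simulation with plain numpy state vectors; the circuit (CNOT plus Hadamard as Alice’s Bell measurement, X/Z corrections on Bob’s side) is the textbook version of the protocol, and the amplitudes 0.6 and 0.8i are just an example.

    import numpy as np

    rng = np.random.default_rng(7)

    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    def kron_all(mats):
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    def on(gate, qubit):
        # Embed a single-qubit gate in the three-qubit register
        mats = [I2, I2, I2]
        mats[qubit] = gate
        return kron_all(mats)

    # CNOT with qubit 0 as control, qubit 1 as target
    P0, P1 = np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)
    CNOT01 = kron_all([P0, I2, I2]) + kron_all([P1, X, I2])

    def measure(state, qubit):
        # Projective measurement of one qubit in the 0/1 basis
        bits = (np.arange(8) >> (2 - qubit)) & 1
        p1 = np.sum(np.abs(state[bits == 1])**2)
        outcome = int(rng.random() < p1)
        state = np.where(bits == outcome, state, 0)
        return outcome, state / np.linalg.norm(state)

    # Qubit 0: the state Alice wants to send (she needn't know alpha and beta)
    alpha, beta = 0.6, 0.8j
    # Qubits 1 and 2: the entangled pair, one half for Alice, one for Bob
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
    state = np.kron(np.array([alpha, beta]), bell)

    # Alice mixes her qubit with her half of the pair and measures both
    state = on(H, 0) @ (CNOT01 @ state)
    m0, state = measure(state, 0)
    m1, state = measure(state, 1)

    # Alice sends (m0, m1) classically; Bob applies the matching correction
    if m1: state = on(X, 2) @ state
    if m0: state = on(Z, 2) @ state

    # Bob's qubit is now exactly alpha|0> + beta|1>
    offset = 4 * m0 + 2 * m1
    print("Bob has:", state[offset], state[offset + 1])   # ≈ 0.6 and 0.8j

Whatever the measurement outcomes turn out to be, Bob ends up with the amplitudes Alice started with, while Alice’s original is destroyed, just as described above.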

Quantum teleportation was successfully done first in 1997 by the groups of Sandu Popescu and Anton Zeilinger. By now they do it IN SPACE… I’m not kidding. Look at the citations to the EPR paper again. They’re higher now than ever before.

Quantum technologies have a lot of potential that we’re only now beginning to explore. And this isn’t the only reason this research matters. It also matters because it’s pushing the boundaries of our knowledge. It’s an avenue to discovering fundamentally new properties of nature. Because maybe Einstein was right after all, and quantum mechanics isn’t the last word.

Today research on quantum mechanics is developing so rapidly it’s impossible to keep up. There’s quantum information, quantum optics, quantum computation, quantum cryptography, quantum simulations, quantum metrology, quantum everything. It’s even brought the big philosophical questions about the foundations of quantum mechanics back on the table.

I think a Nobel prize for the second quantum revolution is overdue. The people whose names are most associated with it are Anton Zeilinger, John Clauser, and Alain Aspect. They’ve been on the list for a Nobel Prize for quite a while and I hope that this year they’ll get lucky.

Saturday, September 04, 2021

New Evidence against the Standard Model of Cosmology

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Physicists believe they understand quite well how the universe works on large scales. There’s dark matter and there’s dark energy, and there’s the expansion of the universe that allows matter to cool and clump and form galaxies. The key assumption to this model for the universe is the cosmological principle, according to which the universe is approximately the same everywhere. But increasingly more observations show that the universe just isn’t the same everywhere. What are those observations? Why are they a problem? And what does it mean? That’s what we’ll talk about today.

Let’s begin with the cosmological principle, the idea that the universe looks the same everywhere. Well. Of course the universe does not look the same everywhere. There’s more matter under your feet than above your head and more matter in the Milky Way than in intergalactic space, and so on. Physicists have noticed that too, so the cosmological principle more precisely says that matter in the universe is equally distributed when you average over sufficiently large distances.

To see what this means, forget about matter for a moment and suppose you have a row of detectors and they measure, say, temperature. Each detector gives you a somewhat different temperature but you can average over those detectors by taking a few of them at a time, let’s say 5, calculate the average value from the reading of those five detectors, and replace the values of the individual detectors with their average value. You can then ask how far away this averaged distribution is from one that’s the same everywhere. In this example it’s pretty close.

But suppose you have a different distribution, for example this one. If you average over sets of 5 detectors again, the result still does not look the same everywhere. Now, if you average over all detectors, then of course the average is the same everywhere. So if you want to know how close a distribution is to being uniform, you average it over increasingly large distances and ask from what distance on it’s very similar to just being the same everywhere.
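
Here is a small numerical sketch of this averaging procedure; the two “detector rows” below, one with purely local noise and one with an added large-scale pattern, are made-up data for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Row 1: local random fluctuations around a mean of 20
    local_noise = 20 + rng.normal(0, 1, n)
    # Row 2: the same noise plus a slow large-scale pattern across the row
    structured = local_noise + 5 * np.sin(np.linspace(0, 4 * np.pi, n))

    def spread_after_averaging(values, window):
        # Average over blocks of `window` detectors, then return how much
        # the averaged values still differ from place to place
        blocks = values[: n - n % window].reshape(-1, window).mean(axis=1)
        return blocks.std()

    for w in [5, 25, 125]:
        print(w, spread_after_averaging(local_noise, w),
                 spread_after_averaging(structured, w))
    # The purely local noise flattens out as the averaging window grows;
    # the row with large-scale structure stays far from uniform.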

In cosmology we don’t want to average over temperatures, but we want to average over the density of matter. On short scales, which for cosmologists is something like the size of the Milky Way, matter clearly is not uniformly distributed. If we average over the whole universe, then the average is uniform, but that’s uninteresting. What we want to know is, if we average over increasingly large distances, at what distance does the distribution of matter become uniform to good accuracy?

Yes, good question. One can calculate this distance using the concordance model, which is the currently accepted standard model of cosmology. It’s also often called ΛCDM, where Λ is the cosmological constant and CDM stands for cold dark matter. The distance at which the cosmological principle should be a good approximation to the real distribution of matter was calculated from the concordance model in a 2010 paper by Hunt and Sarkar.

They found that the deviations from a uniform distribution fall below one part in a hundred from an averaging distance of about 200-300 Mpc on. 300 megaparsecs are about 1 billion light years. And just to give you a sense of scale, our distance to the next closest galaxy, Andromeda, is about two and a half million light years. A billion light years is huge. But from that distance on at the latest, the cosmological principle should be fulfilled to good accuracy – if the concordance model is correct.

One problem with the cosmological principle is that astrophysicists have on occasion assumed it is valid already on shorter distances, down to about 100 Megaparsec. This is an unjustified assumption, but it has for example entered the analysis of supernovae data from which the existence of dark energy was inferred. And yes, that’s what the Nobel Prize in physics was awarded for in 2011.

Two years ago, I told you about a paper by Subir Sarkar and his colleagues, that showed if one analyses the supernovae data correctly, without assuming that the cosmological principle holds on too short distances, then the evidence for dark energy disappears. That paper has been almost entirely ignored by other scientists. Check out my earlier video for more about that.

Today I want to tell you about another problem with the cosmological principle. As I said, one can calculate the scale from which on it should be valid from the standard model of cosmology. Beyond that scale, the universe should look pretty much the same everywhere. This means in particular there shouldn’t be any clumps of matter on scales larger than about a billion light years. But. Astrophysicists keep on finding those.

Already in 1991 they found the Clowes-Campusano quasar group, which is a collection of 34 quasars about 9.5 billion light years away from us, and it extends over 2 billion light years, clearly too large to be compatible with the prediction from the concordance model.

Since 2003 astrophysicists have known of the “Great Wall”, a collection of galaxies about a billion light years away from us that extends over 1.5 billion light years. That too is larger than it should be.

Then there’s the “Huge quasar group” which is… huge. It spans a whopping 4 billion light years. And just in July, Alexia Lopez discovered the “Giant Arc”, a collection of galaxies, galaxy clusters, gas and dust that spans 3 billion light years.

Theoretically, these structures shouldn’t exist. It can happen that such clumps appear coincidentally in the concordance model. That’s because this model uses an initial distribution of matter in the early universe with random fluctuations. So it could happen that you end up with a big clump somewhere just by chance. But you can calculate the probability for that to happen. The Giant Arc alone has a probability of less than 1 in 100,000 of having come about by chance. And that doesn’t factor in all the other big structures.

What does it mean? It means the evidence is mounting that the cosmological principle is a bad assumption for developing a model of the entire universe, and it probably has to go. It increasingly looks like we live in a region of the universe that happens to have a significantly lower density than the average in the visible universe. This underdense region we live in has been called the “local hole”, and it has a diameter of at least 600 million light years. This is the finding of a recent paper by a group of astrophysicists from Durham in the UK.

They also point out that if we live in a local hole then this means that the local value of the Hubble rate must be corrected down. This would be good news because currently measurements for the local value of the Hubble rate are in conflict with the value from the early universe. And that discrepancy has been one of the biggest headaches in cosmology in the past years. Giving up the cosmological principle could solve that problem.

However, the finding in that paper from the Durham group is only a mild tension with the concordance model, at about 3 σ, which is not highly statistically significant. But Sarkar and his group had another paper recently in which they do a consistency check on the concordance model and find a conflict at 4.9 σ, that is, a less than one-in-a-million chance of it being a coincidence.

This works as follows. If we measure the temperature of the cosmic microwave background, it appears hotter in the direction of our motion relative to it. This gives rise to the so-called CMB dipole. You can measure this dipole. You can also measure the dipole by inferring our motion from observations of quasars. If the concordance model were right, the direction and magnitude of the two dipoles should be the same. But they are not. You see this in this figure from Sarkar’s paper. The star is the location of the CMB dipole, the triangle that of the quasar dipole. In this figure you see how far away from the CMB expectation the quasar result is.
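
If you want to compare two such dipole directions yourself, the relevant quantity is the great-circle angle between them on the sky. A small sketch; the CMB dipole coordinates below are the well-measured ones, while the quasar-dipole coordinates are placeholders you’d replace with the values from Sarkar’s paper.

    import numpy as np

    def angular_separation(l1, b1, l2, b2):
        # Great-circle angle in degrees between two sky directions,
        # given in galactic coordinates (longitude l, latitude b, degrees)
        l1, b1, l2, b2 = np.radians([l1, b1, l2, b2])
        c = np.sin(b1) * np.sin(b2) + np.cos(b1) * np.cos(b2) * np.cos(l1 - l2)
        return np.degrees(np.arccos(np.clip(c, -1, 1)))

    cmb_dipole = (264.0, 48.3)     # galactic (l, b) of the CMB dipole
    quasar_dipole = (240.0, 30.0)  # placeholder, not the measured values
    print(angular_separation(*cmb_dipole, *quasar_dipole))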

These recent developments make me think that in the next ten years or so, we will see a major paradigm shift in cosmology, where the current standard model will be replaced with another one. Just what the new model will be, and if it will still have dark energy, I don’t know. But I’ll keep you up to date. So don’t forget to subscribe, see you next week.

Saturday, August 28, 2021

Why is quantum mechanics weird? The bomb experiment.

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



I have done quite a few videos in which I have tried to demystify quantum mechanics. Because many things people say are weird about quantum mechanics aren’t weird. Like superpositions or entanglement. Not weird. No, really, they’re not weird, just a little unintuitive. But now I feel like I may accidentally have left you with the impression that quantum mechanics is not weird at all. But of course it is. And that’s what we’ll talk about today.

Before we talk about quantum mechanics, big thanks to our tier four supporters on patreon. Your help is greatly appreciated. And you too can help us, go check out our page on Patreon or click on the join button, right below this video. Now let’s talk about quantum weirdness.

First I’ll remind you what’s not weird about quantum mechanics, though you may have been told it is. In quantum mechanics we describe everything by a wave-function, usually denoted with the Greek letter Psi. The wave-function itself cannot be observed. We just use it to calculate the probabilities of measurement outcomes, for example the probability that the particle hits a screen at a particular place. Some people say it’s weird that you can’t observe the wave-function. But I don’t see anything weird with that. You see, the wave-function describes probabilities. It’s like the average person. You never see The Average Person. It’s a math thing that we use to describe probabilities. The wave-function is like that.

Another thing that people seem to think is weird is that in quantum mechanics, the outcome of a measurement is not determined. Calculating the probability for the outcome is the best you can do. That is maybe somewhat disappointing, but there is nothing intrinsically weird about it. People just think it’s weird because they have beliefs about how nature should be.

Then there are the dead-and-alive cats. A lot of people seem to think those are weird. I would agree. But of course we don’t see dead and alive cats.

But then what’s with particles that are in two places at the same time, or have two different spins? We do see those, right? Well, no. We actually don’t see them. When we “see” a particle, when we measure it, it does have definite properties, not two “at the same time”.

So what do physicists mean when they say that particles “can be at two places at the same time”? It means they have a certain mathematical expression, called a superposition, from which they calculate the probability of what they observe. A superposition is just a sum of wavefunctions for particles that are in two definite states. Yes, it’s just a sum. The math is easy, it’s just hard to interpret. What does it mean that you have a sum of a particle that’s here and a particle that’s there? Well, I don’t know. I don’t even know what could possibly answer this question. But I don’t need to know what it means to do a calculation with it. And I don’t think there’s anything weird with superpositions. They’re just sums. You add things. Like, you know, two plus two.
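
Since a superposition really is nothing more than a sum, the entire math fits in a few lines; “here” and “there” are just labels for two basis states.

    import numpy as np

    here  = np.array([1, 0], dtype=complex)   # particle definitely here
    there = np.array([0, 1], dtype=complex)   # particle definitely there

    # The superposition is the (normalized) sum of the two
    superposition = (here + there) / np.sqrt(2)
    print(np.abs(superposition)**2)   # [0.5 0.5]: 50% here, 50% there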

Okay, so superpositions, or particles which “are in two places” are just a flowery way to talk about sums. But what’s with entanglement? That’s nonlocal, right? And isn’t that weird?

Well, no. Entanglement is a type of correlation. Nonlocal correlations are all over the place and everywhere, they’re not specific to quantum mechanics, and there is nothing weird about nonlocal correlations because they are locally created. See, if I rip a photo into two and ship one half to New York, then the two parts of the photo are now non-locally correlated. They share information. But that correlation was created locally, so nothing weird about that.

Entanglement is also locally created. Suppose I have a particle with a conserved quantity that has value zero. It decays into two particles. Now all I know is that the shares of the conserved quantity for both particles have to add to zero. So if I call the one share x, then the other share is minus x, but I don’t know what x is. This means these particles are now entangled. They are non-locally correlated, but the correlation was locally created.

Now, entanglement is in a quantifiable sense a stronger correlation than what you can do with non-quantum particles, and that’s cool and is what makes quantum computers run, but it’s just a property of how quantum states combine. Entanglement is useful, but not weird. And it’s also not what Einstein meant by “spooky action at a distance”, check out my earlier video for more about that.

So then what is weird about quantum mechanics? What’s weird about quantum mechanics is best illustrated by the bomb experiment.

The bomb experiment was first proposed by Elitzur and Vaidman in 1993, and goes as follows.

Suppose you have a bomb that can be triggered by a single quantum of light. The bomb could either be live or a dud, you don’t know. If it’s a dud, then the photon doesn’t do anything to it, if it’s live, boom. Can you find out whether the bomb is live without blowing it up? Seems impossible. But quantum mechanics makes it possible. That’s where things get really weird.

Here’s what you do. You take a source that can produce single photons. Then you send those photons through a beam splitter. The beam splitter creates a superposition, so, a sum of the two possible paths that the photon could go. To make things simpler, I’ll assume that the weights of the two paths are the same, so it’s 50/50.

Along each possible path there’s a mirror, so that the paths meet again. And where they meet there’s another beam splitter. If nothing else happens, that second beam splitter will just reverse the effect of the first, so the photon continues in the same direction as before. The reason is that the two paths of the photon interfere like sound waves interfere. In the one direction they interfere destructively, so they cancel out each other. In the other direction they add together to 100 percent. We place a detector where we expect the photon to go, and call that detector A. And because we’ll need it later, we put another detector up here, where the destructive interference is, and call that detector B. In this setup, no photon ever goes into detector B.

But now, now we place the bomb into one of those paths. What happens?

If the bomb’s a dud, that’s easy. In this case nothing happens. The photon splits, takes both paths, recombines, and goes into detector A, as previously.

What happens if the bomb’s live? If the bomb’s live, it acts like a detector. So there’s a 50 percent chance that it goes boom because you detected the photon in the lower path. So far, so clear. But here’s the interesting thing.

If the bomb is live but doesn’t go boom, you know the photon’s in the upper path. And now there’s nothing coming from the lower path to interfere with.

So then the second beam splitter has nothing to recombine, and the same thing happens there as at the first beam splitter: the photon goes either way with equal probability. It is then detected either at A or B.

The probability for this is 25% each, because it’s half of the half of cases in which the photon took the upper path.

In summary, if the bomb’s live, it blows up 50% of the time, 25% of the time the photon goes into detector A, 25% of the time it goes into detector B. If the photon is detected at A, you don’t know if the bomb’s live or a dud because that’s the same result. But, here’s the thing, if the photon goes to detector B, that can only happen if the bomb is live AND it didn’t explode.
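
If you want to check this bookkeeping, here is a minimal simulation of the interferometer with 2×2 beam-splitter matrices. The convention that a reflection picks up a factor of i, and which output port I call A, are choices made for this sketch; the 50/25/25 numbers don’t depend on them.

    import numpy as np

    # 50/50 beam splitter acting on (upper, lower) path amplitudes
    BS = np.array([[1, 1j],
                   [1j, 1]]) / np.sqrt(2)

    photon_in = np.array([1, 0], dtype=complex)   # photon enters on one port

    # --- Bomb is a dud: nothing disturbs the paths, they recombine
    out = BS @ (BS @ photon_in)
    print("dud:  P(A) =", abs(out[1])**2, " P(B) =", abs(out[0])**2)
    # P(A) = 1, P(B) = 0: detector B never clicks

    # --- Bomb is live in the lower path: it acts as a which-path detector
    after_bs1 = BS @ photon_in
    p_boom = abs(after_bs1[1])**2            # photon caught in the lower path
    survived = np.array([after_bs1[0], 0])   # no boom: photon is in the upper path
    survived = survived / np.linalg.norm(survived)

    out = BS @ survived
    print("live: P(boom) =", p_boom)
    print("live: P(A) =", (1 - p_boom) * abs(out[1])**2,
          " P(B) =", (1 - p_boom) * abs(out[0])**2)
    # boom 50%, A 25%, B 25%: a click at B flags a live bomb that didn't explode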

That means, quantum mechanics tells you something about the path that the photon didn’t take. That’s the sense in which quantum mechanics is truly non-local and weird. Not because you can’t observe the wave-function. And not because of entanglement. But because it can tell you something about events that didn’t happen.

You may think that this can’t possibly be right, but it is. This experiment has actually been done, not with bombs, but with detectors, and the result is exactly as quantum mechanics predicts.

Saturday, August 21, 2021

Everything vibrates. It really does.

[This is a transcript of the video embedded below.]



I’ve noticed that “everything vibrates” is quite a popular idea among alternative medicine gurus and holistic healers and so on. As with most of the scientific ideas that pseudoscientists borrow, there’s a grain of truth to it. So in just which way is it true that everything vibrates? That’s what we’ll talk about today.

Today’s video was inspired by these two lovely ladies.

    We don't have the vibrational frequency to host that virus.  
    And I taught her that. 
    So if you don't have that vibrational frequency right here you're not going to get it. 
    We don't have the vibrational frequency to get COVID? 
    Correct. Do you know that everything in this universe vibrates. And is alive. There is life with that. That's what I'm talking about. I don't put life into COVID. I'm not going to wear a mask. 
    I'm not going to wear a mask either. I never wear a mask. Ever.

Now. There’s so much wrong with that, it’s hard to decide where to even begin. I guess the first thing to talk about is what we mean by vibration. As we’ve already seen a few times, definitions in science aren’t remotely as clear-cut as you might think, but roughly what we mean by vibration is a periodic deformation in a medium.

The typical example is a gong. So, some kind of metal that can slightly deform but has a restoring force. If you hit it, it’ll vibrate until air resistance damps the motion. Another example is that the sound waves created by the gong will make your eardrum vibrate. The earth itself also vibrates, because it’s not perfectly rigid and small earthquakes constantly make it ring. Indeed, the earth has what’s called a “breathing mode”, that’s an isotropic expansion and contraction. So the radius of the earth expands and shrinks regularly with a period of about 20.5 minutes.

But. We also use the word vibration for a number of more general periodic motions, for example the vibration of your phone that’s caused by a small electric motor, or vibrations in vehicles that are caused by resonance.

What all these vibrations have in common is that they are also oscillations, where an oscillation is just any kind of periodic behavior. If you ask the internet, “vibrations” are a specific type of “mechanical” oscillation. But that doesn’t make sense, because material properties, like those of the gong, are consequences of atomic energy levels of electrons, so, that’s electromagnetism and quantum mechanics, not mechanics. And we also talk of vibrational modes of molecules. Just where to draw the line between vibration and oscillation is somewhat unclear. You wouldn’t say electromagnetic waves vibrate, you’d say they oscillate, but just why I don’t know.

For this reason, I think it’s better to talk about oscillations than vibrations, because it’s clearer what it means. An oscillation is a regularly recurring change. In a water-wave for example, the height of the water oscillates around a mean value. Swings oscillate. Hormone levels oscillate. Traffic flow oscillates, and humans, yeah, humans can also oscillate.

With this hopefully transparent shift from the vague term vibration to oscillation, I’ll now try to convince you that everything oscillates. The reason is that everything is made of particles, and according to quantum mechanics, particles are also waves, and waves, well, oscillate.

Indeed, every massive particle has a wave-length, the so-called Compton wave-length, that’s inversely proportional to the mass of the particle: λ = h/(m c). So here, lambda is the Compton wave-length, h is Planck’s constant, and c is the speed of light. The frequency of this oscillation is the speed of light divided by the wave-length. But just what is it that oscillates? Well, it’s this thing that we call the wave-function of the particle, usually denoted Psi. I have talked about Psi a lot in my earlier videos. The brief summary is that physicists don’t agree on what it is, but they agree that Psi gives us the probability to observe the particle in one place or another, or with one velocity or another, or with one spin or another, and so on.

For an electron, the wave-function oscillates about 10^20 times per second. This means the particle carries its own internal clock with it. And all particles do this. The heavier ones, like protons or atoms, oscillate even faster than electrons, because the frequency is proportional to the mass.
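
You can check that number from the formula above, since f = c/λ = m c²/h. A quick sketch with the standard constants:

    h = 6.626e-34    # Planck's constant, J s
    c = 2.998e8      # speed of light, m/s
    m_e = 9.109e-31  # electron mass, kg

    f = m_e * c**2 / h
    print(f"{f:.2e} Hz")   # ~1.24e20, so about 10^20 oscillations per second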

Neutrinos, which are lighter than electrons, don’t just oscillate by themselves, they actually oscillate into each other. This is called neutrino-mixing. There are three different types of neutrinos, and as they travel, the fractions of the three types periodically change. If you start out with neutrinos of one particular type, after some while you have all three types of them. This can only happen if neutrinos have masses, so the neutrino oscillations tell us neutrinos are not massless, and a Nobel Prize was awarded for this discovery in 2015.

Photons, the particles that make up light, are, for all we know, massless. This means they do not have an internal clock, but they also oscillate, it’s just that their oscillation frequency depends on the energy.

Okay, so we have seen that all particles oscillate constantly, thanks to quantum mechanics. But, you may say, particles alone don’t make up the universe, what about space and time? Well, unless you’ve been living under a rock you probably know that space-time can wiggle; that’s the so-called gravitational waves, which were first detected in 2015 by the LIGO gravitational wave interferometer.

The gravitational waves that we can presently measure come from events in which space-time gets particularly strongly bent and curved, for example black holes colliding or a black hole eating up a neutron star or something like that. But it’s not that this is the only thing that makes space-time wiggle. It’s just that normally the wiggles are way, way too small to measure. Strictly speaking though, every time you move, you make gravitational waves. Tiny ripples of space-time. So, yes, space-time also vibrates. Really, everything vibrates, kind of, all the time. It’s actually correct. But it doesn’t help against COVID.

Saturday, August 14, 2021

Physicist Despairs over Vacuum Energy

[This is a transcript of the video embedded below.]



Vacuum energy is all around us, it makes the universe expand, there are quantum fluctuations, and before you know it, they’re talking about energy chakras and quantum healing. Even many physicists and science writers are very, very confused about what vacuum energy is. But don’t despair, at the end of this video you’ll know why it’s not what you were told it is.

This video came out of my desperation over a letter that was published in the June 2021 issue of Scientific American. It’s a follow-up question to an article about the accelerated expansion of the universe, and it reads as follows:
“[The article] “Cosmic Conundrum” by Clara Moskowitz, describes how the most likely cause of the accelerating expansion of the universe is “vacuum energy,” the effect of virtual particles popping in and out of existence. But it does not explain why vacuum energy would cause the universe to expand. I would think that if space is filled with evanescent virtual particles, they would collectively exert a huge gravitational force that would counteract expansion.”
To which the editor replies:
“Vacuum energy is positive and has a constant density throughout space. Thus, increasing the volume of space increases the total amount of vacuum energy, which requires work. It is the opposite of a gas, whose energy and density decrease as it expands. When that happens, the gas exerts positive pressure. In contrast, because vacuum energy is positive, it exerts negative pressure, so galaxies on the largest scales are pushed apart, not pulled together.”
I didn’t understand this answer. Which is a little bit embarrassing because I’m one of the people quoted in the original article. So I want to look at this in a little bit more detail.

First of all, the terminology. What’s vacuum energy and why is it important?

If we leave aside gravity, we can’t measure absolute energies. We only ever measure energy differences. You probably remember this from your electronics class: you never measure the electric potential energy, you measure differences in it, which is what makes currents flow. It’s like having a long list of height comparisons: Peter is 2 inches taller than Mary, and Mary is one inch taller than Bob, and Bob is 5 inches shorter than Alice. But you don’t know anyone’s absolute height. Energies are like that.

Now, this is generally the case, that you can only measure energy differences – as long as you ignore gravity. Because all kinds of energies have a gravitational pull, and for that gravitational pull it’s the absolute energy that matters, not the relative one.

So it really only becomes relevant to talk about absolute energies in general relativity, Einstein’s theory for gravity. Yes, that guy again. Now, if we want to find out the absolute value of energies, we need to do this only for one case, because we know the energy differences. Think of the height-comparisons. If you know all the relative heights, you only need to measure the absolute height of one person, say Paul, to know all the absolute heights. In General Relativity, we don’t measure Paul, we measure the vacuum.

How do we do this? For this we need to have a look at Einstein’s equations for General Relativity. Here they are. They are called “Einstein’s Field Equations”. They contain two constants, so they have the same value at every point in space and at every moment in time. The one constant, the G, is Newton’s constant and determines the strength of gravity. The other, the Lambda, is called the cosmological constant. The R’s here quantify the curvature of space-time.
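
Since the equations only appear on screen in the video, here they are in their common form for readers of the transcript (sign conventions differ between textbooks):

    R_μν − (1/2) R g_μν + Λ g_μν = (8πG/c⁴) T_μν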

And this term with the T contains all the other kinds of energies, particles and radiation and so on. This means, if we set the T-term to zero, we have empty space. You can therefore interpret Lambda as the energy-density of the vacuum. So, not the entire energy, but energy per volume. This vacuum energy-density doesn’t dilute if the universe expands because it’s a property of space-time. That makes it different from all other kinds of energy densities that we know. The other ones, for example for matter or radiation, all dilute with the expansion of the universe. The vacuum energy density doesn’t.

What does the energy-density of the vacuum have to do with the acceleration of the universe? If we want to know what the universe does as a whole, we introduce what’s called the “scale factor” a. The scale factor tells you how distances change with time. So a is a function of time, a(t). If the universe expands, a increases, if the universe shrinks, a decreases. You plug this into Einstein’s equations. And then one of the equations says that the second time derivative of the scale factor, so that’s the acceleration of the expansion, has a contribution that is proportional to the cosmological constant. So that’s where it comes from. A positive lambda makes the expansion speed up.
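
For reference, the equation in question is the second Friedmann equation, which in a common form reads

    ä/a = − (4πG/3) (ρ + 3p/c²) + Λc²/3

where ρ and p are the density and pressure of matter and radiation. You can read off that a positive Λ adds a positive contribution to the acceleration.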

What’s this all got to do with vacuum fluctuations? Nothing. And that’s where physicists get very confused. You see, we cannot calculate this measureable vacuum energy-density which appears in general relativity. It’s a constant that we infer from observations and that’s that.

A lot of physicists claim that particle physics predicts the vacuum energy-density, and it’s 120 orders of magnitude too large, and that’s the worst prediction ever, I’m sure you’ve heard that story. But that’s just wrong. This value which you get from particle physics is unmeasurable, so it’s not a prediction. If you hear someone claim it was a bad prediction, I suggest you ask them what theory was ruled out by the conflict between the prediction and observation? The answer is: none. And why is that? It’s because it wasn’t a prediction.

Okay, so we have learned: vacuum has an energy-density, it’s a constant of nature, it’s proportional to the acceleration of the expansion of the universe, and it has nothing to do with quantum fluctuations. This hopefully also clarifies how something that’s supposedly due to fluctuations can be constant both in space and in time. It’s because nothing is fluctuating. So that would have been my response to the question.

Let us then look at the editor’s response. This response uses an analogy between the vacuum energy-density and the simplest type of gas, called an “ideal gas”. An ideal gas is just a bunch of particles moving around, bumping off each other. The ideal gas has a volume, temperature, pressure and an internal energy. Internal energy is what you need to do work. The key equation is
ΔU = - p ΔV
U is the internal energy, p the pressure and V the volume. Those Δ’s mean you have small changes of the quantities that come after the delta. The pressure of an ideal gas is always positive. What this equation tells you is that if you increase the volume, so ΔV is positive, then ΔU is negative, so the internal energy decreases. This means if the gas expands it does work, and then you have less internal energy left. Makes sense.

Now, as we have seen, the energy-density of the vacuum, Lambda, is just a constant. The total energy is just the density times the volume. This means, if the volume increases because the universe expands, but the energy density of the vacuum is constant, then the amount of vacuum energy increases with the volume. If you identify this energy with the internal energy of a gas, this means ΔU has to be positive, and if ΔV is also positive, because space expands, this can only be if the pressure is negative.
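
Spelled out in the same notation as above, and in units where c = 1 (this is just the analogy’s arithmetic, not the full general-relativistic treatment): the total vacuum energy is U = ρ_Λ V, so ΔU = ρ_Λ ΔV. Setting this equal to −p ΔV gives

    p = − ρ_Λ

so a positive vacuum energy density corresponds to a negative pressure of the same magnitude.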

And this is correct. If you associate a pressure with the vacuum, then that pressure is negative. However, the problem with this explanation is that the vacuum energy is not an internal energy, it’s a total energy, and the vacuum energy is not a gas in any meaningful way because it’s not made of anything, and how you get from the ideal gas analogy to the expansion of the universe I don’t know.

So I don’t want to call this answer wrong, but I think it’s misleading. It strongly suggests a physical interpretation, namely that the cosmological constant is some kind of weird gas, but it doesn’t spell out that this is really just an analogy. I am picking on this because simplified analogies like this, which make no sense if you think about them, are the reason so many people either think physics is incomprehensible, or think physicists have totally lost it, or maybe both.

If you look at the math, the best way to think about the vacuum energy-density is that it’s just a constant of nature.

Saturday, August 07, 2021

Why the Hype around Hypersonics?

[This is a transcript of the video embedded below.]



Recently, we’ve seen quite a few headlines about traveling faster than the speed of sound. For example, the startup Venus Aerospace wants to reach 12 times the speed of sound. That’s 9,000 miles per hour, and would bring you from New York to Frankfurt in less than half an hour.

NASA is working on a quiet supersonic airplane called the X-59, which is supposed to have a reduced sonic boom and be ready in 2024. The American airline United announced they want to offer supersonic flights by 2029. And Boeing, as well as some other companies, have made deals with the US military about developing hypersonic missiles. How seriously should you take these headlines? What’s the difference between supersonic and hypersonic? And what’s with those missiles? That’s what we’ll talk about today.

First things first, what is hypersonic flight? Is it just a fancy name to mean really fast? You know… hyperfast! No. Hypersonic flight is defined as flight above Mach 5. The Mach number tells you how many times faster than the speed of sound you are moving. So, moving at Mach 1 through a medium means you are moving at the speed of sound in that medium. Below Mach 0.8 you’re subsonic. The range from 0.8 to 1.2 is called transonic. Between Mach 1.2 and 5 you’re Supersonic, and faster than Mach 5 is hypersonic.

What happens once you fly faster than sound? A plane emits noise that travels outwards into all directions, at the speed of sound, but in rest with the air, not with the plane. If the plane moves below the speed of sound, some of the sound moves ahead of the plane. But if you reach the speed of sound, the plane moves exactly with the sound, and the sound piles up along a cone creating a shock-wave. This is what creates the supersonic boom. You can’t hear the plane coming, but you hear a loud bang once it’s passed by.

Actually, a plane usually creates two shockwaves, one at the front and one at the back of the plane. This means there are really two supersonic booms and if the plane is large enough, you can hear them separately. Here’s an example from the Concorde.

The supersonic boom happens at any speed above the speed of sound though it’s the loudest directly at the sound barrier since the sound spreads out somewhat more at higher speeds. For this reason, supersonic flights are currently forbidden over populated areas, they’re just too loud.

But what’s so special about Mach 5 that everything above is “hypersonic”? It’s somewhat of an arbitrary definition, but it’s roughly at about Mach 5 that some “funny effects” start to become important, effects that either don’t happen or aren’t important at lower speeds.

What are those “funny effects”? The issue with hypersonic flying is what physicists call “stagnation points.” If you have an object that’s flying through a gas fast enough, it’ll basically stop the flow of gas at some places. But the kinetic energy from the gas molecules has to go somewhere, and that increases the temperature to what’s called the “stagnation temperature”. Problem is, this stagnation temperature increases quickly with the Mach number.

The equation that relates the two looks like this: T₀ = T (1 + (γ − 1)/2 × M²), where T naught is the stagnation temperature and T the temperature before stagnation. M is the Mach number, and γ is a constant that depends on the medium. For air, γ is about 1.4. As you can see, the stagnation temperature increases with the square of the Mach number. That’s a problem.

Let’s plug in some numbers for illustration. If you are flying at an altitude of about twelve kilometers, like an average overseas flight, T is about 219 Kelvin, or a little below −50 °C. For Mach 1 this gives a stagnation temperature of about 260 Kelvin, so not much happens.

But already at Mach 2 the stagnation temperature is 390 Kelvin, that’s 117 Celsius. Next time you fly on a fighter jet, don’t stick your hand out of the window. At Mach 5 the stagnation temperature is 1300 Kelvin, and by Mach 8 you have 3000 Kelvin. At that temperature, most metals melt. That’s not good.
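
Here is a quick sketch that reproduces these numbers from the formula above (219 K ambient, γ = 1.4 for air):

    gamma = 1.4   # adiabatic index of air
    T = 219.0     # ambient temperature in Kelvin at ~12 km altitude

    for M in [1, 2, 5, 8, 12]:
        T0 = T * (1 + (gamma - 1) / 2 * M**2)
        print(f"Mach {M:2d}: T0 ≈ {T0:5.0f} K ({T0 - 273.15:5.0f} °C)")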

And it’s not enough to keep the metal from melting, because materials weaken long before they melt and also, the pressure increases along with the temperature. Worse, in these conditions out-of-equilibrium chemical processes occur, causing molecules to split or ionize.

Well, you may say, what about rockets, seems to work for them. Indeed, for example, the space shuttle was flying regularly at Mach 25. But. The thing with rockets is they go up. And if you go up, the atmosphere thins out and eventually ends, so air resistance is no longer a problem. The space shuttle left the atmosphere at “only” about Mach 3. Flying hypersonic in the atmosphere, that’s the problem.

And we don’t want to do it with a rocket engine, but with a jet engine. The difference is that a rocket uses combustion with an additional oxygen supply, and the rocket carries the source of the oxygen with it. That’s why rockets work in outer space. Jet engines, on the contrary, take in and push out air. They are what’s called “air-breathing” machines. This requires less fuel and makes them lighter.

So how do you get to hypersonic speeds without melting the aircraft? Well, the obvious thing is to use materials with extraordinarily high melting points. Among the most promising materials are tantalum carbide and hafnium carbide, with melting temperatures above 4,000 Kelvin. But that isn’t enough. To get beyond Mach 5, you need to redesign the whole engine. Interestingly, and maybe contrary to what you might have expected, you do this by removing parts.

In a jet engine, air enters the engine from the front and is compressed with rotating blades. This heats the air, which is then mixed with fuel in the combustion chamber. But above about Mach 3 the air which enters the engine is hot and compressed just because it’s being slowed down so much, so one doesn’t need the compressor. The thing that’s left is called a ramjet, called that way because it “rams” into the air.

A ramjet can’t fly below Mach 3 because it doesn’t have a compressor, so it needs to be launched by other planes. But it works up to about Mach 6. Above that, temperature and pressure get too high for good combustion.

So why don’t we just keep the air flowing through the engine, instead of slowing it down, which causes the heating? Indeed, great idea. If you do this, you get what’s called a scramjet, short for Supersonic Combustion Ramjets.

The scramjet design greatly alleviates the heating problem inside the engine. Scramjets are basically tubes with some divisions inside where fuel is injected into the air – they don’t even have moving parts. The problem with scramjets is that the air goes in one end and out the other in about a millisecond, and it’s also turbulent. So the challenge is to find the right shape to control the turbulence and get the fuel where it needs to be. Scramjets work from about Mach 4 upward. The current speed record is Mach 9.6, held by NASA’s X-43 jet.

In 2013, Boeing’s X-51 scramjet broke a record. It was the first scramjet to use jet fuel instead of hydrogen, and it had a more lightweight design. The record it broke was not one of speed (it flew just a bit over Mach 5) but of duration: it flew for 3.5 minutes.

Yes, you heard that right. 3.5 minutes. That’s the record. And don’t forget that to launch, it first had to be carried aboard a B-52, then accelerated to Mach 4.5 with a rocket booster.

The leader of the team that designed the X-51, Kevin Bowcutt, delivered a TED talk in which he envisions a future where people take hypersonic flights regularly, and he claims that one way to do it would be to use antimatter as fuel... Hahaha.

Ok, so I’m somewhat skeptical that we’ll see hypersonic commercial flights in the near future. Not only is the technology not ready, as you have seen, the whole process is also ridiculously fuel-consuming. When it comes to supersonic flights, NASA seems to have made good progress in alleviating the problem with the sonic boom by smart design. This is neat but doesn’t really do anything about the fuel problem.

This makes me think we might see some supersonic flights but they’ll probably remain rare and expensive. Personally I think it makes much more sense to look for a mode of transportation in which you excavate a tube or tunnel to lower air pressure, such as the hyperloop, because that way it becomes dramatically easier to reach high speeds.

So much for hypersonic travel, but what about those hypersonic weapons? It seems we’re in the middle of a hypersonic arms race between the United States, Russia, and China. Russia recently became the first nation to deploy a hypersonic missile, tested in December 2018. And the Chinese have built a new hypersonic wind tunnel that, if you trust the Chinese media, reaches up to Mach 30. If you don’t trust them, it’s still Mach 22.

The budgets for this research are, one could say, stratospheric. For 2021, U.S. research agencies have allocated $3.2 billion for hypersonic weapons research, up from $2.6 billion in the previous year.

The attraction is easy to understand: at these speeds, the enemy just doesn’t have time to react to the missile. The path of “normal” ballistic missiles is easy to predict, so anti-missile systems can target and destroy them. They’re also easy to see coming by radar because they fly high. But hypersonic missiles are fast, can fly low so they appear on the radar only late, and can change direction unpredictably, so by the time you see them it might be too late to do anything about it.

But is it really all advantages? No, according to a paper by researchers from MIT that appeared in January 2021. That’s because common ballistic missiles fly at high altitudes where the air pressure is really low, so reaching hypersonic speeds is fairly easy. They then simply fall down, but even so they still hit the ground at hypersonic speed. According to the MIT researchers, with an optimal trajectory, a ballistic missile would even be faster than a hypersonic glider.

They calculate that for a distance of 8500 kilometers, the hypersonic glider would take 28 minutes, and the optimized ballistic path only 25 minutes. They claim that the threat from hypersonic weapons has been exaggerated by military officials, quite possibly to get funding. In their paper, they write:
“It is commonly claimed that hypersonic weapons can reduce warhead delivery times by reaching their targets faster than existing ballistic missiles could. In 2019 testimony before the U.S. Senate Committee on Armed Services, the Commander of U.S. Strategic Command addressed this delivery time issue. Asked how long it would take a Russian hypersonic glide weapon to strike the United States, he responded: “it is a shorter period of time. The ballistic missile is roughly 30 minutes. A hypersonic weapon, depending on the design, could be half of that, depending on where it is launched from, the platform. It could be even less than that.””

The researchers then explain “The implication that a hypersonic missile could halve the time necessary to deliver a warhead between Russia and the United States—while false—subsequently permeated the U.S. discourse, fueling narratives of the revolutionary nature of these weapons.”
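To get a feeling for these numbers, here’s a quick back-of-the-envelope sketch. The distance and travel times are the ones from the MIT paper quoted above; the sea-level speed of sound is just a rough yardstick, since the actual Mach number depends on altitude:

    # Average speeds for the 8500 km scenario from the MIT paper
    distance_km = 8500
    speed_of_sound = 0.343  # km/s at sea level, rough yardstick only
    for name, minutes in (("hypersonic glider", 28), ("ballistic missile", 25)):
        v = distance_km / (minutes * 60)  # average speed in km/s
        print(f"{name}: {v:.1f} km/s, on average roughly Mach {v / speed_of_sound:.0f}")

Both come out above five kilometers per second on average, with the ballistic path slightly ahead.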

They also point out that even though land radars cannot detect low-flying missiles until they are close, because the missiles are hidden behind the curvature of the Earth, hypersonic vehicles flying inside the atmosphere are actually easy to detect. That’s because they become so terribly hot that they can be seen from satellites with infrared detectors. The researchers conclude that the performance and strategic implications of hypersonic weapons would be comparable to those of established ballistic missile technologies.

So, my conclusion from all this is that we might well see some supersonic passenger flights again in the next decades, but I doubt they’ll become common, and hypersonic missiles are an overhyped threat. We have better things to worry about.

Saturday, July 31, 2021

Are we made of math? Is math real?

[This is a transcript of the video embedded below.]


There’s a lot of mathematics in physics, as you have undoubtedly noticed. But what’s the difference between the math that we use to describe nature and nature itself? Is there any difference? Or could it be that they’re just the same thing, that everything *is math? That’s what we’ll talk about today.

I noticed in the comments to my earlier video about complex numbers that many people said oh, numbers are not real. But of course numbers are real.

Here’s why. You probably think I am “real”. Why? Because the hypothesis that I am a human being standing in front of a green screen, trying to remember that the “h” in “human” isn’t silent, explains your observations. And it explains your observations better than any other hypothesis, for example, that I’m computer generated, in which case I’d probably be better looking, or that I’m a hallucination, in which case your subconscious speaks German, und das macht irgendwie keinen Sinn, oder? (And that somehow doesn’t make sense, does it?)

We use the same notion of “reality” in physics: something is real because it’s a good explanation for our observations. I am not trying to tell you that this is The Right Way to define reality; it’s just, for all I can tell, how we use the word. We can’t actually see elementary particles, like the Higgs-boson, with our own eyes. We say they are real because certain mathematical structures that we have come up with describe our observations. Same thing with gravitational waves, or black holes, or particle spin.

And numbers are just like that. Of course we don’t see numbers as objects walking around, but as attributes of objects, like the spin that is a property of certain particles, not a thing in and by itself. If you see three apples, three describes what you see, therefore it’s real. Again, if that is not a notion of reality you want to use, that’s totally okay, but then I challenge you to come up with a different notion that is consistent and agrees with how most people actually use the word.

Interestingly enough, not all numbers are real. The example I just gave was for integers. But look at numbers with infinitely many digits after the decimal point: we don’t actually need all those digits to describe observations, because we cannot measure anything with infinite accuracy. In reality, we only ever need a finite number of digits. Now, these numbers with infinitely many digits are called the real numbers. Which means, odd as it may sound, we don’t know whether the real numbers are, erm, real.

But of course physics is more difficult than just numbers. For all we currently know, everything in the universe is made of 25 particles, held together by four fundamental forces: gravity, the electromagnetic force, and the strong and weak nuclear forces. Those particles and their forces can be mathematically described by Einstein’s theory of General Relativity and Quantum Field Theory, theories which have been remarkably successful in explaining what we observe.

As far as the science is concerned, I’d say that’s it. But people often ask me things like “what is space-time?” or “what is a particle?” And I don’t know what to do with questions like this.

Space-time is a mathematical structure that we use in our theories. This mathematical structure is defined by its properties. Space-time is a differentiable manifold with Lorentzian signature, it has a distance measure, it has curvature, and so on. It’s a math thing. We call it “real” because it correctly describes our observations.

It’s a similar story for the particles. A particle is a vector in a Hilbert space that transforms under certain irreducible representations of the Poincaré group. That’s the best answer we have to the question of what a particle is. Again, we call those particles “real” because they correctly describe what we observe.

So when physicists say that space-time is real or the Higgs-boson is real, they mean that a certain mathematical structure correctly describes observations. But many people seem to find this unsatisfactory. Now, that may partly be because they’re looking for a simple answer and there just isn’t one. But I think there’s another reason: they intuitively think there must be something more to space-time and matter, something that distinguishes the math from the physics, something that makes the math real or, as Stephen Hawking put it, “breathes fire into the equations”.

But those mathematical structures in our theories already describe all our observations. This means just going by the evidence, you don’t need anything more. It’s therefore possible that reality actually is math, that there is no distinction between them. This idea is not in conflict with any observation. The origin of this idea goes all the way back to Plato, which is why it’s often called Platonism, though Plato thought that the ideal mathematical forms are somehow beyond human recognition. The idea has more recently been given a modern formulation by Max Tegmark who called it the Mathematical Universe Hypothesis.

Tegmark’s hypothesis is actually more, shall we say, grandiose. He doesn’t just claim that reality is math, but that all math is real. Not just the math that we use in the theories that describe our observations, but all of it. The exponential function, Mandelbrot sets, the number 18 – they’re all as real as you and I. If you believe Tegmark.

But should you believe Tegmark? Well, as we have seen earlier, the justification we have for calling some mathematical structures real is that they describe what we observe. This means we have no rationale for talking about the reality of mathematics that does not describe what we observe; therefore, the mathematical universe hypothesis isn’t scientific. This is generally the case for all types of multiverse. The physicists who believe in these argue that unobservable universes are real because they are in their math. But just because you have math for something doesn’t mean it’s real. You can just assume it’s real, but this is unnecessary to describe what we observe and therefore unscientific.

Let me be clear that this doesn’t mean it’s wrong. It isn’t wrong to say the exponential function exists, or there are infinitely many other universes that we can’t see. It’s just that this is a belief-based statement, not supported by evidence. What’s wrong is to claim that science says so.

Then what about the question whether we are made of math? Well, you can’t falsify this hypothesis. Suppose you had an observation that you couldn’t describe with math; it could always be that you just haven’t found the right math. So the idea that we’re made of math is also not wrong but unscientific. You can believe it if you want. There’s no evidence for or against it.

I want to finish by saying I am not doing these videos to convince you to share my opinion. I just want to introduce you to some topics that I think are thought-stimulating, and give you a starting point, in the hope it will give you something interesting to think about.

Saturday, July 24, 2021

Can Physics Be Too Speculative?



Imagination and creativity are the heart of science. But look at the headlines in the popular science media and you can’t shake off the feeling that some physicists have gotten ahead of themselves. Multiverses, dark matter, string theory, fifth forces, and that asteroid which was supposedly alien technology. These ideas make headlines, but are then either never heard of again – like hundreds of hypothetical particles that were never detected, and tests of string theory that were impossible in the first place – or later turn out to be wrong – all reports of fifth forces disappeared, and that asteroid was probably a big chunk of nitrogen. Have physicists gone too far in their speculations?

The question of how much speculation is healthy differs from the question of where to draw the line between science and pseudoscience. That’s because physicists usually justify their speculations as work in progress, so they don’t have to live up to the standards we expect of fully-fledged scientific theories. It’s then not as easy as pointing out that string theory is for all practical purposes untestable, because its supporters will argue that maybe one day they’ll figure out how to test it. The same argument can be made about the hypothetical particles that make up dark matter, or those fifth forces. Maybe one day they’ll find a way to test them.

The question we are facing, thus, is similar to the one that the philosopher Imre Lakatos posed: Which research programs make progress, and which have become degenerative? When speculation stimulates progress it benefits science, but when speculation leads to no insights for the description of nature, it eats up time and resources, and gets in the way of progress. Which research program is on which side must be assessed on a case-by-case basis.

Dark matter is an example of a research program that used to be progressive but has become degenerative. In its original form, dark matter was a simple parameterization that fit a lot of observations – a paradigmatic example of a good scientific hypothesis. However, as David Merritt elucidates in his recent book “A Philosophical Approach to MOND”, dark matter has trouble with more recent observations, and physicists in the area have taken to accommodating data rather than making successful predictions.

Moreover, the abundance of specific particle models for dark matter that physicists have put forward is unnecessary to explain any existing observations. These models produce publications, but they do not further progress. This isn’t so surprising, because guessing a specific particle from rather unspecific observations of its gravitational pull has an infinitesimal chance of working.

Theories for the early universe or fifth forces suffer from a similar problem. They do not explain any existing observations. Instead, they make the existing – very well working – theories more complicated without solving any problem.

String theory is a different case. That’s because string theory is supposed to remove an inconsistency in the foundations of physics: The missing quantization of gravity. If successful, that would be progress in and by itself, even if it doesn’t result in testable predictions. But string theorists have pretty much given up on their original goal and never satisfactorily showed the theory solves the problem to begin with.

Much of what goes as “string theory” today has nothing to do with the original idea of unifying all the forces. Instead, string theorists apply certain limits of their theory in an attempt to describe condensed matter systems. Now, in my opinion, string theorists vastly overstate the success of this method. But the research program is progressing and working towards empirical predictions.

Multiverse research concerns itself with postulating the existence of entities that are unobservable in principle. This isn’t scientific and should have no place in physics. The origin of the problem seems to be that many physicists are Platonists – they believe that their math is real, rather than just a description of reality. But Platonism is a philosophy and shouldn’t be mistaken for science.

What about Avi Loeb’s claim that the interstellar object `Oumuamua was alien technology? Loeb has justified his speculation by pointing towards scientists who ponder multiverses and extra dimensions. He seems to think his argument is similar. But Loeb’s argument isn’t degenerative science. It's just bad science. He jumped to conclusions from incomplete data.
It isn’t hard to guess that many physicists will object to my assessments. That is fine – my intention here is not so much to argue this particular assessment is correct, but that this assessment must be done regularly, in collaboration between physicists and philosophers.

Yes, imagination and creativity are the heart of science. They are also the heart of science fiction. And we shouldn’t conflate science with fiction.

Saturday, July 17, 2021

What’s the Fifth Force?

[This is a transcript of the video embedded below.]


Physicists may have found a fifth force. Uh, that sounds exciting. And since it sounds so exciting, you see it in headlines frequently, so frequently you probably wonder how many of these fifth forces there are. And what’s a fifth force anyway? Could it really exist? If it exists, is it good for anything? That’s what we’ll talk about today.

Before we can talk about the fifth force, we have to briefly talk about the first four forces. To our best current knowledge, all matter in the universe is made of 25 particles. Physicists collect them in the “standard model” that’s kind of like the periodic table for subatomic particles. These 25 particles are held together by four forces. That’s 1) gravity, apples falling down and all that, 2) the electromagnetic force, that’s a combination of the electric and magnetic force which really belong together, 3) the strong nuclear force that holds together atomic nuclei against the electromagnetic force, and 4) the weak nuclear force that’s responsible for nuclear decay.

All other forces that we know – for example the van der Waals force that keeps atoms together in molecules, friction forces, muscle forces – these are all emergent forces. That they are emergent means that they derive from those four fundamental forces. And that those four forces are fundamental means they are not emergent – they cannot be derived from anything else. Or at least we don’t presently know anything simpler that they could be derived from.

Now, if you say that gravity is a force in the wrong company, someone might point out that Einstein taught us gravity is not a force. Yes, that guy again. According to Einstein, gravity is the effect of a curved space-time. Looks like a force, but isn’t one. Indeed, that’s the reason why physicists, if they want to be very precise, will not speak of four fundamental *forces, but of four fundamental interactions. But in reality, I hear them talk about the gravitational force all the time, so I would say if you want to call gravity a force, please go ahead, we all know what you mean.

As you can tell already from that, what physicists call a force doesn’t have a very precise definition. For example, the three forces besides gravity – the electromagnetic and the strong and weak nuclear force – are similar in that we know they are mediated by exchange particles. So that means if there is a force between two particles, like, say, a positively charged proton and a negatively charged electron, then you can understand that force as the exchange of another particle between them. For the case of electromagnetism, that exchange particle is the photon, the quantum of light. For the strong and weak nuclear force, we also have exchange particles. For the strong nuclear force, those are called “gluons” because they “glue” quarks together, and for the weak nuclear force, these are called the Z and W bosons.

Gravity, again, is the odd one out. We believe it has an exchange particle – that particle is called the “graviton” – but we don’t know whether that particle actually exists; it’s never been measured. And on the other hand, we have an exchange particle to which we don’t associate a force, and that’s the Higgs-boson. The Higgs-boson is the particle that gives masses to the other particles. It does that by interacting with those particles, and it acts pretty much like a force carrier. Indeed, some physicists *do* call the Higgs-exchange a force. But most of them don’t.

The reason is that the exchange particles of electromagnetism, the strong and weak nuclear force, and, hypothetically, even gravity all come out of symmetry requirements. The Higgs-boson doesn’t. That may not be a particularly good reason to not call it a force carrier, but that’s the common terminology: four fundamental forces, among them gravity, which isn’t a force; but not the Higgs-exchange, which is a force. Yes, it’s confusing.

So what’s with that fifth force? The fifth force is a hypothetical new fundamental force for which we don’t yet have evidence. If we found it, it would be the biggest physics news in 100 years. That’s why it frequently makes headlines. There isn’t one particular fifth force; rather, there’s a large number of “fifth” forces that physicists have invented and that they’re now looking for.

We know that if a fifth force exists, it’s difficult to observe, because otherwise we’d already have noticed it. This means the force either only becomes noticeable at very long distances – so you’d see it in cosmology or astrophysics – or it becomes noticeable at very short distances, and it’s hidden somewhere in the realm of particle physics.

For example, the anomaly in the muon g-2 could be a sign of a new force carrier, so it could be a fifth force. Or maybe not. There is also a supposed anomaly in some nuclear transitions, which could be mediated by a new particle called X17, which would carry a fifth force. Or maybe not. Neither of these anomalies is very compelling evidence; the most likely explanation in both cases is some difficult nuclear physics.

The most plausible case for a fifth force, I think, comes from the observations we usually attribute to dark matter. Astrophysicists introduce dark matter because they see a force acting on normal matter. The currently most widely accepted hypothesis is that this force is just gravity – an old force, if you wish – but that there is some new type of matter. That doesn’t fit very well with all observations, so it could instead be that it’s actually not just gravity but indeed a new force, and that would be a fifth force. Dark energy, too, is sometimes attributed to a fifth force. But that isn’t really necessary to explain observations, at least not at the moment.

If we found evidence for such a new force, could we do anything with it? Almost certainly not, at least not in the foreseeable future. The reason is, if such forces exist, their effects must be very very small otherwise we’d have noticed them earlier. So, you most definitely can’t use it for Yogic flying, or to pin your enemies to the wall. However, who knows, if we do find a new force, maybe one day we’ll figure out something to do with it. It’s definitely worth looking for.

So, if you read headlines about a fifth force, that just means there’s some anomalous observation which can be explained by a new fundamental interaction, most often a new particle. It’s a catchy phrase, but really quite vague and not very informative.

Saturday, July 10, 2021

How Dangerous are Solar Storms?

[This is a transcript of the video embedded below.]


On May 23, 1967, the US Air Force almost started a war. It was during the most intense part of the Cold War. On that day, the American missile warning system, designed to detect threats coming from the Soviet Union, suddenly stopped working. Radar stations at all sites in the Northern Hemisphere seemed to be jammed. Officials of the U.S. Air Force thought that the Soviet Union had attacked their radar and began to prepare for war. Then they realized it wasn’t the Soviets. It was a solar storm.

What are solar storms, how dangerous are they, and what can we do about them? That’s what we will talk about today.

First things first, what is a solar storm? The sun is so hot that in it, electrons are not bound to atomic nuclei, but can move around freely. Physicists call this state a “plasma”. If electric charges move around in the plasma, that builds up magnetic fields. And the magnetic fields move more electric charges around, which increases the magnetic fields and so on. That way, the sun can build up enormous magnetic fields, powered by nuclear fusion.

Sometimes these magnetic fields form arcs above the surface of the sun, often in an area with sunspots. These arcs can rip and blast off, and then two things can happen: First, a lot of radiation is released suddenly – visible light, but also ultraviolet light and up into the X-ray range. This is called a solar flare. The radiation is usually accompanied by some fast-moving particles, called solar particles. And second, in some cases the flare comes with a shock wave that blasts some of the plasma into space. This is called a “coronal mass ejection”, and it can carry billions of tons of hot plasma. The solar flare together with the coronal mass ejection is called a “solar storm”.

A solar storm can last from minutes to hours and can release more energy than humanity has consumed in its entire history. The activity of the sun has an 11-year cycle, and the worst solar storms often come in the years after the solar maximum. We’re currently just starting a new cycle, and the next maximum of solar activity will be around 2025. The statistically most dangerous years of the solar cycle will come after that.

Well, actually, the solar cycle is really 22 years, because after 11 years the magnetic field flips, and the cycle isn’t complete until it flips back. It’s just that as far as solar activity is concerned, 11 years is the relevant cycle.

How do these solar storms affect us? Space is big, and most solar storms don’t head in our direction. If one does, the radiation from the flare moves at the speed of light and takes about eight minutes to reach us. The radiation exposure that comes with it is a health risk for astronauts and pilots, and it can affect satellites in orbit. For example, during a solar storm in 2003, the Japanese weather satellite Midori 2 was permanently damaged, and many other satellites automatically shut down because their navigation systems were not working. This solar storm became known as the 2003 Halloween storm because it happened in October.

Down here on Earth, we are mostly shielded from the flare. But not from the coronal mass ejection. It arrives after the flare with a delay of twelve hours to three days, depending on its initial velocity, and it carries its own magnetic field. When it reaches Earth, that magnetic field connects with that of Earth. One effect of this is that the aurora becomes stronger, can be seen closer to the equator, and can even change color to become red. During the Halloween storm, it could be seen as far south as the Mediterranean, and also in Texas and Florida.
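By the way, that delay of twelve hours to three days directly tells you how fast the ejected plasma travels, because the sun is about 150 million kilometers away. A little sketch:

    SUN_EARTH_KM = 1.496e8  # average sun-earth distance in kilometers
    for hours in (12, 24, 72):
        speed = SUN_EARTH_KM / (hours * 3600)  # average speed in km/s
        print(f"arrival after {hours} hours -> average speed {speed:.0f} km/s")

So a fast ejection that arrives within half a day moves at several thousand kilometers per second, while a slow one that takes three days moves at a few hundred.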

The aurora is pretty and mostly harmless, but the magnetic field causes a big problem: because it changes so rapidly, it induces electric currents. The crust of the Earth is not very conductive, but our electric grids are, by design, very conductive. This means that the magnetic field from the solar storm drives large currents through the electric grid, which can damage power plants and transformers and cause power outages.

How big can solar storms get? The strength of solar storms is measured by the energy output of the solar flare. The smallest ones are called A-class and are near background levels, followed by B, C, M, and X-class. This is a logarithmic scale, so each letter represents a 10-fold increase in energy output. There are no more letters after X; instead, one adds numbers after the X. X10, for example, is another 10-fold increase beyond X1.
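If you want to make this scale concrete: the flare classes correspond to the peak X-ray flux measured near Earth. Here’s a sketch, assuming the standard GOES convention in which A-class starts at 10⁻⁸ Watts per square meter:

    # Peak X-ray flux of a flare class, assuming the GOES convention:
    # each letter is a 10-fold step, the trailing number multiplies the base.
    BASE_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}  # W/m^2

    def flare_flux(letter, number=1.0):
        return BASE_FLUX[letter] * number

    print(f"X17 -> {flare_flux('X', 17):.1e} W/m^2")
    print(f"X45 -> {flare_flux('X', 45):.1e} W/m^2")

The two example values, X17 and X45, will reappear below for the Halloween storm and the Carrington event.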

What’s the biggest solar storm on record? It might have been the one from September 2nd, 1859. The solar flare on that day was observed coincidentally by the English astronomer Richard Carrington, which is why it’s known today as the “Carrington event”.

The coronal mass ejection after the flare travelled directly toward Earth. At the time, there weren’t many power grids that could have been damaged, because electric lights wouldn’t become common in cities for another two decades or so. But they did have a telegraph system.

A telegrapher in Philadelphia received a severe electric shock when he was testing his equipment, and most of the devices stopped working because they couldn’t cope with the current. But some telegraphers figured out that they could keep their devices working if they disconnected the batteries and ran on just the current induced by the solar storm. The following exchange took place during the Carrington event between Portland and Boston:
    "Please cut off your battery entirely from the line for fifteen minutes."
    "Will do so. It is now disconnected."
    "Mine is disconnected, and we are working with the auroral current. How do you receive my writing?"
    "Better than with our batteries on. – Current comes and goes gradually."
    "My current is very strong at times, and we can work better without the batteries, as the Aurora seems to neutralize and augment our batteries alternately, making current too strong at times for our relay magnets. Suppose we work without batteries while we are affected by this trouble."


How strong was the Carrington event? We don’t really know. At the time, two measurement stations in England were keeping track of the magnetic field on Earth. But those devices worked by pushing an inked pen around on paper, and during the peak of the storm, the pen just ran off the page. Karen Harvey has estimated it to have had a total energy of up to 10³² erg – that’s 10²⁵ Joules – which puts it roughly into the category X45. You can read more about the Carrington event in Stuart Clark’s book “The Sun Kings”.

In 2013, the insurance market Lloyd’s estimated that if a solar storm similar to the Carrington event took place today, it would cause damage to the electric grid of between $0.6 and $2.6 trillion – for the United States alone. That’s about twenty times the damage of hurricane Katrina. Power outages could last from a couple of weeks up to two years, because so many transformers would have to be replaced.

The most powerful flare measured with modern methods was the 2003 Halloween storm. Again, it was so powerful that it overloaded the detectors: the sensors cut out at X17. It was later estimated to have been X35 ± 5, so somewhat below the Carrington event.

How bad can solar storms get? The magnetic field of our planet shields us from the particles that constantly stream off the sun, the so-called solar wind. It also prevents those solar particles from ripping the atmosphere off our planet. Mars, for example, once had an atmosphere, but since Mars has only a weak magnetic field, its atmosphere was stripped away by the solar wind. A solar storm that overwhelms the protection of our magnetic field could leave us exposed to the plasma raining down, and could in the worst case strip away some or all of our atmosphere. Can such strong solar storms happen?

Well, I hope you are sitting down, because for all I can tell, the answer is not obviously “no”. The more energy a solar storm has, the less likely it is. But occasionally astrophysicists observe stars very similar to our sun that produce flares so large they might put life in the habitable zone at risk. They don’t presently know whether such an event is possible for our sun, or how likely it is.

I didn’t know that when I began working on this video. Sorry for the bad news.

What can we do about it? Satellites in orbit can be shielded to some extent. Airplanes can be redirected to lower latitudes or altitudes to limit the radiation exposure of pilots and passengers. We can interrupt parts of the electric grid to prevent currents from moving around too easily. But besides that, the best we can do is prepare for what’s to come – maybe stock up on toilet paper. How well these preparations work depends crucially on how far in advance we know a solar storm is headed our way. That’s why scientists are currently working on solar weather forecasts that might give us a warning even before the flare.

And about those mega-storms: we don’t currently have the technology to do anything about them. So I think the best we can do is to invest in scientific research and development, so that one day we’ll be able to protect ourselves.

Thanks for watching, don’t forget to subscribe, see you next week.

Saturday, July 03, 2021

Can we make a new universe?

[This is a transcript of the video embedded below.]


Some people dream of making babies, some dream of making baby universes. Seriously? Yes, seriously. How is that supposed to work? What does it take to make a new universe? And if we make one, what do we do with it? That’s what we’ll talk about today.

At first sight, it seems impossible to make a new universe, because where would you take all that stuff from, if not from the old universe? But it turns out you don’t need a lot of stuff to make a new universe. And we know that from Albert Einstein. Yes, that guy again.

First, Albert Einstein famously taught us that mass is really just a type of energy, E equals m c squared and all that. But more importantly, Einstein also taught us that space is dynamic. It can bend and curve, and it can expand. It changes with time. And if space changes with time, then energy is not conserved. I explained this in more detail in an earlier video, but here’s a brief summary.

The simplest example of energy non-conservation is the cosmological constant. The cosmological constant is the reason that the expansion of our universe gets faster. It has units of an energy-density – so that’s energy per volume – and as the name says, it’s constant. But if the energy per volume is constant, and the volume increases, then the total energy increases with the volume. This means in an expanding universe, you can get a lot of energy from nothing – if you just manage to expand space rapidly enough. I know that this sounds completely crazy, but this is really how it works in Einstein’s theory of General Relativity. Energy is just not conserved.
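Here’s this bookkeeping in its simplest form, a little sketch assuming a constant energy density in an expanding box of space:

    rho = 1.0  # energy density of the cosmological constant (arbitrary units)
    for scale_factor in (1, 2, 4):  # each step doubles the linear size of the box
        volume = scale_factor**3
        print(f"space stretched {scale_factor}x -> total energy {rho * volume:.0f}")

Double the linear size, and the volume, and with it the total energy, goes up by a factor of eight, out of nowhere.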

So, okay, we don’t need a lot of matter, but how do we make a baby universe that expands? Well, you try to generate conditions similar to those that created our own universe.

There’s a little problem with that, which is that no one really knows how our universe was created in the first place. There are many different theories for it, but none of them has observational support. However, one of those theories has become very popular among astrophysicists, it’s called “eternal inflation” – and while we don’t know it’s right, it could be right.

In eternal inflation, our universe is created from the decay of a false vacuum. To understand what a false vacuum is, let’s first talk about what a true vacuum is. A true vacuum is in a state of minimal energy. You can’t get energy out of it, it’s stable. It just sits there. Because it already has minimal energy, it can’t do anything and you can’t do anything with it.

A false vacuum is one that looks like a true vacuum temporarily, but eventually it decays into a true vacuum because it has energy left to spare, and that extra energy goes into something else. For example, if you throw jelly at a wall, it’ll stick there for a moment, but then fall down. That moment when it sticks to the wall is kind of like a false vacuum state. It’s unstable and it will eventually decay into the true vacuum, which is when the jelly drops to the ground and the extra energy splatters it all over the place.

What does this have to do with the creation of our universe? Well, consider you have a lot of false vacuum. In that false vacuum, there’s a patch that decays into a true vacuum. The true vacuum has a lower energy, but it can have a higher pressure. If it has higher pressure, it’ll expand. That’s how our universe could have started. And in principle, you can recreate this situation in the laboratory. You “just” have to create this false vacuum state. Then part of it will decay into a true vacuum. And if the conditions are right, that true vacuum will expand rapidly. While it expands, it creates its own space. It does not grow into our universe; it makes a bubble.

This universe creation only works if you have enough energy, or mass, in the original blob of false vacuum. How much do you need? Depends on some parameters of the model which physicists don’t know for sure, but in the most optimistic case it’s about 10 kilograms. That’s what it takes to make a new universe. 10 kilograms.

But how do you create 10 kilograms of false vacuum? No one has any idea. Also, 10 kilograms might not sound like much if you’re a rocket scientist, but for particle physicists that’s an awful lot. The mass equivalent that even the presently biggest particle collider, the Large Hadron Collider, works with is 10 to the minus twenty grams. Now, if you collide big atomic nuclei instead of protons, you can bring this up by some orders of magnitude, but 10 kilograms is not something that high-energy physicists will work with in my lifetime. No one will create a new universe any time soon.
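To see just how far out of reach that is, here’s a rough sketch comparing those 10 kilograms with the mass equivalent of a single collision; the 13 TeV proton-proton collision energy is my assumption for the comparison:

    c = 2.998e8                    # speed of light in m/s
    eV = 1.602e-19                 # one electronvolt in Joules
    E_collision = 13e12 * eV       # ~13 TeV proton-proton collision, in Joules
    m_equivalent = E_collision / c**2   # mass equivalent via E = m c^2
    print(f"one collision: {m_equivalent * 1000:.1e} grams")   # ~2e-20 grams
    print(f"factor short of 10 kg: {10 / m_equivalent:.1e}")   # ~4e23

That’s more than twenty orders of magnitude to go.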

But, well, in principle, theoretically, we could do it. If you believe this story with the false vacuum and so on. Let us just suppose for a moment that this is correct, what would we do with these universes? Would we potty train them and send them to cosmic kindergarten?

Well, no, because sadly, these little baby universes don’t stay connected to their mother universe for long. Their connection is like a wormhole throat; it becomes unstable and pinches off within a fraction of a second. So you’d be giving birth to these universes and kick-starting their growth, but then, blip, they’re gone. From the outside they would look pretty much like small black holes.

By the way, this could be happening all the time without particle physicists doing anything. Because we don’t really understand the quantum properties of space. So, some people think that space really makes a lot of quantum fluctuations. These fluctuations happen at distances so short we can’t see them, but it could be that sometimes they create one of these baby universes.

If you want to know more about this topic, Zeeya Merali has written a very nice book about baby universes called “A Big Bang in a Little Room”.