Saturday, September 18, 2021

The physics anomaly no one talks about: What’s up with those neutrinos?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



In the past months we’ve talked a lot about topics that receive more attention than they deserve. Today I want to talk about a topic that doesn’t receive the attention it deserves. That’s a 20-year-old anomaly in neutrino physics which has been above the discovery threshold since 2018, but chances are you’ve never even heard of it. So what are neutrinos, what’s going on with them, and what does it mean? That’s what we’ll talk about today.

I really don’t understand why some science results make headlines and others don’t. For example, we’ve seen loads of headlines about the anomaly in the measurement of the muon g-2 and the lepton anomaly at the Large Hadron Collider. In both of these cases the observations don’t agree with the prediction, but neither is statistically significant enough to count as a new discovery, and in both cases there are reasons to doubt they’re actually new physics.

But in 2018, the MiniBooNE neutrino experiment at Fermilab confirmed an earlier anomaly from an experiment called LSND at the Los Alamos National Laboratory. The statistical significance of that anomaly is now at 6 σ. And in this case it’s really difficult to find an explanation that does not involve new physics. So why didn’t this make big headlines? I don’t know. Maybe people just don’t like neutrinos?

But there are lots of reasons to like neutrinos. Neutrinos are elementary particles in the standard model of particle physics. That they are elementary means they aren’t made of anything else, at least not as far as we currently know. In the standard model, we have three neutrinos. Each of them is a partner-particle of a charged lepton. The charged leptons are the electron, the muon, and the tau. So we have an electron-neutrino, a muon-neutrino, and a tau-neutrino. Physicists call these types of neutrinos the neutrino “flavors”. The standard model neutrinos each have a flavor, have spin ½, and carry no electric charge.

So far, so boring. But neutrinos are decidedly weird for a number of reasons. First, they are the only particles that interact solely through the weak nuclear force. All the other particles we know interact with the electromagnetic force, the strong nuclear force, or both. And the weak nuclear force is, well, weak. Which is why neutrinos rarely interact with anything at all. They mostly just pass through matter without leaving a trace. This is why they are often called “ghostly”. While you’ve listened to this sentence, about 10^15 neutrinos have passed through you.

This isn’t the only reason neutrinos are weird. What’s even weirder is that the three neutrino flavors mix into each other. That means, if you start with, say, only electron-neutrinos, they’ll convert into muon-neutrinos as they travel. And then they’ll convert back into electron-neutrinos. So, depending on the distance from the source at which you make a measurement, you’ll get more electron-neutrinos or more muon-neutrinos. Crazy! But it’s true. We have a lot of evidence that this actually happens, and indeed a Nobel Prize was awarded for it in 2015.

Now, to be fair, neutrino mixing in and of itself isn’t all that weird. Indeed, quarks also mix, it’s just that they don’t mix as much. That neutrinos mix is weird because neutrinos can only mix if they have masses. But we don’t know how they get those masses.

You see, the way that other elementary particles get masses is that they couple to the Higgs boson. But for this to work, we need a left-handed and a right-handed version of the particle, and the Higgs needs to couple to both of them together. That works for all particles except the neutrinos, because no one has ever seen a right-handed neutrino; we only ever measure left-handed ones. So the neutrinos mix, which means they must have masses, but we don’t know how they get these masses.

There are two ways to fix this problem. Either the right-handed neutrinos exist but are very heavy, so we haven’t seen them yet because creating them would take a lot of energy. Or the neutrinos are different from all the other spin ½ particles in that their left- and right-handed versions are just the same. This is called a Majorana particle. But either way, something is missing from our understanding of neutrinos.

And the weirdest bit is the anomaly that I mentioned. As I said, we have three flavors of neutrinos and they mix into each other as they travel. This has been confirmed by a large number of observations of neutrinos from different sources. There are natural sources, like the sun and the neutrinos created in the upper atmosphere when cosmic rays hit. And then there are neutrinos from man-made sources: particle accelerators and nuclear power plants. In all of these cases, you know how many neutrinos of which type are created at what energy. And then after some distance you measure them and see what you get.

What physicists then do is try to find parameters for the neutrino mixing that fit all the data. This is called a global fit, and you can look up the current status online. The parameters you need to fit are the differences in the masses, which determine the wavelength of the oscillation, and the mixing angles, which determine how much the neutrinos mix.
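To give you an idea what these parameters do, here is the standard two-flavor approximation for the oscillation probability; the real global fits use the full three-flavor version, but the structure is the same:

\[
P_{\nu_\alpha \to \nu_\beta}(L) \;=\; \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right) \;\approx\; \sin^2(2\theta)\,\sin^2\!\left(1.27\,\frac{\Delta m^2[\mathrm{eV}^2]\;L[\mathrm{km}]}{E[\mathrm{GeV}]}\right)
\]

The mixing angle θ sets how large the conversion can get, while the mass-square difference Δm², together with the neutrino energy E, sets the wavelength of the oscillation in the distance L.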

By 2005 or so, physicists had pretty much pinned down all the parameters. Except. There was one experiment which didn’t make sense. That was the Liquid Scintillator Neutrino Detector, LSND for short, which ran from 1993 to 1998. The LSND data just wouldn’t fit together with all the other data. It’s normally just excluded from the global fit.

In this figure, you see the LSND results from back then. The red and green is what you expect. The dots with the crosses are the data. The blue is the fit to the data. This excess has a statistical significance of 3.8 σ. As a quick reminder, 1 σ is one standard deviation. The more σ away from the expectation the data are, the less likely the deviation is to have come about coincidentally. So, the more σ, the more impressive the anomaly. In particle physics, the discovery threshold is 5 σ. The 3.8 σ of the LSND anomaly wasn’t enough to get excited about, but too much to just ignore.
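If you want to put numbers to these sigmas, here is a quick sketch that converts them into probabilities, assuming a simple one-sided Gaussian tail, which is the usual rule of thumb for such excesses; the exact values quoted in the papers come from more careful analyses:

```python
from math import erfc, sqrt

def p_value(sigma):
    # One-sided tail probability of a standard normal distribution:
    # the chance of a fluctuation at least this many standard deviations
    # above the expectation if there is no real effect.
    return 0.5 * erfc(sigma / sqrt(2))

for sigma in (3.8, 4.7, 5.0, 6.0):
    print(f"{sigma} sigma  ->  p = {p_value(sigma):.1e}")

# Roughly: 3.8 sigma ~ 7e-5, 5 sigma ~ 3e-7 (the discovery threshold),
# 6 sigma ~ 1e-9.
```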

15 years ago, I worked on neutrino mixing for a while, and my impression back then was that most physicists thought the LSND data was just wrong and would not be reproduced. That’s because the experiment was a little different from the others in several ways. It detected only anti-neutrinos created by a particle accelerator, and it had a very short baseline of only 30 meters, shorter than that of all the other experiments.

Still, a new experiment was commissioned to check on this. This was the MiniBooNE experiment at Fermilab. That’s the Mini Booster Neutrino Experiment, and it’s been running since 2003. As you can tell, by then the trend of cooking up funky acronyms had taken hold in physics. MiniBooNE is basically a big tank full of mineral oil surrounded by photo-detectors, which you see in this photo. The tank waits for neutrinos from the nearby Booster accelerator, which you see in this photo.

For the first data analysis in 2007, MiniBooNE didn’t have a lot of data, and the result seemed to disagree with LSND. This was what everyone expected. Look at this headline from 2007, for example. But then, in 2018, with more data, MiniBooNE confirmed the LSND result. Yes, you heard that right. They confirmed it with 4.7 σ, and the combined significance is 6 σ.

What does that mean? You can’t fit this observation by tweaking the other neutrino mixing parameters; there just aren’t sufficiently many parameters to tweak. The observation is simply incompatible with the standard model. So you have to introduce something new. Some ideas that physicists have put forward are symmetry violations, or new neutrino interactions that aren’t in the standard model. There is, of course, still the possibility that physicists misunderstand something about the experiment itself, but given that this is an independent reproduction of an earlier experiment, I find this unlikely. The most popular idea, which is also the easiest, is what’s called “sterile neutrinos”.

A sterile neutrino is one that doesn’t have a lepton associated with it; it doesn’t have a flavor. So we wouldn’t have seen it produced in particle collisions. Sterile neutrinos can, however, still mix into the other neutrinos. Indeed, that would be the only way sterile neutrinos could interact with the standard model particles, and so the only way we could measure them. One sterile neutrino alone doesn’t explain the MiniBooNE/LSND data though. You need at least two, or something else in addition. Interestingly enough, sterile neutrinos could also make up dark matter.

When will we find out? Indeed, seeing that the result is from 2018, why don’t we know already? Well, it’s because neutrinos… interact very rarely. This means it takes a really long time to detect sufficiently many of them to come to any conclusions.

Just to give you an idea, the MiniBooNE experiment collected data from 2002 to 2017. During that time, they saw an excess of about 500 events. 500 events in 15 years. So I think we’re onto something here. But glaciers now move faster than particle physics.

This isn’t a mystery that will resolve quickly but I’ll keep you up to date, so don’t forget to subscribe.

Saturday, September 11, 2021

The Second Quantum Revolution

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Quantum mechanics is more than a hundred years old. That sounds like it’s the stuff of dusty textbooks, but research on quantum mechanics is more active now than a century ago. That’s because many rather elementary facts about quantum mechanics couldn’t be experimentally settled until the 1980s. But then, by the year 2000 or so, experimental progress had totally transformed the field. Today it’s developing faster than ever. How did this happen, why does it matter, and what’s quantum teleportation? That’s what we’ll talk about today.

Albert Einstein was famously skeptical of quantum mechanics. Hi Albert. He thought quantum mechanics couldn’t possibly be a complete description of nature and he argued that something was missing from it.

You see, in quantum mechanics we can’t predict the outcome of a measurement. We can only predict the probability of getting a particular outcome. Without quantum mechanics, I could say that if I shoot my particle cannon, then the particles will land right there. With quantum mechanics, I can only say they’ll land right there 50% of the time, but have a small chance of landing anywhere, really.

Einstein didn’t like this at all. He thought that actually the outcome of a measurement is determined, it’s just that it’s determined by “hidden variables” which we don’t have in quantum mechanics. If that was so, the outcome would look random just because we didn’t have enough information to predict it.

To make this point, in 1935 Einstein wrote a famous paper with Boris Podolsky and Nathan Rosen, now called the EPR paper. In this paper they argued that quantum mechanics is incomplete, that it can’t be how nature works. They were the first to realize how essential “entangled” particles are to understanding quantum mechanics. This would become super-important and lead to odd technologies such as quantum teleportation, which I’ll tell you about in a moment.

Entangled particles share some property, but you only know that property for both particles together; it’s not determined for the individual particles. You may know, for example, that the spins of two particles must add up to zero even though you don’t know which particle has which spin. But if you measure one of the particles, quantum mechanics says that the spin of the other particle is suddenly determined, regardless of how far away it is. This is what Einstein called “spooky action at a distance”, and it’s what he, together with Podolsky and Rosen, tried to argue can’t possibly be real.
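To make this concrete, the spin example corresponds to the so-called singlet state of two particles A and B, which is the spin-1/2 version of the EPR argument that David Bohm later wrote down:

\[
|\psi\rangle \;=\; \frac{1}{\sqrt{2}}\Big(\,|{\uparrow}\rangle_A|{\downarrow}\rangle_B \;-\; |{\downarrow}\rangle_A|{\uparrow}\rangle_B\,\Big)
\]

The two spins always add up to zero, but neither A nor B has a definite spin of its own before a measurement is made.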

But Einstein or not, physicists didn’t pay much attention to the EPR paper. Have a look at this graph which shows the number of citations that the paper got over the years. There’s basically nothing until the mid 1960s. What happened in the 1960s? That’s when John Bell got on the case.

Bell was a particle physicist who worked at CERN. The EPR paper had got him to think about whether a theory with hidden variables can always give the same results as quantum mechanics. The answer he arrived at was “no”. Given certain assumptions, any hidden variable theory will obey an inequality, now called “Bell’s inequality”, that quantum mechanics does not have to fulfil.
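In the form that experiments actually test, the CHSH version, the inequality looks like this, where E(a,b) is the correlation between the outcomes on the two sides for detector settings a and b:

\[
S \;=\; \big|\,E(a,b) - E(a,b') + E(a',b) + E(a',b')\,\big| \;\le\; 2
\]

Any local hidden variable theory (with the usual assumptions) obeys S ≤ 2, while quantum mechanics predicts that entangled particles can reach S = 2√2 ≈ 2.83 for suitably chosen settings.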

Great. But that was just maths. The question now was: can we make a measurement in which quantum mechanics will actually violate Bell’s inequality and prove that hidden variables are wrong? Or will the measurements always remain compatible with a hidden variable explanation, thereby ruling out quantum mechanics?

The first experiment to find out was done in 1972 by Stuart Freedman and John Clauser at the University of California at Berkeley. They found that Bell’s inequality was indeed violated and the predictions of quantum theory were confirmed. For a while this result remained somewhat controversial because it didn’t have a huge statistical significance and it left a couple of “loopholes” by which you could make hidden variables compatible with observations. For example if one detector had time to influence the other, then you wouldn’t need any “spooky action” to explain correlations in the measurement outcomes.

But in the late 1970s physicists found out how to generate and detect single photons, the quanta of light. This made things much easier and beginning in the 1980s a number of experiments, notably those by Alain Aspect and his group, closed the remaining loopholes and improved the statistical significance.

For most physicists, that settled the case: Einstein was wrong. Hidden variables can’t work. There is one loophole in Bell’s theorem, called the “free will” loophole that cannot be closed with this type of experiment. This is something I’ve been working on myself. I’ll tell you more about this some other time but today let me just tell you what came out of all this.

These experiments did much more than just confirm quantum mechanics. By pushing the experimental limits, physicists understood how super-useful entangled particles are. They’re just something entirely different from anything physicists had dealt with before. And you can entangle not only two particles but actually arbitrarily many. And the more of them you entangle, the more pronounced the quantum effects become.

This has given rise to all kinds of applications, for example quantum cryptography. This is a method to securely transmit messages with quantum particles. The information is safe because quantum particles have this odd behavior that if you measure what their properties are, that changes them. Because of this, if you use quantum particles to encrypt a message, you can tell if someone intercepts it. I made a video specifically about this earlier, so check that out for more.

You can also use entangled particles to make more precise measurements, for example to study materials or to measure gravitational or magnetic fields. This is called quantum metrology; I also have a video specifically about this.

But maybe the oddest thing to have come out of this is quantum teleportation. Quantum teleportation allows you to send quantum information with entangled states, even if you don’t yourself know the quantum information. It roughly works like this. First you generate an entangled state and you give one half to the sender, let’s call her Alice, and the other half to the receiver, Bob. Alice takes her quantum information, whatever that is; it’s just another quantum state. She mixes it together with her end of the entangled state, which entangles her information with the state that is shared with Bob, and then she makes a measurement. The important thing is that this measurement only partly tells her what state the mixed system is in. So it’s still partly a quantum thing after the measurement.

But now remember, in quantum mechanics, making a measurement on one end of an entangled state will suddenly determine the state on the other end. This means Alice has pushed the quantum information from her state into her end of the entangled state and then over to Bob. But how does Bob get this information back out? For this he needs to know the outcome of Alice’s measurement. If he doesn’t have that, his end of the entangled state isn’t useful. So, Alice lets Bob know about her measurement outcome. This tells him what operation he needs to perform on his end to recreate the quantum information that Alice wanted to send.

So, Alice put the information into her end of the entangled state, tied the two together, sent information about the tie to Bob, who can then untie it on his end. In that process, the information gets destroyed on Alice’s end, but Bob can exactly recreate it on his end. This does not break the speed-of-light limit, because Alice has to send the information about her measurement outcome by ordinary means, but it’s an entirely new method of information transfer.
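If you like to see the bookkeeping spelled out, here is a minimal toy simulation of the protocol with plain state vectors; this is just a sketch of the textbook scheme in numpy, not a description of how the actual photon experiments work:

```python
import numpy as np

# Single-qubit gates
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])            # bit flip
Z = np.array([[1, 0], [0, -1]])           # phase flip
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Alice's quantum information: an arbitrary state a|0> + b|1>
a, b = 0.6, 0.8j
psi = np.array([a, b])

# Entangled pair shared by Alice (qubit 1) and Bob (qubit 2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Full three-qubit state: Alice's information qubit 0, then the pair
state = np.kron(psi, bell)

# Alice "mixes" her information with her half of the pair:
# CNOT (control qubit 0, target qubit 1), then Hadamard on qubit 0
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
state = kron(CNOT, I) @ state
state = kron(H, I, I) @ state

# Alice measures qubits 0 and 1; each outcome (m0, m1) has some probability
probs = [np.sum(np.abs(state[4*m0 + 2*m1: 4*m0 + 2*m1 + 2])**2)
         for m0 in (0, 1) for m1 in (0, 1)]
outcome = np.random.choice(4, p=probs)
m0, m1 = divmod(outcome, 2)

# Bob's qubit is now in the branch that matches Alice's outcome
bob = state[4*m0 + 2*m1: 4*m0 + 2*m1 + 2]
bob = bob / np.linalg.norm(bob)

# Alice sends (m0, m1) to Bob by ordinary (classical) means,
# and Bob applies the corresponding correction operations
if m1: bob = X @ bob
if m0: bob = Z @ bob

print("original:  ", psi)
print("teleported:", bob)   # the same state, whatever the outcome was
```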

Quantum teleportation was first successfully done in 1997, by the groups of Sandu Popescu and Anton Zeilinger. By now they do it IN SPACE… I’m not kidding. Look at the citations to the EPR paper again. They’re higher now than ever before.

Quantum technologies have a lot of potential that we’re only now beginning to explore. And this isn’t the only reason this research matters. It also matters because it’s pushing the boundaries of our knowledge. It’s an avenue to discovering fundamentally new properties of nature. Because maybe Einstein was right after all, and quantum mechanics isn’t the last word.

Today research on quantum mechanics is developing so rapidly it’s impossible to keep up. There’s quantum information, quantum optics, quantum computation, quantum cryptography, quantum simulations, quantum metrology, quantum everything. It’s even brought the big philosophical questions about the foundations of quantum mechanics back on the table.

I think a Nobel prize for the second quantum revolution is overdue. The people whose names are most associated with it are Anton Zeilinger, John Clauser, and Alain Aspect. They’ve been on the list for a Nobel Prize for quite a while, and I hope that this year they’ll get lucky.

Saturday, September 04, 2021

New Evidence against the Standard Model of Cosmology

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Physicists believe they understand quite well how the universe works on large scales. There’s dark matter and there’s dark energy, and there’s the expansion of the universe that allows matter to cool and clump and form galaxies. The key assumption of this model for the universe is the cosmological principle, according to which the universe is approximately the same everywhere. But more and more observations show that the universe just isn’t the same everywhere. What are those observations? Why are they a problem? And what does it mean? That’s what we’ll talk about today.

Let’s begin with the cosmological principle, the idea that the universe looks the same everywhere. Well. Of course the universe does not look the same everywhere. There’s more matter under your feet than above your head, more matter in the Milky Way than in intergalactic space, and so on. Physicists have noticed that too, so the cosmological principle more precisely says that matter in the universe is equally distributed when you average over sufficiently large distances.

To see what this means, forget about matter for a moment and suppose you have a row of detectors that measure, say, temperature. Each detector gives you a somewhat different temperature, but you can average over those detectors by taking a few of them at a time, let’s say 5, calculating the average value from the readings of those five detectors, and replacing the values of the individual detectors with that average value. You can then ask how far away this averaged distribution is from one that’s the same everywhere. In this example it’s pretty close.

But suppose you have a different distribution, for example this one. If you average over sets of 5 detectors again, the result still does not look the same everywhere. Now, if you average over all detectors, then of course the average is the same everywhere. So if you want to know how close a distribution is to being uniform, you average it over increasingly large distances and ask from what distance on it’s very similar to just being the same everywhere.
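Here is a small toy version of this averaging procedure, just to illustrate the idea; the numbers are made up for the example and have nothing to do with the real cosmological analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100

# Two toy "detector rows": small random scatter around a constant value,
# and the same scatter sitting on top of a large-scale trend
uniform_like = 20 + rng.normal(0, 1, n)
trending     = 20 + rng.normal(0, 1, n) + np.linspace(-5, 5, n)

def block_average(values, block):
    # Replace each group of `block` neighboring detectors by its mean
    return values.reshape(-1, block).mean(axis=1)

for name, data in [("uniform-like", uniform_like), ("trending", trending)]:
    coarse = block_average(data, 5)
    # How far is the averaged distribution from being the same everywhere?
    print(name, "spread after averaging over 5 detectors:",
          round(float(coarse.std()), 2))

# The first row looks nearly uniform after averaging over a few detectors;
# the second one still varies, so you would have to average over much
# larger blocks before it looks the same everywhere.
```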

In cosmology, we don’t want to average over temperatures; we want to average over the density of matter. On short scales, which for cosmologists means something like the size of the Milky Way, matter clearly is not uniformly distributed. If we average over the whole universe, then the average is uniform, but that’s uninteresting. What we want to know is: if we average over increasingly large distances, at what distance does the distribution of matter become uniform to good accuracy?

Yes, good question. One can calculate this distance using the concordance model, which is the currently accepted standard model of cosmology. It’s also often called ΛCDM, where Λ is the cosmological constant and CDM stands for cold dark matter. The distance at which the cosmological principle should be a good approximation to the real distribution of matter was calculated from the concordance model in a 2010 paper by Hunt and Sarkar.

They found that the deviations from a uniform distribution fall below one part in a hundred for averaging distances of about 200-300 Mpc and above. 300 Megaparsecs are about 1 billion light years. And just to give you a sense of scale, our distance to Andromeda, the closest large galaxy, is about two and a half million light years. A billion light years is huge. But from that distance on at the latest, the cosmological principle should be fulfilled to good accuracy – if the concordance model is correct.
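As a quick sanity check on these numbers, using the standard conversion of one parsec to about 3.26 light years:

```python
LY_PER_PC = 3.2616          # light years per parsec

for mpc in (100, 200, 300):
    ly = mpc * 1e6 * LY_PER_PC          # one Megaparsec is 10^6 parsecs
    print(f"{mpc} Mpc ~ {ly / 1e9:.2f} billion light years")

# 300 Mpc comes out at roughly 0.98 billion light years, which is why
# 300 Megaparsecs is quoted as about 1 billion light years.
```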

One problem with the cosmological principle is that astrophysicists have on occasion assumed it is valid already at shorter distances, down to about 100 Megaparsecs. This is an unjustified assumption, but it has, for example, entered the analysis of the supernova data from which the existence of dark energy was inferred. And yes, that’s what the Nobel Prize in Physics was awarded for in 2011.

Two years ago, I told you about a paper by Subir Sarkar and his colleagues which showed that if one analyses the supernova data correctly, without assuming that the cosmological principle holds at too short distances, then the evidence for dark energy disappears. That paper has been almost entirely ignored by other scientists. Check out my earlier video for more about that.

Today I want to tell you about another problem with the cosmological principle. As I said, one can calculate the scale from which on it should be valid from the standard model of cosmology. Beyond that scale, the universe should look pretty much the same everywhere. This means in particular there shouldn’t be any clumps of matter on scales larger than about a billion light years. But. Astrophysicists keep on finding those.

Already in 1991, they found the Clowes–Campusano quasar group, a collection of 34 quasars about 9.5 billion light years away from us that extends over 2 billion light years, clearly too large to be compatible with the prediction from the concordance model.

Since 2003, astrophysicists have known of the “Great Wall”, a collection of galaxies about a billion light years away from us that extends over 1.5 billion light years. That, too, is larger than it should be.

Then there’s the “Huge quasar group”, which is… huge. It spans a whopping 4 billion light years. And just in July, Alexia Lopez discovered the “Giant Arc”, a collection of galaxies, galaxy clusters, gas, and dust that spans 3 billion light years.

Theoretically, these structures shouldn’t exist. It can happen that such clumps appear coincidentally in the concordance model. That’s because this model uses an initial distribution of matter in the early universe with random fluctuations. So it could happen that you end up with a big clump somewhere just by chance. But you can calculate the probability for that to happen. The Giant Arc alone has a probability of less than one in a hundred thousand of having come about by chance. And that doesn’t factor in all the other big structures.

What does it mean? It means the evidence is mounting that the cosmological principle is a bad assumption for developing a model of the entire universe, and it probably has to go. It increasingly looks like we live in a region of the universe that happens to have a significantly lower density than the average of the visible universe. This underdense region we live in has been called the “local hole”, and it has a diameter of at least 600 million light years. This is the finding of a recent paper by a group of astrophysicists from Durham in the UK.

They also point out that if we live in a local hole, then the local value of the Hubble rate must be corrected downward. This would be good news, because currently, measurements of the local value of the Hubble rate are in conflict with the value inferred from the early universe. And that discrepancy has been one of the biggest headaches in cosmology in the past years. Giving up the cosmological principle could solve that problem.

However, the finding in that paper from the Durham group is only in mild tension with the concordance model, at about 3 σ, which is not highly statistically significant. But Sarkar and his group recently had another paper in which they do a consistency check on the concordance model and find a conflict at 4.9 σ, that is, a less than one-in-a-million chance of it being a coincidence.

This works as follows. If we measure the temperature of the cosmic microwave background, it appears hotter in the direction in which we move relative to it. This gives rise to the so-called CMB dipole. You can measure this dipole. You can also measure the dipole by inferring our motion from observations of quasars. If the concordance model were right, the direction and magnitude of the two dipoles should be the same. But they are not. You see this in this figure from Sarkar’s paper. The star is the location of the CMB dipole, the triangle that of the quasar dipole. In this figure you see how far away from the CMB expectation the quasar result is.

These recent developments make me think that in the next ten years or so, we will see a major paradigm shift in cosmology, where the current standard model will be replaced with another one. Just what the new model will be, and if it will still have dark energy, I don’t know. But I’ll keep you up to date. So don’t forget to subscribe, see you next week.