Saturday, August 01, 2020

What is the equivalence principle?

Folks, I recently read the website of the Flat Earth Society. I’m serious! It’s a most remarkable collection of… nonsense. Maybe most remarkable is how it throws together physical facts that are correct – but then gets their consequences completely wrong! This is most evident when it comes to flat earthers’ elaborations on Einstein’s equivalence principle.


The equivalence principle is experimentally extremely well-confirmed, yes. But flat earthers misconstrue evidence for the equivalence principle as “evidence for universal acceleration” or what they call the “universal accelerator”. By this they mean that the gravitational acceleration is the same everywhere on earth. It is not. But, you see, they believe that on their flat earth, there is no gravity. Instead, the flat earth is accelerating upwards. So, if you drop an apple, it’s not that gravity is pulling it down, it’s that the earth comes up and hits the apple.

The interesting thing now is that flat earthers claim Einstein said you cannot distinguish upward acceleration from downward gravity. That’s the equivalence principle, supposedly. So, you see, Einstein said it and therefore the earth is flat.

You can read on their website:
“Why does the physics of gravity behave exactly as if the earth were accelerating upwards? The Universal Accelerator answers this long-standing mystery, which has baffled generations of scientists, by positing that the earth is accelerating upwards.”

Ingenious! Why didn’t Einstein think of this? Well, because it’s wrong. And in this video, I will explain why it’s wrong. So, what is the equivalence principle? The equivalence principle says that:
“Acceleration in a flat space-time is locally indistinguishable from gravity.”
Okay, that sounds somewhat technical, so let us go through this step by step. I assume you know what acceleration is because otherwise you would not be watching a physics channel. Flat space-time means you are dealing with special relativity. So, you have combined space and time, as Einstein told us to do, but they are not curved; they’re flat, like a sheet of paper. “Locally” means in a small region. So, the equivalence principle says: If you can only make measurements in a small region around you, then you cannot tell acceleration apart from gravity. You can only tell them apart if you can make measurements over large enough distances.

This is what Einstein’s thought experiment with the elevator was all about. I talked about this in an earlier video. If you’re in the elevator, you don’t know whether the elevator is sitting on the surface of a planet and gravity is pulling down, or if the elevator is accelerating upward.

The historical relevance of the equivalence principle is that it allowed Einstein to make the step from special relativity to general relativity. This worked because he already knew how to describe acceleration in flat space – you can do that with special relativity. In general relativity then, space-time is curved, but locally it is flat. So you can use special relativity locally and get general relativity. The equivalence principle connects both – that was Einstein’s great insight.

So, the equivalence principle says that you cannot tell gravity from acceleration in a small region. That sounds indeed very much like what flat earthers say. But here’s the important point: How large the region needs to be to tell gravity apart from acceleration depends on how precisely you can measure and how far you are willing to walk. If you cannot measure very precisely, you may have to climb on a mountain top. You then find that the acceleration up there is smaller than at sea level. Why? Because the gravitational force decreases with the distance to the center of the earth. That’s Newton’s 1/R² force. Indeed, since the earth is not exactly a sphere, the acceleration also differs somewhat between the equator and the poles. This can be, and has been, measured to great precision.
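To put some numbers on this, here is a little Python sketch of Newton’s 1/R² law; the constants are rounded textbook values, and the mountain height is that of Mount Everest:

```python
G = 6.674e-11        # Newton's constant, m^3 / (kg s^2)
M_earth = 5.972e24   # mass of the earth, kg
R_sea = 6.371e6      # mean radius of the earth, m

def g(height):
    """Gravitational acceleration from Newton's 1/R^2 law, height in meters."""
    R = R_sea + height
    return G * M_earth / R**2

print(g(0))      # about 9.82 m/s^2 at sea level
print(g(8849))   # about 9.79 m/s^2 on top of Mount Everest
```

A difference of roughly 0.03 m/s², which modern gravimeters can resolve with ease.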

Yeah, we’ve known all this for a while. If the acceleration we normally assign to gravity was the same everywhere on earth, that would contradict a huge number of measurements. Evidence strongly speaks against it. If you measure very precisely, you can even find evidence for the non-universality of the gravitational pull in the laboratory. Mountains themselves, for example, have a non-negligible gravitational pull. This could be, and was, measured already in the 18th century. The gravitational acceleration caused by the ground underneath your feet also has local variations at constant altitude, just because in some places the density of the ground is higher than in others.

So, explaining gravity as a universal acceleration is in conflict with a lot of evidence. But can you instead just give the flat earth a gravitational pull? No, that does not fit with evidence either. Because for a disk the gravitational acceleration does not drop with 1/R². It falls more slowly with the distance from the disk. Exactly how depends on how far you are from the edge of the disk. In any case, it’s clearly wrong.
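For the field on the axis of a uniform disk there is a closed-form expression, g(z) = 2πGσ(1 − z/√(z² + R²)), with σ the surface mass density. Here is a small numerical comparison with the 1/R² field of a point of the same total mass; the disk parameters below are made up purely for illustration:

```python
import numpy as np

G = 6.674e-11
R_disk = 6.371e6     # disk radius in m, made-up illustration value
sigma = 1.0e9        # surface mass density in kg/m^2, made-up value
M = sigma * np.pi * R_disk**2   # total mass of the disk

def g_disk(z):
    """On-axis acceleration of a uniform disk, height z above its center."""
    return 2 * np.pi * G * sigma * (1 - z / np.sqrt(z**2 + R_disk**2))

def g_point(z):
    """Acceleration of a point mass M at distance z, Newton's 1/R^2 law."""
    return G * M / z**2

for z in [1e3, 1e5, 1e7, 1e9]:
    print(f"z = {z:.0e} m:  disk {g_disk(z):.3e}  point {g_point(z):.3e}")
```

Close to the disk the acceleration is nearly independent of altitude instead of falling with 1/R²; only far away do the two curves agree. And that altitude-independence is exactly what we do not observe.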

The equivalence principle is sometimes stated differently than I put it, namely as the equality of inertial and gravitational mass. Physicists don’t particularly like this way of formulating the equivalence principle because it’s not only mass that gravitates. All kinds of energy densities and momentum flow and pressure and so on also gravitate. So, strictly speaking it’s not correct to merely say inertial mass equals gravitational mass.

But in the special case when you are looking at a slowly moving point particle with a mass that is very small compared to that of the earth, then the equality of inertial and gravitational mass is a good way to think of the equivalence principle. If you use the approximation of Newtonian gravity, then you would describe this by saying that F equals m_i times a, with m_i the inertial mass and a the acceleration, and that must be balanced with the gravitational force that is Newton’s constant G times m_g, the gravitational mass of the particle, times the mass of earth divided by R², where R is the distance from the center of earth which is, excuse me, a sphere. So, if the inertial mass is equal to the gravitational mass of the particle, then these masses cancel out. If you calculate the path on which the particle moves, it will therefore not depend on the mass.
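Written out as formulas, the balance just described is

$$ m_i\,a \;=\; \frac{G\,m_g\,M_\oplus}{R^2} \qquad\Rightarrow\qquad a \;=\; \frac{m_g}{m_i}\,\frac{G\,M_\oplus}{R^2}, $$

where M⊕ is the mass of the earth. If m_i = m_g, the ratio in front drops out, and the acceleration is the same for every particle, whatever its mass.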

In general relativity, the equivalence of inertial and gravitational mass for a point particle has a very simple interpretation. Remember that, in general relativity, gravity is not a force. Gravity is really caused by the curvature of space-time. In this curved space-time a point particle just takes the path of the longest possible proper time between two places. This is an entirely geometrical requirement and does not depend on the mass of the particle.

Let me add that physicists use a few subtle distinctions of equivalence principles, in particular for quantum objects. If you want to know the technical details, please check the information below the video for a reference.

In summary, if you encounter a flat earther who wants to impress you with going on about the equivalence principle, all you need to know is that the equivalence principle is not evidence for universal acceleration. This is most definitely not what Einstein said.

If this video left you wishing you understood Einstein’s work better, I suggest you have a look at Brilliant dot org, who have been sponsoring this video. Brilliant offers online courses on a large variety of topics in mathematics and science, including physics. They have interactive courses on special relativity, general relativity, and even gravitational physics, where they explore the equivalence principle specifically. Brilliant is a great starting point to really understand how Einstein’s theories work and also test your understanding along the way.

To support this channel and learn more about Brilliant, go to brilliant.org/Sabine, and sign up for free. The first two-hundred people who go to that link will get twenty percent off the annual Premium subscription.

Thanks for watching, see you next week.

Saturday, July 25, 2020

Einstein’s Greatest Legacy: Thought Experiments

Einstein’s greatest legacy is not General Relativity, it’s not the photoelectric effect, and it’s not slices of his brain. It’s a word: Gedankenexperiment – that’s German for “thought experiment”.

Today, thought experiments are common in theoretical physics. We use them to examine the consequences of a theory beyond what is measurable with existing technology, but still measurable in principle. Thought experiments are useful to push a theory to its limits, and doing so can reveal inconsistencies in the theory or new effects. There are only two rules for thought experiments: (A) relevant is only what is measurable and (B) do not fool yourself. This is not as easy as it sounds.

Maybe the first thought experiment came from James Maxwell and is known today as Maxwell’s demon. Maxwell used his thought experiment to find out whether one can beat the second law of thermodynamics and build a perpetual motion machine, from which an infinite amount of energy could be extracted.

Yes, we know that this is not possible, but Maxwell said, suppose you have two boxes of gas, one of high temperature and one of low temperature. If you bring them into contact with each other, the temperatures will reach equilibrium at a common temperature somewhere in the middle. In that process of reaching the equilibrium temperature, the system becomes more mixed up and entropy increases. And while that happens – while the gas mixes up – you can extract energy from the system. It “does work” as physicists say. But once the temperatures have equalized and are the same throughout the gas, you can no longer extract energy from the system. Entropy has become maximal and that’s the end of the story.

Maxwell’s demon now is a little omniscient being that sits at the connection between the two boxes where there is a little door. Each time a fast atom comes from the left, the demon lets it through. But if there’s a fast atom coming from the right, the demon closes the door. This way the number of fast atoms on the one side will increase, which means that the temperature on that side goes up again and the entropy of the whole system goes down.
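If you want to see the demon’s sorting rule in action, here is a toy simulation; the particle number, the speed distribution, and the threshold for “fast” are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two boxes of atoms, starting with the same speed distribution,
# that is, the same temperature (mean kinetic energy, mass set to 1).
left = rng.rayleigh(scale=1.0, size=5000)
right = rng.rayleigh(scale=1.0, size=5000)

threshold = 1.2   # the demon calls an atom "fast" above this speed

# The demon's rule: fast atoms coming from the left are let through,
# everything else finds the door closed.
fast = left > threshold
right = np.concatenate([right, left[fast]])
left = left[~fast]

# Mean kinetic energy per atom (a stand-in for temperature) on each side:
print("left :", np.mean(0.5 * left**2))
print("right:", np.mean(0.5 * right**2))
```

The right box comes out hotter and the left one colder, even though the demon did no work on the gas.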

It seems like thermodynamics is broken, because we all know that entropy cannot decrease, right? So what gives? Well, the demon needs to have information about the motion of the atoms, otherwise it does not know when to open the door. This means, essentially, the demon is itself a reservoir of low entropy. If you combine demon and gas, the second law holds and all is well. The interesting thing about Maxwell’s demon is that it tells us entropy is somehow the opposite of information: you can use information to decrease entropy. Indeed, a miniature version of Maxwell’s demon has meanwhile been experimentally realized.

But let us come back to Einstein. Einstein’s best known thought experiment is that he imagined what would happen in an elevator that’s being pulled up. Einstein argued that there is no measurement that you can do inside the elevator to find out whether the elevator is at rest in a gravitational field or is being pulled up with constant acceleration. This became Einstein’s “equivalence principle”, according to which the effects of gravitation in a small region of space-time are the same as the effects of acceleration in the absence of gravity. If you convert this principle into mathematical equations, it becomes the basis of General Relativity.

Einstein also liked to imagine how it would be to chase after photons, which was super-important for him to develop special relativity, and he spent a lot of time thinking about what it really means to measure time and distances.

But maybe the most influential of his thought experiments was one that he came up with to illustrate that quantum mechanics must be wrong. In this thought experiment, he explored one of the most peculiar effects of quantum mechanics: entanglement. He did this together with Boris Podolsky and Nathan Rosen, so today this is known as the Einstein-Podolsky-Rosen or just EPR experiment.

How does it work? Entangled particles have some measurable property, for example spin, that is correlated between particles even though the value for each single particle is not determined as long as the particles have not been measured. If you have a pair of particles, you can know for example that if one particle has spin up, then the other one has spin down, or the other way round, but you may still not know which is which. The consequence is that if one of these particles is measured, the state of the other one seems to change – instantaneously.

Einstein, Podolsky and Rosen suggested this experiment because Einstein believed this instantaneous ‘spooky’ action at a distance is nonsense. You see, Einstein had a problem with it because it seems to conflict with the speed of light limit in Special Relativity. We know today that this is not the case, quantum mechanics does not conflict with Special Relativity because no useful information can be sent between entangled particles. But Einstein didn’t know that. Today, the EPR experiment is no longer a thought experiment. It can, and has been done, and we now know beyond doubt that quantum entanglement is real.

A thought experiment that still gives headaches to theoretical physicists today is the black hole information loss paradox. General relativity and quantum field theory are both extremely well established theories, but if you combine them, you find that black holes will evaporate. We cannot measure this for real, because the temperature of the radiation is too low, but it is measurable in principle.

However, if you do the calculation, which was first done by Stephen Hawking, it seems that black hole evaporation is not reversible; it destroys information for good. This however cannot happen in quantum field theory and so we face a logical inconsistency when combining quantum theory with general relativity. This cannot be how nature works, so we must be making a mistake. But which?

There are many proposed solutions to the black hole information loss problem. Most of my colleagues believe that the inconsistency comes from using general relativity in a regime where it should no longer be used and that we need a quantum theory of gravity to resolve the problem. So far, however, physicists have not found a solution, or at least not one they can all agree on.

So, yes, thought experiments are a technique of investigation that physicists have used in the past and continue to use today. But we should not forget that eventually we need real experiments to test our theories.

Saturday, July 18, 2020

Understanding Quantum Mechanics #4: It’s not as difficult as you think! (The Bra-Ket)

If you do an image search for “quantum mechanics” you will find a lot of equations that contain things which look like this |Φ> or this |0> or maybe also that <χ|. These things are what is called the “bra-ket” notation. What does this mean? How do you calculate with it? And is quantum mechanics really as difficult as they say? This is what we will talk about today.


I know that quantum mechanics is supposedly impossible to understand, but trust me when I say the difficulty is not in the mathematics. The mathematics of quantum mechanics looks more intimidating than it really is.

To see how it works, let us have a look at how physicists write wave-functions. The wave-function, to remind you, is what we use in quantum mechanics to describe everything. There’s a wave-function for electrons, a wave-function for atoms, a wave-function for Schrödinger’s cat, and so on.

The wave-function is a vector, just like the ones we learned about in school. In a three-dimensional space, you can think of a vector as an arrow pointing from the origin of the coordinate system to any point. You can choose a particularly convenient basis in that space, typically these are three orthogonal vectors, each with a length of one. These basis vectors can be written as columns of numbers which each have one entry that equals one and all other entries equal to zero. You can then write an arbitrary vector as a sum of those basis vectors with coefficients in front of them, say (2,3,0). These coefficients are just numbers and you can collect them in one column. So far, so good.

Now, the wave-function in quantum mechanics is a vector just like that, except it’s not a vector in the space we see around us, but a vector in an abstract mathematical thing called a Hilbert-space. One of the most important differences between the wave-function and vectors that describe directions in space is that the coefficients in quantum mechanics are not real numbers but complex numbers, so they in general have a non-zero imaginary part. These complex numbers can be “conjugated” which is usually denoted with a star superscript and just means you change the sign of the imaginary part.
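In code, with made-up coefficients, that conjugation looks like this:

```python
import numpy as np

# A wave-function with two complex coefficients (made-up values):
psi = np.array([1 + 2j, 3 - 1j])

# Complex conjugation flips the sign of the imaginary parts:
print(psi.conj())   # [1.-2.j  3.+1.j]
```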

So the complex numbers make quantum mechanics different from your school math. But the biggest difference is really just the notation. In quantum mechanics, we do not write vectors with arrows. Instead we write them with these funny brackets.

Why? Well, for one because it’s convention. But it’s also a convenient way to keep track of whether a vector is a row or a column vector. The ones we talked about so far are column-vectors. If you have a row-vector instead, you draw the bracket on the other side. You have to watch out here because in quantum mechanics, if you convert a row vector to a column vector, you also have to take the complex conjugate of the coefficients.

This notation was the idea of Paul Dirac and is called the bra-ket notation. The left side, the row vector, is the “bra” and the right side, the column vector, is the “ket”.

You can use this notation for example to write a scalar product conveniently as a “bra-ket”. The scalar product between two vectors is the sum over the products of the coefficients. Again, don’t forget that the bra-vector has complex conjugates on the coefficients.
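As a small sketch in numpy, where np.vdot conveniently conjugates its first argument for you, just as the bra requires:

```python
import numpy as np

ket_phi = np.array([1 + 1j, 0.5j])   # |phi>, made-up coefficients
ket_chi = np.array([2 - 1j, 1.0])    # |chi>, made-up coefficients

# The bra <chi| has the complex-conjugated coefficients:
bra_chi = ket_chi.conj()

# The scalar product <chi|phi> is the sum over products of coefficients:
print(np.sum(bra_chi * ket_phi))
print(np.vdot(ket_chi, ket_phi))     # the same thing in one call
```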

Now, in quantum mechanics, all the vectors describe probabilities. And usually you choose the basis in your space so that the basis vectors correspond to possible measurement outcomes. The probability of a particular measurement outcome is then the absolute square of the scalar product with the basis-vector that corresponds to the outcome. Since the basis vectors are those which have only zero entries except for one entry which is equal to one, the scalar product of a wave-function with a basis vector is just the coefficient that corresponds to the one non-zero entry.

And the probability is then the absolute square of that coefficient. This prescription for obtaining probabilities from the wave-function is known as “Born’s rule”, named after Max Born. And we know that the probability to get any measurement outcome is equal to one, which means that the sum over the squared scalar products with all basis vectors has to be one. But this is just the length of the vector. So all wave-functions have length one.
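A minimal sketch of Born’s rule, with a made-up wave-function for two measurement outcomes:

```python
import numpy as np

# A wave-function in a basis of two measurement outcomes (made-up values):
psi = np.array([3/5, 4j/5])

# Born's rule: the probability of each outcome is the absolute square
# of the corresponding coefficient.
probs = np.abs(psi)**2
print(probs)         # [0.36 0.64]
print(probs.sum())   # 1.0 -- the wave-function has length one
```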

The scalar product of the wave-function with a basis-vector is also sometimes called a “projection” on that basis-vector. It is called a projection, because it’s the length you get if you project the full wave-function on the direction that corresponds to the basis-vector. Think of it as the vector casting a shadow. You could say in quantum mechanics we only ever observe shadows of the wave-function.

The whole issue with the measurement in quantum mechanics is that once you do a measurement, and you have projected the wave-function onto one of the basis vectors, its length will no longer be equal to 1, because the probability of getting this particular measurement outcome may have been smaller than 1. But! once you have measured the state, it is with probability one in one of the basis states. So then you have to choose the measurement outcome that you actually found and stretch the length of the vector back to 1. This is what is called the “measurement update”.
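The same toy wave-function as above, now with the measurement update:

```python
import numpy as np

psi = np.array([3/5, 4j/5])   # wave-function before the measurement
outcome = 1                   # suppose we found the second outcome

# Project onto the basis vector of that outcome...
projected = np.zeros_like(psi)
projected[outcome] = psi[outcome]
print(np.linalg.norm(projected))    # 0.8 -- no longer of length one

# ...and stretch the length back to one: the measurement update.
updated = projected / np.linalg.norm(projected)
print(updated, np.linalg.norm(updated))
```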

Another thing you can do with these vectors is to multiply one with itself the other way round, so that would be a ket-bra. What you get then is not a single number, as you would get with the scalar product, but a matrix, each element of which is a product of coefficients of the vectors. In quantum mechanics, this thing is called the “density matrix”, and you need it to understand decoherence. We will talk about this some other time, so keep the density matrix in mind.
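And the ket-bra, in the same sketch notation:

```python
import numpy as np

psi = np.array([3/5, 4j/5])

# Ket times bra gives a matrix, each element a product of coefficients,
# with the bra side complex-conjugated. This is the density matrix.
rho = np.outer(psi, psi.conj())
print(rho)
```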

Having said that, as much as I love doing these videos, if you really want to understand quantum mechanics, you have to do some mathematical exercises on your own. A great place to do this is Brilliant who have been sponsoring this video. Brilliant offers courses with exercise sets on a large variety of topics in science and mathematics. It’s exactly what you need to move from passively watching videos to actively dealing with the subject. The courses on Brilliant that will give you the required background for this video are those on linear algebra and its applications: What is a vector, what is a matrix, what is an eigenvalue, what is a linear transformation? That’s the key to understanding quantum mechanics.

To support this channel and learn more about Brilliant, go to brilliant.org/Sabine, and sign up for free. The first two-hundred people who go to that link will get twenty percent off the annual Premium subscription.

You may think I made it look too easy, but it’s true: Quantum mechanics is pretty much just linear algebra. What makes it difficult is not the mathematics. What makes it difficult is how to interpret the mathematics. The trouble is, you cannot directly observe the wave-function. But you cannot just get rid of it either; you need it to calculate probabilities. But the measurement update has to be done instantaneously and therefore it does not seem to be a physical process. So is the wave-function real? Or is it not? Physicists have debated this back and forth for more than 100 years.

Saturday, July 11, 2020

Do we need a Theory of Everything?

I get constantly asked if I could please comment on other people’s theories of everything. That could be Garrett Lisi’s E8 theory or Eric Weinstein’s geometric unity or Stephen Wolfram’s idea that the universe is but a big graph, and so on. Good, then. Let me tell you what I think about this. But I’m afraid it may not be what you wanted to hear.


Before we start, let me remind you what physicists mean by a “Theory of Everything”. For all we currently know, the universe and everything in it is held together by four fundamental interactions. That’s the electromagnetic force, the strong and the weak nuclear force, and gravity. All other forces that you are familiar with, say, the van der Waals force, or muscle force, or the force that’s pulling you down an infinite sequence of links on Wikipedia, these are all non-fundamental forces that derive from the four fundamental interactions. At least in principle.

Now, three of the fundamental interactions, the electromagnetic and the strong and weak nuclear force, are of the same type. They are collected in what is known as the standard model of particle physics. The three forces in the standard model are described by quantum field theories which means, in a nutshell, that all particles obey the principles of quantum mechanics, like the uncertainty principle, and they can be entangled and so on. Gravity, however, is described by Einstein’s theory of General Relativity and does not know anything about quantum mechanics, so it stands apart from the other three forces. That’s a problem because we know that all the quantum particles in the standard model have a gravitational pull. But we do not know how this works. We just do not have a theory to describe how elementary particles gravitate. For this, we would need a theory for the quantum behavior of gravity, a theory of “quantum gravity,” as it’s called.

We need a theory of quantum gravity because general relativity and the standard model are mathematically incompatible. So far, this is a purely theoretical problem because with the experiments that we can currently do, we do not need to use quantum gravity. In all presently possible experiments, we either measure quantum effects, but then the particle masses are so small that we cannot measure their gravitational pull; or we can observe the gravitational pull of some objects, but then they do not have quantum behavior. So, at the moment we do not need quantum gravity to actually describe any observation. However, this will hopefully change in the coming decades. I talked about this in an earlier video.

Besides the missing theory of quantum gravity, there are various other issues that physicists have with the standard model. Most notably it’s that, while the three forces in the standard model are all of the same type, they are also all different in that each of them belongs to a different type of symmetry. Physicists would much rather have all these forces unified to one, which means that they would all come from the same mathematical structure.

In many cases that structure is one big symmetry group. Since we do not observe this, the idea is that the big symmetry would manifest itself only at energies so high that we have not yet been able to test them. At the energies that we have tested so far, the symmetry would have to be broken, which gives rise to the standard model. This unification of the forces of the standard model is called a “grand unification” or a “grand unified theory”, GUT for short.

What physicists mean by a theory of everything is then a theory from which all the four fundamental interactions derive. This means it is both a grand unified theory and a theory of quantum gravity.

This sounds like a nice idea, yes. But. There is no reason that nature should actually be described by a theory of everything. While we *do need a theory of quantum gravity to avoid logical inconsistency in the laws of nature, the forces in the standard model do not have to be unified, and they do not have to be unified with gravity. It would be pretty, yes, but it’s unnecessary. The standard model works just fine without unification.

So this whole idea of a theory of everything is based on an unscientific premise. Some people would like the laws of nature to be pretty in a very specific way. They want it to be simple, they want it to be symmetric, they want it to be natural, and here I have to warn you that “natural” is a technical term. So they have an idea of what they want to be true. Then they stumble over some piece of mathematics that strikes them as particularly pretty and they become convinced that certainly it must play a role for the laws of nature. In brief, they invent a theory for what they think the universe *should be like.

This is simply not a good strategy to develop scientific theories, and no, it is most certainly not standard methodology. Indeed, the opposite is the case. Relying on beauty in theory development has historically worked badly. In physics, breakthroughs in theory-development have come instead from the resolution of mathematical inconsistencies. I have literally written a book about how problematic it is that researchers in the foundations of physics insist on using methods of theory development that we have no reason to think should work, and that as a matter of fact do not work.

The search for a theory of everything and for grand unification began in the 1980s. To the extent that the theories which physicists have come up with were falsifiable they have been falsified. Nature clearly doesn’t give a damn what physicists think is pretty math.

Having said that, what do you think I think about Lisi’s and Weinstein’s and Wolfram’s attempts at a theory of everything? Well, scientific history teaches us that their method of guessing some pretty piece of math and hoping it’s useful for something is extremely unpromising. It is not impossible that it works, but it is almost certainly a waste of time. And I have looked closely enough at Lisi’s and Weinstein’s and Wolfram’s and many other people’s theories of everything to be able to tell you that they have not convincingly solved any actual problem in the existing fundamental theories. And I’m not interested in looking any closer, because I don’t want to waste my time either.

But I don’t like commenting on individual people’s theories of everything. I don’t like it because it strikes me as deeply unfair. These are mostly researchers working alone or in small groups. They are very dedicated to their pursuit and they work incredibly hard on it. They’re mostly not paid by tax money so it’s really their private thing and who am I to judge them? Also, many of you evidently find it entertaining to have geniuses with their theories of everything around. That’s all fine with me.

I do get a problem, though, if theories that have turned out to be useless grow into large, tax-funded research programs that employ thousands of people, as has happened with string theory and supersymmetry and grand unification. That creates a problem because it eats up resources and can entirely stall progress, which is what has happened in the foundations of physics.

People like Lisi and Weinstein and Wolfram at least remind us that the big programs are not the only thing you can do with math. So, odd as it sounds, while I don’t think their specific research avenue is any more promising than string theory, I’m glad they do it anyway. Indeed, physics could use more people like them who have the courage to go their own way, no matter how difficult.

The brief summary is that if you hear something about a newly proposed theory of everything, do not ask whether the math is right. Because many of the people who work on this are really smart and they know their math and it’s probably right. The question you, and all science journalists who report on such things, should ask is what reason do we have to think that this particular piece of math has anything to do with reality. “Because it’s pretty” is not a scientific answer. And I have never seen a theory of everything that gave a satisfactory scientific answer to this question.

Wednesday, July 08, 2020

[Guest Post] Update of Converseful comment feature now allows for more conversations

[This post is written by Ben Alderoty from Conversful.]

Conversful launched on BackRe(action) at the end of April. For those that aren’t familiar with our name, Conversful is the app in the bottom corner that allows you to have conversations with other readers. It is still only supported on computers, so if you’re on a phone or tablet right now, come back another time to see what I’m talking about. Based on the feedback we’ve received from many of you, we have been working on some big changes to Conversful and are excited to announce these changes are now live for everyone on BackRe(action).



To participate in Conversful you will now need to create an account. You’ll be able to see a preview of the conversations happening without an account, but to join one or start your own you’ll need to create one. Accounts make it easier to find the right people you want to talk to and maintain those conversations over time.


Conversations on Conversful are still 1-on-1 and start with questions (formerly topics). Questions can pertain to a specific article you might be reading or a more general physics question you are pondering. Ask specific questions as those will elicit the best replies. When you post a question, it will remain open for a week by default and can receive multiple responses. These responses will come in the form of multiple threads within your Conversations tab.

Conversful now works both in real-time and asynchronously. If you see another reader with a green circle next to their name that means they are currently online. We’ll notify you via email if you receive a new message on Conversful when you’re offline. This way you’ll know when to get back on BackRe(action) to continue the conversation. You can control this setting in your profile tab by clicking the avatar in the top left corner of the app.


And that’s it! The app is still pretty simple as we wanted to make it as easy as possible to start conversations and enjoy the ones you’re already in. We hope that the 1-on-1, private nature of Conversful makes it easy to say something if you’ve ever wanted to, but didn’t want to do so publicly. For all of the active commenters out there, keep commenting! We hope you can use us to continue your conversations and stray into off-topic discussions that don’t pertain to a specific post.

As always, please send feedback and suggestions to ben@conversful.com. For those that have given us feedback thus far, thank you so much!

Saturday, July 04, 2020

What is Quantum Metrology?

Metrology is one of the most undervalued areas of science. And no, I am not just mispronouncing meteorology, I actually mean metrology. Think “meter” not “meteor”. Metrology is the science of measurement. Meteorology is about clouds and things like that. And the study of meteors, in case you wonder, is called meteoritics.



Metrology matters because you can’t do science without measuring things. In metrology, scientists deal with problems such as how to define conventions for units, how to do this most accurately, how to reproduce measurements most reliably, and so on. Metrology sounds boring, but it is super-important to move from basic research to commercial application.

Just consider you are trying to build a house. If you cannot measure distances and angles, it does not matter how good your mathematics is, that house is not going to come out right. And the smaller the object that you want to build, the more precisely you must be able to measure. It’s as simple as that. You can’t reliably produce something if you don’t know what you are doing.

But if you start dealing with very small things, then quantum mechanics will become important. Yes, quantum mechanics is in principle a theory that applies to objects of all sizes. But in practice its effects are negligibly tiny for large things. However, from the size of molecules downwards, quantum effects are essential to understand what is going on. So what then is quantum metrology? Quantum metrology uses quantum effects to make more precise measurements.

It may sound somewhat weird that quantum mechanics can help you to measure things more precisely. Because we all know that quantum mechanics is… uncertain, right? So how do these two things fit together, quantum uncertainty and more precise measurements? Well, quantum uncertainty is not something that applies to any measurement. It only sets a limit to the entirety of information you can obtain about a system.

For example, there is nothing in quantum mechanics that prevents you from measuring the momentum of an electron precisely. But if you do that, you cannot also measure its position precisely. That’s what the uncertainty principle tells you. So, you have to decide what you want to measure, but the uncertainty principle is not an obstacle to measuring precisely per se.

Now, the magic that allows you to measure things more precisely with quantum effects is the same that gives quantum computers an edge over ordinary computers. It’s that quantum particles can be correlated in ways that non-quantum particles can’t. This quantum-typical type of correlation is called entanglement. There are many different ways to entangle particles, so entanglement lets you encode a lot of information with few particles. In a quantum computer, you want to use this to perform a lot of operations quickly. For quantum metrology, more information in a small space means a higher sensitivity of your measurement.

Quantum computers exist already, but the ones which exist are far from being useful. That’s because you need a large number of entangled particles, as many as a million, to not only make calculations, but to make calculations that are actually faster than you could do with a conventional computer. I explained the issue with quantum computers in an earlier video.

But in contrast to quantum computers, quantum metrology does not require large numbers of entangled particles.

A simple example of how quantum behavior can aid measurement comes from medicine. Positron emission tomography, or PET for short, is an imaging method that relies on, yes, entangled particles. For PET, one uses a short-lived radioactive substance, called a “tracer”, that is injected into whatever body part you want to investigate. A typical substance that’s being used for this is carbon-11, which has a half-life of about 20 minutes.

The radioactive substance undergoes a beta-decay and emits a positron. The positron annihilates with one of the electrons in the neighborhood of the decay site, which creates, here it comes, an entangled pair of photons. They fly off not in one particular direction, but in two opposite directions. So, if you measure two photons that fit together, you can calculate where they were emitted. And from this you can reconstruct the distribution of the radioactive substance which “traces” the tissue of interest.
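To make the geometry concrete, here is a toy sketch of one such coincidence in two dimensions; the detector ring, the decay site, and the absence of scattering and noise are all idealizations for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
ring_radius = 1.0               # detector ring radius, arbitrary units
source = np.array([0.3, 0.1])   # true decay site, made-up position

def coincidence(source):
    """One annihilation: two photons fly off in opposite directions
    and hit the detector ring; return both hit positions."""
    angle = rng.uniform(0, 2 * np.pi)
    u = np.array([np.cos(angle), np.sin(angle)])
    hits = []
    for sign in (+1, -1):
        # Solve |source + t * sign * u| = ring_radius for t > 0:
        b = np.dot(source, sign * u)
        t = -b + np.sqrt(b**2 + ring_radius**2 - np.dot(source, source))
        hits.append(source + t * sign * u)
    return hits

# Each measured pair defines a line that passes through the decay site;
# collecting many such lines reveals where the tracer sits. Check one:
p1, p2 = coincidence(source)
u = (p2 - p1) / np.linalg.norm(p2 - p1)
offset = (source - p1) - np.dot(source - p1, u) * u
print(np.linalg.norm(offset))   # ~0: the source lies on the line
```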

Positron emission tomography has been used since the 1950s and it’s a simple example for how quantum effects can aid measurements. But the general theoretical basis of quantum metrology was only laid in the 1980s. And then for a long time not much happened because it’s really hard to control quantum effects without getting screwed up by noise. In that, quantum metrology faced the same problem as quantum computing.

But in the past two decades, physicists have made rapid progress in designing and controlling quantum states, and, with that, quantum metrology has become one of the most promising avenues to new technology.

In 2009, for example, entangled photons were used to improve the resolution of an imaging method called optical coherence tomography. The way this works is that you create a pair of entangled photons and let them travel in two different directions. One of the photons enters a sample that you want to study, the other does not. Then you recombine the photons, which tells you where the one photon scattered in the sample, which you can then use to reconstruct how the sample is made up.

You can do that with normal light, but the quantum correlations let you measure more precisely. And it’s not only about the precision. These quantum measurements require only tiny numbers of particles, so they are minimally disruptive and therefore particularly well suited to the study of biological systems, for example, the eye, for which you don’t exactly want to use a laser beam.

Another example of quantum metrology is the precise measurement of magnetic fields. You can measure a magnetic field by taking a cloud of atoms, splitting it in two, letting one part go through the magnetic field, and then recombining the atoms. The magnetic field will shift the phases of the atoms that passed through it – because particles are also waves – and you can measure how much the phases were shifted, which tells you what the magnetic field was. Aaaand, if you entangle those atoms, you can improve the sensitivity to the magnetic field. This is called quantum-enhanced magnetometry.
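As an idealized sketch of why entanglement helps: in a so-called NOON-type entangled state, all N atoms pick up the phase together, so the interference fringes oscillate N times faster and the same small field shifts the signal N times more. The numbers below are made up for illustration:

```python
import numpy as np

phi = 0.01   # small phase shift caused by the field, made-up value
N = 100      # number of atoms

# Unentangled atoms: each interferes with itself, fringe ~ cos(phi).
p_single = 0.5 * (1 + np.cos(phi))

# NOON-type entangled state: the phase is picked up N times, ~ cos(N*phi).
p_entangled = 0.5 * (1 + np.cos(N * phi))

print(p_single)      # about 0.99998 -- barely distinguishable from no field
print(p_entangled)   # about 0.77    -- clearly shifted
```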

Quantum metrology has also been used to improve the sensitivity of the LIGO gravitational wave interferometer. LIGO uses laser beams to measure periodic distortions of space and time. Laser light itself is already remarkable, but one can improve on it by bringing the laser light into a particular quantum state, called a “squeezed state,” that is less sensitive to noise and therefore allows more precise measurements.

Now, clearly these are not technologies you will have a switch for on your phone any time soon. But they are technologies with practical uses and they are technologies that we already know do really work. I don’t usually give investment advice, but if I was rich, I would put my money into quantum metrology, not into quantum computing.

Sunday, June 28, 2020

Is COVID there before you measure it?

Today I want to talk about a peculiar aspect of quantum measurements that you may have heard of. It’s that the measurement does not merely reveal a property that previously existed, but that the act of measuring makes that property real. So when Donald Trump claims that not testing people for COVID means there will be fewer cases, rather than just fewer cases you know about, then that demonstrates his deep knowledge of quantum mechanics.

This special role of the measurement process is an aspect of quantum mechanics that Einstein worried about profoundly. He thought it could not possibly be correct. He reportedly summed up the question by asking whether the moon is there when nobody looks, implying that, certainly, the question is absurd. Common sense says “yes” because what does the moon care if someone looks at it. But quantum mechanics says “no”.



In quantum mechanics, the act of observation has special relevance. As long as you don’t look, you don’t know if something is there or just exactly what its properties are. Quantum mechanics, therefore, requires us to rethink what we even mean by “reality”. And that’s why they say it’s strange and weird and you can’t understand it and so on.

Now, Einstein’s remark about the moon is frequently quoted but it’s somewhat misleading because there are other ways of telling whether the moon is there that do not require looking at it in the sense of actually seeing it with our own eyes. We know that the moon is there, for example, because its gravitational pull causes tides. So the word “looking” actually refers to any act of observation.

You could say, but well, we know that quantum mechanics is a theory that is relevant only for small things, so it does not apply to viruses and certainly not to the moon. But well, it’s not so simple. Think of Schrödinger’s cat.

Erwin Schrödinger’s thought experiment with the cat demonstrates that quantum effects for single particles can have macroscopic consequences. Schrödinger said, let us take a single atom which can undergo nuclear decay. Nuclear decay is a real quantum effect. You cannot predict just exactly when it happens, you can only say it happens with a certain probability in a certain amount of time. Before you measure the decay, according to quantum mechanics, the atom is both decayed and not decayed. Physicists say, it is in a “superposition” of these states. Please watch my earlier video for more details about what superpositions are.
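The statistics of the decay fits in a few lines of code: the probability that the atom has decayed by time t is 1 − exp(−λt), with λ = ln 2 divided by the half-life. The half-life below is a made-up example value:

```python
import numpy as np

t_half = 60.0              # half-life in minutes, made-up example value
lam = np.log(2) / t_half   # decay constant

def p_decayed(t):
    """Probability that the atom has decayed within time t (minutes)."""
    return 1 - np.exp(-lam * t)

for t in [10, 60, 180]:
    print(f"after {t:3d} min: {p_decayed(t):.2f}")   # 0.11, 0.50, 0.88
```

When exactly a given atom decays is not predictable, only these probabilities are; that is what the superposition reflects.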

But then, Schrödinger says, you can take the information from the nuclear decay and amplify it. He suggested that the nuclear decay could release a toxic substance. So if you put the cat in a box with the toxin device triggered by nuclear decay, is the cat alive or is it dead if you have not opened the box?

Well, it seems that the cat is somehow both, dead and alive, just like the atom is both decayed and not decayed. And, oddly enough, getting an answer to the question seems to depend on the very human act of making an observation. It is for this reason that people used to think consciousness has something to do with quantum mechanics.

This was something which confused physicists a lot in the early days of quantum mechanics, but this confusion has luckily been resolved, at least almost. First, we now understand that it is irrelevant whether a person does the observation in quantum mechanics. It could as well be an apparatus. So, consciousness is out of the picture. And we also understand that it is really not the observation that is the relevant part but the measurement itself. Things happen when the particle hits the detector, not when the detector spits out a number.

But that brings up the question: what is a measurement in quantum mechanics? A measurement is the very act of amplifying the subtle quantum signal and creating a definite outcome. It happens because when the particle hits the detector, it has to interact with a lot of other particles. Once this happens, the quantum effects are destroyed.

And here is the important thing. A measurement is not the only way that the quantum system can interact with many particles. Indeed, most particles interact with other particles all the time, just because there is air and radiation around us and there are constantly particles banging into each other. And this also destroys quantum effects, regardless of whether anyone actually measures any of it.

This process in which many particles lose their quantum effects is called “decoherence” because quantum effects come from the “coherence” of states in a superposition. Coherence just means the states which are in a superposition are all alike. But if the superposition interacts with a lot of other particles, this alikeness is screwed up, and with that the quantum effects disappear.

If you look at the numbers you find that decoherence happens enormously quickly, and it happens more quickly the larger the system and the more it interacts. A few particles in vacuum can maintain their quantum effects for a long time. A cat in a box, however, decoheres so quickly there isn’t even a name for that tiny fraction of a second. For all practical purposes, therefore, you can say that cats do not exist in quantum superpositions. They are either dead or alive. In Schrödinger’s thought experiment, the decoherence actually happens already when the toxin is released, so the superposition is never passed on to the cat to begin with.

Now what’s with viruses? Viruses are not actually that large. In fact, some simple viruses have been brought into quantum superpositions. But these quantum superpositions disappear incredibly quickly. And again, that’s due to decoherence. That’s what makes these experiments so difficult. If it was easy to keep large systems in quantum states, we would already be using quantum computers!

So, to summarize. The moon is there regardless of whether you look, and Schrödinger’s cat is either dead or alive regardless of whether you open the box, because the moon and the cat are both large objects that constantly interact with many other particles. And people either have a virus or they don’t, regardless of whether you actually test them.

Having said that, quantum mechanics has left us with a problem that so far has not been resolved. The problem is that decoherence explains why quantum effects go away in a measurement. But it does not explain how to make sense of the probabilities in quantum mechanics for single particles. Because the probabilities seem to suddenly change once you measure the particle. Before measurement, quantum mechanics may have said it would be in the left detector with 50% probability. After measurement, the probability is either 0% or 100%. And decoherence does not explain how this happens. This is known as the measurement problem in quantum mechanics.

Monday, June 22, 2020

Guest Post: “Who Needs a Giant New Collider?” by Alessandro Strumia

Size of 100km tunnel for CERN's planned new collider, the FCC. [Image:CERN]

For the first time in the history of particle physics the scientific program at a collider is mostly in the past light cone and there is no new collider in view. I would like to share my thoughts about this exceptional situation, knowing that many colleagues have negative opinions of those of us who publicly discuss problems, such as Peter Woit, Sabine Hossenfelder and even Adam Falkowski.

To understand present problems, let’s start from the stone age. Something that happens only once in history happened about a century ago: physicists understood what matter is. During this golden period, progress in fundamental physics had huge practical relevance: new discoveries made people richer, countries stronger, and could be used for new experiments that gave new discoveries.

This virtuous cycle attracted the best people and made it possible to recognise deep, beautiful principles like relativity, quantum mechanics, gauge invariance. After 1945 nuclear physics got huge funds that allowed exploring energies higher than those of ordinary matter by building bigger experiments.

This led to discoveries of new forms of matter, but at energies so high that the new particles had few practical applications, not even for building new experiments. What practical use can a particle have if it decays in a zeptosecond? As a result, colliders still use ordinary matter and got bigger because physics demands that the radius of a circular collider grows linearly with energy: R ≈ (4π/α)³ (energy)/(electron mass)² in natural units. This equation means that HEP (High Energy Physics) can explore energies much above the electron mass only by becoming HEP (High Expenses Physics). Some people get impressed by big stuff, but it got bigger because we could not make it better.

For decades bigger and bigger colliders got funded thanks to past prestige, but prestige fades away while costs grew until hitting the limits of human resources and time-scales. European physicists saw this problem 60 years ago and pooled national resources to form CERN. This choice paid off: a few decades after WW2 Europe was again the center of high-energy physics. But energy and costs kept growing, and the number of research institutions that push the energy frontier declined as 6, 5, 4, 3, 2, 1.

How CERN began.
Some institutions gave up, others tried. Around 2000 German physicists proposed a new collider, but the answer was nein. Around 2010 Americans tried, but the answer was no. Next the Japanese tried, but the answer was “we express interest”, which in Japanese probably means no. Europeans waited, hoping that new technology would be developed while the Large Hadron Collider discovered new physics and motivated a dedicated new collider to be financed once the economic crisis was over. Instead of new technology and new physics we got a new virus and a possible new crisis.

The responsibility of being the last dinosaur does not help survival. Innovative colliders would require taking risks, but unexplored energies got so high that the cost of a failure is no longer affordable. But this leads to stagnation. CERN has now chosen a non-innovative strategy based on reliability. First, buy time by running the LHC ad nauseam. Second, be, or appear, so nice and reliable that politics might provide the needed ≈30 billion. Third, build ee and pp circular colliders again, but bigger: 100 km instead of 27.

As a theorist I would enjoy a 100 TeV pp collider for my 100th birthday.

But would it be good for society? No discovery is guaranteed, and anyhow recent discoveries at colliders had no direct practical applications. Despite this, giving resources to the best scientists often leads to indirect innovations. The problem is that building a 100 km phonograph does not seem like a project that can give a technology leap towards a small gadget with the same memory. Rather, collider physics got so gigantic that when somebody has a new idea, the typical answer is no longer “let’s do it” but “let’s discuss it at the next committee”. Committees are filled with people who like discussing, while creative minds seem more attracted by different environments. I see many smart physicists voting with their feet.

But would it be good for physics? So far physics is a serious science. This happened because physics had objective data and no school or center ever dominated physics. But now getting more high-energy data requires concentrating most resources in one center that struggles for its survival. Putting all eggs in one basket seems to me a danger. Maybe I am too sensitive because some time ago CERN removed sociological data that I presented (now accepted for publication) and warned me that its code of conduct restricts free speech if “prejudicial to the Organization”. Happily I am no longer subject to it, and I say what I think.

Extract from rules that CERN claims Strumia violated.


Even if CERN gets the billions, its 100 TeV pp collider is too far away in time: high-energy physics will fade away earlier. Good physicists cannot wait decades fitting Higgs couplings and pretending it’s interesting enough. The only hope is that China decides that their similar collider project is worthwhile and builds it faster and cheaper. This would force CERN to learn how to make a more innovative muon collider in the LHC tunnel or disappear.

Sunday, June 21, 2020

How to tell science from pseudoscience

Is the earth flat? Is 5G a mind-control experiment by the Russian government? What about the idea that COVID was engineered by the vaccine industry? How can we tell science from pseudoscience? This is what we will talk about today.

Now, how to tell science from pseudoscience is a topic with a long history that lots of intelligent people have written lots of intelligent things about. But this is YouTube. So instead of telling you what everybody else has said, I’ll just tell you what I think.

I think the task of science is to explain observations. So if you want to know whether something is science you need (a) observations and (b) you need to know what it means to explain something in scientific terms. What scientists mean by “explanation” is that they have a model, which is a simplified description of the real world, and this model allows them to make statements about observations that agree with measurements and – here is the important bit – the model is simpler than just a collection of all available data. Usually that is because the model captures certain patterns in the data, and any kind of pattern is a simplification. If we have such a model, we say it “explains” the data. Or at least part of it.

One of the best historical examples for this is astronomy. Astronomy has been all about finding patterns in the motions of celestial objects. And once you know the patterns, they will, quite literally, connect the dots. Visually speaking, a scientific model gives you a curve that connects data points.

This is arguably over-simplified, but it is an instructive visualization because it tells you when a model stops being scientific. This happens if the model has so much freedom that it can fit any data, because then the model does not explain anything. You would be better off just collecting the data. This is also known as “overfitting”. If you have a model that has more free parameters as input than data to explain, you may as well not bother with that model. It’s not scientific.

There is something else one can learn from this simple image, which is that making a model more complicated will generally allow a better fit to the data. So if one asks what is the best explanation of a set of data, one has to ask when adding another parameter does not justify the slightly better fit to the data you’d get from it. For our purposes it does not matter just exactly how to calculate this, so let me only say that there are statistical methods to evaluate it. This means we can quantify how well a model explains data.
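As a toy illustration of this trade-off, here is a sketch that fits polynomials of increasing degree to synthetic data and checks them against held-out points; all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "observations": a simple underlying pattern plus noise.
x = np.linspace(0, 1, 20)
y = 2 * x + rng.normal(0, 0.1, size=x.size)

# Hold out every other point to test how well each model generalizes.
train, test = slice(0, None, 2), slice(1, None, 2)

for degree in [1, 9]:
    coeffs = np.polyfit(x[train], y[train], degree)
    err_train = np.mean((y[train] - np.polyval(coeffs, x[train]))**2)
    err_test = np.mean((y[test] - np.polyval(coeffs, x[test]))**2)
    print(f"degree {degree}: error on fitted points {err_train:.4f}, "
          f"on held-out points {err_test:.4f}")
```

The degree-9 polynomial runs through every point it was fitted to, but it does worse on the points it has not seen: it fits the noise, not the pattern.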

Now, all of what I just said was very quantitative and not in all disciplines of science are models quantitative, but the general point holds. If you have a model that requires many assumptions to explain few observations, and if you hold on to that model even though there is a simpler explanation, then that is unscientific. And, needless to say, if you have a model that does not explain any observation, then that is also not scientific.

Typical cases of pseudoscience are conspiracy theories. Whether that is the idea that the earth is flat but NASA has been covering up the evidence since at least the days of Ptolemy, or that 5G is a plan by the government to mind-control you using secret radiation, or that COVID was engineered by the vaccine industry for profit. All these ideas have in common that they are contrived.

You have to make a lot of assumptions for these ideas to agree with reality, assumptions like somehow it’s been possible to consistently fake all the data and images of a round earth and brainwash every single airline pilot, or it is possible to control other people’s minds and yet somehow that hasn’t prevented you from figuring out that minds are being controlled. These contrived assumptions are the equivalent of overfitting. That’s what makes these conspiracy theories unscientific. The scientific explanations are the simple ones, the ones that explain lots of observations with few assumptions. The earth is round. 5G is a wireless network. Bats carry many coronaviruses, these have jumped over to humans before, and that’s most likely where COVID also came from.

Let us look at another popular example, Darwinian evolution. Darwinian evolution is a good scientific theory because it “connects the dots”, basically by telling you how certain organisms evolved from each other. I think that in principle it should be possible to quantify this fit to data, but arguably no one has done that. Creationism, on the other hand, simply posits that Earth was created with everything in place. That means Creationism puts in as much information as you get out of it. It therefore does not explain anything. This does not mean it’s wrong. But it means it is unscientific.

Another way to tell pseudoscience from science is that a lot of pseudoscientists like to brag about making predictions. But just because you have a model that makes predictions does not mean it’s scientific. And the opposite is also true: just because a model does not make predictions does not mean it is not scientific.

This is because it does not take much to make a prediction. I can predict, for example, that one of your relatives will fall ill in the coming week. And just coincidentally, this will be correct for some of you. Are you impressed? Probably not. Why? Because to demonstrate that this prediction was scientific, I’d have to show it was better than a random guess. For this I’d have to tell you what model I used and what the assumptions were. But of course I didn’t have a model, I just made a guess. And that doesn’t explain anything, so it’s not scientific.

And a model that does not make predictions can still be scientific if it explains a lot of already existing data. Pandemic models are actually a good example of scientific models that do not make predictions. It is basically impossible to make predictions for the spread of infectious diseases because that spread depends on policy decisions which themselves can’t be predicted.

So with pandemic models we really make “projections” or we can look at certain “scenarios” that are if-then cases. If we do not cancel large events, then the spread will likely look like this. If we do cancel them, the spread will more likely look like that. It’s not a prediction because we cannot predict whether large events will be canceled. But that does not make these models unscientific. They are scientific because they accurately describe the spread of epidemics on record. These are simple explanations that fit a lot of data. And that’s why we use them in the first place.
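To make the if-then character concrete, here is a minimal sketch of such a scenario comparison, using a plain SIR model. All numbers are invented for illustration; real pandemic models are far more detailed.

```python
# Minimal SIR sketch of an if-then scenario: the same model, run with two
# assumed contact rates. All parameters are made up for illustration.
def sir_peak(beta, gamma=0.1, days=400, n=1_000_000, i0=10):
    s, i, r = n - i0, i0, 0
    peak = i0
    for _ in range(days):  # simple one-day time steps
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

print(f"events go ahead (beta=0.30): peak ~ {sir_peak(0.30):,.0f} infected")
print(f"events canceled (beta=0.15): peak ~ {sir_peak(0.15):,.0f} infected")
# Neither run is a prediction; each one says "IF contacts look like this,
# THEN the spread looks like that."
```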

The same is the case for climate models. The simplest explanation for our observations, the one that fits the data with the fewest assumptions, is that climate change is due to increasing carbon dioxide levels and caused by humans. That’s what the science says.

So if you want to know whether a model is scientific, ask how much data it can correctly reproduce and how many assumptions were required for this.

Having said that, it can be difficult to tell science from pseudoscience if an idea has not yet been fully developed and you are constantly told it’s promising, it’s promising, but no one can ever actually show the model fits to data because, they say, they’re not done with the research. We see this in the foundations of physics most prominently with string theory. String theory, if it worked as advertised, could be good science. But string theorists never seem to get to the point where the idea would actually be useful.

In this case, then, the question is really a different one, namely, how much time and money should you throw at a certain research direction to even find out whether it’s science or pseudoscience. And that, ultimately, is a decision that falls to those who fund that research.

Saturday, June 13, 2020

How to search for alien life

Yes, I believe there is life on other planets, intelligent life even. I also think that the search for life elsewhere in the universe is THE most exciting scientific exploration ever. Why then don’t I work on it, you ask? Well, I think I do, kind of. I’ll get to this. But first let me tell you how scientists search for life that’s not on Earth, or “extraterrestrial”, as they say.


When I was a student in the 1990s, talking about extraterrestrial life was not considered serious science. At the time it was not even widely accepted that solar systems with planets like earth are a common occurrence in the universe. But in the past 10 years the mood among scientists has shifted dramatically, and that’s largely thanks to the Kepler mission.

The Kepler satellite was a NASA mission that looked for planets which orbit around stars in our galactic neighborhood. It has observed about 150,000 stars in a small patch of the sky, closely and for long periods of time. From these observations you can tell whether a star dims periodically because a planet passes by in the line of sight. If you are lucky, you can also tell how big the planet is, how close it is to the star, and how fast it orbits, from which you can then extract its mass.
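For the curious, here is a rough sketch of the two textbook relations behind this: the transit depth gives the planet’s size relative to the star, and Kepler’s third law converts the orbital period into a distance from the star. The input numbers below are illustrative, not actual Kepler-mission data.

```python
import numpy as np

# Back-of-the-envelope transit analysis; the input numbers are illustrative.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
R_sun = 6.957e8     # m
R_earth = 6.371e6   # m
AU = 1.496e11       # m

depth = 1e-4             # fractional dimming during transit (100 ppm)
period = 365.25 * 86400  # orbital period in seconds (one year)

# Transit depth = (R_planet / R_star)^2  ->  planet radius.
r_planet = R_sun * np.sqrt(depth)

# Kepler's third law: a^3 = G M P^2 / (4 pi^2)  ->  orbital distance.
a = (G * M_sun * period**2 / (4 * np.pi**2)) ** (1 / 3)

print(f"planet radius  ~ {r_planet / R_earth:.2f} Earth radii")
print(f"orbital radius ~ {a / AU:.2f} AU")
```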

Kepler has found evidence for more than 4000 exoplanets, as they are called. Big ones and small ones, hot ones and cold ones, and also a few that are not too different from our own planet. Kepler is no longer operating, but NASA has followed up with a new mission, TESS, and several more missions to look for exoplanets are coming up soon: there is another NASA mission, WFIRST, there is the CHEOPS mission of the ESA, and the James Webb Space Telescope, which is a joint mission of NASA, the ESA, and the Canadian Space Agency.

So, we now know that other earth-like planets are out there. The next thing that scientists would like to know is whether the conditions on any of these planets are similar to the conditions on Earth. This is a very human-centered way of thinking about life, of course, but at least so far life on this planet is the only one we are sure exists, so it makes sense to ask if other places are similar. Ideally, scientists would like to know whether the atmosphere of the earth-like exoplanets contains oxygen and methane, or maybe traces of chlorophyll.

They do already have a few measurements of atmospheres of exoplanets, but these are mostly of large and hot planets that orbit closely around their mother star, because in this case the atmosphere is easier to measure. The way you can measure what’s in the atmosphere is that you investigate the spectral composition of light that either passes through the atmosphere or that is emitted by the planet or reflected off its surface. For this too, there are more satellite missions planned, for example the ESA mission ARIEL.

Ok, you may say, but this will in the best case give us an indication for microbial life, and really you’d rather know if there is intelligent life out there. For this you need an entirely different type of search. Such searches for extraterrestrial intelligence have been conducted for about a century. They have largely relied on analyzing electromagnetic radiation in the radio or microwave range that reaches us from outer space. For one, that’s because this part of the electromagnetic spectrum is fairly easy to measure without going into the upper atmosphere. But it’s also because our own civilization emits in this part of the spectrum. This electromagnetic radiation is then analyzed for any kind of pattern that is unlikely to be of natural, astrophysical origin.

As you already know, no one has found any sign of intelligent life on other planets, except for some false alarms.

The search for intelligent, extraterrestrial life has, sadly enough, always been underfunded, but some people are not giving up their hopes and efforts. There is for example the SETI Institute in California. They have a new plan to look for aliens, which is to distribute 96 cameras on the surface of our planet so that they can look for LASER signals from outer space, 24 hours a day, all over the sky. Like with the search for radio signals, the idea is that LASER light might be a sign of communication or a by-product of other technologies that extraterrestrial civilizations are using. Of those 96 cameras, so far one has been installed. The institute is trying to crowdfund the mission; for more information, check out their website.

A search that has no funding issues is the “Breakthrough Listen” project which is supported by billionaire Yuri Milner. This project has run since 2015 and will run through 2025. It employs two radio telescopes to search for signs of intelligent life. The data that this project has collected so far are publicly available. However, they amount to about 2000 terabytes, so it’s not exactly user-friendly. Milner has another alien project, which is the “Breakthrough Starshot”. Yes, Milner likes “Breakthroughs” and everything he does is Breakthrough Something; he is also the guy who set up the Breakthrough Prize. The vision of the Starshot project is to send an army of mini-spacecraft to Alpha Centauri. Alpha Centauri is a star system in our galactic neighborhood, and “only” about 4 light years away. It is believed to have an earth-like planet. Milner’s mini-spacecraft are supposed to study this planet and send data back to earth. The scientists on Milner’s team hope to be ready for launch by 2036. It will take 20 to 30 years to reach Alpha Centauri, and then another four years to send the data back to Earth. So, maybe by 2070, we’ll know what’s going on there.
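The timeline is easy to check yourself. The cruise speed below is an assumption, roughly the figure the project has discussed publicly:

```python
# Quick check of the Starshot timeline; the cruise speed is an assumed
# design figure, not an official number.
distance_ly = 4.37   # distance to Alpha Centauri, in light years
cruise_speed = 0.2   # fraction of the speed of light (assumption)

travel_years = distance_ly / cruise_speed
signal_years = distance_ly  # the data comes back at light speed

launch = 2036
print(f"arrival   ~ {launch + travel_years:.0f}")
print(f"data back ~ {launch + travel_years + signal_years:.0f}")
# ~2058 arrival and data on Earth ~2062 with these numbers; slower cruise
# speeds push this toward the 2070 mentioned in the text.
```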

It’s unlikely, of course, that we should be so lucky as to find intelligent life at basically the first place we look. Scanning the galaxy for signs of communication, I think, is much more promising. But. We should keep in mind that quite plausibly the reason we have not yet found evidence for extraterrestrial intelligent life is that we have not developed the right technology to pick up their communication. In particular, if there is any way to send information faster than the speed of light, then that’s what all the aliens are using. And, as I explained in an earlier video, in contrast to what you may have been told, there is nothing whatsoever wrong with faster-than-light messaging, except that we don’t know how to do that.

And here is where my own research area, the foundations of physics, becomes really important. If we ever want to find those aliens, we need to better understand space and time, and matter and information. Thanks for watching, see you next week.

Friday, June 05, 2020

Physicists still lost in math

My book Lost in Math was published two years ago, and this week the paperback edition will appear. I want to use the occasion to tell you why I wrote the book and what has happened since.


In Lost in Math, I explain why I have become very worried about what is happening in the foundations of physics. What is happening, you ask? Well, nothing. We have not made progress for 40 years. The problems we are trying to solve today are the same problems we were trying to solve half a century ago.

This worries me because if we do not make progress understanding nature on the most fundamental level, then scientific progress will eventually be reduced to working out details of applications of what we already know. This means that overall societal progress depends crucially on progress in the foundations of physics, more so than on any other discipline.

I know that a lot of scientists in other disciplines find that tremendously offensive. But if they object, all I have to do is remind them that without breakthroughs in the foundations of physics there would be no transistors, no microchips, no hard disks, no computers, no wifi, no internet. There would be no artificial intelligence, no lasers, no magnetic resonance imaging, no electron microscopes, no digital cameras. Computer science would not exist. Modern medicine would not exist either because the imaging methods and tools for data analysis would never have been invented. In brief, without the work that physicists did 100 years ago, modern civilization as we know it today would not exist.

I find it somewhat perplexing that so few people seem to realize how big of a problem it is that progress in the foundations of physics has stalled. Part of the reason, I think, is that physicists in the foundations themselves have been talking so much rubbish that people have come to believe foundational work is just philosophical speculation and has lost any relevance for technological progress.

Indeed, I am afraid, most of my colleagues now believe that themselves. It’s wrong, needless to say. A better understanding of the theories that we currently use to make all these fancy devices will almost certainly lead to practical applications. Maybe not in 5 or 10 years, but more like in 100 or 500 years. But eventually, it will.

So, my book Lost in Math is an examination of what has gone wrong. As the subtitle says, the problem is that physicists rely on unscientific methods to develop new theories. These methods are variations of arguments from mathematical beauty, though many physicists are not aware that this is what they are doing.

This problem has been particularly apparent when it comes to the belief that the Large Hadron Collider (LHC) should see new fundamental particles besides the Higgs boson. The reason so many physicists believed this is that if it had happened, if the LHC had found other new particles, then the theories would have been much more beautiful. I explained in my book why this argument is unscientific and why, therefore, we have no reason to think the LHC should see anything new besides the Higgs. And indeed that’s exactly what happened.

Since the publication of my book, it has slowly sunk in with particle physicists that they were indeed wrong and that their methods did not work. They have largely given up using this particular argument from beauty that led to those wrong LHC predictions. That’s good, of course, but it does not really solve the problem, because they have not analyzed how it could happen that they collectively – and we are talking here about thousands of people – believed in something that was obviously unscientific.

So this is where we stand today. The recognition that something is going wrong in the foundations of physics is spreading. But physicists still have not done anything to fix the problem.

How can we even fix the problem? Well, I explain this in my book. The key is to have a look at what has historically worked. Where have breakthroughs come from in the foundations of physics? Historically a lot of breakthroughs were driven by experimental discoveries. But the simple things have been done, and new experiments are now so costly and take so long to build that coincidental discoveries have become incredibly unlikely. You do not just tinker around with a 27 kilometer particle collider.

This means we have to look at the other type of breakthrough, where a theoretical prediction turned out to be correct. Think of Einstein and Dirac and of Higgs and the others who predicted the Higgs boson. What did these correct predictions have in common?

They have in common that they were based on theoretical advances which resolved an inconsistency in the then existing theories. What I mean by inconsistency here is an internal logical disagreement. Therefore, the conclusion I draw from looking at the history of physics is that we should stop trying to make our theories prettier, and instead focus on solving the real problems with these theories.

Some of the inconsistencies in the current theories are the missing quantization of gravity, the measurement problem in quantum mechanics, some aspects of dark energy and dark matter, and some issues with quantum field theories.

I don’t think physicists have really understood what I told them, or maybe they don’t want to understand it. Most of them claim there is no problem, which is patently ridiculous, because everyone who follows popular science news knows that they have been producing loads of nonsense predictions for decades and nothing ever panned out. Clearly, something is going wrong there.

But what I have found very encouraging is the reaction of young physicists to the book, students and postdocs. They don’t want to repeat the mistakes of the past, and they are frequently asking for practical advice. Which I am happy to give, to the extent that I can. The young people give me hope that things will change, eventually, though it might take some time.

“Lost in Math” contains several interviews with key people in the field: Frank Wilczek, Steven Weinberg, Gian Francesco Giudice, who was head of the CERN theory division at the time, Garrett Lisi, George Ellis, and Chad Orzel. So you will not only get to hear my opinion, but also that of others. If you haven’t had a chance to read the hardcover, the paperback edition has just appeared, so check it out!

Friday, May 29, 2020

Understanding Quantum Mechanics #3: Non-locality

Locality means that to get from one point to another you somehow have to make a connection in space between these points. You cannot suddenly disappear and reappear elsewhere. At least that was Einstein’s idea. In quantum mechanics it’s more difficult. Just exactly how quantum mechanics is and is not local, that’s what we will talk about today.


To illustrate why it’s complicated, let me remind you of an experiment we already talked about in a previous video. Suppose you have a particle with total spin zero. The spin is conserved and the particle decays into two new particles. One goes left, one goes right. But you know that the two new particles cannot each have spin zero. Each can only have a spin with an absolute value of 1. The easiest way to think of this spin is as a little arrow. Since the total spin is zero, these two spin-arrows of the particles have to point in opposite directions. You do not know just which direction either of the arrows points, but you do know that they have to add to zero. Physicists then say that the two particles are “entangled”.

The question is now what happens if you measure one of the particles’ spins. This experiment was originally proposed by Einstein, Podolsky, and Rosen as a thought experiment, and is therefore also known as the EPR experiment. Well, actually the original idea was somewhat more complicated, and this is a simpler version that was later proposed by Bohm, but the distinction really doesn’t matter for us. The EPR experiment has meanwhile actually been done, many times, so we know what the outcome is. The outcome is... that if you measure the spin of the particle on one side, then the spin of the particle on the other side has the opposite value. Ok, I see you are not surprised. Because, eh, we knew this already, right? So what is the big deal?

Indeed, at first sight entanglement does not appear particularly remarkable because it seems you can do the same thing without quantum anything. Suppose you take a pair of shoes and put them in separate boxes. You don’t know which box contains the left shoe and which the right shoe. You send one box to your friend overseas. The moment your friend opens her box, she will instantaneously know what’s in your box. That seems to be very similar to the two particles with total spin zero.

But it is not, and here’s why. Shoes do not have quantum properties, so the question of which box contains the left shoe and which the right one was decided already when you packed them. The one box travels entirely locally to your friend, while the other one stays with you. When she opens the box, nothing happens with your box, except that now she knows what’s in it. That’s indeed rather unsurprising.

The surprising bit is that in quantum mechanics this explanation does not work. If you assume that the spin of the particle that goes left was already decided when the original particle decayed, then this does not fit with the observations.

The way that you can show this is to not measure the spin in the same direction on both sides, but to measure it in two different directions. In quantum mechanics, the spin in two orthogonal directions has the same type of mutual uncertainty as the position and momentum. So if you measure the spin in one direction, then you don’t know what it is in the orthogonal direction. This means if you, on the left side, measure the spin in up-down direction and on the right side measure in a horizontal direction, then there is no correlation between the measurements. If you measure them in the same direction, then the measurements are maximally correlated. Where quantum mechanics becomes important is for what happens in between, if you dial the difference in directions of the measurements from orthogonal to parallel. For this case you can calculate how strongly correlated the measurement outcomes are if the spins had been determined already at the time the original particle decayed. This correlation has an upper bound, which is known as Bell’s inequality. But, and here is the important point: many experiments have shown that this bound can be violated.
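If you want to see the numbers, here is a minimal sketch. For particles prepared with total spin zero, quantum mechanics predicts the correlation E(a,b) = -cos(a-b) between measurements at detector angles a and b, and the CHSH form of Bell’s inequality bounds a particular combination of four such correlations by 2 for any locally predetermined spins:

```python
import numpy as np

# CHSH version of Bell's inequality. For the spin-zero (singlet) state,
# quantum mechanics predicts the correlation E(a, b) = -cos(a - b) between
# spin measurements at detector angles a and b. Locally predetermined
# spins would keep |S| <= 2; quantum mechanics reaches 2*sqrt(2).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # two settings on the left
b1, b2 = np.pi / 4, 3 * np.pi / 4  # two settings on the right

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.3f}  (classical bound 2, quantum maximum {2 * np.sqrt(2):.3f})")
```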

And this creates the key conundrum of quantum mechanics. If the outcome of the measurement had been determined at the time that the entangled state was created, then you cannot explain the observed correlations. So it cannot work the same way as the boxes with shoes. But if the spins were not already determined before the measurement, then they suddenly become determined on both sides the moment you measure at least one of them. And that appears to be non-local.

So this is why quantum mechanics is said to be non-local. Because you have these correlations between separated particles that are stronger than they could possibly be if the state had been determined before measurement. Quantum mechanics, it seems, forces you to give up on determinism and locality. It is fundamentally unpredictable and non-local.

Ok, you may say, cool, then let us build a transmitter, forget our frequent flyer cards, and travel non-locally from here on. Unfortunately, that does not work. Because while quantum mechanics somehow seems to be non-local with these strong correlations, there is nothing that actually observably travels non-locally. You cannot use these correlations to send information of any kind from one side of the experiment to the other side. That’s because on neither side do you actually know what the outcome of these measurements will be if you choose a particular setting. You only know the probability distribution. The only way you can send information is from the place where the particle decayed to the detectors. And that is local in the normal way.
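You can check this no-signaling property directly: whatever setting is chosen on one side, the outcome probabilities on the other side stay fifty-fifty. A small sketch, using the standard joint probabilities for the spin-zero state:

```python
import numpy as np

# The marginal outcome probabilities on one side of a spin-zero pair do
# not depend on the measurement setting on the other side, which is why
# entanglement cannot transmit a signal. Standard joint probabilities:
# P(++) = P(--) = sin^2((a-b)/2) / 2, P(+-) = P(-+) = cos^2((a-b)/2) / 2.
def p_left_up(a, b):
    p_up_up = 0.5 * np.sin((a - b) / 2) ** 2    # left up, right up
    p_up_down = 0.5 * np.cos((a - b) / 2) ** 2  # left up, right down
    return p_up_up + p_up_down

a = 0.0
for b in [0.0, np.pi / 4, np.pi / 2, np.pi]:
    print(f"right setting b = {b:.2f}: P(left measures up) = {p_left_up(a, b):.2f}")
# Always 0.5: the left side sees pure noise no matter what is done on the right.
```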

So, oddly enough, quantum mechanics is entirely local in the common meaning of the word. When physicists say that it is non-local, they mean that particles which have a common origin but then were separated can be more strongly correlated than particles without quantum properties could ever be. I know this sounds somewhat lame, but that’s what quantum non-locality really means.

Having said this, let me add a word of caution. The conclusion that it is not possible to explain the observations by assuming the spins were already determined at the moment the original particle decayed requires the assumption that this decay is independent of the settings of the detectors. This assumption is known as “statistical independence”. If it is violated, it is entirely possible to explain the observations locally and deterministically. This option is known as “superdeterminism” and I will tell you more about it some other time.

Friday, May 22, 2020

Is faster-than-light travel possible?

Einstein said that nothing can travel faster than the speed of light. You have probably heard something like that. But is this really correct? This is what we will talk about today.


But first, a quick YouTube announcement. My channel has seen a lot of new subscribers in the past year. And I have noticed that the newcomers are really confused each time I upload a music video. They’re like oh my god she sings, what’s that? So, to make it easier for you, I will no longer post my music videos here, but I have set up a separate channel for those. This means if you want to continue seeing my music videos, please go and subscribe to my other channel.

Now about faster than light travel. To get the obvious out of the way, no one currently knows how to travel faster than light, so in this sense it’s not possible. But you already knew that and it’s not what I want to talk about. Instead, I want to talk about whether it is possible in principle. Like, is there anything actually preventing us from ever developing a technology for faster than light travel?

To find out let us first have a look at what Einstein really said. Einstein’s theory of Special Relativity contains a speed that all observers will measure to be the same. One can show that this is the speed of massless particles. And since the particles of light are, for all we currently know, massless, we usually identify this invariant speed with the speed of light. But if it turned out one day that the particles of light have a small, nonzero mass, then we would still have this invariant speed in Einstein’s theory, it just would not be the speed of light any more.

Next, Einstein also showed that if you have any particle which moves slower than the speed of light, then you cannot accelerate it to faster than the speed of light. You cannot do that because it would take an infinite amount of energy. And this is why you often hear that the speed of light is an upper limit.
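You can see this directly from the relativistic kinetic energy, E = (γ - 1)mc² with γ = 1/√(1 - v²/c²), which grows without bound as v approaches c. A quick sketch, with a one-kilogram mass as an arbitrary illustrative choice:

```python
import numpy as np

# Kinetic energy of a mass as v approaches c: E_kin = (gamma - 1) m c^2,
# with gamma = 1 / sqrt(1 - v^2/c^2). The 1 kg mass is illustrative.
c = 299_792_458.0  # speed of light, m/s
m = 1.0            # kg

for frac in [0.5, 0.9, 0.99, 0.999, 0.999999]:
    v = frac * c
    gamma = 1.0 / np.sqrt(1.0 - (v / c) ** 2)
    print(f"v = {frac:9.6f} c  ->  E_kin = {(gamma - 1) * m * c**2:.3e} J")
# The energy diverges as v -> c, so no finite amount of energy gets you there.
```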

However, there is nothing in Einstein’s theory that forbids a particle from moving faster than light. You just don’t know how to accelerate anything to such a speed. So really Einstein did not rule out faster than light motion, he just said, no idea how to get there. However, there is a problem with particles that go faster than light, which is that for some observers they look like they go backwards in time. Really, that’s what the mathematics says.
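A short calculation shows this. Take a signal that covers more distance than light could in a given time, and apply an ordinary Lorentz transformation: for sufficiently fast observers, the time order of departure and arrival flips. The numbers are illustrative:

```python
import numpy as np

# A signal covering dx in dt with dx/dt > c has its time order reversed
# for some boosted observers: dt' = gamma * (dt - v * dx / c^2) goes
# negative once v > c^2 * dt / dx. Units with c = 1; numbers illustrative.
c = 1.0
dt, dx = 1.0, 2.0  # a "particle" moving at twice the speed of light

for v in [0.3, 0.5, 0.7]:  # observer speeds as fractions of c
    gamma = 1.0 / np.sqrt(1.0 - v**2)
    dt_prime = gamma * (dt - v * dx / c**2)
    print(f"observer at v = {v:.1f} c sees dt' = {dt_prime:+.3f}")
# For v > 0.5 c the sign flips: this observer sees the faster-than-light
# particle arrive before it departs, i.e. go backwards in time.
```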

And that, so the argument goes, is a big problem because once you can travel back in time, you can create causal paradoxes, the so-called “grandfather paradoxes”. The idea is that you could go back in time, kill your own grandfather – accidentally, we hope – so that you would never be born and could not have travelled back in time to kill him, which does not make any sense whatsoever.

So, faster than light travel is a problem because it can lead to causal inconsistencies. At least that’s what most physicists will tell you or maybe have already told you. I will now explain why this is complete nonsense.

It’s not even hard to see what’s wrong with this argument. Imagine you have a particle that goes right to left backwards in time. What would it look like? It would look like a particle going left to right forward in time. These two descriptions are mathematically just entirely identical. A particle does not know which direction of time is forward.

Our observation that forward in time is different from backward in time comes from entropy increase. It arises from the behavior of large numbers of particles together. If you have many particles, you can still in principle reverse any particular process in time, but the reversed process will usually be extremely unlikely. Take the example of mixing dough. It’s very easy to get it mixed up and very difficult to unmix, though that is in principle possible.

In any case, you probably don’t need convincing that we do have an arrow of time and that arrow of time points towards more wrinkles. One direction is forward, the other one is not. That’s pretty obvious. Now, the reason for the grandfather paradox is not faster than light travel, it’s that these stories screw up the direction of the arrow of time. You are going back in time, yet you are getting older. That is the inconsistency. But as long as you have a consistent arrow of time, there is nothing physically wrong with faster-than-light travel.

So, really, the argument from causal paradoxes is rubbish; these paradoxes are easy to prevent, you just have to demand a consistent arrow of time. But there is another problem with faster-than-light travel, and that comes from quantum mechanics. If you take into account quantum mechanics, then a particle that travels faster than light will destroy the universe, basically, which would be unfortunate. Also, it should already have happened, so the existence of faster-than-light particles seems to not agree with what we observe.

The reason is that particles which move faster than light can have negative energy. And in quantum mechanics you can create pairs of new particles provided you conserve the total energy. Now, if you have particles with negative energy, you can pair them with particles of positive energy, and then you can create arbitrarily many of these pairs from nothing. Physicists then say that the vacuum is unstable. Btw, since it is a common confusion, let me mention that anti-particles do NOT have negative energy. But faster than light particles can have negative energy.

This is clearly something to worry about. However, the conclusion depends on how seriously you take quantum theory. Personally I think quantum theory is not itself fundamental, but it is only an approximation to a better theory that has not yet been developed. The best evidence for this is the measurement problem which I talked about in an earlier video. So I think that this supposed problem with the vacuum instability comes from taking quantum mechanics too seriously.

Also, you can circumvent the problem with the negative energies if you travel faster than light by using wormholes, because in this case you can use entirely normal particles. Wormholes are basically shortcuts in space. Instead of taking the long way from here to Andromeda, you could hop into one end of a wormhole and just reappear at the other end. Unfortunately, there are good reasons to think that wormholes don’t exist, which I talked about in an earlier video.

In summary, there is no reason in principle why faster than light travel or faster than light communication is impossible. Maybe we just have not figured out how to do it.

Talk To Me [I've been singing again]

This is for the guy who recommended I “release” my “inner Whitney Houston”.

Book Update

The US version of my book "Lost in Math" is about to be published as paperback. You can now pre-order it, which of course you should.

I am quite pleased that what I wrote in the book five years ago has held up so beautifully. There has been zero further progress in the foundations of physics and, needless to say, there will be zero progress until physicists understand that they need to change their methodology. Chances that they actually understand this are not exactly zero, but very close to it.

In other news, on Monday I gave an online seminar about Superdeterminism, which was recorded and is now available on YouTube. Don't despair if Tim doesn't quite make sense to you; it took me a year to figure out that he isn't as crazy as he sounds.

The How The Light Gets In festival, which is normally held in Hay-on-Wye, has also been moved online. I think that’s great, because Hay-on-Wye is a tiny village somewhere in the middle of nowhere and traveling there has been somewhat of a pain. Indeed, I had actually declined the invitation months ago. But since I can now attend without having to sit in a car, a bus, a plane, a train, and a taxi, I will be in a debate about Supersymmetry tomorrow (May 23) at 11:30am BST (not CEST) and will give a 30-minute talk about my book at 2pm (again, that’s BST).

Tuesday, May 19, 2020

[Guest Post] Conversful 101: Explaining What’s In The Bottom Corner Of Your Screen

[This post is written by Ben Alderoty from Conversful.]

You may have noticed something new in the bottom corner of BackRe(action) recently. It appears only if you’re on a computer. So if you’re on a phone or tablet right now, finish reading this post, but then come back another time from a computer to see what I’m talking about. That thing is called Conversful, and I, together with a few others, am behind it. I wanted to take a second to give some context as to what Conversful is and how it works.

We built Conversful to create new conversations. We believe that people on the same website at the same time probably have a lot in common. So much so that if they were to meet randomly at a conference, an airport or a bar, they would probably get into a fantastic conversation. But nothing exists right now to make these spontaneous connections happen. With Conversful, we’re trying to create a space where these connections can happen - a “virtual meeting place” of sorts to borrow Sabine’s words.



To open Conversful, just click the globe icon in the bottom corner. With the app open you can do one of two things: start a new conversation or join a conversation. 1) To start a new conversation, all you’ll need is a topic and your first name. Topics can be anything. So far we’ve seen topics range from “Physics” to “Stephen Wolfram thinks he is close to a unified theory of physics unifying QM and GR. Some opinions?”. Both of these work. There’s no need to overthink a topic; keep it short and submit it. 2) Joining a conversation is even easier. With the app open, click ‘Join Conversation’ and enter your first name.


Here are a few other things:
  • Conversations on Conversful are 1-1. They are between the person who started the conversation and the person who joined it.
  • Conversations on Conversful are real-time. If you post a topic and then leave before someone joins, your topic will disappear. When you come back to the website at a later time you will not have any responses.
  • Conversful is for everyone. We designed Conversful to make it feel like you’re texting a friend. Be yourself, share your thoughts, there’s always someone online to hear them out as long as you’re willing to hear theirs.
Today we rolled out a handful of new features to make it easier for conversations to happen. You’re probably seeing some of them right now. If you’ve already tried Conversful and it didn’t end in a conversation, I ask you to please give it another try!

P.S. To make Conversful the best it can be, I would love to hear from you. If you have any thoughts/ideas/feedback on what’s working (or not) and what else you’d like to see, please feel free to email me at (ben@conversful.com). Cheers from NYC & happy conversing!