Sunday, June 28, 2020

Is COVID there before you measure it?

Today I want to talk about a peculiar aspect of quantum measurements that you may have heard of. It’s that the measurement does not merely reveal a property that previously existed, but that the act of measuring makes that property real. So when Donald Trump claims that not testing people for COVID means there will be fewer cases, rather than just fewer cases you know about, then that demonstrates his deep knowledge of quantum mechanics.

This special role of the measurement process is an aspect of quantum mechanics that deeply worried Einstein. He thought it could not possibly be correct. He reportedly summed up the question by asking whether the moon is there when nobody looks, implying that the question is, of course, absurd. Common sense says “yes”, because what does the moon care if someone looks at it. But quantum mechanics says “no”.



In quantum mechanics, the act of observation has special relevance. As long as you don’t look, you don’t know whether something is there or what exactly its properties are. Quantum mechanics, therefore, requires us to rethink what we even mean by “reality”. And that’s why they say it’s strange and weird and you can’t understand it and so on.

Now, Einstein’s remark about the moon is frequently quoted but it’s somewhat misleading because there are other ways of telling whether the moon is there that do not require looking at it in the sense of actually seeing it with our own eyes. We know that the moon is there, for example, because its gravitational pull causes tides. So the word “looking” actually refers to any act of observation.

You could say, well, we know that quantum mechanics is a theory that is relevant only for small things, so it does not apply to viruses and certainly not to the moon. But it’s not so simple. Think of Schrödinger’s cat.

Erwin Schrödinger’s thought experiment with the cat demonstrates that quantum effects for single particles can have macroscopic consequences. Schrödinger said, let us take a single atom which can undergo nuclear decay. Nuclear decay is a real quantum effect. You cannot predict exactly when it happens, you can only say it happens with a certain probability in a certain amount of time. Before you measure the decay, according to quantum mechanics, the atom is both decayed and not decayed. Physicists say it is in a “superposition” of these states. Please watch my earlier video for more details about what superpositions are.

But then, Schrödinger says, you can take the information from the nuclear decay and amplify it. He suggested that the nuclear decay could release a toxic substance. So if you put the cat in a box with a toxin device triggered by the nuclear decay, is the cat alive or is it dead if you have not opened the box?

Well, it seems that the cat is somehow both dead and alive, just like the atom is both decayed and not decayed. And, oddly enough, getting an answer to the question seems to depend on the very human act of making an observation. It is for this reason that people used to think consciousness has something to do with quantum mechanics.

This was something which confused physicists a lot in the early days of quantum mechanics, but this confusion has luckily been resolved, at least almost. First, we now understand that it is irrelevant whether a person does the observation in quantum mechanics. It could just as well be an apparatus. So, consciousness is out of the picture. And we also understand that it is really not the observation that is the relevant part, but the measurement itself. Things happen when the particle hits the detector, not when the detector spits out a number.

But that brings up the question: What is a measurement in quantum mechanics? A measurement is the very act of amplifying the subtle quantum signal and creating a definite outcome. It happens because when the particle hits the detector, it has to interact with a lot of other particles. Once this happens, the quantum effects are destroyed.

And here is the important thing. A measurement is not the only way that the quantum system can interact with many particles. Indeed, most particles interact with other particles all the time, just because there is air and radiation around us and there are constantly particles banging into each other. And this also destroys quantum effects, regardless of whether anyone actually measures any of it.

This process in which many particles lose their quantum effects is called “decoherence”, because quantum effects come from the “coherence” of states in a superposition. Coherence just means the states which are in a superposition are all alike. But if the superposition interacts with a lot of other particles, this alikeness is screwed up, and with that the quantum effects disappear.

If you look at the numbers you find that decoherence happens enormously quickly, and it happens more quickly the larger the system and the more it interacts. A few particles in vacuum can maintain their quantum effects for a long time. A cat in a box, however, decoheres so quickly there isn’t even a name for that tiny fraction of a second. For all practical purposes, therefore, you can say that cats do not exist in quantum superpositions. They are either dead or alive. In Schrödinger’s thought experiment, the decoherence actually happens already when the toxin is released, so the superposition is never passed on to the cat to begin with.
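For those who want to see where this scaling comes from, here is the standard textbook estimate for scattering-induced decoherence (a general illustration, not a calculation specific to cats or viruses): the interference terms of the density matrix between two positions a distance x − x′ apart decay as

```latex
\rho(x, x', t) \;\approx\; \rho(x, x', 0)\, e^{-\Lambda\, t\, (x - x')^{2}}
```

where the scattering constant Λ grows with the flux of environmental particles, such as air molecules and photons, and with the object’s size. That is why a cat in a box loses its coherence unimaginably faster than a few well-isolated particles in vacuum.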

Now, what about viruses? Viruses are not actually that large. In fact, some simple viruses have been brought into quantum superpositions. But these quantum superpositions disappear incredibly quickly. And again, that’s due to decoherence. That’s what makes these experiments so difficult. If it were easy to keep large systems in quantum states, we would already be using quantum computers!

So, to summarize. The moon is there regardless of whether you look, and Schrödinger’s cat is either dead or alive regardless of whether you open the box, because the moon and the cat are both large objects that constantly interact with many other particles. And people either have a virus or they don’t, regardless of whether you actually test them.

Having said that, quantum mechanics has left us with a problem that so far has not been resolved. The problem is that decoherence explains why quantum effects go away in a measurement. But it does not explain how to make sense of the probabilities in quantum mechanics for single particles, because the probabilities seem to suddenly change once you measure the particle. Before measurement, quantum mechanics may have said the particle would be in the left detector with 50% probability. After measurement, the probability is either 0% or 100%. And decoherence does not explain how this happens. This is known as the measurement problem in quantum mechanics.

Monday, June 22, 2020

Guest Post: “Who Needs a Giant New Collider?” by Alessandro Strumia

Size of the 100 km tunnel for CERN's planned new collider, the FCC. [Image: CERN]

For the first time in the history of particle physics, the scientific program at a collider is mostly in the past light cone and there is no new collider in view. I would like to share my thoughts about this exceptional situation, knowing that many colleagues have negative opinions of those of us who publicly discuss problems, such as Peter Woit, Sabine Hossenfelder and even Adam Falkowski.

To understand the present problems, let’s start from the stone age. Something that happens only once in history happened about a century ago: physicists understood what matter is. During this golden period, progress in fundamental physics had huge practical relevance: new discoveries made people richer and countries stronger, and could be used for new experiments that gave new discoveries.

This virtuous cycle attracted the best people and allowed them to recognise deep, beautiful principles like relativity, quantum mechanics, gauge invariance. After 1945, nuclear physics got huge funds that allowed physicists to explore energies higher than those of ordinary matter by building bigger experiments.

This led to discoveries of new forms of matter, but at energies so high that the new particles had few practical applications, not even for building new experiments. What practical use can a particle have that decays in a zeptosecond? As a result, colliders still use ordinary matter and got bigger, because physics demands that the radius of a circular collider grows linearly with energy: R ≈ (4π/α)³ × (energy)/(electron mass)² in natural units. This equation means that HEP (High Energy Physics) can explore energies much above the electron mass by becoming HEP (High Expenses Physics). Some people get impressed by big stuff, but it got bigger because we could not make it better.

For decades, bigger and bigger colliders got funded thanks to past prestige, but prestige faded away while costs grew, until hitting the limits of human resources and time-scales. European physicists saw this problem 60 years ago and pooled national resources, forming CERN. This choice paid off: a few decades after WW2, Europe was again the center of high-energy physics. But energy and costs kept growing, and the number of research institutions that push the energy frontier declined as 6, 5, 4, 3, 2, 1.

How CERN began.
Some institutions gave up, others tried. Around 2000, German physicists proposed a new collider, but the answer was nein. Around 2010, Americans tried, but the answer was no. Next the Japanese tried, but the answer was “we express interest”, which in Japanese probably means no. Europeans waited, hoping that new technology would be developed while the Large Hadron Collider would discover new physics and motivate a dedicated new collider, to be financed once the economic crisis was over. Instead of new technology and new physics, we got a new virus and a possible new crisis.

The responsibility of being the last dinosaur does not help survival. Innovative colliders would require taking risks, but unexplored energies got so high that the cost of a failure is no longer affordable. But this leads to stagnation. CERN has now chosen a non-innovative strategy based on reliability. First, buy time by running the LHC ad nauseam. Second, be, or appear, so nice and reliable that politics might give the needed ≈30 billion. Third, build ee and pp circular colliders again, but bigger: 100 km instead of 27.

As a theorist I would enjoy a 100 TeV pp collider for my 100th birthday.

But would it be good for society? No discovery is guaranteed, but anyhow recent discoveries at colliders had no direct practical applications. Despite this, giving resources to the best scientists often leads to indirect innovations. The problem is that building a 100 km phonograph does not seem like a project that can give a technology leap towards a small gadget with the same memory. Rather, collider physics got so gigantic that when somebody has a new idea, the typical answer no longer is “let’s do it” but “let’s discuss it at the next committee”. Committees are filled with people who like discussions, while creative minds seem more attracted by different environments. I see many smart physicists voting with their feet.

But would it be good for physics? So far, physics has been a serious science. This happened because physics had objective data and no school or center ever dominated physics. But now, getting more high-energy data requires concentrating most resources in one center that struggles for its survival. Putting all eggs in one basket seems to me a danger. Maybe I am too sensitive, because some time ago CERN removed sociological data that I presented (now accepted for publication) and warned me that its code of conduct restricts free speech if “prejudicial to the Organization”. Happily, I am no longer subject to it, and I say what I think.

Extract from rules that CERN claims Strumia violated.


Even if CERN gets the billions, its 100 TeV pp collider is too far away in time: high-energy physics will fade away earlier. Good physicists cannot wait decades fitting Higgs couplings and pretending it’s interesting enough. The only hope is that China decides that their similar collider project is worthwhile and builds it faster and cheaper. This would force CERN to learn how to make a more innovative muon collider in the LHC tunnel or disappear.

Sunday, June 21, 2020

How to tell science from pseudoscience

Is the earth flat? Is 5G a mind-control experiment by the Russian government? What about the idea that COVID was engineered by the vaccine industry? How can we tell science from pseudoscience? This is what we will talk about today.

Now, how to tell science from pseudoscience is a topic with a long history that lots of intelligent people have written lots of intelligent things about. But this is YouTube. So instead of telling you what everybody else has said, I’ll just tell you what I think.

I think the task of science is to explain observations. So if you want to know whether something is science you need (a) observations and (b) you need to know what it means to explain something in scientific terms. What scientists mean by “explanation” is that they have a model, which is a simplified description of the real world, and this model allows them to make statements about observations that agree with measurements and – here is the important bit – the model is simpler than just a collection of all available data. Usually that is because the model captures certain patterns in the data, and any kind of pattern is a simplification. If we have such a model, we say it “explains” the data. Or at least part of it.

One of the best historical examples for this is astronomy. Astronomy has been all about finding patterns in the motions of celestial objects. And once you know the patterns, they will, quite literally, connect the dots. Visually speaking, a scientific model gives you a curve that connects data points.

This is arguably over-simplified, but it is an instructive visualization because it tells you when a model stops being scientific. This happens if the model has so much freedom that it can fit any data, because then the model does not explain anything. You would be better off just collecting the data. This is also known as “overfitting”. If you have a model that has more free parameters as input than there are data to explain, you may as well not bother with that model. It’s not scientific.

There is something else one can learn from this simple image, which is that making a model more complicated will generally allow a better fit to the data. So if one asks what the best explanation of a set of data is, one has to ask when adding another parameter no longer justifies the slightly better fit to the data you’d get from it. For our purposes it does not matter exactly how to calculate this, so let me just say that there are statistical methods to evaluate exactly this. This means we can quantify how well a model explains data.
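As one concrete example of such a statistical method (there are several; this is just a toy sketch with made-up data, not an analysis from any particular paper), one can compare fits with the Bayesian information criterion, which rewards a good fit but penalizes every additional free parameter:

```python
import numpy as np

# Toy data: a straight line plus measurement noise (invented numbers).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)

def bic(y, y_fit, k):
    """Bayesian information criterion for a least-squares fit with k parameters.
    Lower is better; the k*log(n) term penalizes extra parameters."""
    n = y.size
    rss = np.sum((y - y_fit) ** 2)   # residual sum of squares
    return n * np.log(rss / n) + k * np.log(n)

# Compare polynomial models of increasing complexity.
for degree in (1, 3, 10):
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"polynomial degree {degree:2d}: BIC = {bic(y, y_fit, degree + 1):7.1f}")
```

With toy data like this, the straight line typically comes out with the lowest score: the higher-degree polynomials fit the points a little more closely, but not by enough to justify their extra parameters.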

Now, all of what I just said was very quantitative, and not all disciplines of science have quantitative models, but the general point holds. If you have a model that requires many assumptions to explain few observations, and if you hold on to that model even though there is a simpler explanation, then that is unscientific. And, needless to say, if you have a model that does not explain any observation, then that is also not scientific.

Typical cases of pseudoscience are conspiracy theories. Whether that is the idea that the earth is flat but NASA has been covering up the evidence since the days of Ptolemy at least, or that 5G is a plan by the government to mind-control you using secret radiation, or that COVID was engineered by the vaccine industry for profit. All these ideas have in common that they are contrived.

You have to make a lot of assumptions for these ideas to agree with reality, assumptions like: somehow it has been possible to consistently fake all the data and images of a round earth and brainwash every single airline pilot, or it is possible to control other people’s minds and yet somehow that hasn’t prevented you from figuring out that minds are being controlled. These contrived assumptions are the equivalent of overfitting. That’s what makes these conspiracy theories unscientific. The scientific explanations are the simple ones, the ones that explain lots of observations with few assumptions. The earth is round. 5G is a wireless network. Bats carry many coronaviruses, these have jumped over to humans before, and that’s most likely where COVID also came from.

Let us look at another popular example, Darwinian evolution. Darwinian evolution is a good scientific theory because it “connects the dots”, basically by telling you how certain organisms evolved from each other. I think that in principle it should be possible to quantify this fit to data, but arguably no one has done that. Creationism, on the other hand, simply posits that Earth was created with everything in place. That means Creationism puts in as much information as you get out of it. It therefore does not explain anything. This does not mean it’s wrong. But it means it is unscientific.

Another way to tell pseudoscience from science is that a lot of pseudoscientists like to brag about making predictions. But just because you have a model that makes predictions does not mean it’s scientific. And the opposite is also true: just because a model does not make predictions does not mean it is not scientific.

This is because it does not take much to make a prediction. I can predict, for example, that one of your relatives will fall ill in the coming week. And just coincidentally, this will be correct for some of you. Are you impressed? Probably not. Why? Because to demonstrate that this prediction was scientific, I’d have to show it was better than a random guess. For this I’d have to tell you what model I used and what the assumptions were. But of course I didn’t have a model, I just made a guess. And that doesn’t explain anything, so it’s not scientific.

And a model that does not make predictions can still be scientific if it explains a lot of already existing data. Pandemic models are actually a good example of scientific models which do not make predictions. It is basically impossible to make predictions for the spread of infectious diseases because that spread depends on policy decisions which themselves can’t be predicted.

So with pandemic models we really make “projections” or we can look at certain “scenarios” that are if-then cases. If we do not cancel large events, then the spread will likely look like this. If we do cancel them, the spread will more likely look like that. It’s not a prediction because we cannot predict whether large events will be canceled. But that does not make these models unscientific. They are scientific because they accurately describe the spread of epidemics on record. These are simple explanations that fit a lot of data. And that’s why we use them in the first place.
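To make the if-then character of such scenarios concrete, here is a minimal sketch of the textbook SIR model, with made-up parameters of my own rather than any particular published pandemic model; the two scenarios differ only in the assumed contact rate:

```python
def sir_peak(beta, gamma=0.1, days=300, population=1_000_000, initially_infected=100):
    """Step the textbook SIR model one day at a time and return the peak
    number of simultaneously infected people."""
    s, i, r = population - initially_infected, initially_infected, 0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i / population   # contacts per day times infection chance
        new_recoveries = gamma * i                   # people recovering per day
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return round(peak)

# If large events are not cancelled (higher contact rate beta), then...
print("peak infected, no cancellations:  ", sir_peak(beta=0.3))
# If they are cancelled (lower contact rate), then...
print("peak infected, with cancellations:", sir_peak(beta=0.15))
```

Whether the contact rate ends up high or low depends on policy decisions the model cannot predict; given either assumption, though, it describes the resulting spread.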

The same is the case for climate models. The simplest explanation for our observations, the one that fits the data with the least amount of assumptions, is that climate change is due to increasing carbon dioxide levels and is caused by humans. That’s what the science says.

So if you want to know whether a model is scientific, ask how much data it can correctly reproduce and how many assumptions were required for this.

Having said that, it can be difficult to tell science from pseudoscience if an idea has not yet been fully developed and you are constantly told it’s promising, it’s promising, but no one can ever actually show that the model fits the data because, they say, they’re not done with the research. We see this in the foundations of physics most prominently with string theory. String theory, if it worked as advertised, could be good science. But string theorists never seem to get to the point where the idea would actually be useful.

In this case, then, the question is really a different one, namely, how much time and money should you throw at a certain research direction to even find out whether it’s science or pseudoscience. And that, ultimately, is a decision that falls to those who fund that research.

Saturday, June 13, 2020

How to search for alien life

Yes, I believe there is life on other planets, intelligent life even. I also think that the search for life elsewhere in the universe is THE most exciting scientific exploration ever. Why then don’t I work on it, you ask? Well, I think I do, kind of. I’ll get to this. But first let me tell you how scientists search for life that’s not on Earth, or “extraterrestrial”, as they say.


When I was a student in the 1990s, talking about extraterrestrial life was not considered serious science. At the time it was not even widely accepted that solar systems with planets like earth are a common occurrence in the universe. But in the past 10 years the mood among scientists has shifted dramatically, and that’s largely thanks to the Kepler mission.

The Kepler satellite was a NASA mission that looked for planets which orbit stars in our galactic neighborhood. It observed about 150,000 stars in a small patch of the sky, closely and for long periods of time. From these observations you can tell whether a star dims periodically because a planet passes through the line of sight. If you are lucky, you can also tell how big the planet is, how close it is to the star, and how fast it orbits, from which you can then extract its mass.
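Here is a minimal sketch of the underlying idea, with a simulated light curve rather than real Kepler data (all numbers are invented for illustration): fold the brightness measurements at trial periods and look for the period at which the small periodic dips line up.

```python
import numpy as np

rng = np.random.default_rng(1)
time = np.arange(0.0, 90.0, 0.02)                 # observation times in days
flux = 1.0 + rng.normal(0.0, 5e-4, time.size)     # stellar brightness with noise

# Inject a transiting planet: the star dims by `depth` for `duration` days,
# once every `true_period` days.
true_period, depth, duration = 12.3, 2e-3, 0.2
flux[(time % true_period) < duration] -= depth

def folded_dip(period):
    """Fold the light curve at this trial period and measure how much dimmer
    the points inside the transit window are than the rest."""
    phase = time % period
    return flux[phase >= duration].mean() - flux[phase < duration].mean()

trial_periods = np.arange(5.0, 20.0, 0.01)
best = trial_periods[np.argmax([folded_dip(p) for p in trial_periods])]
print(f"recovered orbital period: about {best:.2f} days")
```

Real searches use more careful statistics and have to deal with data gaps and stellar variability, but the principle is the same: a periodic dimming betrays the orbiting planet.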

Kepler has found evidence for more than 4000 exoplanets, as they are called. Big ones and small ones, hot ones and cold ones, and also a few that are not too different from our own planet. Kepler is no longer operating, but NASA has followed up with a new mission, TESS, and several more missions to look for exoplanets are coming up, for example another NASA mission, W-FIRST, the CHEOPS mission of ESA, and the James Webb Space Telescope, which is a joint mission of NASA, ESA, and the Canadian Space Agency.

So, we now know that other earth-like planets are out there. The next thing that scientists would like to know is whether the conditions on any of these planets are similar to the conditions on Earth. This is a very human-centered way of thinking about life, of course, but at least so far life on this planet is the only one we are sure exists, so it makes sense to ask if other places are similar. Ideally, scientists would like to know whether the atmosphere of the earth-like exoplanets contains oxygen and methane, or maybe traces of chlorophyll.

They do already have a few measurements of atmospheres of exoplanets, but these are mostly of large and hot planets that orbit closely around their mother star, because in this case the atmosphere is easier to measure. The way you can measure what’s in the atmosphere is that you investigate the spectral composition of light that either passes through the atmosphere or that is emitted or reflected off the surface. For this too, there are more satellite missions planned, for example the ESA mission ARIEL.

Ok, you may say, but this will in the best case give us an indication of microbial life, and really you’d rather know whether there is intelligent life out there. For this you need an entirely different type of search. Such searches for extraterrestrial intelligence have been conducted for about a century. They have largely relied on analyzing electromagnetic radiation in the radio or microwave range that reaches us from outer space. For one thing, that’s because this part of the electromagnetic spectrum is fairly easy to measure without going into the upper atmosphere. But it’s also because our own civilization emits in this part of the spectrum. This electromagnetic radiation is then analyzed for any kind of pattern that is unlikely to be of natural, astrophysical origin.
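As a toy illustration of this kind of pattern search (simulated numbers, not a real telescope feed): a narrowband tone, which natural astrophysical sources rarely produce, shows up as a sharp spike in the Fourier spectrum of an otherwise noisy recording.

```python
import numpy as np

rng = np.random.default_rng(2)
sample_rate = 10_000                       # samples per second (made up)
t = np.arange(0, 1.0, 1 / sample_rate)

# Broadband noise, as natural sources typically produce...
signal = rng.normal(0.0, 1.0, t.size)
# ...plus a weak narrowband "artificial" tone buried in it.
signal += 0.1 * np.sin(2 * np.pi * 1234.0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / sample_rate)

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
print(f"strongest narrowband component near {peak:.0f} Hz")
```

Much of the effort in real searches goes into rejecting the far more common terrestrial interference, which produces exactly this kind of narrowband signal.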

As you already know, no one has found any sign of intelligent life on other planets, except for some false alarms.

The search for intelligent, extraterrestrial life has, sadly enough, always been underfunded, but some people are not giving up their hopes and efforts. There is, for example, the SETI Institute in California. They have a new plan to look for aliens, which is to distribute 96 cameras over the surface of our planet so that they can look for laser signals from outer space, 24 hours a day, all over the sky. As with the search for radio signals, the idea is that laser light might be a sign of communication or a by-product of other technologies that extraterrestrial civilizations are using. Of those 96 cameras, so far one has been installed. The institute is trying to crowdfund the mission; for more information, check out their website.

A search that has no funding issues is the “Breakthrough Listen” project, which is supported by billionaire Yuri Milner. This project has run since 2015 and will run through 2025. It employs two radio telescopes to search for signs of intelligent life. The data that this project has collected so far are publicly available. However, they amount to about 2000 terabytes, so it’s not exactly user-friendly. Milner has another alien project, which is the “Breakthrough Starshot”. Yes, Milner likes “Breakthroughs” and everything he does is Breakthrough Something; he is also the guy who set up the Breakthrough Prize. The vision of the Starshot project is to send an army of mini-spacecraft to Alpha Centauri. Alpha Centauri is a star system in our galactic neighborhood, “only” about 4 light years away. It is believed to have an earth-like planet. Milner’s mini-spacecraft are supposed to study this planet and send data back to earth. The scientists on Milner’s team hope to be ready for launch by 2036. It will take 20 to 30 years to reach Alpha Centauri, and then another four years to send the data back to Earth. So, maybe by 2070, we’ll know what’s going on there.
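A quick back-of-the-envelope check of these timescales, assuming the roughly 20 percent of light speed that the Starshot concept aims for (the exact cruise speed is not given above):

```python
distance_ly = 4.37      # distance to Alpha Centauri in light years
cruise_speed = 0.2      # assumed cruise speed as a fraction of the speed of light
launch_year = 2036

travel_years = distance_ly / cruise_speed   # roughly 22 years one way
signal_years = distance_ly                  # the data come back at light speed
print("earliest possible data arrival: around",
      round(launch_year + travel_years + signal_years))
```

That gives the early 2060s in the best case, consistent with the “maybe by 2070” above once a slower cruise speed or a later launch is allowed for.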

It’s unlikely, of course, that we would be so lucky as to find intelligent life basically at the first place we look. Scanning the galaxy for signs of communication, I think, is much more promising. But. We should keep in mind that quite plausibly the reason we have not yet found evidence for extraterrestrial intelligent life is that we have not developed the right technology to pick up their communication. In particular, if there is any way to send information faster than the speed of light, then that’s what all the aliens are using. And, as I explained in an earlier video, in contrast to what you may have been told, there is nothing whatsoever wrong with faster-than-light messaging, except that we don’t know how to do it.

And here is where my own research area, the foundations of physics, becomes really important. If we ever want to find those aliens, we need to better understand space and time, and matter and information. Thanks for watching, see you next week.

Friday, June 05, 2020

Physicists still lost in math

My book Lost in Math was published two years ago, and this week the paperback edition will appear. I want to use the occasion to tell you why I wrote the book and what has happened since.


In Lost in Math, I explain why I have become very worried about what is happening in the foundations of physics. What is happening, you ask? Well, nothing. We have not made progress for 40 years. The problems we are trying to solve today are the same problems we were trying to solve half a century ago.

This worries me because if we do not make progress understanding nature on the most fundamental level, then scientific progress will eventually be reduced to working out details of applications of what we already know. This means that overall societal progress depends crucially on progress in the foundations of physics, more so than on any other discipline.

I know that a lot of scientists in other disciplines find that tremendously offensive. But if they object all I have to do is remind them that without breakthroughs in the foundations of physics there would be no transistors, no microchips, no hard disks, no computers, no wifi, no internet. There would be no artificial intelligence, no lasers, no magnetic resonance imaging, no electron microscopes, no digital cameras. Computer science would not exist. Modern medicine would not exist either because the imaging methods and tools for data analysis would never have been invented. In brief, without the work that physicists did 100 years ago, modern civilization as we know it today would not exist.

I find it somewhat perplexing that so few people seem to realize how big of a problem it is that progress in the foundations of physics has stalled. Part of the reason, I think, is that physicists in the foundations themselves have been talking so much rubbish that people have come to believe foundational work is just philosophical speculation and has lost any relevance for technological progress.

Indeed, I am afraid, most of my colleagues now believe that themselves. It’s wrong, needless to say. A better understanding of the theories that we currently use to make all these fancy devices will almost certainly lead to practical applications. Maybe not in 5 or 10 years, but more likely in 100 or 500 years. But eventually, it will.

So, my book Lost in Math is an examination of what has gone wrong. As the subtitle says, the problem is that physicists rely on unscientific methods to develop new theories. These methods are variations of arguments from mathematical beauty, though many physicists are not aware that this is what they are doing.

This problem has been particularly apparent when it comes to the belief that the Large Hadron Collider (LHC) should see new fundamental particles besides the Higgs boson. The reason so many physicists believed this is that if it had happened, if the LHC had found other new particles, then the theories would have been much more beautiful. I explained in my book why this argument is unscientific and why, therefore, we have no reason to think the LHC should see anything new besides the Higgs. And indeed that’s exactly what happened.

Since the publication of my book, it has slowly sunk in with particle physicists that they were indeed wrong and that their methods did not work. They have largely given up using this particular argument from beauty that led to those wrong LHC predictions. That’s good, of course, but it does not really solve the problem, because they have not analyzed how it could happen that they collectively – and we are talking here about thousands of people – believed in something that was obviously unscientific.

So this is where we stand today. The recognition that something is going wrong in the foundations of physics is spreading. But physicists still have not done anything to fix the problem.

How can we even fix the problem? Well, I explain this in my book. The key is to have a look at what has historically worked. Where have breakthroughs come from in the foundations of physics? Historically a lot of breakthroughs were driven by experimental discoveries. But the simple things have been done and new experiments now are so costly and take such a long time to build, that coincidental discoveries have become incredibly unlikely. You do not just tinker around with a 27 kilometer particle collider.

This means we have to look at the other type of breakthrough, where a theoretical prediction turned out to be correct. Think of Einstein and Dirac and of Higgs and the others who predicted the Higgs boson. What did these correct predictions have in common?

They have in common that they were based on theoretical advances which resolved an inconsistency in the then existing theories. What I mean by inconsistency here is an internal logical disagreement. Therefore, the conclusion I draw from looking at the history of physics is that we should stop trying to make our theories prettier, and instead focus on solving the real problems with these theories.

Some of the inconsistencies in the current theories are the missing quantization of gravity, the measurement problem in quantum mechanics, some aspects of dark energy and dark matter, and some issues with quantum field theories.

I don’t think physicists have really understood what I told them, or maybe they don’t want to understand it. Most of them claim there is no problem, which is patently ridiculous, because everyone who follows popular science news knows that they have been producing loads of nonsense predictions for decades and nothing ever panned out. Clearly, something is going wrong there.

But what I have found very encouraging is the reaction of young physicists to the book, students and postdocs. They don’t want to repeat the mistakes of the past, and they are frequently asking for practical advice. Which I am happy to give, to the extent that I can. The young people give me hope that things will change, eventually, though it might take some time.

“Lost in Math” contains several interviews with key people in the field: Frank Wilczek, Steven Weinberg, Gian Francesco Giudice, who was head of the CERN theory division at the time, Garrett Lisi, George Ellis, and Chad Orzel. So you will not only get to hear my opinion, but also that of others. If you haven’t had a chance to read the hardcover, the paperback edition has just appeared, so check it out!