Saturday, February 27, 2021

Schrödinger’s Cat – Still Not Dead

[This is a transcript of the video embedded below.]


The internet, as we all know, was invented so we can spend our days watching cat videos, which is why this video is about the most famous of all science cats, Schrödinger’s cat. Is it really both dead and alive? If so, what does that mean? And what does recent research have to say about it? That’s what we’ll talk about today.

Quantum mechanics has struck physicists as weird ever since its discovery, more than a century ago. One especially peculiar aspect of quantum mechanics is that it forces you to accept the existence of superpositions. These are systems which can be in two states at the same time, until you make a measurement, which suddenly “collapses” the superposition into one definite measurement outcome.

The system here could be a single particle, like a photon, but it could also be a big object made of many particles. The thing is that in quantum mechanics, if two states exist separately, like an object being here and being there, then the superposition – that is the same object both here and there – must also exist. We know this experimentally, and I explained the mathematics behind this in an earlier video.

Now, you may think that being in a quantum superposition is something that only tiny particles can do. But these superpositions for large objects can’t be easily ignored, because you can take the tiny ones and amplify them to macroscopic size.

This amplification is what Erwin Schrödinger wanted to illustrate with a hypothetical experiment he came up with in 1935. In this experiment, a cat is in a box, together with a vial of poison, a trigger mechanism, and a radioactive atom. The nucleus of the atom has a fifty percent chance of decaying in a certain amount of time. If it decays, the trigger breaks the vial of poison, which kills the cat.

But the decay follows the laws of quantum physics. Before you measure it, the nucleus is both decayed and not decayed, and so, it seems that before one opens the box, the cat is both dead and alive. Or is it?
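Schematically, what quantum mechanics says is that before anyone opens the box, the whole system is in the entangled superposition

|Ψ⟩ = ( |nucleus not decayed⟩ |cat alive⟩ + |nucleus decayed⟩ |cat dead⟩ ) / √2,

and opening the box amounts to a measurement on this state.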

Well, that depends on your interpretation of quantum mechanics, that is, on what you think the mathematics means. In the most widely taught interpretation, the Copenhagen interpretation, the question of what state the cat is in before you measure it is simply meaningless. You’re not supposed to ask. The same is the case in all interpretations according to which quantum mechanics is a theory about the knowledge we have about a system, and not about the system itself.

In the many-worlds interpretation, in contrast, each possible measurement outcome happens in a separate universe. So, there’s a universe where the cat lives and one where the cat dies. When someone opens the box, that decides which universe they’re in. But as far as observations are concerned, the result is exactly the same as in the Copenhagen interpretation.

Pilot wave theory, which we talked about earlier, says that the cat is really always in only one state, you just don’t know which one it is until you look. The same is the case for spontaneous collapse models. In these models, the collapse of the wave-function is not merely an update of your knowledge when you open the box, but a physical process.

It’s no secret that I myself am signed up to superdeterminism, which means that the measurement outcome is partly determined by the measurement settings. In this case, the cat may start out in a superposition, but by the time you measure it, it has reached the state which you actually observe. So, there is no sudden collapse in superdeterminism, it’s a smooth, deterministic, and local process.

Now, one cannot experimentally tell apart interpretations of the same mathematics, but collapse models, superdeterminism, and, under certain circumstances, pilot wave theory make different predictions than Copenhagen or many worlds. So, clearly, one wants to do the experiment!

But. As you have undoubtedly noticed, cats are usually either dead or alive, not both. The reason is that even tiny interactions with a quantum system have the same effect as a measurement, and large objects, like cats, just constantly interact with something, like air or the cosmic background radiation. And that’s already sufficient to destroy a quantum superposition of a cat so quickly we’d never observe it. But physicists are trying to push the experimental boundary for bringing large objects into quantum states.

For example, in 2013, a team of physicists from the University of Calgary in Canada amplified a quantum superposition of a single photon. They first fired the photon at a partially silvered mirror, called a beam splitter, so that it became a superposition of two states: it passed through the mirror and also reflected back off it. Then they used one part of this superposition to trigger a laser pulse, which contains a whole lot of photons. Finally, they showed that the pulse was still in a superposition with the single photon. In another 2019 experiment, they amplified both parts of this superposition, and again they found that the quantum effects survived, for up to about 100 million photons.

Now, a group of 100 million photons is not a cat, but it is bigger than your standard quantum particle. So, some headlines referred to this as the “Schrödinger's kitten” experiment.

But just in case you think a laser pulse is a poor approximation for a cat, how about this. In 2017, scientists at the University of Sheffield put bacteria in a cavity between two mirrors and bounced light between the mirrors. The bacteria absorbed, emitted, and re-absorbed the light multiple times. The researchers could demonstrate that, this way, some of the bacteria’s molecules became entangled with the cavity, which is a special case of a quantum superposition.

However, a paper published the following year by scientists at Oxford University argued that the observations on the bacteria could also be explained without quantum effects. Now, this doesn’t mean that this is the correct explanation. Indeed, it doesn’t make much sense because we already know that molecules have quantum effects and they couple to light in certain quantum ways. However, this criticism demonstrates that it can be difficult to prove that something you observe is really a quantum effect, and the bacteria experiment isn’t quite there yet.

Let us then talk about a variant of Schrödinger’s cat that Eugene Wigner came up with in the nineteen-sixties. Imagine that this guy Wigner is outside the laboratory in which his friend just opens the box with the cat. In this case, not only would the cat be both dead and alive before the friend observes it, the friend would also both see a dead cat and see a live cat, until Wigner opens the door to the room where the experiment took place.

This sounds both completely nuts and like an unnecessary complication, but bear with me for a moment, because this is a really important twist on Schrödinger’s cat experiment. If you think that the first measurement, the friend observing the cat, actually resulted in a definite outcome, and that Wigner, outside the lab, just doesn’t know it, then, as long as the door is closed, you effectively have a deterministic hidden variable model for the second measurement. The result is already fixed, you just don’t know what it is. But we know that deterministic hidden variable models cannot reproduce the results of quantum mechanics, unless they are also superdeterministic.

Now, again, of course, you can’t actually do the experiment with cats and friends and so on because their quantum effects would get destroyed too quickly to observe anything. But recently a team at Griffith University in Brisbane, Australia, created a version of this experiment with several devices that measure, or observe, pairs of photons. As anticipated, the measurement result agrees with the predictions of quantum mechanics.

What this means is that one of the following three assumptions must be wrong:

1. No Superdeterminism.
2. Measurements have definite outcomes.
3. No spooky action at a distance.

The absence of superdeterminism is sometimes called “Free choice” or “Free will”, but really it has nothing to do with free will. Needless to say, I think what’s wrong is rejecting superdeterminism. But I am afraid most physicists presently would rather throw out objective reality. Which one are you willing to give up? Let me know in the comments.

As of now, scientists remain hard at work trying to unravel the mysteries of Schrödinger's cat. For example, a promising line of investigation that’s still in its infancy is to measure the heat of a large system to determine whether quantum superpositions can influence its behavior. You find references to that as well as to the other papers that I mentioned in the info below the video. Schrödinger, by the way, didn’t have a cat, but a dog. His name was Burschie.

Wednesday, February 24, 2021

What's up with the Ozone Layer?

[This is a transcript of the video embedded below.]

Without the ozone layer, life, as we know it, would not exist. Scientists therefore closely monitor how the ozone layer is doing. In the past years, two new developments have attracted their attention and concern. What have they found and what does it mean? That’s what we’ll talk about today.
 

First things first, ozone is a molecule made of three oxygen atoms. It’s unstable, and on the surface of Earth it decays quickly, on average within a day or so. For this reason, there’s very little ozone around us, and that’s good, because breathing in ozone is really unhealthy even in small doses.

But ozone is produced when sunlight hits the upper atmosphere, and it accumulates far up there in a region called the “stratosphere”. This “ozone layer” then absorbs much of the sun’s ultraviolet light. The protection we get from the ozone layer is super-important, because the energy of ultraviolet light is high enough to break molecular bonds. Ultraviolet light, therefore, can damage cells or their genetic code. This means that with exposure to ultraviolet light, the risk of cancer and other mutations increases significantly. I have explained radiation risk in more detail in an earlier video, so check this out for more.
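For a rough sense of scale: a photon of ultraviolet light with a wavelength of, say, 300 nanometers carries an energy of about

E = hc/λ ≈ (1240 eV·nm) / (300 nm) ≈ 4 electron volts,

which is comparable to typical molecular bond energies of a few electron volts, while visible light, at 500 to 700 nanometers, mostly falls short of that.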

You have probably all heard of the ozone “hole” that was first discovered in the 1980s. This ozone hole is still with us today. It was caused by human emissions of ozone-depleting substances, notably chlorofluorocarbons – CFCs for short – that were used, among other things, in refrigerators and spray cans. CFCs have since been banned, but it will take at least several more decades for the ozone layer to completely recover. With that background knowledge, let’s now look at the two new developments.

What’s new?

The first news is that last year we saw a large and pronounced ozone hole over the North Pole, in addition to the “usual” one over the South Pole. This has happened before, but it’s still an unusual event. That’s because the creation of an ozone hole is driven by supercooled droplets of water and nitric acid which are present in polar stratospheric clouds, that is, clouds that form over the poles in the stratosphere. But these clouds can only form if it’s cold enough, and I mean really cold, below about −108 °F or −78 °C. Therefore, the major reason that ozone holes form more readily over the South Pole than over the North Pole is quite simply that the South Pole is, on average, colder.

Why is the South Pole colder? Loosely speaking it’s because there are fewer high mountains in the Southern hemisphere than in the Northern hemisphere. And because of this, wind circulations around the South Pole tend to be more stable; they can lock in air, which then cools over the dark polar winter months. Air over the North Pole, in contrast, mixes more efficiently with warmer air from the mid latitudes.

On occasion, however, cold air gets locked in over the North Pole as well, which creates conditions similar to those at the South Pole. This is what happened in the Spring of 2020. For five weeks in March and early April, the North Pole saw the biggest arctic ozone hole on record, surrounded by a stable wind circulation called a polar vortex.

Now, we have all witnessed in the past decade that climate change alters wind patterns in the Northern Hemisphere, which gives rise to longer heat waves in the summer. This brings up the question whether climate change was one of the factors contributing to the northern ozone hole and whether we, therefore, must expect it to become a recurring event.

This question was studied in a recent paper by Martin Dameris and coauthors, for the full reference, please check the info below the video. Their conclusion is that, so far, observations of the northern ozone hole are consistent with it just being a coincidence. However, if coincidences pile upon coincidences, they make a trend. And so, researchers are now waiting to see whether the hole will return in the Spring of 2021 or in the coming years.

The second new development is that the ozone layer over the equator isn’t recovering as quickly as scientists expected. Indeed, above the equator, the amount of ozone in the lower parts of the stratosphere seems to be declining, though that trend is, for now, offset by the recovery of ozone in the upper parts of the stratosphere, which proceeds as anticipated.

The scientists who work on this previously considered various possible reasons, from data problems to illegal emissions of ozone-depleting substances. But the data have held up, and while we now know illegal emissions are indeed happening, these do not suffice to explain the observations.

Instead, further analysis indicates that the depletion of ozone in the lower stratosphere over the equator seems to be driven, again, by wind patterns. Earth’s ozone is itself created by sunlight, which is why most of it forms over the equator, where sunlight is most intense. The ozone is then transported from the equatorial regions towards the poles by a wind cycle – called the “Brewer-Dobson circulation” – in which air rises over the equator and comes down again at mid to high latitudes. With global warming, that circulation may become more intense, so that more ozone is redistributed from the equator to higher latitudes.

Again, though, the strength of this circulation also changes just by random chance. It’s therefore presently unclear whether the observations merely show a temporary fluctuation or are indicative of a trend. However, a recent analysis of different climate-chemistry models by Simone Dietmüller et al shows that human-caused carbon dioxide emissions contribute to the trend of less ozone over the equator and more ozone in the mid-latitudes, and the trend is therefore likely to continue. I have to warn you though that this paper has not yet passed peer review.

Before we talk about what this all means, I want to thank my tier four supporters on Patreon. Your help is greatly appreciated. And you, too, can help us produce videos by supporting us on Patreon. Now let’s talk about what this news from the ozone layer means.

You may say, ah, so what. Tell the people in the tropics to put on more sun-lotion and those in Europe to take more vitamin D. This is a science channel, and I’ll not tell anyone what they should or shouldn’t worry about, that’s your personal business. But to help you gauge the present situation, let me tell you an interesting bit of history.

The Montreal protocol from 1987, which regulates the phasing out of ozone depleting substances, was passed quickly after the discovery of the first ozone hole. It is often praised as a milestone of environmental protection, the prime example that everyone points to for how to do it right. But I think the Montreal Protocol teaches us a very different lesson.

That’s because scientists knew already in the 1970s, long before the first ozone hole was discovered, that chlorofluorocarbons would deplete the ozone layer. But they thought the effect would be slow and global. When the ozone hole over the South Pole was discovered by the British Antarctic Survey in 1985, that came as a complete surprise.

Indeed, fun fact, it later turned out that American satellites had measured the ozone hole years before the British Survey did, but since the data were so far off the expected value, they were automatically overwritten by software.

The issue was that at the time the effects of polar stratospheric clouds on the ozone layer were poorly understood, and the real situation turned out to be far worse than scientists thought.

So, for me, the lesson from the Montreal Protocol is that we’d be fools to think that we now have all pieces in place to understand our planet’s climate system. We know we’re pushing the planet into regimes that scientists poorly understand and chances are that this will bring more unpleasant surprises.

So what do those changes in the ozone layer mean? They mean we have to pay close attention to what’s happening.

Saturday, February 20, 2021

The Science of Making Rain

[This is a transcript of the video embedded below]


Wouldn’t it be great if we could control the weather? I am sure people have thought about this ever since there’ve been people having thoughts. But what are scientists thinking about this today? In this video we’ll look at the best understood case of weather control, that’s making rain by seeding clouds. How is cloud seeding supposed to work? Does it work? And if it works, is it a good idea? That’s what we’ll talk about today.

First things first, what is cloud seeding? Cloud seeding is a method for increasing precipitation, which is a fancy word for water that falls out of the sky in any form: rain, snow, hail, and so on. One seeds a cloud by spraying small particles into it, which encourages the cloud to shed precipitation. At least that’s the idea. Cloud seeding does not actually create new clouds. It’s just a method to get water out of already existing clouds. So you can’t use it to turn a desert into a forest – the water needs to be in the air already.

Cloud seeding was discovered, as so many things, accidentally. In nineteen-forty-six, a man named Vincent Schaefer was studying clouds in a box in his laboratory, but it was too warm for his experiment to work. So he put dry ice into his cloud box, that’s carbon dioxide frozen at about minus eighty degrees Celsius. He then observed that small grains of dry ice would rapidly grow to the size of snowflakes.

Schaefer realized this happened because the water in the clouds was supercooled, that means below freezing point, but still liquid. This is an energetically unstable state. If one introduces tiny amounts of crystals into a supercooled cloud, the water droplets will attach to the crystals immediately and freeze, so the crystals grow quickly until they are heavy enough to fall down. Schaefer saw this happening when sprinkles of solid dry ice fell into his box. He had seeded the first cloud. In the following years he’d go on to test various methods of cloud seeding.

Today scientists distinguish two different ways of seeding clouds: either by growing ice crystals, as Schaefer did, which is called glaciogenic seeding, or by growing water droplets, which is called hygroscopic seeding.

How does it work?

The method that Schaefer used is today more specifically called the “glaciogenic static mode”, static because it doesn’t rely on circulation within the cloud. There’s also a glaciogenic dynamic mode which works somewhat differently.

In the dynamic mode, one exploits the fact that the conversion of the supercooled water into ice releases heat, and that heat creates an updraft. This allows the seeds to reach more water droplets, so the cloud grows, and eventually more snow falls. One of the substances commonly used for this is silver iodide, though there are a number of different organic and inorganic substances that have proved to work.

For hygroscopic seeding, one uses particles that can absorb water and serve as condensation seeds, turning water vapor into large drops that become rain. The substances used for this are typically some type of salt.

How do you do it?

Seeding clouds in a box in the laboratory is one thing, seeding a real cloud another thing entirely. To seed a real cloud, one either uses airplanes that spray the seeding particles directly into the cloud, or targets the cloud with a rocket that gives off the particles, or one uses a ground-based generator that releases the particles, mixed with hot air, so they slowly rise up into the atmosphere. They do this for example in Colorado and other winter tourism areas, and claim that it can lead to several inches more snow.

But does it work?

It’s difficult to test if cloud seeding actually works. The issue is, as I said, seeding doesn’t actually create clouds, it just encourages clouds to release snow or rain at a particular time and place. But how do you know if it wouldn’t have rained anyway?

After Schaefer’s original work in the nineteen-fifties, the United States launched a research program on cloud seeding, and so did several other countries, including the UK, Canada, India, and Australia. But evidence that cloud seeding works didn’t turn up for a long time, and so, in the late nineteen-eighties, funding for this research area drastically declined. That didn’t deter people from trying to seed clouds, though. Despite the absence of evidence, quite a few winter sport areas used cloud seeding in an attempt to increase snowfall.

But beginning around the turn of the millennium, interest in cloud seeding was revived by several well-funded studies in the United States, Australia, Japan, and China, to name just a few. Quite possibly this interest was driven by the increasing risk of drought due to climate change. And today, scientists have much better technology to figure out whether cloud seeding works, and so the new studies could finally deliver evidence that it does work.

Some of the most convincing studies used radar measurements to detect ice crystals in clouds after a plane went through and distributed the seeds. This was done for example in a 2011 study in Australia and also in a 2018 study in the northern part of the United States.

These radar measurements are a direct signature of seeding, glaciogenic seeding in this case. The researchers can tell that the ice crystals are caused by the seeding because the crystals that appear in the radar signal replicate the trajectory of the seeding plane, downwind.

From the radar measurements they can also tell that the concentration of ice crystals is two to three orders of magnitude larger than those in neighboring, not-seeded areas. And, they know that the newly formed ice-crystals grow, because the amount of radar signal that’s reflected depends on the size of the particle.

This and similar studies also contained several cross checks. For example, they seeded some areas of the clouds with particles that are known to grow ice crystals and others with particles that aren’t expected to do that. And they detected ice formation only for the particles that act as seeds. They also checked that the resulting snowfall is really the one that came from the seeding. One can do this by analyzing the snow for traces of the substance used for seeding.

Besides this, there are also about a dozen studies that statistically evaluated whether there are changes in precipitation from glaciogenic static seeding. These come from research programs in the United States, Australia, and Japan. To get statistics, they monitor the unseeded areas surrounding the seeded region as an estimate of the natural precipitation. It’s not a perfect method of course, but done often enough and for long enough periods, it gives a reasonable assessment of the increase in precipitation due to seeding.

These studies typically found an increase in precipitation of around 15% and estimated the probability that this increase happened just coincidentally at 5%.
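To give you a feeling for what such a statistical evaluation looks like, here is a minimal sketch in Python. The precipitation numbers are made up for illustration only, and the permutation test is just one simple way to attach a probability to such a comparison; the actual studies use long-term records and more careful matching of target and control areas.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical seasonal precipitation totals (mm); purely illustrative numbers,
# not data from any of the studies mentioned in the text.
seeded  = np.array([112, 98, 130, 105, 121, 117, 109, 126])
control = np.array([ 95, 102, 108,  91, 104,  99, 110,  97])

observed_increase = seeded.mean() / control.mean() - 1.0

# Permutation test: how often does a random relabeling of the measurements
# produce an increase at least as large as the observed one?
pooled = np.concatenate([seeded, control])
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    fake_increase = pooled[:len(seeded)].mean() / pooled[len(seeded):].mean() - 1.0
    if fake_increase >= observed_increase:
        count += 1

print(f"observed increase: {observed_increase:.1%}")
print(f"probability of getting this by chance: {count / n_perm:.3f}")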

So, at least for the seeding of ice crystals, there is now pretty solid evidence that it works better than a rain dance. For the other types of seeding it’s still unclear whether it’s efficient.

Please check the information below the video for references to the papers.

The world’s biggest weather modification program is China’s. The Chinese government employs an estimated 35,000 people to this end already, and in December 2020 they announced they’ll increase investments into their weather modification program five-fold.

Now, as we have seen, cloud seeding isn’t terribly efficient and for it to work, the clouds have to be already there in the first place. Nevertheless, there’s an obvious worry here. If some countries can go and make clouds rain off over their territory, that might leave less water for neighboring countries.

And the bad news is, there aren’t currently any international laws regulating this. Most countries have regulations for what you are allowed to spray into the air or how much, but cloud seeding is mostly legal. There is an international convention, the Environmental Modification Convention, that seventy-eight states have signed, which prohibits “the military and hostile use of environmental modification techniques.” But this can’t in any clear way be applied to cloud seeding.

I think that now that we know cloud seeding does work, we should think about how to regulate it, before someone actually gets good at it. Controlling the weather is an ancient dream, but, thanks to Vincent Schaefer, maybe it won’t remain a dream forever. When he died in 1993, his obituary in the New York Times said “He was hailed as the first person to actually do something about the weather and not just talk about it”.

Saturday, February 13, 2021

The Simulation Hypothesis is Pseudoscience

[This is a transcript of the video embedded below.]


I quite like the idea that we live in a computer simulation. It gives me hope that things will be better on the next level. Unfortunately, the idea is unscientific. But why do some people believe in the simulation hypothesis? And just exactly what’s the problem with it? That’s what we’ll talk about today.

According to the simulation hypothesis, everything we experience was coded by an intelligent being, and we are part of that computer code. That we live in some kind of computation is, in and of itself, not unscientific. For all we currently know, the laws of nature are mathematical, so you could say the universe is really just computing those laws. You may find this terminology a little weird, and I would agree, but it’s not controversial. The controversial bit about the simulation hypothesis is that it assumes there is another level of reality where someone or some thing controls what we believe are the laws of nature, or even interferes with those laws.

The belief in an omniscient being that can interfere with the laws of nature, but for some reason remains hidden from us, is a common element of monotheistic religions. But those who believe in the simulation hypothesis argue they arrived at their belief by reason. The philosopher Nick Bostrom, for example, claims it’s likely that we live in a computer simulation, based on an argument that, in a nutshell, goes like this. If there are a) many civilizations, and these civilizations b) build computers that run simulations of conscious beings, then c) there are many more simulated conscious beings than real ones, so you are likely to live in a simulation.
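To see why the conclusion is supposed to follow, a rough count helps. This is my paraphrase of the counting behind the argument, not Bostrom’s own formulation: suppose there are N civilizations and each runs M simulations, each containing about as many conscious beings as a real civilization. Then the fraction of all conscious beings that are simulated is roughly

N·M / (N·M + N) = M / (M + 1),

which is close to one as soon as M is large. And if you can’t tell from the inside whether you are simulated, you should then bet that you are.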

Elon Musk is among those who have bought into it. He too has said “it’s most likely we’re in a simulation.” And even Neil deGrasse Tyson gave the simulation hypothesis “better than 50-50 odds” of being correct.

Maybe you’re now rolling your eyes because, come on, let the nerds have some fun, right? And, sure, some part of this conversation is just intellectual entertainment. But I don’t think popularizing the simulation hypothesis is entirely innocent fun. It’s mixing science with religion, which is generally a bad idea, and, really, I think we have better things to worry about than that someone might pull the plug on us. I dare you!

But before I explain why the simulation hypothesis is not a scientific argument, I have a general comment about the difference between religion and science. Take an example from Christian faith, like Jesus healing the blind and lame. It’s a religious story, but not because it’s impossible to heal blind and lame people. One day we might well be able to do that. It’s a religious story because it doesn’t explain how the healing supposedly happens. The whole point is that the believers take it on faith. In science, in contrast, we require explanations for how something works.

Let us then have a look at Bostrom’s argument. Here it is again. If there are many civilizations that run many simulations of conscious beings, then you are likely to be simulated.

First of all, it could be that one or both of the premises is wrong. Maybe there aren’t any other civilizations, or they aren’t interested in simulations. That wouldn’t make the argument wrong of course, it would just mean that the conclusion can’t be drawn. But I will leave aside the possibility that one of the premises is wrong, because really I don’t think we have good evidence for one side or the other.

The point I have seen people criticize most frequently about Bostrom’s argument is that he just assumes it is possible to simulate human-like consciousness. We don’t actually know that this is possible. However, in this case it’s the assumption that it is not possible which would require explanation. That’s because, for all we currently know, consciousness is simply a property of certain systems that process large amounts of information. It doesn’t really matter what physical basis this information processing runs on. Could be neurons or could be transistors, or it could be transistors believing they are neurons. So, I don’t think simulating consciousness is the problematic part.

The problematic part of Boström’s argument is that he assumes it is possible to reproduce all our observations using not the natural laws that physicists have confirmed to extremely high precision, but using a different, underlying algorithm, which the programmer is running. I don’t think that’s what Bostrom meant to do, but it’s what he did. He implicitly claimed that it’s easy to reproduce the foundations of physics with something else.

But nobody presently knows how to reproduce General Relativity and the Standard Model of particle physics from a computer algorithm running on some sort of machine. You can approximate the laws that we know with a computer simulation – we do this all the time – but if that were how nature actually worked, we could see the difference. Indeed, physicists have looked for signs that natural laws really proceed step by step, like in a computer code, but their search has come up empty-handed. It’s possible to tell the difference because attempts to algorithmically reproduce natural laws are usually incompatible with the symmetries of Einstein’s theories of special and general relativity. I’ll leave you a reference in the info below the video. The bottom line is, it’s not easy to outdo Einstein.

It also doesn’t help by the way if you assume that the simulation would run on a quantum computer. Quantum computers, as I have explained earlier, are special purpose machines. Nobody currently knows how to put General Relativity on a quantum computer.

A second issue with Bostrom’s argument is that, for it to work, a civilization needs to be able to simulate a lot of conscious beings, and these conscious beings will themselves try to simulate conscious beings, and so on. This means you have to compress the information that we think the universe contains. Bostrom therefore has to assume that it’s somehow possible to not care much about the details in some parts of the world where no one is currently looking, and to just fill them in in case someone looks.

Again though, he doesn’t explain how this is supposed to work. What kind of computer code can actually do that? What algorithm can identify conscious subsystems and their intentions and then quickly fill in the required information without ever producing an observable inconsistency? That’s a much more difficult issue than Bostrom seems to appreciate. You cannot in general just throw away physical processes on short distances and still get the long distances right.

Climate models are an excellent example. We don’t currently have the computational capacity to resolve distances below something like 10 kilometers or so. But you can’t just throw away all the physics below this scale. This is a non-linear system, so the information from the short scales propagates up into large scales. If you can’t compute the short-distance physics, you have to suitably replace it with something. Getting this right even approximately is a big headache. And the only reason climate scientists do get it approximately right is that they have observations which they can use to check whether their approximations work. If you only have a simulation, like the programmer in the simulation hypothesis, you can’t do that.
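As a toy illustration of this point – that in a nonlinear system the small scales feed back into the large-scale behavior – here is a sketch that integrates the Lorenz system, a classic simplified model of convection, once with a fine step size and once with a coarser one. The two runs soon end up in completely different places. This is of course not a climate model, just a minimal example of how discarding small-scale detail changes the outcome.

import numpy as np

def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One Euler step of the Lorenz system.
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

def integrate(dt, t_end=10.0):
    state = np.array([1.0, 1.0, 1.0])
    for _ in range(int(t_end / dt)):
        state = lorenz_step(state, dt)
    return state

fine = integrate(dt=0.001)   # resolves the fast dynamics
coarse = integrate(dt=0.01)  # same equations, coarser resolution
print("fine resolution:  ", fine)
print("coarse resolution:", coarse)  # ends up somewhere else entirely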

And that’s my issue with the simulation hypothesis. Those who believe it make, maybe unknowingly, really big assumptions about what natural laws can be reproduced with computer simulations, and they don’t explain how this is supposed to work. But finding alternative explanations that match all our observations to high precision is really difficult. The simulation hypothesis, therefore, just isn’t a serious scientific argument. This doesn’t mean it’s wrong, but it means you’d have to believe it because you have faith, not because you have logic on your side.

Saturday, February 06, 2021

Don't Fall for Quantum Hype

[This is a transcript of the video embedded below.]

Quantum technology is presently amazingly popular. The United States and the United Kingdom have made it a “national initiative”, the European Union has a quantum technology “flagship”, India has a “national mission”, and China has announced they’ll put quantum technology into their next five-year plan. What is “quantum technology” and what impact will it have on our lives? That’s what we will talk about today.


The quantum initiatives differ somewhat from nation to nation, but they usually contain research programs on four key topics that I will go through in this video. That’s: quantum computing, the quantum internet, quantum metrology, and quantum simulations.

We’ll start with quantum computing.

Quantum computing is one of the most interesting developments in the foundations of physics right now. I have talked about quantum computing in more detail in an earlier video, so check this out for more. In brief, quantum computers can speed up certain types of calculations dramatically. A quantum computer can do this because it does not work with “bits” that have values of either 0 or 1, but with quantum bits – “qbits” for short – that can be entangled, and can take on any value in between 0 and 1.

It’s not an accident that I say “between” instead of “both”, I think this describes the mathematics more accurately. Either way, of course, these are just attempts to put equations into words and the words will in the best case give you a rough idea of what’s really going on. But the bottom line is that you can process much more information with qbits than with normal bits. The consequence is that quantum computers can do certain calculations much faster than conventional computers. This speed-up only works for certain types of calculations though. So, quantum computers are special purpose machines.
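In the usual notation, a single qbit is described by a state

|ψ⟩ = α|0⟩ + β|1⟩   with   |α|² + |β|² = 1,

where α and β are complex numbers. A register of n qbits needs 2ⁿ such numbers to describe it, so 50 qbits already correspond to about 10¹⁵ amplitudes. That exponential growth is, roughly speaking, what it means to process much more information with qbits than with normal bits.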

The theory behind quantum computing is well understood and uncontroversial. Quantum computers already exist and so far they work as predicted. The problem with quantum computers is that for them to become commercially useful, you need to be able to bring a large number of qbits into controllable quantum states, and that’s really, really difficult.

Estimates say the number we need to reach is roughly a million; the details depend on the quality of the qbits and the problem you are trying to solve. The status of research is presently at about 50 qbits. Yes, that’s a good start, but it’s a long way to go to a million, and there’s no reason to expect anything resembling Moore’s law will help us here, because we’re already working at the limit.

So, the major question for quantum computing is not “does it work”. We know it works. The question is “Will it scale”?

To me the situation for quantum computing today looks similar to the situation for nuclear fusion 50 years ago. 50 years ago, physicists understood how nuclear fusion works just fine, and they had experimentally checked that their theories were correct. The problem was “just” to make the technology large and still efficient enough to actually be useful. And, as you all know, that’s still the problem today.

Now, I am positive that we will eventually use both nuclear fusion and quantum computing in everyday life. But keep in mind that technology enthusiasts tend to be overly optimistic in their predictions for how long it will take for technology to become useful.

The Quantum Internet

The quantum internet refers to information transmitted with quantum effects. Most importantly, this means the quantum internet uses quantum cryptography as a security protocol. Quantum cryptography is a method to make information transfer secure by exploiting the fact that in quantum mechanics, a measurement irreversibly changes the state of a quantum particle. This means if you encode a message suitably with quantum particles, you can tell whether it has been intercepted by a hacker, because the hacker’s measurement would change the behavior of the particles. That doesn’t prevent hacking, but it means you’d know when it happens.
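The best-known protocol of this type is BB84. Here is a toy simulation of the idea, replacing the quantum particles with classical random numbers and assuming an eavesdropper who measures every qbit in a randomly chosen basis and resends it. In the positions where sender and receiver happened to use the same basis, such an attack flips about a quarter of the bits, and that’s the signature they look for.

import random

random.seed(1)
n = 2000

# Alice prepares random bits in random bases (0 = rectilinear, 1 = diagonal).
alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.randint(0, 1) for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Same basis: the bit is read off correctly.
    # Different basis: the outcome is random, and the particle is left
    # in the state that was measured.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

# Eve intercepts every particle, measures in a random basis, and resends.
eve_bases = [random.randint(0, 1) for _ in range(n)]
eve_bits = [measure(b, ab, eb)
            for b, ab, eb in zip(alice_bits, alice_bases, eve_bases)]

# Bob measures what Eve resent, in his own random bases.
bob_bases = [random.randint(0, 1) for _ in range(n)]
bob_bits = [measure(b, eb, bb)
            for b, eb, bb in zip(eve_bits, eve_bases, bob_bases)]

# Alice and Bob publicly compare bases and keep only the matching positions.
sifted = [(a, b)
          for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]
error_rate = sum(a != b for a, b in sifted) / len(sifted)
print(f"error rate in the sifted key: {error_rate:.1%}")  # ~25% with Eve, ~0% without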

I made an entire video about how quantum cryptography works, so check this out if you want to know more. Today I just want to draw your attention to two points that the headlines tend to get wrong.

First, you cannot transfer information faster than the speed of light with the quantum internet or with any other quantum effect. That quantum mechanics respects the speed of light limit is super-basic knowledge that you’d think every science writer knows about. Unfortunately, this is not the case. You see this over and over again in the headlines, that the quantum internet can supposedly beat the speed of light limit. It cannot. That’s just wrong.

And no, this does not depend on your interpretation of quantum mechanics, it’s wrong either way you look at it. No, this is not what Einstein meant with “spooky action at a distance”. It’s really just wrong. Quantum mechanics does not allow you to send information faster than the speed of light.

This isn’t the major issue I have with the coverage of the quantum internet though, because that’s obviously wrong and, really, what do you expect from the Daily Mail. No, the major issue I have is that almost all of the articles mislead the audience about the relevance of the quantum internet.

It’s not explicitly lying, but it’s lying by omission. Here is a recent example from Don Lincoln who does exactly this, and pretty much every article you’ll read about the quantum internet goes somewhat like this.

First, they will tell you that quantum computers, if they reach a sufficiently large number of qbits, can break the security protocols that are currently being used on the internet quickly, which is a huge problem for national security and privacy. Second, they will tell you that the quantum internet is safe from hacking by quantum computers.

Now, these two statements separately are entirely correct. But there’s an important piece of information missing between them, which is that we have security protocols that do not require quantum technology but are safe from quantum computers nevertheless. They are just presently not in use. These security protocols that, for all we currently know, cannot be broken even by quantum computers are, somewhat confusingly, called “post-quantum cryptography” or, in somewhat better terminology, quantum-safe cryptography.

This means that we do not need the quantum internet to be safe from quantum computers. We merely need to update the current security protocols, and this update is already under way. For some reason, the people who work on quantum things don’t like to draw attention to that.

Quantum metrology

Quantum metrology is a collection of techniques to improve measurements by help of quantum effects. The word “metrology” means that this research is about measurement; it’s got nothing to do with meteorology, different thing entirely. Quantum metrology has recently seen quite a few research developments that I expect to become useful soon in areas like medicine or material science. That’s because one of the major benefits of quantum measurements is that they can make do with very few particles, and that means minimal damage to the sample.

Personally I think quantum metrology is the most promising part of the quantum technology package and the one that we’re most likely to encounter in new applications soon.

I made a video especially about quantum metrology earlier, so check this out for more detail.

Quantum Simulations

Quantum simulations are a scientifically extremely interesting development that I think has been somewhat underappreciated. In a quantum simulation, you try to understand a complicated system whose properties you cannot calculate by reproducing its behavior as well as you can with a different quantum system that you can control better, so you can learn more about it.

This is actually something I have worked on myself for some years, in particular the possibility that you can simulate black holes with superfluids. I will tell you more about this some other time, for today let me just say that I think this is a rather dramatic shift in the foundations of physics because it allows you to take out mathematics as the middleman. Instead of modeling a system with mathematics, either with a pen on paper or with computer code, you model it directly with another system without having to write down equations in one form or another.

Now, quantum simulations are really cool from the perspective of basic research, because they allow you to learn a great deal. You can for example simulate particles similar to the Higgs or certain types of neutrinos, and learn something about their behavior, which you couldn’t do in any other way.

However, quantum simulations are unlikely to have technological impact any time soon, and, what’s worse, they have been oversold by some people in the community. Especially all the talk about simulating wormholes is nonsense. These simulated “wormholes” have nothing in common with actual wormholes which, in case you missed it, we have good reason to think do not exist in the first place. I am highlighting the wormhole myth because, to my shock, I saw it appear in a White House report. So, quantum simulations are cool for the most part, but if someone starts babbling about wormholes, that is not serious science.

I hope this quick summary helps you make sense of all the quantum stuff in the headlines.

Saturday, January 30, 2021

Has Protein Folding Been Solved?

[This is a transcript of the video embedded below.]


Protein folding is one of the biggest, if not THE biggest problem, in biochemistry. It’s become the holy grail of drug development. Some of you may even have folded proteins yourself, at least virtually, with the crowd-science app “Foldit”. But then late last year the headlines proclaimed that protein folding was “solved” by artificial intelligence. Was it really solved? And if it was solved, what does that mean? And, erm, what was the protein folding problem again? That’s what we will talk about today.

Proteins are one of the major building blocks of living tissue, for example muscles, which is why you may be familiar with “proteins” as one of the most important nutrients in meat.

But proteins come in a bewildering number of variants and functions. They are everywhere in biology, and are super-important: Proteins can be antibodies that fight against infections, proteins allow organs to communicate between each other, and proteins can repair damaged tissue. Some proteins can perform amazingly complex functions. For example, pumping molecules in and out of cells, or carrying substances along using motions that look much like walking.

But what’s a protein to begin with? Proteins are basically really big molecules. Somewhat more specifically, proteins are chains of smaller molecules called amino acids. But long and loose chains of amino acids are unstable, so proteins fold and curl until they reach a stable, three-dimensional, shape. What is a protein’s stable shape, or stable shapes, if there are several? This is the “protein folding problem”.

Understanding how proteins fold is important because the function of a protein depends on its shape. Some mutations can lead to a change in the amino acid sequence of a protein which causes the protein to fold the wrong way. It can then no longer fulfil its function and the result can be severe illness. There are many diseases which are caused by improperly folded proteins, for example, type two diabetes, Alzheimer’s, Parkinson’s, and also ALS, that’s the disease that Stephen Hawking had.

So, understanding how proteins fold is essential to figuring out how these diseases come about, and how to maybe cure them. But the benefit of understanding protein folding goes beyond that. If we knew how proteins fold, it would generally be much easier to synthetically produce proteins with a desired function.

But protein folding is a hideously difficult problem. What makes it so difficult is that there’s a huge number of ways proteins can fold. The amino acid chains are long and they can fold in many different directions, so the possibilities increase exponentially with the length of the chain.

Cyrus Levinthal estimated in the nineteen-sixties that a typical protein could fold in more than ten to the one-hundred-forty ways. Don’t take this number too seriously though. The number of possible foldings actually depends on the size of the protein. Small proteins may have as “few” as ten to the fifty, while some large ones can have a mind-blowing ten to the three-hundred possible foldings. That’s almost as many vacua as there are in string theory!
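Where do numbers of this size come from? One common back-of-the-envelope version – the exact assumptions vary between accounts – takes a chain of about 150 amino acids, gives each one two rotatable backbone angles, and allows roughly three stable values per angle. A quick sketch:

import math

# Levinthal-style estimate. The numbers below are illustrative assumptions,
# not measured values: ~150 residues, 2 rotatable backbone angles each,
# ~3 stable values per angle.
residues = 150
angles_per_residue = 2
values_per_angle = 3

n_foldings = values_per_angle ** (residues * angles_per_residue)
print(f"roughly 10^{round(math.log10(n_foldings))} possible foldings")  # ~10^143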

So, just trying out all possible foldings is clearly not feasible. We’d never figure out which one is the most stable one.

The problem is so difficult, you may think it’s unsolvable. But not all is bad. Scientists found out in the nineteen-fifties that when proteins fold under controlled conditions, for example in a test tube, then the shape into which they fold is pretty much determined by the sequence of amino acids. And even in a natural environment, rather than a test tube, this is usually still the case.

Indeed, the Nobel Prize for Chemistry was awarded for this in 1972. Before that, one could have worried that proteins have a large number of stable shapes, but that doesn’t seem to be the case. This is probably because natural selection preferentially made use of large molecules which reliably fold the same way.

There are some exceptions to this. For example prions, like the ones that are responsible for mad cow disease, have several stable shapes. And proteins can change shape if their environment changes, for instance when they encounter certain substances inside a cell. But mostly, the amino acid sequence determines the shape of the protein.

So, the protein folding problem comes down to the question: If you have the amino-acid sequence, can you tell me what’s the most stable shape?

How would one go about solving this problem? There are basically two ways. One is that you can try to come up with a model for why proteins fold one way and not another. You probably won’t be surprised to hear that I had quite a few physicist friends who tried their hands at this. In physics we call that a “top down” approach. The other thing you can do is what we call a “bottom up” approach. This means you observe how a large number of proteins fold and hope to extract regularities from this.

Either way, to get anywhere with protein folding you first of all need examples of what folded proteins look like. One of the most important methods for this is X-ray crystallography. For this, one fires beams of X-rays at crystallized proteins and measures how the rays scatter off. The resulting pattern depends on the position of the different atoms in the molecule, from which one can then infer the three-dimensional shape of the protein. Unfortunately, some proteins take months or even years to crystallize. But a new method has recently much improved the situation by using electron microscopy on deep-frozen proteins. This so-called cryo-electron microscopy gives much better resolution.

In 1994, to keep track of progress in protein folding predictions, researchers founded an initiative called the Critical Assessment of Protein Structure Prediction, CASP for short. CASP is a competition among different research teams which try to predict how proteins fold. The teams are given a set of amino acid sequences and have to submit which shape they think the protein will fold into.

This competition takes place every two years. It uses protein structures that were just experimentally measured, but have not yet been published, so the competing teams don’t know the right answer. The predictions are then compared with the real shape of the protein, and get a score depending on how well they match. This method for comparing the predicted with the actual three-dimensional shape is called a Global Distance Test, and it’s a percentage. 0% is a total failure, 100% is the high score. In the end, each team gets a complete score that is the average over all their prediction scores.
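The transcript doesn’t spell out how the Global Distance Test is computed. The variant commonly used in CASP, called GDT_TS, superimposes the predicted structure on the measured one and then averages the percentage of amino acid positions that land within 1, 2, 4, and 8 Ångström of their true place. A minimal sketch of that scoring step (leaving out the superposition itself):

import numpy as np

def gdt_ts(distances_in_angstrom):
    # Average, over the cutoffs 1, 2, 4, and 8 Angstrom, of the percentage of
    # residue positions whose predicted location lies within the cutoff of the
    # true one. The distances are assumed to be measured after superposition.
    d = np.asarray(distances_in_angstrom)
    return 100 * np.mean([np.mean(d <= cutoff) for cutoff in (1.0, 2.0, 4.0, 8.0)])

# Hypothetical per-residue distances for a ten-residue toy protein:
print(gdt_ts([0.5, 0.8, 1.5, 2.2, 3.0, 3.9, 5.0, 7.5, 9.0, 12.0]))  # 47.5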

For the first 20 years, progress in the CASP competition was slow. Then, researchers began putting artificial intelligence on the task. Indeed, in last year’s competition, about half of the teams used artificial intelligence or, more specifically, deep learning. Deep learning uses neural networks. It is software that is trained on large sets of data and learns to recognize patterns, which it then extrapolates from. I explained this in more detail in an earlier video.

Until some years ago, no one in the CASP competition scored more than 40%. But in the last two installments of the competition, one team has reached remarkable scores. This is DeepMind, a British company that was acquired by Google in twenty-fourteen. It’s the same company that is behind the computer program AlphaGo, which in twenty-fifteen was the first to beat a professional Go player.

DeepMind’s program for protein folding is called AlphaFold. In twenty-eighteen, AlphaFold got a score of almost 60% in the CASP competition, and in 2020, the update AlphaFold2 reached almost 90%.

The news made big headlines some months ago. Indeed, many news outlets claimed that AlphaFold2 solved the protein folding problem. But did it?

Critics have pointed out that 90% still means a significant failure rate and that some of the most interesting cases are the ones for which AlphaFold2 did not do well, such as complexes of proteins, called oligomers, in which several amino acid chains interact. There is also the general problem with artificial intelligence, which is that it can only learn to extract patterns from data it has been trained on. This means the data has to exist in the first place. If there are entirely new functions that don’t make an appearance in the data set, they may remain undiscovered.

But, well. I sense a certain grumpiness here from people who are afraid they’ll be rendered obsolete by software. It’s certainly true that AlphaFold’s 2020 success won’t be the end of the story. Much needs to be done, and of course one still needs data, meaning measurements, to train artificial intelligence on.

Still, I think this is a remarkable achievement and amazing progress. It means that, in the future, protein folding predictions by artificially intelligent software may save scientists many time-consuming and expensive experiments. This could help researchers develop proteins that have specific functions. Some that are on the wish-list, for example, are proteins that stimulate the immune system to fight cancer, a universal flu vaccine, or proteins that break down plastics.

Saturday, January 23, 2021

Where do atoms come from?

[This is a transcript of the video embedded below.]


Matter is made of atoms. You all know that. But where do atoms come from? When were they made and how? And what’s the “island of stability”? That’s what we will talk about today.

At first sight, making an atom doesn’t seem all that difficult. All you need are some neutrons and protons for the nucleus, then you put electrons around them until the whole thing is electrically neutral, done. Sounds easy. But it isn’t.

The electrons are the simple part. Once you have a positively charged nucleus, it attracts electrons and they automatically form shells around the nucleus. For more about atomic electron shells, check my earlier video.

But making an atomic nucleus is not easy. The problem is that the protons are all positively charged and they repel each other. Now, if you get them really, really close to each other, then the strong nuclear force will kick in and keep them together – if there’s a suitable amount of neutrons in the mix. But to get the protons close enough together, you need very high temperatures, we’re talking about hundreds of millions of degrees.

Such high temperatures, indeed much higher temperatures, existed in the early universe, briefly after the big bang. However, at that time the density of matter was very high everywhere in the universe. It was a mostly structureless soup of subatomic particles called a plasma. There were no nuclei in this soup, just a mix of the constituents of nuclei.

It was only when this plasma expanded and cooled that some of those particles managed to stick together. This created the first atomic nuclei, which could then catch electrons to make atoms. From this you get hydrogen and helium and a few other chemical elements, with their isotopes, up to atomic number 4. The process of making atomic nuclei, by the way, is called “nucleosynthesis”. And the part of nucleosynthesis that happened a few minutes after the big bang is called “big bang nucleosynthesis”.

But the expansion of plasma after the big bang happened so rapidly that only the lightest atomic nuclei could form in that process. Making the heavier ones takes more patience, indeed it takes a few hundred million years. During that time the universe continued to expand, but the light nuclei collected under the pull of gravity and formed the first stars. In these stars, the gravitational pressure increased the temperature again. Eventually, the temperature became large enough to push the small atomic nuclei into each other and fuse them to larger ones. This nuclear fusion creates energy and is the reason why stars are hot and shine.

Nuclear fusion in stars can go on up to atomic number twenty-six, which is iron, but then it stops. That’s because iron is the most stable of the chemical elements. Its binding energy per nucleon is the largest. So, if you join small nuclei, you get energy out in the process until you hit iron, after which pushing more nucleons into the nucleus begins to take up energy.
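To put rough numbers on this: fusing hydrogen into helium converts about 0.7 percent of the mass into energy,

Δm ≈ 4 × 1.0078 u − 4.0026 u ≈ 0.029 u,   E = Δm c² ≈ 27 MeV per helium nucleus,

while near iron the binding energy per nucleon levels off at roughly 8.8 MeV, so pushing past iron no longer releases energy.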

So, with the nuclear fusion inside of stars, we now have elements up to iron. But where do the elements heavier than iron come from? They come from a process called “neutron capture”. Some fusion processes create free neutrons, and the neutrons, since they have no electric charge, have a much easier time entering an atomic nucleus than protons. And once they are in the nucleus, they can decay into a proton, an electron, and an electron-antineutrino. If they do that, they have created a heavier element. A lot of the nuclei created this way will be unstable isotopes, but they will spit out bits and pieces until they hit on a stable configuration.
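Written out, the decay of the captured neutron is

n → p + e⁻ + ν̄ₑ,

so the number of nucleons in the nucleus stays the same while its charge goes up by one, which moves it one step further along in the periodic table.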

Neutron capture can happen in stars just by chance every now and then. Over the course of time, therefore, old stars breed a few of the elements heavier than iron. But the stars eventually run out of nuclear fuel and die. Many of them collapse and subsequently explode. These supernovae distribute the nuclei inside galaxies or even blow them out of galaxies. Some of the lighter elements which are around today are actually created from splitting up these heavier elements by cosmic rays.

However, neutron capture in old stars is slow and stars only live for so long. This process just does not produce sufficient amounts of the heavy elements that we have here on Earth. Doing that requires a process that’s called “rapid neutron capture”. For this one needs an extreme environment of very high pressure with lots of neutrons that bombard the small atomic nuclei. Again, some of the neutrons enter the nucleus and then decay, leaving behind a proton, which creates heavier elements.

For a long time astrophysicists thought that rapid neutron capture happens in supernovae. But that turned out to not work very well. Their calculations indicated that supernovae would not produce a sufficient amount of neutrons quickly enough. The idea also did not fit well with observations. For example, if the heavy elements that astrophysicists observe in some small galaxies, called “dwarf galaxies”, had been produced by supernovae, that would have required so many supernovae that these small galaxies would have been blown apart, and we wouldn’t observe them in the first place.

Astrophysicists therefore now think that the heavy elements are most likely produced not in supernovae, but in neutron star mergers. Neutron stars are one of the remnants of supernovae. As the name says, they contain lots of neutrons. They do not actually contain nuclei, they’re just one big blob of super-dense nuclear plasma. But if they collide, the collision will spit out lots of nuclei, and create conditions that are right for rapid neutron-capture. This can create all of the heavy elements that we find on Earth. A recent analysis of light emitted during a neutron star merger supports this hypothesis because the light contains evidence for the presence of some of these heavy elements.

You may have noticed that we haven’t checked off the heaviest elements in the periodic table and that there are a few missing in between. That’s because they are unstable. They decay into smaller nuclei on time scales ranging from microseconds to a few thousand years. Those that were produced in stars are long gone. We only know their properties because they’ve been created in laboratories, by shooting smaller nuclei at each other with high energy.

Are there any other stable nuclei that we haven’t yet discovered? Maybe. It’s a long-standing hypothesis in nuclear physics that there are heavy nuclei with specific numbers of neutrons and protons that should have lifetimes of up to some hundred thousand years; it’s just that we have not been able to create them so far. Nuclear physicists call this the “island of stability”, because it looks like an island if you put each nucleus on a graph where one axis is the number of protons and the other axis is the number of neutrons.

Just exactly where the island of stability is, though, isn’t clear and predictions have moved around somewhat over the course of time. Currently, nuclear physicists believe reaching the island of stability would require pushing more neutrons inside the heaviest nuclei they previously produced.

But maybe the most astonishing thing about atoms is how so much complexity, just look around you, is built up from merely three ingredients: neutrons, protons, and electrons.

I hope you enjoyed this video. You can now support this channel on Patreon. So, if you want to be part of the story, go check this out. I especially want to thank my super-supporters on Patreon. Your help is greatly appreciated.

Saturday, January 16, 2021

Was the universe made for us?

[This is a transcript of the video embedded below.]


Today I want to talk about the claim that our universe is especially made for humans, or fine-tuned for life. According to this idea it’s extremely unlikely our universe would just happen to be the way it is by chance, and the fact that we nevertheless exist requires explanation. This argument is popular among some religious people who use it to claim that our universe needs a creator, and the same argument is used by physicists to pass off unscientific ideas like the multiverse or naturalness as science. In this video, I will explain what’s wrong with this argument, and why the observation that the universe is this way and not some other way, is evidence neither for nor against god or the multiverse.

Ok, so here is how the argument goes in a nutshell. The currently known laws of nature contain constants. Some of these constants are for example, the fine-structure constant that sets the strength of the electromagnetic force, Planck’s constant, Newton’s constant, the cosmological constant, the mass of the Higgs boson, and so on.

Now you can ask, what would a universe look like, in which one or several of these constants were a tiny little bit different. Turns out that for some changes to these constants, processes that are essential for life as we know it could not happen, and we could not exist. For example, if the cosmological constant was too large, then galaxies would never form. If the electromagnetic force was too strong, nuclear fusion could not light up stars. And so on. There’s a long list of calculations of this type, but they’re not the relevant part of the argument, so I don’t want to go through the whole list.

The relevant part of the argument goes like this: It’s extremely unlikely that these constants would happen to have just exactly the values that allow for our existence. Therefore, the universe as we observe it requires an explanation. And then that explanation may be god or the multiverse or whatever is your pet idea. Particle physicists use the same type of argument when they ask for a next larger particle collider. In that case, they claim it requires explanation why the mass of the Higgs boson happens to be what it is. This is called an argument from “naturalness”. I explained this in an earlier video.

What’s wrong with the argument? What’s wrong is the claim that the values of the constants of nature that we observe are unlikely. There is no way to ever quantify this probability, because we will never measure a constant of nature that has a value other than the one it does have. If you want to quantify a probability, you have to collect a sample of data. You could do that, for example, if you were throwing dice. Throw them often enough, and you get an empirically supported probability distribution.
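
For the dice, an empirically supported distribution is nothing more than a normalized tally of repeated outcomes; a minimal sketch:

```python
# Empirical probability distribution from repeated dice throws.
import random
from collections import Counter

random.seed(0)
throws = [random.randint(1, 6) for _ in range(100_000)]
counts = Counter(throws)

for face in range(1, 7):
    print(f"P({face}) ~ {counts[face] / len(throws):.3f}")
# With enough throws, each face approaches 1/6. The point: the probability
# is backed by a sample of repeated observations -- something we can never
# collect for the constants of nature, which we only observe once.
```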

But we do not have an empirically supported probability distribution for the constants of nature. And why is that? Because… they are constant. Saying that the only value we have ever observed is “unlikely” is a scientifically meaningless statement. We have no data, and will never have data, that would allow us to quantify the probability of something we cannot observe. There’s nothing quantifiably unlikely, and therefore there’s nothing in need of explanation.

If you look at the published literature on the supposed “fine-tuning” of the constants of nature, the mistake is always the same. They just postulate a particular probability distribution. It’s this postulate that leads to their conclusion. This is one of the best known logical fallacies, called “begging the question” or “circular reasoning.” You assume what you need to show. Instead of showing that a value is unlikely, they pick a specific probability distribution that makes it unlikely. They could just as well pick a probability distribution that would make the observed values *likely, it’s just that this doesn’t give the result they want.
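
To see how the conclusion is baked into the postulated distribution, here is a toy sketch with made-up numbers, not taken from any actual fine-tuning paper: suppose a constant is observed to be 1 and values between 0.5 and 1.5 were “life-friendly”. How “unlikely” that looks depends entirely on the prior you assume.

```python
# Toy illustration: how "unlikely" an observed constant looks depends
# entirely on the probability distribution you postulate. All numbers
# here are made up for illustration.

life_friendly = (0.5, 1.5)   # hypothetical range of values that allows life

def prob_life_friendly(lo, hi, window=life_friendly):
    """Probability of landing in the window under a uniform prior on [lo, hi]."""
    a, b = window
    overlap = max(0.0, min(b, hi) - max(a, lo))
    return overlap / (hi - lo)

for lo, hi in [(0.0, 2.0), (0.0, 10.0), (0.0, 1e6)]:
    print(f"uniform prior on [{lo:g}, {hi:g}]: "
          f"P(life-friendly) = {prob_life_friendly(lo, hi):.2e}")
# Same observation, wildly different "probabilities". The answer is fixed
# by the assumed prior, which is exactly the circularity of the argument.
```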

And, by the way, even if you could measure a probability distribution for the constants of nature, which you can’t, then the idea that our particular combination of constants is necessary for life would *still be wrong. There are several examples in the scientific literature for laws of nature with constants nothing like our own that, for all we can tell, allow for chemistry complex enough for life. Please check the info below the video for references.

Let me be clear though that fine-tuning arguments are not always unscientific. The best-known example of a good fine-tuning argument is a pen balanced on its tip. If you saw that, you’d be surprised, because this is very unlikely to happen just by chance. You’d look for an explanation, a hidden mechanism. That sounds very similar to the argument for fine-tuning the constants of nature, but the balanced pen is a very different situation. The claim that the balanced pen is unlikely is based on data: you are surprised because you don’t normally encounter pens balanced on their tip. You have experience, meaning you have statistics. But it’s completely different if you talk about changing constants that cannot be changed by any physical process. Not only do we not have experience with that, we can never get any experience.

I should add there are theories in which the constants of nature are replaced with parameters that can change with time or place, but that’s a different story entirely and has nothing to do with the fine-tuning arguments. It’s an interesting idea though. Maybe I should talk about this some other time? Let me know in the comments.

And for the experts, yes, I have so far specifically referred to what’s known as the frequentist interpretation of probability. You can alternatively interpret the term “unlikely” using the Bayesian interpretation of probability. In the Bayesian sense, saying that something you observe was “unlikely”, means you didn’t expect it to happen. But with the Bayesian interpretation, the whole argument that the universe was especially made for us doesn’t work. That’s because in that case it’s easy enough to find reasons for why your probability assessment was just wrong and nothing’s in need of explaining.

Example: Did you expect a year ago that we’d spend much of 2020 in lockdown? Probably not. You probably considered that unlikely. But no one would claim that you need god to explain why it seemed unlikely.

What does this mean for the existence of god or the multiverse? Both are assumptions that are unnecessary additions to our theories of nature. In the first case, you say “the constants of nature in our universe are what we have measured, and god made them”, in the second case you say “the constants of nature in our universe are what we have measured, and there are infinitely many other unobservable universes with other constants of nature.” Neither addition does anything whatsoever to improve our theories of nature. But this does not mean god or the multiverse do not exist. It just means that evidence cannot tell us whether they do or do not exist. It means, god and the multiverse are not scientific ideas.

If you want to know more about fine-tuning, I have explained all this in great detail in my book Lost in Math.

In summary: Was the universe made for us? We have no evidence whatsoever that this is the case.


You can join the chat on this video today (Saturday, Jan 16) at 6pm CET/Eastern Time here.

Saturday, January 09, 2021

The Mathematics of Consciousness

[This is a transcript of the video embedded below.]


Physicists like to think they can explain everything, and that, of course, includes human consciousness. And so in the last few decades they’ve set out to demystify the brain by throwing math at the problem. Last year, I attended a workshop on the mathematics of consciousness in Oxford. Back then, when we still met other people in real life, remember that?

I find it to be a really interesting development that physicists take on consciousness, and so, today I want to talk a little about ideas for how consciousness can be described mathematically, how that’s going so far, and what we can hope to learn from it in the future.

The currently most popular mathematical approach to consciousness is integrated information theory, IIT for short. It was put forward by the neuroscientist Giulio Tononi in two thousand and four.

In IIT, each system is assigned a number, that’s big Phi, which is the “integrated information” and supposedly a measure of consciousness. The better a system is at distributing information while it’s processing the information, the larger Phi. A system that’s fragmented and has many parts that calculate in isolation may process lots of information, but this information is not “integrated”, so Phi is small.

For example, a digital camera has millions of light receptors. It processes large amounts of information. But the parts of the system don’t work much together, so Phi is small. The human brain on the other hand is very well connected and neural impulses constantly travel from one part to another. So Phi is large. At least that’s the idea. But IIT has its problems.

One problem with IIT is that computing Phi is ridiculously time consuming. The calculation requires that you divide up the system which you are evaluating in every possible way and then calculate the connections between the parts. This takes up an enormous amount of computing power. Estimates show that even for the brain of a worm, with only about three hundred neurons, calculating Phi would take several billion years. This is why measurements of Phi that have actually been done in the human brain have used incredibly simplified definitions of integrated information.
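
The combinatorial blow-up is easy to see even in a toy count: already the number of ways to cut a system into two parts grows exponentially with the number of elements, and the full Phi calculation has to consider far more general partitions than that. A small sketch, my own illustration rather than the exact procedure used in IIT software:

```python
# How fast the number of ways to cut a system in two grows with its size.
# This only counts bipartitions; actual Phi calculations consider more
# general partitions, so the real cost grows even faster.

def bipartitions(n):
    # Number of ways to split n elements into two non-empty groups.
    return 2**(n - 1) - 1

for n in [10, 50, 100, 300]:
    print(f"{n:4d} elements: {bipartitions(n):.3e} bipartitions")
# At 300 elements there are ~10^90 bipartitions. Evaluating each one is
# hopeless, which is why practical studies fall back on drastically
# simplified stand-ins for integrated information.
```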

Do these simplified definitions at least correlate with consciousness? Well, some studies have claimed they do. Then again others have claimed they don’t. The magazine New Scientist for example interviewed Daniel Bor from the University of Cambridge and reports:
“Phi should decrease when you go to sleep or are sedated via a general anesthetic, for instance, but work in Bor’s lab has shown that it doesn’t. “It either goes up or stays the same,” he says.”
I contacted Bor and his group, but they wouldn’t come forward with evidence to back up this claim. I do not actually doubt it’s correct, but I do find it somewhat peculiar they’d make such a statement to a journalist and then not provide evidence for it.

Yet another problem for IIT is, as the computer scientist Scott Aaronson pointed out, that one can think of rather trivial systems, that solve some mathematical problem, which distribute information during the calculation in such a way that Phi becomes very large. This demonstrates that Phi in general says nothing about consciousness, and in my opinion this just kills the idea.

Nevertheless, integrated information theory was much discussed at the Oxford workshop. Another topic that received a lot of attention is the idea by Roger Penrose and Stuart Hameroff that consciousness arises from quantum effects in the human brain, not in synapses, but in microtubules. What the heck are microtubules? Microtubules are tiny tubes made of proteins that are present in most cells, including neurons. According to Penrose and Hameroff, in the brain these microtubules can enter coherent quantum states, which collapse every once in a while, and consciousness is created in that collapse.

Most physicists, me included, are not terribly excited about this idea because it’s generally hard to create coherent quantum states of fairly large molecules, and it doesn’t help if you put the molecules into a warm and wiggly environment like the human brain. For the Penrose and Hameroff conjecture to work, the quantum states would have to survive at least a microsecond or so. But the physicist Max Tegmark has estimated that they would last more like a femtosecond, that’s only ten to the minus fifteen seconds.

Penrose and Hameroff are not the only ones who pursue the idea that quantum mechanics has something to do with consciousness. The climate physicist Tim Palmer also thinks there is something to it, though he is more concerned with the origins of creativity specifically than with consciousness in general.

According to Palmer, quantum fluctuations in the human brain create noise, and that noise is essential for human creativity, because it can help us when a deterministic, analytical approach gets stuck. He believes the sensitivity to quantum fluctuations developed in the human brain because that’s the most energy-efficient way of solving problems, but it only becomes possible once you have small and thin neurons, of the type you find in the human brain. Therefore, Palmer has argued that low-energy transistors which operate probabilistically rather than deterministically might help us develop artificial intelligence that’s actually intelligent.

Another talk that I thought was interesting at the Oxford workshop was the one by Ramon Erra. One of the leading hypotheses for how cognitive processing works is that it uses the synchronization of neural activity in different regions of the brain to integrate information. But Erra points out that during an epileptic seizure, different parts of the brain are highly synchronized.

In this figure, for example, you see the correlations between the measured activity of a hundred and fifty or so brain sites. Red is correlated, blue is uncorrelated. On the left is the brain during a normal conscious phase, on the right is a seizure. So, clearly, too much synchronization is not a good thing. Erra has therefore proposed that a measure of consciousness could be the entropy in the correlation matrix of the synchronization, which is low both for highly uncorrelated and highly correlated states, but large in the middle, where you expect consciousness.
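
A toy version of such a measure, in my own simplified reading and not Erra’s actual definition, could be the Shannon entropy of the distribution of pairwise correlation values; a sketch:

```python
# Toy version of an entropy-of-correlations measure. This is a simplified
# reading of the idea, not Erra's actual definition.
import numpy as np

def correlation_entropy(signals, bins=20):
    """Shannon entropy of the histogram of pairwise correlation coefficients."""
    corr = np.corrcoef(signals)
    vals = corr[np.triu_indices_from(corr, k=1)]   # all pairwise correlations
    hist, _ = np.histogram(vals, bins=bins, range=(-1, 1))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
n_sites, n_samples = 50, 2000

def make_signals(couplings):
    # Each site = its own noise plus a coupling-weighted common signal.
    common = rng.standard_normal(n_samples)
    noise = rng.standard_normal((n_sites, n_samples))
    return noise + np.outer(couplings, common)

cases = {
    "uncorrelated":       np.zeros(n_sites),
    "mixed coupling":     rng.uniform(0, 2, n_sites),
    "fully synchronized": np.full(n_sites, 10.0),
}
for label, couplings in cases.items():
    H = correlation_entropy(make_signals(couplings))
    print(f"{label:20s} entropy = {H:.2f} bits")
# Both extremes squeeze the correlation values into a narrow range and give
# low entropy; the mixed case spreads them out and gives a larger entropy,
# which is the regime where one would expect consciousness.
```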

However, I worry that this theory has the same problem as integrated information theory, which is that there may be very simple systems that you do not expect to be conscious but that nevertheless score highly on this simple measure of synchronization.

One final talk that I would like to mention is that by Jonathan Mason. He asks us to imagine a stack of compact disks, and a disk player that doesn’t know which order to read out the bits on a compact disk. For the first disk, you then can always find a readout order that will result in a particular bit sequence, that could correspond, for example, to your favorite song.

But if you then use that same readout order for the next disk, you most likely just get noise, which means there is very little information in the signal. So if you have no idea how to read out information from the disks, what would you do? You’d look for a readout process that maximizes the information, or minimizes the entropy, for the readout result for all of the disks. Mason argues that the brain uses a similar principle of entropy minimization to make sense of information.

Personally, I think all of these approaches are way too simple to be correct. In the best case, they’re first steps on a long way. But as they say, every journey starts with a first step, and I certainly hope that in the next decades we will learn more about just what it takes to create consciousness. This might not only allow us to create artificial consciousness and help us tell when patients who can't communicate are conscious, it might also help us make sense of the unconscious part of our thoughts so that we can become more conscious of them.

You can find recordings of all the talks at the workshop, right here on YouTube, please check the info below the video for references.


You can join the chat about this video today (Saturday, Jan 9) at noon Eastern Time or 6pm CET here.

Saturday, January 02, 2021

Is Time Real? What does this even mean?

[This is a transcript of the video embedded below.]


Time is money. It’s also running out. Unless possibly it’s on your side. Time flies. Time is up. We talk about time… all the time. But does anybody actually know what it is? It’s 3:30. That’s not what I mean. Then what do you mean? What does it mean? That’s what we will talk about today.

First things first, what is time? “Time is what keeps everything from happening at once,” as Ray Cummings put it. Funny, but not very useful. If you ask Wikipedia, time is what clocks measure. Which brings up the question, what is a clock. According to Wikipedia, a clock is what measures time. Huh. That seems a little circular.

Luckily, Albert Einstein gets us out of this conundrum. Yes, this guy again. According to Einstein, time is a dimension. This idea goes back originally to Minkowski, but it was Einstein who used it in his theories of special and general relativity to arrive at testable predictions that have since been confirmed countless times.

Time is a dimension, similar to the three dimensions of space, but with a very important difference that I’m sure you have noticed. We can stand still in space, but we cannot stand still in time. So time is not the same as space. But that time is a dimension means you can rotate into the time-direction, like you can rotate into a direction of space. In space, if you are moving in, say, the forward direction, you can turn forty-five degrees and then you’ll instead move into a direction that’s a mixture of forward and sideways.

You can do the same with a time and a space direction. And it’s not even all that difficult. The only thing you need to do is change your velocity. If you are standing still and then begin to walk, that not only changes your position in space, it also changes which direction you are going in space-time. You are now moving in a direction that is a combination of both time and space.

In physics, we call such a change of velocity a “boost” and the larger the change of velocity, the larger the angle you turn from time to space. Now, as you all know, the speed of light is an upper limit. This means you cannot turn from moving only through time and standing still in space to moving only in space and not in time. That does not work. Instead, there’s a maximal angle you can turn in space-time by speeding up. That maximal angle is by convention usually set to 45 degrees. But that’s really just convention. For the physics it matters only that it’s some angle smaller than ninety degrees.

The consequence of time being a dimension, as Einstein understood, is that time passes more slowly if you move, relative to the case in which you are not moving. This is called “time dilation”.
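
As a quick numerical illustration of how small the effect is at everyday speeds (the speeds below are just example values):

```python
# Time dilation factor gamma = 1 / sqrt(1 - v^2/c^2) for a few example speeds.
import math

c = 299_792_458.0  # speed of light in m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

for label, v in [("walking (1.5 m/s)", 1.5),
                 ("airliner (250 m/s)", 250.0),
                 ("satellite (7800 m/s)", 7800.0),
                 ("half the speed of light", 0.5 * c)]:
    print(f"{label:24s} gamma = {gamma(v):.12f}")
# At everyday speeds gamma differs from 1 only in the tenth decimal place
# or beyond; at half the speed of light it is about 1.155, so a moving
# clock noticeably falls behind.
```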

How do we know this is correct? We can measure it. How do you measure a time-dimension? It turns out you can measure the time-dimension with – guess what – the things we normally call clocks. The relevant point here is that this definition is no longer circular. We defined time as a dimension in a specific theory. Clocks are what we call devices that measure this.

How do clocks work? A clock is anything that counts how often a system returns to the same, or at least very similar, configuration. For example, if the Earth orbits around the sun once, and returns to almost the same place, we call that a year. Or take a pendulum. If you count how often the pendulum is, say, at one of the turning points, that gives you a measure of time. The reason this works is that once you have a theory for space-time, you can calculate that the thing you called time is related to the recurrences of certain events in a regular way. Then you measure the recurrence of these events to tell the passage of time.
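
For the pendulum, the relation between the count of swings and elapsed time is just the small-angle period formula T = 2π√(L/g); a minimal sketch, assuming a one-meter pendulum on Earth:

```python
# A pendulum as a clock: count swings and convert to elapsed time using
# the small-angle period T = 2*pi*sqrt(L/g). L = 1 m and g = 9.81 m/s^2
# are just example values.
import math

L = 1.0    # pendulum length in meters
g = 9.81   # gravitational acceleration in m/s^2

period = 2 * math.pi * math.sqrt(L / g)   # ~2.0 seconds per full swing
swings_counted = 1800

print(f"period: {period:.3f} s")
print(f"{swings_counted} swings ~ {swings_counted * period / 60:.1f} minutes")
# The "clock" is nothing but a counter of recurrences; the theory
# (here: Newtonian mechanics) tells us how the count relates to time.
```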

But then what do physicists mean if they say time is not real, as for example Lee Smolin has argued? As I have discussed in a series of earlier videos, we call something “real” in scientific terms if it is a necessary ingredient of a theory that correctly describes what we observe. Quarks, for example, are real, not because we can see them – we cannot – but because they are necessary to correctly describe what particle physicists measure at the Large Hadron Collider. Time, for the same reason, is real, because it’s a necessary ingredient for Einstein’s theory of General Relativity to correctly describe observations.

However, we know that General Relativity is not fundamentally the correct theory. By this I mean that this theory has shortcomings that have so far not been resolved, notably singularities and the incompatibility with quantum theory. For this reason, most physicists, me included, think that General Relativity is only an approximation to a better theory, usually called “quantum gravity”. We don’t yet have a theory of quantum gravity, but there is no shortage of speculations about what its properties may be. And one of the properties that it may have is that it does not have time.

So, this is what physicists mean when they say time is not real. They mean that time may not be an ingredient of the to-be-found theory of quantum gravity or, if you are even more ambitious, a theory of everything. Time then exists only on an approximate “emergent” level.

Personally, I find it misleading to say that in this case, time is not real. It’s like claiming that because our theories for the constituents of matter don’t contain chairs, chairs are not real. That doesn’t make any sense. But leaving aside that it’s bad terminology, is it right that time might fundamentally not exist?

I have to admit it’s not entirely implausible. That’s because one of the major reasons why it’s difficult to combine quantum theory with general relativity is that… time is a dimension in general relativity. In Quantum Mechanics, on the other hand, time is not something you can measure. It is not “an observable,” as the physicists say. In fact, in quantum mechanics it is entirely unclear how to answer a seemingly simple question like “what is the probability for the arrival time of a laser signal”. Time is treated very differently in these two theories.

What might a theory look like in which time is not real? One possibility is that our space-time might be embedded into just space. But it has a boundary where time turns to space. Note how carefully I have avoided saying “before” it turns to space. “Before” is arguably a meaningless word if you have no direction of time. It would be more accurate to say that what we usually call “the early universe”, where we expect a “big bang”, may actually have been “outside of space time”, and there might have been only space, no time.

Another possibility that physicists have discussed is that deep down the universe and everything in it is a network. What we usually call space-time is merely an approximation to the network in cases when the network is particularly regular. There are actually quite a few approaches that use this idea, the most recent one being Stephen Wolfram’s Hypergraphs.

Finally, I should mention Julian Barbour who has argued that we don’t need time to begin with. We do need it in General Relativity, which is the currently accepted theory for the universe. But Barbour has developed a theory that he claims is at least as good as General Relativity, and does not need time. Instead, it is a theory only about the relations between configurations of matter in space, which contain an order that we normally associate with the passage of time, but really the order in space by itself is already sufficient. Barbour’s view is certainly unconventional and it may not lead anywhere, but then again, maybe he is onto something. He has just published a new book about his ideas.

Thanks for your time, see you next week.


You can join the chat on this topic today (Jan 2nd) at 6pm CET/noon Eastern Time here.

Wednesday, December 30, 2020

Well, Actually. 10 Physics Answers.

[This is a transcript of the video embedded below.]


Today I will tell you how to be just as annoying as a real physicist. And the easiest way to do that is to insist on correcting people when it really doesn’t matter.

1. “The Earth Orbits Around the Sun.”

Well, actually the Earth and the Sun orbit around a common center of mass. It’s just that the location of the center of mass is very close to the center of the sun, because the sun is so much heavier than the earth. To be precise, that’s not quite correct either, because Earth isn’t the only planet in the solar system, so, well, it’s complicated.
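
A quick estimate of where that common center of mass sits, using standard mass and distance values and treating the Earth and Sun as a two-body system:

```python
# Distance of the Earth-Sun center of mass (barycenter) from the Sun's
# center, in the two-body approximation. Masses and distance are standard
# values.

M_sun   = 1.989e30   # kg
M_earth = 5.972e24   # kg
d       = 1.496e11   # mean Earth-Sun distance in meters
R_sun   = 6.96e8     # solar radius in meters

r_barycenter = d * M_earth / (M_sun + M_earth)

print(f"barycenter sits {r_barycenter / 1000:.0f} km from the Sun's center")
print(f"that is {r_barycenter / R_sun * 100:.2f} % of the solar radius")
# ~450 km, deep inside the Sun -- which is why "the Earth orbits the Sun"
# is an excellent approximation.
```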

2. “The Speed of Light is constant.”

Well, actually it’s only the speed of light in vacuum that’s constant. The speed of light is lower when the light goes through a medium, and just what the speed is depends on the type of medium. The speed of light in a medium is also no longer observer-independent – as the speed of light in vacuum is – but instead it depends on the relative velocity between the observer and the medium. The speed of light in a medium can also depend on the polarization or color of the light, the former is called birefringence and the latter dispersion.
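
For a sense of the size of the effect, here is a tiny sketch with typical textbook refractive indices:

```python
# Speed of light in a medium: v = c / n, with typical refractive indices.
c = 299_792_458.0  # speed of light in vacuum, m/s

for medium, n in [("vacuum", 1.0), ("air", 1.0003), ("water", 1.33), ("glass", 1.5)]:
    print(f"{medium:7s} n = {n:6.4f}   v = {c / n / 1000:,.0f} km/s")
# In water, light travels at roughly three quarters of its vacuum speed.
```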

3. “Gravity Waves are Wiggles in Space-time”

Well, actually gravity waves are periodic oscillations in gases and fluids for which gravity is a restoring force. Ocean waves and certain clouds are examples of gravity waves. The wiggles in space-time are called gravitational waves, not gravity waves.

4. “The Earth is round.”

Well, actually the earth isn’t round, it’s an oblate spheroid, which means it’s somewhat thicker at the equator than from pole to pole. That’s because it rotates and the centrifugal force is stronger for the parts that are farther away from the axis of rotation. In the course of time, this has made the equator bulge outwards. It is however a really small bulge, and to very good precision the earth is indeed round.

5. “Quantum Mechanics is a theory for Small Things”

Well, actually, quantum mechanics applies to everything regardless of size. It’s just that for large things the effects are usually so tiny you can’t see them.

6. “I’ve lost weight!”

Well, actually weight is a force that depends on the gravitational pull of the planet you are on, and it’s also a vector, meaning it has a direction. You probably meant you lost mass.

7. “Light is both a particle and a wave.”

Well, actually, it’s neither. Light, as everything else, is described by a wave-function in quantum mechanics. A wave-function is a mathematical object that can be sharply focused, in which case it looks pretty much like a particle. Or it can be very smeared out, in which case it looks more like a wave. But really it’s just a quantum-thing from which you calculate probabilities of measurement outcomes. And that’s, to our best current knowledge, what light “is”.

8. “The Sun is eight light minutes away from Earth.”

Well, actually, this is only correct in a particular coordinate system, for example the one in which planet Earth is at rest. If you move really fast relative to Earth, and use a coordinate system at rest relative to that fast motion, then the distance from sun to earth will undergo Lorentz contraction, and it will take light less time to cross the distance.
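
A small numerical sketch; the relative speed of eighty percent of the speed of light is an arbitrary example:

```python
# Lorentz contraction of the Earth-Sun distance for an observer moving fast
# relative to Earth. The chosen speed (80% of c) is an arbitrary example.
import math

c = 299_792_458.0    # m/s
d_rest = 1.496e11    # Earth-Sun distance in Earth's rest frame, m

v = 0.8 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
d_moving = d_rest / gamma              # contracted distance

print(f"rest-frame distance:  {d_rest / c / 60:.1f} light minutes")
print(f"contracted distance:  {d_moving / c / 60:.1f} light minutes")
# At 80% of the speed of light, gamma = 5/3, so the ~8.3 light minutes
# shrink to about 5 light minutes in the moving coordinates.
```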

9. “Water is blue because it mirrors the sky.”

Well, actually, water is just blue. No, really. If you look at the frequencies of electromagnetic radiation that water absorbs, you find that in the visual part of the spectrum the absorption has a dip around blue. This means water swallows less blue light than light of other frequencies that we can see, so more blue light reaches your eye, and water looks blue. 

However, as you have certainly noticed, water is mostly transparent. It generally swallows very little visible light, and so that slight tint of blue is a really tiny effect. Also, what I just told you is for chemically pure water, H two O, and that’s not the water you find in oceans, which contains various minerals and salts, not to mention dirt. So the major reason the oceans look blue, if they do look blue, is indeed that they mirror the sky.

10. “Black Holes have a strong gravitational pull.”

Well, actually the gravitational pull of a black hole with mass M is exactly as large as the gravitational pull of a star with mass M. It’s just that, if you remember Newton’s one-over-r-squared law, the gravitational pull depends on the distance to the object.

The difference between a black hole and a star is that if you fall onto a star, you’re burned to ashes when you get too close. For a black hole you keep falling towards the center, cross the horizon, and the gravitational pull continues to increase. Theoretically, it eventually becomes infinitely large.
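
To make the point numerically, here is a sketch in the Newtonian approximation, using one solar mass and one astronomical unit as example values:

```python
# Newtonian gravitational acceleration g = G*M/r^2 at the same distance from
# a star and from a black hole of equal mass (one solar mass, as an example).

G     = 6.674e-11   # gravitational constant, m^3 / (kg s^2)
M_sun = 1.989e30    # kg
AU    = 1.496e11    # m

def g_accel(M, r):
    return G * M / r**2

r = AU
print(f"at 1 AU from the Sun:                   {g_accel(M_sun, r):.4f} m/s^2")
print(f"at 1 AU from a 1-solar-mass black hole: {g_accel(M_sun, r):.4f} m/s^2")
# Identical, as long as you stay at the same distance. The difference is
# that the black hole lets you keep falling to much smaller r, where
# G*M/r^2 grows without bound (in the Newtonian expression).
```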

How many did you know? Let me know in the comments.


You can join the chat on this video tomorrow, Thursday Dec 31, at 6pm CET or noon Eastern Time here.

Saturday, December 26, 2020

What is radiation? How harmful is it?

[This is a transcript of the video embedded below.]


Did you know that sometimes a higher exposure to radiation is better than a lower one? And that some researchers have claimed low levels of radioactivity are actually beneficial for your health? Does that make sense? Are air purifiers that ionize air dangerous? And what do we mean by radiation to begin with? That’s what we will talk about today.

First of all, what is radiation? Radiation generally refers to energy transferred by waves or particles. So, if I give you a fully charged battery pack, that’s an energy transfer, but it’s not radiation because the battery is neither a wave nor a particle. On the other hand, if I shout at you and it makes your hair wiggle, that sound was radiation. In this case the energy was transferred by sound waves, that are periodic density fluctuations in the air.

Sound is not something we usually think of as radiation, but technically, it is. Really, all kinds of things are technically radiation. If you drop a pebble into water, for example, then the waves this creates are also radiation.

But what people usually think of, when they talk about radiation, is radiation that’s transferred by either (a) elementary particles, that’s particles which, for all we currently know, have no substructure, (b) small composite particles, such as protons, neutrons, or even small atomic nuclei, or (c) electromagnetic waves. But electromagnetic waves are strictly speaking also made of particles, which are the photons. So, really, all these types of radiation that we usually worry about are made of some kind of particle.

The only exception is gravitational radiation. That’s transferred in waves, and we believe that these gravitational waves are made of particles, the gravitons, yet we have no evidence for the gravitons themselves. But of all the possible types of radiation, gravitational radiation is the one that is least likely to leave any noticeable trace. Therefore, with apologies, I will, in the following, not consider gravitational waves.

Having said that, if you want to know what radiation does, you need to know four things. First, what particle is it? Second, what’s the energy of the particle? Third, how many of these particles are there? And fourth, what do they do to the human body? We will go through these one by one.

First, the type of particle tells you how likely the radiation is to interact with you. Some particles come in huge amounts, but they basically never interact. They just go through stuff and don’t do anything. For example, the sun produces an enormous number of particles called neutrinos. Neutrinos are electrically neutral, have a small mass, and they just pass through walls and you and indeed, the whole earth. There are about one hundred trillion neutrinos going through your body every second. And you don’t notice.

It’s the same with the particles that make up dark matter. They should be all around us and going through us as we speak, but they interact so rarely with anything, we don’t notice. Or maybe they don’t exist in the first place. In any case, neutrinos and dark matter are particles you clearly don’t need to worry about.

However, other particles interact more strongly, especially if they are electrically charged. That’s because the constituents of all types of matter are also electrically charged. Charged particles in radiation are mostly electrons, which you all know, or muons. Muons are very similar to electrons, just heavier and they are unstable. They decay into electrons and neutrinos again. You can also have charged radiation made of protons, that’s one of the constituents of the atomic nucleus and it’s positively charged, or you can have radiation made of small atomic nuclei. The best known of those are Helium nuclei, which are also called alpha particles.

Besides protons, the other constituent of the atomic nucleus are neutrons. As the name says, they are electrically neutral. They are a special case because they can do a lot of damage even though they do not have an electric charge. That’s because neutrons can enter the atomic nucleus and make the nucleus unstable.

However, neutrons, curiously enough, are actually unstable if they are on their own. If neutrons are not inside an atomic nucleus, they live only for about 10 minutes, then they decay into a proton, an electron, and an electron-antineutrino. For this reason you don’t encounter single neutrons in nature. So, that too is something you don’t need to worry about.
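
A back-of-the-envelope sketch of how quickly a population of free neutrons disappears, using a half-life of roughly ten minutes, consistent with the figure above:

```python
# Exponential decay of free neutrons, using a half-life of roughly 10 minutes
# as quoted above.
import math

half_life = 10 * 60                 # seconds, rough value
tau = half_life / math.log(2)       # mean lifetime

for minutes in [10, 30, 60, 120]:
    frac = math.exp(-minutes * 60 / tau)
    print(f"after {minutes:4d} min: {frac * 100:6.2f} % of free neutrons left")
# After two hours essentially none remain, which is why you don't find
# free neutrons lying around in nature.
```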

Then there’s electromagnetic radiation, which we already talked about the other week. Electromagnetic radiation is made of photons, and they can interact with anything that is electrically charged. And since atoms have electrically charged constituents, this means electromagnetic radiation can interact with any atom. But whether the photons actually do that depends on the amount of energy per photon.

So, first you need to know what kind of particle is in the radiation, because that tells you how likely it is to interact. And then, second, to understand what the radiation can do if it interacts, you need to know how much energy the individual particles in the radiation have. If the energy of the particles in the radiation is large enough to break bonds between molecules, then they are much more likely to be harmful.

The typical binding energy of molecules is similar to the energy you need to pull an electron off an atom. This is called ionization, and radiation that can do that is therefore called “ionizing radiation”. The reason ionizing radiation is harmful is not so much the ionization itself, it’s that if radiation can ionize, you know it can also break molecular bonds.
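
For orientation, the energy per photon follows from E = hc/λ; here is a quick sketch with typical wavelengths, using the hydrogen ionization energy of 13.6 eV as a reference point:

```python
# Photon energy E = h*c / wavelength for a few parts of the spectrum.
# Reference point: ionizing hydrogen takes 13.6 eV.
h  = 6.626e-34   # Planck's constant, J s
c  = 2.998e8     # speed of light, m/s
eV = 1.602e-19   # one electronvolt in joules

for label, wavelength_nm in [("radio (1 m)", 1e9),
                             ("red light (700 nm)", 700),
                             ("blue light (450 nm)", 450),
                             ("far ultraviolet (100 nm)", 100),
                             ("X-ray (1 nm)", 1)]:
    E = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{label:25s} {E:12.4g} eV")
# Visible photons carry only a few eV, well below typical ionization
# energies; far-ultraviolet and X-ray photons exceed them, which is what
# makes that part of the spectrum ionizing.
```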

Ionized atoms or molecules like to undergo chemical reactions. That may be a problem if it happens inside the body. But ionized molecules in the air are actually common, because sunlight can do this ionization, and they are not something you need to worry about. Take air purifiers, for example: some of them work by ionizing air molecules, usually O two or N two.

The idea is that these ionized molecules will bind to dust and then the dust is charged, so it will stick to the floor or other grounded surfaces. But this ionization in air purifiers does not require ionizing radiation, so it’s not a health risk. Except that air purifiers may also produce ozone, which is not healthy.

Where does ionizing radiation come from? Well, for one, ultraviolet sunlight has enough energy to ionize. But even higher energies can be reached by ionizing radiation that comes from outer space, the so-called cosmic rays.

Most ultraviolet radiation from the sun gets stuck in the stratosphere thanks to ozone. Most cosmic rays are also blocked or at least dramatically slowed down in the upper atmosphere, but some of them still reach the ground. This already tells you that your exposure to ionizing radiation increases with altitude. In fact, average people like you and me tend to get the highest doses of ionizing radiation on airplanes.

The particles in the primary cosmic radiation are mostly protons, some are small ionized nuclei, and then there’s a tiny fraction of other stuff. Primary here means, it’s the thing that actually comes from outer space. But almost all of these primary cosmic particles hit air molecules in the upper atmosphere, which converts them into a lot of particles of lower energy, usually called a cosmic ray shower. This shower, which rains down on earth, is almost exclusively made of photons, electrons, muons, and neutrinos, which we’ve already talked about.

Ionizing radiation is also emitted by radioactive atoms. The radiation that atoms can emit is of three types: alpha, that’s Helium nuclei, beta, that’s electrons and positrons, and gamma, that’s photons. Radioactive atoms which emit these types of radiation occur naturally in air, rock, soil, and even food. So there is a small amount of background radiation everywhere on earth, no matter where you go, and what you touch.

This then brings us to the third point. If you know what particle it is, and you know what energy it has, you need to know how many of them there are. We measure this in the total energy per time, which is known as power. The power of the radiation is highest when you are close to the source of the radiation. That’s because the particles spread out into space, so the farther away you are, the fewer of them will hit you. The number of particles can drop very quickly if some of the radiation is absorbed. And the radiation that is the most likely to interact with matter is the least likely to reach you. This is the case, for example, for alpha particles: you can block them with just a sheet of paper.

And then there’s the fourth point, which is the really difficult one. How much of that radiation is absorbed by the body and what can it do? There is no simple answer to this. Well, okay, one thing that’s simple is that high amounts of radiation, regardless of what type, can pump a lot of energy into the body, which is generally bad. Most countries therefore have radiation safety regulations that set strict limits on the amount of radiation that humans should maximally be exposed to. If you want to know details, I encourage you to check out these official guides, to which I leave links in the info below the video.

Interestingly enough, more radiation is not always worse. For example, you may remember that if there’s a nuclear accident, people rush to buy iodine pills. That’s because nuclear accidents can release a radioactive isotope of iodine, which may then enter the body through air or food. This iodine will accumulate in the thyroid gland and if it decays, that can damage cells and cause cancer. The pills of normal iodine basically fill up the storage space in your thyroid gland, which means the radioactive substance leaves the body faster.

But. Believe it or not, some people swallow large amounts of radioactive iodine as a medical treatment. This is actually rather common if someone has an overactive thyroid gland, which causes a long list of health issues. It can be treated by medication, but then patients have to take pills throughout their lives, and these pills are not without side effects.

Now, if you give those patients radioactive iodine, it kills off a significant fraction of the cells in the thyroid gland, and that can solve their problem permanently. This method has been in use since the 1940s, it is very effective, and no, it does not increase the risk of thyroid cancer. The thing is that if the radiation dose is high enough, the cells in the thyroid gland will not just be damaged, mutate, and possibly cause cancer. They’ll just die.

Now, this is not to say that more radiation is generally better, certainly not. But it demonstrates that it’s not easy to find out what the health effects of a certain radiation dose are. The physics is simple. But the biology isn’t.

Indeed, some scientists have argued that low doses of ionizing radiation may be beneficial because they encourage the body to use cell-repair mechanisms. This idea is called “radiation hormesis”. Does that make sense? Well. It kind of sounds plausible. But the plausible ideas are the ones you should be most careful with. Several dozen studies have looked at radiation hormesis in the past 20 years. But so far, the evidence has been inconclusive, and official radiation safety committees have not accepted it. This does not mean it’s wrong. It just means that, at the moment, it’s unclear whether, and if so under which circumstances, low doses of ionizing radiation may have health benefits, or at least not do damage.

So, I hope you learned something new today!



You can join the chat on this video today (Saturday) at 6pm CET/noon Eastern Time here.