Saturday, September 26, 2020

Understanding Quantum Mechanics #6: It’s not just a theory for small things.

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

One of the most common misunderstandings about quantum mechanics that I encounter is that quantum mechanics is about small things and short distances. It’s about atomic spectral lines, electrons going through double slits, nuclear decay, and so on. There’s a realm of big things where stuff behaves like we’re used to, and then there’s a realm of small things, where quantum weirdness happens. It’s an understandable misunderstanding because we do not experience quantum effects in daily life. But it’s wrong and in this video I will explain why. Quantum mechanics applies to everything, regardless of size.

The best example of a big quantum thing is the sun. The sun shines thanks to nuclear fusion, which relies on quantum tunneling. To fuse two nuclei, you have to overcome the electric repulsion between them, because both are positively charged. Without tunneling, this would not work. And the sun certainly is not small.

Ah, you may say, that doesn’t count because the fusion itself only happens on short distances. It’s just that the sun contains a lot of matter so it’s big.

Ok. Here is another example. All that matter around you, air, walls, table, what have you, is only there because of quantum mechanics. Without quantum mechanics, atoms would not exist. Indeed, this was one of the major reasons for the invention of quantum mechanics in the first place.

You see, without quantum mechanics, an electron circling around the atomic nucleus would emit electromagnetic radiation, lose energy, and fall into the nucleus very quickly. So, atoms would be unstable. Quantum mechanics explains why this does not happen. It’s because the electrons are not particles localized at a specific point; they are instead described by wave-functions which merely tell you the probability for the electron to be at a particular point. And for atoms this probability distribution is focused on shells around the nucleus. These shells correspond to different energy levels and are also called the “orbitals” of the electron, but I find that somewhat misleading. It’s not like the electron is actually orbiting as in going around in a loop.
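If you want to see what quantized energy levels look like in numbers, here is a minimal Python sketch (my addition, not part of the transcript) that evaluates the textbook Bohr formula for hydrogen, E_n = −13.6 eV/n². The formula itself is the output of quantum mechanics; the script just evaluates it.

```python
# Bohr energy levels of hydrogen, E_n = -13.6 eV / n^2.
# A minimal illustration of quantized energy levels; real atoms need the
# full Schroedinger equation, this is just the textbook result.

RYDBERG_EV = 13.6  # ionization energy of hydrogen in electron volts

def energy_level(n: int) -> float:
    """Energy of the n-th shell in electron volts."""
    return -RYDBERG_EV / n**2

for n in range(1, 6):
    print(f"n = {n}: E = {energy_level(n):7.3f} eV")

# The gap between levels shrinks with n -- the levels pile up
# toward E = 0, where the electron becomes unbound.
print("Gap n=1 -> n=2:", energy_level(2) - energy_level(1), "eV")
```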

I frequently get asked why this is not a problem for the orbits of planets in the solar system. Why don’t the planets emit radiation and fall into the sun? The answer is: They do! But in the case of the solar system, the force which acts is not the electromagnetic force, as in the case of the atom, but the gravitational force. Correspondingly, the radiation that’s emitted when planets go around the sun is not electromagnetic radiation but gravitational radiation, which means gravitational waves. These carry away energy. And this indeed causes planets to lose energy, which gradually shrinks the radius of their orbits.

However, the gravitational force is much, much weaker than the electromagnetic force, so this effect is extremely small and it does not noticeably affect planetary orbits. The effect can become large enough to be observable if you have a system of two stars that circle each other at short distance. In this case the energy loss from gravitational radiation will cause the stars to spiral into each other. Indeed, this is how gravitational waves were first indirectly confirmed, for which a Nobel Prize was handed out in 1993.
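Just how small is the effect? Here is a back-of-the-envelope Python sketch (my addition, not from the video) using the standard quadrupole formula for the gravitational-wave power radiated by two bodies on a circular orbit; the constants are the usual textbook values.

```python
# Gravitational-wave power of a circular two-body orbit, from the standard
# quadrupole formula P = (32/5) G^4 (m1 m2)^2 (m1 + m2) / (c^5 r^5).

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def gw_power(m1: float, m2: float, r: float) -> float:
    """Radiated power in watts for masses m1, m2 (kg) at separation r (m)."""
    return (32 / 5) * G**4 * (m1 * m2)**2 * (m1 + m2) / (c**5 * r**5)

M_SUN, M_EARTH, AU = 1.989e30, 5.972e24, 1.496e11
print(f"Earth-Sun: {gw_power(M_SUN, M_EARTH, AU):.0f} W")  # roughly 200 W
```

About two hundred watts, compared to an orbital kinetic energy of order 10^33 joules: no wonder the orbit doesn’t noticeably shrink.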

But this brings up another question, doesn’t it? Why aren’t the orbits of planets quantized like the orbits of electrons around the atomic nucleus? Again the answer is: they are! It’s just that for such large objects the shells are so close together that the gaps between them are unmeasurably small and the wave-function of the planets is very well localized. So it is an excellent approximation to treat the planets as balls – or indeed points – moving on curves. For the electron in an atom, on the other hand, this approximation is terribly bad.
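Here is a rough, Bohr-style order-of-magnitude estimate (again my addition, not from the video) of how finely “quantized” Earth’s orbit is, by counting how many units of ħ its orbital angular momentum contains:

```python
# How "quantized" is Earth's orbit? Estimate the Bohr-style quantum number
# n = L / hbar from Earth's orbital angular momentum. A rough
# order-of-magnitude sketch, not a rigorous calculation.
import math

hbar = 1.055e-34               # reduced Planck constant, J s
M_EARTH = 5.972e24             # kg
AU = 1.496e11                  # orbital radius, m
v = 2 * math.pi * AU / 3.156e7 # mean orbital speed, m/s (one year = 3.156e7 s)

L = M_EARTH * v * AU           # orbital angular momentum
n = L / hbar
print(f"quantum number n ~ {n:.1e}")          # about 10^74

# The relative spacing between adjacent "shells" scales like 1/n, so
# neighboring orbits differ by one part in 10^74 -- unmeasurably small.
print(f"relative level spacing ~ {1/n:.1e}")
```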

So, all the matter around us is evidence that quantum mechanics works because it’s necessary to make atoms stable. Does that finally convince you that quantum mechanics isn’t just about small things? Ah, you may say, but all this normal matter does not look like a quantum thing.

Well, then how about lasers? Lasers work by pumping energy into a crystal or gas so that the electrons mostly populate unstable energy levels. This is called “population inversion.” If one of the electrons drops down to a stable state, it emits a photon, and that photon causes other electrons to drop as well, emitting more photons of the same frequency and phase, and so on. This process is called “stimulated emission”. Lasers then amplify this signal by putting mirrors around the crystal or gas. The light that is emitted in this way is coherent and very strongly focused. And that’s thanks to quantum mechanics, because if the atomic energy levels were not quantized this would not work.
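As a deliberately crude cartoon of this feedback, here is a toy pair of rate equations in Python (all coefficients made up for illustration; this is not a model of any real laser): a pump fills the upper level, spontaneous emission seeds the light field, and stimulated emission amplifies it.

```python
# Toy laser rate equations. pump fills the upper level N2, spontaneous
# emission (A) seeds photons, stimulated emission (B) amplifies them,
# and the cavity leaks photons at rate loss. Purely illustrative numbers.

pump, A, B, loss = 2.0, 0.5, 1.0, 0.8
N2, photons = 0.0, 0.0   # upper-level population and photon number (arb. units)
dt = 0.001

for _ in range(20_000):
    stim = B * N2 * photons                            # grows with the light
    N2 += (pump - A * N2 - stim) * dt                  # already in the cavity:
    photons += (A * N2 + stim - loss * photons) * dt   # a positive feedback

print(f"steady state: N2 = {N2:.2f}, photons = {photons:.2f}")
```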

Nah, you say, this still doesn’t count because it is not weird. Isn’t quantum theory supposed to be weird?

Ok, so you want weird. Enter Zeilinger. Anton Zeilinger is famous for, well, for many things actually. He’s been on the hotlist for a Nobel Prize for a while. But one of his most famous experiments is showing that entanglement between photons persists for more than one hundred kilometers. Zeilinger and his group did this experiment between two of the Canary Islands in 2008. They produced pairs of entangled photons on La Palma, sent one of each pair to Tenerife, which is one-hundred-forty-four kilometers away, and let the other photon do circles in an optical fibre on La Palma. When they measured the polarization on both photons, they could unambiguously demonstrate that they were still entangled.

So, quantum mechanics is most definitely not a theory for short distances. It’s just that the weird stuff that’s typical for quantum mechanics – entanglement, quantum uncertainty, and the ability of particles to act like waves – is under normal circumstances really, really tiny for big and warm objects. I am here using the words “big” and “warm” the way physicists do, so “warm” means anything more than a few degrees above absolute zero and “big” means anything exceeding the size of a molecule. As I explained in the previous video in this series, it’s decoherence that ruins quantum effects for big and warm objects, just because they frequently interact with other things, air or radiation.

But if you control the environment of an object very closely, if you keep it cool and in an ultra-high vacuum, you can slow down decoherence. This way, physicists have been able to demonstrate quantum behavior for big molecules. The record holder is presently a molecule made of about 2000 atoms or about 40,000 protons, neutrons and electrons.

An entirely different type of “large” quantum state is the Bose-Einstein condensate. These are clouds of atoms cooled to very low temperature, where they combine to one coherent state that has quantum effects throughout. For Bose-Einstein condensates, the record is presently at a few hundred million atoms.

Now, you may still think that’s small, and I can’t blame you for it. But the relevant point is that there is no limit in size or weight or distance where quantum effects suddenly stop. In principle, everything has quantum effects, even you. It’s just that those effects are so small you don’t notice.

This video was brought to you by Brilliant, which is a website on which you can take interactive courses on a large variety of topics in science and mathematics, including quantum mechanics. Brilliant has courses covering the mathematical basis of quantum mechanics as well as quantum objects, quantum computing, quantum logic, and many of the key experiments in quantum mechanics. I have spent some time browsing the courses offered by Brilliant, and I think they are a great starting point if you want to really understand what I explained in this video.

To support my YouTube channel and learn more about Brilliant, go to brilliant.org/Sabine, and sign up for free. The first two-hundred people who go to that link will get 20 percent off the annual Premium subscription.

Wednesday, September 23, 2020

Follow the Science? Nonsense, I say.

Today I want to tell you why I had to stop reading news about climate science. Because it pisses me off. Every. Single. Time.



There’s all these left-wing do-gooders who think their readers are too fucking dumb to draw their own conclusions, so it’s not enough to tell me what’s the correlation between hurricane intensity and air moisture, no, they also have to tell me that, therefore, I should donate to save the polar bears. There’s this implied link: Science says this, therefore you should do that. Follow the science, stop flying. Follow the science, go vegan. Follow the science and glue yourself to a bus, because certainly that’s the logical conclusion to draw from the observed weakening of the Atlantic meridional overturning circulation.

When I was your age, we learned science does not say anything about what we should do. What we should do is a matter of opinion; science is a matter of fact.

Science tells us what situation we are in and what consequences our actions are likely to have, but it does not tell us what to do. Science does not say you shouldn’t pee on high voltage lines, it says urine is an excellent conductor. Science does not say you should stop smoking, science says nicotine narrows arteries, so if you smoke you’ll probably die young, lacking a few toes. Science does not say we should cut carbon dioxide emissions. It says if we don’t, then by the end of the century estimated damages will exceed some trillion US dollars. Is that what we should go for? Well, that’s a matter of opinion.

Follow the Science is a complete rubbish idea, because science does not know the direction. We have to decide what way to go.

You’d think it’s bad enough that politicians conflate scientific fact with opinion, but the media actually make it worse. They make it worse by giving their audience the impression that it matters what someone whose job it is to execute the will of the electorate believes about scientific facts. But I couldn’t care less if Donald Trump “believes” in climate change. Look, this is a man who can’t tell herd immunity from herd mentality; he probably thinks winter’s the same as an ice age. It’s not his job to offer opinions about science he clearly doesn’t understand, so why do you keep asking him? His job is to say: if the situation is this, we will do that. At least in principle, that’s what he should be doing. Then you look up what science says about which situation we are in and act accordingly.

The problem, the problem, you see, is that by conflating the two things – the facts with the opinions – the media give people an excuse to hide opinions behind scientific beliefs. If you don’t give a shit that today’s teenagers will struggle their whole life cleaning up the mess that your generation left behind, fine, that’s a totally valid opinion. But please just say it out loud, so we can all hear it. Don’t cover it up by telling us a story about how you weren’t able to reproduce a figure in the IPCC report even though you tried really hard for almost ten seconds, because no one gives a shit whether you have your own “theory.”

If you are more bothered by the prospect of rising gasoline prices than by rising sea levels because you don’t know anyone who lives by the sea anyway, then just say so. If you worry more about the pension for your friend the coal miner than about drought and famine in the developing world because after all there’s only poor people in the developing world, then just say so. If you don’t give a shit about a global recession caused by natural catastrophes that eat up billion after billion because you’re a rich white guy with a big house and think you’re immune to trouble, then just say so. Say it loud, so we can all hear it.

And all the rest of you, stop chanting that we need to “follow the science”. People who oppose action on climate change are not anti-science, they simply worry more that a wind farm might ruin the view from their summer vacation house than that wildfires will burn down the house. That’s not anti-scientific, that’s just dumb. But then, that’s only my opinion.

Saturday, September 19, 2020

What is quantum cryptography and how does it work?

[This is a transcript of the video embedded below. Some parts of the text may not make sense without the graphics in the video.]

If you punch your credit card number into a website and hit “submit”, I bet you don’t want to have twenty fraudulent charges on your bank account a week later. This is why all serious online retailers use encryption protocols. In this video, I want to tell you how quantum mechanics can help us keep secrets safe.


Before I get to quantum cryptography, I briefly have to tell you how normal, non-quantum cryptography works, the kind that most of the internet uses today. If you know this already, you can use the YouTube toolbar to jump to the next chapter.

The cryptographic codes that are presently being used online are for the most part public key systems. The word “key” refers to the method that you use to encrypt a message. It’s basically an algorithm that converts readable text or data into a mess, but it creates this mess in a predictable way, so that the messing up can be undone. If the key is public, this means everybody knows how to encrypt a message, but only the recipient knows how to decrypt it.

This may sound somewhat perplexing, because if the key is public and everybody knows how to scramble up a message, then it seems everybody also knows how to unscramble it. It does not sound very secure. But the clever part of public key cryptography is that to encode the message you use a method that is easy to do, but hard to undo.

You can think of this as if the website you are buying from gives you, not a key, but an empty treasure chest that locks when you close it. You take the chest. Put in your credit card number, close it. And now the only person who can open it, is the one who knows how to unlock it. So your message is safe to send. In practice that treasure chest is locked by a mathematical problem that is easy to pose but really hard to solve.

There are various mathematical problems that can be, and are being, used in cryptographic protocols for locking the treasure chest. The best known one is the factorization of a large number into primes. This method is used by the algorithm known as RSA, after its inventors Rivest (i as in kit), Shamir, and Adleman. The idea behind RSA is that if you have two large prime numbers, it is easy to multiply them. But if you only have the product of the two primes, then it is very difficult to find out what its prime factors are.

For RSA, the public key, the one that locks the treasure chest, is essentially the product of the primes, together with an auxiliary number, but not the prime factors themselves. You can therefore use the public key to encode a message, but to decode it, you need the prime factors, which only the recipient of your message has, for example the retailer to whom you are sending your credit card information.
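To make the mechanics concrete, here is a toy RSA example in Python with textbook-sized primes (my addition; real keys use primes hundreds of digits long, so this one is trivially breakable and purely illustrative):

```python
# Toy RSA with tiny primes, just to show the mechanics.
# Requires Python 3.8+ for pow(e, -1, phi) (modular inverse).

p, q = 61, 53            # the two secret primes
n = p * q                # 3233: the modulus, part of the public key
phi = (p - 1) * (q - 1)  # 3120: computable only if you know p and q
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e

message = 1234                     # must be smaller than n
ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt

print(ciphertext, decrypted)       # decrypted == 1234
```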

Now, this public key can be broken, in principle, because we do know algorithms that decompose numbers into their prime factors. But for large numbers, these algorithms take very, very long to give you a result, even on the world’s presently most powerful computers. So, maybe that key you are using can be broken, given a hundred thousand years of computation time. But really, who cares. For all practical purposes, these keys are safe.

But here’s the thing. Whether or not someone can break one of these public keys depends on how quickly they can solve the mathematical problem behind it. And quantum computers can vastly speed up computation. You can see the problem: Quantum computers can break cryptographic protocols, such as RSA, in a short time. And that is a big security risk.

I explained in a previous video what quantum computers are and what to expect from them, so check that out if you want to know more. But just how quantum computers work doesn’t matter so much here. It only matters that you know that, if you had a powerful quantum computer, it could break some public key cryptosystems that are currently widely used, and it could do that quickly.

This is a problem which affects not only your credit card number but really everything from trade to national security. Now, we are nowhere near having a quantum computer that could actually do such a computation. But the risk that one could be built in the next decades is high enough that computer scientists and physicists have thought of ways to make public key cryptography more secure.

They have come up with various cryptographic protocols that cannot be broken by quantum computers. This is possible by using protocols which rely on mathematical problems for which a quantum computer does not bring an advantage. This cryptography, which is safe from quantum computers, is called “post-quantum cryptography” or, sometimes, “quantum resistant cryptography”.

Post-quantum cryptographic protocols do not themselves use quantum effects. They have the word “quantum” in their name merely to say that they cannot be broken even with quantum computers. At least according to present knowledge. This situation can change because it’s possible that in the future someone will find a way to use a quantum computer to break a code currently considered unbreakable. However, at least at the moment, some cryptographic protocols exist for which no one knows how a quantum computer could break them.

So, computer scientists have ways to keep the internet safe, even if someone, somewhere develops a powerful quantum computer. Indeed, most nations already have plans to switch to post-quantum cryptography in the coming decade, if not sooner.

Let us then come to quantum cryptography, and its application to “quantum key distribution”. Quantum key distribution is a method for two parties to securely share a key that they can then use to encode messages. And quantum physics is what helps keep the key safe. To explain how this works, I will again just use the simplest example, that’s a protocol known as BB84, after the authors Bennett and Brassard and the year of publication.

When physicists talk about information transfer, they like to give names to senders and receivers. Usually they are called Alice and Bob, so that’s what I will call them too. Alice wants to send a secret key to Bob so they can then have a little chat, but she does not want Bob’s wife, Eve, to know what they’re talking about. In the literature, this third party is normally called “Eve” because she is “eavesdropping”, hahaha, physics humor.

So, Alice creates a random sequence of particles that can have spin either up or down. She measures the spin of each particle and then sends it to Bob who also measures the spin. Each time they measure spin up, they note down a zero, and each time they measure spin down, they note down a one. This way, they get a randomly created, shared sequence of bits, which they can use to encode messages.

But this is no good. The problem is, this key can easily be intercepted by Eve. She could catch the particle meant for Bob in midflight, measure it, note down the number, and then pass it on to Bob. That’s a recipe for disaster.

So, Alice picks up her physics textbooks and makes the sequence of particles that she sends to Bob more complicated.

For the spin to be up or down, Alice has to choose a direction along which to create the spin. Bob has to know this direction to make his measurement, because spins in different directions obey an uncertainty relation. It is here that quantum mechanics becomes important. If you measure the spin in one direction, then a measurement in a perpendicular direction is maximally uncertain. For a binary variable like the spin, this just means the measurements in two orthogonal directions are uncorrelated. If Alice sends a particle that has spin up or down, but Bob mistakenly measures the spin in the horizontal direction, he just gets left or right with fifty percent probability.

Now, what Alice does is to randomly choose whether the particles’ spin goes in the up-down or left-right direction. As before, she sends the particles to Bob, but – and here is the important bit – does not tell him whether the particle was created in the up-down or left-right direction. Since Bob does not know the direction, he randomly picks one for his measurement. If he happens to pick the same direction that Alice used to create the particle, then he gets, as previously, a perfectly correlated result. But if he picks the wrong one, he gets a completely uncorrelated result.

After they have done that, Alice sends Bob information about which directions she used. For that, she can use an unencrypted channel. Once Bob knows that, he discards the measurements where he picked the wrong setting. The remaining measurements are then correlated, and that’s the secret key.

What happens now if Eve tries to intercept the key that Alice sends? Here’s the thing: She cannot do that without Bob and Alice noticing. That’s because she does not know either which direction Alice used to create the particles. If Eve measures in the wrong direction – say, left-right instead of up-down – she changes the spin of the particle, but she has no way of knowing whether that happened or not.

If she then passes on her measurement result to Bob, and it’s a case where Bob did pick the correct setting, then his measurement result will no longer be perfectly correlated with Alice’s, even though it should be. So, what Alice and Bob do is compare some part of the sequence they have shared, again using an unencrypted channel, and check whether their measurements were indeed correlated when they should have been. If that’s not the case, they know someone tried to intercept the message. This is what makes the key safe.
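The classical bookkeeping of this protocol fits in a few lines of code. Here is a minimal simulation (my sketch of the logic described above, not of the actual quantum optics) including the basis sifting and the eavesdropping check:

```python
# Minimal BB84 sketch: random bits and bases for Alice, random measurement
# bases for Bob, an optional intercepting Eve, then sifting and error check.
import random

def bb84(n_bits: int, eve_present: bool):
    alice_bits  = [random.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [random.choice("+x") for _ in range(n_bits)]  # +: up/down, x: left/right
    bob_bases   = [random.choice("+x") for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eve_present:
            e_basis = random.choice("+x")
            if e_basis != a_basis:          # wrong basis: Eve's outcome is random,
                bit = random.randint(0, 1)  # and she re-sends the disturbed state
            a_basis = e_basis
        if b_basis == a_basis:
            bob_bits.append(bit)                   # same basis: perfectly correlated
        else:
            bob_bits.append(random.randint(0, 1))  # wrong basis: a coin flip

    # Sifting: keep only positions where Alice's and Bob's bases agree.
    sifted = [(a, b) for a, b, x, y in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

for eve in (False, True):
    kept, errors = bb84(10_000, eve)
    print(f"Eve present: {eve} -> kept {kept} bits, error rate {errors / kept:.1%}")
# Without Eve the error rate is zero; with Eve it jumps to about 25%,
# which is how Alice and Bob detect her.
```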

The deeper reason this works is that in quantum mechanics it is impossible to copy an arbitrary, unknown quantum state; any attempt to measure it disturbs it. This is known as the no-cloning theorem, and this is ultimately why Eve cannot listen in without Bob and Alice finding out.

So, quantum key distribution is a secure way to exchange a secret key, which can be done either through optical fiber or just free space. Quantum key distribution actually already exists and is being used commercially, though it is not in widespread use. However, in this case the encoded message itself is still sent through a classical channel without quantum effects.

Quantum key distribution is an example of quantum cryptography, but quantum cryptography also more generally refers to using quantum effects to encode messages, not just to exchange keys. This more general quantum cryptography so far exists only theoretically.

So, to summarize: “Post quantum cryptography” refers to non-quantum cryptography that cannot be broken with a quantum computer. It exists and is in the process of becoming widely adopted. “Quantum key distribution” exploits quantum effects to share a key that is secure from eavesdropping. It does already exist though it is not widely used. “Quantum cryptography” beyond quantum key distribution would use quantum effects to actually share messages. The theory exists but it has not been realized technologically.

I want to thank Scott Aaronson for fact-checking parts of this transcript, Tim Palmer for trying to fix my broken English even though it’s futile, and all of you for watching. See you next week.

Saturday, September 12, 2020

Path Dependence and Tipping Points

[This is a transcript for the video embedded below. Part of the text may not make sense without the graphics in the video.]



Most of the physics we learn about in school is, let’s be honest, a little dull. It’s balls rolling down slopes, resistance proportional to voltage, pendula going back and forth and back and forth... wait don’t fall asleep, that’s what I will not talk about today. Today I will talk about weird things that can happen in physics: path dependence and tipping points.

I want to start with chocolate. What’s chocolate got to do with physics? Chocolate is a crystal. No, really. A complicated crystal, alright, but a crystal, and a truly fascinating one. If you buy chocolate in a store you get it in this neat smooth and shiny form. It melts at a temperature between thirty-three and thirty-four degrees Celsius, or about ninety-two degrees Fahrenheit. That’s just below body temperature, so the chocolate will melt if you stuff it into your mouth but not too much earlier. Exactly what you want.

But suppose your chocolate melts for some other reason, maybe you left it sitting in the sun, or you totally accidentally held a hair drier above it. Now you have a mush. The physicist would say the crystal has undergone a phase transition from solid to liquid. But no problem, you think, you will just put it into the fridge. And sure enough, as you lower the temperature, the chocolate undergoes another phase transition and turns back into a solid.

Here’s the interesting thing. The chocolate now looks different. It’s not only that it has lost some of its original shape, it actually has a different structure now. It’s not as smooth and shiny as it previously was. Even weirder, it now melts more easily! The melting point has dropped from about thirty-four to something like twenty-eight degrees Celsius. What the heck is going on?

What happens is that if the chocolate melts and becomes solid again, it does not form the same crystal structure that it had before. Instead, it ends up in a mixture of other crystal structures. If you want to get the crystal structure that chocolate is normally sold in, you have to cool it down very carefully and add seeds for the structure you want to get. This process is called “tempering”. The crystal structure which you get with tempering, the one that you normally buy, is actually unstable. Even if you do not let it melt, it will decay after some time. This is why chocolate gets “old” and then has this white stuff on the surface. Depending on what chocolate you have, the white stuff is sugar or fat or both, and it tells you that the crystal structure is decaying.

For our purposes the relevant point is that the chocolate can be in different states at the same temperature, depending on how you got there. In physics, we call this a “path dependence” of the state of the system. It normally means that the system has several different states of equilibrium. An equilibrium state is simply one that does not change in time. Though, as in the case of chocolate, these states may merely be long-lived and not actually be eternally stable.

Chocolate is not exactly the example physicists normally use for path dependence. The go-to example for physicists is the magnetization of a ferromagnet. A ferromagnet is a metal that can be permanently magnetized. It’s what normal people call a “magnet”, period. The reason ferromagnets can be magnetized is that the electron shell structure means the atoms in the metal are tiny little magnets themselves. And these tiny magnets like to align their orientation with that of their neighbors.

Now, if you find a ferromagnetic metal somewhere out in the field, then its atomic magnets are almost certainly disordered and look somewhat like this. To make the illustration simpler, I will pretend that the atomic magnets can point in only one of two directions. If the little magnets are randomly pointing into one of these directions, then the metal has no overall magnetization.

If you apply a magnetic field to this metal, then the atoms will begin to align with the field because that’s energetically the most favorable state. At some point they’re just all aligned in the same direction, and the magnetization of the metal saturates. If you now turn off the magnetic field, some of those atoms will switch back again just because there’s some thermal motion and so on. However, at room temperature, the metal will keep most of the magnetization. That’s what makes ferromagnets special.

If you turn on the external magnetic field again but increase its strength in the other direction, then the atomic magnets will begin to line up pointing in that other direction until saturation. If you turn the field back down to zero, again most of them will continue to point there. Turn the external field back to the other side and you go back to saturating the magnetization in the first direction.

We can plot this behavior of the magnet in a graph that shows the external magnetic field and the resulting magnetization of the magnet. We started from zero, zero, saturated the magnetization pointing right, turned the external field to zero, but kept most of the magnetization. Saturated the magnetization pointing left, turned the field back to zero but kept most of the magnetization. And saturated the magnetization again to the right.

This is what is called the “hysteresis loop”. Hysteresis means the same as “path dependence”. Whether the magnetization of the metal points into one direction or the other does not merely depend on the external field. It also depends on how you got to that value of the field. In particular, if the external field is zero, the magnet has two different, stable, equilibrium states.
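You can reproduce such a hysteresis loop in a few lines of code. Here is a sketch (my addition) using the mean-field, Curie-Weiss-style magnetization equation m = tanh((J·m + H)/T); starting each relaxation from the previous magnetization tracks the metastable branch, which is exactly the path dependence described above. The parameter values are arbitrary, chosen only so that J/T > 1, where hysteresis exists.

```python
# Hysteresis in a mean-field magnet: the magnetization m solves
# m = tanh((J*m + H)/T). Iterating from the PREVIOUS value of m follows
# the metastable branch, so sweeping the field H up and then down
# traces two different curves.
import math

J, T = 1.0, 0.5   # coupling and temperature; hysteresis requires J/T > 1

def relax(m: float, H: float) -> float:
    """Follow the magnetization to its (possibly metastable) equilibrium."""
    for _ in range(1000):
        m = math.tanh((J * m + H) / T)
    return m

fields = [i * 0.02 for i in range(-50, 51)]  # sweep H from -1 to +1 ...
sweep = fields + fields[::-1]                # ... and back down again

m = -1.0
for H in sweep:
    m = relax(m, H)
    if abs(H) < 0.011:   # print m near H = 0 on both passes
        print(f"H = {H:+.2f}: m = {m:+.3f}")
# Near H = 0 the magnet is strongly magnetized in opposite directions on
# the upward and downward sweeps: two stable states at the same field.
```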

This path-dependence is also why magnets can be used to store information. Path-dependence basically means that the system has a memory.

Path-dependence sounds like a really peculiar physics-y thing but really it’s everywhere. Just to illustrate this I have squeezed myself into this T-shirt from my daughter. See, it has two stable equilibrium states. And they keep a memory of how you got there. That’s a path-dependence too.

Another common example of path dependence is air conditioning. To avoid a lot of switching on and off, air conditioners are usually configured so that if you input a certain target temperature, they will begin to cool if the temperature rises more than a degree above the target, but will stop cooling only once the temperature has dropped to a degree below the target. So whether or not the air conditioner is running at the target temperature depends on how you got to that temperature. That’s a path-dependence, as the little sketch below shows.
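In code, the same logic looks like this (a hypothetical thermostat class, my addition, just to make the point):

```python
# A thermostat with a dead band: whether the unit runs at the target
# temperature depends on the history, not just on the current reading.

class Thermostat:
    def __init__(self, target: float, band: float = 1.0):
        self.on_above = target + band    # start cooling above this
        self.off_below = target - band   # stop cooling below this
        self.cooling = False             # the "memory" of the system

    def update(self, temperature: float) -> bool:
        if temperature > self.on_above:
            self.cooling = True
        elif temperature < self.off_below:
            self.cooling = False
        return self.cooling              # unchanged in between: path dependence

ac = Thermostat(target=21.0)
for temp in [20.0, 22.5, 21.0, 19.5, 21.0]:
    print(f"{temp:4.1f} C -> cooling: {ac.update(temp)}")
# At 21.0 C the unit is ON the first time (we came from above) and
# OFF the second time (we came from below).
```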

A common property of path-dependent systems is that they have multiple stable equilibrium states. As a reminder, equilibrium merely means it does not change in time. In some cases, a system can very suddenly switch between different equilibrium states. Like this parasol. It has a heavy weight at the bottom, so if the wind sways it a little, it will stay upright. That’s an equilibrium state. But if the wind blows too hard, it will suddenly tip over. That’s also an equilibrium state, but a much more stable one. Even if the wind now blows in the other direction, the system is not going back to the first state.

Such a sudden transition between two equilibrium states is called a “tipping point”. You have probably heard the word “tipping point” in the context of climate models, where they are a particular pain. I say “pain” because by their very nature they are really hard to predict with mathematical modeling, exactly because there are so many path-dependencies in the system. A glacier that melts off at a certain level of carbon dioxide will not climb back onto the mountain if carbon dioxide levels fall. And that’s one of the better understood path-dependencies.

A much discussed tipping point in climate models is the Atlantic meridional overturning circulation. That’s a water cycle in the Atlantic Ocean. Warm surface water from the equator flows north. Along the way it cools and partly evaporates, which increases the concentration of salt in the water and makes the water heavy. The cool, salty water sinks down to the bottom of the ocean, comes back up where it came from, warms, and the cycle repeats. Why does it come back up in the same place? Well, if some water sinks down somewhere, then some water has to come up elsewhere. And a cycle is a stable configuration, so once the system settles in the cycle, it just continues cycling.

But. This particular cycle is not the only equilibrium configuration and the system does not have to stay there. In fact, there’s a high risk this water cycle is going to be interrupted if global temperatures continue to rise.

That’s because ice in the Arctic is mostly fresh water. If it melts in large amounts, as it presently does, this reduces the salt content of the water. This can prevent the water in the Atlantic overturning circulation from sinking down and thereby shut off the cycle.

Now, this circulation is responsible for much of the warm wind that Europe gets. Did you ever look at a world map and notice that the UK and much of central Europe are north of Montreal? Why is the climate in these two places so dramatically different? Well, the Atlantic overturning circulation is one of the major reasons. If it shuts off, we’re going to see a lot of climate changes very suddenly. Aaaand it’s a path-dependent system. Reducing carbon dioxide after we’ve crossed that tipping point will not just turn the circulation back on. And some evidence suggests that this cycle is weakening already.

There are many other tipping points in climate models that, once crossed, can bring sudden changes that will stay with us for thousands of years, even if we bring carbon dioxide levels back down. Like the collapse of the Greenland and West Antarctic ice sheets. If warming continues, the question is not whether it will happen but just when. I don’t want to go through this whole list, I just want to make clear that tipping points are not fear mongering. They are a very real risk that should not be dismissed easily.

I felt it was necessary to spell this out because I recently read an article by Michael Shellenberger who wrote: “Speculations about tipping points are unscientific because levels of uncertainty and complexity are too high, which is exactly why IPCC does not take such scenarios seriously.”

This is complete rubbish. First, tipping points are covered in the IPCC report; it’s just that they are not collected in a chapter called “tipping points,” they are called “large-scale singular events.” I found this out by googling “tipping points IPCC”, so it’s not like it would have taken Shellenberger much of an effort to get this right. Here is a figure from the summary for policy makers about the weakening of the Atlantic overturning circulation, that’s the tipping point that we just talked about. And here they are going on about the collapse of ice sheets, another tipping point.

Having said that, tipping points are not emphasized much by the IPCC, but that’s not because they do not take them seriously, but because the existing climate models simply are not good enough to make reliable predictions for exactly when and how tipping points will be crossed. That does not mean tipping points are unscientific. Just because no one can presently put a number to the risk posed by tipping points does not mean the risk does not exist. It does mean, however, that we need better climate models.

Path-dependence and tipping points are cases where naïve extrapolations can badly fail and they are common occurrences in non-linear systems, like the global climate. Just because we’ve been coping okay with climate change so far does not mean it will remain that way.

I want to thank Michael Mann for checking parts of this transcript.

Saturday, September 05, 2020

What is a singular limit?

Imagine you bite into an apple and find a beheaded worm. Eeeh. But it could have been worse. If you had found only half a worm in the apple, you’d now have the other half in your mouth. And a quarter of a worm in the apple would be even worse. Or a hundredth. Or a thousandth. If we extrapolate this, we find that the worst apple ever is one without a worm.

Eh, no, this can’t be right, can it? What went wrong?

I borrowed the story of the wormy apple from Michael Berry, who has used it to illustrate a “singular limit”. In this video, I will explain what a singular limit is and what we can learn from it.


A singular limit is also sometimes called a “discontinuous limit”, and it means that as some variable gets closer to a certain point, the value of a function does not approach the function’s value at that point. In the case of the apple, the variable is the length of the worm that remains in the apple, and the point you are approaching is a worm-length of zero. The function is what you could call the yuckiness of the apple. The yuckiness increases the less worm is left in the apple, but then it suddenly jumps to totally okay. This is a discontinuity, or a singular limit.

You can simulate such a function on your smartphone easily if you punch in a positive number smaller than one and square it repeatedly. This will always give zero, eventually, regardless of how close your original number was to 1. But if you start from 1 exactly, you will stay at 1. So, if you define a function from the limit of squaring a number infinitely often, that would be f(x) = lim_{n→∞} x^(2^n), where n is a natural number, then this function makes a sudden jump at x = 1.
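You can check this in a few lines of Python (my addition; sixty squarings is plenty, since floating-point numbers below 1 underflow to exactly 0.0 long before that):

```python
# The "repeated squaring" function from the text: f(x) = lim x^(2^n).
# For any 0 <= x < 1 the limit is 0; at x = 1 it jumps to 1 -- a singular limit.

def repeated_square(x: float, times: int = 60) -> float:
    for _ in range(times):
        x = x * x
    return x

for x in [0.5, 0.9, 0.999, 0.999999, 1.0]:
    print(f"x = {x}: limit ~ {repeated_square(x)}")
# Every x below 1 collapses to 0.0, no matter how close to 1 it starts;
# exactly 1 stays at 1.
```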

This is a fairly obvious example, but singular limits are not always easy to spot. Here is an example from John Baez that will blow your mind, trust me, even if you are used to weird math. Look at this integral. Looks like a pretty innocent integral over the positive real numbers. You are integrating the function sin(t)/t, and the result turns out to be π/2. Nothing funny going on.

You can make this integral a little more complicated by multiplying the function you are integrating with another function. This other function is just the same function as previously, except that it divides the integration variable by 101. If you integrate the product of these two functions, it comes out to be π/2 again. You can multiply these two functions by a third function in which you divide the integration variable by 201. The result is π/2 again. And so on.

We can write these integrals in a nicely closed form because zero times 100 plus 1 is just one. So, for an arbitrary number of factors, that we can call N, you get an integral over this product. And you can keep on evaluating these integrals, which will give you π/2, π/2, π/2 until you give up at N equals 2000 or what have you. It certainly looks like this series just gives π/2 regardless of N. But it doesn’t. When N takes on this value:
    15,341,178,777,673,149,429,167,740,440,969,249,338,310,889
The result of the integral is, for the first time, not π/2, and it never becomes π/2 for any N larger than that. You can find a proof for this here. The details of the proof don’t matter here, I am just telling you about this to show that mathematics can be far weirder than it appears at first sight.
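Nobody can evaluate the integral numerically at an N of that size, but you can check the mechanism in code. For Borwein-style integrals of this type, the standard criterion is that the result stays π/2 exactly as long as the sum 1/101 + 1/201 + … + 1/(100N+1) stays below 1, and this sum grows only logarithmically, hence the absurdly large N. A small sketch (my addition, under that assumption about the criterion):

```python
# The pattern breaks when the sum 1/101 + 1/201 + ... + 1/(100N+1) first
# exceeds 1 -- the standard criterion for Borwein-style sinc integrals.
# The sum diverges, but only logarithmically.
import math

for N in [10, 1000, 10**6]:
    s = sum(1 / (100 * k + 1) for k in range(1, N + 1))
    print(f"N = {N:>7}: sum = {s:.4f}  (integral is pi/2 while sum < 1)")

# The sum grows like (ln N + gamma)/100, so it crosses 1 near N ~ e^(100 - gamma):
gamma = 0.5772156649  # Euler-Mascheroni constant
print(f"sum crosses 1 near N ~ {math.exp(100 - gamma):.2e}")  # about 1.5e43
```

The estimate lands at about 1.5 × 10^43, which is indeed the order of magnitude of the 43-digit number quoted above.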

And this matters because a lot of physicists act like the only numbers in mathematics are 2, π, and Euler’s number. If they encounter anything else, then that’s supposedly “unnatural”. Like, for example, the strength of the electromagnetic force relative to the gravitational force between, say, an electron and a proton. That ratio turns out to be about ten to the thirty-nine. So what, you may say. Well, physicists believe that a number like this just cannot come out of the math all by itself. They called it the “Hierarchy Problem” and it supposedly requires new physics to “explain” where this large number comes from.
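For what it’s worth, that ratio is a one-line computation (my addition, with the usual textbook constants):

```python
# Ratio of the electric to the gravitational force between an electron and
# a proton. Both forces fall off as 1/r^2, so the distance cancels.

k = 8.988e9      # Coulomb constant, N m^2 C^-2
G = 6.674e-11    # gravitational constant, N m^2 kg^-2
e = 1.602e-19    # elementary charge, C
m_e = 9.109e-31  # electron mass, kg
m_p = 1.673e-27  # proton mass, kg

ratio = k * e**2 / (G * m_e * m_p)
print(f"F_electric / F_gravity = {ratio:.1e}")  # about 2e39
```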

But pure mathematics can easily spit out numbers that large. There isn’t a priori anything wrong with the physics if a theory contains a large number. We just saw one such oddly specific large number coming out of a rather innocent looking integral series. This number is of the order of magnitude 10^43. Another example of a large number coming out of pure math is the size of the monster group, which is about 10^53. So the integral series is not an isolated case. It’s just how mathematics is.

Let me be clear that I am not saying these particular numbers are somehow relevant for physics. I am just saying if we find experimentally that a constant without units is very large, then this does not mean math alone cannot explain it and it must therefore be a signal for new physics. That’s just wrong.

But let me come back to the singular limits because there’s more to learn from them. You may put the previous examples down as mathematical curiosities, but they are just very vivid demonstrations for how badly naïve extrapolations can fail. And this is something we do not merely encounter in mathematics, but also in a lot of physical systems.

I am here not thinking of the man who falls off the roof and, as he passes the 2nd floor, thinks “so far, so good”. In this case we know full well that his good luck will soon come to an end, because the surface of the earth is in the way of his well-being. We have merely ignored this information because otherwise it would not be funny. So, this is not what I am talking about. I am talking about situations where we observe sudden changes in a system that are not due to just willfully ignoring information.

An example you are probably familiar with are phase transitions. If you cool down water, it is liquid, liquid, liquid, until suddenly it isn’t. You cannot extrapolate from the water being liquid to it being a solid. It’s a pattern that does not continue. There are many such phase transitions in physical systems where the behavior of a system suddenly changes, and they usually come along with observable properties that make sudden jumps, like entropy or viscosity. These are singular limits.

Singular limits are all over the place in condensed matter physics, but in other areas, physicists seem to have a hard time acknowledging their existence. An example that you find frequently in the popular science press is calculations in a universe with a negative cosmological constant, the so-called Anti-de Sitter space, which falsely give the impression that these calculations tell us something about the real world, which has a positive cosmological constant.

A lot of physicists believe the one case tells us something about the other because, well, you could take the limit from a very small but negative cosmological constant to a very small but positive cosmological constant, and then, so they argue, the physics should be kind of the same. But. We know that the limit from a small negative cosmological constant to zero and then on to positive values is a singular limit. Space-time has a conformal boundary for all values strictly smaller than zero, but no longer for exactly zero. We have therefore no reason to think these calculations that have been done for a negative cosmological constant tell us anything about our universe, which has a positive cosmological constant.

Here are a few examples of such misleading headlines. They usually tell stories about black holes or wormholes because that’s catchy. Please do not fall for this. These calculations tell us nothing, absolutely nothing, about the real world.