Saturday, May 28, 2022

Chaos: The Real Problem with Quantum Mechanics

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


You’ve probably seen a lot of headlines claiming that quantum mechanics is “strange”, “weird” or “spooky”. In the best case it’s “unintuitive” and “no one understands it”. Poor thing. In this video I will try to convince you that the problem with quantum mechanics isn’t that it’s weird. The problem with quantum mechanics is chaos. And that’s what we’ll talk about today.

Saturn has 82 moons. This is one of them, its name is Hyperion. Hyperion has a diameter of about 200 kilometers and its motion is chaotic. It’s not the orbit that’s chaotic, it’s the orientation of the moon on that orbit.

It takes Hyperion about 3 weeks to go around Saturn once, and about 5 days to rotate about its own axis. But the orientation of the axis tumbles around erratically every couple of months. And that tumbling is chaotic in the technical sense. Even if you measure the position and orientation of Hyperion to utmost precision, you won’t be able to predict what the orientation will be a year later.

Hyperion is a big headache for physicists. Not so much for astrophysicists. Hyperion’s motion can be understood, if not predicted, with general relativity or, to good approximation, with Newtonian dynamics and Newtonian gravity. These are all theories which do not have quantum properties. Physicists call such theories without quantum properties “classical”.

But Hyperion is a headache for those who think that quantum mechanics is really the way nature works. Because quantum mechanics predicts that Hyperion’s chaotic motion shouldn’t last longer than about 20 years. But it has lasted much longer. So, quantum mechanics has been falsified.

Wait what? Yes, and it isn’t even news. That quantum mechanics doesn’t correctly reproduce the dynamics of classical, chaotic systems has been known since the 1950s. The particular example with the moon of Saturn comes from the 1990s. (For details see here or here.)

The origin of the problem isn’t all that difficult to see. If you remember, in quantum mechanics we describe everything with a wave-function, usually denoted psi. There aren’t just wave-functions for particles. In quantum mechanics there’s a wave-function for everything: atoms, cats, and also moons.

You calculate the change of the wave-function in time with the Schrödinger equation, which looks like this. The Schrödinger equation is linear, which just means that no products of the wave-function appear in it. You see, there’s only one Psi on each side. Systems with linear equations like this don’t have chaos. To have chaos you need non-linear equations.
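For readers of the transcript, the equation in question is the Schrödinger equation,

$$ i\hbar\,\frac{\partial}{\partial t}\,\Psi = \hat{H}\,\Psi, $$

where the Hamilton operator H acts linearly on the wave-function: Psi appears only to the first power on both sides.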

But quantum mechanics is supposed to be a theory of all matter. So we should be able to use quantum mechanics to describe large objects, right? If we do that, we should just find that the motion of these large objects agrees with the classical non-quantum behavior. This is called the “correspondence principle”, a name that goes back to Niels Bohr.

But if you look at a classical chaotic system, like this moon of Saturn, the prediction you get from quantum mechanics only agrees with that from classical Newtonian dynamics for a certain period of time, known as the “Ehrenfest time”. Within this time, you can actually use quantum mechanics to study chaos. This is what quantum chaos is all about. But after the Ehrenfest time, quantum mechanics gives you a prediction that just doesn’t agree with what we observe. It would predict that the orientations of Hyperion don’t tumble around but instead blur out until they’re so blurred you wouldn’t notice any tumbling. Basically the chaos gets washed away in quantum uncertainty.
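Roughly speaking, the reason this time is so short even for a moon is that for a classically chaotic system the Ehrenfest time grows only logarithmically with the ratio of a typical action I of the system to Planck's constant,

$$ t_{\mathrm{E}} \sim \frac{1}{\lambda}\,\ln\!\frac{I}{\hbar}, $$

where lambda is the Lyapunov exponent that quantifies the chaos. The logarithm is the culprit: even though I/hbar is astronomically large for Hyperion, the estimate comes out at mere decades rather than at some astronomically long time.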

It seems to me that some of you are a little skeptical. It can’t possibly be that physicists have known of this problem for 60 years and just ignored it? Indeed, they haven’t exactly ignored it. They have come up with an explanation which goes like this.

Hyperion may be far away from us and not much is going on there, but it still interacts with dust and with light or, more precisely, with the quanta of light called “photons”. These are each really tiny interactions, but there are a lot of them. And they have to be added to the Schrödinger equation of the moon.

What these tiny interactions do is that they entangle the moon with its environment, with the dust and the light. This means that each time a grain of dust bumps into the moon, this very slightly changes some part of the moon’s wave-function, and afterwards they are both correlated. This correlation is the entanglement. And those little bumps slightly shift the crests and troughs of parts of the wave-function.

This is called “decoherence” and it’s just what the Schrödinger equation predicts. And this equation is still linear, so all those interactions don’t solve the problem that the prediction doesn’t agree with observation. The solution to the problem comes in the 2nd step of the argument. Physicists now say, okay, so we have this wave-function for the moon with this huge number of entangled dust grains and photons. But we don’t know exactly what this dust is or where it is or what the photons do and so on. So we do what we always do if we don’t know the exact details: We make guesses about what the details could plausibly be and then we average over them. And that average agrees with what classical Newtonian dynamics predicts.

So, physicists say, all is good! But there are two problems with this explanation. One is that it forces you to accept that in the absence of dust and light a moon will not follow Newton’s law of motion.

Ok, well, you could say that in this case you can’t see the moon either so for all we can tell that might be correct.

The more serious problem is that taking an average isn’t a physical process. It doesn’t change anything about the state that the moon is in. It’s still in one of those blurry quantum states that are now also entangled with dust and photons, you just don’t know exactly which one.

To see the problem with the argument, let me use an analogy. Take a classical chaotic process like throwing a die. The outcome is an integer from 1 to 6, and if you average over many throws then the average value per throw is 3.5. Just exactly which outcome you get is determined by a lot of tiny details like the positions of air molecules and the surface roughness and the motion of your hand and so on.

Now suppose I write down a model for the die. My model says that the outcome of throwing the die is either 106 or -99 each with probability 1/2. Wait, you say, there’s no way throwing a die will give you minus 99. Look, I say, the average is 3.5, all is good. Would you accept this? Probably not.

Clearly for the model to be correct it shouldn’t just get the average right, but each possible individual outcome should also agree with observations. And throwing a die doesn’t give minus 99 any more than a big blurry rock entangled with a lot of photons agrees with our observations of Hyperion.
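If you prefer to see the point of the analogy in code, here is a minimal sketch using the numbers from the example; both "models" have the same mean of 3.5, but only one of them ever produces outcomes you could actually observe.

```python
import random

random.seed(0)
n = 100_000

# An honest die: outcomes 1..6, each with probability 1/6.
die = [random.randint(1, 6) for _ in range(n)]

# The silly model: outcomes 106 or -99, each with probability 1/2.
model = [random.choice([106, -99]) for _ in range(n)]

print(sum(die) / n)    # close to 3.5
print(sum(model) / n)  # also close to 3.5
print(set(model) & set(range(1, 7)))  # empty set: the model never produces a real die outcome
```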

Ok but what’s with the collapse of the wave-function? When we make a measurement, then the wave-function changes in a way that the Schrödinger-equation does not predict. Whatever happened to that?

Exactly! In quantum mechanics we use the wave-function to make probabilistic predictions. Say, an electron hits either the left or right side of a screen with 50% probability each. But then when we measure the electron, we know it’s, say, left with 100% probability.

This means after a measurement we have to update the wave-function from 50-50 to 100-0. Importantly, what we call a “measurement” in quantum mechanics doesn’t actually have to be done by a measurement device. I know it’s an awkward nomenclature, but in quantum mechanics a “measurement” can happen just by interaction with a lot of particles. Like grains of dust, or photons.

This means, Hyperion is in some sense constantly being “detected” by all those small particles. And the update of the wave-function is indeed a non-linear process. This neatly resolves the problem: Hyperion correctly tumbles around on its orbit chaotically. Hurray.

But here’s the thing. This only works if the collapse of the wave-function is a physical process. Because you have to actually change something about that blurry quantum state of the moon for it to agree with observations. But the vast majority of physicists today think the collapse of the wave-function isn’t a physical process. Because if it was, then it would have to happen instantaneously everywhere.

Take the example of the electron hitting the screen. When the wave-function arrives on the screen, it is spread out. But when the particle appears on one side of the screen, the wave-function on the other side of the screen must immediately change. Likewise, when a photon hits the moon on one side, then the wave-function of the moon has to change on the other side, immediately.

This is what Einstein called “spooky action at a distance”. It would break the speed of light limit. So, physicists said, the measurement is not a physical process. We’re just accounting for the knowledge we have gained. And there’s nothing propagating faster than light if we just update our knowledge about another place.

But the example with the chaotic motion of Hyperion tells us that we need the measurement collapse to actually be a physical process. Without it, quantum mechanics just doesn’t correctly describe our observations. But then what is this process? No one knows. And that’s the problem with quantum mechanics.

Saturday, May 21, 2022

The closest we have to a Theory of Everything

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


In English they talk about a “Theory of Everything”. In German we talk about the “Weltformel”, the world-equation. I’ve always disliked the German expression. That’s because equations in and of themselves don’t tell you anything. Take for example the equation x=y. That may well be the world-equation, the question is just what’s x and what is y. However, in physics we do have an equation that’s pretty damn close to a “world-equation”. It’s remarkably simple, looks like this, and it’s called the principle of least action. But what’s S? And what’s this squiggle? That’s what we’ll talk about today.
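For readers of the transcript, the equation can be written as

$$ \delta S = 0, $$

where S is the action and the squiggle, the delta, denotes a variation of the path; both are unpacked in what follows.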

The principle of least action is an example of optimization where the solution you are looking for is “optimal” in some quantifiable way. Optimization principles are everywhere. For example, equilibrium economics optimizes the distribution of resources, at least that’s the idea. Natural selection optimizes the survival of offspring. If you shift around on your couch until you’re comfortable you are optimizing your comfort. What these examples have in common is that the optimization requires trial and error. The optimization we see in physics is different. It seems that nature doesn’t need trial and error. What happens is optimal right away, without trying out different options. And we can quantify just in which way it’s optimal.

I’ll start with a super simple example. Suppose a lonely rock flies through outer space, far away from any stars or planets, so there are no forces acting on the rock, no air friction, no gravity, nothing. Let’s say you know the rock goes through point A at a time we’ll call t_A and later through point B at time t_B. What path did the rock take to get from A to B?

Well, if no force is acting on the rock it must travel in a straight line with constant velocity, and there is only one straight line connecting the two points, and only one constant velocity that will fit the duration. It’s easy to describe this particular path between the two points – it’s the shortest possible path. So the path which the rock takes is optimal in that it’s the shortest.

This is also the case for rays of light that bounce off a mirror. Suppose you know the ray goes from A to B and want to know which path it takes. You find the mirror image of point B, draw the shortest path from A to that image, and then fold the segment that lies behind the mirror back to the front, which doesn’t change the length of the path. The result is that the angle of incidence equals the angle of reflection, which you probably remember from middle school.

This “principle of the shortest path” goes back to the Greek mathematician Hero of Alexandria in the first century, so not exactly cutting edge science, and it doesn’t work for refraction in a medium, like for example water, because the angle at which a ray of light travels changes when it enters the medium. This means using the length to quantify how “optimal” a path is can’t be quite right.

In 1657, Pierre de Fermat figured out that in both cases the path which the ray of light takes from A to B is that which requires the least amount of time. If there’s no change of medium, then the speed of light doesn’t change and taking the least time means the same as taking the shortest path. So, reflection works as previously.

But if you have a change of medium, then the speed of light changes too. Let us use the previous example with a tank of water, and let us call speed of light in air c_1, and the speed of light in water c_2.

We already know that in either medium the light ray has to take a straight line, because that’s the fastest you can get from one point to another at constant speed. But you don’t know what’s the best point for the ray to enter the water so that the time to get from A to B is the shortest.

But that’s pretty straight-forward to calculate. We give names to these distances and calculate the length of each path segment as a function of the point where the ray enters the water. Divide each length by the speed of light in the respective medium and add them up to get the total time.

Now we want to know which is the smallest possible time if we change the point where the ray enters the medium. So we treat this time as a function of x and calculate where it has a minimum, so where the first derivative with respect to x vanishes.

The result you get is this. And then you remember that those ratios with square roots here are the sines of the angles. Et voila, Fermat may have said, this is the correct law of refraction. This is known as the principle of least time, or as Fermat’s principle, and it works for both reflection and refraction.
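For those reading rather than watching, the calculation goes like this; the letters are just labels for the geometry described above, with a the height of A above the surface, b the depth of B below it, d their horizontal separation, and x the horizontal position of the entry point. The travel time is

$$ T(x) = \frac{\sqrt{a^2 + x^2}}{c_1} + \frac{\sqrt{b^2 + (d-x)^2}}{c_2}, $$

and setting dT/dx = 0 gives

$$ \frac{x}{c_1\sqrt{a^2+x^2}} = \frac{d-x}{c_2\sqrt{b^2+(d-x)^2}}, \qquad \text{that is,} \qquad \frac{\sin\theta_1}{c_1} = \frac{\sin\theta_2}{c_2}, $$

which is the law of refraction.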

Let us pause here for a moment and appreciate how odd this is. The ray of light takes the path that requires the least amount of time. But how does the light know it will enter a medium before it gets there, so that it can pick the right place to change direction. It seems like the light needs to know something about the future. Crazy.

It gets crazier. Let us go back to the rock, but now we do something a little more interesting, namely throw the rock in a gravitational field. For simplicity let’s say the gravitational potential energy is just proportional to the height, which it is to good precision near the surface of the earth. Again I tell you the particle goes from point A at time t_A to point B at time t_B. In this case the principle of least time doesn’t give the right result.

But in the 18th century, the French mathematician Maupertuis figured out that the path which the rock takes is still optimal in some other sense. It’s just that we have to calculate something a little more difficult. We have to take the kinetic energy of the particle, subtract the potential energy and integrate this over the path of the particle.

This expression, the time-integral over the kinetic minus potential energy is the “action” of the particle. I have no idea why it’s called that way, and even less do I know why it’s usually abbreviated S, but that’s how it is. This action is the S in the equation that I showed at the very beginning.
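In symbols, for a path that starts at A at time t_A and ends at B at time t_B, the action is

$$ S = \int_{t_A}^{t_B} \big( E_{\mathrm{kin}} - E_{\mathrm{pot}} \big)\, dt, $$

the time-integral over the kinetic minus the potential energy, just as described.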

The thing is now that the rock always takes the path for which the action has the smallest possible value. You see, to keep this integral small you can either try to make the kinetic energy small, which means keeping the velocity small, or you make the potential energy large, because that enters with a minus.

But remember you have to get from A to B in a fixed time. If you make the potential energy large, this means the particle has to go high up, but then it has a longer path to cover so the velocity needs to be high and that means the kinetic energy is high. If on the other hand the kinetic energy is low, then the potential energy doesn’t subtract much. So if you want to minimize the action you have to balance both against each other. Keep the kinetic energy small but make the potential energy large.

The path that minimizes the action turns out to be a parabola, as you probably already knew, but again note how weird this is. It’s not that the rock actually tries all possible paths. It just gets on the way and takes the best one on first try, like it knows what’s coming before it gets there.

What’s this squiggle in the principle of least action? Well, if we want to calculate which path is the optimal path, we do this similarly to how we calculate the optimum of a curve. At the optimum of a curve, the first derivative with respect to the variable of the function vanishes. To find the optimal path, we take the derivative of the action with respect to the path and again ask where it vanishes. And this is what the squiggle means. It’s a sloppy way to say: take the derivative with respect to the paths. That derivative has to vanish, which means the action is optimal, and it is usually a minimum, hence the principle of least action.

Okay, you may say but you don’t care all that much about paths of rocks. Alright, but here’s the thing. If we leave aside quantum mechanics for a moment, there’s an action for everything. For point particles and rocks and arrows and that stuff, the action is the integral over the kinetic energy minus potential energy.

But there is also an action that gives you electrodynamics. And there’s an action that gives you general relativity. In each of these cases, if you ask what the system must do to give you the least action, then that’s what actually happens in nature. You can also get the principle of least time and of the shortest path back out of the least action in special cases.

And yes, the principle of least action really uses an integral into the future. How do we explain that?

Well. It turns out that there is another way to express the principle of least action. One can mathematically show that the path which minimizes the action is that path which fulfils a set of differential equations which are called the Euler-Lagrange Equations.

For example, the Euler Lagrange Equations of the rock example just give you Newton’s second law. The Euler Lagrange Equations for electrodynamics are Maxwell’s equations, the Euler Lagrange Equations for General Relativity are Einstein’s Field equations. And in these equations, you don’t need to know anything about the future. So you can make this future dependence go away.
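As a quick check of the rock example: with the kinetic minus potential energy L = ½ m ẋ² − m g x, where x is the height, the Euler-Lagrange equation

$$ \frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0 $$

becomes m ẍ = − m g, which is just Newton’s second law for a rock in a constant gravitational field, and its solutions describe the parabolic motion mentioned earlier.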

What’s with quantum mechanics? In quantum mechanics, the principle of least action works somewhat differently. In this case a particle doesn’t just go one optimal path. It actually goes all paths. Each of these paths has its own action. It’s not only that the particle goes all paths, it also goes to all possible endpoints. But if you eventually measure the particle, the wave-function “collapses”, and the particle is only in one point. This means that these paths really only tell you the probability for the particle to go one way or another. You calculate the probability for the particle to go to one point by summing over all paths that go there.
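Schematically, and suppressing normalization, each path contributes a complex phase set by its action, and the probability comes from the square of the summed contributions:

$$ \text{amplitude}(A \to B) \;=\; \sum_{\text{paths}} e^{\,i S[\text{path}]/\hbar}, \qquad \text{probability} \;=\; |\text{amplitude}|^2. $$

Paths whose actions differ wildly tend to cancel, while paths near the classical one, where the action is stationary, add up. That is how the principle of least action re-emerges from quantum mechanics.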

This interpretation of quantum mechanics was introduced by Richard Feynman and is therefore now called the Feynman path integral. What happens with the strange dependence on the future in the Feynman path integral? Well, technically it’s there in the mathematics. But to do the calculation you don’t need to know what happens in the future, because the particle goes to all points anyway.

Except, hmm, it doesn’t. In reality it goes to only one point. So maybe the reason we need the measurement postulate is that we don’t take this dependence on the future which we have in the path integral seriously enough.

The Superdetermined Workshop finally took place

In case you’re still following this blog, I think I owe you an apology for the silence. I keep thinking I’ll get back to posting more than the video scripts but there just isn’t enough time in the day. 

Still, I’m just back from Bonn, where our workshop on Superdeterminism and Retrocausality finally took place. And since I told you how this story started three years ago I thought I’d tell you today how it went.

Superdeterminism and Retrocausality are approaches to physics beyond quantum mechanics, at least that’s how I think about it – and that already brings us to the problem: we don’t have an agreed-upon definition for these terms. Everyone is using them in a slightly different way and it’s causing a lot of confusion. 

So one of the purposes of the workshop was to see if we can bring clarity into the nomenclature. The other reason was to bring in experimentalists, so that the more math-minded among us could get a sense of what tests are technologically feasible.

I did the same thing 15 years ago with the phenomenology of quantum gravity, on which I organized a series of conferences (if you’ve followed this blog for a really long time you’ll remember). This worked out beautifully – the field of quantum gravity phenomenology is in much better condition today than it was 20 years ago.

It isn’t only that I think we’ll quite possibly see experimental confirmation (or falsification!) of quantum gravity in the next decade or two, because I thought that’d be possible all along. Much more important is that the realization that it’s possible to test quantum gravity (without building a Milky-Way sized particle collider) is slowly sinking into the minds of the community, so something is actually happening.

But, characteristically, the moment things started moving I lost interest in the whole quantum gravity thing and moved on to attack the measurement problem in quantum mechanics. I have a lot of weaknesses, but lack of ambition isn’t one of them.

The workshop was originally scheduled to take place in Cambridge in May 2020. We picked Cambridge because my one co-organizer, Huw Price, was located there, the other one, Tim Palmer, is in Oxford, and both places collect a decent number of quantum foundations people. We had the room reserved, had the catering sorted out, and had begun to book hotels. Then COVID happened and we had to cancel everything at the last minute. We tentatively postponed the meeting to late 2020, but that didn’t happen either.

Huw went to Australia, and by the time the pandemic was tapering off, he’d moved on to Bonn. We moved the workshop with him to Bonn, more specifically to a place called the International Center for Philosophy. Then we started all over again.

We didn’t want to turn this workshop into an online event because that’d have defeated the purpose. There are few people working on superdeterminism and retrocausality and we wanted them to have a chance to get to personally know each other. Luckily our sponsor, the Franklin Fetzer Fund, was extremely supportive even though we had to postpone the workshop twice and put up with some cancellation fees.

Of course the pandemic isn’t quite over and several people still have travel troubles. In particular, it turned out there’s a nest of retrocausalists in Australia and they were more or less stuck there. Traveling from China is also difficult at the moment. And we had a participant affiliated with a Russian university who had difficulties traveling for yet another reason. The world is in many ways a different place now than it was 2 years ago.

One positive thing that’s come out of the pandemic though is that it’s become much easier to set up zoom links and live streams and people are more used to it. So while we didn’t have remote talks, we did have people participating from overseas, from Australia, China, and Canada. It worked reasonably well, leaving aside the usual hiccups: remote participants sometimes couldn’t see or hear, the zoom event expired when it shouldn’t have, and so on.

I have organized a lot of workshops and conferences and I have attended even more of them. This meeting was special in a way I didn’t anticipate. Many of the people who are working on superdeterminism and retrocausality have for decades been met with a mix of incredulity, ridicule, and insults. In fact, you might have seen this play out with your own eyes in the comment sections of this and other blogs. For many of us, me included, this was the first time we had an audience who took our work seriously.

All of this talk about superdeterminism and new physics beyond quantum mechanics may turn out to be complete rubbish of course. But at least at present I think it’s the most promising route to make progress in the foundations of physics. The reason is quite simple: If it’s right, then new physics should appear in a parameter range that we can experimentally access from two sides, by making measuring devices smaller, and by bringing larger objects into quantum states. And by extrapolating the current technological developments, we'll get there soon enough anyway. The challenge is now to figure out what to look for when the data come in.

The talks from the workshop were recorded. I will post a link when they appear online. We’re hoping to produce a kind of white paper that lays out the terminology that we can refer to in the future. And I am working on a new paper in which I try to better explain why I think that either superdeterminism or retrocausality is almost certainly correct. So this isn’t the end of the story, it’s just the beginning. Stay tuned. 

Friday, May 13, 2022

Can we make a black hole? And if we could, what could we do with it?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Wouldn’t it be cool to have a little black hole in your office? You know, maybe as a trash bin. Or to move around the furniture. Or just as a kind of nerdy gimmick. Why can we not make black holes? Or can we? If we could, what could we do with them? And what’s a black hole laser? That’s what we’ll talk about today.

Everything has a gravitational pull, the sun and earth but also you and I and every single molecule. You might think that it’s the mass of the object that determines how strong the gravitational pull is, but this isn’t quite correct.

If you remember Newton’s gravitational law, then, sure, a higher mass means a higher gravitational pull. But a smaller radius also means a higher gravitational pull. So, if you hold the mass fixed and compress an object into a smaller and smaller radius, then the gravitational pull gets stronger. Eventually, it becomes so strong that not even light can escape. You’ve made a black hole.

This happens when the mass is compressed inside a radius known as the Schwarzschild-radius. Every object has a Schwarzschild radius, and you can calculate it from the mass. For the things around us the Schwarzschild-radius is much much smaller than the actual radius.

For example, the actual radius of earth is about 6000 kilometers, but the Schwarzschild-radius is only about 9 millimeters. Your actual radius is maybe something like a meter, but your Schwarzschild radius is about 10 to the minus 24 meters, that’s about a billion times smaller than a proton.

And the Schwarzschild radius of an atom is about 10 to the minus 53 meters, that’s even smaller than the Planck length which is widely regarded to be the smallest possible length, though I personally think this is nonsense, but that’s a different story.
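If you want to check such numbers yourself: the Schwarzschild radius follows from the mass alone as r_s = 2GM/c². Here is a minimal sketch, using round-number masses for the earth and, as an example of “an atom”, a carbon atom:

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Schwarzschild radius in meters for a given mass in kilograms."""
    return 2 * G * mass_kg / c**2

print(schwarzschild_radius(5.97e24))   # Earth: roughly 9e-3 m, i.e. about 9 millimeters
print(schwarzschild_radius(2.0e-26))   # a carbon atom: roughly 3e-53 m
```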

So the reason we can’t just make a black hole is that the Schwarzschild radius of stuff we can handle is tiny, and it would take a lot of energy to compress matter sufficiently. It happens out there in the universe because if you have really huge amounts of matter with little internal pressure, like burned out stars, then gravity compresses it for you. But we can’t do this ourselves down here on earth. It’s basically the same problem as making nuclear fusion work, just many orders of magnitude more difficult.

But wait. Einstein said that mass is really a type of energy, and energy also has a gravitational pull. Yes, that guy again. Doesn’t this mean that if we want to create a black hole, we can just speed up particles to really high velocities, so that they have a high energy, and then bang them into each other? For example, hmm, with a really big collider.

Indeed, we could do this. But even the biggest collider we have built so far, which is currently the Large Hadron Collider at CERN, is nowhere near reaching the required energy to make a black hole. Let’s just put in the numbers.

In the collisions at the LHC we can reach energies of about 10 TeV, which corresponds to a Schwarzschild radius of about 10 to the minus 50 meters. But the region in which the LHC compresses this energy is more like 10 to the minus 19 meters. We’re far, far away from making a black hole.
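You can redo this estimate in a few lines. The sketch below converts the roughly 10 TeV of collision energy into an equivalent mass via E = mc² and then into a Schwarzschild radius; the 10⁻¹⁹ m figure for the collision region is the one quoted above.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
eV = 1.602e-19     # joules per electron volt

E_collision = 10e12 * eV           # about 10 TeV of collision energy, in joules
m_equivalent = E_collision / c**2  # equivalent mass via E = mc^2, roughly 1.8e-23 kg
r_s = 2 * G * m_equivalent / c**2  # Schwarzschild radius, roughly 3e-50 m

print(r_s)
print(1e-19 / r_s)  # the ~1e-19 m collision region is more than 30 orders of magnitude too large
```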

So why were people so worried 10 years ago that the LHC might create a black hole? This is only possible if gravity doesn’t work the way Einstein said. If gravity, for whatever reason, were much stronger at short distances than Einstein’s theory predicts, then it’d become much easier to make black holes. And 10 years ago the idea that gravity could indeed get stronger at very short distances was popular for a while. But there’s no reason to think this is actually correct and, as you’ve noticed, the LHC didn’t produce any black holes.

Alright, so far it doesn’t sound like you’ll get your black hole trash can. But what if we built a much bigger collider? Yes, well, with current technology it’d have to have a diameter about the size of the Milky Way. It’s not going to happen. Something else we can do?

We could try to focus a lot of lasers on a point. If we used the world’s currently most powerful lasers and focused them on an area about 1 nanometer wide, we’d need about 10 to the 37 of those lasers. It’s not strictly speaking impossible, but clearly it’s not going to happen any time soon.  

Ok, good, but what if we could make a black hole? What could we do with it? Well, surprise, there’s a couple of problems. Black holes have a reputation for sucking stuff in, but actually if they’re small, the problem is the opposite. They throw stuff out. That stuff is Hawking radiation. 

Stephen Hawking discovered in the early 1970s that all black holes emit radiation due to quantum effects, so they lose mass and evaporate. The smaller the black holes, the hotter, and the faster they evaporate. A black hole with a mass of about 100 kilograms would entirely evaporate in less than a nanosecond.
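To see where the “less than a nanosecond” comes from, here is a rough sketch using the standard estimate for a black hole that evaporates into photons only, t ≈ 5120 π G² M³ / (ħ c⁴); counting the other particle species would make the lifetime even shorter.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8       # speed of light, m/s
hbar = 1.055e-34  # reduced Planck constant, J s

def evaporation_time(mass_kg):
    """Hawking evaporation time in seconds, counting photon emission only."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

print(evaporation_time(100))  # roughly 8e-11 seconds, well under a nanosecond
```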

Now “evaporation” sounds rather innocent and might make you think of a puddle turning into water vapor. But for the black hole it’s far from innocent. And if the black hole’s temperature is high, the radiation is composed of all elementary particles, photons, electrons, quarks, and so on. It’s really unhealthy. And a small black hole converts energy into a lot of those particles very quickly. This means a small black hole is basically a bomb. So it wouldn’t quite work out the way it looks in the Simpsons clip. Rather than eating up the city it’d blast it apart.

But if you managed to make a black hole with a mass of about a million tons, it would live a few years, so that’d make more sense. Hawking suggested surrounding such black holes with mirrors and using them to generate power. It’d be very climate friendly, too. Louis Crane suggested putting a medium-sized black hole in the focus of a half mirror and using its radiation to propel a spaceship.

A slight problem with this is that you can’t touch black holes, so there’s nothing to hold them with. A black hole isn’t really anything, it’s just strongly curved space. Black holes can be electrically charged, but since they radiate they’ll shed their electric charge quickly, and then they are neutral again and electric fields won’t hold them. So there are some engineering challenges that remain to be solved.

What if we don’t make a black hole but just use one that’s out there? Are those good for anything? The astrophysical black holes which we know exist are very heavy. This means their Hawking temperature is very small, so small indeed that we can’t measure it, as I just explained in a recent video. But if we could reach such a black hole it might be useful for something else.

Roger Penrose already pointed out in the early 1970s that it’s possible to extract energy from a big, spinning black hole by throwing an object just past it. This slows down the black hole by a tiny bit, but speeds up the object you’ve thrown. So energy is conserved in total, but you get something out of it. It’s a little like a swing-by that’s used in space-flight to speed up space missions by using a path that goes by near a planet.

And that too can be used to build a bomb… This was pointed out in 1972 in a letter to Nature by Press and Teukolsky. They said, look, we’ll take the black hole, surround it with mirrors, and then we send in a laser beam, just past the black hole. That gets bent around and comes back with a somewhat higher energy, like Penrose said. But then it bounces off the mirror, goes around the black hole again, gains a little more energy, and so on. This exponentially increases the energy in the laser light until the whole thing blasts apart.

Ok, so now that we’ve talked about blowing things up with bombs that we can’t actually build, let us talk about something that we can actually build, which is called an analogue black hole. The word “analogue” refers to “analogy” and not to the opposite of digital. Analogue black holes are simulations of black holes in fluids or solids where you can “trap” some kind of radiation.

In some cases, what you trap are sound waves in a fluid, rather than light. I should add here that “sound waves” in physics don’t necessarily have anything to do with what you can hear. They are just periodic density changes, like the sound you can hear, but not necessarily something your ears can detect.

You can trap sound waves in a similar way to how a black hole traps light. This can happen if a fluid flows faster than the speed of sound in that fluid. You see, in this case there’s some region from within which the sound waves can’t escape.

Those fluids aren’t really black holes of course, they don’t actually trap light. But they affect sound very much like real black holes affect light. If you want to observe Hawking radiation in such fluids, they need to have quantum properties, so in practice one uses superfluids. Another way to create a black hole analogue is with solids in which the speed of light changes from one place to another.

And those analogue black holes can be used to amplify radiation too. It works a little differently from the amplifications we already discussed because one needs two horizons, but the outcome is pretty much the same: you send in radiation with some energy, and get out radiation with more energy. Of course the total energy is conserved; you take it from the background field which plays the role of the black hole. The radiation which you amplify isn’t necessarily light, as I said it could be sound waves, but it’s an “amplified stimulated emission”, which is why this is called a black hole laser.

Black hole lasers aren’t just a theoretical speculation. It’s reasonably well confirmed that analogue black holes actually act much like real black holes and do indeed emit Hawking radiation. And there have been claims that black hole lasing has been observed as well. It has remained somewhat controversial exactly what the experiment measured, but either way it shows that black hole lasers are within experimental reach. They’re basically a new method to amplify radiation. This isn’t going to result in new technology in the near future, but it serves to show that speculations about what we could do with black holes aren’t as far removed from reality as you may have thought.

Saturday, May 07, 2022

How Bad is Diesel?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


I need a new car, and in my case “new” really means “used”. I can’t afford one of the shiny new electric ones, so it’s either gasoline or diesel. But in recent years we’ve seen a lot of bad headlines about diesel. Why do diesel engines have such a bad reputation? How much does diesel exhaust affect our health really? And what’s the car industry doing about it? That’s what we will talk about today.

In September 2015, news broke about the Volkswagen emissions scandal, sometimes referred to as Dieselgate. It turned out Volkswagen had equipped cars with a special setting for emission tests, so that they would create less pollution during the test than on the road. Much of the world seems to have been shocked how the allegedly accurate and efficient Germans could possibly have done such a thing. I wasn’t really surprised. Let me tell you why.

My first car was a little red Ford Fiesta near the end of its life. For the emissions test I used to take it to a cheap repair shop in the outskirts of a suburb of a suburb. There was no train station and really nothing else nearby, so I’d usually just wait there. One day I saw the guy from the shop fumbling around on the engine before the emissions test and asked him what he was doing. Oh, he said, he’s just turning down the engine so it’ll pass the test. But with that setting the car wouldn’t drive properly, so later he’ll turn it up again.

Well, I thought, that probably wasn’t the point of the emissions test. But I didn’t have money for a better car. When I heard the news about the Volkswagen scandal, that made total sense to me. Of course the always efficient Germans would eventually automate this little extra setting for the emissions test.

But why is diesel in particular so controversial? Diesel and gasoline engines are similar in that they’re both internal combustion engines. In these engines, fuel is ignited which moves a piston, so it converts chemical energy into mechanical energy.

The major difference between diesel and gasoline is the way these explosions happen. In a gasoline engine, the fuel is mixed with air, compressed by pistons and ignited by sparks from spark plugs. In a diesel engine, the air is compressed first which heats it up. Then the fuel is injected into the hot air and ignites. 

One advantage of diesel engines is that they don’t need a constant ignition spark. You just have to get them going once and then they’ll keep on running. Another advantage is that the energy efficiency is about thirty percent higher than that of gasoline engines. They also have lower carbon dioxide emissions per kilometer. For this reason, they were long considered environmentally preferable.

The disadvantage of diesel engines is that the hotter and more compressed gas produces more nitrogen oxides and more particulates. And those are a health hazard.

Nitrogen oxides are combinations of one nitrogen and one or several oxygen atoms. The most prevalent ones in diesel exhaust are nitric oxide (NO) and nitrogen dioxide (NO2). When those molecules are hit by sunlight they can also split off an oxygen atom which then creates ozone by joining an O2 in the air. Many studies have shown that breathing in ozone or nitrogen oxides irritates the airways and worsens respiratory illness, especially asthma.
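Written out as reactions, the ozone formation works schematically like this, where M stands for any other air molecule that carries away the excess energy:

$$ \mathrm{NO_2} + h\nu \;\longrightarrow\; \mathrm{NO} + \mathrm{O}, \qquad \mathrm{O} + \mathrm{O_2} + \mathrm{M} \;\longrightarrow\; \mathrm{O_3} + \mathrm{M}. $$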

It’s difficult to find exact numbers for comparing nitrogen oxide emissions from diesel with gasoline because they depend strongly on the make and model of the car, road conditions, how long the car’s been driving, and so on.

A road test on 149 diesel and gasoline cars manufactured from 2013 to 2016 found that Nitrogen oxide emissions from diesel cars are about a factor ten higher than those of gasoline cars.

This is nicely summarized in this figure, where you can see why this discussion is so heated. Diesel has on average lower carbon-dioxide emission but higher emissions of nitrogen oxides, gasoline cars the other way round. However, you also see that there are huge differences between the cars. You can totally find diesel engines that are lower in both emissions than some gasoline cars. Also note the two hybrid cars which are low on both emissions.

The other issue with diesel emissions is the particulates, basically tiny grains. Particulates are classified by their size, usually abbreviated with PM for ‘particulate matter’ and then a number which tells you their maximal size in micrometers. For example, PM2.5 stands for particulates of size 2.5 micrometers or smaller.

This classification is somewhat confusing because technically PM 10 includes PM2.5. But it makes sense if you know that regulations put bounds on the total amount of particulates in a certain class in terms of weight, and most of the weight in some size classification comes from the largest particles.

So a PM10 limit will for all practical purposes just affect the largest of those particles. To reduce the smaller ones, you then add another limit for, say PM2.5.

Diesel particulates are made of soot and ash from incomplete burning of the fuel, but also abrasion from the engine parts, that includes metals, sulfates, and silicates. Diesel engines generate particulates with a total mass of up to 100 times more than similar-sized petrol engines.

What these particulates do depends strongly on their size. PM10 particles tend to settle to the ground by gravity in a matter of hours whereas PM0.1 can stay in the atmosphere for weeks and are then mostly removed by precipitation. The numbers strongly depend on weather conditions.

When one talks about the amount of particulate matter in diesel exhaust one has to be very careful about exactly how one quantifies it. Most of the *mass* of particulate matter in diesel exhaust is in the range of about a tenth of a micrometer. But most of the particles are smaller than that by about a factor of ten. It’s just that since they’re so much smaller they don’t contribute much total mass.

This figure (p 157) shows the typical distribution of particulate matter in diesel exhaust. The brown dotted line is the distribution of mass. As you can see it peaks somewhat above a tenth of a micrometer, that’s where PM0.1 begins. For comparison, that’s a hundred to a thousand times smaller than pollen. The blue line is the distribution of the number of particles.

As you can see it peaks at a much smaller size, about 10 nanometers. That’s roughly the same size as viruses, so these particulates are really, really tiny, you can’t see them by eye. The green curve shows yet something else, it’s the surface of those particles. The surface is important because it determines how much the particles can interact with living tissue.  

The distinction between mass, surface, and amounts of particulate matter may seem like nitpicking but it’s really important because regulations are based on them.

What do we know about the health impacts of particulates? The WHO has classified airborne particulates as a Group 1 carcinogen. That they’re in group 1 means that the causal relation has been established. But the damage that those particles can do depends strongly on their size. Roughly speaking, the smaller they are, the more easily they can enter the body and the more damage they can do.

PM10 can get into the lower part of the respiratory system, PM2.5 and smaller can enter the blood through the lungs, and from there it can reach pretty much every organ.

The body does have some defense mechanisms. First there’s the obvious, like coughing and sneezing, but once the stuff’s in the lower lungs it can stay there for months, and if you breathe in new particulates all the time, the lungs don’t have a chance to clear them out. In other organs, the immune system tries to attack the particles, but the most prevalent element in these particulates is carbon, which is biopersistent. That means the particles just sit there and accumulate in the tissue.

Here’s a photo of such particulates that have accumulated in bronchial tissue. (Fig 2) It isn’t just that having dirt accumulate in your organs isn’t good news, the particulates can also carry toxic compounds on their surfaces. According to the WHO, PM2.5 exposure has been linked to an increased risk of heart attacks, strokes, respiratory disease, and premature death [Source (3)].

One key study was published in 2007 by researchers from several American institutions. They followed over 65,000 postmenopausal American women who had no history of diagnosed cardiovascular disease.

They found that a 10 microgram per cubic meter increase of PM2.5 was associated with a 24 percent increase in the risk of experiencing a first cardiovascular event (at 95% confidence level), and a 76 percent increase in the risk of death from cardiovascular disease, also at 95% confidence level. These results were adjusted for already known risk factors, such as age, household income, pre-existing conditions, and so on.

A 2013 study that was published in The Lancet followed over 300,000 people from nine European countries for more than a decade. They found that a 5 microgram per cubic meter increase of PM2.5 was correlated with an 18 percent increased risk of developing lung cancer. Again those results are already adjusted to take into account otherwise known risk factors. The PM exposure adds on top of that.

There’ve been lots of other studies claiming correlations between exposure to particulate matter and all kinds of diseases, though not all of them have great statistics. One even claimed they found a correlation between exposure to particulate pollution and decreasing intelligence, which explains it all, really.

Okay, so far we have seen that diesel exhaust really isn’t healthy. Well, glad we talked about it, but that doesn’t really help me to decide what to do about my car. Let’s then look at what the regulations are and what the car industry has been doing about it.

The World Health Organization has put out guideline values for PM10 and PM2.5, both an annual mean and a daily mean, but as you see in this table the actual regulations in the EU are considerably higher. In the US people are even more relaxed about air pollution. Australia has some of the strictest air pollution standards, but even those are above what the WHO recommends.

If you want to put these numbers in perspective, you can look up the air quality at your own location on the website iqair.com that’ll tell you the current PM 2.5 concentration. If you live in a major city chances are you’ll find the level frequently exceeds the WHO recommendation.

Of course the reason for this is not just diesel exhaust. In fact, if you look at this recently published map of global air pollution levels, you’ll see that some of the most polluted places are tiny villages in southern Chile and Patagonia. The reason is not that they love diesel so much down there, but almost everybody heats the house and cooks with firewood.

Indeed, more than half of PM2.5 pollution comes from fuel combustion in industry and households, while road transport accounts merely for about 11 percent. But more than half of the road traffic contribution to particulate matter comes from abrasion, not from exhaust. The additional contribution from diesel exhaust to the total pollution is therefore in the single percent values. Though you have to keep in mind that these are average values, the percentages can be very different in specific locations. These numbers are for the European Union but they are probably similar in the United States and the UK.
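As a rough sanity check of the “single percent” statement, taking the numbers above at face value: road transport contributes about 11 percent, and less than half of that is exhaust, so

$$ 0.11 \times 0.5 \approx 5.5\,\% $$

is an upper bound for exhaust from all road vehicles, and the share from diesel exhaust alone lies somewhere below that.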

And of the fraction coming from diesel, only some share comes from passenger cars; the rest is trucks, which are almost exclusively diesel. Just how the breakdown between trucks and diesel passenger cars looks depends strongly on location.

Nevertheless, diesel exhaust is a significant contribution to air pollution, especially in cities where traffic is dense and air flow small. This is why many countries have passed regulation to force car manufacturers to make diesel cleaner.

Europeans have regularly updated their emission standards since 1992. The standards are called Euro 1, Euro 2, and so on, with the current one being Euro 6. The Euro 7 standard is expected for 2025. The idea is that only cars with certain standards are allowed into cities, though each city picks its own standard.

For example, London currently uses Euro 6, Brussels 5, and in Paris the rules change every couple of months and depend on the time of the day and just paying the fee may be less painful than figuring out what you’re supposed to do.

Basically these European standards limit the emissions of carbon dioxide, nitrogen oxides, and particulates, and some other things. (Table) The industry is getting increasingly better at adapting to these restrictions. As a consequence, new diesel cars pollute considerably less than those from one or two decades ago.

One of the most popular ways to make diesel cleaner is filtering the exhaust before it is released into the air. A common type of filter is the cordierite wall-flow filter, which you see in this image. They are very efficient and relatively inexpensive. These filters remove particles of size 100 nanometers and up.

The ones approved by the Environmental Protection Agency in the USA filter out at least 85 percent of particulates, though some reach filtering percentages in the upper 90s. When the filter is “full”, the accumulated soot is burned off by the engine itself. Remember that most of the particulates get created by incomplete combustion in the first place, so you can in principle burn them again.

However, a consequence of this is that some of the particulates simply become too small to be caught in the filter and eventually escape. Another downside is that some filters result in an increase of nitrogen oxide emissions when the filter is burned off. Still, the filters do take out a significant fraction of the particulates.

Another measure to reduce pollution is exhaust gas recirculation. This isn’t only used in diesel cars but also in gasoline cars and it works by recirculating a portion of the exhaust gas back to the engine cylinders. This dilutes the oxygen in the incoming air stream and brings down the peak temperature. Since nitrogen oxides are mostly produced at higher temperature, this has the effect of reducing their fraction in the exhaust. But this recirculation has the downside that with the drop of combustion temperature the car drives less efficiently.

These technologies have been around for decades, but since emission regulations have become more stringent, carmakers have pushed their development and integration forward. This worked so well that in 2017 an international team of researchers published a paper in Science magazine in which they claimed that modern gasoline cars produce more carbonaceous particulate matter than modern filter-equipped diesel cars.

What’s carbonaceous? That’s particles which contain carbon, and those make up about 50 percent of the particulates in the emissions. So not all of it but a decent fraction. In the paper they argue that whether gasoline or diesel cars are more polluting depends on what pollutant you look at, the age of the engine and whether it carries a filter or a catalytic converter.

I think what we learn from this is that being strict about environmental requirements and regulations seems to work out pretty well for diesel emissions, and the industry has proved capable of putting their engineers at work and finding solutions. Not all is good, but it’s getting better.

And this has all been very interesting but hasn’t really helped me make up my mind about what car to buy. So what should I do? Let me know what you think in the comments.