Saturday, May 21, 2022

The closest we have to a Theory of Everything

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]

In English they talk about a “Theory of Everything”. In German we talk about the “Weltformel”, the world-equation. I’ve always disliked the German expression. That’s because equations in and of themselves don’t tell you anything. Take for example the equation x=y. That may well be the world-equation, the question is just what’s x and what’s y. However, in physics we do have an equation that’s pretty damn close to a “world-equation”. It’s remarkably simple, looks like this, and it’s called the principle of least action. But what’s S? And what’s this squiggle? That’s what we’ll talk about today.

The principle of least action is an example of optimization where the solution you are looking for is “optimal” in some quantifiable way. Optimization principles are everywhere. For example, equilibrium economics optimizes the distribution of resources, at least that’s the idea. Natural selection optimizes the survival of offspring. If you shift around on your couch until you’re comfortable you are optimizing your comfort. What these examples have in common is that the optimization requires trial and error. The optimization we see in physics is different. It seems that nature doesn’t need trial and error. What happens is optimal right away, without trying out different options. And we can quantify just in which way it’s optimal.

I’ll start with a super simple example. Suppose a lonely rock flies through outer space, far away from any stars or planets, so there are no forces acting on the rock, no air friction, no gravity, nothing. Let’s say you know the rock goes through point A at a time we’ll call t_A and later through point B at time t_B. What path did the rock take to get from A to B?

Well, if no force is acting on the rock it must travel in a straight line with constant velocity, and there is only one straight line connecting the two points, and only one constant velocity that fits the duration. It’s easy to describe this particular path between the two points – it’s the shortest possible path. So the path which the rock takes is optimal in that it’s the shortest.

This is also the case for rays of light that bounce off a mirror. Suppose you know the ray goes from A to B and want to know which path it takes. You find the mirror image of point B behind the mirror, draw the shortest path from A to that image, and then reflect the segment behind the mirror back to the front, which doesn’t change the length of the path. The result is that the angle of incidence equals the angle of reflection, which you probably remember from middle school.
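As a small numerical sketch of this (all numbers are hypothetical, chosen just for illustration), one can minimize the total path length over possible reflection points and recover equal angles:

```python
import math

# Light goes from A = (0, 1) to a mirror along y = 0, then on to B = (2, 1).
# Minimizing the total path length over the reflection point (x, 0)
# reproduces "angle of incidence = angle of reflection".

a, b, d = 1.0, 1.0, 2.0  # heights of A and B above the mirror, horizontal gap

def path_length(x):
    # length A -> (x, 0) plus length (x, 0) -> B
    return math.hypot(x, a) + math.hypot(d - x, b)

# Crude minimization: scan finely over candidate reflection points.
x_best = min((i * d / 100000 for i in range(100001)), key=path_length)

# Tangents of the two angles, measured from the mirror's normal.
tan_in = x_best / a
tan_out = (d - x_best) / b
print(x_best, tan_in, tan_out)  # the two angles come out equal
```

A brute-force scan is used instead of a proper optimizer to keep the sketch self-contained; any one-dimensional minimizer would do.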

This “principle of the shortest path” goes back to the Greek mathematician Hero of Alexandria in the first century, so not exactly cutting edge science, and it doesn’t work for refraction in a medium, like for example water, because the angle at which a ray of light travels changes when it enters the medium. This means using the length to quantify how “optimal” a path is can’t be quite right.

In 1657, Pierre de Fermat figured out that in both cases the path which the ray of light takes from A to B is that which requires the least amount of time. If there’s no change of medium, then the speed of light doesn’t change and taking the least time means the same as taking the shortest path. So, reflection works as previously.

But if you have a change of medium, then the speed of light changes too. Let us use the previous example with a tank of water, and let us call speed of light in air c_1, and the speed of light in water c_2.

We already know that in either medium the light ray has to take a straight line, because that’s the fastest you can get from one point to another at constant speed. But you don’t know what’s the best point for the ray to enter the water so that the time to get from A to B is the shortest.

But that’s pretty straightforward to calculate. We give names to these distances and calculate the length of each path segment as a function of the point where the ray enters the water. Then we divide each length by the speed of light in the respective medium and add the two to get the total travel time.

Now we want to know which is the smallest possible time if we change the point where the ray enters the medium. So we treat this time as a function of x and calculate where it has a minimum, so where the first derivative with respect to x vanishes.

The result you get is this. And then you remember that those ratios with the square roots are the sines of the angles. Et voilà, Fermat may have said, this is the correct law of refraction. This is known as the principle of least time, or as Fermat’s principle, and it works for both reflection and refraction.
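The same calculation can be done numerically (again with hypothetical numbers): minimize the travel time over the entry point and check that the ratios sin(θ)/c come out equal on both sides, which is Snell’s law:

```python
import math

# Light travels from A = (0, 1) in air to B = (2, -1) in water, entering the
# water at some point (x, 0) on the surface. We minimize the travel time.

c1, c2 = 1.0, 0.75       # speed of light in air and in water (arbitrary units)
a, b, d = 1.0, 1.0, 2.0  # heights above/below the surface, horizontal distance

def travel_time(x):
    # time = (path length in air) / c1 + (path length in water) / c2
    return math.hypot(x, a) / c1 + math.hypot(d - x, b) / c2

# Crude minimization: scan finely over candidate entry points.
x_best = min((i * d / 100000 for i in range(100001)), key=travel_time)

# Snell's law: sin(theta_1)/c1 equals sin(theta_2)/c2 at the optimum.
sin1 = x_best / math.hypot(x_best, a)
sin2 = (d - x_best) / math.hypot(d - x_best, b)
print(sin1 / c1, sin2 / c2)  # the two ratios agree at the minimum
```

The quantity sin1/c1 − sin2/c2 is exactly the derivative of the travel time with respect to x, so it vanishes where the time is minimal.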

Let us pause here for a moment and appreciate how odd this is. The ray of light takes the path that requires the least amount of time. But how does the light know it will enter a medium before it gets there, so that it can pick the right place to change direction. It seems like the light needs to know something about the future. Crazy.

It gets crazier. Let us go back to the rock, but now we do something a little more interesting, namely throw the rock in a gravitational field. For simplicity let’s say the gravitational potential energy is just proportional to the height, which is a good approximation near the surface of the earth. Again I tell you the rock goes from point A at time t_A to point B at time t_B. In this case the principle of least time doesn’t give the right result.

But in the 18th century, the French mathematician Maupertuis figured out that the path which the rock takes is still optimal in some other sense. It’s just that we have to calculate something a little more difficult. We have to take the kinetic energy of the particle, subtract the potential energy, and integrate this difference over the time along the path of the particle.

This expression, the time-integral over the kinetic minus potential energy, is the “action” of the particle. I have no idea why it’s called that, and even less do I know why it’s usually abbreviated S, but that’s how it is. This action is the S in the equation that I showed at the very beginning.
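Written out, with m the mass of the rock, v its velocity, g the gravitational acceleration and h the height, the action is:

```latex
S \;=\; \int_{t_A}^{t_B} \left( E_{\rm kin} - E_{\rm pot} \right) \mathrm{d}t
  \;=\; \int_{t_A}^{t_B} \left( \tfrac{1}{2} m v^2 - m g h \right) \mathrm{d}t \,.
```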

The thing is now that the rock always takes the path for which the action has the smallest possible value. You see, to keep this integral small you can either try to make the kinetic energy small, which means keeping the velocity small, or you make the potential energy large, because that enters with a minus.

But remember you have to get from A to B in a fixed time. If you make the potential energy large, this means the particle has to go high up, but then it has a longer path to cover so the velocity needs to be high and that means the kinetic energy is high. If on the other hand the kinetic energy is low, then the potential energy doesn’t subtract much. So if you want to minimize the action you have to balance both against each other. Keep the kinetic energy small but make the potential energy large.

The path that minimizes the action turns out to be a parabola, as you probably already knew, but again note how weird this is. It’s not that the rock actually tries all possible paths. It just sets off and takes the best one on the first try, like it knows what’s coming before it gets there.
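One can check this balance numerically. The sketch below (hypothetical numbers, vertical motion only, since the horizontal motion just adds a path-independent constant) discretizes the action integral and compares the parabola against a path that stays low and one that flies too high:

```python
# Compare the action S = integral of (kinetic - potential energy) dt for a
# rock thrown up at t = 0 and caught at the same height at t = T.
m, g, T, N = 1.0, 9.81, 2.0, 2000   # mass, gravity, flight time, time steps
dt = T / N

def action(height):
    # Discretized time-integral of (1/2 m v^2 - m g h) along the path h(t).
    S = 0.0
    for i in range(N):
        t = i * dt
        v = (height(t + dt) - height(t)) / dt   # finite-difference velocity
        h = 0.5 * (height(t) + height(t + dt))  # midpoint height
        S += (0.5 * m * v * v - m * g * h) * dt
    return S

def parabola(t):   # the classical path: acceleration -g, starts and ends at 0
    return 0.5 * g * t * (T - t)

def straight(t):   # stay on the ground the whole time
    return 0.0

def overshoot(t):  # fly twice as high as the classical path
    return g * t * (T - t)

print(action(parabola), action(straight), action(overshoot))
# The parabola gives the smallest action of the three.
```

Staying low keeps the kinetic energy at zero but gains nothing from the potential term; flying too high makes the potential term large but the kinetic term eats up the gain. The parabola is the balance between the two.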

What’s this squiggle in the principle of least action? Well, if we want to calculate which path is the optimal path, we do this similarly to how we calculate the optimum of a curve. At the optimum of a curve, the first derivative of the function with respect to its variable vanishes. To find the optimal path, we instead take the derivative of the action with respect to the path and again ask where it vanishes. And this is what the squiggle means. It’s a sloppy way to say: take the derivative with respect to the path. That derivative has to vanish, which means the same as that the action is optimal, and it is usually a minimum, hence the principle of least action.
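In this notation, the equation from the very beginning is just the statement that the variation of the action vanishes:

```latex
\delta S = 0 \,.
```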

Okay, you may say but you don’t care all that much about paths of rocks. Alright, but here’s the thing. If we leave aside quantum mechanics for a moment, there’s an action for everything. For point particles and rocks and arrows and that stuff, the action is the integral over the kinetic energy minus potential energy.

But there is also an action that gives you electrodynamics. And there’s an action that gives you general relativity. In each of these cases, if you ask what the system must do to give you the least action, then that’s what actually happens in nature. You can also get the principle of least time and of the shortest path back out of the least action in special cases.

And yes, the principle of least action really uses an integral into the future. How do we explain that?

Well, it turns out that there is another way to express the principle of least action. One can mathematically show that the path which minimizes the action is the path which fulfils a set of differential equations called the Euler-Lagrange equations.

For example, the Euler-Lagrange equations of the rock example just give you Newton’s second law. The Euler-Lagrange equations for electrodynamics are Maxwell’s equations, and the Euler-Lagrange equations for general relativity are Einstein’s field equations. And in these equations, you don’t need to know anything about the future. So you can make this future dependence go away.
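For the rock, this is a short check. With the Lagrangian L = kinetic minus potential energy, written here for one dimension with a potential V(x), the Euler-Lagrange equation reads:

```latex
\frac{\mathrm{d}}{\mathrm{d}t} \frac{\partial L}{\partial \dot{x}}
  - \frac{\partial L}{\partial x} = 0 \,,
\qquad
L = \tfrac{1}{2} m \dot{x}^2 - V(x)
\quad\Longrightarrow\quad
m \ddot{x} = - \frac{\partial V}{\partial x} = F \,,
```

which is Newton’s second law: force equals mass times acceleration, with no knowledge of the future required.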

What about quantum mechanics? In quantum mechanics, the principle of least action works somewhat differently. In this case a particle doesn’t just take one optimal path. It actually takes all paths. Each of these paths has its own action. It’s not only that the particle takes all paths, it also goes to all possible endpoints. But if you eventually measure the particle, the wave-function “collapses”, and the particle is only in one point. This means that these paths really only tell you the probability for the particle to go one way or another. You calculate the probability for the particle to go to one point by summing over all paths that go there.

This interpretation of quantum mechanics was introduced by Richard Feynman and is therefore now called the Feynman path integral. What happens with the strange dependence on the future in the Feynman path integral? Well, technically it’s there in the mathematics. But to do the calculation you don’t need to know what happens in the future, because the particle goes to all points anyway.

Except, hmm, it doesn’t. In reality it goes to only one point. So maybe the reason we need the measurement postulate is that we don’t take this dependence on the future which we have in the path integral seriously enough.
