Saturday, February 29, 2020

14 Years BackRe(Action)

[Image: Scott McLeod/Flickr]
14 years ago, I was a postdoc in Santa Barbara, in a tiny corner office where the windows wouldn't open, in a building that slightly swayed each time one of the frequent mini-earthquakes shook up California. I had just published my first blogpost. It happened to be about the possibility that the Large Hadron Collider, which was not yet in operation, would produce tiny black holes and inadvertently kill all of us. The topic would soon attract attention in the media and thereby mark my entry into the world of science communication. I was well prepared: Black holes at the LHC were the topic of my PhD thesis.

A few months later, I got married.

Later that same year, Lee Smolin's book "The Trouble With Physics" was published, coincidentally at almost the same time I moved to Canada and started my new position at Perimeter Institute. I had read an early version of the manuscript and published one of the first online reviews. Peter Woit's book "Not Even Wrong" appeared at almost the same time and kicked off what later became known as "The String Wars", though I've always found the rather militant term somewhat inappropriate.

Time marched on and I kept writing, through my move to Sweden, my first pregnancy and the following miscarriage, the second pregnancy, the twins' birth, parental leave, my suffering through 5 years of a 3000 km commute while trying to raise two kids, and, in late 2015, my move back to Germany. Then, in 2018, the publication of my first book.

The loyal readers of this blog will have noticed that in the past year I have shifted my weight from Blogger to YouTube. The reason is that, with the way search engine algorithms and the blogosphere have evolved, it has become basically impossible to attract new audiences to a blog. Here on Blogger, I feel rather stuck on the topics I have originally written about, mostly quantum gravity and particle physics, while my interests have meanwhile drifted more towards astrophysics, quantum foundations, and the philosophy of physics. YouTube's algorithm is certainly not perfect, but it serves content to users who may be interested in the topic of a video, regardless of whether they've previously heard of me.

I have to admit that personally I still prefer writing over videos. Not only because it's less time-consuming, but also because I don't particularly like either my voice or my face. But then, the average number of people who watch my videos has quickly surpassed the number of those who typically read my blog, so I guess I am doing okay.

On this occasion I want to thank all of you for spending some time with me, for your feedback and comments and encouragement. I am especially grateful to those of you who have on occasion sent a donation my way. I am not entirely sure where this blog will be going in the future, but stay around and you will find out. I promise it won't be boring.

Friday, February 28, 2020

Quantum Gravity in the Lab? The Hype Is On.


Quanta Magazine has an article by Philip Ball titled “Wormholes Reveal a Way to Manipulate Black Hole Information in the Lab”. It’s about using quantum simulations to study the behavior of black holes in Anti-de Sitter space, that is, a space with a negative cosmological constant. A quantum simulation is a collection of particles with specifically designed interactions that can mimic the behavior of another system. To briefly remind you, we do not live in Anti-de Sitter space. For all we know, the cosmological constant in our universe is positive. And no, the two cases are not remotely similar.

It’s an interesting topic in principle, but unfortunately the article by Ball is full of statements that gloss over this not very subtle fact that we do not live in Anti-de Sitter space. We can read there for example:
“In principle, researchers could construct systems entirely equivalent to wormhole-connected black holes by entangling quantum circuits in the right way and teleporting qubits between them.”
The correct statement would be:
“Researchers could construct systems whose governing equations are in certain limits equivalent to those governing black holes in a universe we do not inhabit.”
Further, needless to say, a collection of ions in the laboratory is not “entirely equivalent” to a black hole. For starters, that is because the ions are made of other particles which are yet again made of other particles, none of which has any correspondence in the black hole analogy. Also, in case you’ve forgotten, we do not live in Anti-de Sitter space.

Why do physicists even study black holes in Anti-de Sitter space? To make a long story short: Because they can. They can, both because they have an idea of how the math works and because they can get paid for it.

Now, there is nothing wrong with using methods obtained from the AdS/CFT correspondence to calculate the behavior of many-particle systems. Indeed, I think that’s a neat idea. However, it is patently false to raise the impression that this tells us anything about quantum gravity, where by “quantum gravity” I mean the theory that resolves the inconsistency between the Standard Model of particle physics and General Relativity in our universe. That is, a theory that actually describes nature. We have no reason whatsoever to think that the AdS/CFT correspondence tells us something about quantum gravity in our universe.

As I explained in this earlier post, it is highly implausible that the results from AdS carry over to flat space or to space with a positive cosmological constant because the limit is not continuous. You can of course simply take the limit ignoring its convergence properties, but then the theory you get has no obvious relation to General Relativity.

Let us have a look at the paper behind the article. We can read there in the introduction:
“In the quest to understand the quantum nature of spacetime and gravity, a key difficulty is the lack of contact with experiment. Since gravity is so weak, directly probing quantum gravity means going to experimentally infeasible energy scales.”
This is wrong and it demonstrates that the authors are not familiar with the phenomenology of quantum gravity. Large deviations from the semi-classical limit can occur at small energy scales. The reason is, rather trivially, that large masses in quantum superpositions should have gravitational fields in quantum superpositions. No large energies necessary for that.

If you could, for example, put a billiard ball into a superposition of locations, you should be able to measure what happens to its gravitational field. This is infeasible, but not because it involves high energies. It’s infeasible because decoherence kicks in too quickly to measure anything.

Here is the rest of the first paragraph of the paper. I have in bold face added corrections that any reviewer should have insisted on:
“However, a consequence of the holographic principle [3, 4] and its concrete realization in the AdS/CFT correspondence [5–7] (see also [8]) is that non-gravitational systems with sufficient entanglement may exhibit phenomena characteristic of quantum gravity in a space with a negative cosmological constant. This suggests that we may be able to use table-top physics experiments to indirectly probe quantum gravity in universes that we do not inhabit. Indeed, the technology for the control of complex quantum many-body systems is advancing rapidly, and we appear to be at the dawn of a new era in physics—the study of quantum gravity in the lab, except that, by the methods described in this paper, we cannot actually test quantum gravity in our universe. For this, other experiments are needed, which we will however not even mention.

“The purpose of this paper is to discuss one way in which quantum gravity can make contact with experiment, if you, like us, insist on studying quantum gravity in fictional universes that for all we know do not exist.”

I pointed out that these black holes that string theorists deal with have nothing to do with real black holes in an article I wrote for Quanta Magazine last year. It was also the last article I wrote for them.

Thursday, February 20, 2020

The 10 Most Important Physics Effects

Today I have a count-down of the 10 most important effects in physics that you should all know about.


10. The Doppler Effect

The Doppler effect is the change in frequency of a wave when the source moves relative to the receiver. If the source is approaching, the wavelength appears shorter and the frequency higher. If the source is moving away, the wavelength appears longer and the frequency lower.

The most common example of the Doppler effect is that of an approaching ambulance, where the pitch of the siren is higher when the ambulance moves towards you than when it moves away from you.

But the Doppler effect does not only happen for sound waves; it also happens to light, which is why it’s enormously important in astrophysics. For light, the frequency determines the color, so the color of an approaching object is shifted to the blue and that of an object moving away from you is shifted to the red. Because of this, we can for example calculate our velocity relative to the cosmic microwave background.
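For reference, the textbook formulas: for a sound source moving with speed v straight towards you (upper sign) or away from you (lower sign), and for light from a source receding from you at speed v, the observed frequencies are

f_{\rm obs} = f_{\rm src}\,\frac{c_s}{c_s \mp v} \quad \text{(sound)}, \qquad
f_{\rm obs} = f_{\rm src}\,\sqrt{\frac{1 - v/c}{1 + v/c}} \quad \text{(light)},

where c_s is the speed of sound and c the speed of light. The second formula already includes the time dilation of special relativity, which is why it depends only on the relative velocity of source and observer; for light there is no medium.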

The Doppler effect is named after the Austrian physicist Christian Doppler and has nothing to do with the German word Doppelgänger.

9. The Butterfly Effect

Even a tiny change, like the flap of a butterfly’s wings, can make a big difference for the weather next Sunday. This is the butterfly effect as you have probably heard of it. But Edward Lorenz actually meant something much more radical when he spoke of the butterfly effect. He meant that for some non-linear systems you can only make predictions for a limited amount of time, even if you can measure the tiniest perturbations to arbitrary accuracy. I explained this in more detail in my earlier video.
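If you want to see this sensitivity for yourself, here is a minimal Python sketch. It integrates the standard Lorenz equations with the textbook parameter values for two starting points that differ by one part in a hundred million; the step size, the integration time, and the size of that nudge are just illustrative choices.

import numpy as np

# Lorenz '63 system with the classic chaotic parameters.
SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0

def lorenz(state):
    x, y, z = state
    return np.array([SIGMA * (y - x),
                     x * (RHO - z) - y,
                     x * y - BETA * z])

def rk4_step(state, dt):
    # One fourth-order Runge-Kutta step.
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.001, 40000                 # integrate up to t = 40
a = np.array([1.0, 1.0, 1.0])            # reference trajectory
b = a + np.array([1e-8, 0.0, 0.0])       # same, nudged by a "butterfly flap"

for i in range(1, steps + 1):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    if i % 5000 == 0:                    # report every 5 time units
        print(f"t = {i * dt:5.1f}   separation = {np.linalg.norm(a - b):.3e}")

After a few tens of time units the two trajectories are about as far apart as the attractor is wide, no matter how tiny the initial nudge was.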

8. The Meissner-Ochsenfeld Effect

The Meissner-Ochsenfeld effect is the expulsion of magnetic fields from the interior of a superconductor. It was discovered by Walther Meissner and his postdoc Robert Ochsenfeld in 1933. Thanks to this effect, if you try to place a superconductor on a magnet, it will hover above the magnet because the magnetic field lines cannot enter the superconductor. I assure you that this has absolutely nothing to do with Yogic flying.

7. The Aharonov–Bohm Effect

Okay, I admit this is not a particularly well-known effect, but it should be. The Aharonov-Bohm effect says that the wave-function of a charged particle in an electromagnetic field obtains a phase shift from the potential of the background field.

I know this sounds abstract, but the relevant point is that it’s the potential that causes the phase, not the field. In electrodynamics, the potential itself is normally not observable. But this phase shift in the Aharonov-Bohm effect can be, and has been, observed in interference patterns. And this tells us that the potential is not merely a mathematical tool. Before the Aharonov–Bohm effect one could reasonably question the physical reality of the potential because it was not observable.
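For the record, the phase shift is

\Delta\varphi = \frac{q}{\hbar}\oint \vec{A}\cdot d\vec{l} = \frac{q\,\Phi}{\hbar},

where q is the particle's charge, the integral of the vector potential runs along the closed path formed by the two interfering beams, and Φ is the magnetic flux enclosed by that path. The flux can be nonzero even though the magnetic field vanishes everywhere along the path itself, which is what makes the effect so remarkable.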

6. The Tennis Racket Effect

If you throw a three-dimensional object with three different moments of inertia and give it a spin, then the spin around the shortest and the longest axis will be stable, but the spin around the intermediate, third axis will not be. The typical example for such a spinning object is a tennis racket, hence the name. It’s also known as the intermediate axis theorem or the Dzhanibekov effect. You see a beautiful illustration of the instability in this little clip from the International Space Station.
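The instability can be read off Euler's equations for torque-free rotation. With principal moments of inertia I_1 < I_2 < I_3 and angular velocity components ω_1, ω_2, ω_3 along the corresponding axes, they are

I_1\,\dot\omega_1 = (I_2 - I_3)\,\omega_2\,\omega_3, \qquad
I_2\,\dot\omega_2 = (I_3 - I_1)\,\omega_3\,\omega_1, \qquad
I_3\,\dot\omega_3 = (I_1 - I_2)\,\omega_1\,\omega_2.

Linearizing around rotation about the first or the third axis gives small oscillations, but around the second, intermediate axis the perturbations grow exponentially.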

5. The Hall Effect

If you bring a current-carrying conducting plate into a magnetic field, then the magnetic field will affect the motion of the electrons in the plate. In particular, if the plate is orthogonal to the magnetic field lines, a voltage builds up between opposing ends of the plate, transverse to the current, and this voltage can be measured to determine the strength of the magnetic field. This effect is named after Edwin Hall.

If the plate is very thin, the temperature very low, and the magnetic field very strong, you can also observe that the Hall conductance changes in discrete jumps, which is known as the quantum Hall effect.
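In formulas: for a plate of thickness t that carries a current I in a magnetic field B perpendicular to the plate, the Hall voltage is

V_H = \frac{I\,B}{n\,q\,t},

where n is the density of charge carriers and q their charge. In the quantum Hall effect, the Hall conductance takes the quantized values

\sigma_{xy} = \nu\,\frac{e^2}{h},

where ν is an integer, or, in the fractional quantum Hall effect, a simple fraction.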

4. The Hawking Effect

Stephen Hawking showed in the early 1970s that black holes emit thermal radiation with a temperature that is inversely proportional to the black hole’s mass. This Hawking effect is a consequence of the relativity of the particle number. An observer falling into a black hole would not measure any particles and think the black hole is surrounded by vacuum. But an observer far away from the black hole would think the horizon is surrounded by particles. This can happen because in general relativity, what we mean by a particle depends on the motion of the observer, just like the passage of time does.

A closely related effect is the Unruh effect named after Bill Unruh, which says that an accelerated observer in flat space will measure a thermal distribution of particles with a temperature that depends on the acceleration. Again that can happen because the accelerated observer’s particles are not the same as the particles of an observer at rest.
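For completeness, the two temperatures are

T_{\rm H} = \frac{\hbar c^3}{8\pi G M k_B}, \qquad
T_{\rm U} = \frac{\hbar a}{2\pi c k_B},

where M is the mass of the black hole and a is the observer's acceleration. For a black hole with the mass of the sun, the Hawking temperature comes out to roughly 6 × 10⁻⁸ Kelvin, far below the temperature of the cosmic microwave background, which is why this radiation has not been observed.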

3. The Photoelectric Effect

When light falls on a plate of metal, it can kick electrons out of the metal. This is called the “photoelectric effect”. The surprising thing about this is that the frequency of the light needs to be above a certain threshold. Just what the threshold is depends on the material, but if the frequency is below the threshold, it does not matter how intense the light is, it will not kick out electrons.

The photoelectric effect was explained in 1905 by Albert Einstein who correctly concluded that it means the light must be made of quanta whose energy is proportional to the frequency of the light.
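Einstein's relation is simply

E_{\rm max} = h\nu - W,

where E_max is the maximal kinetic energy of the ejected electrons, ν the frequency of the light, h Planck's constant, and W the work function of the metal, that is, the minimal energy needed to free an electron. If hν is smaller than W, no electrons come out, no matter how intense the light.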

2. The Casimir Effect

Everybody knows that two metal plates will attract each other if one plate is positively charged and the other one negatively charged. But did you know the plates also attract each other if they are uncharged? Yes, they do!

This is the Casimir effect, named after Hendrik Casimir. It is created by quantum fluctuations that create a pressure even in vacuum. This pressure is lower between the plates than outside of them, so that the two plates are pushed towards each other. However, the force from the Casimir effect is very weak and can be measured only at very short distances.
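For two ideal, parallel plates separated by a distance d, the attractive force per area is

\frac{F}{A} = -\frac{\pi^2 \hbar c}{240\,d^4}.

The steep 1/d⁴ dependence is the reason the effect only becomes measurable at separations of roughly a micrometer and below.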

1. The Tunnel Effect

Definitely my favorite effect. Quantum effects allow a particle that is trapped in a potential to escape. This would not be possible without quantum effects because the particle just does not have enough energy to escape. However, in quantum mechanics the wave-function of the particle can leak out of the potential, and this means that there is a small, but nonzero, probability that a quantum particle can do the seemingly impossible.
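For a rectangular barrier of height V and width L, and a particle with mass m and energy E below V, the tunneling probability is approximately

T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V - E)}}{\hbar},

valid when the barrier is thick compared to 1/κ. The exponential makes the probability drop extremely fast with the width and height of the barrier, which is why you will never see a billiard ball tunnel through a wall.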

Saturday, February 15, 2020

The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop

On my recent visit to Great Britain (the first one post-Brexit) I had the pleasure of talking to Dorothy Bishop. Bishop is Professor of Psychology at the University of Oxford and has been a leading force in combating the reproducibility crisis in her and other disciplines. You find her on twitter under the handle @deevybee. The comment for Nature magazine which I mention in the video is here.

Monday, February 10, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 2.” by Tim Palmer

[This is the second part of Tim’s guest contribution. The first part is here.]



In this second part of my guest post, I want to discuss how the concepts of undecidability and uncomputability can lead to a novel interpretation of Bell’s famous theorem. This theorem states that under seemingly reasonable conditions, a deterministic theory of quantum physics – something Einstein believed in passionately – must satisfy a certain inequality which experiment shows is violated.

These reasonable conditions, broadly speaking, describe the concepts of causality and freedom to choose experimental parameters. The issue I want to discuss is whether the way these conditions are formulated mathematically in Bell’s Theorem actually captures the physics that supposedly underpins them.

The discussion here and in the previous post summarises the essay I recently submitted to the FQXi essay competition on undecidability and uncomputability.

For many, the notion that we have some freedom in our actions and decisions seems irrefutable. But how would we explain this to an alien, or indeed a computer, for whom free will is a meaningless concept? Perhaps we might say that we are free because we could have done otherwise. This invokes the notion of a counterfactual world: even though we in fact did X, we could have done Y.

Counterfactuals also play an important role in describing the notion of causality. Imagine throwing a stone at a glass window. Was the smashed glass caused by my throwing the stone? Yes, I might say, because if I hadn’t thrown the stone, the window wouldn’t have broken.

However, there is an alternative way to describe these notions of free will and causality without invoking counterfactual worlds. I can just as well say that free will denotes an absence of constraints that would otherwise prevent me from doing what I want to do. Or I can use Newton’s laws of motion to determine that a stone with a certain mass, projected at a certain velocity, will hit the window with a momentum guaranteed to shatter the glass. These latter descriptions make no reference to counterfactuals at all; instead the descriptions are based on processes occurring in space-time (e.g. associated with the neurons of my brain or projectiles in physical space).

What has all this got to do with Bell’s Theorem? I mentioned above the need for a given theory to satisfy “certain conditions” in order for it to be constrained by Bell’s inequality (and hence be inconsistent with experiment). One of these conditions, the one linked to free will, is called Statistical Independence. Theories which violate this condition are called Superdeterministic.

Superdeterministic theories are typically excoriated by quantum foundations experts, not least because the Statistical Independence condition appears to underpin scientific methodology in general.

For example, consider a source that emits 1000 spin-1/2 particles. Suppose you measure the spin of 500 of them along one direction and 500 of them along a different direction. Statistical Independence guarantees that the measurement statistics (e.g. the frequency of spin-up measurements) will not depend on the particular way in which the experimenter chooses to partition the full ensemble of 1000 particles into the two sub-ensembles of 500 particles.

If you violate Statistical Independence, the experts say, you are effectively invoking some conspiratorial prescient daemon who could, unknown to the experimenter, preselect particles for the particular measurements the experimenter chooses to make - or even worse perhaps, could subvert the mind of the experimenter when deciding which type of measurement to perform on a given particle. Effectively, violating Statistical Independence turns experimenters into mindless zombies! No wonder experimentalists hate Superdeterministic theories of quantum physics!!

However, the experts miss a subtle but crucial point here: whilst imposing Statistical Independence guarantees that real-world sub-ensembles are statistically equivalent, violating Statistical Independence does not guarantee that real-world sub-ensembles are not statistically equivalent. In particular it is possible to violate Statistical Independence in such a way that it is only sub-ensembles of particles subject to certain counterfactual measurements that may be statistically inequivalent to the corresponding sub-ensembles with real-world measurements.

In the example above, a sub-ensemble of particles subject to a counterfactual measurement would be associated with the first sub-ensemble of 500 particles subject to the measurement direction applied to the second sub-ensemble of 500 particles. It is possible to violate Statistical Independence when comparing this counterfactual sub-ensemble with the real-world equivalent, without violating the statistical equivalence of the two corresponding sub-ensembles measured along their real-world directions.

However, for this idea to make any theoretical sense at all, there has to be some mathematical basis for asserting that sub-ensembles with real-world measurements can be different to sub-ensembles with counterfactual-world measurements. This is where uncomputable fractal attractors play a key role.

It is worth keeping an example of a fractal attractor in mind here. The Lorenz fractal attractor, discussed in my first post, is a geometric representation in state space of fluid motion in Newtonian space-time.

The Lorenz attractor.
[Image Credits: Markus Fritzsch.]

As I explained in my first post, the attractor is uncomputable in the sense that there is no algorithm which can decide whether a given point in state space lies on the attractor (in exactly the same sense that, as Turing discovered, there is no algorithm for deciding whether a given computer program will halt for given input data). However, as I lay out in my essay, the differential equations for the fluid motion in space-time associated with the Lorenz attractor are themselves solvable by algorithm to arbitrary accuracy and hence are computable. This dichotomy (between state space and space-time) is extremely important to bear in mind below.

With this in mind, suppose the universe itself evolves on some uncomputable fractal subset of state space, such that the corresponding evolution equations for physics in space-time are computable. In such a model, Statistical Independence will be violated for sub-ensembles if the corresponding counterfactual measurements take states of the universe off the fractal subset (since such counterfactual states have probability of occurrence equal to zero by definition).

In the model I have developed this always occurs when considering counterfactual measurements such as those in Bell’s Theorem. (This is a nontrivial result and is the consequence of number-theoretic properties of trigonometric functions.) Importantly, in this theory, Statistical Independence is never violated when comparing two sub-ensembles subject to real-world measurements such as occurs in analysing Bell’s Theorem.

This is all a bit mind numbing, I do admit. However, the bottom line is that I believe that the mathematical definitions of free choice and causality used to understand quantum entanglement are much too general – in particular they admit counterfactual worlds as physical in a completely unconstrained way. I have proposed alternative definitions of free choice and causality which strongly constrain counterfactual states (essentially they must lie on the fractal subset in state space), whilst leaving untouched descriptions of free choice and causality based only on space-time processes. (For the experts, in the classical limit of this theory, Statistical Independence is not violated for any counterfactual states.)

With these alternative definitions, it is possible to violate Bell’s inequality in a deterministic theory which respects free choice and local causality, in exactly the way it is violated in quantum mechanics. Einstein may have been right after all!

If we can explain entanglement deterministically and causally, then synthesising quantum and gravitational physics may have become easier. Indeed, it is through such synthesis that experimental tests of my model may eventually come.

In conclusion, I believe that the uncomputable fractal attractors of chaotic systems may provide a key geometric ingredient needed to unify our theories of physics.

My thanks to Sabine for allowing me the space on her blog to express these points of view.

Saturday, February 08, 2020

Philosophers should talk more about climate change. Yes, philosophers.


I never cease to be shocked – shocked! – by how many scientists don’t know how science works and, worse, don’t seem to care about it. Most of those I have to deal with still think Popper was right when he claimed falsifiability is both necessary and sufficient to make a theory scientific, even though this position has logical consequences they’d strongly object to.

Trouble is, if falsifiability was all it took, then arbitrary statements about the future would be scientific. I should, for example, be able to publish a paper predicting that tomorrow the sky will be pink and next Wednesday my cat will speak French. That’s totally falsifiable, yet I hope we all agree that if we’d let such nonsense pass as scientific, science would be entirely useless. I don’t even have a cat.

As the contemporary philosopher Larry Laudan politely put it, Popper’s idea of telling science from non-science by falsifiability “has the untoward consequence of countenancing as `scientific’ every crank claim which makes ascertainably false assertions.” Which is why the world’s cranks love Popper.

But you are not a crank, oh no, not you. And so you surely know that almost all of today’s philosophers of science agree that falsification is not a sufficient criterion of demarcation (though they disagree on whether it is necessary). Luckily, you don’t need to know anything about these philosophers to understand today’s post because I will not attempt to solve the demarcation problem (which, for the record, I don’t think is a philosophical question). I merely want to clarify just when it is scientifically justified to amend a theory whose predictions ran into tension with new data. And the only thing you need to know to understand this is that science cannot work without Occam’s razor.

Occam’s razor tells you that of two theories which describe nature equally well you should take the simpler one. Roughly speaking it means you must discard superfluous assumptions. Occam’s razor is important because without it we would be allowed to add all kinds of unnecessary clutter to a theory just because we like it. We would be permitted, for example, to add the assumption “all particles were made by god” to the standard model of particle physics. You see right away how this isn’t going well for science.

Now, the phrases “describe nature equally well” and “take the simpler one” are somewhat vague. To make this prescription operationally useful you’d have to quantify just what they mean by suitable statistical measures. We can then quibble about just which statistical measure is the best, but that’s somewhat beside the point here, so let me instead come back to the relevance of Occam’s razor.

We just saw that it’s unscientific to make assumptions which are unnecessary to explain observations and don’t make a theory any simpler. But physicists get this wrong all the time and some have made a business out of getting it wrong. They invent particles which make theories more complicated and are of no help to explain existing data. They claim this is science because these theories are falsifiable. But the new particles were unnecessary in the first place, so their ideas are dead on arrival, killed by Occam’s razor.

If you still have trouble seeing why adding unnecessary details to established theories is unsound scientific methodology, imagine that scientists in other disciplines proceeded the way particle physicists do. We’d have biologists writing papers about flying pigs and then holding conferences debating how flying pigs poop because, who knows, we might discover flying pigs tomorrow. Sounds ridiculous? Well, it is ridiculous. But that’s the same “scientific methodology” which has become common in the foundations of physics. The only difference between elaborating on flying pigs and supersymmetric particles is the amount of mathematics. And math certainly comes in handy for particle physicists because it prevents mere mortals from understanding just what the physicists are up to.

But I am not telling you this to bitch about supersymmetry; that would be beating a dead horse. I am telling you this because I have recently had to deal with a lot of climate change deniers (thanks so much, Tim). And many of these deniers, believe it or not, think I must be a denier too because, drums please, I am an outspoken critic of inventing superfluous particles.

Huh, you say. I hear you. It took me a while to figure out what’s with these people, but I believe I now understand where they’re coming from.

You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes, it’s exactly the same thing that all these physicists do each time their hypothetical particles are not observed! They just fiddle with the parameters of the theory to evade experimental constraints and to keep their pet theories alive. But Popper already said you shouldn’t do that. Then someone yells “Epicycles!” And so, the deniers conclude, climate scientists are as wrong as particle physicists and clearly one shouldn’t listen to either.

But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do.

The more and the better data you have, the higher the demands on your theory. Sometimes this means you actually need a new theory. Sometimes you have to adjust one or the other parameter. Sometimes you find an actual mistake and have to correct it. But more often than not it just means you neglected something that better measurements are sensitive to and you must add details to your theory. And this is perfectly fine as long as adding details results in a model that explains the data better than before, and does so not just because you now have more parameters. Again, there are statistical measures to quantify in which cases adding parameters actually makes a better fit to data.
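To give a concrete example of such a measure: the Akaike Information Criterion (AIC) penalizes every additional fit parameter, so a model with more parameters only wins if it improves the fit by more than the penalty. Here is a minimal Python sketch; the data, the noise level, and the two polynomial degrees are made up purely for illustration, and the AIC is written in its common Gaussian-error form, up to a constant that is the same for both models.

import numpy as np

rng = np.random.default_rng(0)

# Fake data: a straight line plus Gaussian noise.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 1.0, x.size)

def aic(n_params, rss, n_data):
    # Akaike Information Criterion for Gaussian errors,
    # up to an additive constant common to all models.
    return n_data * np.log(rss / n_data) + 2 * n_params

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {degree}: RSS = {rss:7.2f}, AIC = {aic(degree + 1, rss, x.size):7.2f}")

Typically the degree-5 polynomial has the smaller residual but the larger AIC, which is the quantitative way of saying that its extra parameters are not earning their keep.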

Indeed, adding epicycles to make the geocentric model of the solar system fit with observations was entirely proper scientific methodology. It was correcting a hypothesis that ran into conflict with increasingly better observations. Astronomers of the time could have proceeded this way until they noticed that there is a simpler way to calculate the same curves, namely by using elliptical motions around the sun rather than cycles upon cycles around the Earth. Of course this is not what historically happened, but epicycles in and by themselves are not unscientific, they’re merely parametrically clumsy.

What scientists should not do, however, is to adjust details of a theory that were unnecessary in the first place. Kepler for example also thought that the planets play melodies on their orbits around the sun, an idea that was rightfully abandoned because it explains nothing.

To name another example, adding dark matter and dark energy to the cosmological standard model in order to explain observations is sound scientific practice. These are both simple explanations that vastly improve the fit of the theory to observation. What is not sound scientific methodology is then making these theories more complicated than they need to be, e.g. by replacing dark energy with complicated scalar fields even though there is no observation that calls for it, or by inventing details about the particles that make up dark matter even though these details are irrelevant to fit existing data.

But let me come back to the climate change deniers. You may call me naïve, and I’ll take that, but I believe most of these people are genuinely confused about how science works. It’s of little use to throw evidence at people who don’t understand how scientists make evidence-based predictions. When it comes to climate change, therefore, I think we would all benefit if philosophers of science were given more airtime.

Thursday, February 06, 2020

Ivory Tower [I've been singing again]

I caught a cold and didn't get around to recording a new physics video this week. Instead I finished a song that I wrote some weeks ago. Enjoy!

Monday, February 03, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 1.” by Tim Palmer

[Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.]


[Screenshot from Tim’s public lecture at Perimeter Institute]


Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other.

The difficulty of combining general relativity and quantum theory into a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it.

Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality.

Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates.

So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post.

To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt. This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem.
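Turing's argument fits into a few lines of Python. To be clear, this is purely illustrative: the function halts below is the hypothetical decider whose existence is assumed only so that it can be contradicted; no such function can actually be written.

def halts(program, data) -> bool:
    """Hypothetical decider: True if program(data) eventually halts."""
    raise NotImplementedError("No such algorithm exists; that is Turing's point.")

def contrary(program):
    # Do the opposite of whatever 'halts' predicts about program(program).
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to run forever, so halt immediately

# Feeding 'contrary' to itself produces the contradiction:
# if halts(contrary, contrary) returns True, then contrary(contrary) loops forever;
# if it returns False, then contrary(contrary) halts. Either way, 'halts' is wrong.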

Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that there is a class of chaotic systems where, no matter what initial condition you start from, the state eventually (technically after an infinite time) evolves onto a fractal subset of state space, sometimes known as a fractal attractor.

One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor A of a chaotic system is uncomputable and the proposition “x belongs to A” is algorithmically undecidable.
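For concreteness, one common way to assign a dimension to such a set is the box-counting dimension,

D = \lim_{\epsilon \to 0} \frac{\ln N(\epsilon)}{\ln(1/\epsilon)},

where N(ε) is the number of boxes of side length ε needed to cover the set. For a line this gives 1, for a surface 2, but for the Lorenz attractor it gives a non-integer value slightly above 2 (numerical estimates put it at about 2.06).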

How does this help unify physics?

Firstly, defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate-independent, and hence more relativistic, way to characterise chaos than defining it in terms of the exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory.

That was easy! The rest is not so easy which is why I need two guest posts and not one!

When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos.

To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting).

Fig 1: Evolution of a contour of probability, based on ensembles of integrations of the Lorenz equations, is shown evolving in state space for different initial conditions, with the Lorenz attractor as background. 

In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between.

Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time.

The remarkable thing is that the Liouville equation is formally very similar to the so-called von Neumann form of the Schrödinger equation – too similar, in my view, for this to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generates such probabilities, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities.
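To make the comparison concrete: for a deterministic flow dx/dt = F(x) in state space, with F nonlinear as in the Lorenz equations, the probability density ρ obeys the (generalized) Liouville equation

\frac{\partial \rho}{\partial t} + \nabla\cdot\left(\rho\,F\right) = 0,

while in quantum mechanics the density matrix obeys the von Neumann equation

i\hbar\,\frac{\partial \hat\rho}{\partial t} = [\hat H, \hat\rho].

Both equations are linear in ρ, no matter how nonlinear the underlying dynamics is.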

However, as I wrote above, in order to satisfy Bell’s theorem, it would appear that, being deterministic, a chaotic model will have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable uncomputable properties of fractal attractors provide a novel route to allow us to reassess this conclusion. I will explain how this works in the second part of this post.

Sunday, February 02, 2020

Does nature have a minimal length?

Molecules are made of atoms. Atomic nuclei are made of neutrons and protons. And the neutrons and protons are made of quarks and gluons. Many physicists think that this is not the end of the story, but that quarks and gluons are made of even smaller things, for example the tiny vibrating strings that string theory is all about. But then what? Are strings made of smaller things again? Or is there a smallest scale beyond which nature just does not have any further structure? Does nature have a minimal length?

This is what we will talk about today.



When physicists talk about a minimal length, they usually mean the Planck length, which is about 10⁻³⁵ meters. The Planck length is named after Max Planck, who introduced it in 1899. 10⁻³⁵ meters sounds tiny and indeed it is damned tiny.

To give you an idea, think of the tunnel of the Large Hadron Collider. It’s a ring with a diameter of about 10 kilometers. The Planck length compares to the diameter of a proton as the radius of a proton to the diameter of the Large Hadron Collider.

Currently, the smallest structures that we can study are about 10⁻¹⁹ meters. That’s what we can do with the energies produced at the Large Hadron Collider and that is still sixteen orders of magnitude larger than the Planck length.

What’s so special about the Planck length? The Planck length seems to set a limit on how small a structure can be if we still want to be able to measure it. That’s because to measure small structures, we need to compress more energy into small volumes of space. That’s basically what we do with particle accelerators. Higher energy allows us to find out what happens on shorter distances. But if you stuff too much energy into a small volume, you will make a black hole.

More concretely, if you have an energy E, that will in the best case allow you to resolve a distance of about â„Źc/E. I will call that distance Δx. Here, c is the speed of light and â„Ź is a constant of nature, called Planck’s constant. Yes, that’s the same Planck! This relation comes from the uncertainty principle of quantum mechanics. So, higher energies let you resolve smaller structures.

Now you can ask, if I turn up the energy and the size I can resolve gets smaller, when do I get a black hole? Well that happens, if the Schwarzschild radius associated with the energy is similar to the distance you are trying to measure. That’s not difficult to calculate. So let’s do it.

The Schwarzschild radius is approximately M times G/c², where G is Newton’s constant and M is the mass. We are asking, when is that radius similar to the distance Δx? As you almost certainly know, the mass associated with the energy is E = Mc². And, as we previously saw, that energy is just ℏc/Δx. You can then solve this equation for Δx. And this is what we call the Planck length. It is associated with an energy called the Planck energy. If you go to higher energies than that, you will just make larger black holes. So the Planck length is the shortest distance you can measure.
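Putting the pieces together: the resolvable distance is Δx = ℏc/E, and the Schwarzschild radius associated with the energy E is roughly GE/c⁴. Setting the two equal gives

\frac{\hbar c}{E} \approx \frac{G E}{c^4}
\quad\Rightarrow\quad
E_{\rm Pl} \approx \sqrt{\frac{\hbar c^5}{G}} \approx 10^{19}\,\mathrm{GeV},
\qquad
\Delta x \approx \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{meters},

which are the Planck energy and the Planck length, respectively.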

Now, this is a neat estimate and it’s not entirely wrong, but it’s not a rigorous derivation. If you start thinking about it, it’s a little handwavy, so let me assure you there are much more rigorous ways to do this calculation, and the conclusion remains basically the same. If you combine quantum mechanics with gravity, then the Planck length seems to set a limit to the resolution of structures. That’s why physicists think nature may have a fundamentally minimal length.

Max Planck by the way did not come up with the Planck length because he thought it was a minimal length. He came up with it simply because it’s the only unit of dimension length you can create from the fundamental constants: c, the speed of light, G, Newton’s constant, and ℏ. He thought that was interesting because, as he wrote in his 1899 paper, these would be natural units that even aliens would use.

The idea that the Planck length is a minimal length only came up after the development of general relativity when physicists started thinking about how to quantize gravity. Today, this idea is supported by attempts to develop a theory of quantum gravity, which I told you about in an earlier video.

In string theory, for example, if you squeeze too much energy into a string it will start spreading out. In Loop Quantum Gravity, the loops themselves have a finite size, given by the Planck length. In Asymptotically Safe Gravity, the gravitational force becomes weaker at high energies, so beyond a certain point you can no longer improve your resolution.

When I speak about a minimal length, a lot of people seem to have a particular image in mind, which is that the minimal length works like a kind of discretization, a pixelation of a photo or something like that. But that is most definitely the wrong image. The minimal length that we are talking about here is more like an unavoidable blur on an image, some kind of fundamental fuzziness that nature has. It may, but does not necessarily, come with a discretization.

What does this all mean? Well, it means that we might be close to finding a final theory, one that describes nature at its most fundamental level and there is nothing more beyond that. That is possible, but. Remember that the arguments for the existence of a minimal length rest on extrapolating 16 orders of magnitude below the distances that we have tested so far. That’s a lot. That extrapolation might just be wrong. Even though we do not currently have any reason to think that there should be something new at distances even shorter than the Planck length, that situation might change in the future.

Still, I find it intriguing that for all we currently know, it is not necessary to think about distances shorter than the Planck length.