Saturday, February 15, 2020

The Reproducibility Crisis: An Interview with Prof. Dorothy Bishop

On my recent visit to Great Britain (the first one post-Brexit) I had the pleasure of talking to Dorothy Bishop. Bishop is Professor of Psychology at the University of Oxford and has been a leading force in combating the reproducibility crisis in her and other disciplines. You can find her on Twitter under the handle @deevybee. Her comment for Nature magazine, which I mention in the video, is here.

Monday, February 10, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 2.” by Tim Palmer

[This is the second part of Tim’s guest contribution. The first part is here.]



In this second part of my guest post, I want to discuss how the concepts of undecidability and uncomputability can lead to a novel interpretation of Bell’s famous theorem. This theorem states that under seemingly reasonable conditions, a deterministic theory of quantum physics – something Einstein believed in passionately – must satisfy a certain inequality which experiment shows is violated.

These reasonable conditions, broadly speaking, describe the concepts of causality and freedom to choose experimental parameters. The issue I want to discuss is whether the way these conditions are formulated mathematically in Bell’s Theorem actually captures the physics that supposedly underpins them.

The discussion here and in the previous post summarises the essay I recently submitted to the FQXi essay competition on undecidability and uncomputability.

For many, the notion that we have some freedom in our actions and decisions seems irrefutable. But how would we explain this to an alien, or indeed a computer, for whom free will is a meaningless concept? Perhaps we might say that we are free because we could have done otherwise. This invokes the notion of a counterfactual world: even though we in fact did X, we could have done Y.

Counterfactuals also play an important role in describing the notion of causality. Imagine I throw a stone at a glass window. Was the smashed glass caused by my throwing the stone? Yes, I might say, because if I hadn’t thrown the stone, the window wouldn’t have broken.

However, there is an alternative way to describe these notions of free will and causality without invoking counterfactual worlds. I can just as well say that free will denotes an absence of constraints that would otherwise prevent me from doing what I want to do. Or I can use Newton’s laws of motion to determine that a stone with a certain mass, projected at a certain velocity, will hit the window with a momentum guaranteed to shatter the glass. These latter descriptions make no reference to counterfactuals at all; instead the descriptions are based on processes occurring in space-time (e.g. associated with the neurons of my brain or projectiles in physical space).

What has all this got to do with Bell’s Theorem? I mentioned above the need for a given theory to satisfy “certain conditions” in order for it to be constrained by Bell’s inequality (and hence be inconsistent with experiment). One of these conditions, the one linked to free will, is called Statistical Independence. Theories which violate this condition are called Superdeterministic.

Superdeterministic theories are typically excoriated by quantum foundations experts, not least because the Statistical Independence condition appears to underpin scientific methodology in general.

For example, consider a source that emits 1000 spin-1/2 particles. Suppose you measure the spin of 500 of them along one direction and 500 of them along a different direction. Statistical Independence guarantees that the measurement statistics (e.g. the frequency of spin-up measurements) will not depend on the particular way in which the experimenter chooses to partition the full ensemble of 1000 particles into the two sub-ensembles of 500 particles.
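
To see concretely what this guarantee amounts to, here is a minimal toy sketch (my own illustration, not part of the model discussed in this post): the particles carry hidden variables distributed independently of how the experimenter partitions the ensemble, so the spin-up frequency in a sub-ensemble does not depend on the chosen partition, up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden-variable model: each of the 1000 particles carries a hidden unit
# vector lam. The probability of "spin up" along a direction a is taken to be
# (1 + cos(angle))/2, purely for illustration -- the details don't matter here.
n = 1000
lam = rng.normal(size=(n, 3))
lam /= np.linalg.norm(lam, axis=1, keepdims=True)

def spin_up_fraction(particles, direction):
    direction = direction / np.linalg.norm(direction)
    p_up = 0.5 * (1.0 + particles @ direction)
    return np.mean(rng.random(len(particles)) < p_up)

a = np.array([0.0, 0.0, 1.0])  # measurement direction for the first sub-ensemble

# Statistical Independence: the partition is chosen independently of lam, so the
# spin-up frequency barely depends on which 500 particles form the first sub-ensemble.
for trial in range(3):
    perm = rng.permutation(n)
    print(f"partition {trial}: spin-up fraction = {spin_up_fraction(lam[perm[:500]], a):.3f}")
```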

If you violate Statistical Independence, the experts say, you are effectively invoking some conspiratorial prescient daemon who could, unknown to the experimenter, preselect particles for the particular measurements the experimenter chooses to make - or, even worse perhaps, could subvert the mind of the experimenter when deciding which type of measurement to perform on a given particle. Effectively, violating Statistical Independence turns experimenters into mindless zombies! No wonder experimentalists hate Superdeterministic theories of quantum physics!!

However, the experts miss a subtle but crucial point here: whilst imposing Statistical Independence guarantees that real-world sub-ensembles are statistically equivalent, violating Statistical Independence does not guarantee that real-world sub-ensembles are not statistically equivalent. In particular it is possible to violate Statistical Independence in such a way that it is only sub-ensembles of particles subject to certain counterfactual measurements that may be statistically inequivalent to the corresponding sub-ensembles with real-world measurements.

In the example above, a sub-ensemble subject to a counterfactual measurement would be the first sub-ensemble of 500 particles, but subject to the measurement direction that was in fact applied to the second sub-ensemble of 500 particles. It is possible to violate Statistical Independence when comparing this counterfactual sub-ensemble with its real-world equivalent, without violating the statistical equivalence of the two corresponding sub-ensembles measured along their real-world directions.

However, for this idea to make any theoretical sense at all, there has to be some mathematical basis for asserting that sub-ensembles with real-world measurements can be different to sub-ensembles with counterfactual-world measurements. This is where uncomputable fractal attractors play a key role.

It is worth keeping an example of a fractal attractor in mind here. The Lorenz fractal attractor, discussed in my first post, is a geometric representation in state space of fluid motion in Newtonian space-time.

The Lorenz attractor.
[Image Credits: Markus Fritzsch.]

As I explained in my first post, the attractor is uncomputable in the sense that there is no algorithm which can decide whether a given point in state space lies on the attractor (in exactly the same sense that, as Turing discovered, there is no algorithm for deciding whether a given computer program will halt for given input data). However, as I lay out in my essay, the differential equations for the fluid motion in space-time associated with the Lorenz attractor are themselves solvable by algorithm to arbitrary accuracy and hence are computable. This dichotomy (between state space and space-time) is extremely important to bear in mind below.
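
To make this dichotomy concrete, here is a minimal sketch (my own illustration, using the standard Lorenz equations with their usual parameter values): the space-time evolution can be computed to arbitrary accuracy by a perfectly ordinary algorithm, even though the question of whether a given point lies exactly on the attractor is undecidable.

```python
import numpy as np

def lorenz_rhs(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the standard Lorenz equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z), x * y - beta * z])

def integrate(state, dt=1e-3, steps=50_000):
    """Fourth-order Runge-Kutta: the trajectory is computable to arbitrary
    accuracy by shrinking dt, in contrast to the undecidable question of
    whether a given point lies exactly on the fractal attractor."""
    for _ in range(steps):
        k1 = lorenz_rhs(state)
        k2 = lorenz_rhs(state + 0.5 * dt * k1)
        k3 = lorenz_rhs(state + 0.5 * dt * k2)
        k4 = lorenz_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

print(integrate(np.array([1.0, 1.0, 1.0])))  # a point (approximately) on the attractor after t = 50
```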

With this in mind, suppose the universe itself evolves on some uncomputable fractal subset of state space, such that the corresponding evolution equations for physics in space-time are computable. In such a model, Statistical Independence will be violated for sub-ensembles if the corresponding counterfactual measurements take states of the universe off the fractal subset (since such counterfactual states have probability of occurrence equal to zero by definition).

In the model I have developed this always occurs when considering counterfactual measurements such as those in Bell’s Theorem. (This is a nontrivial result and is the consequence of number-theoretic properties of trigonometric functions.) Importantly, in this theory, Statistical Independence is never violated when comparing two sub-ensembles subject to real-world measurements such as occurs in analysing Bell’s Theorem.

This is all a bit mind-numbing, I do admit. However, the bottom line is that I believe the mathematical definitions of free choice and causality used to understand quantum entanglement are much too general – in particular they admit counterfactual worlds as physical in a completely unconstrained way. I have proposed alternative definitions of free choice and causality which strongly constrain counterfactual states (essentially they must lie on the fractal subset in state space), whilst leaving untouched descriptions of free choice and causality based only on space-time processes. (For the experts: in the classical limit of this theory, Statistical Independence is not violated for any counterfactual states.)

With these alternative definitions, it is possible to violate Bell’s inequality in a deterministic theory which respects free choice and local causality, in exactly the way it is violated in quantum mechanics. Einstein may have been right after all!

If we can explain entanglement deterministically and causally, then synthesising quantum and gravitational physics may have become easier. Indeed, it is through such synthesis that experimental tests of my model may eventually come.

In conclusion, I believe that the uncomputable fractal attractors of chaotic systems may provide a key geometric ingredient needed to unify our theories of physics.

My thanks to Sabine for allowing me the space on her blog to express these points of view.

Saturday, February 08, 2020

Philosophers should talk more about climate change. Yes, philosophers.


I never cease to be shocked – shocked! – how many scientists don’t know how science works and, worse, don’t seem to care about it. Most of those I have to deal with still think Popper was right when he claimed falsifiability is both necessary and sufficient to make a theory scientific, even though this position has logical consequences they’d strongly object to.

Trouble is, if falsifiability was all it took, then arbitrary statements about the future would be scientific. I should, for example, be able to publish a paper predicting that tomorrow the sky will be pink and next Wednesday my cat will speak French. That’s totally falsifiable, yet I hope we all agree that if we’d let such nonsense pass as scientific, science would be entirely useless. I don’t even have a cat.

As the contemporary philosopher Larry Laudan politely put it, Popper’s idea of telling science from non-science by falsifiability “has the untoward consequence of countenancing as `scientific’ every crank claim which makes ascertainably false assertions.” Which is why the world’s cranks love Popper.

But you are not a crank, oh no, not you. And so you surely know that almost all of today’s philosophers of science agree that falsification is not a sufficient criterion of demarcation (though they disagree on whether it is necessary). Luckily, you don’t need to know anything about these philosophers to understand today’s post because I will not attempt to solve the demarcation problem (which, for the record, I don’t think is a philosophical question). I merely want to clarify just when it is scientifically justified to amend a theory whose predictions ran into tension with new data. And the only thing you need to know to understand this is that science cannot work without Occam’s razor.

Occam’s razor tells you that of two theories which describe nature equally well you should take the simpler one. Roughly speaking, it means you must discard superfluous assumptions. Occam’s razor is important because without it we would be allowed to add all kinds of unnecessary clutter to a theory just because we like it. We would be permitted, for example, to add the assumption “all particles were made by god” to the standard model of particle physics. You see right away how this wouldn’t go well for science.

Now, the phrases “describe nature equally well” and “take the simpler one” are somewhat vague. To make this prescription operationally useful, you’d have to quantify what they mean with suitable statistical measures. We can then quibble about just which statistical measure is the best, but that’s somewhat beside the point here, so let me instead come back to the relevance of Occam’s razor.

We just saw that it’s unscientific to make assumptions which are unnecessary to explain observations and don’t make a theory any simpler. But physicists get this wrong all the time, and some have made a business out of getting it wrong. They invent particles which make theories more complicated and are of no help in explaining existing data. They claim this is science because these theories are falsifiable. But the new particles were unnecessary in the first place, so their ideas are dead on arrival, killed by Occam’s razor.

If you still have trouble seeing why adding unnecessary details to established theories is unsound scientific methodology, imagine that scientists in other disciplines proceeded the way particle physicists do. We’d have biologists writing papers about flying pigs and holding conferences debating how flying pigs poop because, who knows, we might discover flying pigs tomorrow. Sounds ridiculous? Well, it is ridiculous. But that’s the same “scientific methodology” which has become common in the foundations of physics. The only difference between elaborating on flying pigs and supersymmetric particles is the amount of mathematics. And math certainly comes in handy for particle physicists because it prevents mere mortals from understanding just what the physicists are up to.

But I am not telling you this to bitch about supersymmetry; that would be beating a dead horse. I am telling you this because I have recently had to deal with a lot of climate change deniers (thanks so much, Tim). And many of these deniers, believe it or not, think I must be a denier too because, drumroll please, I am an outspoken critic of inventing superfluous particles.

Huh, you say. I hear you. It took me a while to figure out what’s with these people, but I believe I now understand where they’re coming from.

You have probably heard the common deniers’ complaint that climate scientists adapt models when new data comes in. That is supposedly unscientific because, here it comes, it’s exactly the same thing that all these physicists do each time their hypothetical particles are not observed! They just fiddle with the parameters of the theory to evade experimental constraints and to keep their pet theories alive. But Popper already said you shouldn’t do that. Then someone yells “Epicycles!” And so, the deniers conclude, climate scientists are as wrong as particle physicists and clearly one shouldn’t listen to either.

But the deniers’ argument merely demonstrates they know even less about scientific methodology than particle physicists. Revising a hypothesis when new data comes in is perfectly fine. In fact, it is what you expect good scientists to do.

The more and the better data you have, the higher the demands on your theory. Sometimes this means you actually need a new theory. Sometimes you have to adjust one or the other parameter. Sometimes you find an actual mistake and have to correct it. But more often than not it just means you neglected something that better measurements are sensitive to and you must add details to your theory. And this is perfectly fine as long as adding details results in a model that explains the data better than before, and does so not just because you now have more parameters. Again, there are statistical measures to quantify in which cases adding parameters actually makes a better fit to data.
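
For the curious, here is a minimal sketch of one such measure, the Akaike Information Criterion, which rewards a better fit but charges a penalty for every additional parameter. The data and the two models are of course made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up data: a straight line plus noise.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.0, size=x.size)

def aic(y, y_fit, n_params):
    """Akaike Information Criterion for Gaussian errors (up to an additive
    constant): lower is better, and each extra parameter costs 2 points."""
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * n_params

for degree in (1, 5):
    coeffs = np.polyfit(x, y, degree)
    y_fit = np.polyval(coeffs, x)
    print(f"polynomial of degree {degree}: AIC = {aic(y, y_fit, degree + 1):.1f}")

# The degree-5 polynomial fits the noise slightly better, but the AIC typically
# favors the straight line: the extra parameters are not earning their keep.
```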

Indeed, adding epicycles to make the geocentric model of the solar system fit with observations was entirely proper scientific methodology. It was correcting a hypothesis that ran into conflict with increasingly better observations. Astronomers of the time could have proceeded this way until they noticed that there is a simpler way to calculate the same curves, which is by using elliptical motions around the sun rather than cycles upon cycles around the Earth. Of course this is not what historically happened, but epicycles in and of themselves are not unscientific, they’re merely parametrically clumsy.

What scientists should not do, however, is to adjust details of a theory that were unnecessary in the first place. Kepler for example also thought that the planets play melodies on their orbits around the sun, an idea that was rightfully abandoned because it explains nothing.

To name another example, adding dark matter and dark energy to the cosmological standard model in order to explain observations is sound scientific practice. These are both simple explanations that vastly improve the fit of the theory to observation. What is not sound scientific methodology is then making these theories more complicated than they need to be, e.g. by replacing dark energy with complicated scalar fields even though there is no observation that calls for it, or by inventing details about the particles that make up dark matter even though these details are irrelevant for fitting existing data.

But let me come back to the climate change deniers. You may call me naïve, and I’ll take that, but I believe most of these people are genuinely confused about how science works. It’s of little use to throw evidence at people who don’t understand how scientists make evidence-based predictions. When it comes to climate change, therefore, I think we would all benefit if philosophers of science were given more airtime.

Thursday, February 06, 2020

Ivory Tower [I've been singing again]

I caught a cold and didn’t get around to recording a new physics video this week. Instead I finished a song that I wrote some weeks ago. Enjoy!

Monday, February 03, 2020

Guest Post: “Undecidability, Uncomputability and the Unity of Physics. Part 1.” by Tim Palmer

[Tim Palmer is a Royal Society Research Professor in Climate Physics at the University of Oxford, UK. He is only half as crazy as it seems.]


[Screenshot from Tim’s public lecture at Perimeter Institute]


Our three great theories of 20th Century physics – general relativity theory, quantum theory and chaos theory – seem incompatible with each other.

The difficulty of combining general relativity and quantum theory into a common theory of “quantum gravity” is legendary; some of our greatest minds have despaired – and still despair – over it.

Superficially, the links between quantum theory and chaos appear to be a little stronger, since both are characterised by unpredictability (in measurement and prediction outcomes respectively). However, the Schrödinger equation is linear and the dynamical equations of chaos are nonlinear. Moreover, in the common interpretation of Bell’s inequality, a chaotic model of quantum physics, since it is deterministic, would be incompatible with Einstein’s notion of relativistic causality.

Finally, although the dynamics of general relativity and chaos theory are both nonlinear and deterministic, it is difficult to even make sense of chaos in the space-time of general relativity. This is because the usual definition of chaos is based on the notion that nearby initial states can diverge exponentially in time. However, speaking of an exponential divergence in time depends on a choice of time-coordinate. If we logarithmically rescale the time coordinate, the defining feature of chaos disappears. Trouble is, in general relativity, the underlying physics must not depend on the space-time coordinates.

So, do we simply have to accept that, “What God hath put asunder, let no man join together”? I don’t think so. A few weeks ago, the Foundational Questions Institute put out a call for essays on the topic of “Undecidability, Uncomputability and Unpredictability”. I have submitted an essay in which I argue that undecidability and uncomputability may provide a new framework for unifying these theories of 20th Century physics. I want to summarize my argument in this and a follow-on guest post.

To start, I need to say what undecidability and uncomputability are in the first place. The concepts go back to the work of Alan Turing who in 1936 showed that no algorithm exists that will take as input a computer program (and its input data), and output 0 if the program halts and 1 if the program does not halt. This “Halting Problem” is therefore undecidable by algorithm. So, a key way to know whether a problem is algorithmically undecidable – or equivalently uncomputable – is to see if the problem is equivalent to the Halting Problem.
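
For readers who have not seen it, the core of Turing’s argument can be sketched in a few lines of necessarily hypothetical code: if a halting decider existed, one could build a program that halts exactly when the decider says it doesn’t.

```python
# Sketch of Turing's diagonalization argument. The function `halts` is the
# hypothetical decider -- it cannot actually be implemented, which is the point.

def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical: return True if program_source halts on input_data."""
    raise NotImplementedError("No such algorithm can exist.")

def diagonal(program_source: str) -> None:
    """Halts if and only if `program_source` does NOT halt when fed itself."""
    if halts(program_source, program_source):
        while True:   # loop forever
            pass
    return            # halt immediately

# Feeding `diagonal` its own source code yields a contradiction: it would halt
# exactly when `halts` says it doesn't. Hence no such `halts` can exist.
```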

Let’s return to thinking about chaotic systems. As mentioned, these are deterministic systems whose evolution is effectively unpredictable (because the evolution is sensitive to the starting conditions). However, what is relevant here is not so much this property of unpredictability, but the fact that no matter what initial condition you start from, there is a class of chaotic system where eventually (technically after an infinite time) the state evolves on a fractal subset of state space, sometimes known as a fractal attractor.

One defining characteristic of a fractal is that its dimension is not a simple integer (like that of a one-dimensional line or the two-dimensional surface of a sphere). Now, the key result I need is a theorem that there is no algorithm that will take as input some point x in state space, and halt if that point belongs to a set with fractional dimension. This implies that the fractal attractor A of a chaotic system is uncomputable and the proposition “x belongs to A” is algorithmically undecidable.

How does this help unify physics?

Firstly, defining chaos in terms of the geometry of its fractal attractor (e.g. through the fractional dimension of the attractor) is a coordinate-independent and hence more relativistic way to characterise chaos than defining it in terms of exponential divergence of nearby trajectories. Hence the uncomputable fractal attractor provides a way to unify general relativity and chaos theory.

That was easy! The rest is not so easy which is why I need two guest posts and not one!

When it comes to combining chaos theory with quantum mechanics, the first step is to realize that the linearity of the Schrödinger equation is not at all incompatible with the nonlinearity of chaos.

To understand this, consider an ensemble of integrations of a particular chaotic model based on the Lorenz equations – see Fig 1. These Lorenz equations describe fluid dynamical motion, but the details need not concern us here. The fractal Lorenz attractor is shown in the background in Fig 1. These ensembles can be thought of as describing the evolution of probability – something of practical value when we don’t know the initial conditions precisely (as is the case in weather forecasting).

Fig 1: Evolution of a contour of probability, based on ensembles of integrations of the Lorenz equations, is shown evolving in state space for different initial conditions, with the Lorenz attractor as background. 

In the first panel in Fig 1, small uncertainties do not grow much and we can therefore be confident in the predicted evolution. In the third panel, small uncertainties grow explosively, meaning we can have little confidence in any specific prediction. The second panel is somewhere in between.
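
A minimal sketch of the idea behind Fig 1 (my own toy version, not the code used for the actual figure): integrate a small ensemble of slightly perturbed initial conditions and watch how quickly the members spread apart, which depends on where on the attractor the ensemble starts.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z), x * y - beta * z]

rng = np.random.default_rng(2)
base = np.array([-5.0, -6.0, 22.0])  # an arbitrary starting point

# Ensemble of 50 members, each perturbed by a tiny random amount.
ensemble = base + 1e-3 * rng.normal(size=(50, 3))

for t_end in (1.0, 3.0, 6.0):
    finals = np.array([solve_ivp(lorenz, (0.0, t_end), member, rtol=1e-9).y[:, -1]
                       for member in ensemble])
    print(f"t = {t_end}: max ensemble spread = {finals.std(axis=0).max():.3e}")

# The spread grows from the initial 1e-3 towards the size of the attractor itself,
# and how fast it grows depends on where in state space the ensemble started.
```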

Now it turns out that the equation which describes the evolution of probability in such chaotic systems, known as the Liouville equation, is itself a linear equation. The linearity of the Liouville equation ensures that probabilities are conserved in time. Hence, for example, if there is an 80% chance that the actual state of the fluid (as described by the Lorenz equation state) lies within a certain contour of probability at initial time, then there is an 80% chance that the actual state of the fluid lies within the evolved contour of probability at the forecast time.

The remarkable thing is that the Liouville equation is formally very similar to the so-called von Neumann form of the Schrödinger equation – too similar, in my view, for this to be a coincidence. So, just as the linearity of the Liouville equation says nothing about the nonlinearity of the underlying deterministic dynamics which generates such probabilities, so too the linearity of the Schrödinger equation need say nothing about the nonlinearity of some underlying dynamics which generates quantum probabilities.
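
For reference, here are the two equations in their standard textbook forms (added here just to make the formal similarity explicit):

```latex
% Liouville equation for a classical probability density \rho(x,p,t),
% written with the Poisson bracket of the Hamiltonian H:
\frac{\partial \rho}{\partial t} = \{H, \rho\}

% von Neumann form of the Schroedinger equation for the density operator:
i\hbar\,\frac{\partial \hat{\rho}}{\partial t} = [\hat{H}, \hat{\rho}]

% The formal correspondence is the familiar one, \{H,\,\cdot\,\} \leftrightarrow [\hat{H},\,\cdot\,]/(i\hbar).
% Both equations are linear in \rho, no matter how nonlinear the underlying dynamics is.
```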

However, as I wrote above, in order to satisfy Bell’s theorem, it would appear that, being deterministic, a chaotic model will have to violate relativistic causality, seemingly thwarting the aim of trying to unify our theories of physics. At least, that’s the usual conclusion. However, the undecidable uncomputable properties of fractal attractors provide a novel route to allow us to reassess this conclusion. I will explain how this works in the second part of this post.

Sunday, February 02, 2020

Does nature have a minimal length?

Molecules are made of atoms. Atomic nuclei are made of neutrons and protons. And the neutrons and protons are made of quarks and gluons. Many physicists think that this is not the end of the story, but that quarks and gluons are made of even smaller things, for example the tiny vibrating strings that string theory is all about. But then what? Are strings made of smaller things again? Or is there a smallest scale beyond which nature just does not have any further structure? Does nature have a minimal length?

This is what we will talk about today.



When physicists talk about a minimal length, they usually mean the Planck length, which is about 10^-35 meters. The Planck length is named after Max Planck, who introduced it in 1899. 10^-35 meters sounds tiny and indeed it is damned tiny.

To give you an idea, think of the tunnel of the Large Hadron Collider. It’s a ring with a diameter of about 10 kilometers. The Planck length compares to the diameter of a proton as the radius of a proton to the diameter of the Large Hadron Collider.

Currently, the smallest structures that we can study are about ten to the minus nineteen meters. That’s what we can do with the energies produced at the Large Hadron Collider and that is still sixteen orders of magnitude larger than the Planck length.

What’s so special about the Planck length? The Planck length seems to set a limit on how small a structure can be while we can still measure it. That’s because to measure small structures, we need to compress more energy into small volumes of space. That’s basically what we do with particle accelerators. Higher energy allows us to find out what happens on shorter distances. But if you stuff too much energy into a small volume, you will make a black hole.

More concretely, if you have an energy E, that will in the best case allow you to resolve a distance of about ℏc/E. I will call that distance Δx. Here, c is the speed of light and ℏ is a constant of nature, called Planck’s constant. Yes, that’s the same Planck! This relation comes from the uncertainty principle of quantum mechanics. So, higher energies let you resolve smaller structures.

Now you can ask, if I turn up the energy and the size I can resolve gets smaller, when do I get a black hole? Well that happens, if the Schwarzschild radius associated with the energy is similar to the distance you are trying to measure. That’s not difficult to calculate. So let’s do it.

The Schwarzschild radius is approximately M times G/c², where G is Newton’s constant and M is the mass. We are asking, when is that radius similar to the distance Δx? As you almost certainly know, the mass associated with the energy is E=Mc². And, as we previously saw, that energy is just ℏc/Δx. You can then solve this equation for Δx. And this is what we call the Planck length. It is associated with an energy called the Planck energy. If you go to higher energies than that, you will just make larger black holes. So the Planck length is the shortest distance you can measure.
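
Written out in symbols (this is just the estimate above, nothing more rigorous):

```latex
% Resolution from the uncertainty principle:  \Delta x \sim \hbar c / E
% Schwarzschild radius of that energy:        R_s \sim G M / c^2 = G E / c^4
% Setting R_s \sim \Delta x and inserting E \sim \hbar c / \Delta x:
\Delta x \;\sim\; \frac{G}{c^4}\,\frac{\hbar c}{\Delta x}
\quad\Longrightarrow\quad
\Delta x \;\sim\; \sqrt{\frac{\hbar G}{c^3}} \;=\; \ell_{\mathrm{Pl}} \;\approx\; 1.6\times 10^{-35}\ \mathrm{m}.
```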

Now, this is a neat estimate and it’s not entirely wrong, but it’s not a rigorous derivation. If you start thinking about it, it’s a little handwavy, so let me assure you there are much more rigorous ways to do this calculation, and the conclusion remains basically the same. If you combine quantum mechanics with gravity, then the Planck length seems to set a limit to the resolution of structures. That’s why physicists think nature may have a fundamentally minimal length.

Max Planck, by the way, did not come up with the Planck length because he thought it was a minimal length. He came up with it simply because it’s the only quantity with the dimension of a length that you can construct from the fundamental constants c, the speed of light, G, Newton’s constant, and ℏ. He thought that was interesting because, as he wrote in his 1899 paper, these would be natural units that also aliens would use.
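
If you want to check the numbers yourself, here is a short sketch using the CODATA values that ship with scipy; these are the three combinations of c, G and ℏ that give a length, a time and a mass:

```python
from math import sqrt
from scipy.constants import hbar, G, c

planck_length = sqrt(hbar * G / c**3)  # ~ 1.6e-35 m
planck_time   = sqrt(hbar * G / c**5)  # ~ 5.4e-44 s
planck_mass   = sqrt(hbar * c / G)     # ~ 2.2e-8 kg

print(f"Planck length: {planck_length:.3e} m")
print(f"Planck time:   {planck_time:.3e} s")
print(f"Planck mass:   {planck_mass:.3e} kg")
```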

The idea that the Planck length is a minimal length only came up after the development of general relativity when physicists started thinking about how to quantize gravity. Today, this idea is supported by attempts to develop a theory of quantum gravity, which I told you about in an earlier video.

In string theory, for example, if you squeeze too much energy into a string it will start spreading out. In Loop Quantum Gravity, the loops themselves have a finite size, given by the Planck length. In Asymptotically Safe Gravity, the gravitational force becomes weaker at high energies, so beyond a certain point you can no longer improve your resolution.

When I speak about a minimal length, a lot of people seem to have a particular image in mind, which is that the minimal length works like a kind of discretization, a pixelation of a photo or something like that. But that is most definitely the wrong image. The minimal length that we are talking about here is more like an unavoidable blur on an image, some kind of fundamental fuzziness that nature has. It may, but does not necessarily, come with a discretization.

What does this all mean? Well, it means that we might be close to finding a final theory, one that describes nature at its most fundamental level and there is nothing more beyond that. That is possible. But remember that the arguments for the existence of a minimal length rest on extrapolating 16 orders of magnitude below the distances we have tested so far. That’s a lot. That extrapolation might just be wrong. Even though we do not currently have any reason to think that there should be something new at distances even shorter than the Planck length, that situation might change in the future.

Still, I find it intriguing that for all we currently know, it is not necessary to think about distances shorter than the Planck length.

Friday, January 24, 2020

Do Black Holes Echo?

What happens with the event horizon of two black holes if they merge? Might gravitational waves emitted from such a merger tell us if Einstein’s theory of general relativity is wrong? Yes, they might. But it’s unlikely. In this video, I will explain why. In more detail, I will tell you about the possibility that a gravitational wave signal from a black hole merger has echoes.


But first, some context. We know that Einstein’s theory of general relativity is incomplete. We know that because it cannot handle quantum properties. To complete General Relativity, we need a theory of quantum gravity. But progress in theory development has been slow and experimental evidence for quantum gravity is hard to come by because quantum fluctuations of space-time are so damn tiny. In my previous video I told you about the most promising ways of testing quantum gravity. Today I want to tell you about testing quantum gravity with black hole horizons in particular.

The effects of quantum gravity become large when space and time are strongly curved. This is the case towards the center of a black hole, but it is not the case at the horizon of a black hole. Most people get this wrong, so let me repeat this. The curvature of space is not strong at the horizon of a black hole. It can, in fact, be arbitrarily weak. That’s because the curvature at the horizon is inversely proportional to the square of the black hole’s mass. This means the larger the black hole, the weaker the curvature at the horizon. It also means we have no reason to think that there are any quantum gravitational effects near the horizon of a black hole. It’s an almost flat and empty space.

Black holes do emit radiation by quantum effects. This is the Hawking radiation named after Stephen Hawking. But Hawking radiation comes from the quantum properties of matter. It is an effect of ordinary quantum mechanics and not an effect of quantum gravity.

However, one can certainly speculate that maybe General Relativity does not correctly describe black hole horizons. So how would you do that? In General Relativity, the horizon is the boundary of a region that you can only get into but never out of. The horizon itself has no substance and indeed you would not notice crossing it. But quantum effects could change the situation. And that might be observable.

Just what you would observe has been studied by Niayesh Afshordi and his group at Perimeter Institute. They try to understand what happens if quantum effects turn the horizon into a physical obstacle that partly reflects gravitational waves. If that was so, the gravitational waves produced in a black hole merger would bounce back and forth between the horizon and the black hole’s photon sphere.

The photon sphere is a potential barrier at about one and a half times the radius of the horizon. The gravitational waves would slowly leak during each iteration rather than escape in one bang. And if that is what is really going on, then gravitational wave interferometers like LIGO should detect echoes of the original merger signal.

And here is the thing! Niayesh and his group did find an echo signal in the gravitational wave data. This signal is in the first event ever detected by LIGO in September 2015. The statistical significance of this echo was originally at 2.5 σ. This means that roughly one in a hundred times, random fluctuations conspire to look like the observed echo. So, it’s not a great level of significance, at least not by physics standards. But it’s still 2.5σ better than nothing.

Some members of the LIGO collaboration then went and did their own analysis of the data. And they also found the echo, but at a somewhat smaller significance. There has since been some effort by several groups to extract a signal from the data with different techniques of analysis using different models for the exact type of echo signal. The signal could, for example, be dampened over time, or its frequency distribution could change. The reported false alarm rate of these findings ranges from 5% to 0.002%, the latter being nearly a discovery.
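
To translate between sigmas and false alarm rates, here is a small back-of-the-envelope helper (my own, using a Gaussian approximation; the collaborations use more careful methods):

```python
from scipy.stats import norm

# Gaussian tail probability for the originally reported 2.5 sigma.
one_sided = norm.sf(2.5)
print(f"2.5 sigma: one-sided p ~ {one_sided:.3%}, two-sided p ~ {2 * one_sided:.3%}")
# Either way, that is roughly a one-in-a-hundred chance, as quoted above.

# The reverse direction, for the reported range of false alarm rates.
for p in (0.05, 0.00002):
    print(f"false alarm rate {p:.3%} corresponds to ~ {norm.isf(p):.1f} sigma (one-sided)")
```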

However, if you know anything about statistical analysis, then you know that trying out different methods of analysis and different models until you find something is not a good idea. Because if you try long enough, you will eventually find something. And in the case of black hole echoes, I suspect that most of the models that gave negative results never appeared in the literature. So the statistical significance may be misleading.

I also have to admit that as a theorist, I am not enthusiastic about black hole echoes because there are no compelling theoretical reasons to expect them. We know that quantum gravitational effects become important towards the center of the black hole. But that’s hidden deep inside the horizon and the gravitational waves we detect are not sensitive to what is going on there. That quantum gravitational effects are also relevant at the horizon is pure conjecture, and yet that’s what it takes to have black hole echoes.

But theoretical misgivings aside, we have never tested the properties of black hole horizons before, and on unexplored territory all stones should be turned. You find a summary of the current status of the search for black hole echoes in Afshordi’s most recent paper.

Wednesday, January 22, 2020

Travel and Book Update

My book “Lost in Math” has meanwhile also been translated into Hungarian and Polish. Previous translations have appeared in German, Spanish, Italian, and French, I believe. I have somewhat lost track. There should have been a Chinese and a Romanian translation too, I think, but I’m not sure what happened to these. In case someone spots them, please let me know. The paperback version of the US edition is scheduled to appear in June.

My upcoming trips are to Cambridge, UK, for a public debate on the question “How is the scientific method doing?” (on Jan 28th) and a seminar about Superdeterminism (on Jan 29). On Feb 13, I am in Oxford (again) giving a talk about Superfluid Dark Matter (again), but this time at the physics department. On Feb 24th, I am in London for the Researcher to Reader Conference 2020.

On March 9th I am giving a colloquium at Brown University. On March 19th I am in Zurich for some kind of panel discussion, details of which I have either forgotten or never knew. On April 8, I am in Gelsenkirchen for a public lecture.

Our Superdeterminism workshop is scheduled for the first week of May (details to come soon). In mid-May I am in Copenhagen for a public lecture. In June I’ll be on Long Island for a conference on peer review organized by the APS.

The easiest way to keep track of my whatabouts and whereabouts is to follow me on Twitter or on Facebook.

Thursday, January 16, 2020

How to test quantum gravity

Today I want to talk about a topic that most physicists get wrong: How to test quantum gravity. Most physicists believe it just is not possible. But it is possible.


Einstein’s theory of general relativity tells us that gravity is due to the curvature of space and time. But this theory is, strictly speaking, wrong. It is wrong because according to general relativity, gravity does not have quantum properties. I told you all about this in my earlier videos. This lack of quantum behavior of gravity gives rise to mathematical inconsistencies that make no physical sense. To really make sense of gravity, we need a theory of quantum gravity. But we do not have such a theory yet. In this video, we will look at the experimental possibilities we have to find the missing theory.

But before I do that, I want to tell you why so many physicists think that it is not possible to test quantum gravity.

The reason is that gravity is a very weak force and its quantum effects are even weaker. Gravity does not seem weak in everyday life. But that is because gravity, unlike all the other fundamental forces, does not neutralize. So, on long distances, it is the only remaining force and that’s why we notice it so prominently. But if you look at, for example, the gravitational force between an electron and a proton and the electromagnetic force between them, then the electromagnetic force is a factor 10^40 stronger.
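
You can check that factor yourself with a few lines of code (using the standard constants; the exact power of ten depends somewhat on which pair of particles you compare):

```python
from scipy.constants import G, m_e, m_p, e, epsilon_0, pi

# Gravitational and electrostatic force between an electron and a proton.
# The distance r cancels in the ratio, so it is left out.
F_gravity = G * m_e * m_p
F_coulomb = e**2 / (4 * pi * epsilon_0)

print(f"electromagnetic / gravitational ~ {F_coulomb / F_gravity:.1e}")
# ~ 2e39, i.e. the electromagnetic force wins by roughly forty orders of magnitude.
```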

One way to see what this means is to look at a fridge magnet. The magnetic force of that tiny thing is stronger than the gravitational pull of the whole planet.

Now, in most approaches to quantum gravity, the gravitational force is mediated by a particle. This particle is called the graviton, and it belongs to the gravitational force the same way that the photon belongs to the electromagnetic force. But since gravity is so much weaker than the electromagnetic force, you need ridiculously high energies to produce a measurable amount of gravitons. With the currently available technology, it would take a particle accelerator about the size of the Milky Way to reach sufficiently high energies.

And this is why most physicists think that one cannot test quantum gravity. It is testable in principle, all right, but not in practice, because one needs these ridiculously large accelerators or detectors.

However, this argument is wrong. It is wrong because one does not need to produce a quantum of a field to demonstrate that the field must be quantized. Take electromagnetism as an example. We have evidence that it must be quantized right here. Because if it was not quantized, then atoms would not be stable. Somewhat more quantitatively, the discrete energy levels of atomic spectral lines demonstrate that electromagnetism is quantized. And you do not need to detect individual photons for that.

With the quantization of gravity, it’s somewhat more difficult, but not impossible. A big advantage of gravity is that the gravitational force becomes stronger for larger systems because, recall, gravity, unlike the other forces, does not neutralize and therefore adds up. So, we can make quantum gravitational effects stronger by just taking larger masses and bringing them into quantum states, for example into a state in which the masses are in two places at once. One should then be able to tell whether the gravitational field is also in two places at once. And if one can do that, one can demonstrate that gravity has quantum behavior.

But the trouble is that quantum effects for large objects quickly fade away, or “decohere” as the physicists say. So the challenge of measuring quantum gravity comes down to producing and maintaining quantum states of heavy objects. “Heavy” here means something like a milligram. That doesn’t sound heavy, but it is very heavy compared to the masses of elementary particles.

The objects you need for such an experiment have to be heavy enough so that one can actually measure the gravitational field. There are a few experiments attempting to measure this. But presently the masses that one can bring into quantum states are not quite high enough. However, it is something that will reasonably become possible in the coming decades.

Another good chance to observe quantum gravitational effects is to use the universe as laboratory. Quantum gravitational effects should be strong right after the big bang and inside of black holes. Evidence from what happened in the early universe could still be around today, for example in the cosmic microwave background. Indeed, several groups are trying to find out whether the cosmic microwave background can be analyzed to show that gravity must have been quantized. But at least for now the signal is well below measurement precision.

With black holes, it’s more complicated, because the region where quantum gravity is strong is hidden behind the event horizon. But some computer simulations seem to show that stars can collapse without forming a horizon. In this case we could look right at the quantum gravitational effects. The challenge with this idea is to find out just how the observations would differ between a “normal” black hole and a singularity without horizon but with quantum gravitational effects. Again, that’s subject of current research.

And there are other options. For example, the theory of quantum gravity may violate symmetries that are respected by general relativity. Symmetry violations can show up in high-precision measurements at low energies, even if they are very small. This is something that one can look for with particle decays or particle interactions and indeed something that various experimental groups are looking for.

There are several other ways to test quantum gravity, but these are more speculative in that they look for properties that a theory of quantum gravity may not have.

For example, the way in which gravitational waves are emitted in a black hole merger is different if the black hole horizon has quantum effects. However, this may just not be the case. The same goes for the idea that space itself may have the properties of a medium and give rise to dispersion, which means that light of different colors travels at different speeds, or may have viscosity. Again, this is something that one can look for, and that physicists are looking for. It’s not our best shot though, because quantum gravity may not give rise to these effects.

In any case, as you can see, clearly it is possible to test quantum gravity. Indeed I think it is possible that we will see experimental evidence for quantum gravity in the next 20 years, most likely by the type of test that I talked about first, with the massive quantum objects.

Wednesday, January 08, 2020

Update January 2020

A quick update on some topics that I previously told you about.


Remember I explained the issue with the missing electromagnetic counterparts to gravitational wave detections? In a recent paper a group of physicists from Russia claimed they had evidence for the detection of a gamma ray event coincident with the gravitational wave detection from a binary neutron star merger. They say they found it in the data from the INTEGRAL satellite mission.

Their analysis was swiftly criticized informally by other experts in the field, but so far there is no formal correspondence about this. So the current status is that we are still missing confirmation that the LIGO and Virgo gravitational wave interferometers indeed detect signals from outer space.

So much about gravitational waves. There is also news about dark energy. Last month I told you that a new analysis of the supernova data showed they can be explained without dark energy. The supernova data, to remind you, are the major evidence that physicists have for dark energy. And if that evidence does not hold up, that’s a big deal, because the discovery of dark energy was awarded a Nobel Prize in 2011.

However, that new analysis of the supernova data was swiftly criticized by another group. This criticism, to be honest, did not make much sense to me, because they picked on the choice of coordinate system, which was basically the whole point of the original analysis. In any case, the authors of the original paper then debunked the criticism. And that is still the status today.

Quanta Magazine was happy to quote a couple of astrophysicists saying that the evidence for dark energy from supernovae is sound without giving further reasons.

Unfortunately, this kind of thing happens very often. Someone, or a group, goes and challenges a widely accepted result. Then someone else criticizes the new work. So far, so good. But after this, what frequently happens is that everybody else, scientists as well as the popular science press, will just quote the criticism as having sorted out the situation, just so that they do not have to think about the problem themselves. I do not know, but I am afraid that this is what’s going on.

I was about to tell you more about this, but something better came to my mind. The lead author of the supernova paper, Subir Sarkar, is located in Oxford, and I will be visiting Oxford next month. So I asked if he would be up for an interview, and he kindly agreed. So you will have him explain his work himself.

Speaking of supernovae. There was another paper just a few days ago that claimed that supernovae are actually not very good standard candles, and that indeed their luminosity might just depend on the average age of the star that goes supernova.

Now, if you look at more distant supernovae, the light has had to travel for a long time to reach us, which means they are on the average younger. So, if younger stars that go bang have a different luminosity than older ones, that introduces a bias in the analysis that can mimic the effect of dark energy. Indeed, the authors of that new paper also claim that one does not need dark energy to explain the observations.

This gives me somewhat of a headache because these are two different reasons for why dark energy might not exist. Which raises the question what happens if you combine them. Maybe that makes the expansion too slow? Also, I said this before, but let me emphasize again that the supernova data are not the only evidence for dark energy. Someone’s got to do a global fit of all the available data before we can draw conclusions.

One final point for today: the well-known particle physicist Mikhail Shifman has an article on the arXiv that could best be called an opinion piece. It is titled “Musings on the current status of high energy physics”. In this article he writes “Low-energy supersymmetry is ruled out, and gone with it is the concept of naturalness, a basic principle which theorists cherished and followed for decades.” And in a footnote he adds “By the way, this principle has never been substantiated by arguments other than aesthetical.”

This is entirely correct and one of the main topics in my book “Lost in Math”. Naturalness, to remind you, was the main reason so many physicists thought that the Large Hadron Collider should see new particles besides the Higgs boson. Which has not happened. The principle of naturalness is now pretty much dead because it’s just in conflict with observation.

However, the particle physics community has still not analyzed how it could possibly be that such a large group of people for such a long time based their research on an argument that was so obviously non-scientific. Something has seriously gone wrong here and if we do not understand what, it can happen again.

Friday, January 03, 2020

The Real Butterfly Effect

If a butterfly flaps its wings in China today, it may cause a tornado in America next week. Most of you will be familiar with this “Butterfly Effect” that is frequently used to illustrate a typical behavior of chaotic systems: Even the smallest disturbances can grow and have big consequences.


The name “Butterfly Effect” was popularized by James Gleick in his 1987 book “Chaos” and is usually attributed to the meteorologist Edward Lorenz. But I recently learned that this is not what Lorenz actually meant by Butterfly Effect.

I learned this from a paper by Tim Palmer, Andreas Döring, and Gregory Seregin called “The Real Butterfly Effect” and that led me to dig up Lorenz’ original paper from 1969.

Lorenz, in this paper, does not write about butterfly wings. He instead refers to a sea gull’s wings, but then attributes that to a meteorologist whose name he can’t recall. The reference to a butterfly seems to have come from a talk that Lorenz gave in 1972, which was titled “Does the Flap of a Butterfly’s Wings in Brazil set off a Tornado in Texas?”

The title of this talk was actually suggested by the session chair, a meteorologist by the name of Phil Merilees. In any case, it was the butterfly that stuck instead of the sea gull. And what was the butterfly talk about? It was a summary of Lorenz’s 1969 paper. So what’s in that paper?

In that paper, Lorenz made a much stronger claim than that a chaotic system is sensitive to the initial conditions. The usual butterfly effect says that any small inaccuracy in the knowledge that you have about the initial state of the system will eventually blow up and make a large difference. But if you did precisely know the initial state, then you could precisely predict the outcome, and if only you had good enough data you could make predictions as far ahead as you like. It’s chaos, alright, but it’s still deterministic.

Now, in the 1969 paper, Lorenz looks at a system that has an even worse behavior. He talks about weather, so the system he considers is the Earth, but that doesn’t really matter, it could be anything. He says, let us divide up the system into pieces of equal size. In each piece we put a detector that makes a measurement of some quantity. That quantity is what you need as input to make a prediction. Say, air pressure and temperature. He further assumes that these measurements are arbitrarily accurate. Clearly unrealistic, but that’s just to make a point.

How well can you make predictions using the data from your measurements? You have data on that finite grid. But that does not mean you can generally make a good prediction on the scale of that grid, because errors will creep into your prediction from scales smaller than the grid. You expect that to happen of course because that’s chaos; the non-linearity couples all the different scales together and the error on the small scales doesn’t stay on the small scales.

But you can try to combat this error by making the grid smaller and putting in more measurement devices. For example, Lorenz says, if you have a typical grid of some thousand kilometers, you can make a prediction that’s good for, say, 5 days. After these 5 days, the errors from smaller distances screw you up. So then you go and decrease your grid length by a factor of two.

Now you have many more measurements and much more data. But, and here comes the important point: Lorenz says this may only increase the time for which you can make a good prediction by half of the original time. So now you have 5 days plus 2 and a half days. Then you can go and make your grid finer again. And again you will gain half of the time. So now you have 5 days plus 2 and half plus 1 and a quarter. And so on.

Most of you will know that if you sum up this series all the way to infinity it will converge to a finite value, in this case that’s 10 days. This means that even if you have an arbitrarily fine grid and you know the initial condition precisely, you will only be able to make predictions for a finite amount of time.
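
In formulas, the total lead time is a geometric series:

```latex
% Each halving of the grid adds half of the previous gain in prediction time:
5 + \frac{5}{2} + \frac{5}{4} + \frac{5}{8} + \dots
\;=\; \sum_{n=0}^{\infty} 5\left(\tfrac{1}{2}\right)^{n}
\;=\; \frac{5}{1-\tfrac{1}{2}} \;=\; 10\ \text{days}.
```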

And this is the real butterfly effect: that a chaotic system may be deterministic and yet still not be predictable beyond a finite amount of time.

This of course raises the question whether there actually is any system that has such properties. There are differential equations which have such behavior. But whether the real butterfly effect occurs for any equation that describes nature is unclear. The Navier-Stokes equation, which Lorenz was talking about, may or may not suffer from the “real” butterfly effect. No one knows. This is presently one of the big unsolved problems in mathematics.

However, the Navier-Stokes equation, and really any other equation for macroscopic systems, is strictly speaking only an approximation. On the most fundamental level it’s all particle physics and, ultimately, quantum mechanics. And the equations of quantum mechanics do not have butterfly effects because they are linear. Then again, no one would use quantum mechanics to predict the weather, so that’s a rather theoretical answer.

The brief summary is that even in a deterministic system predictions may only be possible for a finite amount of time and that is what Lorenz really meant by “Butterfly Effect.”