Wednesday, October 16, 2019

Dark matter nightmare: What if we are just using the wrong equations?

Dark matter filaments. Computer simulation.
[Image: John Dubinski (U of Toronto)]
Einstein’s theory of general relativity is an extremely well-confirmed theory. Countless experiments have shown that its predictions for our solar system agree with observation to utmost accuracy. But when we point our telescopes at larger distances, something is amiss. Galaxies rotate faster than expected. Galaxies in clusters move faster than they should. The expansion of the universe is speeding up.

General relativity does not tell us what is going on.

Physicists have attributed these puzzling observations to two newly postulated substances: Dark matter and dark energy. These two names are merely placeholders in Einstein’s original equations; their sole purpose is to remove the mismatch between prediction and observation.

This is not a new story. We have had evidence for dark matter since the 1930s, and dark energy was on the radar already in the 1990s. Both have since occupied thousands of physicists with attempts to explain just what we are dealing with: Is dark matter a particle, and if so what type, and how can we measure it? If it is not a particle, then what do we change about general relativity to fix the discrepancy with measurements? Is dark energy maybe a new type of field? Is it, too, made of some particle? Does dark matter have something to do with dark energy or are the two unrelated?

To answer these questions, hundreds of hypotheses have been proposed, conferences have been held, careers have been made – but here we are, in 2019, and we still don’t know.

Bad enough, you may say, but the thing that really keeps me up at night is this: Maybe all these thousands of physicists are simply using the wrong equations. I don’t mean that general relativity needs to be modified. I mean that we incorrectly use the equations of general relativity to begin with.

The issue is this. General relativity relates the curvature of space and time to the sources of matter and energy. Put in a distribution of matter and energy at any one moment of time, and the equations tell you what space and time do in response, and how the matter must move according to this response.

But general relativity is a non-linear theory. This means, loosely speaking, that gravity gravitates. More concretely, it means that if you have two solutions to the equations and you take their sum, this sum will not also be a solution.

Now, what we do when we want to explain what a galaxy does, or a galaxy cluster, or even the whole universe, is not to plug the matter and energy of every single planet and star into the equations. This would be computationally unfeasible. Instead, we use an average of matter and energy, and use that as the source for gravity.

Needless to say, taking an average on one side of the equation requires that you also take an average on the other side. But since the gravitational part is non-linear, this will not give you the same equations that we use for the solar system: The average of a function of a variable is not the same as the function of the average of the variable. We know it’s not. But whenever we use general relativity on large scales, we assume that this is the case.
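To see the issue in miniature, here is a toy numerical check with a simple quadratic standing in for the non-linear gravitational part; the numbers mean nothing physically, only the mismatch does.

```python
import numpy as np

# Toy check: for a non-linear function f, the average of f(x) is not
# f applied to the average of x. A quadratic stands in for the
# non-linear gravitational side of the equations.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=100_000)  # a made-up "matter distribution"

f = lambda y: y**2

print(np.mean(f(x)))  # <f(x)>, comes out near 1.25
print(f(np.mean(x)))  # f(<x>), comes out near 1.00
```

The two numbers disagree, and the disagreement grows with the spread of the distribution.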

So, we know that strictly speaking the equations we use are wrong. The big question is, then, just how wrong are they?

Nosy students who ask this question are usually told these equations are not very wrong and are good to use. The argument goes that the difference between the equation we use and the equation we should use is negligible because gravity is weak in all these cases.

But if you look at the literature somewhat closer, then this argument has been questioned. And these questions have been questioned. And the questioning questions have been questioned. And the debate has remained unsettled until today.

That it is difficult to average non-linear equations is of course not a problem specific to cosmology. It’s a difficulty that condensed matter physicists have to deal with all the time, and it’s a major headache also for climate scientists. These scientists have a variety of techniques to derive the correct equations, but unfortunately the known methods do not easily carry over to general relativity because they do not respect the symmetries of Einstein’s theory.

It’s admittedly an unsexy research topic. It’s technical and tedious and most physicists ignore it. And so, while there are thousands of physicists who simply assume that the correction-terms from averaging are negligible, there are merely two dozen or so people trying to make sure that this assumption is actually correct.

Given how much brain-power physicists have spent on trying to figure out what dark matter and dark energy are, I think it would be a good idea to settle definitively whether they are anything at all. At the very least, I would sleep better.

Further reading: Does the growth of structure affect our dynamical models of the universe? The averaging, backreaction and fitting problems in cosmology, by Chris Clarkson, George Ellis, Julien Larena, and Obinna Umeh. Rept. Prog. Phys. 74 (2011) 112901, arXiv:1109.2314 [astro-ph.CO].

Monday, October 07, 2019

What does the future hold for particle physics?

In my new video, I talk about the reason why the Large Hadron Collider, LHC for short, has not found fundamentally new particles besides the Higgs boson, and what this means for the future of particle physics. Below you find a transcript with references.


Before the LHC turned on, particle physicists had high hopes it would find something new besides the Higgs boson, something that would go beyond the standard model of particle physics. There was a lot of talk about the particles that supposedly make up dark matter, which the collider might produce. Many physicists also expected it to find the first of a class of entirely new particles that were predicted based on a hypothesis known as supersymmetry. Others talked about dark energy, additional dimensions of space, string balls, black holes, time travel, making contact with parallel universes or “unparticles”. That’s particles which aren’t particles. So, clearly, some wild ideas were in the air.

To illustrate the situation before the LHC began taking data, let me quote a few articles from back then.

Here is Valerie Jamieson writing for New Scientist in 2008:
“The Higgs and supersymmetry are on firm theoretical footing. Some theorists speculate about more outlandish scenarios for the LHC, including the production of extra dimensions, mini black holes, new forces, and particles smaller than quarks and electrons. A test for time travel has also been proposed.”
Or, here is Ian Sample for the Guardian, also in 2008:
“Scientists have some pretty good hunches about what the machine might find, from creating never-seen-before particles to discovering hidden dimensions and dark matter, the mysterious substance that makes up 25% of the universe.”
Paul Langacker in 2010, writing for the APS:
“Theorists have predicted that spectacular signals of supersymmetry should be visible at the LHC.”
Michael Dine for Physics Today in 2007:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
The Telegraph in 2010:
“[The LHC] could answer the question of what causes mass, or even surprise its creators by revealing the existence of a fifth, sixth or seventh secret dimension of time and space.”
A final one. Here is Steve Giddings writing in 2010 for phys.org:
“LHC collisions might produce dark-matter particles... The collider might also shed light on the more predominant “dark energy,”... the LHC may reveal extra dimensions of space... if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes... Supersymmetry could be discovered by the LHC...”
The Large Hadron Collider has been running since 2010. It has found the Higgs boson. But why didn’t it find any of the other things?

This question is surprisingly easy to answer. There was never a good reason to expect any of these things in the first place. The more difficult question is why so many particle physicists thought those were reasonable expectations, and why not a single one of them has told us what they have learned from their failed predictions.

To see what happened here, it is useful to look at the difference between the prediction of the Higgs-boson and the other speculations. The standard model without the Higgs does not work properly. It becomes mathematically inconsistent at energies that the LHC is able to reach. Concretely, without the Higgs, the standard model predicts probabilities larger than one, which makes no sense.

We therefore knew, before the LHC turned on, that something new had to happen. It could have been something else besides the Higgs. The Higgs was one way to fix the problem with the standard model, but there are other ways. However, the Higgs turned out to be right.

All other proposed ideas, extra dimensions, supersymmetry, time-travel, and so on, are unnecessary. These theories have been constructed so that they are compatible with all existing observations. But they are not necessary to solve any problem with the standard model. They are basically wishful thinking.

The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix. I am afraid that many of them still believe this. This supposed problem is that the standard model is not “technically natural”.

This means the standard model contains one number that is small, but there is no explanation for why it is small. This number is the mass of the Higgs-boson divided by the Planck mass, which happens to be about 10⁻¹⁵. The standard model works just fine with that number and it fits the data. But a small number like this, without explanation, is ugly and particle physicists didn’t want to believe nature could be that ugly.

Well, now they know that nature doesn’t care what physicists want it to be like.

What does this mean for the future of particle physics? This argument from “technical naturalness” was the only reason that physicists had to think that the standard model is incomplete and something to complete it must appear at LHC energies. Now that it is clear this argument did not work, there is no reason why a next larger collider should see anything new either. The standard model runs into mathematical trouble again at energies about a billion times higher than what a next larger collider could test. At the moment, therefore, we have no good reason to build a larger particle collider.

But particle physics is not only collider physics. And so, it seems likely to me, that research will shift to other areas of physics. A shift that has been going on for two decades already, and will probably become more pronounced now, is the move to astrophysics, in particular the study of dark matter and dark energy and also, to some extent, the early universe.

The other shift that we are likely to see is a move away from high-energy particle physics and towards high-precision measurements at lower energies, or to table-top experiments probing the quantum behavior of many-particle systems, where we still have much to learn.

Wednesday, October 02, 2019

Has Reductionism Run its Course?

For more than 2000 years, ever since Democritus’ first musings about atoms, reductionism has driven scientific inquiry. The idea is simple enough: Things are made of smaller things, and if you know what the small things do, you learn what the large things do. Simple – and stunningly successful.

After 2000 years of taking things apart into smaller things, we have learned that all matter is made of molecules, and that molecules are made of atoms. Democritus originally coined the word “atom” to refer to indivisible, elementary units of matter. But what we have come to call “atoms”, we now know, are made of even smaller particles. And those smaller particles are yet again made of even smaller particles.

© Sabine Hossenfelder
The smallest constituents of matter, for all we currently know, are the 25 particles which physicists collect in the standard model of particle physics. Are these particles made up of yet another set of smaller particles, strings, or other things?

It is certainly possible that the particles of the standard model are not the ultimate constituents of matter. But we presently have no particular reason to think they have a substructure. And this raises the question whether attempting to look even closer into the structure of matter is a promising research direction – right here, right now.

It is a question that every researcher in the foundations of physics will be asking themselves, now that the Large Hadron Collider has confirmed the standard model, but found nothing beyond that.

20 years ago, it seemed clear to me that probing physical processes at ever shorter distances is the most reliable way to better understand how the universe works. And since it takes high energies to resolve short distances, this means that slamming particles together at high energies is the route forward. In other words, if you want to know more, you build bigger particle colliders.

This is also, unsurprisingly, what most particle physicists are convinced of. Going to higher energies, so their story goes, is the most reliable way to search for something fundamentally new. This is, in a nutshell, particle physicists’ major argument in favor of building a new particle collider, one even larger than the presently operating Large Hadron Collider.

But this simple story is too simple.

The idea that reductionism means things are made of smaller things is what philosophers more specifically call “methodological reductionism”. It’s a statement about the properties of stuff. But there is another type of reductionism, “theory reductionism”, which instead refers to the relation between theories. One theory can be “reduced” to another one, if the former can be derived from the latter.

Now, the examples of reductionism that particle physicists like to put forward are the cases where both types of reductionism coincide: Atomic physics explains chemistry. Statistical mechanics explains the laws of thermodynamics. The quark model explains regularities in proton collisions. And so on.

But not all cases of successful theory reduction have also been cases of methodological reduction. Take Maxwell’s unification of the electric and magnetic force. From Maxwell’s theory you can derive a whole bunch of equations, such as the Coulomb law and Faraday’s law, that people used before Maxwell explained where they come from. Electromagnetism is therefore clearly a case of theory reduction, but it did not come with a methodological reduction.

Another well-known exception is Einstein’s theory of General Relativity. General Relativity can be used in more situations than Newton’s theory of gravity. But it is not the physics on short distances that reveals the differences between the two theories. Instead, it is the behavior of bodies at high relative speed and strong gravitational fields that Newtonian gravity cannot cope with.

Another example that belongs on this list is quantum mechanics. Quantum mechanics reproduces classical mechanics in suitable approximations. It is not, however, a theory about small constituents of larger things. Yes, quantum mechanics is often portrayed as a theory for microscopic scales, but, no, this is not correct. Quantum mechanics is really a theory for all scales, large to small. We have observed quantum effects over distances exceeding 100 km and for objects weighing as “much” as a nanogram, composed of more than 10¹³ atoms. It’s just that quantum effects on large scales are difficult to create and observe.

Finally, I would like to mention Noether’s theorem, according to which symmetries give rise to conservation laws. This example is different from the previous ones in that Noether’s theorem was not applied to any theory in particular. But it has resulted in a more fundamental understanding of natural law, and therefore I think it deserves a place on the list.

In summary, history does not support particle physicists’ belief that a deeper understanding of natural law will most likely come from studying shorter distances. On the very contrary, I have begun to worry that physicists’ confidence in methodological reductionism stands in the way of progress. That’s because it suggests we ask certain questions instead of others. And those may just be the wrong questions to ask.

If you believe in methodological reductionism, for example, you may ask what dark energy is made of. But maybe dark energy is not made of anything. Instead, dark energy may be an artifact of our difficulty averaging non-linear equations.

It’s similar with dark matter. The methodological reductionist will ask for a microscopic theory and look for a particle that dark matter is made of. Yet, maybe dark matter is really a phenomenon associated with our misunderstanding of space-time on long distances.

Maybe the biggest problem that methodological reductionism causes lies in the area of quantum gravity, that is, our attempt to resolve the inconsistency between quantum theory and general relativity. Pretty much all existing approaches – string theory, loop quantum gravity, causal dynamical triangulation (check out my video for more) – assume that methodological reductionism is the answer. Therefore, they rely on new hypotheses for short-distance physics. But maybe that’s the wrong way to tackle the problem. The root of our problem may instead be that quantum theory itself must be replaced by a more fundamental theory, one that explains how quantization works in the first place.

Approaches based on methodological reductionism – like grand unified forces, supersymmetry, string theory, preon models, or technicolor – have failed for the past 30 years. This does not mean that there is nothing more to find at short distances. But it does strongly suggest that the next step forward will be a case of theory reduction that does not rely on taking things apart into smaller things.

Sunday, September 29, 2019

Travel Update

The coming days I am in Brussels, for a workshop that I’m not sure where it is or what it is about. It also doesn’t seem to have a website. In any case, I’ll be away, just don’t ask me exactly where or why.

On Oct 15, I am giving a public lecture at the University of Minnesota. On Oct 17, I am giving a colloquium in Cleveland. On Oct 25, I am giving a public lecture in Göttingen (in German). On Oct 29, I’m in Genoa giving a talk at the “Festival della Scienza” to accompany the publication of the Italian translation of my book “Lost in Math.” I don’t speak Italian, so this talk will be in English.

On Nov 5th I’m speaking in Berlin about dark matter. On Nov 6th I am supposed to give a lecture at the Einstein Forum in Potsdam, though that doesn’t seem to be on their website. These two talks in Berlin and Potsdam will also be in German.

On Nov 12th I’m giving a seminar in Oxford, in case Britain still exists at that point. Dec 9th I’m speaking in Wuppertal, details to come, and that will hopefully be the last trip this year.

The next time I’m in the USA will probably be late March 2020. In case you would like me to stop by at your place, please get in touch.

I am always happy to meet readers of my blog, so in case our paths cross, do not hesitate to say hi.

Friday, September 27, 2019

The Trouble with Many Worlds

Today I want to talk about the many worlds interpretation of quantum mechanics and explain why I do not think it is a complete theory.



But first, a brief summary of what the many worlds interpretation says. In quantum mechanics, every system is described by a wave-function from which one calculates the probability of obtaining a specific measurement outcome. Physicists usually take the Greek letter Psi to refer to the wave-function.

From the wave-function you can calculate, for example, that a particle which enters a beam-splitter has a 50% chance of going left and a 50% chance of going right. But – and that’s the important point – once you have measured the particle, you know with 100% probability where it is. This means that you have to update your probability and with it the wave-function. This update is also called the wave-function collapse.

The wave-function collapse, I have to emphasize, is not optional. It is an observational requirement. We never observe a particle that is 50% here and 50% there. That’s just not a thing. If we observe it at all, it’s either here or it isn’t. Speaking of 50% probabilities really makes sense only as long as you are talking about a prediction.

Now, this wave-function collapse is a problem for the following reason. We have an equation that tells us what the wave-function does as long as you do not measure it. It’s called the Schrödinger equation. The Schrödinger equation is a linear equation. What does this mean? It means that if you have two solutions to this equation, and you add them with arbitrary prefactors, then this sum will also be a solution to the Schrödinger equation. Such a sum, btw, is also called a “superposition”. I know that superposition sounds mysterious, but that’s really all it is, it’s a sum with prefactors.

The problem is now that the wave-function collapse is not linear, and therefore it cannot be described by the Schrödinger equation. Here is an easy way to understand this. Suppose you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability. But here’s the thing. If you take a superposition of these two states, you will not get a superposition of probabilities. You will get 100% either on the one side, or on the other.
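As a minimal sketch of this point, here is a two-state toy version of the beam-splitter example; the state, the prefactors, and the sampling are purely illustrative, not a model of any real experiment.

```python
import numpy as np

# A superposition of "left" and "right" with equal prefactors. The
# prediction is 50/50, but every single measurement returns one
# definite outcome, never a superposition of outcomes.
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # sum of the two states, with prefactors
probs = np.abs(psi)**2                   # probabilities from the wave-function

rng = np.random.default_rng(1)
outcomes = rng.choice(["left", "right"], size=10, p=probs)
print(probs)     # [0.5 0.5] -- the prediction
print(outcomes)  # e.g. ['left' 'right' 'right' ...] -- always one or the other
```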

The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.

Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better.

But that’s not how most physicists deal with it. Most sign up for what is known as the Copenhagen interpretation, which basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector, does should be derivable from the theory of its microscopic constituents.

If you believe in the Copenhagen interpretation you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if you could do that, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then, is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done. And without that, the Copenhagen interpretation and its cousins do not solve the measurement problem, they simply refuse to acknowledge that the problem exists in the first place.

The many worlds interpretation, now, supposedly does away with the problem of the quantum measurement, and it does this by just saying there isn’t such a thing as wave-function collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching.

Some people have a problem with the branching because it’s not clear just exactly when or where it should take place, but I do not think this is a serious problem, it’s just a matter of definition. No, the real problem is that after throwing out the measurement postulate, the many worlds interpretation needs another assumption, that brings the measurement problem back.

The reason is this. In the many worlds interpretation, if you set up a detector for a measurement, then the detector will also split into several universes. Therefore, if you just ask “what will the detector measure”, then the answer is “The detector will measure anything that’s possible with probability 1.”

This, of course, is not what we observe. We observe only one measurement outcome. The many worlds people explain this as follows. Of course you are not supposed to calculate the probability for each branch of the detector. Because when we say detector, we don’t mean all detector branches together. You should only evaluate the probability relative to the detector in one specific branch at a time.

That sounds reasonable. Indeed, it is reasonable. It is just as reasonable as the measurement postulate. In fact, it is logically entirely equivalent to the measurement postulate. The measurement postulate says: Update probability at measurement to 100%. The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch. Same thing.

And because it’s the same thing, you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings back in macroscopic terms, like actors who make decisions and so on. In other words, this reference to macroscopic agents is just as much in conflict with reductionism as the Copenhagen interpretation.

And that’s why the many worlds interpretation does not solve the measurement problem, and is therefore just as troubled as all the other interpretations of quantum mechanics. What’s the trouble with the other interpretations? We will talk about this some other time. So stay tuned.

Wednesday, September 18, 2019

Windows Black Screen Nightmare

Folks, I have a warning to utter that is somewhat outside my usual preaching.

For the past couple of days one of my laptops has tried to install Windows updates but didn’t succeed. In the morning I would find an error message that said something went wrong. I ignored this because really I couldn’t care less what problems Microsoft causes itself. But this morning, Windows wouldn’t properly start. All I got was a black screen with a mouse cursor. This is the computer I use for my audio and video processing.

Now, I’ve been a Windows user for 20+ years and I don’t get easily discouraged by spontaneously appearing malfunctions. After some back and forth, I managed to open a command prompt from the task manager to launch the Windows explorer by hand. But this just produced an error message about some obscure .dll file being corrupted. Ok, then, I thought, I’ll run an sfc /scandisk. But this didn’t work; the command just wouldn’t run. At this point I began to feel really bad about this.

I then rebooted the computer a few times with different login options, but got the exact same problem with an administrator login and in the so-called safe mode. The system restore produced an error message, too. Finally, I tried the last thing that came to my mind, a factory reset. Just to have Windows inform me that the command couldn’t be executed.

With that, I had run out of Windows-wisdom and called a helpline. Even the guy on the helpline was impressed by this system’s fuckedupness (if that isn’t a word, it should be) and, after trying a few other things that didn’t work, recommended I wipe the disk clean and reinstall Windows.

So that’s basically how I spent my day, today. Which, btw, happens to be my birthday.

The system is running fine now, though I will have to reinstall all my software. Luckily enough my hard-disk partition seems to have saved all my video and audio files. It doesn’t seem to have been a hardware problem. It also doesn’t smell like a virus. The two IT guys I spoke with said that most likely something went badly wrong with one of those Windows updates. In fact, if you ask Google for Windows Black Screen you’ll find that similar things have happened before after Windows updates. Though, it seems, not quite as severe as this case.

The reason I am telling you this isn’t just to vent (though there’s that), but to ask you that in case you encounter the same problem, please let us know. Especially if you find a solution that doesn’t require reinstalling Windows from scratch.

Update: Managed to finish what I meant to do before my computer became dysfunctional

Monday, September 16, 2019

Why do some scientists believe that our universe is a hologram?



Today, I want to tell you why some scientists believe that our universe is really a 3-dimensional projection of a 2-dimensional space. They call it the “holographic principle” and the key idea is this.

Usually, the number of different things you can imagine happening inside a part of space increases with the volume. Think of a bag of particles. The larger the bag, the more particles, and the more details you need to describe what the particles do. These details that you need to describe what happens are what physicists call the “degrees of freedom,” and the number of these degrees of freedom is proportional to the number of particles, which is proportional to the volume.

At least that’s how it normally works. The holographic principle, in contrast, says that you can describe what happens inside the bag by encoding it on the surface of that bag, at the same resolution.

This may not sound all that remarkable, but it is. Here is why. Take a cube that’s made of smaller cubes, each of which is either black or white. You can think of each small cube as a single bit of information. How much information is in the large cube? Well, that’s the number of the smaller cubes, so 3³ = 27 in this example. Or, if you divide every side of the large cube into N pieces instead of three, that’s N³. But if you instead count the surface elements of the cube, at the same resolution, you have only 6N². This means that for large N, there are many more volume bits than surface bits at the same resolution.
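If you want to check the counting yourself, here is the arithmetic for a few values of N; nothing deep, just the N³ versus 6N² comparison from the paragraph above.

```python
# Volume bits vs. surface bits for a cube divided into N pieces per side.
for N in (3, 10, 100, 1000):
    volume_bits = N**3
    surface_bits = 6 * N**2
    print(f"N={N}: volume bits {volume_bits}, surface bits {surface_bits}, "
          f"ratio {volume_bits / surface_bits:.1f}")
# For N=3 the surface still wins (27 vs 54), but the ratio grows like N/6,
# so for N=1000 there are already about 167 times more volume bits.
```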

The holographic principle now says that even though there are so many fewer surface bits, the surface bits are sufficient to describe everything that happens in the volume. This does not mean that the surface bits correspond to certain regions of volume, it’s somewhat more complicated. It means instead that the surface bits describe certain correlations between the pieces of volume. So if you think again of the particles in the bag, these will not move entirely independently.

And that’s what is called the holographic principle, that really you can encode the events inside any volume on the surface of the volume, at the same resolution.

But, you may say, how come we never notice that particles in a bag are somehow constrained in their freedom? Good question. The reason is that the stuff that we deal with in every-day life, say, that bag of particles, doesn’t remotely make use of the theoretically available degrees of freedom. Our present observations only test situations well below the limit that the holographic principle says should exist.

The limit from the holographic principle really only matters if the degrees of freedom are strongly compressed, as is the case, for example, for stuff that collapses to a black hole. Indeed, the physics of black holes is one of the most important clues that physicists have for the holographic principle. That’s because we know that black holes have an entropy that is proportional to the area of the black hole horizon, not to its volume. That’s the important part: black hole entropy is proportional to the area, not to the volume.

Now, in thermodynamics entropy counts the number of different microscopic configurations that have the same macroscopic appearance. So, the entropy basically counts how much information you could stuff into a macroscopic thing if you kept track of the microscopic details. Therefore, the area-scaling of the black hole entropy tells you that the information content of black holes is bounded by a quantity which is proportional to the horizon area. This relation is the origin of the holographic principle.
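For completeness, the relation in question is the Bekenstein-Hawking entropy, here in its standard textbook form (nothing beyond what you find in any general relativity text):

$$ S_{\rm BH} \;=\; \frac{k_B\, c^3}{4\, G\, \hbar}\, A \;=\; k_B\, \frac{A}{4\,\ell_p^2}\,, \qquad \ell_p = \sqrt{\frac{\hbar G}{c^3}}\,, $$

where A is the area of the horizon and ℓ_p is the Planck length.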

The other important clue for the holographic principle comes from string theory. That’s because string theorists like to apply their mathematical methods in a space-time with a negative cosmological constant, which is called an Anti-de Sitter space. Most of them believe, though it has strictly speaking never been proved, that gravity in an Anti-de Sitter space can be described by a different theory that is entirely located on the boundary of that space. And while this idea came from string theory, one does not actually need the strings for this relation between the volume and the surface to work. More concretely, it uses a limit in which the effects of the strings no longer appear. So the holographic principle seems to be more general than string theory.

I have to add though that we do not live in an Anti-de Sitter space because, for all we currently know, the cosmological constant in our universe is positive. Therefore it’s unclear how much the volume-surface relation in Anti-De Sitter space tells us about the real world. And as far as the black hole entropy is concerned, the mathematics we currently have does not actually tell us that it counts the information that one can stuff into a black hole. It may instead only count the information that one loses by disconnecting the inside and outside of the black hole. This is called the “entanglement entropy”. It scales with the surface for many systems other than black holes and there is nothing particularly holographic about it.

Whether or not you buy the motivations for the holographic principle, you may want to know whether we can test it. The answer is definitely maybe. Earlier this year, Erik Verlinde and Kathryn Zurek proposed that we try to test the holographic principle using gravitational wave interferometers. The idea is that if the universe is holographic, then the fluctuations in the two orthogonal directions that the interferometer arms extend into would be more strongly correlated than one normally expects. However, not everyone agrees that the particular realization of holography which Verlinde and Zurek use is the correct one.

Personally I think that the motivations for the holographic principle are not particularly strong and in any case we’ll not be able to test this hypothesis in the coming centuries. Therefore writing papers about it is a waste of time. But it’s an interesting idea and at least you now know what physicists are talking about when they say the universe is a hologram.

Tuesday, September 10, 2019

Book Review: “Something Deeply Hidden” by Sean Carroll

Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime
Sean Carroll
Dutton, September 10, 2019

Of all the weird ideas that quantum mechanics has to offer, the existence of parallel universes is the weirdest. But with his new book, Sean Carroll wants to convince you that it isn’t weird at all. Instead, he argues, if we only take quantum mechanics seriously enough, then “many worlds” are the logical consequence.

Most remarkably, the many worlds interpretation implies that in every instance you split into many separate you’s, all of which go on to live their own lives. It takes something to convince yourself that this is reality, but if you want to be convinced, Carroll’s book is a good starting point.

“Something Deeply Hidden” is an enjoyable and easy-to-follow introduction to quantum mechanics that will answer your most pressing questions about many worlds, such as how worlds split, what happens with energy conservation, or whether you should worry about the moral standards of all your copies.

The book is also notable for what it does not contain. Carroll avoids going through all the different interpretations of quantum mechanics in detail, and only provides short summaries. Instead, the second half of the book is dedicated to his own recent work, which is about constructing space from quantum entanglement. I do find this a promising line of research and he presents it well.

I was somewhat perplexed that Carroll does not mention what I think are the two biggest objections to the many worlds interpretation, but I will write about this in a separate post.

Like Carroll’s previous books, this one is engaging, well-written, and clearly argued. I can unhesitatingly recommend it to anyone who is interested in the foundations of physics.

[Disclaimer: Free review copy]

Sunday, September 08, 2019

Away Note

I'm attending a conference in Oxford the coming week, so there won't be much happening on this blog. Also, please be warned that comments may be stuck in the moderation queue longer than usual.

Friday, September 06, 2019

The five most promising ways to quantize gravity

Today, I want to tell you what ideas physicists have come up with to quantize gravity. But before I get to that, I want to tell you why it matters.



That we do not have a theory of quantum gravity is currently one of the biggest unsolved problems in the foundations of physics. A lot of people, including many of my colleagues, seem to think that a theory of quantum gravity will remain an academic curiosity without practical relevance.

I think they are wrong. That’s because whatever solves this problem will tell us something about quantum theory, and that’s the theory on which all modern electronic devices run, like the ones on which you are watching this video. Maybe it will take 100 years for quantum gravity to find a practical application, or maybe it will even take a thousand years. But I am sure that understanding nature better will not forever remain a merely academic speculation.

Before I go on, I want to be clear that quantizing gravity by itself is not the problem. We can, and have, quantized gravity the same way that we quantize the other interactions. The problem is that the theory which one gets this way breaks down at high energies, and therefore it cannot be how nature works, fundamentally.

This naïve quantization is called “perturbatively quantized gravity” and it was worked out in the 1960s by Feynman and DeWitt and some others. Perturbatively quantized gravity is today widely believed to be an approximation to whatever is the correct theory.

So really the problem is not just to quantize gravity per se; you want to quantize it and get a theory that does not break down at high energies. Because energies are proportional to frequencies, physicists like to refer to high energies as “the ultraviolet” or just “the UV”. Therefore, the theory of quantum gravity that we look for is said to be “UV complete”.

Now, let me go through the five most popular approaches to quantum gravity.

1. String Theory

The most widely known and still the most popular attempt to get a UV-complete theory of quantum gravity is string theory. The idea of string theory is that instead of talking about particles and quantizing them, you take strings and quantize those. Amazingly enough, this automatically has the consequence that the strings exchange a force which has the same properties as the gravitational force.

This was discovered in the 1970s and at the time, it got physicists very excited. However, in the past decades several problems have appeared in string theory that were patched, which has made the theory increasingly contrived. You can hear all about this in my earlier video. It has never been proved that string theory is indeed UV-complete.

2. Loop Quantum Gravity

Loop Quantum Gravity is often named as the biggest competitor of string theory, but this comparison is somewhat misleading. String theory is not just a theory for quantum gravity, it is also supposed to unify the other interactions. Loop Quantum Gravity on the other hand, is only about quantizing gravity.

It works by discretizing space in terms of a network, and then using integrals around small loops to describe the space, hence the name. In this network, the nodes represent volumes and the links between nodes the areas of the surfaces where the volumes meet.

Loop Quantum Gravity is about as old as string theory. It solves the problem of combining general relativity and quantum mechanics into one consistent theory, but it has remained unclear just how exactly one recovers general relativity in this approach.

3. Asymptotically Safe Gravity

Asymptotic Safety is an idea that goes back to a 1976 paper by Steven Weinberg. It says that a theory which seems to have problems at high energies when quantized naively, may not have a problem after all, it’s just that it’s more complicated to find out what happens at high energies than it seems. Asymptotically Safe Gravity applies the idea of asymptotic safety to gravity in particular.

This approach also solves the problem of quantum gravity. Its major problem is currently that it has not been proved that the theory which one gets this way at high energies still makes sense as a quantum theory.

4. Causal Dynamical Triangulation

The problem with quantizing gravity comes from infinities that appear when particles interact at very short distances. This is why most approaches to quantum gravity rely on removing the short distances by using objects of finite extensions. Loop Quantum Gravity works this way, and so does String Theory.

Causal Dynamical Triangulation also relies on removing short distances. It does so by approximating a curved space with triangles, or their higher-dimensional counterparts. In contrast to the other approaches though, where the finite extension is a postulated, new property of the underlying true nature of space, in Causal Dynamical Triangulation the finite size of the triangles is a mathematical aid, and one eventually takes the limit where this size goes to zero.

The major reason why many people have remained unconvinced of Causal Dynamical Triangulation is that it treats space and time differently, which Einstein taught us not to do.

5. Emergent Gravity

Emergent gravity is not one specific theory, but a class of approaches. These approaches have in common that gravity derives from the collective behavior of a large number of constituents, much like the laws of thermodynamics do. And much like for thermodynamics, in emergent gravity, one does not actually need to know all that much about the exact properties of these constituents to get the dynamical law.

If you think that gravity is really emergent, then quantizing gravity does not make sense. Because, if you think of the analogy to thermodynamics, you also do not obtain a theory for the structure of atoms by quantizing the equations for gases. Therefore, in emergent gravity one does not quantize gravity. One instead removes the inconsistency between gravity and quantum mechanics by saying that quantizing gravity is not the right thing to do.

Which one of these theories is the right one? No one knows. The problem is that it’s really, really hard to find experimental evidence for quantum gravity. But that it’s hard doesn’t mean impossible. I will tell you some other time how we might be able to experimentally test quantum gravity after all. So, stay tuned.

Wednesday, September 04, 2019

What’s up with LIGO?

The Nobel-Prize winning figure.
We don’t know exactly what it shows.
[Image Credits: LIGO]
Almost four years ago, on September 14, 2015, the LIGO collaboration detected gravitational waves for the first time. In 2017, this achievement was awarded the Nobel Prize. Also in that year, the two LIGO interferometers were joined by VIRGO. Since then, a total of three detectors have been on the lookout for space-time’s subtle motions.

By now, the LIGO/VIRGO collaboration has reported dozens of gravitational wave events: black hole mergers (like the first), neutron star mergers, and black hole-neutron star mergers. But not everyone is convinced the signals are really what the collaboration claims they are.

Already in 2017, a group of physicists led by Andrew Jackson in Denmark reported difficulties when they tried to reproduce the signal reconstruction of the first event. In an interview dated November last year, Jackson maintained that the only signal they have been able to reproduce is the first. About the other supposed detections he said: “We can’t see any of those events when we do a blind analysis of the data. Coming from Denmark, I am tempted to say it’s a case of the emperor’s new gravitational waves.”

For most physicists, the GW170817 neutron-star merger – the strongest signal LIGO has seen so far – erased any worries raised by the Danish group’s claims. That’s because this event came with an electromagnetic counterpart that was seen by multiple telescopes, which would demonstrate that LIGO indeed sees something of astrophysical origin and not terrestrial noise. But, as critics have pointed out correctly, the LIGO alert for this event came 40 minutes after NASA’s gamma-ray alert. For this reason, the event cannot be used as an independent confirmation of LIGO’s detection capacity. Furthermore, the interpretation of this signal as a neutron-star merger has also been criticized. And this criticism has been criticized for yet other reasons.

It further fueled the critics’ fire when Michael Brooks reported last year for New Scientist that, according to two members of the collaboration, the Nobel-prize winning figure of LIGO’s seminal detection was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” To this date, the journal that published the paper has refused to comment.

The LIGO collaboration has remained silent on the matter, except for issuing a statement according to which they have “full confidence” in their published results (surprise), and that we are to await further details. Glaciers are now moving faster than this collaboration.

In April this year, LIGO started the third observation run (O3) after an upgrade that increased the detection sensitivity by about 40% over the previous run.  Many physicists hoped the new observations would bring clarity with more neutron-star events that have electromagnetic counterparts, but that hasn’t happened.

Since April, the collaboration has issued 33 alerts for new events, but so far no electromagnetic counterparts have been seen. You can check the complete list for yourself here. 9 of the 33 events have meanwhile been downgraded because they were identified as likely of terrestrial origin, and retracted.

The number of retractions is fairly high partly because the collaboration is still coming to grips with the upgraded detector. This is new scientific territory and the researchers themselves are still learning how to best analyze and interpret the data. A further difficulty is that the alerts must go out quickly in order for telescopes to be swung around and point at the right location in the sky. This does not leave much time for careful analysis.

With independent confirmation that LIGO sees events of astrophysical origin still lacking, critics are having a good time. In a recent article for the German online magazine Heise, Alexander Unzicker – author of a book called “The Higgs Fake” – contemplates whether the first event was a blind injection, i.e., a fake signal. The three people on the blind injection team at the time say it wasn’t them, but Unzicker argues that given our lack of knowledge about the collaboration’s internal proceedings, there might well have been other people able to inject a signal. (You can find an English translation here.)

In the third observation run, the collaboration has so far seen one high-significance binary neutron star candidate (S190425z). But the associated electromagnetic signal for this event has not been found. This may be for various reasons. For example, the analysis of the signal revealed that the event must have been far away, about 4 times farther than the 2017 neutron-star event. This means that any electromagnetic signal would have been fainter by a factor of about 16. In addition, the location in the sky was rather uncertain. So, the electromagnetic signal was plausibly hard to detect.

More recently, on August 14th, the collaboration reported a neutron-star black hole merger. Again the electromagnetic counterpart is missing. In this case they were able to locate the origin to better precision. But they still estimate the source is about 7 times farther away than the 2017 neutron-star event, meaning it would have been fainter by a factor of about 50.
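The faintness factors quoted above are nothing more than the inverse-square law; here is a two-line check, with the distances given relative to the 2017 neutron-star event.

```python
# Electromagnetic flux falls off with the square of the distance.
for times_farther in (4, 7):
    print(f"{times_farther}x farther -> fainter by a factor of about {times_farther**2}")
# 4x -> 16, 7x -> 49 (roughly the factor 50 quoted above)
```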

Still, it is somewhat perplexing that the signal wasn’t seen by any of the telescopes that looked for it. There may have been physical reasons at the source, such as that the neutron star was swallowed in one bite, in which case there wouldn’t be much emitted, or that the system was surrounded by dust, blocking the electromagnetic signal.

A second neutron star-black hole merger candidate, from August 17, was retracted.

And then there are the “glitches”.

LIGO’s “glitches” are detector events of unknown origin whose frequency spectrum does not look like the expected gravitational wave signals. I don’t know exactly how many of those the detector suffers from, but the way they are numbered, by a date and two digits, indicates between 10 and 100 a day. LIGO uses a citizen science project, called “Gravity Spy” to identify glitches. There isn’t one type of glitch, there are many different types of them, with names like “Koi fish,” “whistle,” or “blip.” In the figures below you see a few examples.


Examples for LIGO's detector glitches. [Image Source]

This gives me some headaches, folks. If you do not know why your detector detects something that does not look like what you expect, how can you trust it in the cases where it does see what you expect?

Here is what Andrew Jackson had to say on the matter:

Jackson: “The thing you can conclude if you use a template analysis is [...] that the results are consistent with a black hole merger. But in order to make the stronger statement that it really and truly is a black hole merger you have to rule out anything else that it could be.

“And the characteristic signal here is actually pretty generic. What do they find? They find something where the amplitude increases, where the frequency increases, and then everything dies down eventually. And that describes just about every catastrophic event you can imagine. You see, increasing amplitude, increasing frequency, and then it settles into some new state. So they really were obliged to rule out every terrestrial effects, including seismic effects, and the fact that there was an enormous lightning string in Burkina Faso at exactly the same time [...]”
Interviewer: “Do you think that they failed to rule out all these other possibilities?”

Jackson: “Yes…”
If what Jackson said were correct, this would be highly problematic indeed. But I have not been able to think of any other event that looks remotely like a gravitational wave signal, even leaving aside the detector correlations. Unlike what Jackson states, a typical catastrophic event does not have a frequency increase followed by a ring-down and sudden near-silence.

Think of an earthquake for example. For the most part, earthquakes happen when stresses exceed a critical threshold. The signals don’t have a frequency build-up, and after the quake, there’s a lot of rumbling, often followed by smaller quakes. Just look at the figure below, which shows the surface movement of a typical seismic event.
Example of typical earthquake signal. [Image Source]


It looks nothing like a gravitational wave signal.

For this reason, I don’t share Jackson’s doubts over the origin of the signals that LIGO detects. However, the question whether there are any events of terrestrial origin with similar frequency characteristics arguably requires consideration beyond Sabine scratching her head for half an hour.

So, even though I do not have the same concerns as were raised by the LIGO critics, I must say that I do find it peculiar indeed that there is so little discussion about this issue. A Nobel Prize was handed out, and yet we still do not have confirmation that LIGO’s signals are not of terrestrial origin. In which other discipline is it considered good scientific practice to discard unwelcome yet not understood data, as LIGO does with the glitches? Why do we still not know just exactly what was shown in the figure of the first paper? Where are the electromagnetic counterparts?

LIGO’s third observing run will continue until March 2020. It presently doesn’t look like it will bring the awaited clarity. I certainly hope that the collaboration will make somewhat more of an effort to erase the doubts that still linger around their supposed detections.

Wednesday, August 28, 2019

Solutions to the black hole information paradox

In the early 1970s, Stephen Hawking discovered that black holes can emit radiation. This radiation allows black holes to lose mass and, eventually, to entirely evaporate. This process seems to destroy all the information that is contained in the black hole and therefore contradicts what we know about the laws of nature. This contradiction is what we call the black hole information paradox.

After discovering this problem more than 40 years ago, Hawking spent the rest of his life trying to solve it. He passed away last year, but the problem is still alive and there is no resolution in sight.

Today, I want to tell you what solutions physicists have so far proposed for the black hole information loss problem. If you want to know more about just what exactly the problem is, please read my previous blogpost.

There are hundreds of proposed solutions to the information loss problem, far more than I can possibly list here. But I want to tell you about the five most plausible ones.

1. Remnants.

The calculation that Hawking did to obtain the properties of the black hole radiation makes use of general relativity. But we know that general relativity is only approximately correct. It eventually has to be replaced by a more fundamental theory, which is quantum gravity. The effects of quantum gravity are not relevant near the horizon of large black holes, which is why the approximation that Hawking made is good. But it breaks down eventually, when the black hole has shrunk to a very small size. Then, the space-time curvature at the horizon becomes very strong and quantum gravity must be taken into account.

Now, if quantum gravity becomes important, we really do not know what will happen because we don’t have a theory for quantum gravity. In particular we have no reason to think that the black hole will entirely evaporate to begin with. This opens the possibility that a small remainder is left behind which just sits there forever. Such a black hole remnant could keep all the information about what formed the black hole, and no contradiction results.

2. Information comes out very late.

Instead of just stopping to evaporate when quantum gravity becomes relevant, the black hole could also start to leak information in that final phase. Some estimates indicate that this leakage would take a very long time, which is why this solution is also known as a “quasi-stable remnant”. However, it is not entirely clear just how long it would take. After all, we don’t have a theory of quantum gravity. This second option removes the contradiction for the same reason as the first.

3. Information comes out early.

The first two scenarios are very conservative in that they postulate new effects will appear only when we know that our theories break down. A more speculative idea is that quantum gravity plays a much larger role near the horizon and the radiation carries information all along, it’s just that Hawking’s calculation doesn’t capture it.

Many physicists prefer this solution over the first two for the following reason. Black holes do not only have a temperature, they also have an entropy, called the Bekenstein-Hawking entropy. This entropy is proportional to the area of the black hole. It is often interpreted as counting the number of possible states that the black hole geometry can have in a theory of quantum gravity.

If that is so, then the entropy must shrink when the black hole shrinks and this is not the case for the remnant and the quasi-stable remnant.

So, if you want to interpret the black hole entropy in terms of microscopic states, then the information must begin to come out early, when the black hole is still large. This solution is supported by the idea that we live in a holographic universe, which is currently popular, especially among string theorists.

4. Information is just lost.

Black hole evaporation, it seems, is irreversible, and that irreversibility is inconsistent with the dynamical law of quantum theory. But quantum theory does have its own irreversible process, which is the measurement. So, some physicists argue that we should just accept that black hole evaporation is irreversible and destroys information, not unlike quantum measurements do. This option is not particularly popular because it is hard to include additional irreversible processes into quantum theory without spoiling conservation laws.

5. Black holes don’t exist.

Finally, some physicists have tried to argue that black holes are never created in the first place, in which case no information can get lost in them. To make this work, one has to find a way to prevent a distribution of matter from collapsing to a size that is below its Schwarzschild radius. But since the formation of a black hole horizon can happen at arbitrarily small matter densities, this requires inventing some new physics that violates the equivalence principle, which is the key principle underlying Einstein’s theory of general relativity. This option is a logical possibility, but for most physicists, it’s asking for too much.

Personally, I think that several of the proposed solutions are consistent; this includes options 1-3 above, and other proposals such as those by Horowitz and Maldacena, ‘t Hooft, or Maudlin. And since several mutually incompatible solutions are each internally consistent, mathematics alone cannot tell us which one is correct: this is a problem which just cannot be solved by relying on mathematics alone.

Unfortunately, we cannot experimentally test what is happening when black holes evaporate because the temperature of the radiation is much, much too small to be measurable for the astrophysical black holes we know of. And so, I suspect we will be arguing about this for a long, long time.
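To put a number on this (a standard estimate, independent of which of the above solutions is correct): the temperature of the radiation from a black hole of mass M is

T_H = \frac{\hbar\, c^3}{8\pi\, G\, M\, k_B} \approx 6 \times 10^{-8}\,{\rm K} \times \frac{M_\odot}{M},

so even a solar-mass black hole radiates at some tens of nanokelvin, far below the 2.7 K of the cosmic microwave background.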

Friday, August 23, 2019

How do black holes destroy information and why is that a problem?

Today I want to pick up a question that many of you asked, which is how do black holes destroy information and why is that a problem?


I will not explain here what a black hole is or how we know that black holes exist; for this you can watch my earlier video. Let me instead get right to black hole information loss. To understand the problem, you first need to know the mathematics that we use for our theories in physics. These theories all have two ingredients.

First, there is something called the “state” of the system, which is a complete description of whatever you want to make a prediction for. In a classical theory (that is, one which is not quantized), the state would be, for example, the positions and velocities of particles. To describe the state in a quantum theory, you would instead take the wave-function.

The second ingredient to the current theories is a dynamical law, which is also often called an “evolution equation”. This has nothing to do with Darwinian evolution. Evolution here just means this is an equation which tells you how the state changes from one moment of time to the next. So, if I give you a state at any one time, you can use the evolution equation to compute the state at any other time.

The important thing is that all evolution equations that we know of are time-reversible. This means it never happens that two states that differ at an initial time will become identical states at a later time. If that were so, then at the later time you wouldn’t know where you started from, and that would not be reversible.

A confusion that I frequently encounter is that between time-reversibility and time-reversal invariance. These are not the same. Time-reversible just means you can run a process backwards. Time-reversal invariance, on the other hand, means it will look the same if you run it backwards. In the following, I am talking about time-reversibility, not time-reversal invariance.

Now, all fundamental evolution equations in physics are time-reversible. But this time-reversibility is in many cases entirely theoretical because of entropy increase. If the entropy of a system increases, this means that if you wanted to reverse the time-evolution you would have to arrange the initial state very, very precisely, more precisely than is humanly possible. Therefore, many processes which are time-reversible in principle are for all practical purposes irreversible.

Think of mixing dough. You’ll never be able to unmix it in practice. But if only you could arrange precisely enough the position of each single atom, you could very well unmix the dough. The same goes for burning a piece of paper. Irreversible in practice. But in principle, if you only knew precisely enough the details of the smoke and the ashes, you could reverse it.

The evolution equation of quantum mechanics is called the Schrödinger equation and it is just as time-reversible as the evolution equation of classical physics. Quantum mechanics, however, has an additional equation which describes the measurement process, and this equation is not time-reversible. The reason it’s not time-reversible is that you can have different states that, when measured, give you the same measurement outcome. So, if you only know the outcome of the measurement, you cannot tell what the original state was.
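To write this down explicitly (nothing here beyond the standard textbook formulation): the Schrödinger equation

i\hbar\, \partial_t\, \psi = H\, \psi

has, for a time-independent Hamiltonian H, the solution \psi(t) = U(t)\, \psi(0) with U(t) = e^{-iHt/\hbar}. Because U(t) is unitary, its inverse U(t)^{-1} = U(t)^\dagger always exists, so the initial state can be recovered from the state at any later time. The measurement update, by contrast, has no such inverse.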

Let us come to black holes then. The defining property of a black hole is the horizon, which is a one-way surface. You can only get in, but never get out of a black hole. The horizon does not have substance, it’s really just the name for a location in space. Other than that it’s vacuum.

But quantum theory tells us that vacuum is not nothing. It is full of particle-antiparticle pairs that are constantly created and destroyed. And in general relativity, the notion of a particle itself depends on the observer, much like the passage of time does. For this reason, what looks like vacuum close by the horizon does not look like vacuum far away from the horizon. Which is just another way of saying that black holes emit radiation.

This effect was first derived by Stephen Hawking in the 1970s and the radiation is therefore called Hawking radiation. It’s really important to keep in mind that you get this result by using just the normal quantum theory of matter in the curved space-time of a black hole. You do not need a theory of quantum gravity to derive that black holes radiate.

For our purposes, the relevant property of the radiation is that it is completely thermal. It is entirely determined by the total mass, charge, and spin of the black hole. Besides that, it’s random.
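For the simplest case, an uncharged and non-rotating black hole, the temperature of the radiation depends on nothing but the mass M:

T_H = \frac{\hbar\, c^3}{8\pi\, G\, M\, k_B}.

The spectrum is (up to so-called greybody factors) that of a black body at this temperature, which is another way of saying that the radiation carries no imprint of what originally fell in.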

Now, what happens when the black hole radiates is that it loses mass and shrinks. It shrinks until it’s entirely gone and the radiation is the only thing that is left. But if you only have the radiation, then all you know is the mass, charge, and spin of the black hole. You have no idea what formed the black hole originally or what fell in later. Therefore, black hole evaporation is irreversible because many different initial states will result in the same final state. And this is before you have even made a measurement on the radiation.

Such an irreversible process does not fit together with any of the known evolution laws – and that’s the problem. If you combine gravity with quantum theory, it seems, you get a result that’s inconsistent with quantum theory.

As you have probably noticed, I didn’t say anything about information. That’s because really the reference to information in “black hole information loss” is entirely unnecessary and just causes confusion. The problem of black hole “information loss” really has nothing to do with just exactly what you mean by information. It’s just a term that loosely speaking says you can’t tell from the final state what was the exact initial state.

There have been many, many attempts to solve this problem. Literally thousands of papers have been written about this. I will tell you about the most promising solutions some other time, so stay tuned.

Thursday, August 22, 2019

You will probably not understand this

Hieroglyphs. [Image: Wikipedia Commons.]

Two years ago, I gave a talk at the University of Toronto, at the institute for the history and philosophy of science. At the time, I didn’t think much about it. But in hindsight, it changed my life, at least my work-life.

I spoke about the topic of my first book. It’s a talk I have given dozens of times, and though I adapted my slides for the Toronto audience, there was nothing remarkable about it. The oddity was the format of the talk. I would speak for half an hour. After this, someone else would summarize the topic for 15 minutes. Then there would be 15 minutes discussion.

Fine, I said, sounds like fun.

A few weeks before my visit, I was contacted by a postdoc who said he’d be doing the summary. He asked for my slides, and further reading material, and if there was anything else he should know. I sent him references.

But when his turn came to speak, he did not, as I expected, summarize the argument I had delivered. Instead he reported what he had dug up about my philosophy of science, my attitude towards metaphysics, realism, and what I might mean with “explanation” or “theory” and other philosophically loaded words.

He got it largely right, though I cannot today recall the details. I only recall I didn’t have much to say about what struck me as a peculiar exercise, dedicated not to understanding my research, but to understanding me.

It was awkward, too, because I have always disliked philosophers’ dissection of scientists’ lives. Their obsessive analyses of who Schrödinger, Einstein, or Bohr talked to when, about what, in which period of what marriage, never made a lot of sense to me. It reeked too much of hero-worship, looked too much like post-mortem psychoanalysis, about as helpful for understanding Einstein’s work as cutting his brain into slices.

In the months that followed the Toronto talk, though, I began reading my own blogposts with that postdoc’s interpretation in mind. And I realized that in many cases it was essential information for understanding what I was trying to get across. In the past year, I have therefore made more effort to repeat background, or at least link to previous pieces, to provide that necessary context. Context which – of course! – I thought was obvious. Because certainly we all agree what a theory is. Right?

But having written a public weblog for more than 12 years makes me a comparatively simple subject of study. I have, over the years, provided explanations for just exactly what I mean when I say “scientific method” or “true” or “real”. So at least you could find out if only you wanted to. Not that I expect anyone who comes here for a 1,000 word essay to study an 800,000 word archive. Still, at least that archive exists. The same, however, isn’t the case for most scientists.

I was reminded of this at a recent workshop where I spoke with another woman about her attempts to make sense of one of her senior colleague’s papers.

I don’t want to name names, but it’s someone whose research you’ll be familiar with if you follow the popular science media. His papers are chronically hard to understand. And I know it isn’t just me who struggles, because I heard a lot of people in the field make dismissive comments about his work. On the occasion which the woman told me about, apparently he got frustrated with his own inability to explain himself, resulting in rather aggressive responses to her questions.

He’s not the only one frustrated. I could tell you many stories of renowned physicists who told me, or wrote to me, about their struggles to get people to listen to them. Being white and male, it seems, doesn’t help. Neither do titles, honors, or award-winning popular science books.

And if you look at the ideas they are trying to get across, there’s a pattern.

These are people who have – in some cases over decades – built their own theoretical frameworks, developed personal philosophies of science, invented their own, idiosyncratic way of expressing themselves. Along the way, they have become incomprehensible for anyone else. But they didn’t notice.

Typically, they have written multiple papers circling around a key insight which they never quite manage to bring into focus. They’re constantly trying and constantly failing. And while they usually have done parts of their work with other people, the co-authors are clearly side-characters in a single-fighter story.

So they have their potentially brilliant insights out there, for anyone to see. And yet, no one has the patience to look at their life’s work. No one makes an effort to decipher their code. In brief, no one understands them.

Of course they’re frustrated. Just as frustrated as I am that no one understands me. Not even the people who agree with me. Especially not those, actually. It’s so frustrating.

The issue, I think, is symptomatic of our times, not only in science, but in society at large. Look at any social media site. You will see people going to great lengths explaining themselves just to end up frustrated and – not seldom – aggressive. They are aggressive because no one listens to what they are trying so hard to say. Indeed, all too often, no one even tries. Why bother if misunderstanding is such an easy win? If you cannot explain yourself, that’s your fault. If you do not understand me, that’s also your fault.

And so, what I took away from my Toronto talk is that communication is much more difficult than we usually acknowledge. It takes a lot of patience, both from the sender and the receiver, to accurately decode a message. You need all that context to make sense of someone else’s ideas. I now see why philosophers spend so much time dissecting the lives of other people. And instead of talking so much, I have come to think, I should listen a little more. Who knows, I might finally understand something.

Saturday, August 17, 2019

How we know that Einstein's General Relativity cannot be quite right

Today I want to explain how we know that the way Einstein thought about gravity cannot be correct.



Einstein’s idea was that gravity is not a force, but really an effect caused by the curvature of space and time. Matter curves space-time in its vicinity, and this curvature in return affects how matter moves. This means that, according to Einstein, space and time are responsive. They deform in the presence of matter, and not only matter, but really all types of energy, including pressure and momentum flux and so on.
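This relation between curvature and energy is encoded in Einstein’s field equations (I have omitted the cosmological constant term here):

R_{\mu\nu} - \frac{1}{2}\, R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},

where the left side is built from the curvature of space-time and the right side, the stress-energy tensor T_{\mu\nu}, contains the energy density together with pressure and momentum flux.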

Einstein called his theory “General Relativity” because it’s a generalization of Special Relativity. Both are based on “observer-independence”, that is the idea that the laws of nature should not depend on the motion of an observer. The difference between General Relativity and Special Relativity is that in Special Relativity space-time is flat, like a sheet of paper, while in General Relativity it can be curved, like the often-named rubber sheet.

General Relativity is an extremely well-confirmed theory. It predicts that light rays bend around massive objects, like the sun, which we have observed. The same effect also gives rise to gravitational lensing, which we have also observed. General Relativity further predicts that the universe should expand, which it does. It predicts that time runs more slowly in gravitational potentials, which is correct. General Relativity predicts black holes, and it predicts just how the black hole shadow looks, which is what we have observed. It also predicts gravitational waves, which we have observed. And the list goes on.

So, there is no doubt that General Relativity works extremely well. But we already know that it cannot ultimately be the correct theory for space and time. It is an approximation that works in many circumstances, but fails in others.

We know this because General Relativity does not fit together with another extremely well confirmed theory, that is quantum mechanics. It’s one of these problems that’s easy to explain but extremely difficult to solve.

Here is what goes wrong if you want to combine gravity and quantum mechanics. We know experimentally that particles have some strange quantum properties. They obey the uncertainty principle and they can do things like being in two places at once. Concretely, think about an electron going through a double slit. Quantum mechanics tells us that the particle goes through both slits.

Now, electrons have a mass and masses generate a gravitational pull by bending space-time. This brings up the question: to which place does the gravitational pull go if the electron travels through both slits at the same time? You would expect the gravitational pull to also go to two places at the same time. But this cannot be the case in general relativity, because general relativity is not a quantum theory.

To solve this problem, we have to understand the quantum properties of gravity. We need what physicists call a theory of quantum gravity. And since Einstein taught us that gravity is really about the curvature of space and time, what we need is a theory for the quantum properties of space and time.

There are two other reasons why we know that General Relativity can’t be quite right. Besides the double-slit problem, there is the issue with singularities in General Relativity. Singularities are places where both the curvature and the energy-density of matter become infinitely large; at least that’s what General Relativity predicts. This happens for example inside of black holes and at the beginning of the universe.
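To give a concrete example (the standard textbook case): for an uncharged, non-rotating black hole of mass M, the curvature invariant

R_{\alpha\beta\gamma\delta}\, R^{\alpha\beta\gamma\delta} = \frac{48\, G^2 M^2}{c^4\, r^6}

grows without bound as the radial coordinate r goes to zero, which is the singularity at the center.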

In any other theory that we have, singularities are a sign that the theory breaks down and has to be replaced by a more fundamental theory. And we think the same has to be the case in General Relativity, where the more fundamental theory to replace it is quantum gravity.

The third reason we think gravity must be quantized is the trouble with information loss in black holes. If we combine quantum theory with general relativity but without quantizing gravity, then we find that black holes slowly shrink by emitting radiation. This was first derived by Stephen Hawking in the 1970s and so this black hole radiation is also called Hawking radiation.

Now, it seems that black holes can entirely vanish by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information. So when a black hole is entirely gone and all you have left is the radiation, you do not know what formed the black hole. Such a process is fundamentally irreversible and therefore incompatible with quantum theory. It just does not fit together. A lot of physicists think that to solve this problem we need a theory of quantum gravity.
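How slowly? A common order-of-magnitude estimate, which treats the emission as purely thermal, gives an evaporation time of

\tau \sim \frac{5120\, \pi\, G^2 M^3}{\hbar\, c^4} \approx 10^{67}\,{\rm years} \times \left(\frac{M}{M_\odot}\right)^3,

vastly longer than the present age of the universe for any astrophysical black hole. But however long it takes, the conflict with quantum theory remains.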

So this is how we know that General Relativity must be replaced by a theory of quantum gravity. This problem has been known since the 1930s. Since then, there have been many attempts to solve the problem. I will tell you about this some other time, so don’t forget to subscribe.

Tuesday, August 13, 2019

The Problem with Quantum Measurements


Have you heard that particle physicists want a larger collider because there is supposedly something funny about the Higgs boson? They call it the “Hierarchy Problem”: that there are some 17 orders of magnitude between the Planck mass, which determines the strength of gravity, and the mass of the Higgs boson.
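For the record, the numbers behind this (using the standard, non-reduced Planck mass; with the reduced Planck mass the ratio comes out at about 2 \times 10^{16}):

\frac{M_{\rm Pl}}{m_H} \approx \frac{1.2 \times 10^{19}\,{\rm GeV}}{125\,{\rm GeV}} \approx 10^{17}.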

What is problematic about this, you ask? Nothing. Why do particle physicists think it’s problematic? Because they have been told as students it’s problematic. So now they want $20 billion to solve a problem that doesn’t exist.

Let us then look at an actual problem, that is that we don’t know how a measurement happens in quantum mechanics. The discussion of this problem today happens largely among philosophers; physicists pay pretty much no attention to it. Why not, you ask? Because they have been told as students that the problem doesn’t exist.

But there is a light at the end of the tunnel and the light is… you. Yes, you. Because I know that you are just the right person to both understand and solve the measurement problem. So let’s get you started.

Quantum mechanics is today mostly taught in what is known as the Copenhagen Interpretation and it works as follows. Particles are described by a mathematical object called the “wave-function,” usually denoted Ψ (“Psi”). The wave-function is sometimes sharply peaked and looks much like a particle, sometimes it’s spread out and looks more like a wave. Ψ is basically the embodiment of particle-wave duality.

The wave-function moves according to the Schrödinger equation. This equation (in its general form, the form also used in quantum field theory) is compatible with Einstein’s Special Relativity, and it can be run both forward and backward in time. If I give you complete information about a system at any one time – i.e., if I tell you the “state” of the system – you can use the Schrödinger equation to calculate the state at all earlier and all later times. This makes the Schrödinger equation what we call a “deterministic” equation.

But the Schrödinger equation alone does not predict what we observe. If you use only the Schrödinger equation to calculate what happens when a particle interacts with a detector, you find that the two undergo a process called “decoherence.” Decoherence wipes out quantum-typical behavior, like dead-and-alive cats and such. What you have left then is a probability distribution for a measurement outcome (what is known as a “mixed state”). You have, say, a 50% chance that the particle hits the left side of the screen. And this, importantly, is not a prediction for a collection of particles or repeated measurements. We are talking about one measurement on one particle.
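Schematically, for the left/right example (the labels |L\rangle and |R\rangle are just my shorthand for the particle ending up on the left or right side of the screen): decoherence takes the superposition to a mixed state,

|\psi\rangle = \frac{1}{\sqrt{2}}\left(|L\rangle + |R\rangle\right) \;\longrightarrow\; \rho \approx \frac{1}{2}\, |L\rangle\langle L| + \frac{1}{2}\, |R\rangle\langle R|,

that is, the interference terms are suppressed and what remains is a 50/50 probability distribution, not a definite outcome.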

The moment you measure the particle, however, you know with 100% probability what you have got; in our example you now know which side of the screen the particle is on. This sudden jump of the probability is often referred to as the “collapse” of the wave-function, and the Schrödinger equation does not predict it. The Copenhagen Interpretation, therefore, requires an additional assumption called the “Measurement Postulate.” The Measurement Postulate tells you that the probability of whatever you have measured must be updated to 100%.

Now, the collapse together with the Schrödinger equation describes what we observe. But the detector is of course also made of particles and therefore itself obeys the Schrödinger equation. So if quantum mechanics is fundamental, we should be able to calculate what happens during measurement using the Schrödinger equation alone. We should not need a second postulate.

The measurement problem, then, is that the collapse of the wave-function is incompatible with the Schrödinger equation. It isn’t merely that we do not know how to derive it from the Schrödinger equation, it’s that it actually contradicts the Schrödinger equation. The easiest way to see this is to note that the Schrödinger equation is linear while the measurement process is non-linear. This strongly suggests that the measurement is an effective description of some underlying non-linear process, something we haven’t yet figured out.
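In symbols (this is the standard way of writing the measurement update, with P_k the projector on the k-th possible outcome): a normalized state \psi is updated as

\psi \;\to\; \frac{P_k\, \psi}{\lVert P_k\, \psi \rVert} \quad \text{with probability}\quad \lVert P_k\, \psi \rVert^2,

and this map is not linear in \psi, whereas any process described by the Schrödinger equation is.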

There is another problem. As an instantaneous process, wave-function collapse doesn’t fit together with the speed of light limit in Special Relativity. This is the “spooky action” that irked Einstein so much about quantum mechanics.

This incompatibility with Special Relativity, however, has (by assumption) no observable consequences, so you can try and convince yourself it’s philosophically permissible (and good luck with that). But the problem comes back to haunt you when you ask what happens with the mass (and energy) of a particle when its wave-function collapses. You’ll notice then that the instantaneous jump screws up General Relativity. (And for this quantum gravitational effects shouldn’t play a role, so mumbling “string theory” doesn’t help.) This issue is still unobservable in practice, all right, but now it’s observable in principle.

One way to deal with the measurement problem is to argue that the wave-function does not describe a real object, but only encodes knowledge, and that probabilities should not be interpreted as frequencies of occurrence, but instead as statements of our confidence. This is what’s known as a “Psi-epistemic” interpretation of quantum mechanics, as opposed to the “Psi-ontic” ones in which the wave-function is a real thing.

The trouble with Psi-epistemic interpretations is that the moment you refer to something like “knowledge” you have to tell me what you mean by “knowledge”, who or what has this “knowledge,” and how they obtain “knowledge.” Personally, I would also really like to know what this knowledge is supposedly about, but if you insist I’ll keep my mouth shut. Even so, for all we presently know, “knowledge” is not fundamental, but emergent. Referring to knowledge in the postulates of your theory, therefore, is incompatible with reductionism. This means if you like Psi-epistemic interpretations, you will have to tell me just why and when reductionism breaks down or, alternatively, tell me how to derive Psi from a more fundamental law.

None of the existing interpretations and modifications of quantum mechanics really solve the problem, which I can go through in detail some other time. For now let me just say that either way you turn the pieces, they won’t fit together.

So, forget about particle colliders; grab a pen and get started.

---

Note: If the comment count exceeds 200, you have to click on “Load More” at the bottom of the page to see recent comments. This is also why the link in the recent comment widget does not work. Please do not complain to me about this shitfuckery. Blogger is hosted by Google. Please direct complaints to their forum.

Saturday, August 10, 2019

Book Review: “The Secret Life of Science” by Jeremy Baumberg

The Secret Life of Science: How It Really Works and Why It Matters
Jeremy Baumberg
Princeton University Press (16 Mar. 2018)

The most remarkable thing about science is that most scientists have no idea how it works. With his 2018 book “The Secret Life of Science,” Jeremy Baumberg aims to change this.

The book is thoroughly researched and well-organized. In the first chapter, Baumberg starts by explaining what science is. He goes about this pragmatically and without getting lost in irrelevant philosophical discussions. In this chapter, he also introduces the terms “simplifier science” and “constructor science” to replace “basic” and “applied” research.

Baumberg suggests thinking of science as an ecosystem with multiple species and flows of nutrients that need to be balanced, an analogy that he comes back to throughout the book. This first chapter is followed by a brief chapter about the motivations for doing science and its societal relevance.

In the next chapters, Baumberg then focuses on various aspects of a scientist’s work-life and explains how these are organized in practice: scientific publishing, information sharing in the community (conferences and so on), science communication (PR, science journalism), funding, and hiring. In this, Baumberg makes an effort to distinguish between research in academia and in business, and in many cases he also points out national differences.

The book finishes with a chapter about the future of science and Baumberg’s own suggestions for improvement. Except for the very last chapter, the author does not draw attention to existing problems with the current organization of science, though these will be obvious to most readers.

Baumberg is a physicist by training and, according to the book flap, works in nanotechnology and photonics. Like most physicists who do not work in particle physics, he is well aware that particle physics is in deep trouble. He writes:
“Knowing the mind of god” and “The theory of everything” are brands currently attached to particle physics. Yet they have become less powerful with time, attracting an air of liability, perhaps reaching that of a “toxic brand.” That the science involved now finds it hard to shake off precisely this layer of values attached to them shows how sticky they are.
The book contains a lot of concrete information, for example about salaries and grant success rates. I have generally found Baumberg’s analysis to be spot on, for example when he writes “Science spending seems to rise until it becomes noticed and then stops.” Or:
Because this competition [for research grants] is so well defined as a clear race for money it can become the raison d’etre for scientists’ existence, rather than just what is needed to develop resources to actually do science.
On counting citations, he likewise remarks aptly:
“[The h-index rewards] wide collaborators rather than lone specialists, rewards fields that cite more, and rewards those who always stay at the trendy edge of all research.”
Unfortunately, I have to add that the book is not written in a particularly engaging way. Some of the chapters could have been shorter, Baumberg overuses the metaphor of the ecosystem, and the figures are not helpful. To give you an idea why I say this, I challenge you to make sense of this illustration:


In summary, Baumberg’s is a useful book though it’s somewhat tedious to read. Nevertheless, I think everyone who wants to understand how science works in reality should read it. It’s time we get over the idea that science somehow magically self-corrects. Science is the way we organize knowledge discovery, and its success depends on us paying attention to how it is organized.