Sunday, September 29, 2019

The coming days I will be in Brussels for a workshop, though I’m not sure where exactly it takes place or what it is about. It also doesn’t seem to have a website. In any case, I’ll be away; just don’t ask me exactly where or why.
On Oct 15, I am giving a public lecture at the University of Minnesota. On Oct 17, I am giving a colloquium in Cleveland. On Oct 25, I am giving a public lecture in Göttingen (in German). On Oct 29, I’m in Genoa giving a talk at the “Festival della Scienza” to accompany the publication of the Italian translation of my book “Lost in Math.” I don’t speak Italian, so this talk will be in English.
On Nov 5th I’m speaking in Berlin about dark matter. On Nov 6th I am supposed to give a lecture at the Einstein Forum in Potsdam, though that doesn’t seem to be on their website. These two talks in Berlin and Potsdam will also be in German.
On Nov 12th I’m giving a seminar in Oxford, in case Britain still exists at that point. Dec 9th I’m speaking in Wuppertal, details to come, and that will hopefully be the last trip this year.
The next time I’m in the USA will probably be late March 2020. If you are interested in having me stop by at your place, please get in touch.
I am always happy to meet readers of my blog, so in case our paths cross, do not hesitate to say hi.
Friday, September 27, 2019
The Trouble with Many Worlds
Today I want to talk about the many worlds interpretation of quantum mechanics and explain why I do not think it is a complete theory.
But first, a brief summary of what the many worlds interpretation says. In quantum mechanics, every system is described by a wave-function from which one calculates the probability of obtaining a specific measurement outcome. Physicists usually take the Greek letter Psi to refer to the wave-function.
From the wave-function you can calculate, for example, that a particle which enters a beam-splitter has a 50% chance of going left and a 50% chance of going right. But – and that’s the important point – once you have measured the particle, you know with 100% probability where it is. This means that you have to update your probability and with it the wave-function. This update is also called the wave-function collapse.
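In case you want to see how simple this calculation is, here is a minimal sketch in Python; the two-component state and its amplitudes are of course made up for illustration:

```python
import numpy as np

# Toy wave-function for the beam-splitter: equal amplitudes for
# "left" and "right"; the 1/sqrt(2) normalizes the state.
psi = np.array([1.0, 1.0]) / np.sqrt(2)  # [amplitude_left, amplitude_right]

# Born rule: the probability of an outcome is the squared magnitude
# of the corresponding amplitude.
probabilities = np.abs(psi) ** 2
print(probabilities)  # [0.5 0.5] -- 50% left, 50% right
```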
The wave-function collapse, I have to emphasize, is not optional. It is an observational requirement. We never observe a particle that is 50% here and 50% there. That’s just not a thing. If we observe it at all, it’s either here or it isn’t. Speaking of 50% probabilities really makes sense only as long as you are talking about a prediction.
Now, this wave-function collapse is a problem for the following reason. We have an equation that tells us what the wave-function does as long as you do not measure it. It’s called the Schrödinger equation. The Schrödinger equation is a linear equation. What does this mean? It means that if you have two solutions to this equation, and you add them with arbitrary prefactors, then this sum will also be a solution to the Schrödinger equation. Such a sum, btw, is also called a “superposition”. I know that superposition sounds mysterious, but that’s really all it is, it’s a sum with prefactors.
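If you want to see linearity at work, here is a small Python sketch. The two-state Hamiltonian and the evolution time are made up; I use the identity exp(-i sigma_x t) = cos(t)·1 - i sin(t)·sigma_x, so the evolution is exact:

```python
import numpy as np

# Toy two-state system (hbar = 1). For H = sigma_x, the solution of the
# Schroedinger equation is the unitary U = cos(t)*1 - i*sin(t)*sigma_x.
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
t = 0.7  # an arbitrary evolution time
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * sigma_x

psi1 = np.array([1.0, 0.0])  # one solution's initial state
psi2 = np.array([0.0, 1.0])  # another solution's initial state
a, b = 0.6, 0.8j             # arbitrary prefactors

# Evolving the superposition equals the superposition of the evolutions:
lhs = U @ (a * psi1 + b * psi2)
rhs = a * (U @ psi1) + b * (U @ psi2)
print(np.allclose(lhs, rhs))  # True -- that is what linearity means
```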
The problem is now that the wave-function collapse is not linear, and therefore it cannot be described by the Schrödinger equation. Here is an easy way to understand this. Suppose you have a wave-function for a particle that goes right with 100% probability. Then you will measure it right with 100% probability. No mystery here. Likewise, if you have a particle that just goes left, you will measure it left with 100% probability. But here’s the thing. If you take a superposition of these two states, you will not get a superposition of probabilities. You will get 100% either on the one side, or on the other.
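Here is the same point as a toy calculation. The measure() function below is not a physical model of a detector, it merely implements the update rule, but that is enough to see that the rule is not linear:

```python
import numpy as np

rng = np.random.default_rng(42)

def measure(psi):
    """Toy collapse: pick one outcome with Born-rule probability,
    then return the updated, definite state."""
    p = np.abs(psi) ** 2
    outcome = rng.choice(len(psi), p=p / p.sum())
    collapsed = np.zeros_like(psi)
    collapsed[outcome] = 1.0
    return collapsed

left, right = np.array([1.0, 0.0]), np.array([0.0, 1.0])

print(measure(left))                         # always [1, 0]
print(measure(right))                        # always [0, 1]
print(measure((left + right) / np.sqrt(2)))  # [1, 0] OR [0, 1], never a mix

# measure(a*psi1 + b*psi2) is not a*measure(psi1) + b*measure(psi2),
# so no linear equation -- the Schroedinger equation included -- can
# reproduce this update.
```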
The measurement process therefore is not only an additional assumption that quantum mechanics needs to reproduce what we observe. It is actually incompatible with the Schrödinger equation.
Now, the most obvious way to deal with that is to say, well, the measurement process is something complicated that we do not yet understand, and the wave-function collapse is a placeholder that we use until we figure out something better.
But that’s not how most physicists deal with it. Most sign up for what is known as the Copenhagen interpretation, which basically says you’re not supposed to ask what happens during measurement. In this interpretation, quantum mechanics is merely a mathematical machinery that makes predictions, and that’s that. The problem with Copenhagen – and with all similar interpretations – is that they require you to give up the idea that what a macroscopic object, like a detector, does should be derivable from the theory of its microscopic constituents.
If you believe in the Copenhagen interpretation, you have to buy that what the detector does just cannot be derived from the behavior of its microscopic constituents. Because if you could do that, you would not need a second equation besides the Schrödinger equation. That you need this second equation, then, is incompatible with reductionism. It is possible that this is correct, but then you have to explain just where reductionism breaks down and why, which no one has done. And without that, the Copenhagen interpretation and its cousins do not solve the measurement problem; they simply refuse to acknowledge that the problem exists in the first place.
The many worlds interpretation, now, supposedly does away with the problem of the quantum measurement, and it does this by just saying there isn’t such a thing as wave-function collapse. Instead, many worlds people say, every time you make a measurement, the universe splits into several parallel worlds, one for each possible measurement outcome. This universe splitting is also sometimes called branching.
Some people have a problem with the branching because it’s not clear just exactly when or where it should take place, but I do not think this is a serious problem; it’s just a matter of definition. No, the real problem is that after throwing out the measurement postulate, the many worlds interpretation needs another assumption, one that brings the measurement problem back.
The reason is this. In the many worlds interpretation, if you set up a detector for a measurement, then the detector will also split into several universes. Therefore, if you just ask “what will the detector measure”, then the answer is “The detector will measure anything that’s possible with probability 1.”
This, of course, is not what we observe. We observe only one measurement outcome. The many worlds people explain this as follows. Of course you are not supposed to calculate the probability for each branch of the detector. Because when we say detector, we don’t mean all detector branches together. You should only evaluate the probability relative to the detector in one specific branch at a time.
That sounds reasonable. Indeed, it is reasonable. It is just as reasonable as the measurement postulate. In fact, it is logically entirely equivalent to the measurement postulate. The measurement postulate says: Update probability at measurement to 100%. The detector definition in many worlds says: The “Detector” is by definition only the thing in one branch. Now evaluate probabilities relative to this, which gives you 100% in each branch. Same thing.
And because it’s the same thing, you already know that you cannot derive this detector definition from the Schrödinger equation. It’s not possible. What the many worlds people are now trying instead is to derive this postulate from rational choice theory. But of course that brings macroscopic terms back in, like actors who make decisions and so on. In other words, this reference to agents and their knowledge is just as much in conflict with reductionism as the Copenhagen interpretation is.
And that’s why the many worlds interpretation does not solve the measurement problem, and therefore it is just as troubled as all the other interpretations of quantum mechanics. What’s the trouble with the other interpretations? We will talk about this some other time. So stay tuned.
Wednesday, September 18, 2019
Windows Black Screen Nightmare
Folks, I have a warning to utter that is somewhat outside my usual preaching.
For the past couple of days, one of my laptops had been trying to install Windows updates, without success. In the morning I would find an error message saying that something went wrong. I ignored this because, really, I couldn’t care less what problems Microsoft causes itself. But this morning, Windows wouldn’t properly start. All I got was a black screen with a mouse cursor. This is the computer I use for my audio and video processing.
Now, I’ve been a Windows user for 20+ years and I don’t get easily discouraged by spontaneously appearing malfunctions. After some back and forth, I managed to open a command prompt from the task manager to launch the Windows Explorer by hand. But this just produced an error message about some obscure .dll file being corrupted. Ok then, I thought, I’ll run sfc /scannow. But this didn’t work; the command just wouldn’t run. At this point I began to feel really bad about this.
I then rebooted the computer a few times with different login options, but got the exact same problem with an administrator login and in the so-called safe mode. The system restore produced an error message, too. Finally, I tried the last thing that came to my mind, a factory reset. Just to have Windows inform me that the command couldn’t be executed.
With that, I had run out of Windows-wisdom and called a helpline. Even the guy on the helpline was impressed by this system’s fuckedupness (if that isn’t a word, it should be) and, after trying a few other things that didn’t work, recommended I wipe the disk clean and reinstall Windows.
So that’s basically how I spent my day, today. Which, btw, happens to be my birthday.
The system is running fine now, though I will have to reinstall all my software. Luckily enough my hard-disk partition seems to have saved all my video and audio files. It doesn’t seem to have been a hardware problem. It also doesn’t smell like a virus. The two IT guys I spoke with said that most likely something went badly wrong with one of those Windows updates. In fact, if you ask Google for Windows Black Screen you’ll find that similar things have happened before after Windows updates. Though, it seems, not quite as severe as this case.
The reason I am telling you this isn’t just to vent (though there’s that), but to ask that, in case you encounter the same problem, you please let us know. Especially if you find a solution that doesn’t require reinstalling Windows from scratch.
Update: Managed to finish what I meant to do before my computer became dysfunctional
Monday, September 16, 2019
Why do some scientists believe that our universe is a hologram?
Today, I want to tell you why some scientists believe that our universe is really a 3-dimensional projection of a 2-dimensional space. They call it the “holographic principle” and the key idea is this.
Usually, the number of different things you can imagine happening inside a part of space increases with the volume. Think of a bag of particles. The larger the bag, the more particles, and the more details you need to describe what the particles do. These details that you need to describe what happens are what physicists call the “degrees of freedom,” and the number of these degrees of freedom is proportional to the number of particles, which is proportional to the volume.
At least that’s how it normally works. The holographic principle, in contrast, says that you can describe what happens inside the bag by encoding it on the surface of that bag, at the same resolution.
This may not sound all that remarkable, but it is. Here is why. Take a cube that’s made of smaller cubes, each of which is either black or white. You can think of each small cube as a single bit of information. How much information is in the large cube? Well, that’s the number of the smaller cubes, so 3³ = 27 in this example. Or, if you divide every side of the large cube into N pieces instead of three, that’s N³. But if you instead count the surface elements of the cube, at the same resolution, you have only 6N². This means that for large N, there are many more volume bits than surface bits at the same resolution.
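You can check the counting yourself; a few lines of Python (the values of N are arbitrary) make the gap obvious:

```python
# Volume bits vs. surface bits for a cube divided into N^3 cells.
for N in (3, 10, 100, 1000):
    volume_bits = N ** 3       # one bit per small cube
    surface_bits = 6 * N ** 2  # one bit per surface element, same resolution
    print(N, volume_bits, surface_bits)
# For N = 3 the counts are comparable (27 vs 54), but the ratio grows
# like N/6, so for large N the volume bits vastly outnumber the surface bits.
```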
The holographic principle now says that even though there are so many fewer surface bits, the surface bits are sufficient to describe everything that happens in the volume. This does not mean that the surface bits correspond to certain regions of volume, it’s somewhat more complicated. It means instead that the surface bits describe certain correlations between the pieces of volume. So if you think again of the particles in the bag, these will not move entirely independently.
And that’s what is called the holographic principle, that really you can encode the events inside any volume on the surface of the volume, at the same resolution.
But, you may say, how come we never notice that particles in a bag are somehow constrained in their freedom? Good question. The reason is that the stuff that we deal with in every-day life, say, that bag of particles, doesn’t remotely make use of the theoretically available degrees of freedom. Our present observations only test situations well below the limit that the holographic principle says should exist.
The limit from the holographic principle really only matters if the degrees of freedom are strongly compressed, as is the case, for example, for stuff that collapses to a black hole. Indeed, the physics of black holes is one of the most important clues that physicists have for the holographic principle. That’s because we know that black holes have an entropy that is proportional to the area of the black hole horizon, not to its volume. That’s the important part: black hole entropy is proportional to the area, not to the volume.
Now, in thermodynamics entropy counts the number of different microscopic configurations that have the same macroscopic appearance. So, the entropy basically counts how much information you could stuff into a macroscopic thing if you kept track of the microscopic details. Therefore, the area-scaling of the black hole entropy tells you that the information content of black holes is bounded by a quantity which is proportional to the horizon area. This relation is the origin of the holographic principle.
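For the record, the formula behind this is the Bekenstein-Hawking entropy, in which the horizon area A enters linearly, while everything else is a constant of nature:

```latex
S_{\mathrm{BH}} = \frac{k_B\, c^3}{4\, G\, \hbar}\, A
```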
The other important clue for the holographic principle comes from string theory. That’s because string theorists like to apply their mathematical methods in a space-time with a negative cosmological constant, which is called an Anti-de Sitter space. Most of them believe, though it has strictly speaking never been proved, that gravity in an Anti-de Sitter space can be described by a different theory that is entirely located on the boundary of that space. And while this idea came from string theory, one does not actually need the strings for this relation between the volume and the surface to work. More concretely, it uses a limit in which the effects of the strings no longer appear. So the holographic principle seems to be more general than string theory.
I have to add though that we do not live in an Anti-de Sitter space because, for all we currently know, the cosmological constant in our universe is positive. Therefore it’s unclear how much the volume-surface relation in Anti-de Sitter space tells us about the real world. And as far as the black hole entropy is concerned, the mathematics we currently have does not actually tell us that it counts the information that one can stuff into a black hole. It may instead only count the information that one loses by disconnecting the inside and outside of the black hole. This is called the “entanglement entropy”. It scales with the surface for many systems other than black holes, and there is nothing particularly holographic about it.
Whether or not you buy the motivations for the holographic principle, you may want to know whether we can test it. The answer is definitely maybe. Earlier this year, Erik Verlinde and Kathryn Zurek proposed that we try to test the holographic principle using gravitational wave interferometers. The idea is that if the universe is holographic, then the fluctuations in the two orthogonal directions that the interferometer arms extend into would be more strongly correlated than one normally expects. However, not everyone agrees that the particular realization of holography which Verlinde and Zurek use is the correct one.
Personally I think that the motivations for the holographic principle are not particularly strong and in any case we’ll not be able to test this hypothesis in the coming centuries. Therefore writing papers about it is a waste of time. But it’s an interesting idea and at least you now know what physicists are talking about when they say the universe is a hologram.
Tuesday, September 10, 2019
Book Review: “Something Deeply Hidden” by Sean Carroll
Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime
Sean Carroll
Dutton, September 10, 2019
Of all the weird ideas that quantum mechanics has to offer, the existence of parallel universes is the weirdest. But with his new book, Sean Carroll wants to convince you that it isn’t weird at all. Instead, he argues, if we only take quantum mechanics seriously enough, then “many worlds” are the logical consequence.
Most remarkably, the many worlds interpretation implies that in every instance you split into many separate you’s, all of which go on to live their own lives. It takes something to convince yourself that this is reality, but if you want to be convinced, Carroll’s book is a good starting point.
“Something Deeply Hidden” is an enjoyable and easy-to-follow introduction to quantum mechanics that will answer your most pressing questions about many worlds, such as how worlds split, what happens with energy conservation, or whether you should worry about the moral standards of all your copies.
The book is also notable for what it does not contain. Carroll avoids going through all the different interpretations of quantum mechanics in detail, and only provides short summaries. Instead, the second half of the book is dedicated to his own recent work, which is about constructing space from quantum entanglement. I do find this a promising line of research and he presents it well.
I was somewhat perplexed that Carroll does not mention what I think are the two biggest objections to the many worlds interpretation, but I will write about this in a separate post.
Like Carroll’s previous books, this one is engaging, well-written, and clearly argued. I can unhesitatingly recommend it to anyone who is interested in the foundations of physics.
[Disclaimer: Free review copy]
Sunday, September 08, 2019
Away Note
I'm attending a conference in Oxford the coming week, so there won't be much happening on this blog. Also, please be warned that comments may be stuck in the moderation queue longer than usual.
Friday, September 06, 2019
The five most promising ways to quantize gravity
Today, I want to tell you what ideas physicists have come up with to quantize gravity. But before I get to that, I want to tell you why it matters.
That we do not have a theory of quantum gravity is currently one of the biggest unsolved problems in the foundations of physics. A lot of people, including many of my colleagues, seem to think that a theory of quantum gravity will remain an academic curiosity without practical relevance.
I think they are wrong. That’s because whatever solves this problem will tell us something about quantum theory, and that’s the theory on which all modern electronic devices run, like the ones on which you are watching this video. Maybe it will take 100 years for quantum gravity to find a practical application, or maybe it will even take 1000 years. But I am sure that understanding nature better will not forever remain merely academic speculation.
Before I go on, I want to be clear that quantizing gravity by itself is not the problem. We can, and have, quantized gravity the same way that we quantize the other interactions. The problem is that the theory which one gets this way breaks down at high energies, and therefore it cannot be how nature works, fundamentally.
This naïve quantization is called “perturbatively quantized gravity” and it was worked out in the 1960s by Feynman and DeWitt and some others. Perturbatively quantized gravity is today widely believed to be an approximation to whatever is the correct theory.
So really the problem is not just to quantize gravity per se; you want to quantize it and get a theory that does not break down at high energies. Because energies are proportional to frequencies, physicists like to refer to high energies as “the ultraviolet” or just “the UV”. Therefore, the theory of quantum gravity that we look for is said to be “UV complete”.
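The reason for the name is Planck’s relation between energy and frequency. For light, higher energy means higher frequency and shorter wavelength, which takes you toward and beyond the ultraviolet end of the spectrum:

```latex
E = \hbar\,\omega = \frac{2\pi\hbar\, c}{\lambda}
```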
Now, let me go through the five most popular approaches to quantum gravity.
1. String Theory
The most widely known and still the most popular attempt to get a UV-complete theory of quantum gravity is string theory. The idea of string theory is that instead of talking about particles and quantizing them, you take strings and quantize those. Amazingly enough, this automatically has the consequence that the strings exchange a force which has the same properties as the gravitational force.
This was discovered in the 1970s and at the time, it got physicists very excited. However, in the past decades several problems have appeared in string theory that were patched, which has made the theory increasingly contrived. You can hear all about this in my earlier video. It has never been proved that string theory is indeed UV-complete.
2. Loop Quantum Gravity
Loop Quantum Gravity is often named as the biggest competitor of string theory, but this comparison is somewhat misleading. String theory is not just a theory for quantum gravity, it is also supposed to unify the other interactions. Loop Quantum Gravity on the other hand, is only about quantizing gravity.
It works by discretizing space in terms of a network, and then using integrals around small loops to describe the space, hence the name. In this network, the nodes represent volumes and the links between nodes the areas of the surfaces where the volumes meet.
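To make the bookkeeping concrete, here is a toy version of such a network in Python. The numbers are made up, and this is not the actual spin-network formalism, where volumes and areas come from the discrete spectra of operators; it merely shows which quantity sits where:

```python
# Toy network: nodes carry volumes, links carry the areas of the
# surfaces where the adjacent volumes meet. All numbers are made up.
volumes = {"A": 1.0, "B": 2.5, "C": 0.7}    # node -> volume
areas = {("A", "B"): 0.4, ("B", "C"): 1.1}  # link -> shared surface area

def neighbors(node):
    """Nodes whose volume shares a surface with `node`."""
    return [m for pair in areas for m in pair if node in pair and m != node]

print(neighbors("B"))         # ['A', 'C']
print(sum(volumes.values()))  # total volume of this little space: 4.2
```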
Loop Quantum Gravity is about as old as string theory. It solves the problem of combining general relativity and quantum mechanics into one consistent theory, but it has remained unclear just how one recovers general relativity in this approach.
3. Asymptotically Safe Gravity
Asymptotic Safety is an idea that goes back to a 1976 paper by Steven Weinberg. It says that a theory which seems to have problems at high energies when quantized naively, may not have a problem after all, it’s just that it’s more complicated to find out what happens at high energies than it seems. Asymptotically Safe Gravity applies the idea of asymptotic safety to gravity in particular.
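The idea is easiest to see in a toy model. The flow below is not the actual gravitational calculation, and the coefficients are made up, but it shows how a coupling whose naive growth looks problematic can instead run into a finite fixed point at high energies:

```python
# Toy renormalization group flow: dg/dt = a*g - b*g^2,
# where t is the logarithm of the energy scale.
a, b = 2.0, 3.0  # made-up coefficients
g = 0.01         # weak coupling at low energies
dt = 0.01
for _ in range(5000):  # flow toward higher energies
    g += (a * g - b * g ** 2) * dt
print(g, a / b)  # g settles at the fixed point g* = a/b instead of diverging
```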
This approach also solves the problem of quantum gravity. Its major problem is currently that it has not been proved that the theory which one gets this way at high energies still makes sense as a quantum theory.
4. Causal Dynamical Triangulation
The problem with quantizing gravity comes from infinities that appear when particles interact at very short distances. This is why most approaches to quantum gravity rely on removing the short distances by using objects of finite extensions. Loop Quantum Gravity works this way, and so does String Theory.
Causal Dynamical Triangulation also relies on removing short distances. It does so by approximating a curved space with triangles, or their higher-dimensional counterparts. In contrast to the other approaches though, where the finite extension is a postulated, new property of the underlying true nature of space, in Causal Dynamical Triangulation the finite size of the triangles is a mathematical aid, and one eventually takes the limit where this size goes to zero.
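As an analogy for this limit-taking (it is not the actual CDT construction, just the idea of a discretization whose size is sent to zero), think of approximating a circle by straight edges:

```python
import math

# Perimeter of a regular N-gon inscribed in a circle of radius r.
# As N grows, the edge size goes to zero and the smooth answer emerges.
r = 1.0
for N in (6, 60, 600, 6000):
    perimeter = 2 * N * r * math.sin(math.pi / N)
    print(N, perimeter)  # -> 2*pi*r = 6.2831... in the limit
```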
The major reason why many people have remained unconvinced of Causal Dynamical Triangulation is that it treats space and time differently, which Einstein taught us not to do.
5. Emergent Gravity
Emergent gravity is not one specific theory, but a class of approaches. These approaches have in common that gravity derives from the collective behavior of a large number of constituents, much like the laws of thermodynamics do. And much like for thermodynamics, in emergent gravity, one does not actually need to know all that much about the exact properties of these constituents to get the dynamical law.
If you think that gravity is really emergent, then quantizing gravity does not make sense. Because, if you think of the analogy to thermodynamics, you also do not obtain a theory for the structure of atoms by quantizing the equations for gases. Therefore, in emergent gravity one does not quantize gravity. One instead removes the inconsistency between gravity and quantum mechanics by saying that quantizing gravity is not the right thing to do.
Which one of these theories is the right one? No one knows. The problem is that it’s really, really hard to find experimental evidence for quantum gravity. But that it’s hard doesn’t mean impossible. I will tell you some other time how we might be able to experimentally test quantum gravity after all. So, stay tuned.
Wednesday, September 04, 2019
What’s up with LIGO?
The Nobel-Prize winning figure. We don’t know exactly what it shows. [Image Credits: LIGO]
By now, the LIGO/VIRGO collaboration has reported dozens of gravitational wave events: black hole mergers (like the first), neutron star mergers, and black hole-neutron star mergers. But not everyone is convinced the signals are really what the collaboration claims they are.
Already in 2017, a group of physicists around Andrew Jackson in Denmark reported difficulties when they tried to reproduce the signal reconstruction of the first event. In an interview dated November last year, Jackson maintained that the only signal they have been able to reproduce is the first. About the other supposed detections he said: “We can’t see any of those events when we do a blind analysis of the data. Coming from Denmark, I am tempted to say it’s a case of the emperor’s new gravitational waves.”
For most physicists, the GW170817 neutron-star merger – the strongest signal LIGO has seen so far – erased any worries raised by the Danish group’s claims. That’s because this event came with an electromagnetic counterpart that was seen by multiple telescopes, which would demonstrate that LIGO indeed sees something of astrophysical origin and not terrestrial noise. But, as critics have correctly pointed out, the LIGO alert for this event came 40 minutes after NASA’s gamma-ray alert. For this reason, the event cannot be used as an independent confirmation of LIGO’s detection capacity. Furthermore, the interpretation of this signal as a neutron-star merger has also been criticized. And this criticism has been criticized for yet other reasons.
It further fueled the critics’ fire when Michael Brooks reported last year for New Scientist that, according to two members of the collaboration, the Nobel-prize winning figure of LIGO’s seminal detection was “not found using analysis algorithms” but partly done “by eye” and “hand-tuned for pedagogical purposes.” To date, the journal that published the paper has refused to comment.
The LIGO collaboration has remained silent on the matter, except for issuing a statement according to which they have “full confidence” in their published results (surprise), and that we are to await further details. Glaciers are now moving faster than this collaboration.
In April this year, LIGO started the third observation run (O3) after an upgrade that increased the detection sensitivity by about 40% over the previous run. Many physicists hoped the new observations would bring clarity with more neutron-star events that have electromagnetic counterparts, but that hasn’t happened.
Since April, the collaboration has issued 33 alerts for new events, but so far no electromagnetic counterparts have been seen. You can check the complete list for yourself here. 9 of the 33 events have meanwhile been downgraded because they were identified as likely of terrestrial origin, and have been retracted.
The number of retractions is fairly high partly because the collaboration is still coming to grips with the upgraded detector. This is new scientific territory and the researchers themselves are still learning how to best analyze and interpret the data. A further difficulty is that the alerts must go out quickly in order for telescopes to be swung around and point at the right location in the sky. This does not leave much time for careful analysis.
With independent confirmation that LIGO sees events of astrophysical origin still lacking, critics are having a good time. In a recent article for the German online magazine Heise, Alexander Unzicker – author of a book called “The Higgs Fake” – contemplates whether the first event was a blind injection, i.e., a fake signal. The three people on the blind injection team at the time say it wasn’t them, but Unzicker argues that, given our lack of knowledge about the collaboration’s internal proceedings, there might well have been other people able to inject a signal. (You can find an English translation here.)
In the third observation run, the collaboration has so far seen one high-significance binary neutron star candidate (S190425z). But the associated electromagnetic signal for this event has not been found. This may be for various reasons. For example, the analysis of the signal revealed that the event must have been far away, about 4 times farther than the 2017 neutron-star event. This means that any electromagnetic signal would have been fainter by a factor of about 16. In addition, the location in the sky was rather uncertain. So, the electromagnetic signal was plausibly hard to detect.
More recently, on August 14th, the collaboration reported a neutron-star black hole merger. Again the electromagnetic counterpart is missing. In this case they were able to locate the origin to better precision. But they still estimate the source is about 7 times farther away than the 2017 neutron-star event, meaning it would have been fainter by a factor of about 50.
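The dimming factors are just the inverse-square law for the flux:

```latex
F \propto \frac{1}{d^2}\,, \qquad 4^2 = 16\,, \qquad 7^2 = 49 \approx 50
```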
Still, it is somewhat perplexing that the signal wasn’t seen by any of the telescopes that looked for it. There may have been physical reasons at the source: the neutron star may have been swallowed in one bite, in which case there wouldn’t be much emitted, or the system may have been surrounded by dust that blocked the electromagnetic signal.
A second neutron star-black hole merger on August 17 was retracted.
And then there are the “glitches”.
LIGO’s “glitches” are detector events of unknown origin whose frequency spectrum does not look like the expected gravitational wave signals. I don’t know exactly how many of those the detector suffers from, but the way they are numbered, by a date and two digits, indicates between 10 and 100 a day. LIGO uses a citizen science project, called “Gravity Spy” to identify glitches. There isn’t one type of glitch, there are many different types of them, with names like “Koi fish,” “whistle,” or “blip.” In the figures below you see a few examples.
Examples of LIGO's detector glitches. [Image Source]
This gives me some headaches, folks. If you do not know why your detector detects something that does not look like what you expect, how can you trust it in the cases where it does see what you expect?
Here is what Andrew Jackson had to say on the matter:
Jackson: “The thing you can conclude if you use a template analysis is [...] that the results are consistent with a black hole merger. But in order to make the stronger statement that it really and truly is a black hole merger you have to rule out anything else that it could be.

“And the characteristic signal here is actually pretty generic. What do they find? They find something where the amplitude increases, where the frequency increases, and then everything dies down eventually. And that describes just about every catastrophic event you can imagine. You see, increasing amplitude, increasing frequency, and then it settles into some new state. So they really were obliged to rule out every terrestrial effect, including seismic effects, and the fact that there was an enormous lightning strike in Burkina Faso at exactly the same time [...]”

Interviewer: “Do you think that they failed to rule out all these other possibilities?”

Jackson: “Yes…”

If what Jackson said were correct, this would be highly problematic indeed. But I have not been able to think of any other event that looks remotely like a gravitational wave signal, even leaving aside the detector correlations. Unlike what Jackson states, a typical catastrophic event does not have a frequency increase followed by a ring-down and sudden near-silence.
Think of an earthquake, for example. For the most part, earthquakes happen when stresses exceed a critical threshold. The signal doesn’t have a frequency build-up, and after the quake there’s a lot of rumbling, often followed by smaller quakes. Just look at the figure below, which shows the surface movement of a typical seismic event.
Example of a typical earthquake signal. [Image Source]
It looks nothing like that of a gravitational wave signal.
For this reason, I don’t share Jackson’s doubts over the origin of the signals that LIGO detects. However, the question whether there are any events of terrestrial origin with similar frequency characteristics arguably requires consideration beyond Sabine scratching her head for half an hour.
So, even though I do not have the same concerns as were raised by the LIGO critics, I must say that I do find it peculiar indeed there is so little discussion about this issue. A Nobel Prize was handed out, and yet we still do not have confirmation that LIGO’s signals are not of terrestrial origin. In which other discipline is it considered good scientific practice to discard unwelcome yet not understood data, like LIGO does with the glitches? Why do we still not know just exactly what was shown in the figure of the first paper? Where are the electromagnetic counterparts?
LIGO’s third observing run will continue until March 2020. It presently doesn’t look like it will bring the awaited clarity. I certainly hope that the collaboration will make somewhat more of an effort to erase the doubts that still linger around their supposed detections.