Sunday, April 28, 2019

My name is Cassandra [I've been singing again]

Here is what I did on Easter. For this video I recruited my mom and my younger brother to hold the camera. Not a masterpiece of camera work, but definitely better than my green screen.

Thursday, April 25, 2019

Yes, scientific theories have to be falsifiable. Why do we even have to talk about this?

The task of scientists is to find useful descriptions for our observations. By useful I mean that the descriptions are either predictive or explain data already collected. An explanation is anything that is simpler than just storing the data itself.
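
To make concrete what “simpler than just storing the data itself” means, here is a toy sketch (my own illustration, nothing rigorous): a two-parameter law that reproduces a thousand measurements is a far shorter description than the list of measurements, and so counts as an explanation in the above sense.

```python
# Toy illustration of "explanation as compression": storing a short
# rule plus its parameters instead of the raw data it reproduces.

def law(x):
    return 3 * x + 2  # the two-parameter "law" behind the data

data = [law(x) for x in range(1000)]  # a thousand "measurements"

raw_size = len(repr(data))      # characters to store the data verbatim
rule_size = len("y = 3*x + 2")  # characters to store the rule instead

print(raw_size, rule_size)  # the rule is orders of magnitude shorter
```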

A hypothesis that is not falsifiable through observation is optional. You may believe in it or not. Such hypotheses belong in the realm of religion. That much is clear, and I doubt any scientist would disagree. But trouble starts when we begin to ask just what it means for a theory to be falsifiable. One runs into the following issues:

1. How long should it take to make a falsifiable prediction (or postdiction) with a hypothesis?

If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don’t have enough time or people to do the work.

My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding scientists forever because falsifiable predictions become an inconvenient career risk.

2. How practical should a falsification be?

Some hypotheses are falsifiable in principle, but not in practice. Others could be tested in practice, but doing so would take so long that for all practical purposes they are unfalsifiable. String theory is the obvious example. It is testable, but no experiment in the foreseeable future will be able to probe its predictions. A similar consideration goes for the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still be collecting data when the heat death of the universe chokes your ambitious research agenda.

Personally, I think predictions for observations that are not presently measurable are worthwhile, because you never know what future technology will enable. However, it makes no sense to work out the details of futuristic detectors. That belongs in the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.

3. What even counts as a hypothesis?

In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, eg symmetries or functional relations. But neither theories nor principles by themselves lead to predictions.

To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on.

In some circumstances, one can arrive at predictions that are “model-independent”, which are the most useful predictions you can have. I scare-quote “model-independent” because such predictions are not really independent of the model, they merely hold for a large number of models. Violations of Bell’s inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein’s equivalence principle is another such example.
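
To make the Bell example concrete, here is a short numerical sketch (my own illustration, using textbook numbers): any local hidden-variable model obeys the CHSH version of Bell’s inequality, |S| ≤ 2, while quantum mechanics for the singlet state reaches 2√2 – which is how one experiment can rule out a whole class of models at once.

```python
import math

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b').
# Local hidden-variable models obey |S| <= 2; the quantum singlet
# state, with correlation E(a,b) = -cos(a - b), violates this bound.

def E(a, b):
    return -math.cos(a - b)  # quantum prediction for the singlet state

a, a2 = 0.0, math.pi / 2              # Alice's two measurement angles
b, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two measurement angles

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # 2*sqrt(2) ~ 2.828 > 2: no local model can reproduce this
```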

Troubles begin if scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed because that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, eg, by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories.

This is not currently happening because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.

4. Falsifiability is necessary but not sufficient.

A lot of hypotheses are falsifiable but just plain nonsense. Arguing that a hypothesis must be science just because you can test it is typical crackpot thinking. I previously wrote about this here.

5. Not all aspects of a hypothesis must be falsifiable.

It can happen that a hypothesis which makes some falsifiable predictions also leads to unanswerable questions. An often-named example is that certain models of eternal inflation seem to imply that, besides our own universe, there exist an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics: If you take the theory at face value, then the question of what a particle does before you measure it is not answerable.

There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong in scientific research. Scientists should leave such topics to philosophers or priests.

This post was brought on by Matthew Francis’ article “Falsifiability and Physics” for Symmetry Magazine.

Monday, April 22, 2019

Comments on “Quantum Gravity in the Lab” in Scientific American

The April issue of Scientific American has an article by Tim Folger titled “Quantum Gravity in the Lab”. It is mainly about the experiment by the Aspelmeyer group in Vienna, which I wrote about three years ago here. Folger also briefly mentions the two other proposals by Bose et al and Marletto and Vedral. (For context and references see my recent blogpost about the possibility to experimentally test quantum gravity, or this 2016 summary.)

I am happy that experimental tests for quantum gravity finally receive some media coverage. I worked on the theory-side of this topic for more than a decade, and I am still perplexed by how little attention the possibility to measure quantum gravitational effects gets, both in the media and in the community. Even a lot of physicists think this research area doesn’t exist.

I find this perplexing because if you think one cannot test quantum gravity, then developing a theory for it is not science, hence it should not be pursued in physics departments. So we have a situation in which a lot of physicists think it is fine to spend money on developing a theory they believe is not testable, or whose testability they believe is not worth studying. Excuse me for not being able to make sense of this. Freeman Dyson at least is consistent in that he both believes quantum gravity isn’t testable and isn’t worth the time.

In any case, the SciAm piece contains some bummers that I want to sort out, in the hope of preventing them from spreading.

First, the “In Brief” box says:
“Physicists are hoping that by making extremely precise measurements of gravity in small-scale set-ups – experiments that will fit onto a tabletop in a laboratory – they can detect effects from the intersection of gravity and quantum theory.”
But not everything that is at the intersection of “gravity” and “quantum” is actually quantum gravity. A typical example is black hole evaporation. It is caused by quantum effects in a gravitational field, but the gravitational field itself does not need to have quantum properties for the effect to occur. Black hole evaporation, therefore, is not quantum gravitational in origin.

The same goes, eg, for this lovely experiment that measured the quantization of momenta of neutrons in the gravitational potential. Again this is clearly a quantum effect that arises from the gravitational interaction, so it is at “the intersection of gravity and quantum theory.” But since the quantum properties of gravity itself do not play a role for the neutron-experiment, it does not test quantum gravity.

A more helpful statement would therefore be “experiments that can probe the quantum behavior of gravity”. Or, since gravity is really caused by the curvature of space-time, one could also write “experiments that can probe the quantum properties of space and time.”

Another misleading statement in the “In Brief” box is:
“The experiments aim to show whether gravity becomes quantized – that is, divisible into discrete bits – on extremely tiny scales.”
This is either badly phrased, or wrong, or both. I don’t know what it means for gravity to be divisible into discrete bits. The best interpretation I can think of is that the sentence refers to gravitons, particles which should be associated with a quantized gravitational field. The experiments that the article describes, however, do not attempt to measure gravitons.

In the text we further find the sentence:
“A quantum space-time… would become coarse-grained, like a digital photograph that becomes pixelated when magnified.”
But we have no reason to think there is any pixelation going on for space-time. Moreover, the phrase “magnified” refers to distance scales. Quantum gravity, however, does not become relevant just below a certain distance. Indeed, if it became relevant below a certain distance, that would be in conflict with the symmetries of special relativity; this distance-dependence is hence a highly problematic conjecture.

What happens in the known approaches to quantum gravity instead is that quantum uncertainties increase when space-time curvature becomes large. It is not so much a pixelation that is going on, but an inevitable blurring that stems from the quantum fluctuations of space and time.

And then there is this quote from Lajos Diosi:
“[We] know for sure that there will be a total scrambling of the spacetime continuity if you go down to the Planck scale.”
No one knows “for sure” what happens at the Planck scale, so please don’t take this at face value.
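
For orientation, here is where the Planck scale sits; a quick sketch from the fundamental constants (standard values, nothing specific to the article):

```python
import math

# Planck length and energy from hbar, G, and c (SI units).
hbar = 1.054571817e-34  # J*s
G = 6.67430e-11         # m^3 / (kg * s^2)
c = 2.99792458e8        # m/s

l_planck = math.sqrt(hbar * G / c**3)        # ~1.6e-35 m
E_planck_joule = math.sqrt(hbar * c**5 / G)  # ~2e9 J
E_planck_gev = E_planck_joule / 1.602176634e-10  # 1 GeV = 1.602e-10 J

print(l_planck, E_planck_gev)  # ~1.6e-35 m, ~1.2e19 GeV
```

Whatever scrambling does or does not happen there, it sits some sixteen orders of magnitude beyond the energies that colliders reach.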

Besides this, the article is a recommendable read.

If you want to get a sense of what other ways there are to experimentally test quantum gravity (or don’t have a subscription) read my 2017 Nautilus essay “What Quantum Gravity Needs is more Experiment.”

Monday, April 15, 2019

How Heroes Hurt Science

Einstein, Superhero
Tell a story. That’s the number one advice from and to science communicators, throughout centuries and all over the globe.

We can recite the seven archetypes forward and backward, we call at least three people hoping they disagree with each other, we ask open-ended questions to hear what went on backstage, and we trade around the always same anecdotes: Wilson checking the telescope for pigeon shit (later executing the poor birds), Guth drawing a box around a key equation of inflation, Geim and Novoselov making graphene with sticky tape, Feynman’s bongos, Einstein’s compass, Newton’s apple, Archimedes jumping out of the tub yelling “Eureka”.

Raise your hand if one of those was news to you. You there, the only one who raised a hand, let me guess: You didn’t know the fate of the pigeons. Now you know. And will read about it at least 20 more times from here on. Mea culpa.

Sure, stories make science news relatable, entertaining, and, most of all, sellable. They also make them longer. Personally I care very little about people-tales. I know I’m a heartless fuck. But really I would rather just get the research briefing without Peter’s wife and Mary’s move and whatever happened that day in the elevator. By paragraph four I’ll confuse the two guys with Asian names and have forgotten why I was reading your piece to begin with. Help!

Then again, maybe that’s just me. Certainly, if it sells, it means someone buys it. Or at least clicks on an ad every now and then.

I used to think the story-telling is all right, just a spoonful of sugar to make the science news go down. Recently, though, I have come to worry that such hero-tales not only fail to communicate how science works, they actually stand in the way of progress.

Here is how science really works. At any given time, we have a pool of available knowledge and we have a group of researchers tasked with expanding this knowledge. They use the existing knowledge to either build experiments and collect new data, or to draw new conclusions. If the newly created knowledge results in a better understanding of nature, we speak of a breakthrough. This information will from thereon be available to other researchers, who can build on it, and so on.

What does it take to make a breakthrough? It takes people who have access to the relevant knowledge, have the education to comprehend that knowledge, and bring the skill and motivation to make a contribution to science. Intelligence is one of the key factors that allows a scientist to draw conclusions from existing knowledge, but you can’t draw conclusions from information you don’t have. So, besides having the brain, you also need to have access to knowledge and must decide what to pay attention to.

Science tends to be well-populated by intelligent people, which means that there is always some number of people on the research-front banging their head on the same problem. Who among these smart folks is first to make a breakthrough then often depends on timing and luck.

Evidence for this is in the published literature, where new insights are often put forward almost simultaneously by different people.

Like Newton and Leibniz developing calculus in parallel. Like Schrödinger and Heisenberg developing two different routes to quantum mechanics at the same time. Like Bardeen, Cooper, and Schrieffer explaining superconductivity just when Bogolyubov also did. Like the Englert-Brout-Higgs-Guralnik-Hagen-Kibble mechanism, invented three times within a matter of years.

These people had access to the same information, and they had the required processing power, so they were able to make the next step. If they hadn’t done it then and there, someone else would have.  

And then there are the rediscoveries. As Wikipedia just told me, it’s all been said before: “The concept of multiple discovery opposes a traditional view—the `heroic theory’ of invention and discovery.”

Okay, so I am terribly unoriginal, snort. But yeah, I am opposing the traditional view. And I am opposing it for a reason. Focusing on individual achievement, while forgetting about the scientific infrastructure that enabled it, has downsides.

One downside of our hero-stories is that we fail to acknowledge the big role privilege still plays for access to higher education. Too many people on the planet will never contribute to science, not because they don’t have the intellectual ability, but because they do not have the opportunity to acquire the necessary knowledge.

The other downside is that we underestimate the relevance of communication and information-sharing in scientific communities. If you believe that genius alone will do, then you are likely to believe it doesn’t matter how smart people arrange their interaction. If you, on the other hand, take into account that even big-brained scientists must somehow make decisions about what information to have a look at, you understand that it matters a lot how scientists organize their work-life.

I think that currently many scientists, especially in the foundations of physics, fail to pay attention to how they exchange and gather information, processes that can easily be skewed by social biases. What’s a scientist, after all? A scientist is someone who collects information, chews on it, and outputs new information. But, as they say, garbage in, garbage out.

The major hurdle on the way to progress is presently not a shortage of smart people. It’s smart people who don’t want to realize that they are but wheels in the machinery.

Saturday, April 13, 2019

The LHC has broken physics, according to this philosopher

The Large Hadron Collider (LHC) has not found any new, fundamental particles besides the Higgs-boson. This is a slap in the face of thousands of particle physicists who were confident the mega-collider should see more: Other new particles, additional dimensions of space, tiny black holes, dark matter, new symmetries, or something else entirely. None of these fancy things have shown up.

The reason that particle physicists were confident the LHC should see more than the Higgs was that their best current theory – the Standard Model of particle physics – is not “natural.”

The argument roughly goes like this: The Higgs-boson is different from all the other particles in the standard model. The other particles have either spin 1 or spin ½, but the Higgs has spin 0. Because of this, quantum fluctuations make a much larger contribution to the mass of the Higgs. This large contribution is ugly, therefore particle physicists think that once collision energies are high enough to create the Higgs, something else must appear along with the Higgs so that the theory is beautiful again.
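
To put a number on “much larger contribution” (a rough sketch, my own order-of-magnitude illustration): if the quantum contributions are of the order of the Planck energy, the bare parameter must cancel them to about one part in 10³⁴ to leave the measured Higgs mass of 125 GeV.

```python
# Order-of-magnitude sketch of the "naturalness" complaint:
# quantum contributions to the Higgs mass squared are expected to be
# of order Lambda^2 (the cutoff scale), while the observed value is
# (125 GeV)^2 -- so the bare parameter must cancel them to this ratio.

m_higgs = 125.0  # GeV, measured Higgs mass
cutoff = 1.2e19  # GeV, Planck energy taken as the (assumed) cutoff

tuning = (m_higgs / cutoff) ** 2
print(tuning)  # ~1e-34: the size of the required cancellation
```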

However, that the Standard Model is not technically natural has no practical relevance because these supposedly troublesome quantum contributions are not observable. The lack of naturalness is therefore an entirely philosophical conundrum, and the whole debate about it stems from a fundamental confusion about what requires explanation in science to begin with. If you can’t observe it, there’s nothing to explain. If it’s ugly, that’s too bad. Why are we even talking about this?

It has remained a mystery to me why anyone buys naturalness arguments. You may say, well, that’s just Dr Hossenfelder’s opinion and other people have other opinions. But it is no longer just an opinion: The LHC predictions based on naturalness arguments did, as a matter of fact, not work. So you might think that particle physicists would finally stop using them.

But particle physicists have been mostly quiet about the evident failure of their predictions. Except for a 2017 essay by Gian Francesco Giudice, head of the CERN theory group and one of the strongest advocates of naturalness arguments, no one wants to admit something went badly wrong. Maybe they just hope no one will notice their blunder, never mind that it’s all over the published literature. Some are busy inventing new types of naturalness according to which new particles would show up only at the next larger collider.

This may be an unfortunate situation for particle physicists, but it’s a wonderful situation for philosophers, who love nothing quite as much as going on about someone else’s problem. A few weeks ago, we discussed Porter Williams’s analysis of the naturalness crisis in particle physics. Today I want to draw your attention to a preprint by the philosopher David Wallace, titled “Naturalness and Emergence.”

About the failure of particle physicists’ predictions, Wallace writes:
“I argue that any such naturalness failure threatens to undermine the entire structure of our understanding of inter-theoretic reduction, and so risks a much larger crisis in physics than is sometimes suggested.”
With “inter-theoretic reduction” he refers to the derivation of a low-energy theory from a high-energy theory, which is the root of reductionism. In the text he further writes:
“If Naturalness arguments just brutely fail in particle physics and cosmology, then there can be no reliable methodological argument for assuming them elsewhere (say, in statistical mechanics).”
Sadly enough, Wallace makes the very mistake that I have pointed out repeatedly, that I frequently hear particle physicists make, and that also Porter Williams spells out in his recent paper. Wallace conflates the dynamical degrees of freedom of a theory (say, the momenta of particles) with the parameters of a theory (eg, the strength of the interactions). For the former you can collect statistical data and meaningfully talk about probabilities. For the latter you cannot: We have only one universe with one particular set of constants. Speaking about these constants’ probabilities is scientifically meaningless.

In his paper, Wallace argues that in statistical mechanics one needs to assume an initial probability distribution that is “simple” in a way that he admits is not well-defined. It is correct, of course, that in practice you start with such an assumption to make a calculation that results in a prediction. However, this being science, you can justify this assumption by the mere fact that the predictions work.

The situation is entirely different, however, for distributions over parameters. Not only are these distributions not observable by construction, worse, starting with the supposedly simple assumptions for this distribution demonstrably works badly. That it works badly is, after all, the whole reason we are having this discussion to begin with.

Ironically enough, Wallace, in his paper, complains that I am conflating different types of naturalness in my paper, making me think he cannot have read what I wrote too carefully. The exact opposite is the case: I am not commenting on the naturalness of statistical distributions of degrees of freedom because that’s just a different story. As I say explicitly, naturalness arguments are well-defined if you can sample the probability distribution. If, on the other hand, you have no way to ever determine a distribution, they are ill-defined.

While I am at it, a note for the folks with their Bayesian arguments. Look, if you think that a supersymmetric extension of the standard model is more predictive than the standard model itself, then maybe try to figure out just what it predicts. Besides things we have not seen, I mean. And keep in mind that if you assume that the mass of the Higgs-boson is in the energy range of the LHC, you cannot count that as a prediction. Good luck.

Back to Wallace. His argument is logically equivalent to crossing the Atlantic in a 747 while claiming that heavier-than-air machines can’t fly. The standard model has so far successfully explained every single piece of data that’s come out of the LHC. If you are using a criterion for theory-evaluation according to which that’s not a good theory, the problem is with you, not with the theory.

Saturday, April 06, 2019

Away Note/Travel Update/Interna

I will be away next week, giving three talks at Brookhaven National Lab on Tuesday, April 9, and one at Yale April 10.

Next upcoming lectures are Stuttgart on April 29 (in German), Barcelona on May 23, Mainz on June 11, (probably) Groningen on June 22, and Hamburg on July 5.

I may or may not attend this year’s Lindau Nobel Laureate meeting, and have a hard time making up my mind about whether or not to go to SciFoo, because, considering the status of my joints, it’s a choice between physical and financial pain, and frankly I’d rather choose neither.

In any case, I am always happy to meet readers of my blog, so if our paths should cross, please do not hesitate to say Hi.

Here follows the obligatory note about slow traffic on this blog while I am traveling: I have comment moderation on. This means comments will only appear after I have manually approved them. Sometimes I sleep, sometimes I am offline, sometimes I have better things to do than checking my inbox. As a result, comments may sit in the queue for a while. Normally it does not take longer than 24 hours.

Let me also mention that I no longer read comments posted as “Unknown”. I have no idea what Google is doing, but the “Unknown” comments are today what anonymous comments were a decade ago. The vast majority of those are spam, and most of the rest are insults and other ill-informed nonsense. I do not have time for this and therefore now forward them collectively straight to junk.

There have also recently been some people who tried to post random strings of letters or, in some cases, single letters. I am assuming this was to test whether the comment feature works. I will not approve such comments, so this is not a useful method to figure out what is going on.

Friday, April 05, 2019

Does the world need a larger particle collider? [video]

Another attempt to explain myself. Transcript below.

I know you all wanted me to say something about the question of whether or not to build a new particle collider, one that is larger than even the Large Hadron Collider. And your wish is my command, so here we go.

There seem to be a lot of people who think I’m an enemy of particle physics. Most of those people happen to be particle physicists. This is silly, of course. I am not against particle physics, or against particle colliders in general. In fact, until recently, I was in favor of building a larger collider.

Here is what I wrote a year ago:
“I too would like to see a next larger particle collider, but not if it takes lies to trick taxpayers into giving us money. More is at stake here than the employment of some thousand particle physicists. If we tolerate fabricated arguments in the scientific literature just because the conclusions suit us, we demonstrate how easy it is for scientists to cheat.

Fact is, we presently have no evidence – neither experimental nor theoretical evidence – that a next larger collider would find new particles.”
And still in December I wrote:
“I am not opposed to building a larger collider. Particle colliders that reach higher energies than we probed before are the cleanest and most reliable way to search for new physics. But I am strongly opposed to misleading the public about the prospects of such costly experiments. We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.”
Before I tell you why I changed my mind, I want to tell you what’s great about high energy particle physics, why I worked in that field for some while, and why, until recently I was in favor of building that larger collider.

Particle colliders are really the logical continuation of microscopes – you build them to see small structures. But think of a light microscope: The higher the energy of the light, the shorter its wavelength, and the shorter its wavelength, the better the resolution of small structures. This is why you get better resolution with microscopes that use X-rays than with microscopes that use visible light.

Now, quantum mechanics tells us that particles have wavelengths too, and for particles higher energy also means better resolution. Physicists started this with electron microscopes, and it continues today with particle colliders.

So that’s why we build particle colliders that reach higher and higher energies, because that allows us to test what happens at shorter and shorter distances. The Large Hadron Collider currently probes distances of about one thousandth of the diameter of a proton.
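
The scaling in the last few paragraphs can be made explicit with λ = ħc/E (a sketch; treat the numbers as order-of-magnitude estimates, since in a proton collider only a fraction of the beam energy goes into each parton collision):

```python
# Wavelength (resolvable distance) as a function of collision energy,
# using lambda = hbar*c / E with hbar*c ~ 0.1973 GeV*fm.
# Caveat: only part of the beam energy goes into each parton collision,
# so this is an order-of-magnitude estimate.

HBAR_C = 0.1973        # GeV * femtometer
PROTON_DIAMETER = 1.7  # femtometer, approximate

def wavelength_fm(energy_gev):
    return HBAR_C / energy_gev

# At ~100 GeV per parton one gets roughly the "one thousandth" of a
# proton diameter quoted above; ~1000 GeV resolves ten times finer.
for E in (1.0, 100.0, 1000.0):
    print(E, wavelength_fm(E) / PROTON_DIAMETER)
```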

Now, if probing short distances is what you want to do, then particle colliders are presently the cleanest way to do this. There are other ways, but they have disadvantages.

The first alternative is cosmic rays. Cosmic rays are particles that come from outer space at high speed, which means if they hit atoms in the upper atmosphere, that collision happens at high energy.

Most of the cosmic rays are at low energies, but every once in a while one comes at high energy. The highest collision energies still slightly exceed those tested at the LHC.

But it is difficult to learn much from cosmic rays. To begin with, the highly energetic ones are rare, and happen far less frequently than the collisions you can make with an accelerator. Few collisions mean poor statistics, which means limited information.

And there are other problems, for example we don’t know what the incoming particle is to begin with. Astrophysicists currently think it’s a combination of protons and light atomic nuclei, but really they don’t know for sure.

Another problem with cosmic rays is that the collisions do not happen in vacuum. Instead, the first collision creates a lot of secondary particles which collide again with other atoms and so on. This gives rise to what is known as a cosmic ray shower. This whole process has to be modeled on a computer and that again brings in uncertainty.
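
A classic toy version of such shower modeling is the Heitler model (a sketch only – real air-shower codes are far more involved): every particle splits in two after a fixed splitting length, halving the energy each time, until the energy per particle drops below a critical value.

```python
# Heitler toy model of a cosmic-ray shower: the number of particles
# doubles at each splitting step while the energy per particle halves,
# until it falls below a critical energy and multiplication stops.
# This only illustrates why the shower size tracks the primary energy.

def heitler_max_particles(primary_energy_gev, critical_energy_gev=0.085):
    n, energy_per_particle = 1, primary_energy_gev
    while energy_per_particle / 2 >= critical_energy_gev:
        n *= 2
        energy_per_particle /= 2
    return n

# Shower size at maximum grows linearly with the primary energy:
print(heitler_max_particles(1e2), heitler_max_particles(1e5))
```

The 0.085 GeV default is roughly the electromagnetic critical energy in air; the point is the scaling, not the exact number.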

Then the final problem with cosmic rays is that you cannot cover the whole surface of the planet to catch the particles that were created. So you cover some part of it and extrapolate from there. Again, this adds uncertainty to the results.

With a particle collider, in contrast, you know what is colliding and you can build detectors directly around the collision region. That will still not capture all particles that are created, especially not in the beam direction, but it’s much better than with cosmic rays.

The other alternative to highly energetic particle collisions are high precision measurements at low energies.

You can use high precision instead of high energy because, according to the current theories, everything that happens at high energies also influences what happens at low energies. It is just that this influence is very small.
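
Just how small is easy to estimate with the standard effective-field-theory scaling (a sketch, my own illustration): effects of new physics at a high scale Λ enter low-energy observables suppressed by powers of E/Λ.

```python
# Typical effective-field-theory scaling: new physics at a high scale
# Lambda shifts a low-energy observable at energy E by corrections of
# order (E/Lambda)^n, shown here for the leading n = 2 case.

def relative_correction(energy_gev, new_physics_scale_gev, power=2):
    return (energy_gev / new_physics_scale_gev) ** power

# A 1 GeV precision experiment probing a 10 TeV scale sees shifts of
# order 1e-8 -- detectable only with very high precision.
print(relative_correction(1.0, 1e4))
```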

Now, high precision measurements at low energies are a very powerful method to understand short distance physics. But interpreting the results puts a high burden on theoretical physicists. That’s because you have to be very, well, precise, to make those calculations, and making calculations at low energies is difficult.

This also means that if you should find a discrepancy between theory and experiment, then you will end up debating whether it’s an actual discrepancy or whether it’s a mistake in the calculation.

A good example for this is the magnetic moment of the muon. We have known since the 1960s that the measured value does not fit with the prediction, and this tension has not gone away. Yet it has remained unclear whether this means the theories are missing something, or whether the calculations just are not good enough.

With particle colliders, on the other hand, if there is a new particle to create above a certain energy, you have it in your face. The results are just easier to interpret.

So, now that I have covered why particle colliders are a good way to probe short distances, let me explain why I am not in favor of building a larger one right now. It’s simply because we currently have no reason to think there is anything new to discover at the next shorter distances, not until we get to energies a billion times higher than what even the next larger collider would reach. That, and the fact that the cost of a next larger particle collider is high compared to the typical expenses for experiments in the foundations of physics.

So a larger particle collider presently has a high cost but a low estimated benefit. It is just not a good way to invest money. Instead, there are other research directions in the foundations of physics which are more promising. Dark matter is a good example.

One of the key motivations for building a larger particle collider that particle physicists like to bring up is that we still do not know what dark matter is made of.

But we are not even sure that dark matter is made of particles. And if it’s a particle, we do not know what mass it has or how it interacts. If it’s a light particle, you would not look for it with a bigger collider. So really it makes more sense to collect more information about the astrophysical situation first. That means concretely better telescopes, better sky coverage, better redshift resolution, better frequency coverage, and so on.

Other research directions in the foundations that are more promising are those where we have problems in the theories that do require solutions; this is currently the case in quantum gravity and in the foundations of quantum mechanics. I can tell you something about this some other time.

But really my intention here is not to advocate a particular alternative. I merely think that physicists should have an honest debate about the evident lack of progress in the foundations of physics and what to do about it.

Since the theoretical development of the standard model was completed in the 1970s, there has been no further progress in theory development. You could say maybe it’s just hard and they haven’t figured it out. But the slow progress in and of itself is not what worries me.

What worries me is that in the past 40 years physicists have made loads and loads of predictions for physics beyond the standard model, and those were all wrong. Every. Single. One. Of. Them.

This is not normal. This is bad scientific methodology. And this bad scientific methodology has flourished because experiments have only delivered null results.

And it has become a vicious cycle: Bad predictions motivate experiments. The experiments find only null results. The null results do not help theory development, which leads to bad predictions that motivate experiments, which deliver null results, and so on.

We have to break this cycle. And that’s why I am against building a larger particle collider.

Thursday, April 04, 2019

Is the universe a hologram? Gravitational wave interferometers may tell us.

Shadow on the wall.
Left-right. Forward-backward. Up-down. We are used to living in three dimensions of space. But according to some physicists, that’s an illusion. Instead, they say, the universe is a hologram and every move we make is really inscribed on the surface of the space around us. So far, this idea has been divorced from experimental test. But in a recent arXiv paper, physicists have now proposed a way to test it.

What does it mean to live in a hologram? Take a Rubik’s cube. No, don’t leave! I mean, take it, metaphorically speaking. The cube is made of 3 x 3 x 3 = 27 smaller cubes. It has 6 x 3 x 3 = 54 surface elements. If you use each surface pixel and each volume pixel to encode one bit of information, then you can use the surface elements to represent everything that happens in the volume.

A volume pixel, by the way, is called a voxel. If you learn nothing else from me today, take that.

So the Rubik’s cube has more pixels than voxels. But if you divide it up into smaller and smaller pieces, the ratio soon changes in favor of voxels. For N intervals on each edge, you have N³ voxels compared to 6 x N² pixels. The amount of information that you can encode in the volume, therefore, increases much faster than the amount you can encode on the surface.
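If you want to convince yourself of this counting, here is a minimal sketch in Python (the function names are mine, just for illustration):

```python
def voxels(n: int) -> int:
    """Number of volume elements in an n x n x n cube."""
    return n ** 3

def pixels(n: int) -> int:
    """Number of surface elements: 6 faces with n x n elements each."""
    return 6 * n ** 2

# For the Rubik's cube, n = 3: more pixels (54) than voxels (27).
print(voxels(3), pixels(3))

# The counts are equal at n = 6, and for any finer subdivision
# the voxels outnumber the pixels.
for n in (3, 6, 7, 100):
    print(n, voxels(n), pixels(n), voxels(n) > pixels(n))
```

The crossover at n = 6 is just where n³ = 6n²; beyond that, volume wins, which is exactly the tension the holographic principle addresses.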

Or so you would think. Because the holographic principle forbids this from happening. This idea, which originates in the physics of black holes, requires that everything which can happen inside a volume of space is inscribed on the surface of that volume. This means if you looked very closely at what particles do, you would find they cannot move entirely independently. Their motions must have subtle correlations in order to allow a holographic description.

We do not normally notice these holographic correlations, because ordinary stuff uses only a tiny fraction of the theoretically available bits of information anyway. Atoms in a bag of milk, for example, are huge compared to the size of the voxels, which is given by the Planck length. The Planck length is 25 orders of magnitude smaller than the diameter of an atom. And these 25 orders of magnitude, you may think, crush all hopes of ever seeing the constraints imposed on us by the holographic principle.

But theoretical physicists don’t give up hope that easily. The massive particles we know are too large to be affected by holographic limitations. But physicists expect that space and time have quantum fluctuations. These fluctuations are normally too small to observe. But if the holographic principle is correct, then we might be able to find correlations in these fluctuations.

To see these holographic correlations, we would have to closely monitor the extension of a large volume of space. Which is, in a nutshell, what a gravitational wave interferometer does.

The idea that you can look for holographic space-time fluctuations with gravitational wave interferometers is not new. It was proposed already a decade ago by Craig Hogan, at a time when the GEO600 interferometer reported unexplained noise.

Hogan argued that this noise was a signal of holography. Unfortunately, the noise vanished with a new readout method. Hogan, undeterred, found a factor in his calculation and went on to build his own experiment, the “Holometer” to search for evidence that we live in a hologram.

The major problem with Hogan’s idea, as I explained back then, was that his calculation did not respect the most important symmetry of Special Relativity, the so-called Lorentz-symmetry. For this reason the prediction made with Hogan’s approach had consequences we would have seen long ago with other experiments. His idea, therefore, was ruled out before the experiment even started.

Hogan, since he could not resolve the theoretical problems, later said he did not have a theory and one should just look. His experiment didn’t find anything. Or, well, it delivered null results. Which are also results, of course.

In any case, two weeks ago Erik Verlinde and Kathryn Zurek had another go at holographic noise:
    Observational Signatures of Quantum Gravity in Interferometers
    Erik P. Verlinde, Kathryn M. Zurek
    arXiv:1902.08207 [gr-qc]
The theoretical basis of their approach is considerably better than Hogan’s.

The major problem with using holographic arguments for interferometers is that you need to specify what surface you are talking about on which the information is supposedly encoded. But the moment you define the surface you are in conflict with the observer-independence of Special Relativity. That’s the issue Hogan ran into, and ultimately was not able to resolve.

Verlinde and Zurek circumvent this problem by speaking not about a volume of space and its surface, but about a volume of space-time (a “causal diamond”) and its surface. Then they calculate the amount of fluctuations that a light-ray accumulates when it travels back and forth between the two arms of the interferometer.

The total deviation they calculate scales with the geometric mean of the length of the interferometer arm (about a kilometer) and the Planck length (about 10⁻³⁵ meters). If you put in the numbers, that comes out to be about 10⁻¹⁶ meters, which is not far off the sensitivity of the LIGO interferometer, currently about 10⁻¹⁵ meters.
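The estimate is easy to check yourself. A back-of-the-envelope sketch in Python, using the round numbers from the text (an order-of-magnitude exercise, nothing more):

```python
import math

# Geometric mean of the interferometer arm length and the Planck length,
# with the round numbers quoted in the text.
arm_length = 1.0e3        # interferometer arm, about a kilometer, in meters
planck_length = 1.6e-35   # Planck length in meters

# Geometric mean: sqrt of the product of the two scales.
deviation = math.sqrt(arm_length * planck_length)
print(f"{deviation:.1e} m")  # roughly 1e-16 m
```

The result, a bit above 10⁻¹⁶ meters, is why the proposal sits tantalizingly close to current interferometer sensitivity rather than 25 orders of magnitude away.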

Please do not take these numbers too seriously, because they do not account for uncertainties. But this rough estimate explains why the idea put forward by Verlinde and Zurek is not crazy talk. We might indeed be able to reach the required sensitivity in the near future.

Let me be honest, though. The new approach by Verlinde and Zurek has not eliminated my worries about Lorentz-invariance. The particular observable they calculate is determined by the rest-frame of the interferometer. This is fine. My worry is not with their interferometer calculation, but with the starting assumption they make about the fluctuations. These are by assumption uncorrelated in the radial direction. But that radial direction could be any direction. And then I am not sure how this is compatible with their other assumption, that is holography.

The authors of the paper have been very patient in explaining their idea to me, but at least so far I have not been able to sort out my confusion about this. I hope that one of their future publications will lay this out in more detail. The present paper also does not contain quantitative predictions. This too, I assume, will follow in a future publication.

If they can demonstrate that their theory is compatible with Special Relativity, and therefore not in conflict with other measurements already, this would be a well-motivated prediction for quantum gravitational effects. Indeed, I would consider it one of the currently most promising proposals.

But if Hogan’s null result demonstrates anything, it is that we need solid theoretical predictions to know where to search for evidence of new phenomena. In the foundations of physics, the days when “just look” was a promising strategy are over.

Tuesday, April 02, 2019

Dear Dr B: Does the LHC collide protons at twice the speed of light?

I recently got a brilliant question after a public lecture: “If the LHC accelerates protons to almost the speed of light and then collides them head-on, do they collide at twice the speed of light?”

The short answer is “No.” But it’s a lovely question and the explanation contains a big chunk of 20th century physics.

First, let me clarify that it does not make sense to speak of a collision’s velocity. One can speak about its center-of-mass energy, but one cannot meaningfully assign a velocity to a collision itself. What makes sense, instead, is to speak about relative velocities. If you were one of the protons and the other proton comes directly at you, does it come at you with twice the speed of light?

It does not, of course. You already knew this, because Einstein taught us nothing travels faster than the speed of light. But for this to work, it is necessary that velocities do not add the way we are used to. Indeed, according to Einstein, for velocities, 1 plus 1 is not equal to 2. Instead, 1+1 is equal to 1.

I know that sounds crazy, but it’s true.

To give you an idea how this comes about, let us forget for a moment that we have three dimensions of space and that protons at the LHC actually go in a circle. It is easier to look at the case where the protons move in straight lines, so, basically only in one dimension of space. It is then no longer necessary to worry about the direction of velocities and we can just speak about their absolute value.

Let us also divide all velocities by the speed of light so that we do not have to bother with units.

Now, if you have objects that move almost at the speed of light, you have to use Special Relativity to describe what they do. In particular you want to know, if you see two objects approaching each other at velocity u and v, then what is the velocity of one object if you were flying along with the other? For this, in special relativity, you have to add u and v by the following equation:

    (u + v)/(1 + uv)

You see right away that the result of this addition law is always smaller than 1 if both velocities are smaller than 1. And if u equals 1 – that is, one object moves with the speed of light – then the outcome is also 1. This means that all observers agree on the speed at which light moves.

If you check what happens with the protons at the LHC, you will see that adding twice 99% of the speed of light brings you to about 99.995% of the speed of light, but never to 100%, and certainly not to 200%.
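The addition law is easy to play with numerically. A minimal sketch in Python, in the same units as the text, where the speed of light is 1:

```python
def add_velocities(u: float, v: float) -> float:
    """Relativistic velocity addition, in units where the speed of light is 1."""
    return (u + v) / (1 + u * v)

# Two protons approaching each other, each at 99% of the speed of light:
# the relative velocity is about 0.99995, not 1.98.
print(add_velocities(0.99, 0.99))

# Adding the speed of light to anything still gives the speed of light:
print(add_velocities(1.0, 0.5))  # 1.0
```

Note that the second call is the "1 + 1 = 1" of the text in disguise: plugging u = 1 into the formula gives (1 + v)/(1 + v) = 1 for any v.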

I will admit the first time I saw this equation it just seemed entirely arbitrary to me. I was in middle school, then, and really didn’t know much about Special Relativity. I just thought, well, why? Why this? Why not some other weird addition law?

But once you understand the mathematics, it becomes clear there is nothing arbitrary about this equation. What happens is roughly the following.

Special Relativity is based on the symmetry of space-time. This does not mean that time is like space – arguably, it is not – but that the two belong together and cannot be treated separately. Importantly, the combination of space and time has to work the same way for all observers, regardless of how fast they move. This observer-independence is the key principle of Einstein’s theory of Special Relativity.

If you formulate observer-independence mathematically, it turns out there is only one way that a moving clock can tick, and only one way that a moving object can appear – it all follows from the symmetry requirement. The way that moving objects shrink and their time slows down is famously described by time-dilation and length-contraction. But once you have this, you can also derive that there is only one way to add velocities and still be consistent with observer-independence of the space-time symmetry. This is what the above equation expresses.

Let me also mention that the commonly made reference to the speed of light in Special Relativity is somewhat misleading. We do this mostly for historical reasons.

In Special Relativity we have a limiting velocity which cannot be reached by massive particles, no matter how much energy we use to accelerate them. Particles without masses, on the other hand, always move at that limiting velocity. Therefore, if light is made of massless particles, then the speed of light is identical to the limiting velocity. And for all we currently know, light is indeed made of massless particles, the so-called photons.

However, should it turn out one day that photons really have a tiny mass that we just haven’t been able to measure so far, then the limiting velocity would still exist. It would just no longer be equal to the speed of light.

So, in summary: Sometimes 1 and 1 is indeed 1.