Saturday, May 01, 2021

Dark Matter: The Situation Has Changed

[This is a transcript of the video embedded below]

Hi everybody. We haven’t talked about dark matter for some time. Which is why today I want to tell you how my opinion about dark matter has changed over the past twenty years or so. In particular, I want to discuss whether dark matter is made of particles or if not, what else it could be. Let’s get started.

First things first, dark matter is the hypothetical stuff that astrophysicists think makes up eighty percent of the matter in the universe, or 24 percent of the combined matter-energy. Dark matter should not be confused with dark energy. These are two entirely different things. Dark energy is what makes the universe expand faster, dark matter is what makes galaxies rotate faster, though that’s not the only thing dark matter does, as we’ll see in a moment.

But what is dark matter? Twenty years ago, I thought dark matter was most likely made of some kind of particle that we hadn’t measured so far. Because, well, I’m a particle physicist by training. And if a particle can explain an observation, why look any further? Also, at the time there were quite a few proposals for new particles that could fit the data, like some supersymmetric particles or axions. So the idea that dark matter is stuff, made of particles, seemed plausible to me, indeed like the obvious explanation.

That’s why, just between us, I always thought dark matter wasn’t a particularly interesting problem. Sooner or later they’d find the particle, give it a name, someone would get a Nobel Prize, and that would be that.

But, well, that hasn’t happened. Physicists have tried to measure dark matter particles since the mid 1980s. But no one’s ever seen one. There have been a few anomalies in the data, but these have all gone away upon closer inspection. Instead, what’s happened is that some astrophysical observations have become increasingly difficult to explain with the particle hypothesis. Before I get to the observations that particle dark matter doesn’t explain, I’ll first quickly summarize what it does explain, which are the reasons astrophysicists thought it exists in the first place.

Historically, the first evidence for dark matter came from galaxy clusters. Galaxy clusters are made of a few hundred up to a thousand or so galaxies that are held together by their gravitational pull. The galaxies move around each other, and how fast they move depends on the total mass of the cluster: the more mass, the faster the galaxies move. Turns out that galaxies in galaxy clusters move way too fast to explain this with the mass that we can attribute to the visible matter. So Fritz Zwicky conjectured in the 1930s that there must be more matter in galaxy clusters, just that we can’t see it. He called it “dunkle Materie”, German for dark matter.

It’s a similar story for galaxies. The velocity of a star orbiting the center of a galaxy depends on the total mass within its orbit. But the stars in the outer parts of galaxies orbit the center too fast. Their velocity should drop with distance from the center of the galaxy, but it doesn’t. Instead, the velocity of the stars becomes approximately constant at large distances from the galactic center. This gives rise to the so-called “flat rotation curves”. Again, you can explain that by saying there’s dark matter in the galaxies.

Then there is gravitational lensing. A gravitational lens is a galaxy or galaxy cluster that bends light coming from an object behind it. The object behind it then appears distorted, and from the amount of distortion you can infer the mass of the lens. Again, the visible matter just isn’t enough to explain the observations.

Then there are the temperature fluctuations in the cosmic microwave background. These fluctuations are what you see in this sky map. All these spots here are deviations from the average temperature, which is about 2.7 Kelvin. The red spots are a little warmer, the blue spots a little colder than that average. Astrophysicists analyze the microwave background using its power spectrum, where the vertical axis is roughly the number of spots and the horizontal axis is their size, with the larger sizes on the left and increasingly smaller spots to the right. To explain this power spectrum, again you need dark matter.

Finally, there’s the large-scale distribution of galaxies, galaxy clusters, interstellar gas, and so on, as you see in the image from this computer simulation. Normal matter alone just does not produce enough structure on small scales to fit the observations, and again, adding dark matter fixes the problem.

So, you see, dark matter was a simple idea that fit a lot of observations, which is why it was such a good scientific explanation. But that was the status 20 years ago. And what’s happened since then is that observations have piled up that dark matter cannot explain.

For example, particle dark matter predicts that the density in the cores of small galaxies should have a peak, whereas observations say the distribution should be flat. Dark matter also predicts too many small satellite galaxies; these are small galaxies that orbit a larger host. The Milky Way, for example, should have many hundreds, but actually only has a few dozen. Also, these small satellite galaxies are often aligned in planes. Dark matter does not explain why.

We also know from observations that the mass of a galaxy is correlated with the fourth power of the rotation velocity of the outermost stars. This is called the baryonic Tully-Fisher relation, and it’s just an observational fact. Dark matter does not explain it. It’s a similar issue with Renzo’s rule, which says that for every feature in the visible emission of a galaxy, like a wiggle or bump, there is also a feature in its rotation curve. Again, that’s an observational fact, but it makes absolutely no sense if you think that most of the matter in galaxies is dark matter. The dark matter should remove any correlation between the luminosity and the rotation curves.

Then there are collisions of galaxy clusters at high velocity, like the Bullet Cluster or the El Gordo cluster. These are difficult to explain with particle dark matter, because dark matter creates friction, and that makes such high relative velocities incredibly unlikely. Yes, you heard that correctly: the Bullet Cluster is a PROBLEM for dark matter, not evidence for it.

And, yes, you can fumble with the computer simulations for dark matter and add more and more parameters to try to get it all right. But that’s no longer a simple explanation, and it’s no longer predictive.

So, if it’s not dark matter then what else could it be? The alternative explanation to particle dark matter is modified gravity. The idea of modified gravity is that we are not missing a source for gravity, but that we have the law of gravity wrong.

Modified gravity solves all the riddles that I just told you about. There’s no friction, so high relative velocities are not a problem. It predicted the Tully-Fisher relation, it explains Renzo’s rule and satellite alignments, it removes the issue with density peaks in galactic cores, and solves the missing satellites problem.

But modified gravity does not do well with the cosmic microwave background and the early universe, and it has some issues with galaxy clusters.

So that looks like a battle between competing hypotheses, and that’s certainly how it’s been portrayed and how most physicists think about it.

But here’s the thing. Purely from the perspective of data, the simplest explanation is that particle dark matter works better in some cases, and modified gravity better in others. A lot of astrophysicists reply to this: well, if you have dark matter anyway, why also have modified gravity? Answer: because dark matter has difficulties explaining a lot of observations. On its own, it’s no longer parametrically the simplest explanation.

But wait, you may want to say, you can’t just use dark matter for observations a, b, c and modified gravity for observations x, y, z! Well actually, you can totally do that. Nothing in the scientific method forbids it.

But more importantly, if you look at the mathematics, modified gravity and particle dark matter are actually very similar. Dark matter adds new particles, and modified gravity adds new fields. But because of quantum mechanics, fields are particles and particles are fields, so it’s the same thing really. The difference is the behavior of these fields or particles. It’s the behavior that changes from the scales of galaxies to clusters to filaments and the early universe. So what we need is a kind of phase transition that explains why and under which circumstances the behavior of these additional fields, or particles, changes, so that we need two different sets of equations.

And once you look at it this way, it’s obvious why we have not made progress on the question of what dark matter is for such a long time. The wrong people have been working on it. It’s not a problem you can solve with particle physics and general relativity. It’s a problem for condensed matter physics. That’s the physics of gases, fluids, solids, and so on.

So, the conclusion that I have arrived at is that the distinction between dark matter and modified gravity is a false dichotomy. The answer isn’t either – or, it’s both. The question is just how to combine them.

Google talk online now

The major purpose of the talk was to introduce our SciMeter project which I've been working on for a few years now with Tom Price and Tobias Mistele. But I also talk a bit about my PhD topic and particle physics and how my book came about, so maybe it's interesting for some of you.

Saturday, April 24, 2021

Particle Physics Discoveries That Disappeared

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]

I get asked a lot what I think about this or that report of an anomaly in particle physics, like the B-meson anomaly at the Large Hadron Collider which made headlines last month, or the muon g-2 that was just all over the news. But I thought instead of just giving you my opinion, which you may or may not trust, I will give you some background to gauge the relevance of such headlines yourself. Why are there so many anomalies in particle physics? And how seriously should you take them? That’s what we will talk about today.

The Higgs boson was discovered in nineteen eighty-four. I’m serious. The Crystal Ball Experiment at DESY in Germany saw a particle that fit the expectation already in nineteen eighty-four. It made it into the New York Times with the headline “Physicists report mystery particle”. But the supposed mystery particle turned out to be a data fluctuation. The Higgs boson was actually only discovered in 2012 at the Large Hadron Collider at CERN. And 1984 was quite a year, because supersymmetry was also observed, and then disappeared again.

How can this happen? Particle physicists calculate what they expect to see in an experiment using the best theory they have at the time. Currently that’s the standard model of particle physics. In 1984, that’d have been the standard model minus the particles which hadn’t been discovered.

But the theory alone doesn’t tell you what to expect in a measurement. For this you also have to take into account how the experiment is set up, so for example what beam and what luminosity, and how the detector works and how sensitive it is. This together: theory, setup, detector, gives you an expectation for your measurement. What you are then looking for are deviations from that expectation. Such deviations would be evidence for something new.

Here’s the problem. These expectations are always probabilistic. They don’t tell you exactly what you will see. They only tell you a distribution over possible outcomes. That’s partly due to quantum indeterminism but partly just classical uncertainty.

Therefore, it’s possible that you see a signal when there isn’t one. As an example, suppose I randomly distribute one-hundred points on this square. If I divide the square into four pieces of equal size, I expect about twenty-five points in each piece. And indeed that turns out to be about correct for this random distribution. Here is another random distribution. Looks reasonable.

Now let’s do this a million times. No, actually, let’s not do this.

I let my computer do this a million times, and here is one of the outcomes. Whoa. That doesn’t look random! It looks like something’s attracting the points to that one square. Maybe it’s new physics!

No, there’s no new physics going on. Keep in mind, this distribution was randomly created. There’s no signal here, it’s all noise. It’s just that every once in a while noise happens to look like a signal.
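This is easy to try yourself. Here’s a minimal sketch of the experiment; the number of points and repetitions are just illustrative choices:

```python
import random

def quadrant_counts(n_points=100, seed=None):
    # Drop n_points uniformly into the unit square and count how
    # many land in each of the four equal quadrants.
    rng = random.Random(seed)
    counts = [0, 0, 0, 0]
    for _ in range(n_points):
        x, y = rng.random(), rng.random()
        counts[(x >= 0.5) + 2 * (y >= 0.5)] += 1
    return counts

# Repeat the experiment many times and record the most extreme
# quadrant count seen: pure noise occasionally looks like a signal.
most_extreme = max(max(quadrant_counts(seed=i)) for i in range(10_000))
print(most_extreme)  # typically in the 40s, far above the expected 25
```

Every one of these distributions is genuinely random, yet searching through ten thousand of them reliably turns up a quadrant holding forty-plus points. Nothing attracts the points there; the signal-like clump is a fluctuation.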

This is why particle physicists, like scientists in all other disciplines, give a “confidence level” to their observation that tells you how “confident” they are that the observation was not a statistical fluctuation. They do this by calculating the probability that the supposed signal could have been created purely by chance. If fluctuations create a signature like the one you are looking for one in twenty times, then the confidence level is 95%. If fluctuations create it one in a hundred times, the confidence level is 99%, and so on. Loosely speaking, the higher the confidence level, the more remarkable the signal.

But exactly at which confidence level you declare a discovery is a convention. Since the mid-1990s, particle physicists have used a confidence level of 99.99994 percent for discoveries. That’s about a one in a million chance for the signal to have been a random fluctuation. It’s also frequently referred to as 5 σ, where σ is one standard deviation. (Though that relation only holds for the normal distribution.)
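For a normal distribution, translating between σ and confidence level is a one-line calculation. Here’s a quick sketch, using the two-sided convention that reproduces the 99.99994 percent figure:

```python
from math import erf, sqrt

def fluctuation_probability(sigma):
    # Two-sided tail probability: the chance that a normally
    # distributed fluctuation lands more than `sigma` standard
    # deviations from the mean, in either direction.
    return 1.0 - erf(sigma / sqrt(2.0))

for s in (2, 3, 4, 5):
    p = fluctuation_probability(s)
    print(f"{s} sigma: p = {p:.1e}, confidence = {100 * (1 - p):.5f}%")
# 5 sigma comes out at p of roughly 5.7e-7, about one in 1.7 million,
# which rounds to the 99.99994 percent quoted above
```

With the one-sided convention, which is also common in particle physics, the 5 σ probability would instead be about half that, roughly 2.9e-7.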

But of course deviations from the expectation attract attention already below the discovery threshold. Here is a little more history. Quarks, for all we currently know, are elementary particles, meaning we haven’t seen any substructure. But a lot of physicists have speculated that quarks might be made up of even smaller things. These smaller particles are often called “preons”. They were found in 1996. The New York Times reported: “Tiniest Nuclear Building Block May Not Be the Quark”. The significance of the signal was about three sigma, that’s about a one in a thousand chance for it to be coincidence, and about the same as the current B-meson anomaly. But the supposed quark substructure was a statistical fluctuation.

The same year, the Higgs was discovered again, this time at the Large Electron Positron collider at CERN. It was an excess of Higgs-like events that made it to almost 4 σ, which is a one in sixteen-thousand chance to be a random fluctuation. Guess what, that signal vanished too.

Then, in 2003, supersymmetry was “discovered” again, this time in the form of a supposed sbottom quark, that’s the hypothetical supersymmetric partner particle of the bottom quark. That signal too was at about 3 σ, but then disappeared.

And in 2015, we saw the di-photon anomaly that made it above 4 σ and disappeared again. There have even been some six sigma signals that disappeared again, though these had no known interpretation in terms of new physics.

For example, in 1998 the Tevatron at Fermilab measured some events they dubbed “superjets” at six σ. They were never seen again. In 2004, HERA at DESY saw pentaquarks – particles made of five quarks – with 6 σ significance, but that signal also disappeared. And then there is the muon g-2 anomaly that recently increased from 3.7 to 4.2 σ, but still hasn’t crossed the discovery threshold.

Of course not all discoveries that disappeared in particle physics were due to fluctuations. For example, in 1984, the UA1 experiment at CERN saw eleven particle decays of a certain type when they expected only three point five. The signature fit that expected for the top quark. The physicists were quite optimistic they had found the top quark, and this news too made it into the New York Times.

Turned out, though, they had misestimated the expected number of such events. Really, there was nothing out of the ordinary. The top quark wasn’t actually discovered until 1995. A similar thing happened in 2011, when the CDF collaboration at Fermilab saw an excess of events at about 4 σ. These were not fluctuations, but they required a better understanding of the background.

And then of course there are possible issues with the data analysis. For example, there are various tricks you can play to increase the supposed significance. This basically doesn’t happen in collaboration papers, but you sometimes see individual researchers using very, erm, creative methods of analysis. And then there can be systematic problems with the detection, triggers, or filters, and so on.

In summary: Possible reasons why a discovery might disappear are (a) fluctuations, (b) miscalculations, (c) analysis screw-ups, and (d) systematics. The most common one, just going by history, is fluctuations. And why are there so many fluctuations in particle physics? It’s because they have a lot of data. The more data you have, the more likely you are to find fluctuations that look like signals. That, by the way, is why particle physicists introduced the five sigma standard in the first place. Because otherwise they’d constantly have “discoveries” that disappear.
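The “more data, more fluctuations” point can be made quantitative with a toy calculation; the numbers below are illustrative, not actual LHC statistics:

```python
def chance_of_fake_signal(p_single, n_searches):
    # Probability that at least one of n independent searches
    # fluctuates past a threshold that a single search would
    # only cross with probability p_single.
    return 1.0 - (1.0 - p_single) ** n_searches

# A one-in-a-thousand (roughly 3 sigma) fluctuation becomes more
# likely than not after about 700 independent searches...
print(chance_of_fake_signal(1e-3, 700))
# ...while a 5-sigma-level threshold stays rare even after a
# thousand searches
print(chance_of_fake_signal(5.7e-7, 1000))
```

That is the reason for the strict discovery threshold: with many analyses running in parallel, a five sigma cut keeps the rate of fake discoveries manageable.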

So what’s with that B-meson anomaly at the LHC that recently made headlines? It’s actually been around since 2015, but recently a new analysis came out and so it was in the news again. It’s currently lingering at 3.1 σ. As we saw, signals of that strength go away all the time, but it’s interesting that this one’s stuck around instead of going away. That makes me think it’s either a systematic problem or indeed a real signal.

Note: I have a longer comment about the recent muon g-2 measurement here.

Wednesday, April 21, 2021

All you need to know about Elon Musk’s Carbon Capture Prize

[This is a transcript of the video embedded below.]

Elon Musk has announced he is sponsoring a competition for the best carbon removal ideas with a fifty million dollar prize for the winner. The competition will open on April twenty-second, twenty-twenty-one. In this video, I will tell you all you need to know about carbon capture to get your brain going, and put you on the way to the fifty million dollar prize.

During the formation of our planet, large amounts of carbon dioxide were stored in the ground, and ended up in coal and oil. By burning these fossil fuels, we have released a lot of that old carbon dioxide really suddenly. It accumulates in the atmosphere and prevents our planet from giving off heat the way it used to. As a consequence, the climate changes, and it changes rapidly.

The best course of action would have been to not pump that much carbon dioxide into the atmosphere to begin with, but at this point reducing future emissions alone might no longer be the best way to proceed. We might have to find ways to actually get carbon dioxide back out of the air. Getting this done is what Elon Musk’s competition is all about.

The problem is, once carbon dioxide is in the atmosphere it stays there for a long time. By natural processes alone, it would take several thousand years for atmospheric carbon dioxide levels to return to pre-industrial levels. And the climate reacts slowly to the sudden increase in carbon dioxide, so we haven’t yet seen the full impact of what we have done already. For example, there’s a lot of water on our planet, and warming up this water takes time.

So, even if we were to entirely stop carbon dioxide emissions today, the climate would continue to change for at least several more decades, if not centuries. It’s like you elected someone out of office, and now they’re really pissed off, but they’ve got six weeks left on the job and nothing you can do about that.

Globally, we are presently emitting about forty billion tons of carbon dioxide per year. According to the Intergovernmental Panel on Climate Change, we’d have to get down to twenty billion tons per year to limit warming to one point five degrees Celsius compared to preindustrial levels. These one point five degrees are what’s called the “Paris target.” This means, if we continue emitting at the same level as today, we’ll have to remove twenty billion tons of carbon dioxide per year.

But to score in Musk’s competition, you don’t need a plan to remove the full twenty billion tons per year. You merely need “A working carbon removal prototype that can be rigorously validated” that is “capable of removing at least 1 ton per day” and the carbon “should stay locked up for at least one hundred years.” But other than that, pretty much everything goes. According to the website, the “main metric for the competition is cost per ton”.

So, which options do we have to remove carbon dioxide and how much do they cost?

The obvious thing to try is enhancing natural processes which remove carbon dioxide from the atmosphere. You can do that for example by planting trees because trees take up carbon dioxide as they grow. They are what’s called a natural “carbon sink”. This carbon is released again if the trees die and rot, or are burned, so planting trees alone isn’t enough, we’d have to permanently increase their numbers.

By how much? That depends somewhat on the type of forest, but to get rid of the twenty billion tons per year, we’d have to plant about ten million square kilometers of new forest. That’s about the area of the United States, and more than the entire remaining Amazon rainforest.

Planting so many trees seems a bit impractical. And it isn’t cheap either. The cost is about 100 US dollars per ton of carbon dioxide. So, to get rid of the 20 billion tons of excess carbon dioxide per year, that would be a few trillion dollars per year. Trees are clearly part of the solution, but we need to do more than that. And stopping the burning of the rainforest wouldn’t hurt either.
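The cost figure is a simple multiplication; here’s the arithmetic with the round numbers quoted above (the article’s rough figures, not precise data):

```python
# Rough figures from the text: twenty billion tons of CO2 to remove
# per year, at about 100 US dollars per ton for tree planting.
excess_co2_tons_per_year = 20e9
tree_cost_per_ton = 100.0

annual_cost = excess_co2_tons_per_year * tree_cost_per_ton
print(f"about {annual_cost / 1e12:.0f} trillion US dollars per year")
```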

Humans, by the way, are also a natural carbon sink, because we’re eighteen percent carbon. Unfortunately, burying or burning dead people returns that carbon to the environment. Indeed, a single cremation releases about two-hundred-fifty kilograms of carbon dioxide, which could be avoided, for example, by dumping dead people in the deep sea where they won’t rot. So, if we were to do sea burials instead of cremations, that would save up to a million tons of carbon dioxide per year. Not a terrible lot. And probably quite expensive. Yeah, I’m not the person to win that prize.

But there’s a more efficient way that oceans could help remove carbon. If one stimulates the growth of algae, the algae will take up carbon. When the algae die, they sink to the bottom of the ocean, where the carbon could remain, in principle, for millions of years. This is called “ocean fertilization”.

It’s a good idea in theory, but in practice it’s presently unclear how efficient it is. There’s no good data on how many of the algae sink and how many of them get eaten, in which case the carbon might be released, and no one knows what else such fertilization might do to the oceans. So, a lot of research remains to be done here. It’s also unclear how much it would cost. Estimates range from two to four hundred fifty US dollars per ton of carbon dioxide.

Besides enhancing natural carbon sinks, there are a variety of technologies for removing carbon permanently.

For example, if one burns agricultural waste or wood in the absence of oxygen, this will not release all the carbon dioxide but instead produce a substance called biochar. The biochar keeps about half of the carbon, and not only is it stable for thousands of years, it can also improve the quality of soil.

The major problem with this idea is that there’s only so much agricultural waste to burn. Still, by some optimistic estimates one could remove up to one point eight billion tons of carbon dioxide per year this way. Cost estimates are between thirty and one hundred twenty US dollars per ton of carbon dioxide.

By the way, plastic is about eighty percent carbon. That’s because it’s mostly made of oil and natural gas. And since it isn’t biodegradable, it’ll safely store the carbon – as long as you don’t burn it. So, the Great Pacific garbage patch? That’s carbon storage. Not a particularly popular one though.

A more popular idea is enhanced weathering. For this, one artificially creates certain minerals that, when they come in contact with water, can bind carbon dioxide to them, thereby removing it from the air. The idea is to produce large amounts of these minerals, crush them, and distribute them over large areas of land.

The challenges for this method are: how do you produce large amounts of these minerals, and where do you find enough land to put them on? The supporters of the American weathering project Vesta claim that the cost would be about ten US dollars per ton of carbon dioxide. So that’s a factor of ten less than planting trees.

Then there is direct air capture. The most common method for this is pushing air through filters which absorb carbon dioxide. Several petrol companies, like Chevron, BHP, and Occidental, are currently exploring this technology. The company Carbon Engineering, which is backed by Bill Gates, has a pilot plant in British Columbia that they want to scale up to commercial plants. They claim every such plant will be equivalent in carbon removal to 40 million trees, removing 1 million tons of carbon dioxide per year.

They estimate the cost at between ninety-four and 232 US dollars per ton. That would mean between two and four trillion US dollars per year to eliminate the entire twenty billion tons of carbon dioxide which we need to get rid of. That’s between two point five and five percent of the world’s GDP.
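You can check these percentages yourself; note that the world GDP value below is my own assumption of roughly eighty-five trillion US dollars, it is not a number from the text:

```python
excess_co2_tons_per_year = 20e9      # tons of CO2 per year, from the text
cost_low, cost_high = 94.0, 232.0    # US dollars per ton, from the text
world_gdp = 85e12                    # assumed: about 85 trillion US dollars

low = excess_co2_tons_per_year * cost_low
high = excess_co2_tons_per_year * cost_high
print(f"{low / 1e12:.1f} to {high / 1e12:.1f} trillion US dollars per year")
print(f"{100 * low / world_gdp:.1f}% to {100 * high / world_gdp:.1f}% of world GDP")
```

Depending on the GDP figure you assume, this lands in the ballpark of the two point five to five percent quoted above.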

But, since carbon dioxide is taken up by the oceans, one can also try to get rid of it by extracting it from seawater. Indeed, the density of carbon dioxide in seawater is about one hundred twenty five times higher than it is in air. And once you’ve removed it, the water will take up new carbon dioxide from the air, so you can basically use the oceans to suck the carbon dioxide out of the atmosphere. That sounds really neat.

The current cost estimate for carbon extraction from seawater is about 50 dollars per ton, so that’s about half as much as carbon extraction from air. The major challenge for this idea is that the currently known methods for extracting carbon dioxide from water require heating the water to about seventy degrees Celsius which takes up a lot of energy. But maybe there are other, more energy efficient ways, to get carbon dioxide out of water? You might be the person to solve this problem.

Finally, there is carbon capture and storage, which means capturing carbon dioxide right where it’s produced and storing it away before it’s released into the atmosphere.

About twenty-six commercial facilities already use this method, and a few dozen more are planned. In twenty-twenty, about forty million tons of carbon dioxide were captured by this method. The typical cost is between 50 and 100 US dollars per ton of carbon dioxide, though in particularly lucky cases the cost may go down to about 15 dollars per ton. The major challenge here is that present technologies for carbon capture and storage require huge amounts of water.

As you can see an overall problem for these ideas is that they’re expensive. You can therefore score on Musk’s competition by making one of the existing technologies cheaper, or more efficient, or both, or maybe you have an entirely new idea to put forward. I wish you good luck!

Saturday, April 17, 2021

Does the Universe have higher dimensions? Part 2

[This is a transcript of the video embedded below.]

In science fiction, hyper drives allow spaceships to travel faster than light by going through higher dimensions. And physicists have studied in quite some detail the question whether such extra dimensions exist for real. So, what have they found? Are extra dimensions possible? What do they have to do with string theory and black holes at the Large Hadron Collider? And if extra dimensions are possible, can we use them for space travel? That’s what we will talk about today.

This video continues last week’s, in which I talked about the history of extra dimensions. As I explained in the previous video, if one adds seven dimensions of space to our normal three dimensions, then one can describe all of the fundamental forces of nature geometrically. And that sounds like a really promising idea for a unified theory of physics. Indeed, in the early 1980s, the string theorist Edward Witten thought it was intriguing that seven additional dimensions of space is also the maximum for supergravity.

However, that numerical coincidence turned out not to lead anywhere. This geometric construction of fundamental forces, which is called Kaluza-Klein theory, suffers from several problems that no one has managed to solve.

One problem is that the radii of these extra dimensions are unstable, so they could grow or shrink away, and that’s not compatible with observation. Another problem is that some of the particles we know come in two different versions, a left-handed and a right-handed one, and these two versions do not behave the same way. This is called chirality. That particles behave this way is an observational fact, but it does not fit with the Kaluza-Klein idea. Witten actually worried about this in his 1981 paper.

Enter string theory. In string theory, the fundamental entities are strings. That the strings are fundamental means they are not made of anything else. They just are. And everything else is made from these strings. Now you can ask how many dimensions does a string need to wiggle in to correctly describe the physics we observe?

The first answer that string theorists got was twenty-six. That’s twenty-five dimensions of space and one dimension of time. That’s a lot. Turns out, though, that if you add supersymmetry the number goes down to ten, so nine dimensions of space and one dimension of time. String theory just does not work properly in fewer dimensions of space.

This creates the same problem that people had with Kaluza-Klein theory a century ago: If these dimensions exist, where are they? And string theorists answered the question the same way: We can’t see them, because they are curled up to small radii.

In string theory, one curls up those extra dimensions to complicated geometrical shapes called “Calabi-Yau manifolds”, but the details aren’t all that important. The important thing is that because of this curling up, the strings have higher harmonics. This is the same thing which happens in Kaluza-Klein theory. And it means that if a string gets enough energy, it can oscillate with certain frequencies that have to match the radius of these extra dimensions.

Therefore, it’s not true that string theory does not make predictions, though I frequently hear people claim that. String theory makes the prediction that these higher harmonics should exist. The problem is that you need really high energies to create them. That’s because we already know that these curled up dimensions have to be small. And small radii means high frequencies, and therefore high energies.

How high does the energy have to be to see these higher harmonics? Ah, here’s the thing. String theory does not tell you. We only know that these extra dimensions have to be so small we haven’t yet seen them. So, in principle, they could be just out of reach, and the next bigger particle collider could create these higher harmonics.

And this… is where the idea comes from that the Large Hadron Collider might create tiny black holes.

To understand how extra dimensions help with creating black holes, you first have to know that Newton’s one-over-R-squared law is geometrical. The gravitational force of a point mass falls with one over R squared because the surface of a sphere grows with R squared, where R is the radius of the sphere. So, if you increase the distance to the mass, the force lines thin out as the surface of the sphere grows. But here is the important point. Suppose you have additional dimensions of space. Say you don’t have three, but 3+n, where n is a positive integer. Then the surface of the sphere increases with R to the (2+n).

Consequently, the gravitational force drops with one over R to the (2+n) as you move away from the mass. This means, if space has more than three dimensions, the force drops much faster with distance to the source than normally.

Of course Newtonian gravity was superseded by Einstein’s theory of General Relativity, but this general geometric consideration about how gravity weakens with distance to the source remains valid. So, in higher dimensions the gravitational force drops faster with distance to the source.

Keep in mind though that the extra dimensions we are concerned with are curled up, because otherwise we’d already have noticed them. This means that in the direction of these extra dimensions, the force lines can only spread out up to a distance comparable to the radius of those dimensions. Beyond that, the only directions the force lines can continue to spread out into are the three large ones. So on distances much larger than the radius of the extra dimensions, we get back the usual 1/R^2 law, which is what we observe.
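If you want to see how this crossover works numerically, here is a little toy calculation. The radius, units, and overall constant are made up for illustration; only the scaling behavior matters.

```python
# Toy model of how the gravitational force scales with distance r when there
# are n extra curled-up dimensions of radius r_extra. Units and the overall
# constant are arbitrary; only the scaling behavior is meaningful.
def force(r, r_extra=1.0, n=2):
    """1/r^(2+n) inside the extra dimensions, 1/r^2 far outside, matched at r_extra."""
    if r < r_extra:
        return 1.0 / r ** (2 + n)
    return (1.0 / r_extra ** n) / r ** 2  # continuous at r = r_extra

# Inside the extra dimensions, getting twice as close boosts the force
# 16-fold (2^4 for n = 2) instead of the familiar 4-fold:
print(force(0.5) / force(1.0))    # 16.0
# Far outside, doubling the distance quarters the force, as usual:
print(force(20.0) / force(10.0))
```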

Now about those black holes. If gravity works as usual in three dimensions of space, we cannot create black holes. That’s because gravity is just too weak. But suppose you have these extra dimensions. Since the gravitational force falls much faster as you move away from the mass, it also grows much stronger than it would in only 3 dimensions as you get closer to the mass. That makes it much easier to create black holes. Indeed, if the extra dimensions are large enough, you could create black holes at the Large Hadron Collider.

At least in theory. In practice, the Large Hadron Collider did not produce black holes, which means that if the extra dimensions exist, they’re really small. How “small”? Depends on the number of extra dimensions, but roughly speaking below a micrometer.

If they existed, could we travel through them? The brief answer is no, and even if we could it would be pointless. The reason is that while the gravitational force can spread into all of the extra dimensions, matter, like the stuff we are made of, can’t go there. It is bound to a 3-dimensional slice, which string theorists call a “brane”, that’s b r a n e, not b r a i n, and it’s a generalization of membrane. So, basically, we’re stuck on this 3-dimensional brane, which is our universe. But even if that was not the case, what do you want in these extra dimensions anyway? There isn’t anything in there and you can’t travel any faster there than in our universe.

People often think that extra dimensions provide a type of shortcut, because of illustrations like this. The idea is that our universe is kind of like this sheet, which is bent, so that you can go into a direction perpendicular to it and arrive at a seemingly distant point faster. The thing is though, you don’t need extra dimensions for that. What we call the “dimension” in general relativity would be represented in this image by the dimension of the surface, which doesn’t change. Indeed, these things are called wormholes, and you can have them in ordinary general relativity with the ordinary three dimensions of space.

This embedding space here does not actually exist in general relativity. This is also why people get confused about the question of what the universe expands into. It doesn’t expand into anything, it just expands. By the way, fun fact: if you want to embed a general 4-dimensional space-time into a higher-dimensional flat space, you need 10 dimensions, which happens to be the same number of dimensions you need for string theory to make sense. Yet another one of these meaningless numerical coincidences, but I digress.

What does this mean for space travel? Well, it means that traveling through higher dimensions by using hyper drives is scientifically extremely implausible. Therefore, my ultimate ranking for the scientific plausibility of science fiction travel is:

3rd place: Hyper drives because it’s a nice idea, it just makes no scientific sense.

2nd place: Wormholes, because at least they exist mathematically, though no one has any idea how to create them.

And the winner is... Warp drives! Because not only does the mathematics work out, it’s in principle possible to create them, at least as long as you stay below the speed of light limit. How to travel faster than light, I am afraid we still don’t know. But maybe you are the one to figure it out.

Saturday, April 10, 2021

Does the Universe have Higher Dimensions? Part 1

[This is a transcript of the video embedded below.]

Space, the way we experience it, has three dimensions: left-right, forward-backward, and up-down. But why three? Why not 7? Or 26? The answer is: no one knows. But if no one knows why space has three dimensions, could it be that it actually has more? Just that we haven’t noticed for some reason? That’s what we will talk about today.

The idea that space has more than three dimensions may sound entirely nuts, but it’s a question that physicists have seriously studied for more than a century. And since there’s quite a bit to say about it, this video will have two parts. In this part we will talk about the origins of the idea of extra dimensions, Kaluza-Klein theory and all that. And in the next part, we will talk about more recent work on it, string theory and black holes at the Large Hadron Collider and so on.

Let us start with recalling how we describe space and objects in it. In two dimensions, we can put a grid on a plane, and then each point is a pair of numbers that says how far away from zero you have to go in the horizontal and vertical direction to reach that point. The arrow pointing to that point is called a “vector”.

This construction is not specific to two dimensions. You can add a third direction, and do exactly the same thing. And why stop there? You can no longer *draw a grid for four dimensions of space, but you can certainly write down the vectors. They’re just a row of four numbers. Indeed, you can construct vector spaces in any number of dimensions, even in infinitely many dimensions.

And once you have vectors in these higher dimensions, you can do geometry with them, like constructing higher dimensional planes, or cubes, and calculating volumes, or the shapes of curves, and so on. And while we cannot directly draw these higher dimensional objects, we can draw their projections into lower dimensions. This for example is the projection of a four-dimensional cube into two dimensions.
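To make the projection idea concrete, here is a tiny sketch with the 16 corners of a four-dimensional cube. The projection matrix is an arbitrary choice, picked only so that all corners land on distinct points in the plane.

```python
from itertools import product

# The 16 corners of a 4-dimensional cube ("tesseract"), and a simple linear
# projection into 2 dimensions. The coefficients below are arbitrary,
# chosen only so that no two corners land on the same point in the plane.
vertices_4d = list(product((0, 1), repeat=4))

def project(v):
    x, y, z, w = v
    return (x + 0.3 * z + 0.1 * w, y + 0.2 * z + 0.4 * w)

projection_2d = {project(v) for v in vertices_4d}
print(len(vertices_4d))    # 16 corners in four dimensions
print(len(projection_2d))  # all 16 remain distinct in the 2D shadow
```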

Now, it might seem entirely obvious today that you can do geometry in any number of dimensions, but it’s actually a fairly recent development. It wasn’t until eighteen forty-three that the British mathematician Arthur Cayley wrote about the “Analytical Geometry of (n) Dimensions”, where n could be any positive integer. Higher-dimensional geometry sounds innocent, but it was a big step towards abstract mathematical thinking. It marked the beginning of what is now called “pure mathematics”, that is, mathematics pursued for its own sake, and not necessarily because it has an application.

However, abstract mathematical concepts often turn out to be useful for physics. And these higher dimensional geometries came in really handy for physicists because in physics, we usually do not only deal with things that sit in particular places, but with things that also move in particular directions. If you have a particle, for example, then to describe what it does you need both a position and a momentum, where the momentum tells you the direction into which the particle moves. So, actually each particle is described by a vector in a six dimensional space, with three entries for the position and three entries for the momentum. This six-dimensional space is called phase-space.
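In case it helps to see this spelled out: a particle’s state in phase space is just a list of six numbers, nothing mysterious. The values below are of course made up.

```python
# A particle's state as a point in 6-dimensional phase space:
# three numbers for position, three for momentum. Values are made up.
position = (1.0, 2.0, 0.5)   # meters
momentum = (0.0, -0.3, 1.2)  # kg * m / s
phase_space_point = position + momentum
print(len(phase_space_point))  # 6 -- one particle, six dimensions
```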

By dealing with phase-spaces, physicists became quite used to dealing with higher dimensional geometries. And, naturally, they began to wonder whether the *actual space that we live in might have more dimensions. This idea was first pursued by the Finnish physicist Gunnar Nordström, who, in 1914, tried to use a 4th dimension of space to describe gravity. It didn’t work though. The person to figure out how gravity works was Albert Einstein.

Yes, that guy again. Einstein taught us that gravity does not need an additional dimension of space. Three dimensions of space will do, it’s just that you have to add one dimension of time, and allow all these dimensions to be curved.

But then, if you don’t need extra dimensions for gravity, maybe you can use them for something else.

Theodor Kaluza certainly thought so. In 1921, Kaluza wrote a paper in which he tried to use a fourth dimension of space to describe the electromagnetic force in a very similar way to how Einstein described gravity. But Kaluza used an infinitely large additional dimension and did not really explain why we don’t normally get lost in it.

This problem was solved a few years later by Oskar Klein, who assumed that the 4th dimension of space has to be rolled up to a small radius, so you can’t get lost in it. You just wouldn’t notice if you stepped into it, it’s too small. This idea that electromagnetism is caused by a curled-up 4th dimension of space is now called Kaluza-Klein theory.

I have always found it amazing that this works. You take an additional dimension of space, roll it up, and out comes gravity together with electromagnetism. You can explain both forces entirely geometrically. It is probably because of this that Einstein in his later years became convinced that geometry is the key to a unified theory for the foundations of physics. But at least so far, that idea has not worked out.

Does Kaluza-Klein theory make predictions? Yes, it does. All the electromagnetic fields which go into this 4th dimension have to be periodic so they fit onto the curled-up dimension. In the simplest case, the fields just don’t change when you go into the extra dimension. And that reproduces the normal electromagnetism. But you can also have fields which oscillate once as you go around, then twice, and so on. These are called higher harmonics, like you have in music. So, Kaluza Klein theory makes a prediction which is that all these higher harmonics should also exist.

Why haven’t we seen them? Because you need energy to make this extra dimension wiggle. And the more it wiggles, that is, the higher the harmonics, the more energy you need. Just how much energy? Well, that depends on the radius of the extra dimension. The smaller the radius, the smaller the wavelength, and the higher the frequency. So a smaller radius means you need higher energy to find out if the extra dimension is there. Just how small the radius is, the theory does not tell you, so we don’t know what energy is necessary to probe it. But the short summary is that we have never seen one of these higher harmonics, so the radius must be very small.
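To put rough numbers on this relation between radius and energy, here is a quick estimate using E ≈ ħc/R for the first harmonic. The radii are hypothetical, since, as said, the theory doesn’t fix them.

```python
# Back-of-the-envelope: energy of the first Kaluza-Klein harmonic, E ~ hbar*c/R.
# The radius values below are hypothetical -- the theory does not fix R.

HBAR_C_EV_M = 1.9732697e-7  # hbar*c in eV * meters

def first_harmonic_energy_ev(radius_m):
    return HBAR_C_EV_M / radius_m

# A radius of ~1e-19 m would need roughly TeV energies to probe:
print(first_harmonic_energy_ev(1e-19) / 1e12)  # ~1.97 TeV
# A micrometer-sized radius would need only ~0.2 eV, which is why such
# large radii for gauge forces were ruled out long ago:
print(first_harmonic_energy_ev(1e-6))          # ~0.2 eV
```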

Oskar Klein himself, by the way, was really modest about his theory. He wrote in 1926:
"Ob hinter diesen Andeutungen von Möglichkeiten etwas Wirkliches besteht, muss natürlich die Zukunft entscheiden."

("Whether these indications of possibilities are built on reality has of course to be decided by the future.")

But why don’t we actually use Kaluza-Klein theory instead of electromagnetism? Because Kaluza-Klein theory has some serious problems.

The first problem is that while the geometry of the additional dimension correctly gives you electric and magnetic fields, it does not give you charged particles, like electrons. You still have to put those in. The second problem is that the radius of the extra dimension is not stable. If you perturb it, it can begin to increase, and that can have observable consequences which we have not seen. The third problem is that the theory is not quantized, and no one has figured out how to quantize geometry without running into problems. You can however quantize plain old electromagnetism without problems.

We also know today of course that the electromagnetic force actually combines with the weak nuclear force into what is called the electroweak force. That, interestingly enough, turns out not to be a problem for Kaluza-Klein theory. Indeed, it was shown in the 1960s by Ryszard Kerner that one can do Kaluza-Klein theory not only for electromagnetism, but for any similar force, including the strong and weak nuclear force. You just need to add a few more dimensions.

How many? For the weak nuclear force, you need two more, and for the strong nuclear force another four. So in total, we now have one dimension of time, 3 for gravity, one for electromagnetism, 2 for the weak nuclear force and 4 for the strong nuclear force, which adds up to a total of 11.

In 1981, Edward Witten noticed that 11 happens to be the maximum number of dimensions for supergravity. What happened after this is what we’ll talk about next week.

Saturday, April 03, 2021

Should Stephen Hawking have won the Nobel Prize?

[This is a transcript of the video embedded below.]

Stephen Hawking, who sadly passed away in 2018, has repeatedly joked that he might get a Nobel Prize if the Large Hadron Collider produces tiny black holes. For example, here is a recording of a lecture he gave in 2016:
“Some of the collisions might create micro black holes. These would radiate particles in a pattern that would be easy to recognize. So I might get a Nobel Prize after all.”
The British physicist and science writer Philip Ball, who attended this 2016 lecture, commented:
“I was struck by how unusual it was for a scientist to state publicly that their work warranted a Nobel… [It] gives a clue to the physicist’s elusive character: shamelessly self-promoting to the point of arrogance, and heedless of what others might think.”
I heard Hawking say pretty much exactly the same thing in a public lecture a year earlier in Stockholm. But I had an entirely different reaction. I didn’t think of his comment as arrogant. I thought he was explaining something which few people knew about. And I thought he was right: had the Large Hadron Collider seen these tiny black holes decay, he would almost certainly have gotten a Nobel Prize. But I also thought that this was not going to happen. He was much more likely to win a Nobel Prize for something else. And he almost did.

Just exactly what might Hawking have won the Nobel Prize for, and should he have won it? That’s what we will talk about today.

In nineteen-seventy-four, Stephen Hawking published a calculation that showed black holes are not perfectly black, but they emit thermal radiation. This radiation is now called “Hawking radiation”. Hawking’s calculation shows that the temperature of a black hole is inversely proportional to the mass of the black hole. This means, the larger the black hole, the smaller its temperature, and the harder it is to measure the radiation. For the astrophysical black holes that we know of, the temperature is way, way too small to be measurable. So, the chances of him ever winning a Nobel Prize for black hole evaporation seemed very small.
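To see just how small, here is the calculation for a black hole with the mass of the sun, using Hawking’s formula T = ħc³/(8πGMk_B):

```python
import math

# Hawking temperature T = hbar * c^3 / (8 * pi * G * M * k_B), inversely
# proportional to the black hole mass M. SI units throughout.
HBAR, C, G, K_B = 1.054571817e-34, 2.99792458e8, 6.67430e-11, 1.380649e-23
M_SUN = 1.989e30  # kg

def hawking_temperature_k(mass_kg):
    return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

# A solar-mass black hole is colder than the cosmic microwave background
# (~2.7 K) by roughly eight orders of magnitude -- hopeless to measure:
print(hawking_temperature_k(M_SUN))  # ~6e-8 K
```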

But, in the late nineteen-nineties, the idea came up that tiny black holes might be produced in particle collisions at the Large Hadron Collider. This is only possible if the universe has additional dimensions of space, so not just the three that we know of, but at least five. These additional dimensions of space would have to be curled up to small radii, because otherwise we would already have seen them.

Curled up extra dimensions. Haven’t we heard that before? Yes, because string theorists talk about curled up dimensions all the time. And indeed, string theory was the major motivation to consider this hypothesis of extra dimensions of space. However, I have to warn you that string theory does NOT tell you these extra dimensions should have a size that the Large Hadron Collider could probe. Even if they exist, they might be much too small for that.

Nevertheless, if you just assume that the extra dimensions have the right size, then the Large Hadron Collider could have produced tiny black holes. And since they would have been so small, they would have been really, really hot. So hot, indeed, they’d decay pretty much immediately. To be precise, they’d decay in a time of about ten to the minus twenty-three seconds, long before they’d reach a detector.

But according to Hawking’s calculation, the decay of these tiny black holes should proceed by a very specific pattern. Most importantly, according to Hawking, black holes can decay into pretty much any other particle. And there is no other particle decay which looks like this. So, it would have been easy to see black hole decays in the data. If they had happened. They did not. But if they had, it would almost certainly have gotten Hawking a Nobel Prize.

However, the idea that the Large Hadron Collider would produce tiny black holes was never very plausible. That’s because there was no reason the extra dimensions, in case they exist to begin with, should have just the right size for this production to be possible. The only reason physicists thought this would be the case was an argument from mathematical beauty called “naturalness”. I have explained the problems with this argument in an earlier video, so check this out for more.

So, yeah, I don’t think tiny black holes at the Large Hadron Collider was Hawking’s best shot at a Nobel Prize.

Are there other ways you could see black holes evaporate? Not really. Without these curled up extra dimensions, which do not seem to exist, we can’t make black holes ourselves. Without extra dimensions, the energy density that we’d have to reach to make black holes is way beyond our technological limitations. And the black holes that are produced in natural processes are too large, and then too cold to observe Hawking radiation.

One thing you *can do, though, is simulating black holes with superfluids. This has been done by the group of Jeff Steinhauer in Israel. The idea is that you can use a superfluid to mimic the horizon of a black hole. If you remember, the horizon of a black hole is a boundary in space, from inside of which light cannot escape. In a superfluid, one does not trap light, but one traps sound waves instead. One can do this because the speed of sound in the superfluid depends on the density of the fluid. And since one can experimentally control this density, one can control the speed of sound.

If one then makes the fluid flow, there’ll be regions from within which the sound waves cannot escape because they’re just too slow. It’s like you’re trying to swim away from a waterfall. There’s a boundary beyond which you just can’t swim fast enough to get away. That boundary is much like a black hole horizon. And the superfluid has such a boundary, not for swimmers, but for sound waves.

You can also do this with a normal fluid, but you need the superfluid so that the sound has the right quantum properties, as it does in Hawking’s calculation. And in a series of really neat experiments, Steinhauer’s group has shown that these sound waves in the superfluid indeed have the properties that Hawking predicted. That’s because Hawking’s calculation applies to the superfluid in just exactly the same way it applies to real black holes.

Could Hawking have won a Nobel Prize for this? I don’t think so. That’s because mimicking a black hole with a superfluid is cool, but of course it’s not the real thing. These experiments are a type of quantum simulation, which means they demonstrate that Hawking’s calculation is correct. But the measurements on superfluids cannot demonstrate that Hawking’s prediction is correct for real black holes.

So, in all fairness, it never seemed likely Hawking would win a Nobel Prize for Hawking radiation. It’s just too hard to measure. But that wasn’t the only thing Hawking did in his career.

Before he worked on black hole evaporation, Hawking worked with Penrose on the singularity theorems. Penrose’s theorem showed that, in contrast to what most physicists believed at the time, black holes are a pretty much unavoidable consequence of stellar collapse. Before that, physicists thought black holes are mathematical curiosities that would not be produced in reality. It was only because of the singularity theorems that black holes began to be taken seriously. Eventually astronomers looked for them, and now we have solid experimental evidence that black holes exist. Hawking applied the same method to the early universe to show that the Big Bang singularity is likewise unavoidable, unless General Relativity somehow breaks down. And that is an absolutely amazing insight about the origin of our universe.

I made a video about the history of black holes two years ago in which I said that the singularity theorems are worth a Nobel Prize. And indeed, Penrose was one of the recipients of the 2020 Nobel Prize in physics. If Hawking had not died two years earlier, I believe he would have won the Nobel Prize together with Penrose. Or maybe the Nobel Prize committee just waited for him to die, so they wouldn’t have to think about just how to disentangle Hawking’s work from Penrose’s? We’ll never know.

Does it matter that Hawking did not win a Nobel Prize? Personally, I think of the Nobel Prize first and foremost as an opportunity to celebrate scientific discoveries. The people who we think might win this prize are highly deserving with or without an additional medal. And Hawking didn’t need a Nobel Prize, he’ll be remembered without it.

Saturday, March 27, 2021

Is the universe REALLY a hologram?

[This is a transcript of the video embedded below.]

Do we live in a hologram? String theorists think we do. But what does that mean? How do holograms work, and how are they related to string theory? That’s what we will talk about today.

In science fiction movies, holograms are 3-dimensional, moving images. But in reality, the technology for motion holograms hasn’t caught up with imagination. At least so far, holograms are still mostly stills.

The holograms you are most likely to have seen are not like those in the movies. They are not a projection of an object into thin air – however that’s supposed to work. Instead, you normally see a three-dimensional object above or behind a flat film. Small holograms are today frequently used as a security measure on credit cards, ID cards, or even banknotes, because they are easy to see, but difficult to copy.

If you hold such a hologram up to the light, you will see that it seems to have depth, even though it is printed on a flat surface. With photographs, we are limited to the one perspective from which the picture was taken, which is why they look flat. But you can tilt holograms and observe them from different angles, as if you were examining a three-dimensional object.

Now, these holograms on your credit cards, or the ones that you find on postcards or book covers, are not “real” holograms. They are actually composed of several 2-dimensional images and depending on the angle, a different image is reflected back at you, which creates the illusion of a 3-dimensional image.

In a real hologram the image is indeed 3-dimensional. But the market for real holograms is small, so they are hard to come by, even though the technology to produce them is straightforward. A real hologram looks like this.

Real holograms actually encode a three-dimensional object on a flat surface. How is this possible? The answer is interference.

Light is electromagnetic waves, so it has crests and troughs. And a key property of waves is that they can be overlaid and then amplify or wash out each other. If two waves are overlaid so that two crests meet at the same point, that will amplify the wave. This is called constructive interference. But if a crest meets a trough, the waves will cancel. This is called destructive interference.

Now, we don’t normally see light cancelling out other light. That’s because to see interference one needs very regular light, where the crests and troughs are neatly aligned. Sunlight or LED light doesn’t have that property. But laser light has it, and so laser light can be made to interfere.
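Interference is easy to check numerically. Here is a minimal sketch, adding two equal sine waves with and without a half-wavelength shift:

```python
import math

# Two overlaid waves with equal amplitude: crests on crests amplify
# (constructive interference), a half-wavelength shift cancels (destructive).
def superpose(phase_shift, t=0.3):
    """Sum of two unit-amplitude sine waves with a relative phase shift."""
    return math.sin(2 * math.pi * t) + math.sin(2 * math.pi * t + phase_shift)

print(superpose(0.0))      # in phase: twice the single-wave amplitude
print(superpose(math.pi))  # shifted by half a wavelength: cancels to ~0
```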

And this interference can be used to create holograms. For this, one first splits a laser beam in two with a semi-transparent glass or crystal, called a beam-splitter, and makes each beam broader with a diverging lens. Then, one aims one half of the beam at the object that one wants to take an image of. The light will not just bounce off the object in one single direction, but it will scatter in many different directions. And the scattered light contains information about the surface of the object. Then, one recombines the two beams and captures the intensity of the light with a light-sensitive screen.

Now, remember that laser light can interfere. This means the intensity at each point of the screen depends on whether the interference was destructive or constructive, which in turn depends on just where the object was located and how it was shaped. So, the screen has captured the full three-dimensional information. To view the hologram, one develops the film and shines light onto it at the same wavelength at which the image was taken, which reproduces the 3-dimensional image.

To understand this in a little more detail, let us look at the image on the screen if one uses a very small point-like object. It looks like this. It’s called a zone plate. The intensity and width of the rings depends on the distance between the point-like object and the screen, and the wavelength of the light. But any object is basically a large number of point-like objects, so the interference image on the screen is generally an overlap of many different zone plates with these concentric rings.

Now comes the amazing thing about holograms. Every part of the screen receives information from every part of the object. As a consequence, if you develop the image to get the hologram, you can take it apart into pieces, and each piece will still recreate the whole 3-dimensional object. To understand better how this works, look again at the zone plate of a single point-like object. If you have only a small piece that contains part of the rings, you can infer the rest of the pattern, though it gets a little more difficult. If you have a plate that overlaps many zone plates, this is still possible. So, at least mathematically, you can reconstruct the entire object from any part of the holographic plate. In reality, the quality of the image will go down.
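If you want to play with the zone-plate pattern yourself, the ring radii follow roughly r_m = sqrt(m × wavelength × distance). Here is a sketch with values typical for a helium-neon laser; the numbers are illustrative, not from any particular setup.

```python
import math

# Rings of a Fresnel zone plate: the interference pattern of a point-like
# object at distance d behind the film. The m-th ring sits at roughly
# r_m = sqrt(m * wavelength * d). Wavelength and distance are illustrative
# values for a helium-neon laser, not from any real setup.
def ring_radius_mm(m, wavelength_m=633e-9, distance_m=0.1):
    return math.sqrt(m * wavelength_m * distance_m) * 1e3  # in millimeters

for m in (1, 2, 3, 4):
    print(f"ring {m}: {ring_radius_mm(m):.3f} mm")
# The radius grows only as sqrt(m), so the rings crowd together further out,
# and even a small piece of the plate samples many rings of the pattern.
```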

So, now that you know how real holograms work, let us talk about the idea that the universe is a hologram.

When string theorists claim that our universe is a hologram, they mean the following. Our universe has a positive cosmological constant. But mathematically, universes with a negative cosmological constant are much easier to work with. So, this is what string theorists usually look at. These universes with a negative cosmological constant are called Anti-de Sitter spaces, and into these Anti-de Sitter spaces they put supersymmetric matter. To the best of current knowledge, our universe is not Anti-de Sitter and matter is not supersymmetric, but mathematically, you can certainly do that.

For some specific examples, it has then been shown that the gravitational theory in such an Anti-de Sitter universe is mathematically equivalent to a different theory on the conformal boundary of that universe. What the heck is the conformal boundary of the universe? Well, our actual universe doesn’t have one. But these Anti-de Sitter spaces do. Just exactly how they are defined isn’t all that important. You only need to know that this conformal boundary has one dimension of space less than the space it is a boundary of.

So, you have an equivalence between two theories in a different number of dimensions of space. A gravitational theory in this anti-De Sitter space with the weird matter. And a different theory on the boundary of that space, which also has weird matter. And just so you have heard the name: The theory on the boundary is what’s called a conformal field theory, and the whole thing is known as the Anti-de Sitter – Conformal Field Theory duality, or AdS/CFT for short.

This duality has been mathematically confirmed for some specific cases, but pretty much all string theorists seem to believe it is much more generally valid. In fact, a lot of them seem to believe it is valid even in our universe, even though there is no evidence for that, neither observational nor mathematical. In this most general form, the duality is simply called the “holographic principle”.

If the holographic principle were correct, it would mean that the information about any volume in our universe is encoded on the boundary of that volume. That’s remarkable because naively, you’d think the amount of information you can store in a volume of space grows much faster than the information you can store on the surface. But according to the holographic principle, the information you can put into the volume somehow isn’t what we think it is. It must have more correlations than we realize. So if the holographic principle were true, that would be very interesting. I talked about this in more detail in an earlier video.
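Here is a back-of-the-envelope comparison of the two ways of counting information, for a one-meter box with Planck-sized cells. The counting is order-of-magnitude only.

```python
# Naive vs. holographic counting of information in a region of size L.
# Naively, the number of bits grows with the volume, (L/l)^3 for some
# smallest cell size l. A holographic bound instead grows with the surface
# area in Planck units. Order-of-magnitude estimate, nothing more.
L_PLANCK = 1.6e-35  # meters

def naive_bits(size_m, cell_m=L_PLANCK):
    return (size_m / cell_m) ** 3

def holographic_bits(size_m):
    # Bekenstein-Hawking style bound: ~ area / (4 * l_planck^2),
    # for the 6 faces of a cube of side L
    return 6 * size_m**2 / (4 * L_PLANCK**2)

size = 1.0  # a one-meter box
# Naive volume counting exceeds the surface bound by a factor of ~10^34:
print(naive_bits(size) / holographic_bits(size))
```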

The holographic principle indeed sounds a little like optical holography. In both cases one encodes information about a volume on a surface with one dimension less. But if you look a little more closely, there are two important differences between the holographic principle and real holography:

First, an optical hologram is not actually captured in two dimensions; the holographic film has a thickness, and you need that thickness to store the information. The holographic principle, on the other hand, is a mathematical abstraction, and the encoding really occurs in one dimension less.

Second, as we saw earlier, in a real hologram, each part contains information about the whole object. But in the mathematics of the holographic universe, this is not the case. If you take only a piece of the boundary, that will not allow you to reproduce what goes on in the entire universe.

This is why I don’t think referring to this idea from string theory as holography is a good analogy. But now you know just exactly what the two types of holography do, and do not have in common.

Saturday, March 20, 2021

Whatever happened to Life on Venus?

[This is a transcript of the video embedded below.]

A few months ago, the headlines screamed that scientists had found signs of life on Venus. But it didn’t take long for other scientists to raise objections. So, just exactly what did they find on Venus? Did they actually find it? And what does it all mean? That’s what we will talk about today.

The discovery that made headlines a few months ago was that an international group of researchers said they’d found traces of a molecule called phosphine in the atmosphere of Venus.

Phosphine is a molecule made of one phosphorus and three hydrogen atoms. On planets like Jupiter and Saturn, pressure and temperature are so high that phosphine can form by coincidental chemical reactions, and indeed phosphine has been observed in the atmosphere of these two planets. On planets like Venus, however, the pressure isn’t remotely large enough to produce phosphine this way.

And the only other known processes to create phosphine are biological. On Earth, for example, which in size and distance to the Sun isn’t all that different to Venus, the only natural production processes for phosphine are certain types of microbes. Lest you think this means that phosphine is somehow “good for life”, I should add that the microbes in question live without oxygen. Indeed, phosphine is toxic for forms of life that use oxygen, which is most of life on earth. In fact, phosphine is used in the agricultural industry to kill rodents and insects.

So, the production of phosphine on Venus at fairly low atmospheric pressure seems to require life in some sense, which is why the claim that there’s phosphine on Venus is BIG. It could mean there’s microbial life on Venus. And just in case microbial life doesn’t excite you all that much, this would be super-interesting because it would give us a clue to what the chances are that life evolves on other planets in general.

So, just exactly what did they find?

The suspicion that phosphine might be present on Venus isn’t entirely new. The researchers first saw something that could be phosphine in two-thousand and seventeen in data from the James Clerk Maxwell Telescope, which is a radio telescope in Hawaii. However, this signal was not particularly good, so they didn’t publish it. Instead they waited for more data from the ALMA telescope in Chile. Then they published a combined analysis of the data from both telescopes in Nature Astronomy.

Here’s what they did. One can look for evidence of molecules by exploiting that each molecule reacts to light at different wave-lengths. A molecule may not react at all to some wave-lengths, but absorb others, because they cause the molecule to vibrate or rotate. It’s like each molecule has very specific resonance frequencies. You know how, when you’re in an airplane and the engine’s being turned up, at a certain pitch the whole plane shakes? That’s a resonance. For the plane it happens at certain wave-lengths of sound. For molecules it happens at certain wave-lengths of light.

So, if light passes through a gas, like the atmosphere of Venus, then just how much light at each wave-length passes through depends on what molecules are in the gas. Each molecule has a very specific signature, and that makes the identification possible.
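To make the identification idea concrete, here’s a minimal sketch in Python. The two catalog frequencies are the ones discussed in this transcript; the tolerance value is made up for illustration, and real line catalogs are of course far larger.

```python
# Sketch of how a spectral line is identified: compare an observed
# absorption frequency against a catalog of known molecular lines.
# Catalog values from the discussion in this post; tolerance is illustrative.

LINE_CATALOG_GHZ = {
    "phosphine (PH3)": 266.944,
    "sulfur dioxide (SO2)": 266.943,
}

def candidate_molecules(observed_ghz, tolerance_ghz):
    """Return all catalog molecules within the frequency tolerance."""
    return [name for name, freq in LINE_CATALOG_GHZ.items()
            if abs(freq - observed_ghz) <= tolerance_ghz]

# With a coarse frequency resolution, both molecules match the observed
# line -- which is exactly the ambiguity discussed in the text.
print(candidate_molecules(266.9435, tolerance_ghz=0.01))
```

With a fine enough resolution, only one candidate survives; with a coarse one, you can’t tell the molecules apart.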

At least in principle. In practice… it’s difficult. That’s because different molecules can have very similar absorption lines.

For example, the phosphine absorption line which all the debate is about has a frequency of two-hundred sixty-six point nine four four Gigahertz. But sulfur dioxide has an absorption line at two-hundred sixty-six point nine four three Gigahertz, and sulfur dioxide is really common in the atmosphere of Venus. That makes it quite a challenge to find traces of phosphine.
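Just how close these two lines are can be checked with a quick back-of-the-envelope calculation (my own arithmetic, not part of the original analysis): the fractional separation corresponds to a Doppler velocity of only about one kilometer per second.

```python
# How close are the two absorption lines from the text?
# Frequencies in GHz, taken from the discussion above.
c_km_s = 299_792.458          # speed of light in km/s

f_ph3 = 266.944               # phosphine line
f_so2 = 266.943               # sulfur dioxide line

delta_f = f_ph3 - f_so2                  # separation in GHz
fractional = delta_f / f_ph3             # dimensionless separation
doppler_km_s = fractional * c_km_s       # velocity shift that would
                                         # smear one line into the other

print(f"separation: {delta_f * 1000:.1f} MHz")
print(f"fractional separation: {fractional:.2e}")
print(f"equivalent Doppler velocity: {doppler_km_s:.2f} km/s")
```

So even a modest Doppler shift would move one line onto the other, which is why disentangling them is so delicate.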

But challenges are there to be met. The astrophysicists estimated the contribution from sulfur dioxide from other lines which this molecule should also produce.

They found that these other lines were almost invisible. So they concluded that the absorption in the frequency range of interest had to be mostly due to phosphine, and they estimated the amount at about seven to twenty parts per billion, that is, seven to twenty molecules of phosphine per billion molecules of anything.

It’s this discovery which made the big headlines. The results they got for the phosphine amount from the two different telescopes are a little different, and such an inconsistency is somewhat of a red flag. But then, these measurements were made some years apart and the atmosphere of Venus could have undergone changes in that period, so it’s not necessarily a problem.

Unfortunately, after publishing their analysis, the team learned that the data from ALMA had not been processed correctly. It was not their fault, but it meant they had to redo their analysis. With the corrected data, the amount of phosphine they claimed to see fell to something between 1 and 4 parts per billion. Less, but still there.

Of course such an important finding attracted a lot of attention, and it didn’t take long for other researchers to take a close look at the analysis. Not only was finding phosphine surprising, not finding sulfur dioxide was unusual too; sulfur dioxide had been detected many times in the atmosphere of Venus, in amounts about 10 times higher than what the phosphine-discovery study claimed.

Already in October last year, a paper came out that argued there’s no signal at all in the data, and that said the original study used an overly complicated twelve-parameter fit that fooled them into seeing something where there was nothing. This criticism has since been published in a peer reviewed journal. And by the end of January another team put out two papers in which they pointed out several other problems with the original analysis.

First, they used a model of the atmosphere of Venus and calculated that the alleged phosphine absorption comes from altitudes higher than eighty kilometers. Problem is, at such high altitudes, phosphine is incredibly unstable because ultraviolet light from the sun breaks it apart quickly. They estimated it would have a lifetime of under one second! This means for phosphine to be present on Venus in the observed amounts, it would have to be produced at a rate higher than the production of oxygen by photosynthesis on Earth. You’d need a lot of bacteria to get that done.

Second, they claim that the ALMA telescope should not have been able to see the signal at all, or at least only a much smaller signal, because of an effect called line dilution. Line dilution can occur if one has a telescope with many separate dishes, like ALMA. A signal that’s smeared out over many of the dishes, like the signal from the atmosphere of Venus, can then be affected by interference effects.

According to estimates in the new paper, line dilution should suppress the signal in the ALMA telescope by about a factor of 10 to 20, in which case it would not be visible at all. And indeed, they claim that the absence of any signal is entirely consistent with the data from the second telescope. This criticism, too, has now passed peer review.

What does it mean?

Well, the authors of the original study might reply to this criticism, and so it will probably take some time until the dust settles. But even if the criticism is correct, this would not mean there’s no phosphine on Venus. As they say, absence of evidence is not evidence of absence. If the criticism is correct, then the observations, exactly because they probe only high altitudes where phosphine is unstable, can neither exclude, nor confirm, the presence of phosphine on Venus. And so, the summary is, as so often in science: More work is needed.

Wednesday, March 17, 2021

Live Seminar about Dark Matter on Friday

I will give an online seminar about dark matter and modified gravity on Friday at 4pm CET. If you want to attend, the link is here:

I'm speaking in English (as you can see, half in American, half in British English, as usual), but the seminar will be live translated to Spanish, for which there's a zoom link somewhere.

Saturday, March 13, 2021

Can we stop hurricanes?

[This is a transcript of the video embedded below.]

Hurricanes are among the most devastating natural disasters. That’s because hurricanes are enormous! A medium-sized hurricane extends over an area about the size of Texas. On a globe they’ll cover 6 to 12 degrees latitude. And as they blow over land, they leave behind wide trails of destruction, caused by strong winds and rain. Damages from hurricanes regularly exceed billions of US dollars. Can’t we do something about that? Can’t we blast hurricanes apart? Redirect them? Or stop them from forming in the first place? What does science say about that? That’s what we’ll talk about today.

Donald Trump, the former president of the United States, has reportedly asked repeatedly whether it’s possible to get rid of hurricanes by dropping nuclear bombs on them. His proposal was swiftly dismissed by scientists and the media alike. Their argument can be summed up with “you can’t” and even if you could “it’d be a bad idea.” Trump then denied he ever said anything, the world forgot about it, and here we are, still wondering whether there’s something we can do to stop hurricanes.

Trump’s idea might sound crazy, but he was not the first to think of nuking a hurricane, and he probably won’t be the last. And I think trying to prevent hurricanes isn’t as crazy as it sounds.

The idea to nuke a hurricane came up already right after nuclear weapons were deployed for the first time, in Japan in August 1945. August is in the middle of the hurricane season in Florida. The mayor of Miami Beach, Herbert Frink, made the connection. He asked President Harry Truman about the possibility of using the new weapon to fight against hurricanes. And, sure enough, the Americans looked into it.

But they quickly realized that while the energy released by a nuclear bomb was gigantic compared to all other kinds of weapons, it was still nothing compared to the energies that build up in hurricanes. For comparison: The atomic bombs dropped on Japan released an energy of about 20 kilotons each. A typical hurricane releases about 10,000 times as much energy – per hour. The total power of a hurricane is comparable to the entire global power consumption. That’s because hurricanes are enormous!
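You can redo this comparison yourself. The only inputs are the 20 kilotons per bomb and the factor 10,000 per hour quoted above, plus the standard TNT-to-Joule conversion:

```python
# Back-of-the-envelope comparison from the text: a 20-kiloton bomb
# versus a hurricane releasing about 10,000 times that energy per hour.
TNT_J_PER_KILOTON = 4.184e12   # standard conversion: 1 kt TNT in Joules

bomb_J = 20 * TNT_J_PER_KILOTON            # one Hiroshima-scale bomb
hurricane_J_per_hour = 10_000 * bomb_J     # as stated in the text
hurricane_W = hurricane_J_per_hour / 3600  # average power in watts

print(f"bomb energy:      {bomb_J:.1e} J")
print(f"hurricane power:  {hurricane_W:.1e} W")
print(f"bombs needed per hour just to keep up: {hurricane_J_per_hour / bomb_J:.0f}")
```

That works out to a few hundred terawatts of heat release, which is why one bomb, however large, barely registers against a hurricane.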

By the way, hurricanes and typhoons are the same thing. The generic term used by meteorologists is “tropical cyclone”. It refers to “a rotating, organized system of clouds and thunderstorms that originates over tropical or subtropical waters.” If they get large enough, they’re then either called hurricanes or typhoons, or they just remain tropical cyclones. But it’s like the difference between an astronaut and a cosmonaut. The same thing!

But back to the nukes. In 1956, an Air Force meteorologist by the name of Jack W. Reed proposed to launch a megaton nuclear bomb – that is, about 50 times the power of the ones dropped on Japan – into a hurricane. Just to see what happened. He argued: “Since a complete theory for the dynamics of hurricanes will probably not be derived by meteorologists for several years, argument pros and con without conclusive foundation will be made over the effects to be expected… Only a full-scale test could prove the results.” In other words, if we don’t do it, we’ll never know just how bad the idea is. As far as the radiation hazard was concerned, Reed claimed it would be negligible: “An airburst would cause no intense fallout,” never mind that a complete theory for the dynamics of hurricanes wasn’t available then and still isn’t.

Reed’s proposal was dismissed by both the military and the scientific community. The test never took place, but the proposal is interesting nevertheless, because Reed went to some length to explain how to go about nuking a hurricane smartly.

To understand what he was trying to get at, let’s briefly talk about how hurricanes form. Hurricanes can form over the ocean when the water temperature is high enough. Trouble begins at around 26 degrees Celsius or 80 degrees Fahrenheit. The warm water evaporates and rises. As it rises it cools and creates clouds. This tower of water-heavy clouds begins to spin because the Coriolis force, which comes from the rotation of planet Earth, acts on the air that’s drawn in, and the more the clouds spin, the better they get at drawing in more air. As the spinning accelerates, the center of the hurricane clears out and leaves behind a mostly calm region that’s usually a few dozen miles in diameter and has very low barometric pressure. This calm center is called the “eye” of the hurricane.

Reed now argued that if one detonates a megaton nuclear weapon directly in the eye of a hurricane, this would blast away the warm air that feeds the cycle, increase the barometric pressure, and prevent the storm from gathering more strength.

Now, the obvious problem with this idea is that even if you succeeded, you’d deposit radioactive debris in clouds that you just blasted all over the globe, congratulations. But even leaving aside the little issue with the radioactivity, it almost certainly wouldn’t work because - hurricanes are enormous.

It’s not only that you’re still up against a power that exceeds that of your nuclear bomb by three orders of magnitude, it’s also that an explosion doesn’t actually move a lot of air from one place to another, which is what Reed envisioned. The blast creates a shock wave – that’s bad news for everything in the way of that shock – but it does little to change the barometric pressure after the shock wave has passed through.

So if nuclear bombs are not the way to deal with hurricanes, can we maybe make them rain off before they make landfall? This technique is called “cloud seeding” and we talked about this in a previous video. If you remember, there are two types of cloud seeding, one that creates snow or ice, and one that creates rain.

The first one, called glaciogenic seeding, was indeed tried on hurricanes by Homer Simpson. No, not this Homer, but a man by the name of Robert Homer Simpson, who in 1962 was the first director of the American Project Stormfury, which had the goal of weakening hurricanes.

The Americans actually did spray a hurricane with silver iodide and observed afterwards that the hurricane indeed weakened. Hooray! But wait. Further research showed that hurricane clouds contain very few supercooled water droplets, so the method couldn’t work even in theory. Instead, it turned out that hurricanes frequently undergo similar changes without intervention, so the observation was most likely coincidence. Project Stormfury was canceled in 1983.

What about hygroscopic cloud seeding, which works by spraying clouds with particles that absorb water, to make the clouds rain off? The effects of this have been studied to some extent by observing natural phenomena. For example, dust that’s blown up over the Sahara Desert can be transported by winds over long distances. Though much remains to be understood, some observations seem to indicate that interactions with this dust make it easier for the clouds to rain off, which naturally weakens hurricanes.

So why don’t we try something similar? Again, the problem is that hurricanes are enormous! You’d need a whole army of airplanes to spray the clouds, and even then that would almost certainly not make the hurricanes disappear, but merely weaken them.

There’s a long list of other things people have considered to get rid of hurricanes. For example, spraying the upper layers of a hurricane with particles that absorb sunlight to warm up the air, and thereby reduce the updraft. But again, the problem is that hurricanes are enormous! Keep in mind, you’d have to spray an area about the size of Texas.

A similar idea is to prevent the air above the ocean from evaporating and feeding the growth of the hurricane, for example by covering the ocean surface with oil films. The obvious problem with this idea is that, well, now you have all that oil on the ocean. But also, some small-scale experiments have shown that the oil-cover tends to break up, and where it doesn’t break up, it can actually aid the warming of the water, which is exactly what you don’t want.

How about we cool the ocean surface instead? This idea has been pursued for example by Bill Gates, who, in 2009, together with a group of scientists and entrepreneurs patented a pump system that would float in the ocean and pump cool water from deep down to the surface. In 2017 the Norwegian company SINTEF put forward a similar proposal. The problem with this idea is, guess what, hurricanes are enormous! You’d have to get a huge number of these pumps in the right place at the right time.

Another seemingly popular idea is to drag icebergs from the poles to the tropics to cool the water. I leave it to you to figure out the logistics for making this happen.

Yet again other people have argued that one doesn’t actually have to blow apart a hurricane to get rid of it, one merely has to detonate a nuclear bomb strategically so that the hurricane changes direction. The problem with this idea is that no one wants multiple nations to play nuclear billiards on the oceans.

As you have seen, there are lots of ideas, but the key problem is that hurricanes are enormous!

And that means the most promising way to prevent them is to intervene before they get too large. Hurricanes don’t suddenly pop out of nowhere, they take several days to form and usually arise from storms in the tropics which also don’t pop out of nowhere.

What the problem then comes down to is that meteorologists can’t presently predict well enough, or long enough in advance, just which regions will go on to form hurricanes. But, as you have seen, researchers have tried quite a few methods to interfere with the feedback cycle that grows hurricanes, and some of them actually work. So, if we could tell just when and where to interfere, that might actually make a difference.

My conclusion therefore is: If you want to prevent hurricanes, you don’t need larger bombs, you need to invest in better weather forecasts.

Saturday, March 06, 2021

Do Complex Numbers Exist?

[This is a transcript of the video embedded below.]

When the world seems particularly crazy, I like looking into niche-controversies. A case where the nerds argue passionately over something that no one knew was controversial in the first place. In this video, I want to pick up one of these super-niche nerd fights: Are complex numbers necessary to describe the world as we observe it? Do they exist? Or are they just a mathematical convenience? That’s what we’ll talk about today.

So the recent controversy broke out when a paper appeared on the preprint server with the title “Quantum physics needs complex numbers”. The paper contains a proof for the claim in the title, in response to an earlier claim that one can do without the complex numbers.

What happened next is that the computer scientist Scott Aaronson wrote a blogpost in which he called the paper “striking”. But the responses were, well, not very enthusiastic. They ranged from “why fuss about it” to “bullshit” to “it’s missing the point.”

We’ll look at the paper in a moment, but first I will briefly summarize what we’re even talking about, so that no one’s left behind.

The Math of Complex Numbers

You probably remember from school that complex numbers are what you need to solve equations like x squared equals minus 1. You can’t solve that equation with the real numbers that we are used to. Real numbers are numbers that can have infinitely many digits after the decimal point, like square root of 2 and π, but they also include integers and fractions and so on. You can’t solve this equation with real numbers because they’ll always square to a positive number. If you want to solve equations like this, you therefore introduce a new number, usually denoted “i” with the property that it squares to -1.

Interestingly enough, just giving a name to the solution of this one equation and adding it to the set of real numbers turns out to be sufficient to make all algebraic equations solvable. It doesn’t matter how long or how complicated the equation, you can always write all its solutions as a+ib, where a and b are real numbers.
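You can see this at work in Python, which has complex numbers built in (1j plays the role of i):

```python
import cmath

# x^2 = -1 has no real solution, but with i it does:
i = cmath.sqrt(-1)
print(i)          # Python writes i as 1j
print(i * i)      # squares to -1

# Any quadratic a*x^2 + b*x + c = 0 becomes solvable once i is available.
def quadratic_roots(a, b, c):
    """Both roots, always expressible as complex numbers p + q*i."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

# x^2 + x + 1 = 0 has no real roots, but two complex ones of the form a+ib:
r1, r2 = quadratic_roots(1, 1, 1)
print(r1, r2)
```

The same goes for equations of any degree: every root can be written as a+ib.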

Fun fact: This doesn’t work for numbers that have infinitely many digits before the point. Yes, that’s a thing, they’re called p-adic numbers. Maybe we’ll talk about this some other time.

Complex numbers are now all numbers of the type a plus i times b, where a and b are real numbers. “a” is called the “real” part, and “b” the “imaginary” part of the complex number. Complex numbers are frequently drawn in a plane, called the complex plane, where the horizontal axis is the real part and the vertical axis is the imaginary part. i itself is by convention in the upper half of the complex plane. But this looks the same as if you draw a map on a grid and name each point with two real numbers. Doesn’t this mean that the complex numbers are just a two-dimensional real vector space?

No, they’re not. And that’s because complex numbers multiply by a particular rule that you can work out by taking into account that the square of i is minus 1. Two complex numbers can be added like they were vectors, but the multiplication law makes them different. Complex numbers are, to use the mathematical term, a “field”, like the real numbers. They have a rule both for addition AND for multiplication. They are not just like that two-dimensional grid.
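Here is the multiplication rule spelled out, checked against Python’s built-in complex type:

```python
# Complex numbers as pairs (a, b) of real numbers, with the
# multiplication rule that follows from i*i = -1:
#   (a + ib)(c + id) = (ac - bd) + i(ad + bc)

def multiply(z, w):
    a, b = z
    c, d = w
    return (a * c - b * d, a * d + b * c)

# Check against Python's built-in complex type:
z, w = (1.0, 2.0), (3.0, 4.0)
print(multiply(z, w))               # the pair version
print(complex(*z) * complex(*w))    # the built-in version

# It's this product rule, not the mere pairing of two reals, that makes
# the complex numbers a field. Plain 2D vectors have no such product.
```

It’s exactly this extra multiplication law that distinguishes the complex plane from an ordinary two-dimensional grid.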

The Physics of Complex Numbers

We use complex numbers in physics all the time because they’re extremely useful. They’re useful for many reasons, but the major reason is this. If you take any real number, let’s call it α, multiply it with i, and put it into an exponential function, you get exp(iα). In the complex plane, this number, exp(iα), always lies on a circle of radius one around zero. And if you increase α, you’ll go around that circle. Now, if you look only at the real or only at the imaginary part of that circular motion, you’ll get an oscillation. And indeed, this exponential function is a sum of a cosine and i times a sine function.

Here’s the thing. If you multiply two of these complex exponentials say, one with α and one with β, you can just add the exponents. But if you multiply two cosines or a sine with a cosine… that’s a mess. You don’t want to do that. That’s why, in physics, we do the calculation with the complex numbers, and then, at the very end, we take either the real or the imaginary part. Especially when we describe electromagnetic radiation, we have to deal with a lot of oscillations, and complex numbers come in very handy.
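A quick numerical check of both statements, Euler’s formula and the add-the-exponents rule:

```python
import cmath
import math

alpha, beta = 0.7, 1.9

# Euler's formula: exp(i*alpha) = cos(alpha) + i*sin(alpha),
# a point on the unit circle in the complex plane.
z = cmath.exp(1j * alpha)
print(abs(z))                       # lies on the circle of radius one
print(z.real - math.cos(alpha))     # real part is the cosine
print(z.imag - math.sin(alpha))     # imaginary part is the sine

# Multiplying two such exponentials just adds the exponents...
lhs = cmath.exp(1j * alpha) * cmath.exp(1j * beta)
rhs = cmath.exp(1j * (alpha + beta))
print(abs(lhs - rhs))               # the two agree

# ...whereas doing the same with trig functions needs the messy product
# formulas, e.g. cos(a)*cos(b) = (cos(a-b) + cos(a+b)) / 2.
```

Taking the real or imaginary part only at the very end of the calculation is exactly the trick described above.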

But we don’t have to use them. In most cases we could do the calculation with only real numbers. It’s just cumbersome. With the exception of quantum mechanics, to which we’ll get in a moment, the complex numbers are not necessary.

And, as I have explained in an earlier video, it’s only if a mathematical structure is actually necessary to describe observations that we can say they “exist” in a scientifically meaningful way. For the complex numbers in non-quantum physics that’s not the case. They’re not necessary.

So, as long as you ignore quantum mechanics, you can think of complex numbers as a mathematical tool, and you have no reason to think they physically exist. Let’s then talk about quantum mechanics.

Complex Numbers in Quantum Mechanics

In quantum mechanics, we work with wave-functions, usually denoted Ψ, which are complex-valued, and the equation that tells us what the wave-function does is the Schrödinger equation. It looks like this. You’ll see immediately there’s an “i” in this equation, which is why the wave-function has to be complex-valued.

However, you can of course take the wave-function and this equation apart into a real and an imaginary part. Indeed, one often does that if one solves the equation numerically. And I remind you that both the real and the imaginary part of a complex number are real numbers. Now, if we calculate a prediction for a measurement outcome in quantum mechanics, then that measurement outcome will also always be a real number. So it looks like you can get rid of the complex numbers in quantum mechanics by splitting the equation into a real and an imaginary part, and that’ll never make a difference for the result of the calculation.
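Here is a minimal numerical sketch of that splitting, for the simplest possible case of a single energy level with ħ set to 1 (the forward-Euler time steps are purely illustrative, not a serious integration scheme):

```python
# The Schrödinger equation for a single energy level (hbar = 1) reads
#   i dpsi/dt = omega * psi,   i.e.   dpsi/dt = -i * omega * psi.
# Writing psi = u + i*v turns this into two coupled REAL equations:
#   du/dt =  omega * v
#   dv/dt = -omega * u
# Stepping both versions forward in time gives the same answer,
# illustrating that the split never changes the result.

omega, dt, steps = 2.0, 1e-4, 10_000

psi = 1.0 + 0.0j          # complex formulation
u, v = 1.0, 0.0           # real-valued formulation

for _ in range(steps):
    psi = psi + dt * (-1j * omega * psi)
    u, v = u + dt * (omega * v), v + dt * (-omega * u)

print(abs(psi - complex(u, v)))   # the two formulations agree
print(abs(psi) ** 2)              # |psi|^2, the measurable quantity, is real
```

The measurable quantity, |Ψ|², comes out as a real number either way.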

This finally brings us to the paper I mentioned in the beginning. What I just said about decomposing the Schrödinger equation is of course correct, but that’s not what they looked at in the paper, that would be rather lame.

Instead they ask what happens with the wave-function if you have a system that is composed of several parts, in the simplest case that would be several particles. In normal quantum mechanics, each of these particles has a wave-function that’s complex-valued, and from these we construct a wave-function for all the particles together, which is also complex-valued. Just what this wave-function looks like depends on which particle is entangled with which. If two particles are entangled, this means their properties are correlated, and we know experimentally that this entanglement-correlation is stronger than what you can do without quantum theory.

The question which they look at in the new paper is then whether there are ways to entangle particles in the normal, complex quantum mechanics that you cannot build up from particles that are described entirely by real valued functions. Previous calculations showed that this could always be done if the particles came from a single source. But in the new paper they look at particles from two independent sources, and claim that there are cases which you cannot reproduce with real numbers only. They also propose a way to experimentally measure this specific entanglement.

I have to warn you that this paper has not yet been peer reviewed, so maybe someone finds a flaw in their proof. But assuming their result holds up, this means if the experiment which they propose finds the specific entanglement predicted by complex quantum mechanics, then you know you can’t describe observations with real numbers. It would then be fair to say that complex numbers exist. So, this is why it’s cool. They’ve figured out a way to experimentally test if complex numbers exist!

Well, kind of. Here is the fineprint: This conclusion only applies if you want the purely real-valued theory to work the same way as normal quantum mechanics. If you are willing to alter quantum mechanics, so that it becomes even more non-local than it already is, then you can still create the necessary entanglement with real valued numbers.

Why is it controversial? Well, if you belong to the shut-up and calculate camp, then this finding is entirely irrelevant. Because there’s nothing wrong with complex numbers in the first place. So that’s why you have half of the people saying “what’s the point” or “why all the fuss about it”. If you, on the other hand, are in the camp of people who think there’s something wrong with quantum mechanics because it uses complex numbers that we can never measure, then you are now caught between a rock and a hard place. Either embrace complex numbers, or accept that nature is even more non-local than quantum mechanics.

Or, of course, it might be that the experiment will not agree with the predictions of quantum mechanics, which would be the most exciting of all possible outcomes. Either way, I am sure that this is a topic we will hear about again.

Tuesday, March 02, 2021

[Guest Post] Problems with Eric Weinstein's “Geometric Unity”

[This post is written by Timothy Nguyen, a mathematician and an author of the recently released paper “A Response to Geometric Unity”.]

On April 2, 2020, Eric Weinstein released a video of his 2013 Oxford lecture in which he presents his theory of everything “Geometric Unity” (GU). Since then, Weinstein has appeared in interviews alongside Sabine Hossenfelder, Brian Keating, Lee Smolin, Max Tegmark, and Stephen Wolfram to discuss his theory. 

In these interviews, Weinstein laments that the scientific community is dismissive of GU because he has not released a technical paper, but insists that scientists should be able to understand the substantive content of GU from the lecture alone (see here and here). In fact, Weinstein regards the conventional requirement of writing a paper to be flawed, since he questions the legitimacy of peer review, credit assignment, and institutional recognition (see here, here, here, and here).

Theo, my anonymous physicist coauthor, and I became aware of Weinstein and Geometric Unity through his podcast The Portal. We independently communicated with Weinstein on Discord and we both came to the conclusion that Weinstein was unable to provide an adequate explanation of GU or why it was a compelling theory. 

I also became increasingly skeptical of Weinstein’s claims when I pressed him about his alleged discovery of the Seiberg-Witten equations before Seiberg and Witten (see here, here, here, and here), a set of equations which was the central focus of my PhD thesis and several resultant papers. When I asked Weinstein for certain mathematical details about how he had arrived at the Seiberg-Witten equations, his vague responses led me to doubt his claims. Though Weinstein proposed to host a more in-depth discussion about GU and the requisite math and physics, no such discussion ever materialized.

These difficulties in communicating with Weinstein are what motivated our response paper. Suffice it to say that it was no easy task, as it required repeatedly watching his YouTube lecture and carefully timestamping its content in order to cite the material. These appear as clickable links in our response paper for those who wish to verify that our transcription of Weinstein's presentation is accurate.

Here's the high-level overview of how GU makes a claim towards a Theory of Everything. Essentially, GU asserts that there is a set of equations in 14 dimensions that are to contain the Einstein equations, Dirac equation, and Yang-Mills equations. Because the Einstein equations describe gravity, the Dirac equation accounts for fermions, and the Yang-Mills equations account for gauge theories describing the strong and electroweak forces, all fundamental forces and particle types are therefore superficially accounted for. It is our understanding that it is in this very limited and weak sense that GU attempts to position itself as a Theory of Everything.

The most glaring deficiency in Weinstein’s presentation is that it does not incorporate any quantum theory. Establishing a consistent quantum theory of gravity alone has defied the efforts of nearly a century’s worth of vigorous research and is part of what makes formulating a Theory of Everything an enormous challenge. For GU to overlook this obstacle means that it has no possible claim on being a Theory of Everything.

Our findings are that even aside from its status as Theory of Everything, GU contains serious technical gaps both mathematical and physical. In summary:
  • GU introduces a “shiab” operator that overlooks a required complexification step. Omitting this step creates a mathematical error but including it precludes having a physically sensible quantum theory. 
  • The choice of gauge group for GU naively leads to a quantum gauge anomaly, thereby rendering the quantum theory inconsistent. Any straightforward attempt to eliminate this anomaly would make the shiab operator impossible to define, compounding the previous objection. 
  • The setup of GU asserts that it will have supersymmetry. In 14 dimensions, adopting supersymmetry is highly restrictive. It implies that the proposed gauge group of GU cannot be correct and that the theory as stated is incomplete. 
  •  Essential technical details of GU are omitted, leaving many of the central claims unverifiable.

Coincidentally, the night before we posted our response paper, Weinstein announced on Lex Fridman’s podcast that he plans on releasing a paper on GU on April 1st. We look forward to seeing Weinstein's response to the problems we have identified.