Saturday, June 19, 2021

Asteroid Mining – A fast way to get rich?

[This is a transcript of the video embedded below.]


Asteroids are the new gold mines. Fly to an asteroid, dig up its minerals, become a billionaire. I used to think this was crazy and would never make financial sense. But after a lot of reading I’m now thinking maybe it will work – by letting bacteria do the digging. How do you dig with bacteria? Is it even legal to mine asteroids? And will it happen in your lifetime? That’s what we’ll talk about today.

Space agencies like NASA and ESA have found about 25,000 near-Earth asteroids. In 2020 alone, they discovered 3,000 new ones. About 900 of the known near-Earth asteroids measure 1 kilometer or more across.

What makes asteroids so interesting for mining is that their chemical composition is often similar to what you find in the core of our planet. Metals from the platinum group are very expensive because they are useful but rare in Earth’s crust. On an asteroid, they can be much easier to dig up. And that’s a straight-forward way to get rich – very, very rich.

The asteroid Psyche, for example, has a diameter of about two-hundred kilometers and astrophysicists estimate it’s about ninety percent metal, mostly iron and nickel. Lindy Elkins-Tanton, NASA's lead scientist of the Psyche mission, estimated the asteroid is worth about 10 quintillion US dollars. That’s a 1 followed by 19 zeros. Now imagine that thing was made of platinum...

NASA, by the way, is planning a mission to Psyche that’s to be launched in 2022. Not because of the quintillions but because they want to study its composition to learn more about how planetary systems form.

How does one find an asteroid that’s good for mining? Well, first of all it shouldn’t take forever to get there, so you want one that comes reasonably close to earth every once in a while. You also don’t want it to spin too much because that’d make it very hard to land or operate on it. And finally you want one that’s cheap to get to, so that means there’s a small amount of acceleration needed during the flight, a small “Delta V” as it’s called.

How many asteroids are there that fit these demands? The astrophysicist Martin Elvis from Harvard estimated it using an equation that’s now called the Elvis equation. It’s similar to the Drake equation which one uses to estimate the number of extraterrestrial civilizations by multiplying a lot of factors. And like the Drake equation, the Elvis Equation depends a lot on the assumptions that you make.

In any case, Elvis with his Elvis equation estimates that only about 10 of the known asteroids are worth mining. For the other ones, the cost-benefit ratio doesn’t work out, because they’re either too difficult to reach, don’t contain enough material that’s worth mining, or they’re too small. In principle one could also think of catching small asteroids and bringing them back to earth, but in practice that’s difficult: the small ones are hard to find and track. At least right now this doesn’t work.
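To make the structure of such a Drake-style estimate concrete, here is a minimal sketch in Python. The factor values below are made up purely for illustration; they are not the numbers Elvis actually used.

```python
# Toy version of a Drake-style ("Elvis equation") estimate: multiply the
# number of candidate asteroids by a chain of success fractions.
# All numbers below are illustrative guesses, not Elvis's actual values.

def elvis_estimate(n_near_earth, f_low_delta_v, f_right_type,
                   f_rich_enough, f_large_enough):
    """Expected number of asteroids that are worth mining."""
    return (n_near_earth * f_low_delta_v * f_right_type
            * f_rich_enough * f_large_enough)

n_profitable = elvis_estimate(
    n_near_earth=25_000,    # known near-Earth asteroids (2021)
    f_low_delta_v=0.025,    # cheap to reach (small delta-v)
    f_right_type=0.04,      # metal-rich spectral type
    f_rich_enough=0.5,      # ore concentration high enough
    f_large_enough=0.8,     # big enough to be worth the trip
)
print(f"Asteroids worth mining: about {n_profitable:.0f}")  # about 10
```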

So the first two problems with asteroid mining are finding an asteroid and getting there. The next problem is digging. The gravitational pull on these asteroids is so small that one can’t just drill into the ground, that would simply kick the spacecraft off the asteroid.

The maybe most obvious way around this problem is to anchor the digging machine to the asteroid. Another solution that researchers from NASA are pursuing is shovels that dig in two opposite directions simultaneously, so there’s no net force to kick the machine off the asteroid. NASA is also looking into the option that instead of using one large machine, one could instead use a swarm of small robots that coordinate their tasks.

Another smart idea is optical digging. For this, one uses mirrors and lenses to concentrate sunlight to heat up the surface. This can burn off the surface layer by layer, and the material can then be caught in bags.

And then there is mining with bacteria. Using bacteria for mining is actually not a new idea. It’s called “biomining” and according to some historians the Romans were already doing it 2000 years ago – though they almost certainly didn’t understand how it works since they didn’t know of bacteria to begin with. But we know today that some bacteria eat and decompose minerals. And during their digestion process, they separate off the metal that you want to extract. So basically, the idea is that you ship the bacteria to your asteroid, let them eat dust, and wait for them to digest.

On Earth, biomining is responsible for approximately twenty percent of global copper production and five percent of global gold production. But how can bacteria survive on asteroids? You can’t very well put them into space suits!

For one, you wouldn’t directly dump the bacteria onto the asteroid, but put them into some kind of gel. Still, there are pretty harsh conditions on an asteroid and you need to find the right bacteria for the task. It’s not hopeless. Microbiologists know that some species of bacteria have adapted to temperatures that would easily kill humans. Some bacteria can for example live at temperatures up to one-hundred thirteen degrees Celsius and some at temperatures down to minus twenty-eight Celsius. At low metabolic rates, they’ve been found to survive even at minus forty degrees. And some species of bacteria survive vacuum as low as 10 to the minus five Pascal, which should allow them to survive in the vicinity of a spacecraft.

What about radiation? Again, bacteria are remarkably resistant. The bacterium Deinococcus radiodurans, for example, can cope with ionizing radiation up to twenty kiloGray. For comparison, in humans, acute radiation poisoning sets in at about zero point seven Gray. The bacteria easily tolerate twenty-thousand times as much!

And while the perfect bacterium for space mining hasn’t yet been found, there’s a lot of research going on in this area. It looks like a really promising idea to me.

But, you may wonder now, is it even legal to mine an asteroid? Probably yes. This kind of question is addressed by the nineteen sixty-seven Outer Space Treaty, which has been signed by one hundred eleven countries including the United States, Russia, and almost all of Europe.

According to that treaty, celestial bodies may not be subject to “national appropriation”. However, the treaty does not directly address the extraction of “space resources”, that is stuff you find on those celestial bodies. Some countries have interpreted this to mean that commercial mining does not amount to national appropriation and is permitted.

For example, since 2015 American citizens have the right to possess and sell space resources. Luxembourg established a similar legal framework in 2017. And Russia too is about to pass such legislation.

This isn’t the only development in the area. You can now get a university degree in space resources, for example at the Colorado School of Mines, the University of Central Florida, and the University of Luxembourg. And at the same time several space agencies are planning to visit more asteroids. NASA not only wants to fly to Psyche, it has also sent a spacecraft to the asteroid Bennu, which is expected to bring a sample back to Earth in September twenty twenty-three.

The Chinese National Space Administration has proposed a similar mission to retrieve a sample from the asteroid Kamo’oalewa. And there are several other missions on the horizon.

And then there’s the industry interest. Starting about a decade ago, a number of start-ups appeared with the goal of mining asteroids, such as Planetary Resources and Deep Space Industries. These companies attracted some investors when they appeared but since then they have been struggling to attract more money, and they have basically disappeared – they’ve been bought by other companies which are more interested in their assets than in furthering the asteroid mining adventure.

The issue is that asteroid mining is real business, but it’s business in which there’s still tons of research to do: how to identify which asteroid is a good target, how to get to the asteroid, how to dig on it. And let’s not forget that once you’ve managed to do that, you also have to get the stuff back to earth. It’d take billions in up-front investment and, even in the best case, decades to pay off. So, while it’s promising, it looks unlikely to me that private investors will drive the technological development in this area. It will likely remain up to tax funded space agencies to finance this research for some more time.

Saturday, June 12, 2021

2+2 doesn't always equal 4

[This is a transcript of the video embedded below.]



2 plus 2 makes 5 is the paradigmatic example of an obvious falsehood, a falsehood that everybody knows to be false. Because 2 plus 2 is equal to 1. Right? At the end of this video, you’ll know what I am talking about.

George Orwell famously used two plus two equals five in his novel nineteen eighty-four as an example of an obviously false statement that you can nevertheless make people believe in.

The same example was used already in seventeen eighty-nine by the French priest and writer Emmanuel Sieyès in his essay “what is the third estate”. At this time the third estate – the “bourgeoisie” – made up the big bulk of the population in France, but wasn’t allowed to vote. Sieyès wrote
“[If] it be claimed that under the French constitution two hundred thousand individuals out of twenty-six million citizens constitute two-thirds of the common will, only one comment is possible: it is a claim that two and two make five.” This was briefly before the French revolution.

So you can see there’s a heavy legacy to using two plus two is five as an example of an obvious untruth. And if you claim otherwise that can understandably upset some people. For example, the mathematician Kareem Carr recently came under fire on Twitter for pointing out that 2+2 isn’t always equal to four.

He was accused of being “woke” because he supposedly excused obviously wrong math as okay. Even he was surprised at how upset some people got about it, because his point is of course entirely correct. 2+2 isn’t always equal to four. And I don’t just mean that you could replace the symbol “4” with the symbol “5”. You can do that of course, but that’s not the point. The point is that two plus two is a symbolic representation for the properties of elements of a group. And the result depends on what the 2s refer to and how the mathematical operation “+” is defined.

Strictly speaking, without those definitions 2+2 can be pretty much anything. That’s where the joke comes from that you shouldn’t let mathematicians sort out the restaurant bill, because they haven’t yet agreed on how to define addition.

To see why it’s important to know what you are adding and how, let’s back up for a moment to see where the “normal” addition law comes from. If I have two apples and I add two apples, then that makes four apples. Right? Right.

Ok, but how about this. If I have a glass of water with a temperature of twenty degrees and I pour it together with another glass of water at 20 degrees, then together the water will have a temperature of 40 degrees. Erm. No, certainly not.

If both glasses contained the same amount of water, the final temperature will be one half the sum of the temperatures, so that’d still be 20 degrees, which makes much more sense. Temperatures don’t add by the rule two plus two equals four. And why is that?

It’s because temperature is a measure for the average energy of particles and averages don’t add the same way as apples. The average height of women in the United States is 5 ft 4 inches, and that of men 5 ft 9 inches, but that doesn’t mean that the average American has a height of 11 ft 1. You have to know what you’re adding to know how to add it.
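Written as a formula, mixing two amounts of water (ignoring heat losses and assuming both glasses contain plain water) gives a mass-weighted average rather than a sum:

$$ T_{\text{final}} = \frac{m_1 T_1 + m_2 T_2}{m_1 + m_2} $$

For two equal masses at 20 degrees each, that’s (20 + 20)/2 = 20 degrees.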

Another example. Suppose you switch on a flashlight. The light moves at, well, the speed of light. And as you know the speed of light is the same for all observers. We learned that from Albert Einstein. Yes, that guy again. Now suppose I switch on the flashlight while you come running at me at, say, ten kilometers per hour. At what velocity is the light coming at *you? Well, that’s the speed of light plus ten kilometers per hour. Right? Erm, no. Because that’d be faster than the speed of light. What’s going on?

What’s going on is that velocities don’t add like apples either. They only approximately do so if all the velocities involved are much smaller than the speed of light. Strictly speaking, they have to be added using the following formula.
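For reference, the formula in question is the standard relativistic velocity-addition law:

$$ w = \frac{u + v}{1 + \dfrac{u\,v}{c^2}} $$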

Here u and v are the two velocities that you want to add and w is the result. C is the speed of light. You see immediately that if one of the velocities, say u, is itself the speed of light, then the resulting velocity stays the speed of light.

So, if you add something to the speed of light, the speed of light doesn’t change. If you come running at me, the light from my flashlight still comes at you with the speed of light.

Indeed, if you add the speed of light to the speed of light because maybe you want to know the velocity at which two light beams approach each other head on, you get c plus c equals c. So, in units of the speed of light, according to Einstein, 1+1 is 1.

Those were some examples from physics of quantities that simply have different addition laws. Here is another one from mathematics. Suppose you want to add two numbers that are elements of a finite group, to keep things simple, say one with only three elements. We can give these elements the numbers zero, one, and two.

We can then define an addition rule on this group, which I’ll write as a plus with a circle around it, to make clear it’s not the usual addition. This new addition rule works like this: take the usual sum of two numbers, then divide the result by three and take the remainder.

So, for example, 1+2 = 3, divide by three, the remainder is 0. This addition law is defined so that it keeps us within the group. And with this addition law, you have 1 plus 2 equals 0. By the same rule, 2 plus 2 equals one.

I know this looks odd, but it’s completely standard mathematics, and it’s not even very advanced mathematics, it just isn’t commonly taught in school. Taking the remainder after division is the “modulo” operation, and the number you divide by is called the modulus. So this addition law can be written as: the plus with the circle equals the normal plus, mod 3. A set of numbers with this addition law is called a cyclic group.

You can do this not only with 3, but with any integer number. For example, if you take the number 12, that just means whenever a sum gets larger than 12 you start over from zero again. That’s how clocks work, basically: 8+7=3, add another 12 and that gives 3 again. We’re fairly used to this.
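Here is a minimal sketch of this clock-style arithmetic in Python, just to make the rule explicit:

```python
# Addition in a cyclic group: add as usual, then keep only the
# remainder after dividing by the group size (the modulus).

def cyclic_add(a, b, modulus):
    return (a + b) % modulus

# The three-element group {0, 1, 2}:
print(cyclic_add(1, 2, 3))    # 0
print(cyclic_add(2, 2, 3))    # 1

# Clock-style addition with modulus 12:
print(cyclic_add(8, 7, 12))   # 3
print(cyclic_add(3, 12, 12))  # 3 again -- adding 12 changes nothing
```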

Clocks are a nice visual example for how to add numbers in a cyclic group, but time-keeping itself is not an example for cyclic addition. That’s because the “real” physical time of course does not go in a circle. It’s just that on a simple clock we might not have an indicator for the time switching from am to pm or to the next day.

So in summary, if you add numbers you need to know what it is that you are adding and take the right addition law to describe what you are interested in. If you take two integers and use the standard addition law, then, yes, two plus two equals four. But there are many other things those numbers could stand for and many other addition laws, and depending on your definition, two plus two might be two or one or five or really anything at all. That’s not “woke” that’s math.

Saturday, June 05, 2021

Why do we see things that aren't there?

[This is a transcript of the video embedded below.]

A few weeks ago, we talked about false discoveries, scientists who claimed they’d found evidence for alien life. But why is it that we get so easily fooled and jump to conclusions? How bad is it and what can we do about it? That’s what we will talk about today.


My younger daughter had spaghetti for the first time when she was about two years old. When I put the plate in front of her she said “hair”.

The remarkable thing about this is not so much that she said this, but that all of you immediately understood why she said it. Spaghetti are kind of like hair. And as we get older and learn more about the world we find other things that also look kind of like hair. Willows, for example. Or mops. Even my hair sometimes looks like hair.

Our brains are pattern detectors. If you’ve seen one thing, it’ll tell you if you come across similar things. Psychologists call this apophenia, we see connections between unrelated things. These connections are not wrong, but they’re not particularly meaningful. That we see these connections, therefore, tells us more about the brain than about the things that our brain connects.

The famous Rorschach inkblot test, for example, uses apophenia in the attempt to find out what connections the patient readily draws. Of course these tests are difficult to interpret because if you start thinking about it, you’ll come up with all kinds of things for all kinds of reasons. Seeing patterns in clouds is also an example of apophenia.

And there are some patterns which we are particularly good at spotting, the ones that are most important for our survival, above all: faces. We see faces everywhere and in anything. Psychologists call this pareidolia.

The most famous example may be Jesus on a toast. But there’s also a Jesus on the butt of that dog. There’s a face on Mars, a face in this box, a face in this pepper, and this washing machine has had enough.

The face on Mars is worth a closer look to see what’s going on. In 1976, the Viking mission sent back images from its orbit around Mars. When one of them looked like a face, a guy by the name of Richard C. Hoagland went on TV to declare it was evidence of a lost Martian civilization. But higher resolution images of the same spot from later missions don’t look like faces to us anymore. What’s going on?

What’s going on is that, when we lack information, our brain fills in details with whatever it thinks is the best matching pattern. That’s also what happened, if you remember my earlier video, with the canals on Mars. There never were any canals on Mars. They were imaging artifacts, supported by vivid imagination.

Michael Shermer, the American science writer and founder of The Skeptics Society, explains this phenomenon in his book “The believing brain”. He writes: “It is in this intersection of non-existing theory and nebulous data that the power of belief is at its zenith and the mind fills in the blanks”.

He uses as an example what happened when Galileo first observed Saturn, in 1610. Galileo’s telescope at the time had a poor resolution, so Galileo couldn’t actually see the rings. But he could see there was something strange about Saturn, it didn’t seem to be round. Here is a photo that Jordi Busque took a few months ago with a resolution similar to what Galileo must have seen. What does it look like to you? Galileo claimed that Saturn was a triple planet.

Again, what’s happening is that the human brain isn’t just a passive data analysis machine. The brain doesn’t just look at an image and say: I don’t have enough data, maybe it’s noise or maybe it isn’t. No, it’ll come up with something that matches the noise, whether or not it has enough data to actually draw that conclusion reliably.

This makes sense from an evolutionary perspective. It’s better to see a mountain lion when there isn’t one than to not see a mountain lion when there is one. Can you spot the mountain lion? Pause the video before I spoil your fun. It’s here.

A remarkable experiment to show how we find patterns in noise was done in 2003 by researchers from Quebec and Scotland. They showed images of random white noise to their study participants. But the participants were told that half of those images contained the letter “S” covered under noise. And sure enough, people saw letters where there weren’t any.

Here’s the fun part. The researchers then took the images which the participants had identified as containing the letter “S” and overlaid them. And this overlay clearly showed an “S”.

What is going on? Well, if you randomly scatter points on a screen, then every once in a while they will coincidentally look somewhat like an “S”. If you then selectively pick random distributions that look a particular way, and discard the others, you indeed find what you were looking for. This experiment shows that the brain is really good at finding patterns. But it’s extremely bad at calculating the probability that this pattern could have come about coincidentally.
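A toy simulation shows the same selection effect. This is not the 2003 experiment itself, just a one-dimensional caricature of it, with an arbitrary sine curve standing in for the letter “S”: we generate pure noise, keep only the samples that happen to resemble the target pattern, and average those.

```python
# Toy demo of "finding" a pattern in pure noise by selective averaging.
# Not a reconstruction of the 2003 study, just the same statistical effect.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 100
target = np.sin(np.linspace(0, 2 * np.pi, n_pixels))  # the "letter" we look for

noise_images = rng.normal(size=(10_000, n_pixels))    # pure noise, no signal at all

# Keep only the noise samples that happen to correlate with the target...
scores = noise_images @ target
selected = noise_images[scores > np.percentile(scores, 90)]

# ...and average them. The average now clearly resembles the target,
# even though every individual sample was nothing but noise.
average = selected.mean(axis=0)
print("correlation of average with target:",
      np.corrcoef(average, target)[0, 1])   # close to 1
```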

A final cognitive bias that I want to mention, which is built into our brain, is anthropomorphism, which means we assign agency to inanimate objects. That’s why we, for example, get angry at our phones or cars though that makes absolutely no sense.

Anthropomorphism was first studied in 1944 by Fritz Heider and Marianne Simmel. They showed people a video in which squares and triangles were moving around. And they found the participants described the video as if the squares and triangles had intentions. We naturally make up such stories. This is also why we have absolutely no problem with animation movies whose “main characters” are cars, sponges, or potatoes.

What does this mean? It means that our brains have a built-in tendency to jump to conclusions and to see meaningful connections when there aren’t any. That’s why we have astrophysicists who yell “aliens” each time they have unexplained data, and why we have particle physicists who get excited about each little “anomaly” even though they should full well know that they are almost certainly wasting their time. And it’s why, if I hear Beatles songs playing on two different radio stations at the same time, I’m afraid Paul McCartney died.

Kidding aside, it’s also why so many people fall for conspiracy theories. If someone they know gets ill, they can’t put it down as an unfortunate coincidence. They will look for an explanation, and if they look, they will find one. Maybe that’s some kind of radiation, or chemicals, or the evil government. Doesn’t really matter, the brain wants an explanation.

So, this is something to keep in mind: Our brains come up with a lot of false positives. We see patterns that aren’t there, we see intention where there isn’t any, and sometimes we see Jesus on the butt of a dog.

Saturday, May 29, 2021

What does the universe expand into? Do we expand with it?

[This is a transcript of the video embedded below.]


If the universe expands, what does it expand into? That’s one of the most frequent questions I get, followed by “Do we expand with the universe?” And “Could it be that the universe doesn’t expand but we shrink?” At the end of this video, you’ll know the answers.

I haven’t made a video about this so far, because there are already lots of videos about it. But then I was thinking, if you keep asking, those other videos probably didn’t answer the question. And why is that? I am guessing it may be because one can’t really understand the answer without knowing at least a little bit about how Einstein’s theory of general relativity works. Hi Albert. Today is all about you.

So here’s that little bit you need to know about General Relativity. First of all, Einstein took over from special relativity the insight that time is a dimension, so we really live in a four dimensional space-time with one dimension of time and three dimensions of space.

Without general relativity, space-time is flat, like a sheet of paper. With general relativity, it can curve. But what is curvature? That’s the key to understanding space-time. To see what it means for space-time to curve, let us start with the simplest example, a two-dimensional sphere, no time, just space.

That image of a sphere is familiar to you, but really what you see isn’t just the sphere. You see a sphere in a three dimensional space. That three dimensional space is called the “embedding space”. The embedding space itself is flat, it doesn’t have curvature. If you embed the sphere, you immediately see that it’s curved. But that’s NOT how it works in general relativity.

In general relativity we are asking how we can find out what the curvature of space-time is, while living inside it. There’s no outside. There’s no embedding space. So, for the sphere that’d mean, we’d have to ask how’d we find out it’s curved if we were living on the surface, maybe ants crawling around on it.

One way to do it is to remember that in flat space the inner angles of triangles always sum to 180 degrees. In a curved space, that’s no longer the case. An extreme example is to take a triangle that has a right angle at one of the poles of the sphere, goes down to the equator, and closes along the equator. This triangle has three right angles. They sum to 270 degrees. That just isn’t possible in flat space. So if the ant measures those angles, it can tell it’s crawling around on a sphere.

There is another way that ant can figure out it’s in a curved space. In flat space, the circumference of a circle is related to the radius by 2 Pi R, where R is the radius of the circle. But that relation too doesn’t hold in a curved space. If our ant crawls a distance R from the pole of the sphere and then goes around in a circle, the circumference of that circle will be less than 2πR. This means, measuring the circumference is another way to find out the surface is curved without knowing anything about the embedding space.
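In formulas, and writing ρ for the radius of the sphere (a symbol that doesn’t appear in the text above): a circle drawn at crawling distance R from the pole has circumference

$$ C = 2\pi\rho\,\sin\!\left(\frac{R}{\rho}\right) < 2\pi R \qquad \text{for } 0 < R \le \pi\rho, $$

which is always smaller than the flat-space value 2πR.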

By the way, if you try these two methods for a cylinder instead of a sphere you’ll get the same result as in flat space. And that’s entirely correct. A cylinder has no intrinsic curvature. It’s periodic in one direction, but it’s internally flat.

General Relativity now uses a higher dimensional generalization of this intrinsic curvature. So, the curvature of space-time is defined entirely in terms which are internal to the space-time. You don’t need to know anything about the embedding space. The space-time curvature shows up in Einstein’s field equations in these quantities called R.

Roughly speaking, to calculate those, you take all the angles of all possible triangles in all orientations at all points. From that you can construct an object called the curvature tensor that tells you exactly how space-time curves where, how strong, and into which direction. The things in Einstein’s field equations are sums over that curvature tensor.

That’s the one important thing you need to know about General Relativity, the curvature of space-time can be defined and measured entirely inside of space-time. The other important thing is the word “relativity” in General Relativity. That means you are free to choose a coordinate system, and the choice of a coordinate system doesn’t make any difference for the prediction of measurable quantities.

It’s one of these things that sounds rather obvious in hindsight. Certainly if you make a prediction for a measurement and that prediction depends on an arbitrary choice you made in the calculation, like choosing a coordinate system, then that’s no good. However, it took Albert Einstein to convert that “obvious” insight into a scientific theory, first special relativity and then, general relativity.

So with that background knowledge, let us then look at the first question. What does the universe expand into? It doesn’t expand into anything, it just expands. The statement that the universe expands is, like any other statement that we make in general relativity, about the internal properties of space-time. It says, loosely speaking, that the space between galaxies stretches. Think back to the sphere and imagine its radius increases. As we discussed, you can figure that out by making measurements on the surface of the sphere. You don’t need to say anything about the embedding space surrounding the sphere.

Now you may ask, but can we embed our 4 dimensional space-time in a higher dimensional flat space? The answer is yes. You can do that. It takes in general 10 dimensions. But you could indeed say the universe is expanding into that higher dimensional embedding space. However, the embedding space is by construction entirely unobservable, which is why we have no rationale to say it’s real. The scientifically sound statement is therefore that the universe doesn’t expand into anything.

Do we expand with the universe? No, we don’t. Indeed, it’s not only that we don’t expand, but galaxies don’t expand either. It’s because they are held together by their own gravitational pull. They are “gravitationally bound”, as physicists say. The pull that comes from the expansion is just too weak. The same goes for solar systems and planets. And atoms are held together by much stronger forces, so atoms in intergalactic space also don’t expand. It’s only the space between them that expands.

How do we know that the universe expands and it’s not that we shrink? Well, to some extent that’s a matter of convention. Remember that Einstein says you are free to choose whatever coordinate system you like. So you can use a coordinate system that has yardsticks which expand at exactly the same rate as the universe. If you use those, you’d conclude the universe doesn’t expand in those coordinates.

You can indeed do that. However, those coordinates have no good physical interpretation. That’s because they will mix space with time. So in those coordinates, you can’t stand still. Whenever you move forward in time, you also move sideward in space. That’s weird and it’s why we don’t use those coordinates.

The statement that the universe expands is really a statement about certain types of observations, notably the redshift of light from distant galaxies, but also a number of other measurements. And those statements are entirely independent of just what coordinates you choose to describe them. However, explaining them by saying the universe expands in this particular coordinate system is an intuitive interpretation.

So, the two most important things you need to know to make sense of General Relativity are, first, that the curvature of space-time can be defined and measured entirely within space-time. An embedding space is unnecessary. And second, you are free to choose whatever coordinate system you like. It doesn’t change the physics.

In summary: General Relativity tells us that the universe doesn’t expand into anything, we don’t expand with it, and while you could say that the universe doesn’t expand but we shrink that interpretation doesn’t make a lot of physical sense.

Thursday, May 27, 2021

The Climate Book You Didn’t Know You Need

The Physics of Climate Change
Lawrence Krauss
Post Hill Press (March 2021)

In recent years, media coverage of climate change has noticeably shifted. Many outlets have begun referring to it as “climate crisis” or “climate emergency”, a mostly symbolic move, in my eyes, because those who trust that their readers will tolerate this nomenclature are those whose readers don’t need to be reminded of the graveness of the situation. Even more marked has been the move to no longer mention climate change skeptics and, moreover, to proudly declare the intention to no longer acknowledge even the existence of the skeptics’ claims.

As a scientist who has worked in science communication for more than a decade, I am of two minds about this. On the one hand, I perfectly understand the futility of repeating the same facts to people who are unwilling or unable to comprehend them – it’s the reason I don’t respond when someone emails me their home-brewed theory of everything. On the other hand, it’s what most science communication comes down to: patiently rephrasing the same thing over and over again. That science writers – who dedicate their life to communicating research – refuse to explain that very research, strikes me as an odd development.

This makes me suspect something else is going on. Declaring the science settled relieves news contributors of the burden of actually having to understand said science. It’s temptingly convenient and cheap, both literally and figuratively. Think about the last dozen or so news reports on climate change you’ve read. Earliest cherry blossom bloom in Japan, ice still melting in Antarctica, Greta Thunberg doesn’t want to travel to Glasgow in November. Did one of those actually explain how scientists know that climate change is man-made? I suspect not. Are you sure you understand it? Would you be comfortable explaining it to a climate change skeptic?

If not, then Lawrence Krauss’ new book “The Physics of Climate Change” is for you. It’s a well-curated collection of facts and data with explanations that are just about technical enough to understand the science without getting bogged down in details. The book covers historical and contemporary records of carbon dioxide levels and temperature, greenhouse gases and how their atmospheric concentrations change the energy balance, how we can tell one cause of climate change from another, and impacts we have seen and can expect to see, from sea level rise to tipping points.

To me, learning some climate science has been a series of realizations that it’s more difficult than it looks at first sight. Remember, for example, the explanation for the greenhouse effect we all learned in school? Carbon dioxide in the atmosphere lets incoming sunlight through, but prevents infrared light from escaping into space, hence raising the temperature. Alas, a climate change skeptic might point out, the absorption of infrared light is saturated at carbon dioxide levels well below the current ones. So, burning fossil fuels can’t possibly make any difference, right?

No, wrong. But explaining just why is not so simple...

In a nutshell, the problem with the greenhouse analogy is that Earth isn’t a greenhouse. It isn’t surrounded by a surface that traps light, but rather by an atmosphere whose temperature and density falls gradually with altitude. The reason that increasing carbon dioxide concentrations continue to affect the heat balance of our planet is that they move the average altitude from which infrared light can escape upwards. But in the relevant region of the atmosphere (the troposphere) higher altitude means lower temperature. Hence, the increasing carbon dioxide level makes it more difficult for Earth to lose heat. The atmosphere must therefore warm to get back into an energy balance with the sun. If that explanation was too short, Krauss goes through the details in one of the chapters of his book.
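For readers who like numbers, here is a back-of-the-envelope Python sketch of that emission-altitude argument. It is a deliberately crude toy model with round illustrative parameter values; it is not how Krauss or actual climate models do the calculation.

```python
# Toy emission-altitude picture of the greenhouse effect.
# Crude illustration only; real climate models do far more than this.

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at Earth's distance
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4
LAPSE_RATE = 6.5          # K per km, temperature drop with altitude

def effective_emission_temperature():
    """Temperature at which Earth must radiate to balance absorbed sunlight."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25           # about 255 K

def surface_temperature(emission_altitude_km):
    """Surface is warmer than the emission level by lapse rate times altitude."""
    return effective_emission_temperature() + LAPSE_RATE * emission_altitude_km

# Infrared escapes, on average, from roughly 5 km up:
print(surface_temperature(5.0))        # about 287 K, close to the observed 288 K

# More CO2 pushes the average emission altitude upward, so the surface
# has to warm to restore the same outgoing radiation:
print(surface_temperature(5.15) - surface_temperature(5.0))   # about +1 K
```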

There are a number of other stumbling points that took me some time to wrap my head around. Isn’t water vapor a much more potent greenhouse gas? How can we possibly tell whether global temperatures rise because of us or because of other, natural, causes, for example changes in the sun? Have climate models ever correctly predicted anything, and if so what? And in any case, what’s the problem with a temperature increase that’s hard to even read off the old-fashioned liquid thermometer pinned to our patio wall? I believe these are all obvious questions that everybody has at some point, and Krauss does a great job answering them.

I welcome this book because I have found it hard to come by a didactic introduction to climate science that doesn’t raise more questions than it answers. Yes, there are websites which answer skeptics’ claims, but more often than not they offer little more than reference lists. Well intended, I concur, but not terribly illuminating. I took Michael Mann’s online course Climate Change: The Science and Global Impact, which provides a good overview. But I know enough physics to know that Mann’s course doesn’t say much about the physics. And, yes, I suppose I could take a more sophisticated course, but there are only so many hours in a day. I am sure the problem is familiar to you.

So, at least for me, Krauss’ book fills a gap in the literature. To begin with, at under 200 pages in generous font size, it’s a short book. I have also found it a pleasure to read, for Krauss neither trivializes the situation nor pushes conclusions in the reader’s face. It becomes clear from his writing that he is concerned, but his main mission is to inform, not to preach.

I welcome Krauss’ book for another reason. As a physicist myself, I have been somewhat embarrassed by the numerous physicists who have put forward very – I am looking for a polite word here – shall we say, iconoclastic, ideas about climate change. I have also noticed personally, on several occasions, that physicists have rather strong yet uninformed opinions about what climate models are good for. I am therefore happy that a physicist as well-known as Krauss counteracts the impression that physicists believe they know everything better. He 100% sticks with the established science and doesn’t put forward his own speculations.

There are some topics though I wish Krauss had said more about. One particularly glaring omission is the uncertainty in climate trend projections due to our limited understanding of cloud formation. Indeed, Krauss says little about the shortcomings of current climate models aside from acknowledging that tipping points are difficult to predict, and nothing about the difficulties of quantifying the uncertainty. This is unfortunate, for it’s another issue that irks me when I read about climate change in newspapers or magazines. Every model has shortcomings, and when those shortcomings aren’t openly put on the table I begin to wonder if something’s being swept under the rug. You see, I’m chronically skeptical myself. Maybe it’s something to do with being a physicist after all.

I for one certainly wish there was more science in the news coverage of climate change. Yes, there are social science studies showing that facts do little to change opinions. But many people, I believe, genuinely don’t know what to think because without at least a little background knowledge it isn’t all that easy to identify mistakes in the arguments of climate change deniers. Krauss’ book is a good starting point to get that background knowledge.

Saturday, May 22, 2021

Aliens that weren't aliens

[This is a transcript of the video embedded below.]


The interstellar object ‘Oumuamua travelled through our solar system in 2017. Soon after it was spotted, the astrophysicist Avi Loeb claimed it was alien technology. Now it looks like it was just a big chunk of nitrogen.

This wasn’t the first time scientists yelled aliens mistakenly and it certainly won’t be the last. So, in this video we’ll look at the history of supposed alien discoveries. What did astronomers see, what did they think it was, what did it turn out to be in the end? And what are we to make of these claims? That’s what we’ll talk about today.

Let’s then talk about all the times when aliens weren’t aliens. In 1877, the Italian astronomer Giovanni Schiaparelli studied the surface of our neighbor planet Mars. He saw a network of long, nearly straight lines. At that time, astronomers didn’t have the ability to take photographs of their observations and the usual procedure was to make drawings and write down what they saw. Schiaparelli called the structures “canali” in Italian, a word which leaves their origin unspecified. In the English translation, however, the “canali” became “canals” which strongly suggested an artificial origin. The better word would have been “channels”.

This translation blunder made scientific history. Even though the resolution of telescopes at the time wasn’t good enough to identify surface structures on Mars, a couple of other astronomers quickly reported they also saw canals. Around the turn of the 19th to the 20th century, the American Astronomer Percival Lowell published three books in which he presented the hypothesis that the canals were an irrigation system built by an intelligent civilization.

The idea that there had once been, or maybe still was, intelligent life on Mars persisted until 1965. In that year, the American space mission Mariner 4 flew by Mars and sent back the first photos of Mars’s surface to Earth. The photos showed craters but nothing resembling canals. The canals turned out to have been imaging artifacts, supported by vivid imagination. And even though the scientific community laid the idea of canals on Mars to rest in 1965, it took much longer for the public to get over the idea of life on Mars. I recall my grandmother was still telling me about the canals in the 1980s.

But in any case the friends of ET didn’t have to wait long for renewed hope. In 1967, the Irish astrophysicist Jocelyn Bell Burnell noticed that the radio telescope in Cambridge which she worked on at the time recorded a recurring signal that pulsed with a period of somewhat more than a second. She noted down “LGM” on the printout of the measurement curve, short for “little green men”.

The little green men were a joke, of course. But at the time, astrophysicists didn’t know any natural process that could explain Bell Burnell’s observations, so they couldn’t entirely exclude that it was a signal stemming from alien technology. However, a few years after the signal was first recorded it became clear that its origin was not aliens, but a rotating neutron star.

Rotating neutron stars can build up strong magnetic fields and then emit a steady, but directed beam of electromagnetic radiation. And since the neutron star rotates, we only see this beam if it happens to point into our direction. This is why the signal appears to be pulsed. Such objects are now called “Pulsars”.

Then in 1996, life on Mars had a brief comeback. That year, a group of Americans found a meteorite in Antarctica which seemed to carry traces of bacteria. This rock was probably flung into the direction of our planet when a heavier meteorite crashed into the surface of Mars. Indeed, other scientists confirmed that the Antarctic meteorite most likely came from Mars. However, they concluded that the structures in the rock were too small to be of bacterial origin.

That wasn’t it with alien sightings. In 2015, the Kepler telescope found a star with irregular changes in its brightness. Officially the star has the catchy name KIC8462852, but unofficially it’s been called WTF. That stands, as you certainly know, for “Where’s the flux?” The name which stuck in the end though was “Tabby’s star,” after the first name of its discoverer, Tabetha Boyajian.

At first astrophysicists didn’t have a good explanation for the odd behavior of Tabby’s star. And so, it didn’t take long until a group of researchers from Penn State proposed that aliens were building a megastructure around the star.

Indeed, the physicist Freeman Dyson had argued already in the 1960s that advanced extraterrestrial civilizations would try to capture energy from their sun as directly as possible. To this end, Dyson speculated, they’d build a sphere around the star. It has remained unclear how such a sphere would be constructed or remain stable, but, well, they are advanced, these civilizations, so presumably they’ve figured it out. And if they’re covering up their star to catch its energy, that could quite plausibly lead to a signal like the one observed from Tabby’s star.

Several radio telescopes scanned the area around Tabby’s star on the lookout for signs of intelligent life, but didn’t find anything. Further observations now seem to support the hypothesis that the star is surrounded by debris from a destroyed moon or other large rocks.

Then, in 2017, the Canadian astronomer Robert Weryk made a surprising discovery when he analyzed data from the Pan-STARRS telescope in Hawaii. He saw an object that passed closely by our planet, but it looked neither like a comet nor like an asteroid.

When Weryk traced back its path, the object turned out to have come from outside our solar system. “‘Oumuamua” the astronomers named it, Hawaiian for “messenger from afar arriving first”.

‘Oumuamua gave astronomers and physicists quite something to think about. It entered our solar system on a path that agreed with the laws of gravity, with no hints of any further propulsion. But as it got closer to the sun, it began to emit particles of some sort that gave it an acceleration.

This particle emission didn’t fit the pattern usually observed from comets. Also, the shape of ‘Oumuamua is rather untypical for asteroids or comets. The shape which fits the data best is that of a disk, about 6 to 8 times as wide as it is high.

When ‘Oumuamua was first observed, no one had any good idea what it was, what it was made of, or what happened when it got close to the sun. The Astrophysicist Avi Loeb used the situation to claim that ‘Oumuamua is technology of an alien civilization. “[T]he simplest, most direct line from an object with all of ‘Oumuamua’s observed qualities to an explanation for them is that it was manufactured.”

According to a new study it now looks like ‘Oumuamua is a piece of frozen nitrogen that was split off a nitrogen planet in another solar system. It remained frozen until it got close to our sun, when it began to partly evaporate. Though we will never know exactly because the object has left our solar system for good and the data we have now is all the data we will ever have.

And just a few weeks ago, we talked about what happened to the idea that there’s life on Venus. Check out my earlier video for more about that.

So, what do we learn from that? When new discoveries are made it takes some time until scientists have collected and analyzed all the data, formulated hypotheses, and evaluated which hypothesis explains the data best. Before that is done, the only thing that can reliably be said is “we don’t know”.

But “we don’t know” is boring and doesn’t make headlines. Which is why some scientists use the situation to put forward highly speculative ideas before anyone else can show they’re wrong. This is why headlines about possible signs of extraterrestrial life are certainly entertaining but usually, after a few years, disappear.

Thanks for watching, don’t forget to subscribe, see you next week.

Saturday, May 15, 2021

Quantum Computing: Top Players 2021

[This is a transcript of the video embedded below.]


Quantum computing is currently one of the most exciting emergent technologies, and it’s almost certainly a topic that will continue to make headlines in the coming years. But there are now so many companies working on quantum computing, that it’s become really confusing. Who is working on what? What are the benefits and disadvantages of each technology? And who are the newcomers to watch out for? That’s what we will talk about today.

Quantum computers use units that are called “quantum-bits” or qubits for short. In contrast to normal bits, which can take on two values, like 0 and 1, a qubit can take on an arbitrary combination of two values. The magic of quantum computing happens when you entangle qubits.

Entanglement is a type of correlation, so it ties qubits together, but it’s a correlation that has no equivalent in the non-quantum world. There are a huge number of ways qubits can be entangled and that creates a computational advantage - if you want to solve certain mathematical problems.
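To make that a little more concrete, here is a minimal numpy sketch of the simplest entangled state of two qubits, a Bell state. It simulates repeated measurements and shows that each qubit on its own looks random while the two always agree. This is just a toy state-vector simulation, not how any of the hardware discussed below operates.

```python
# Minimal state-vector sketch of an entangled two-qubit (Bell) state.
import numpy as np

rng = np.random.default_rng(1)

# Basis order: |00>, |01>, |10>, |11>
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>) / sqrt(2)

probabilities = np.abs(bell) ** 2            # Born rule: 50% |00>, 50% |11>

outcomes = rng.choice(4, size=1000, p=probabilities)
first_qubit = outcomes // 2
second_qubit = outcomes % 2

# Each qubit individually looks like a fair coin, but they always agree:
print("qubit 1 average:", first_qubit.mean())             # about 0.5
print("qubit 2 average:", second_qubit.mean())            # about 0.5
print("fraction of shots where they agree:",
      np.mean(first_qubit == second_qubit))               # 1.0
```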

Quantum computers can help, for example, to solve the Schrödinger equation for complicated molecules. One could use that to find out what properties a material has without having to synthetically produce it. Quantum computers can also solve certain logistics problems or optimize financial systems. So there is a real potential for application.

But quantum computing does not help for *all types of calculations, they are special purpose machines. They also don’t operate all by themselves, but the quantum parts have to be controlled and read out by a conventional computer. You could say that quantum computers are for problem solving what wormholes are for space-travel. They might not bring you everywhere you want to go, but *if they can bring you somewhere, you’ll get there really fast.

What makes quantum computing special is also what makes it challenging. To use quantum computers, you have to maintain the entanglement between the qubits long enough to actually do the calculation. And quantum effects are really, really sensitive even to the smallest disturbances. To be reliable, quantum computers therefore need to operate with several copies of the information, together with an error correction protocol. And to do this error correction, you need more qubits. Estimates say that the number of qubits we need to reach for a quantum computer to do reliable and useful calculations that a conventional computer can’t do is about a million.

The exact number depends on the type of problem you are trying to solve, the algorithm, and the quality of the qubits and so on, but as a rule of thumb, a million is a good benchmark to keep in mind. Below that, quantum computers are mainly of academic interest.

Having said that, let’s now look at what different types of qubits there are, and how far we are on the way to that million.

1. Superconducting Qubits

Superconducting qubits are by far the most widely used, and most advanced type of qubits. They are basically small currents on a chip. The two states of the qubit can be physically realized either by the distribution of the charge, or by the flux of the current.

The big advantage of superconducting qubits is that they can be produced by the same techniques that the electronics industry has used for the past 5 decades. These qubits are basically microchips, except, here it comes, they have to be cooled to extremely low temperatures, about 10-20 milli Kelvin. One needs these low temperatures to make the circuits superconducting, otherwise you can’t keep them in these neat two qubit states.

Despite the low temperatures, quantum effects in superconducting qubits disappear extremely quickly. This disappearance of quantum effects is measured in the “decoherence time”, which for superconducting qubits is currently a few tens of microseconds.

Superconducting qubits are the technology which is used by Google and IBM and also by a number of smaller companies. In 2019, Google was first to demonstrate “quantum supremacy”, which means they performed a task that a conventional computer could not have done in a reasonable amount of time. The processor they used for this had 53 qubits. I made a video about this topic specifically, so check this out for more. Google’s supremacy claim was later debated by IBM. IBM argued that actually the calculation could have been performed within reasonable time on a conventional super-computer, so Google’s claim was somewhat premature. Maybe it was. Or maybe IBM was just annoyed they weren’t first.

IBM’s quantum computers also use superconducting qubits. Their biggest one currently has 65 qubits and they recently put out a roadmap that projects 1000 qubits by 2023. IBM’s smaller quantum computers, the ones with 5 and 16 qubits, are free to access in the cloud.

The biggest problem for superconducting qubits is the cooling. Beyond a few thousand or so, it’ll become difficult to put all qubits into one cooling system, so that’s where it’ll become challenging.

2. Photonic quantum computing

In photonic quantum computing the qubits are properties related to photons. That may be the presence of a photon itself, or the uncertainty in a particular state of the photon. This approach is pursued for example by the company Xanadu in Toronto. It is also the approach that was used a few months ago by a group of Chinese researchers, which demonstrated quantum supremacy for photonic quantum computing.

The biggest advantage of using photons is that they can be operated at room temperature, and the quantum effects last much longer than for superconducting qubits, typically some milliseconds but it can go up to some hours in ideal cases. This makes photonic quantum computers much cheaper and easier to handle. The big disadvantage is that the systems become really large really quickly because of the laser guides and optical components. For example, the photonic system of the Chinese group covers a whole tabletop, whereas superconducting circuits are just tiny chips.

The company PsiQuantum however claims they have solved the problem and have found an approach to photonic quantum computing that can be scaled up to a million qubits. Exactly how they want to do that, no one knows, but that’s definitely a development to have an eye on.

3. Ion traps

In ion traps, the qubits are atoms that are missing some electrons and therefore have a net positive charge. You can then trap these ions in electromagnetic fields, and use lasers to move them around and entangle them. Such ion traps are comparable in size to the qubit chips. They also need to be cooled but not quite as much, “only” to temperatures of a few Kelvin.

The biggest player in trapped ion quantum computing is Honeywell, but the start-up IonQ uses the same approach. The advantages of trapped ion computing are longer coherence times than superconducting qubits – up to a few minutes. The other advantage is that trapped ions can interact with more neighbors than superconducting qubits.

But ion traps also have disadvantages. Notably, they are slower to react than superconducting qubits, and it’s more difficult to put many traps onto a single chip. However, they’ve kept up with superconducting qubits well.

Honeywell claims to have the best quantum computer in the world by quantum volume. What the heck is quantum volume? It’s a metric, originally introduced by IBM, that combines many different factors like errors, crosstalk and connectivity. Roughly, a quantum volume of 64, which is 2 to the 6, means the machine can reliably run circuits that use 6 qubits and are 6 gate-layers deep. Honeywell reports a quantum volume of 64, and according to their website, they too are moving to the cloud next year. IonQ’s latest model contains 32 trapped ions sitting in a chain. They also have a roadmap according to which they expect quantum supremacy by 2025 and to be able to solve interesting problems by 2028.

4. D-Wave

Now what about D-Wave? D-wave is so far the only company that sells commercially available quantum computers, and they also use superconducting qubits. Their 2020 model has a stunning 5600 qubits.

However, the D-wave computers can’t be compared to the approaches pursued by Google and IBM because D-wave uses a completely different computation strategy. D-wave computers can be used for solving certain optimization problems that are defined by the design of the machine, whereas the technology developed by Google and IBM is good to create a programmable computer that can be applied to all kinds of different problems. Both are interesting, but it’s comparing apples and oranges.

5. Topological quantum computing

Topological quantum computing is the wild card. There isn’t currently any workable machine that uses the technique. But the idea is great: In topological quantum computers, information would be stored in conserved properties of “quasi-particles”, that are collective motions of particles. The great thing about this is that this information would be very robust to decoherence.

According to Microsoft “the upside is enormous and there is practically no downside.” In 2018, their director of quantum computing business development told the BBC that Microsoft would have a “commercially relevant quantum computer within five years.” However, Microsoft had a big setback in February when they had to retract a paper that had claimed to demonstrate the existence of the quasi-particles they hoped to use. So much for “no downside”.

6. The far field

These were the biggest players, but there are two newcomers that are worth having an eye on.

The first is semi-conducting qubits. They are very similar to the superconducting qubits, but here the qubits are either the spin or charge of single electrons. The advantage is that the temperature doesn’t need to be quite as low. Instead of 10 mK, one “only” has to reach a few Kelvin. This approach is presently pursued by researchers at TU Delft in the Netherlands, supported by Intel.

The second is nitrogen-vacancy systems, where the qubits are defects in a carbon crystal in which a carbon atom has been replaced with nitrogen next to a missing atom, a “vacancy”. The great advantage of those is that they’re both small and can be operated at room temperature. This approach is pursued by The Hanson lab at Qutech, some people at MIT, and a startup in Australia called Quantum Brilliance.

So far there hasn’t been any demonstration of quantum computation for these two approaches, but they could become very promising.

So, that’s the status of quantum computing in early 2021, and I hope this video will help you to make sense of the next quantum computing headlines, which are certain to come.

I want to thank Tanuj Kumar for help with this video.

Saturday, May 08, 2021

What did Einstein mean by “spooky action at a distance”?

[This is a transcript of the video embedded below.]


Quantum mechanics is weird – I am sure you’ve read that somewhere. And why is it weird? Oh, it’s because it’s got that “spooky action at a distance”, doesn’t it? Einstein said that. Yes, that guy again. But what is spooky at a distance? What did Einstein really say? And what does it mean? That’s what we’ll talk about today.

The vast majority of sources on the internet claim that Einstein’s “spooky action at a distance” referred to entanglement. Wikipedia for example. And here is an example from Science Magazine. You will also find lots of videos on YouTube that say the same thing: Einstein’s spooky action at a distance was entanglement. But I do not think that’s what Einstein meant.

Let’s look at what Einstein actually said. The origin of the phrase “spooky action at a distance” is a letter that Einstein wrote to Max Born in March 1947. In this letter, Einstein explains to Born why he does not believe that quantum mechanics really describes how the world works.

He begins by assuring Born that he knows perfectly well that quantum mechanics is very successful: “I understand of course that the statistical formalism which you pioneered captures a significant truth.” But then he goes on to explain his problem. Einstein writes:
“I cannot seriously believe [in quantum mechanics] because the theory is incompatible with the requirement that physics should represent reality in space and time without spooky action at a distance...”

There it is, the spooky action at a distance. But just exactly what was Einstein referring to? Before we get into this, I have to quickly remind you how quantum mechanics works.

In quantum mechanics, everything is described by a complex-valued wave-function usually denoted Psi. From the wave-function we calculate probabilities for measurement outcomes, for example the probability to find a particle at a particular place. We do this by taking the absolute square of the wave-function.
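
Written as a formula (this is just the standard textbook statement, nothing specific to Einstein’s argument), the probability to find the particle at position x is

P(x) = |\Psi(x)|^2, with \int |\Psi(x)|^2 \, dx = 1,

so the wave-function itself never appears in any measurement outcome; only its absolute square does.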

But we cannot observe the wave-function itself. We only observe the outcome of the measurement. This means, most importantly, that if we make a measurement for which the outcome was not one hundred percent certain, then we have to suddenly “update” the wave-function. That’s because the moment we measure the particle, we know it’s either there or it isn’t. And this update is instantaneous. It happens at the same time everywhere, seemingly faster than the speed of light. And I think that is what Einstein was worried about, because he had explained it already twenty years earlier, in the discussion of the 1927 Solvay conference.

In 1927, Einstein used the following example. Suppose you direct a beam of electrons at a screen with a tiny hole and ask what happens with a single electron. The wave-function of the electron will diffract on the hole, which means it will spread symmetrically into all directions. Then you measure it at a certain distance from the hole. Before the measurement, the electron had the same probability of going in any direction. But when you measure it, you will suddenly find it in one particular point.

Einstein argues: “The interpretation, according to which [the square of the wave-function] expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen.”

What he is saying is that somehow the wave-function on the left side of the screen must know that the particle was actually detected on the other side of the screen. In 1927 he did not call this action at a distance “spooky” but “peculiar,” yet I think he was referring to the same thing.

However, in Einstein’s electron argument it’s rather unclear what is acting on what, because there is only one particle. This is why Einstein, together with Podolsky and Rosen, later looked at the measurement for two particles that are entangled, which led to their famous 1935 EPR paper. So this is why entanglement comes in: because you need at least two particles to show that the measurement on one particle can act on the other particle. But entanglement itself is unproblematic. It’s just a type of correlation, and correlations can be non-local without there being any “action” at a distance.

To see what I mean, forget all about quantum mechanics for a moment. Suppose I have two socks that are identical, except the one is red and the other one blue. I put them in two identical envelopes and ship one to you. The moment you open the envelope and see that your sock is red, you know that my sock is blue. That’s because the information about the color in the envelopes is correlated, and this correlation can span over large distances.

There isn’t any spooky action going on though because that correlation was created locally. Such correlations exist everywhere and are created all the time. Imagine for example you bounce a ball off a wall and it comes back. That transfers momentum to the wall. You can’t see how much, but you know that the total momentum is conserved, so the momentum of the wall is now correlated with that of the ball.

Entanglement is a correlation like this, it’s just that you can only create it with quantum particles. Suppose you have a particle with total spin zero that decays into two particles, each of which can have spin either plus or minus one. One particle goes left, the other one right. You don’t know which particle has which spin, but you know that the total spin is conserved. So either the particle going to the right had spin plus one and the one going left minus one, or the other way round.

According to quantum mechanics, before you have measured one of the particles, both possibilities exist. You can then measure the correlations between the spins of both particles with two detectors, one on the left and one on the right side. It turns out that the entanglement correlations can, in certain circumstances, be stronger than non-quantum correlations. That’s what makes them so interesting. But there’s no spooky action in the correlations themselves. These correlations were created locally. What Einstein worried about instead is that once you measure the particle on one side, the wave-function for the particle on the other side changes.
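
In the standard notation (a sketch, with my own labels L and R for the left- and right-going particle), the state of the pair before any measurement is

|\Psi\rangle = ( |+\rangle_L |-\rangle_R - |-\rangle_L |+\rangle_R ) / \sqrt{2},

and if the left detector finds +1, the whole state updates to |+\rangle_L |-\rangle_R. It is this instantaneous update of the description of the right-hand particle – not the correlation itself – that Einstein objected to.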

But isn’t this the same with the two socks? Before you open the envelope the probability was 50:50, and when you open it, it jumps to 100:0. But there’s no spooky action going on there. It’s just that the probability was a statement about what you knew, and not about what really was the case. Really, which sock was in which envelope was already decided at the time I sent them.

Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work. If you think it was already decided which spin went into which direction when the particles were emitted, that will not create sufficiently strong correlations. It’s just incompatible with observations. Einstein did not know that; these experiments were done only after he died. But he knew that using entangled states one can test whether spooky action is real or not.

I will admit that I’m a little defensive of good, old Albert Einstein, because I feel that a lot of people too cheerfully declare that Einstein was wrong about quantum mechanics. But if you read what Einstein actually wrote, he was exceedingly careful in expressing himself, and yet most physicists dismissed his concerns. In April 1948, he repeats his argument to Born. He writes that a measurement on one part of the wave-function is a “physical intervention” and that “such an intervention cannot immediately influence the physical reality in a distant part of space.” Einstein concludes:
“For this reason I tend to believe that quantum mechanics is an incomplete and indirect description of reality which will later be replaced by a complete and direct one.”

So, Einstein did not think that quantum mechanics was wrong. He thought it was incomplete, that something fundamental was missing in it. And in my reading, the term “spooky action at a distance” referred to the measurement update, not to entanglement.

Saturday, May 01, 2021

Dark Matter: The Situation Has Changed

[This is a transcript of the video embedded below]


Hi everybody. We haven’t talked about dark matter for some time. Which is why today I want to tell you how my opinion about dark matter has changed over the past twenty years or so. In particular, I want to discuss whether dark matter is made of particles or if not, what else it could be. Let’s get started.

First things first, dark matter is the hypothetical stuff that astrophysicists think makes up eighty percent of the matter in the universe, or 24 percent of the combined matter-energy. Dark matter should not be confused with dark energy. These are two entirely different things. Dark energy is what makes the universe expand faster, dark matter is what makes galaxies rotate faster, though that’s not the only thing dark matter does, as we’ll see in a moment.

But what is dark matter? 20 years ago I thought dark matter was most likely made of some kind of particle that we hadn’t yet measured. Because, well, I’m a particle physicist by training. And if a particle can explain an observation, why look any further? Also, at the time there were quite a few proposals for new particles that could fit the data, like some supersymmetric particles or axions. So, the idea that dark matter is stuff, made of particles, seemed plausible to me, and like the obvious explanation.

That’s why, just among us, I always thought dark matter is not a particularly interesting problem. Sooner or later they’ll find the particle, give it a name, someone will get a Nobel Prize and that’s that.

But, well, that hasn’t happened. Physicists have tried to measure dark matter particles since the mid 1980s. But no one’s ever seen one. There have been a few anomalies in the data, but these have all gone away upon closer inspection. Instead, what’s happened is that some astrophysical observations have become increasingly difficult to explain with the particle hypothesis. Before I get to the observations that particle dark matter doesn’t explain, I’ll first quickly summarize what it does explain, which are the reasons astrophysicists thought it exists in the first place.

Historically the first evidence for dark matter came from galaxy clusters. Galaxy clusters are made of a few hundred up to a thousand or so galaxies that are held together by their gravitational pull. They move around each other, and how fast they move depends on the total mass of the cluster. The more mass, the faster the galaxies move. Turns out that galaxies in galaxy clusters move way too fast to be explained by the mass that we can attribute to the visible matter. So Fritz Zwicky conjectured in the 1930s that there must be more matter in galaxy clusters, just that we can’t see it. He called it “dunkle Materie,” German for dark matter.

It’s a similar story for galaxies. The velocity of a star which orbits around the center of a galaxy depends on the total mass within this orbit. But the stars in the outer parts of galaxies just orbit too fast around the center. Their velocity should drop with distance from the center of the galaxy, but it doesn’t. Instead, the velocity of the stars becomes approximately constant at large distances from the galactic center. This gives rise to the so-called “flat rotation curves.” Again you can explain that by saying there’s dark matter in the galaxies.
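
To see where the discrepancy comes from, it’s enough to balance Newtonian gravity against the centripetal acceleration (a standard back-of-the-envelope estimate):

v(R)^2 / R = G M(R) / R^2, so v(R) = \sqrt{ G M(R) / R }.

If essentially all the mass sat in the visible inner region, M(R) would be roughly constant far out and v should fall like 1/\sqrt{R}. A flat rotation curve instead requires M(R) to grow roughly linearly with R, which is what a dark matter halo provides.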

Then there is gravitational lensing. Gravitational lenses are galaxies or galaxy clusters that bend light coming from an object behind them. The object behind them then appears distorted, and from the amount of distortion you can infer the mass of the lens. Again, the visible matter just isn’t enough to explain the observations.

Then there are the temperature fluctuations in the cosmic microwave background. These fluctuations are what you see in this skymap. All these spots are deviations from the average temperature, which is about 2.7 Kelvin. The red spots are a little warmer, the blue spots a little colder than that average. Astrophysicists analyze the microwave background using its power spectrum, where the vertical axis is roughly the number of spots and the horizontal axis is their size, with the larger sizes on the left and increasingly smaller spots to the right. To explain this power spectrum, again you need dark matter.

Finally, there’s the large-scale distribution of galaxies, galaxy clusters, interstellar gas, and so on, as you see in the image from this computer simulation. Normal matter alone just does not produce enough structure on small scales to fit the observations, and again, adding dark matter fixes the problem.

So, you see, dark matter was a simple idea that fit a lot of observations, which is why it was such a good scientific explanation. But that was the status 20 years ago. What’s happened since then is that observations have piled up that dark matter cannot explain.

For example, particle dark matter predicts that the density in the cores of small galaxies should have a pronounced peak, whereas observations show the distribution is flat. Dark matter also predicts too many small satellite galaxies; these are small galaxies that fly around a larger host. The Milky Way, for example, should have many hundreds, but actually only has a few dozen. Also, these small satellite galaxies are often aligned in planes. Dark matter does not explain why.

We also know from observations that the mass of a galaxy is correlated with the fourth power of the rotation velocity of the outermost stars. This is called the baryonic Tully-Fisher relation (written out as a formula below), and it’s just an observational fact. Dark matter does not explain it. It’s a similar issue with Renzo’s rule, which says that for every feature in a galaxy’s curve of visible emission, like a wiggle or a bump, there is also a corresponding feature in the rotation curve. Again, that’s an observational fact, but it makes absolutely no sense if you think that most of the matter in galaxies is dark matter. The dark matter should remove any correlation between the luminosity and the rotation curves.
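
Written out, the baryonic Tully-Fisher relation says

M_b \propto v_f^4,

where M_b is the visible (baryonic) mass of the galaxy and v_f is the velocity at which the rotation curve flattens out.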

Then there are collisions of galaxy clusters at high velocity, like the Bullet Cluster or the El Gordo cluster. These are difficult to explain with particle dark matter, because dark matter creates friction, and that makes such high relative velocities incredibly unlikely. Yes, you heard that correctly: the Bullet Cluster is a PROBLEM for dark matter, not evidence for it.

And, yes, you can fumble with the computer simulations for dark matter and add more and more parameters to try to get it all right. But that’s no longer a simple explanation, and it’s no longer predictive.

So, if it’s not dark matter then what else could it be? The alternative explanation to particle dark matter is modified gravity. The idea of modified gravity is that we are not missing a source for gravity, but that we have the law of gravity wrong.

Modified gravity solves all the riddles that I just told you about. There’s no friction, so high relative velocities are not a problem. It predicted the Tully-Fisher relation, it explains Renzo’s rule and satellite alignments, it removes the issue with density peaks in galactic cores, and solves the missing satellites problem.

But modified gravity does not do well with the cosmic microwave background and the early universe, and it has some issues with galaxy clusters.

So that looks like a battle between competing hypotheses, and that’s certainly how it’s been portrayed and how most physicists think about it.

But here’s the thing. Purely from the perspective of data, the simplest explanation is that particle dark matter works better in some cases, and modified gravity better in others. A lot of astrophysicists reply to this: well, if you have dark matter anyway, why also have modified gravity? Answer: Because dark matter has difficulties explaining a lot of observations. On its own, it’s no longer parametrically the simplest explanation.

But wait, you may want to say, you can’t just use dark matter for observations a, b, c and modified gravity for observations x, y, z! Well actually, you can totally do that. There’s nothing in the scientific method that forbids it.

But more importantly, if you look at the mathematics, modified gravity and particle dark matter are actually very similar. Dark matter adds new particles, and modified gravity adds new fields. But because of quantum mechanics, fields are particles and particles are fields, so it’s the same thing really. The difference is the behavior of these fields or particles. It’s the behavior that changes from the scales of galaxies to clusters to filaments and the early universe. So what we need is a kind of phase transition that explains why and under which circumstances the behavior of these additional fields, or particles, changes, so that we need two different sets of equations.

And once you look at it this way, it’s obvious why we have not made progress on the question of what dark matter is for such a long time. There are just the wrong people working on it. It’s not a problem you can solve with particle physics and general relativity. It’s a problem for condensed matter physics. That’s the physics of gases, fluids, solids, and so on.

So, the conclusion that I have arrived at is that the distinction between dark matter and modified gravity is a false dichotomy. The answer isn’t either – or, it’s both. The question is just how to combine them.

Google talk online now

The major purpose of the talk was to introduce our SciMeter project which I've been working on for a few years now with Tom Price and Tobias Mistele. But I also talk a bit about my PhD topic and particle physics and how my book came about, so maybe it's interesting for some of you.

Saturday, April 24, 2021

Particle Physics Discoveries That Disappeared

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]


I get asked a lot what I think about this or that report of an anomaly in particle physics, like the B-meson anomaly at the Large Hadron Collider which made headlines last month, or the muon g-2 that was just all over the news. But I thought that instead of just giving you my opinion, which you may or may not trust, I would give you some background to gauge the relevance of such headlines yourself. Why are there so many anomalies in particle physics? And how seriously should you take them? That’s what we will talk about today.

The Higgs boson was discovered in nineteen eighty-four. I’m serious. The Crystal Ball experiment at DESY in Germany saw a particle that fit the expectation already in nineteen eighty-four. It made it into the New York Times with the headline “Physicists report mystery particle.” But the supposed mystery particle turned out to be a data fluctuation. The Higgs boson was actually only discovered in 2012 at the Large Hadron Collider at CERN. And 1984 was quite a year, because supersymmetry was also “observed” and then disappeared again.

How can this happen? Particle physicists calculate what they expect to see in an experiment using the best theory they have at the time. Currently that’s the standard model of particle physics. In 1984, that’d have been the standard model minus the particles which hadn’t been discovered.

But the theory alone doesn’t tell you what to expect in a measurement. For this you also have to take into account how the experiment is set up, so for example what beam and what luminosity, and how the detector works and how sensitive it is. This together: theory, setup, detector, gives you an expectation for your measurement. What you are then looking for are deviations from that expectation. Such deviations would be evidence for something new.

Here’s the problem. These expectations are always probabilistic. They don’t tell you exactly what you will see. They only tell you a distribution over possible outcomes. That’s partly due to quantum indeterminism but partly just classical uncertainty.

Therefore, it’s possible that you see a signal when there isn’t one. As an example, suppose I randomly distribute one-hundred points on this square. If I divide the square into four pieces of equal size, I expect about twenty-five points in each square. And indeed that turns out to be about correct for this random distribution. Here is another random distribution. Looks reasonable.

Now let’s do this a million times. No, actually, let’s not do this.

I let my computer do this a million times, and here is one of the outcomes. Whoa. That doesn’t look random! It looks like something’s attracting the points to that one square. Maybe it’s new physics!

No, there’s no new physics going on. Keep in mind, this distribution was randomly created. There’s no signal here, it’s all noise. It’s just that every once in a while noise happens to look like a signal.
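
If you want to play with this yourself, here is a minimal sketch in Python (my own illustration, not the code used for the video) that scatters the points once and then repeats the counting a million times:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Scatter 100 points uniformly on the unit square and count them per quadrant.
x, y = rng.random(100), rng.random(100)
quadrant = 2 * (y > 0.5) + (x > 0.5)          # labels 0..3, one per point
print("points per quadrant:", np.bincount(quadrant, minlength=4))

# Repeat a million times. The number of points in any fixed quadrant is
# binomially distributed with n=100 and p=1/4, so we can draw that directly.
counts = rng.binomial(n=100, p=0.25, size=1_000_000)
print("expected per quadrant:", 25)
print("most crowded quadrant seen:", counts.max())
print("fraction of trials with 40+ points in one quadrant:",
      np.mean(counts >= 40))
```

Typically each quadrant holds around twenty-five points, but among a million repetitions you will reliably find some where a quadrant has collected forty or more – pure noise that looks like a signal.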

This is why particle physicists, like scientists in all other disciplines, assign a “confidence level” to their observations, which tells you how “confident” they are that the observation was not a statistical fluctuation. They do this by calculating the probability that the supposed signal could have been created purely by chance. If fluctuations create a signature like the one you are looking for one in twenty times, then the confidence level is 95%. If fluctuations create it one in a hundred times, the confidence level is 99%, and so on. Loosely speaking, the higher the confidence level, the more remarkable the signal.

But exactly at which confidence level you declare a discovery is convention. Since the mid-1990s, particle physicists have used a confidence level of 99.99994 percent for discoveries. That’s less than a one-in-a-million chance for the signal to have been a random fluctuation. It’s also frequently referred to as 5 σ, where σ is one standard deviation. (Though that relation only holds for the normal distribution.)
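
To translate between sigmas and probabilities yourself, you can use the normal distribution. Here is a quick sketch (the exact numbers depend on whether you count deviations in both directions; this uses the two-sided convention):

```python
from scipy.stats import norm

# Probability that pure noise fluctuates by at least n standard deviations
# in either direction, assuming a normally distributed test statistic.
for n_sigma in (2, 3, 4, 5):
    p = 2 * norm.sf(n_sigma)          # sf = 1 - cdf, i.e. the upper tail
    print(f"{n_sigma} sigma: p = {p:.1e}, roughly 1 in {1/p:,.0f}")
```

For 5 σ this gives about one in 1.7 million, which is where the 99.99994 percent comes from.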

But of course deviations from the expectation attract attention already below the discovery threshold. Here is a little more history. Quarks, for all we currently know, are elementary particles, meaning we haven’t seen any substructure. But a lot of physicists have speculated that quarks might be made up of even smaller things. These smaller particles are often called “preons.” They were “found” in 1996. The New York Times reported: “Tiniest Nuclear Building Block May Not Be the Quark.” The significance of the signal was about 3 σ – a probability of a few in a thousand for it to be coincidence – and about the same as the current B-meson anomaly. But the supposed quark substructure was a statistical fluctuation.

The same year, the Higgs was “discovered” again, this time at the Large Electron Positron collider at CERN. It was an excess of Higgs-like events that made it to almost 4 σ, which is roughly a one-in-sixteen-thousand chance of being a random fluctuation. Guess what, that signal vanished too.

Then, in 2003, supersymmetry was “discovered” again, this time in form of a supposed sbottom quark, that’s the hypothetical supersymmetric partner particle of the bottom quark. That signal too was at about 3 σ but then disappeared.

And in 2015, we saw the di-photon anomaly that made it above 4 σ and then disappeared again. There have even been some 6 σ signals that disappeared, though these had no known interpretation in terms of new physics.

For example, in 1998 the Tevatron at Fermilab measured some events they dubbed “superjets” at 6 σ. They were never seen again. In 2004, HERA at DESY saw pentaquarks – that is, particles made of five quarks – with 6 σ significance, but that signal also disappeared. And then there is the muon g-2 anomaly that recently increased from 3.7 to 4.2 σ, but still hasn’t crossed the discovery threshold.

Of course not all discoveries that disappeared in particle physics were due to fluctuations. For example, in 1984, the UA1 experiment at CERN saw eleven particle decays of a certain type when they expected only three point five. The signature fit the one expected for the top quark. The physicists were quite optimistic they had found the top quark, and this news too made it into the New York Times.

Turned out, though, that they had misestimated the expected number of such events. Really there was nothing out of the ordinary. The top quark wasn’t actually discovered until 1995. A similar thing happened in 2011, when the CDF collaboration at Fermilab saw an excess of events at about 4 σ. These were not fluctuations, but understanding them required a better handle on the background.

And then of course there are possible issues with the data analysis. For example, there are various tricks you can play to increase the supposed significance. This basically doesn’t happen in collaboration papers, but you sometimes see individual researchers who use very, erm, creative methods of analysis. And then there can be systematic problems with the detectors, triggers, filters, and so on.

In summary: Possible reasons why a discovery might disappear are (a) fluctuations, (b) miscalculations, (c) analysis screw-ups, and (d) systematics. The most common one, just going by history, is fluctuations. And why are there so many fluctuations in particle physics? It’s because they have a lot of data. The more data you have, the more likely you are to find fluctuations that look like signals. That, by the way, is why particle physicists introduced the five-sigma standard in the first place. Because otherwise they’d constantly have “discoveries” that disappear.

So what’s with that B-meson anomaly at the LHC that recently made headlines? It’s actually been around since 2015, but recently a new analysis came out, so it was in the news again. It’s currently lingering at 3.1 σ. As we saw, signals of that strength go away all the time, but it’s interesting that this one has stuck around instead of going away. That makes me think it’s either a systematic problem or indeed a real signal.

Note: I have a longer comment about the recent muon g-2 measurement here.

Wednesday, April 21, 2021

All you need to know about Elon Musk’s Carbon Capture Prize

[This is a transcript of the video embedded below.]


Elon Musk has announced that he is sponsoring a competition for the best carbon removal ideas, with a fifty million dollar prize for the winner. The competition will open on April twenty-second, twenty-twenty-one. In this video, I will tell you all you need to know about carbon capture to get your brain going, and put you on the way to the fifty million dollar prize.

During the formation of our planet, large amounts of carbon dioxide were stored in the ground, and ended up in coal and oil. By burning these fossil fuels, we have released a lot of that old carbon dioxide really suddenly. It accumulates in the atmosphere and prevents our planet from giving off heat the way it used to. As a consequence, the climate changes, and it changes rapidly.

The best course of action would have been to not pump that much carbon dioxide into the atmosphere to begin with, but at this point reducing future emissions alone might no longer be the best way to proceed. We might have to find ways to actually get carbon dioxide back out of the air. Getting this done is what Elon Musk’s competition is all about.

The problem is, once carbon dioxide is in the atmosphere, it stays there for a long time. By natural processes alone, it would take several thousand years for atmospheric carbon dioxide levels to return to pre-industrial values. And the climate reacts slowly to the sudden increase in carbon dioxide, so we haven’t yet seen the full impact of what we have done already. For example, there’s a lot of water on our planet, and warming up this water takes time.

So, even if we were to entirely stop carbon dioxide emissions today, the climate would continue to change for at least several more decades, if not centuries. It’s like you voted someone out of office, and now they’re really pissed off, but they’ve got six weeks left on the job and there’s nothing you can do about that.

Globally, we are presently emitting about forty billion tons of carbon dioxide per year. According to the Intergovernmental Panel on Climate Change, we’d have to get down to twenty billion tons per year to limit warming to one point five degrees Celsius compared to preindustrial levels. These one point five degrees are what’s called the “Paris target.” This means that if we continue emitting at the same level as today, we’ll have to remove twenty billion tons of carbon dioxide per year.

But to score in Musk’s competition, you don’t need a plan to remove the full twenty billion tons per year. You merely need “A working carbon removal prototype that can be rigorously validated” that is “capable of removing at least 1 ton per day” and the carbon “should stay locked up for at least one hundred years.” But other than that, pretty much anything goes. According to the website, the “main metric for the competition is cost per ton”.

So, which options do we have to remove carbon dioxide and how much do they cost?

The obvious thing to try is enhancing natural processes which remove carbon dioxide from the atmosphere. You can do that, for example, by planting trees, because trees take up carbon dioxide as they grow. They are what’s called a natural “carbon sink”. This carbon is released again if the trees die and rot, or are burned, so planting trees alone isn’t enough; we’d have to permanently increase their numbers.

By how much? Depends somewhat on the type of forest, but to get rid of the twenty billion tons per year, we’d have to plant about ten million square kilometers of new forests. That’s about the area of the United States and more than the entire remaining Amazon rainforest.

Planting so many trees seems a bit impractical. And it isn’t cheap either: the cost is about 100 US dollars per ton of carbon dioxide. So, to get rid of the 20 billion tons of excess carbon dioxide per year, that would be about two trillion dollars per year. Trees are clearly part of the solution, but we need to do more than that. And stopping the burning of the rainforest wouldn’t hurt either.

Humans, by the way, are also a natural carbon sink, because we’re eighteen percent carbon. Unfortunately, burying or burning dead people returns that carbon to the environment. Indeed, a single cremation releases about two hundred fifty kilograms of carbon dioxide, which could be avoided, for example, by dumping dead people in the deep sea where they won’t rot. So, if we were to do sea burials instead of cremations, that would save up to a million tons of carbon dioxide per year. Not a whole lot. And probably quite expensive. Yeah, I’m not the person to win that prize.

But there’s a more efficient way that oceans could help remove carbon. If one stimulates the growth of algae, they will take up carbon. When the algae die, they sink to the bottom of the ocean, where the carbon could remain, in principle, for millions of years. This is called “ocean fertilization”.

It’s a good idea in theory, but in practice it’s presently unclear how efficient it is. There’s no good data on how many of the algae sink and how many get eaten, in which case the carbon might be released, and no one knows what else such fertilization might do to the oceans. So, a lot of research remains to be done here. It’s also unclear how much it would cost. Estimates range from two to four hundred fifty US dollars per ton of carbon dioxide.

Besides enhancing natural carbon sinks, there are a variety of technologies for removing carbon permanently.

For example, if one burns agricultural waste or wood in the absence of oxygen, this will not release all the carbon dioxide but instead produce a substance called biochar. The biochar keeps about half of the carbon, and not only is it stable for thousands of years, it can also improve the quality of soil.

The major problem with this idea is that there’s only so much agricultural waste to burn. Still, by some optimistic estimates one could remove up to one point eight billion tons of carbon dioxide per year this way. Cost estimates are between thirty and one hundred twenty US dollars per ton of carbon dioxide.

By the way, plastic is about eighty percent carbon. That’s because it’s mostly made of oil and natural gas. And since it isn’t biodegradable, it’ll safely store the carbon – as long as you don’t burn it. So, the Great Pacific garbage patch? That’s carbon storage. Not a particularly popular one though.

A more popular idea is enhanced weathering. For this, one artificially creates certain minerals that, when they come into contact with water, bind carbon dioxide, thereby removing it from the air. The idea is to produce large amounts of these minerals, crush them, and distribute them over large areas of land.

The challenges for this method are: how do you produce large amounts of these minerals, and where do you find enough land to put them on? The supporters of the American weathering project Vesta claim that the cost would be about ten US dollars per ton of carbon dioxide. So that’s a factor of ten less than planting trees.

Then there is direct air capture. The most common method for this is pushing air through filters that absorb carbon dioxide. Several companies, like Chevron, BHP, and Occidental, are currently exploring this technology. The company Carbon Engineering, which is backed by Bill Gates, has a pilot plant in British Columbia that they want to scale up to commercial plants. They claim every such plant will be equivalent in carbon removal to 40 million trees, removing 1 million tons of carbon dioxide per year.

They estimate the cost at between 94 and 232 US dollars per ton. That would mean roughly two to four and a half trillion US dollars per year to eliminate the entire twenty billion tons of carbon dioxide which we need to get rid of. That’s between two point five and five percent of the world’s GDP.

But, since carbon dioxide is also taken up by the oceans, one can try to get rid of it by extracting it from seawater. Indeed, the concentration of carbon dioxide in seawater is about one hundred twenty-five times higher than it is in air. And once you’ve removed it, the water will take up new carbon dioxide from the air, so you can basically use the oceans to suck the carbon dioxide out of the atmosphere. That sounds really neat.

The current cost estimate for carbon extraction from seawater is about 50 dollars per ton, so that’s about half as much as carbon extraction from air. The major challenge for this idea is that the currently known methods for extracting carbon dioxide from water require heating the water to about seventy degrees Celsius, which takes a lot of energy. But maybe there are other, more energy-efficient ways to get carbon dioxide out of water? You might be the person to solve this problem.

Finally, there is carbon capture and storage, which means capturing carbon dioxide right where it’s produced and storing it away before it’s released into the atmosphere.

About twenty-six commercial facilities already use this method, and a few dozen more are planned. In twenty-twenty, about forty million tons of carbon dioxide were captured this way. The typical cost is between 50 and 100 US dollars per ton of carbon dioxide, though in particularly lucky cases the cost may go down to about 15 dollars per ton. The major challenge here is that present technologies for carbon capture and storage require huge amounts of water.
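
To get a feeling for the numbers, here is a small back-of-the-envelope script (my own sketch, using only the per-ton figures quoted above, and ignoring that no single method would have to carry the full load) that converts each cost range into trillions of dollars per year at the twenty-billion-ton scale:

```python
# Rough cost ranges in US dollars per ton of CO2, as quoted above.
cost_per_ton = {
    "planting trees":            (100, 100),
    "ocean fertilization":       (2, 450),
    "biochar":                   (30, 120),
    "enhanced weathering":       (10, 10),
    "direct air capture":        (94, 232),
    "extraction from seawater":  (50, 50),
    "capture at the source":     (15, 100),   # typically 50-100, lucky cases ~15
}

TONS_PER_YEAR = 20e9   # the twenty billion tons we would need to remove

for method, (low, high) in cost_per_ton.items():
    lo, hi = low * TONS_PER_YEAR / 1e12, high * TONS_PER_YEAR / 1e12
    print(f"{method:<26s} {lo:6.2f} - {hi:6.2f} trillion USD per year")
```

The point is not the precise numbers, which are all quite uncertain, but the overall scale.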

As you can see, an overall problem for these ideas is that they’re expensive. You can therefore score in Musk’s competition by making one of the existing technologies cheaper, or more efficient, or both, or maybe you have an entirely new idea to put forward. I wish you good luck!

Saturday, April 17, 2021

Does the Universe have higher dimensions? Part 2

[This is a transcript of the video embedded below.]


In science fiction, hyper drives allow spaceships to travel faster than light by going through higher dimensions. And physicists have studied the question whether such extra dimensions exist for real in quite some detail. So, what have they found? Are extra dimensions possible? What do they have to do with string theory and black holes at the Large Hadron collider? And if extra dimensions are possible, can we use them for space travel? That’s what we will talk about today.

This video continues last week’s, in which I talked about the history of extra dimensions. As I explained in the previous video, if one adds 7 dimensions of space to our normal three dimensions, then one can describe all of the fundamental forces of nature geometrically. And that sounds like a really promising idea for a unified theory of physics. Indeed, in the early 1980s, the string theorist Edward Witten thought it was intriguing that seven additional dimensions of space is also the maximum for supergravity.

However, that numerical coincidence turned out not to lead anywhere. This geometric construction of fundamental forces, which is called Kaluza-Klein theory, suffers from several problems that no one has managed to solve.

One problem is that the radii of these extra dimensions are unstable. They could grow or shrink away, and that’s not compatible with observation. Another problem is that some of the particles we know come in two different versions, a left-handed and a right-handed one, and these two versions do not behave the same way. This is called chirality. That particles behave this way is an observational fact, but it does not fit with the Kaluza-Klein idea. Witten actually worried about this in his 1981 paper.

Enter string theory. In string theory, the fundamental entities are strings. That the strings are fundamental means they are not made of anything else. They just are. And everything else is made from these strings. Now you can ask: how many dimensions does a string need to wiggle in to correctly describe the physics we observe?

The first answer that string theorists got was twenty-six. That’s twenty-five dimensions of space and one dimension of time. That’s a lot. Turns out, though, that if you add supersymmetry the number goes down to ten, so nine dimensions of space and one dimension of time. String theory just does not work properly in fewer dimensions of space.

This creates the same problem that people had with Kaluza-Klein theory a century ago: If these dimensions exist, where are they? And string theorists answered the question the same way: We can’t see them, because they are curled up to small radii.

In string theory, one curls up those extra dimensions to complicated geometrical shapes called “Calabi-Yau manifolds”, but the details aren’t all that important. The important thing is that, because of this curling up, the strings have higher harmonics. This is the same thing which happens in Kaluza-Klein theory. And it means that if a string gets enough energy, it can oscillate with certain frequencies that have to match the radius of these extra dimensions.

Therefore, it’s not true that string theory does not make predictions, though I frequently hear people claim that. String theory predicts that these higher harmonics should exist. The problem is that you need really high energies to create them. That’s because we already know that these curled-up dimensions have to be small. And small radii mean high frequencies, and therefore high energies.
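
To see why, note that the energy of the k-th harmonic scales inversely with the radius R of the curled-up dimensions (a rough estimate that ignores numerical factors):

E_k \sim k \hbar c / R, with k = 1, 2, 3, ...

so the smaller the radius, the higher the energy you need to excite even the first harmonic.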

How high does the energy have to be to see these higher harmonics? Ah, here’s the thing. String theory does not tell you. We only know that these extra dimensions have to be so small we haven’t yet seen them. So, in principle, they could be just out of reach, and the next bigger particle collider could create these higher harmonics.

And this… is where the idea comes from that the Large Hadron Collider might create tiny black holes.

To understand how extra dimensions help with creating black holes, you first have to know that Newton’s one over R squared law is geometrical. The gravitational force of a point mass falls with one over R squared because the surface of the sphere grows with R squared, where R is the radius of the sphere. So, if you increase the distance to the mass, the force lines thin out as the surface of the sphere grows. But… here is the important point. Suppose you have additional dimensions of space. Say you don’t have three, but 3+n, where n is a positive integer. Then, the surface of the sphere increases with R to the (2+n).

Consequently, the gravitational force drops with one over R to the (2+n) as you move away from the mass. This means, if space has more than three dimensions, the force drops much faster with distance to the source than normally.

Of course Newtonian gravity was superseded by Einstein’s theory of General Relativity, but this general geometric consideration about how gravity weakens with distance to the source remains valid. So, in higher dimensions the gravitational force drops faster with distance to the source.

Keep in mind, though, that the extra dimensions we are concerned with are curled up, because otherwise we’d already have noticed them. This means that in the directions of these extra dimensions, the force lines can only spread out up to a distance that is comparable to the radius of those dimensions. After that, the only directions the force lines can continue to spread out into are the three large directions. This means that on distances much larger than the radius of the extra dimensions, we get back the usual 1/R^2 law, which is what we observe.
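
Schematically, and again ignoring numerical factors, the gravitational force of a point mass then behaves like

F(R) \propto 1 / R^{2+n} for R much smaller than L, and F(R) \propto 1 / (L^n R^2) for R much larger than L,

where L is the radius of the n curled-up dimensions. At large distances the factors of L are simply absorbed into the effective Newton constant, which is why everyday gravity looks three-dimensional.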

Now about those black holes. If gravity works as usual in three dimensions of space, we cannot create black holes at particle colliders. That’s because gravity is just too weak. But suppose you have these extra dimensions. Since the gravitational force falls much faster as you go away from the mass, it means that if you get close to a mass, the force gets much stronger than it would in only three dimensions. That makes it much easier to create black holes. Indeed, if the extra dimensions are large enough, you could create black holes at the Large Hadron Collider.

At least in theory. In practice, the Large Hadron Collider did not produce black holes, which means that if the extra dimensions exist, they’re really small. How “small”? Depends on the number of extra dimensions, but roughly speaking below a micrometer.

If they existed, could we travel through them? The brief answer is no, and even if we could it would be pointless. The reason is that while the gravitational force can spread into all of the extra dimensions, matter, like the stuff we are made of, can’t go there. It is bound to a 3-dimensional slice, which string theorists call a “brane”, that’s b r a n e, not b r a i n, and it’s a generalization of membrane. So, basically, we’re stuck on this 3-dimensional brane, which is our universe. But even if that was not the case, what do you want in these extra dimensions anyway? There isn’t anything in there and you can’t travel any faster there than in our universe.

People often think that extra dimensions provide a type of shortcut, because of illustrations like this. The idea is that our universe is kind of like this sheet which is bent, and then you can go into a direction perpendicular to it, to arrive at a seemingly distant point faster. The thing is, though, you don’t need extra dimensions for that. What we call the “dimension” in general relativity would be represented in this image by the dimension of the surface, which doesn’t change. Indeed, these things are called wormholes, and you can have them in ordinary general relativity with the ordinary three dimensions of space.

This embedding space here does not actually exist in general relativity. This is also why people get confused about the question of what the universe expands into. It doesn’t expand into anything, it just expands. By the way, fun fact: if you want to embed a general four-dimensional space-time into a higher-dimensional flat space, you need 10 dimensions, which happens to be the same number of dimensions you need for string theory to make sense. Yet another one of these meaningless numerical coincidences, but I digress.

What does this mean for space travel? Well, it means that traveling through higher dimensions by using hyper drives is scientifically extremely implausible. Therefore, my ultimate ranking for the scientific plausibility of science fiction travel is:

3rd place: Hyper drives because it’s a nice idea, it just makes no scientific sense.

2nd place: Wormholes, because at least they exist mathematically, though no one has any idea how to create them.

And the winner is... Warp drives! Because not only does the mathematics work out, it’s in principle possible to create them, at least as long as you stay below the speed of light limit. How to travel faster than light, I am afraid we still don’t know. But maybe you are the one to figure it out.