Saturday, August 27, 2022

We don't know how the universe began, and we will never know

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Did the universe come out of a black hole? Will the big bang repeat? Was the universe created from strings? Physicists have a lot of ideas about how the universe began, and I am constantly asked to comment on them. In this video I want to explain why you should not take these ideas seriously. Why not? That’s what we’ll talk about today.

The first evidence that the universe expands came from Edwin Hubble, who saw that nearby galaxies all move away from us. How this could happen was explained by none other than Albert Einstein. Yes, that guy again. His theory of general relativity says that space responds to the matter and energy in it by expanding.

And so, as time passes, matter and energy in the universe become more thinly diluted on average. I say “on average” because inside of galaxies, matter doesn’t dilute but actually clumps and space doesn’t expand. But in this video we’ll only look at the average over the entire universe.

So we know that the universe expands and on average matter in it dilutes. But if the universe expands today, this means if we look back in time the matter must have been squeezed together, so the density was higher. And a higher density of matter means a higher temperature. This tells us that in the early universe, matter was dense and hot. Really hot. At some point, matter must have been so hot that atoms couldn’t keep electrons around them. And even earlier, there wouldn’t even have been individual atomic nuclei, just a plasma of elementary particles like quarks and gluons and photons and so on. It’s like the alphabet soup of physics.

And before that? We don’t know. We don’t know because we have never tested what matter does at energy densities higher than those which the Large Hadron Collider can produce.

However, we can just ignore this difficulty, and continue using Einstein’s equations further back in time, assuming that nothing changes. What we find then is that the energy density of matter must once have been infinitely large. This is a singularity and it’s where our extrapolation into the past breaks down. The moment at which this happens is approximately thirteen point seven billion years in the past and it’s called the Big Bang.

The Big Bang didn’t happen at any particular place in space, it happened everywhere. I explained this in more detail in this earlier video.

Now, most physicists, me included, think that the Big Bang singularity is a mathematical artifact and not what really happened. It probably just means that Einstein’s theory stops working and we should be using a better one. We think that’s what’s going on, because when singularities occur in other cases in physics, that’s the reason. For example, when a drop of water pinches off a tap, the surface curvature of the water has a singular point. But this happens only if we describe the water as a smooth fluid. If we took into account that it’s actually made of atoms, the singularity would go away.

Something like that is probably also why we get the Big Bang singularity. We should be using a better theory, one that includes the quantum properties of space. Unfortunately, we don’t have the theory for this calculation. And so, all that we can reliably say is: If we extrapolate Einstein’s equations back in time, we get the Big Bang singularity. We think that this isn’t physically correct. So we don’t know how the universe began. And that’s it.

Then how come you constantly read about all those other ideas for how the universe began? Because you were sitting around at your dentist’s and had nothing else to do. Ok, but why do physicists put forward such ideas when the answer is that we just don’t know? Like, you may have recently seen some videos about how our universe was allegedly born from a black hole.

The issue is that physicists can’t accept the scientifically honest answer: We don’t know, and leave it at that. Instead, they change the extrapolation back in time by using a different set of equations. And then you can do all kinds of other things, really pretty much anything you want.

But wait, this is science, right? You don’t just get to make up equations. Unless possibly you’re decorating a blackboard for a TV crew. Though, actually, I did this once and later they asked me what those equations were and I had to tell them they don’t mean anything, which was really embarrassing. So even in this case my advice would be, you shouldn’t make up equations. But in cosmology they do it anyway. Here’s why.

Suppose you’re throwing a stone and you calculate where it falls using Newton’s laws. If I give you the initial position and velocity, you can calculate where the stone lands. We call the initial position and velocity the “initial state”, and the equation by which you calculate what happens the “evolution law”.

You can also use this equation the other way round: if you know the final state, that is, the position and velocity at the moment the stone landed, you can calculate where it’s been at any time in between, and where it came from. It’s kind of like when my kids have chocolate all over their fingers, I can deduce where that came from.
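
To make this concrete, here’s a minimal sketch in Python, with made-up numbers: the same evolution law that takes the initial state forward also takes the final state backward.

```python
def state_at(t, x0, v0, g=9.81):
    # Newton's evolution law for a vertical throw, solved exactly:
    # position and velocity at time t, given the initial state (x0, v0)
    return x0 + v0 * t - 0.5 * g * t**2, v0 - g * t

xf, vf = state_at(2.0, 0.0, 20.0)   # forward: initial state -> final state
x0, v0 = state_at(-2.0, xf, vf)     # backward: final state -> initial state
print(x0, v0)                       # recovers 0.0 and 20.0, up to rounding
```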

Okay, but you didn’t come here to hear me talk about stones, this video was supposedly about the universe. Well, in physics all theories we currently have work this way, even the one for the entire universe. The equations are more complicated, alright, but we still have an initial state and an evolution law. We put in some initial state, calculate what it would look like today, and compare that with our observations to see if it’s correct.

But wait. In this case we can only tell that the initial state and the equations *together give the correct prediction for the final state. How can we tell that the equations alone are correct?

Let’s look at the stone example again. You could throw many stones from different places with different initial velocities and check that they always land where the equations say. You could also, say, take a video of the flight of the stone and check that the position at any moment agrees with the equations. I don't think that video would kill it on TikTok, but you never know, people watch the weirdest shit.

But in cosmology we can’t do that. We have only one universe, so we can’t test the equations by changing the initial conditions. And we can’t take any snapshots in between because we’d have to wait 13 billion years. In cosmology we only have observations of the final state, that is, where the stone lands.

That’s a problem. Because then you can take whatever equation you want and use it to calculate what happened earlier. And for each possible equation there will be *some earlier state that, if you use the equation in the other direction, will agree with the final position and velocity that you observed. So it seems like in cosmology we can only test a combination of initial state and equation but not find out what either is separately. And then we can’t say anything about how the universe began.

That sounds bad. But the situation isn’t quite as bad for two reasons.

First: the equations we use for the entire universe have been confirmed by *other observations in which we *can use the standard scientific methods. There are many experiments which show that Einstein’s equations of General Relativity are correct, for example redshift in the gravitational field, or the perihelion precession of Mercury, and so on. We then take these well-confirmed equations and apply them to the entire universe.

This, however, doesn’t entirely solve the problem. That’s because in cosmology we use further assumptions besides Einstein’s equations. For example, we use the cosmological principle about which I talked in an earlier video, or we assume that the universe contains dark matter and dark energy and so on. So, saying that we trust the equations because they work in other cases doesn’t justify the current cosmological model.

But we have a second reason which *does justify it. It’s that Einstein’s equations together with their initial values in the early universe provide a simple *explanation for the observations we make today. When I say simple I mean simple in a quantitative way: you need few numbers to specify it. If you used a different equation, then the initial state would be more complicated. You’d need to put in more numbers. And the theory wouldn’t explain as much.

Just think of the equations as a kind of machine. You put in some assumptions about how the universe began, do the maths, and you get out a prediction for what it looks like today. This is a good explanation if the prediction agrees with observations *and the initial state was simple. The simpler the better. And for this you only need the observations from today, you don’t need to wait some billion years. Unless of course you would like to. You know what? Let's just wait together.

Okay. How about you wait, and we talk again in 10 billion years.

While you wait, the cosmologists who aren’t patient enough justify using one particular equation and one particular initial state by showing that this *combination is a simple explanation in the sense that we can calculate a lot of data from it. The simplest explanation that we have found is the standard model for cosmology, which is also called LambdaCDM, and it’s based on Einstein’s equations.

This model explains for example how our observations of the cosmic microwave background fit together with our observations of galactic filaments. They came out of the same initial distribution of matter, the alphabet soup of the early universe. If we used a different equation, there’d still be some initial state, but it wouldn’t be simple any more.

The requirement that an explanation is simple is super important. And it’s not just because otherwise people fall asleep before you’re done explaining. It’s because without it we can’t do science at all. Take the idea that the Earth was created 6000 years ago with all dinosaur bones in place because god made it so. This isn’t wrong. But it isn’t simple, so it’s not a scientific explanation. Evolution and geology in contrast are simple explanations for how those dinosaur bones ended up where they are. I explained this in more detail in my new book Existential Physics which has just appeared.

That said, let us then look at what physicists do when they talk about different ideas for how the universe began. For this, they change the equations as they go back in time. Typically, the equations are very similar to Einstein’s equations at the present time, but they differ early in the universe. And then they also need a different initial state, so you might no longer find a Big Bang. As I said earlier, you can always do this, because for any evolution law there will be some initial state that will give you the right prediction for today.

The problem is that this makes a simple explanation more complicated, so these theories are not scientifically justifiable. They don’t improve the explanatory power of the standard cosmological model. Another way to put it is that all those complicated ideas for how the universe began are unnecessary to explain what we observe.

It’s actually worse. Because you might think we just have to wait for better observations and then maybe we’ll see that the current cosmological model is no longer the simplest explanation. But if there was an earlier phase of the universe that was indeed more complicated than the simple initial state that we use today, we couldn’t use the scientific method to decide whether it’s correct or not. The scientific method as we know it just doesn’t cover this case. Science fails!

Sure, making better observations can help us improve the current models a little more. But eventually we’ll run into this problem that more complicated explanations are always possible, and never scientifically justified.

So what’s with all those ideas about the early universe? Here’s one that’s been kind of popular recently, an idea that was put forward by Nikodem Poplawski. For this, you change general relativity by adding new terms, called torsion, to the equations. This removes the big bang singularity and replaces it with a bounce. Our universe then came out of a bottleneck that’s quite similar to a black hole, just without the singularity. Can you do this? You can certainly do this in the sense that there’s maths for it. But on that count you can do many other things. Like broccoli. There’s maths for broccoli. So why not make the universe out of broccoli?

I know this sounds crazy, but there are a lot of examples for this, like Penrose’s cyclic cosmology that we talked about some months ago. Or the ekpyrotic universe which starts with a collision of higher dimensional membranes. Or the idea that we came out of a 5-dimensional black hole which made headlines a few years ago. Or the idea that the universe began with a gas of strings which seems to never have been particularly popular. Or the no-boundary proposal which has it that the universe began with only space and no time, an idea put forward by Jim Hartle and Stephen Hawking. Or geometrogenesis, which is the idea that the universe began as a highly connected network that then lost most of its connections and condensed into something that is indistinguishable from the space we inhabit. And so on. 

Have you ever wondered how come there are so many different ideas for the early universe? It’s because by the method that physicists currently use, there are infinitely many stories you can invent for the early universe.

The physicists who work on this always come up with some predictions for observables. But since these hypotheses are already unnecessarily complicated anyway, you can make them fit any possible observation. And even if you’d rule out some of them, there are infinitely many others you could make up.

This doesn’t mean that these ideas are wrong. It just means that we can’t tell if they’re right or wrong. My friend Tim Palmer suggested calling them ascientific. When it comes to the question of how the universe began, we are facing the limits of science itself. It’s a question I think we’ll never be able to answer. Just like we'll never be able to answer the question of why women pluck off their eyebrows and then paint them back on. Some questions defy answers.

So if you read yet another headline about some physicist who thinks our universe could have begun this way or that way, you should really read this as a creation myth written in the language of mathematics. It’s not wrong, but it isn’t scientific either. The Big Bang is the simplest explanation we know, and that is probably wrong, and that’s it. That’s all that science can tell us.

Saturday, August 20, 2022

No Sun, No Wind, Now What? Renewable Energy Storage

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Solar panels and wind turbines are great – so long as the sun shines and the wind blows. What if they don’t? You could try swearing at the sky, but that might attract your neighbor’s attention, so I’ll talk about the next best option: storing energy. But how? What storage do we have for renewable energy, how much do we need, how expensive is it, and how much does it contribute to the carbon footprint of renewables? That’s what we’ll talk about today.

I’ve been hesitating to do a video about energy storage because in all honesty it doesn’t sound particularly captivating, unless possibly you are yourself energy waiting to be stored. But I changed my mind when I learned the technical term for a cloudy and windless day. Dunkelflaute. That’s a German compound noun: “dunkel” means “dark” and “Flaute” means “lull”. So basically I made an entire video just to have an excuse to tell you this. But while you’re here we might as well talk about the problem with dunkelflaute…

The renewable energy source that currently makes the largest contribution to electricity production is hydropower with about 16%. Wind and solar together contribute about 9%. But this is electric energy only. If you include heating and transport in the energy needs, then all renewables together make it to only 11%. That’s right: We still use fossil fuels for more than 80% of our entire energy production.

The reason that wind and solar are so hotly discussed at the moment is that in the past two decades their contribution to electricity production has rapidly increased while the cost per kilo Watt hour has dropped. This is not the case for hydropower, where expansion is slow and costs have actually somewhat increased in the past decade. This isn’t so surprising: Hydropower works very well in certain places, but those places were occupied long ago. Solar and wind in contrast still have a lot of unused potential, and this is why many nations put their hopes on them.

But then there’s the dunkelflaute and its evil brother, cold dunkelflaute. That’s when the sun doesn’t shine and the wind doesn’t blow, and that happens in the winter. It’s a shame there aren’t any umlauts in the word, otherwise it’d make a great name for a metal band.

It’s no coincidence that Germans in particular go on about this because such weather situations are quite common in Germany. The German weather service estimates that about twice each year, the power production from wind and solar in Germany falls below 10% of the expected average for at least 2 days. Every once in a while these situations can last a week or longer.

Of course this isn’t an issue just in Germany. This figure shows the average monthly hours of dunkelflaute for some European countries. As you can see, they are almost all in the winter. A recent paper in Nature Communications looked at how well solar and wind can meet electricity demand in 42 countries. They found that even with optimistic extension scenarios and technology upgrades, no country would be able to avoid the problem.

The color in this figure indicates the maximum reliability that can be achieved without storage. The darker the color, the worse the situation. As you can see, without storage it would be basically impossible to meet the demand reliably anywhere with wind and solar alone. Even Australia which reliably gets sunshine can’t eliminate the risk, and Europe is more at risk than North America.

The situation might actually be worse than that because climate change might weaken the wind in some places and make dunkelflaute a more frequent visitor. That’s because part of the global air circulation is driven by the temperature gradient between the equator and the poles. The poles heat up faster than the equator, which weakens the gradient. What this’ll do to the wind isn’t clear – the current climate models aren’t good enough to tell. But maybe, just maybe, banking on stable climate patterns is not a good idea if the problem you’re trying to address is that the climate changes. Just a thought.

Ok, so how can we deal with the dunkelflaute problem? There are basically two options. One is better connectivity of the power grid, so that the risk can be shared between several countries. However, this can be difficult because neighboring countries often have similar weather conditions. A recent study by Dutch researchers found that even connecting much of Europe wouldn’t eliminate the risk. And in any case, this leaves open the question of whether countries that don’t have a problem at the time could possibly cover the demand for everyone else. I mean, the energy still has to come from somewhere.

And then there’s the problem that multi-national cooperation doesn’t always work as you want. Instead of being dependent on gas from Russia we might just end up being dependent on solar power from Egypt.

The other way to address the problem is storing the energy until we need it. First, some technical terms: The capacity of energy storage is measured in Watt hours. It’s the power that the battery can supply multiplied by the discharge time until it’s empty. For example, a battery system with an energy capacity of 20 Giga Watt hours can power 5 Giga Watt for 4 hours before it’s empty. This number alone doesn’t tell you how long you can store energy until it starts leaking; this is something else to keep in mind.
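
If you want to see that arithmetic spelled out, here it is, using the numbers from above:

```python
capacity_gwh = 20.0             # energy capacity in Giga Watt hours
power_gw = 5.0                  # power the storage can supply, in Giga Watt
print(capacity_gwh / power_gw)  # -> 4.0 hours of discharge until it's empty
```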

At the moment, the vast majority of energy storage is pumped hydro which means you use the energy you don’t need to pump water up somewhere, and when you need the energy, you let the water run back down and drive a turbine with it. Currently more than 90 percent of energy storage is pumped hydro. Problem is, there are only so many rivers in the world and to pump water up a hill you need a hill, which is more than some countries have. Much of the increase in storage capacity in the past years comes from lithium ion batteries. However, they still only make a small contribution to the total.

To give you a sense of the problem: At present we have 34 Giga Watt hours of energy storage capacity worldwide, not including pumped hydro. If you include pumped hydro, it’s 2 point 2 Tera Watt hours. We need to reach at least 1 Peta Watt hour, that’s about 500 times as much as the total we currently have. It’s an immense challenge.

So let us then have a look at how we could address this problem, other than swearing at the sky and at your neighbor and at the rest of the world while you’re at it. All energy storage systems have the same basic problem: if you put energy into storage, you’ll get less out. This means, if we combine an energy source with storage, then the efficiency goes down.

Pumped hydro, which we already talked about, has an efficiency between 78 percent and 82 percent for modern systems and can store energy for a long time. The total cost of this type of storage varies dramatically depending on location and the size of the plant, but has been estimated to be between 70 and 350 dollars per kilo Watt hour of energy storage.

Pumped hydro is really remarkable, and at least for now it’s the clear winner of energy storage. For example, in Bath County, Virginia, they store 24 Giga Watt hours this way. But pumped hydro also has its problems, because for some regions of the world, including the United States, climate change brings more drought, and you can’t pump water if you don’t have any.

A similar idea is what’s called a “gravitational energy battery”, which is basically pumped hydro but with solids. You pile concrete blocks on top of each other to store the gravitational energy, and when you let the blocks back down, you run a dynamo with it. Fun, right? These systems are very energy efficient, about 90%, and they store energy basically indefinitely. But they’re small compared to the enormous amounts of water in a reservoir.

The Swiss company EnergyVault is working on the construction of one such plant in China which they claim will have 100 Mega Watt hours energy storage capacity. So, nice idea but it isn’t going to make much of a difference. I totally think they should get a participation trophy for the effort, but keep in mind we need to reach 1 Peta Watt hour. That’d be about 10 million such plants.
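
If you want to check how sobering the numbers are, gravitational energy is just E = m times g times h. Here’s a back-of-the-envelope sketch; the 100 meter average drop is my assumption for illustration, not EnergyVault’s spec:

```python
g = 9.81  # gravitational acceleration in m/s^2

def stored_wh(mass_kg, drop_m):
    # gravitational energy E = m*g*h, converted from Joules to Watt hours
    return mass_kg * g * drop_m / 3600

print(stored_wh(1, 1000))   # ~2.7 Wh: one kilogram raised by one kilometer

# mass you'd have to stack for 100 Mega Watt hours, assuming a 100 m drop:
print(100e6 * 3600 / (g * 100))   # ~3.7e8 kg, roughly 370,000 tonnes
```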

A more promising approach is compressed air energy storage or liquefied air energy storage. As the name suggests, the idea is that you compress or liquefy air, put it aside, and if you need energy, you let the air expand to drive a generator. The good thing about this idea is that you can do it pretty much everywhere.

The efficiency has been estimated to lie between 40 and 70 percent, though it drops by about zero point two percent per day due to leakage, and that’s the optimistic estimate. The costs lie between 50 and 150 dollars per kilo Watt hour, so that’s a little less than pumped hydro and actually pretty good. This one gets the convenience award. The McIntosh Power Plant in Alabama is a very large one, with a capacity of almost three Giga Watt hours.

Another option is thermal energy storage. For this you heat a material, isolate it, and then when you need the energy you use the heat to drive a turbine, or you use it directly for heating. You can also do this by cooling a substance, then it’s called cryogenic energy storage.

The problem with thermal energy storage is that the efficiency is quite low; it typically ranges from only 30 percent to 60 percent. And since no insulation is perfect, the energy gets gradually lost. But being imperfect and losing energy is something we’re all familiar with, so this one gets the sympathy award.

In this video we’re looking into how to store solar and wind energy, but it’s worth mentioning that some countries use thermal energy storage to store heat directly for heating which is much more efficient. The Finnish company Helen Oy, for example, uses a cavern of 300 thousand cubic meters to store warm seawater in the summer which gives them about 11.6 Giga Watt hours. That’s a lot, and the main reason is that it’s just a huge volume.

As I mentioned previously, most of the expansion in energy storage capacity in the past decade has been in lithium-ion batteries. This one’s the runner-up after pumped hydro. They have a round trip efficiency of 80 to 95 percent, and a lifetime of up to 10 years.

But we currently have only a little more than 4 Giga Watt hours in lithium ion batteries, that’s a factor 500 less than what we have in pumped hydro. It isn’t cheap either. The cost in 2018 was estimated at about 469 dollars per kilo Watt hour. It’s expected to decrease to about 360 in 2025, but this is still much more expensive than liquefied air.

And then there’s hydrogen. Sweet, innocent, hydrogen. Hydrogen has a very low round trip efficiency, between 25 and 45 percent, but it’s popular because it’s really cheap. The costs have been estimated at 2 to 20 dollars per kilo Watt hour, depending on where and how you store it. So even the most expensive hydrogen storage is ten times less expensive than lithium ion batteries. In total numbers however, we currently have very little hydrogen storage. In 2017 it was about 100 Mega Watt hours. I suspect though that this is going to change very quickly and I give hydrogen the cheap-is-neat award.

Those are the biggest energy storage systems to date but there are a few fun ones that we should mention, for example flywheels. Contrary to what the name suggests, a flywheel is neither a flying wheel nor a gymnastic exercise for people who like being wheeled away in ambulances. It’s basically a big wheel that stores energy in its rotation, and it keeps spinning until you draw the energy back out because angular momentum is conserved.

Those flywheels only store energy up to 20 Mega Watt hours for a couple of minutes, so they’ll not solve the dunkelflaute problem. But they can reach efficiencies up to 95 percent, which is quite amazing really. They also don’t require much maintenance and have very long lifetimes, so they can be useful as short-term storage buffers.
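
For the curious, what a flywheel stores is rotational kinetic energy, E = ½Iω². A rough sketch with invented, though not completely crazy, numbers:

```python
# rotational energy of a solid disk: E = 1/2 * I * w^2 with I = 1/2 * m * r^2
# the numbers below are illustrative, not a real flywheel design
m, r = 5000.0, 1.0        # a 5 tonne steel disk with 1 meter radius
rim_speed = 300.0         # m/s, roughly what the material can survive
w = rim_speed / r         # angular velocity in rad/s
E_wh = 0.5 * (0.5 * m * r**2) * w**2 / 3600   # Joules -> Watt hours
print(E_wh)               # ~31,000 Wh, so about 31 kilo Watt hours per disk
```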

There are also ultracapacitors which store electric energy like capacitors, just more of it. They have a high efficiency of 85-95 percent, but can store only small amounts of energy, and are ridiculously expensive, up to 60,000 dollars per kilo Watt hour. 

The difficulty of finding good energy storage technologies drives home just how handy fossil fuels are. Let me illustrate this with some numbers. A kilogram of gasoline gives you about 13 kilo Watt hours, a kilogram of coal a little less, about 8 kilo Watt hours. A lithium ion battery gives you only 0 point 2 kilo Watt hours per kilo gram. A kilo gram of water at one kilometer altitude is just 2.7 Watt hours, that’s roughly another factor of seventy less.

On the other hand, 1 kilo gram of Uranium 235 gives you 24 Giga Watt hours. And one kilogram of antimatter plus the same amount of matter would produce 25 Tera Watt hours. 25 Tera Watt hours! With a ton of it we would cover the electric needs of the whole world for a year.
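
That last number is just E = mc² at work. Converting one kilogram of mass gives the 25 Tera Watt hours:

```python
c = 3e8                 # speed of light in m/s
joules = 1.0 * c**2     # E = m c^2 for one kilogram of converted mass
print(joules / 3.6e15)  # 1 TWh = 3.6e15 J, so this comes out at ~25 TWh
```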

Okay, so we have seen energy storage isn’t cheap and it isn’t easy, and we need a lot of it, fast. In addition, putting energy into storage and getting it back out inevitably lowers the efficiency of the energy source. This already doesn’t sound particularly great, but does it at least help with the carbon footprint? After all, you have to build the storage facility and you need to get those materials from somewhere, and if it doesn’t last long you have to recycle it or rebuild it.

A paper in 2015 from a group of American researchers found that carbon dioxide emissions resulting from storage are substantial when compared to the emissions from electricity generation, ranging from 104 to 407 kilo gram per Mega Watt hour of delivered energy.

This number probably doesn’t tell you anything, so let me put this in context. Coal releases almost a ton of carbon dioxide per Mega Watt hour. But the upper limit of the storage range is very close to the lowest estimate for natural gas. And remember that you have to add the storage on top of the carbon dioxide emissions from the renewables. Plus, the need to store the energy makes them less efficient.

In the case of lithium-ion batteries, the numbers strongly depend on how well you can recharge the batteries, that is, how many cycles they survive. According to a back-of-the-envelope estimate by chemical engineer Robert Rapier, for 400 cycles the emissions are about 330 kilo gram carbon dioxide per Mega Watt hour, but assuming 3000 cycles the number goes down to 70 kilo gram per Mega Watt hour.
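
You can reproduce the logic of such an estimate yourself by spreading the manufacturing footprint over all the energy the battery ever delivers. The 130 kilogram of carbon dioxide per kilo Watt hour of capacity below is my illustrative assumption, and Rapier’s 3000-cycle figure stays above this naive division because it also counts other overhead:

```python
def amortized_kg_per_mwh(manufacturing_kg_per_kwh, cycles):
    # manufacturing emissions spread over every MWh the battery delivers;
    # one full cycle delivers (roughly) the battery's capacity once
    return manufacturing_kg_per_kwh * 1000 / cycles

# assuming ~130 kg CO2 per kWh of battery capacity for manufacturing:
print(amortized_kg_per_mwh(130, 400))    # ~325 kg/MWh, near the 330 above
print(amortized_kg_per_mwh(130, 3000))   # ~43 kg/MWh from manufacturing alone
```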

A few thousand cycles seem possible for current batteries if you use them well. This estimate roughly agrees with a report that was published about two years ago by the Swedish Environmental Research Institute. So this means, depending on how often you use the batteries, the carbon footprint is somewhere between solar and natural gas.

How big the impact of storage is on the overall carbon dioxide emissions of wind and solar then depends on how much, how often and for how long you put energy into storage. But so long as it’s overall a small fraction of days this won’t impact the average carbon-dioxide emissions all that much.

Let’s put in some numbers. A typical estimate we’ve seen used in the literature is that you’d put energy into storage on something like 10% of days. If you take this, and one of the middle-of-the-pack values for the emissions from energy storage, and assume the storage is 80 percent efficient, then the carbon footprint of wind would increase from about 10 to about 30 kilogram per Mega Watt hour, and that of solar from about 45 to about 65. So, they are both clearly still much preferable to fossil fuels, but the need for storage also makes nuclear power look increasingly like a really good idea.
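
Here’s a minimal sketch of how such an estimate goes, assuming a mid-range 200 kilogram per Mega Watt hour for the storage itself; the exact outcome depends on which storage numbers you plug in:

```python
def with_storage(base_kg_per_mwh, stored_fraction=0.1,
                 storage_kg_per_mwh=200, efficiency=0.8):
    # per MWh delivered: the stored fraction must be over-generated to
    # cover round-trip losses, and it carries the storage footprint on top
    generated = (1 - stored_fraction) + stored_fraction / efficiency
    return base_kg_per_mwh * generated + stored_fraction * storage_kg_per_mwh

print(with_storage(10))   # wind: ~30 kg CO2 per MWh delivered
print(with_storage(45))   # solar: ~66 kg CO2 per MWh delivered
```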

What do we learn from this? At least for me the lessons are that first, it makes sense to use naturally occurring opportunities for storage. Our planet has a lot of water, and, unlike me, water has a high heat capacity. Gravitational energy doesn’t leak, location matters, and storing stuff underground increases efficiency. Second, liquid air storage has potential. And third, there’s a lot of energy in uranium 235.

Did you come to different conclusions? Let us know in the comments, we want to hear what you think.

Saturday, August 13, 2022

Science With the Gobbledygook

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Today we’re celebrating 500 thousand subscribers. That’s right, we made it to half a million! Thanks everyone for being here. YouTube has made it so much easier for me to cover the news that I think deserves to be covered, and you have made it happen. And to honor the occasion, we have collected some examples of science with the gobbledygook. And that’s what we’ll talk about today.

1. Salmon Dreams and Jelly Brains

In 2008, neuroscientist Craig Bennett took a dead Atlantic salmon to the laboratory and placed it in an fMRI machine. He then showed the salmon photographs of people in social situations and asked what the people in the photos might have been feeling. For example, if I show you a stock photo of a physicist with a laser, the associated emotion is obviously uncontrollable excitement. The salmon didn't answer.

You may find that unsurprising given that it was very dead. But Bennett then used standard protocols to analyze the fMRI signal he had recorded while questioning the salmon, and found activity in some region of the salmon’s brain. The absurdity of this finding went a long way toward illustrating that the fMRI methods used at the time frequently gave spurious results.

The dead salmon led to quite some soul-searching in the neuroscience community about the usefulness of fMRI readings. A meta-review in 2020 concluded that “common task-fMRI measures are not currently suitable for brain biomarker discovery or for individual-differences research.”

In 2011, a similar point was made by neuroscientists who published an electroencephalogram of jello that showed “mild diffuse slowing of the posterior dominant rhythm”. They also highlighted some other issues that can give rise to artifacts in EEG readings, such as sweating, or being close to a power outlet.

2. Medical Researcher Reinvents Integration

In 1994, Mary Tai from the Obesity Research Center in New York invented a method to calculate the area under a curve and published it in the journal Diabetes Care. She called her discovery “The Tai Model.” It’s also known as integration, or more specifically the trapezoidal rule. To date, the paper has been cited more than 400 times.
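
In case you too slept through math class, the “Tai Model” fits in a few lines. A minimal sketch with made-up readings:

```python
def tai_model(xs, ys):
    # the trapezoidal rule: area under a curve as a sum of trapezoids
    return sum((ys[i] + ys[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(len(xs) - 1))

# glucose-style readings at irregularly spaced times
print(tai_model([0, 1, 2, 4], [5.0, 7.0, 6.5, 5.5]))   # -> 24.75
```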

It’s maybe somewhat unfair to list this as “gobbledygook” because it’s not actually wrong, she just wasn’t exactly the first to have the idea. If you slept through math class, don't worry, you can just go into medicine. What could possibly happen?

3. The Sokal Hoax and its Legacy

This is probably the most famous hoax in academic publishing. Alan Sokal is a physics professor at NYU and UCL, he works mostly on the mathematical properties of quantum field theory. In 1996 he wrote a paper for the journal Social Text. It was titled “Transgressing the Boundaries: Towards a Transformative Hermeneutics of Quantum Gravity”. In this paper, Sokal argued that to resolve the disagreement between Einstein’s theory of gravity and quantum mechanics, we need postmodern science. What does that mean? Here’s what Sokal wrote in his paper:

“The postmodern sciences overthrow the static ontological categories and hierarchies characteristic of modernist science…. [They] appear to be converging on a new epistemological paradigm, one that may be termed an ecological perspective.”

In other words, the reason we still haven’t managed to unify gravity with quantum mechanics is that you can’t eat quantum gravity. So, yes, clearly an ecological problem. Though you should try eating it if you find it. I mean, you never know, right?

Sokal’s paper was published without peer review. According to the editors, the decision was based on the author’s credentials. Sokal argued that if everyone can make up nonsense like this and it’s deemed suitable for publication, then such publications are worthless. The journal still exists. Some of its recent issues are about “Sexology and Its Afterlives” and “Sociality at the End of the World”.

Similar hoaxes have since been pulled off a few times even in journals that *are peer reviewed. For example, in 2018, a group of three Americans who describe themselves as “left wing” and “liberal” succeeded in publishing several nonsense papers in academic journals on topics such as gender and race studies. One paper, for example, claimed to relate observations of dogs and their owners to rape culture. Here’s a quote from the paper:

“Do dogs suffer oppression based upon (perceived) gender? [This article] concludes by applying Black feminist criminology categories through which my observations can be understood and by inferring from lessons relevant to human and dog interactions to suggest practical applications that disrupts hegemonic masculinities and improves access to emancipatory spaces.”

The authors explained in a YouTube video that they certainly don’t think race and gender studies are unimportant but rather the opposite. Such studies are important and it’s therefore hugely concerning if one can publish complete nonsense in academic journals on the topic. They argued that articles which are currently accepted for publication in the area are biased towards airing “grievances” predominantly about white heterosexual men. They called their project “grievance studies” but it became known as Sokal Squared.

The most recent such hoax was revealed last year in October. The journal Higher Education Quarterly published a study that claimed to show that right-wing funding pressures university faculty to promote right-wing causes in hiring and research. The paper contained a number of obviously shady statistics, and yet was accepted for publication.

The authors had submitted the manuscript under pseudonyms with initials that spelled SOKAL, and pretended to be affiliated with universities where no one with those names worked. They later revealed their hoax on twitter. The account has since been suspended. The journal retracted the paper.

4. Fake it till you make it

Those papers in the Sokal hoaxes were written by actual people. But in 2005 a group of computer science students from MIT demonstrated that this isn’t actually necessary. They wrote a program that automatically generated papers with nonsense text, including graphs, figures, and citations.

One of their examples was titled “A Methodology for the Typical Unification of Access Points and Redundancy” and explained “Our implementation of our approach is low-energy, Bayesian, and introspective. Further, the 91 C files contain about 8969 lines of SmallTalk.” They didn’t submit it to a journal but it was accepted for presentation at a conference. They used this to draw attention to the low standards of the meeting.

But this wasn’t the end of the story because the MIT group made their code publicly available. In 2010, the French researcher Cyril Labbé used this code to create more than a hundred junk papers by a fictional author named Ike Antkare. The papers all cited each other, and soon enough Google Scholar listed the non-existent Antkare as the 21st most cited researcher in the world.

A few years later, Labbé wrote a program that could detect the specific pattern of these junk papers that were generated with the MIT group’s software. He found that at least 120 of them had been published. They have since been retracted.

The online version of the MIT code doesn’t work anymore, but there’s another website that’ll allow you to generate a gibberish maths paper, with equations and references and all. Here for example is my new paper on “Existence in Complex Graph Theory” with my co-authors Henri Poincaré and Jesus Christ.

The physics enthusiasts among you might also enjoy the snarXiv, a website that looks like the arXiv but with nonsense abstracts about high energy physics. I’ll leave you links to all these websites in the info below the video.

5. My Phone Did It

Okay so you can write papers with an artificial intelligence. Indeed, artificial intelligence now writes papers about itself. But what if you don’t have one? Look no further than your phone.

In 2016, Christoph Bartneck from the University of Canterbury, New Zealand received an invitation from the International Conference on Atomic and Nuclear Physics to submit a paper. He explained on his blog “Since I have practically no knowledge of Nuclear Physics I resorted to iOS auto-complete function to help me writing the paper.” The paper was accepted. Here is an extract from the text “Physics are great but the way it does it makes you want a good book and I will pick it to the same time.”

6. Get me off your fucking email list

I’m not sure how well-known this is, but if you’ve published a few papers in standard scientific journals you get spammed with invitations to fake conferences and scam journals all the time. In many cases these invitations have nothing to do with your actual research. I’ve been invited to publish papers on everything from cardiology to tea. Most of the time you just delete it, but it does get a bit annoying. I will say though, that the tea conference I attended was lovely.

In 2005, David Mazières and Eddie Kohler dealt with the issue by writing a paper that repeated the one sentence “Get me off your fucking email list” over and over again, complete with a flow diagram and a scatter plot. They submitted it to the 9th World Multiconference on Systemics, Cybernetics and Informatics to protest its poor standards.

In 2014, the Australian computer scientist Peter Vamplew sent the same paper to the International Journal of Advanced Computer Technology in response to their persistent emails. To his surprise, he was soon informed that the paper had been accepted for publication. Not only this, its reviewers had allegedly rated the paper “Excellent”. Next thing that happened was that they asked him to pay 150 dollars for the publication. He didn’t pay and they, unfortunately, didn’t take him off the email list.

7. Chicken chicken chicken

Chicken chicken chicken Chicken chicken chicken chicken chicken chicken chick chicken chicken Chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chicken chick chicken chicken chicken Chicken chicken chicken chicken chicken chicken Chicken chick chicken.

8. April Fools on the arXiv

The arXiv is the open access pre-print server which is widely used in physics and related disciplines. The arXiv has a long tradition of accepting joke papers for April 1st, and it’s some of the best nerd humor you’ll find.

For example, two years ago two physicists proposed a “Novel approach to Room Temperature Superconductivity problem”. The problem is that the critical temperature at which superconductivity sets in is extremely low for all known materials. Even the so-called “high temperature superconductors” become superconducting only at -70 degrees Celsius or so. Finding a material that superconducts at room temperature is basically the holy grail of material science. But don’t tell Monty Python, because it’s silly enough already to call minus 70 degrees Celsius a “high temperature”.

In their April first paper, the authors report they have found an ingenious solution to the problem of finding superconductors that work at room temperature: “Instead of increasing the critical temperature of a superconductor, the temperature of the room was decreased to an appropriate [value of the critical temperature]. We consider this approach more promising for obtaining a large number of materials possessing Room Temperature Superconductivity in the near future.”

In 2022, one of the April Fools papers made fun of exoplanet sightings and reported exopet sightings in Zoom meetings.


9. Funny Paper Titles

As you just saw, scientists want to have fun too, and not just on April 1st, so sometimes they do it in their paper titles. For example, there’s the paper about laser optics called “One ring to multiplex them all”. Or this one called “Would Bohr be born if Bohm were born before Born?”

Of course physicists aren’t the only scientists with humor. There is also “Premature Speculation Concerning Pornography’s Effects on Relationships”, and “Great Big Boulders I have Known” and “Role of childhood aerobic fitness in successful street crossing”, though maybe that was unintentionally funny.

An honorable mention goes to the paper titled “Will Any Crap We Put into Graphene Increase Its Electrocatalytic Effect?” because the authors did literally put bird crap into graphene. And, yes, it increased the electrocatalytic effect.

10. Dr Cat

In 1975, the American physicist Jack Hetherington wanted to publish some of his research results in the journal Physical Review Letters. He was the sole author of the paper, but he’d written it in the first person plural, referring to himself as “we”. This is extremely common in the scientific literature and we have done that ourselves, but a colleague pointed out to Hetherington that PRL had a policy that would require him to use the first person singular.

Instead of rewriting his paper, Hetherington decided he’d name his cat as co-author under the name F. D. C. Willard. The paper was published with the cat as co-author and he could keep using the plural.

Hetherington revealed the identity of his co-author by letting the cat “sign” a paper with paw prints. The story of Willard the cat was soon picked up by many colleagues, who’d thank the cat for useful discussions in footnotes of their papers, or invite it to conferences. Willard the cat also later published two single-authored papers, and quickly became a leading researcher, no doubt with a paw-fect CV. On April 1st 2014 the American Physical Society announced that cat-authored papers, including the Hetherington/Willard paper, would henceforth be open-access.

I hope you enjoyed this list of science anecdotes. If you have one to add, please share it in the comments.

Saturday, August 06, 2022

How to compute with a computer that doesn't compute

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Thanks for clicking on this video, by which you’ve ruled out a possible world in which you never watched it. This alternative world has become a “counterfactual” reality. For us, counterfactuals are just things that could have happened but didn’t, like my husband mowing the lawn. In quantum mechanics, it’s more difficult. In quantum mechanics, events which could have happened but didn’t still have an influence on what actually happens. Yeah, that’s weird. What does quantum mechanics have to do with counterfactual reality? That’s what we’ll talk about today.

I have only recently begun working in the foundations of quantum mechanics. For the previous decade I have mostly worked on General Relativity, cosmology, dark matter, and stuff like this. And I have to say I quite like working on quantum mechanics because it’s simple. I like simple things. It’s why I have plastic plants instead of a dog.

In case you were laughing, this wasn’t a joke. I actually do have plastic plants, and quantum mechanics is indeed a simple theory – if you look at the mathematics. The difficult part is making sense of it. For General Relativity it’s the other way round, and all the maths in the world won’t help you make sense of dogs.

Okay, I can see you’re not entirely convinced that quantum mechanics is in some sense simple, but please give me a chance to convince you. In quantum mechanics we describe everything by a wave-function. It’s usually denoted psi, which is a Greek letter but maybe not coincidentally also the reaction I get from my friends when I go on about quantum mechanics.

We compute how the wave-function behaves from the Schrödinger equation, but for many cases we don’t need this. For many cases we just need to know that the Schrödinger equation is a kind of machine that takes in a wave-function and spits out another wave-function. And the wave-function is a device from which you calculate probabilities. To keep things simple, I’ll directly talk about the probabilities. This doesn’t always work, so please don’t think quantum mechanics is really just probabilities, but it’s good enough for our purposes.

Here is an example. Suppose we have a laser. Where did we get it from? Well, maybe it was on sale? Or we borrowed it from the quantum optics lab? Maybe the laser fairy brought it? Look, this is theoretical physics, let’s just assume we have a laser, and not ask where we got it, okay?

So, suppose we have a laser. The laser hits a beam splitter. A beam splitter, well, splits a beam. I told you, this isn’t rocket science! In the simplest case, the splitter splits the beam into half, but this doesn’t have to be the case. It could also be a third and two thirds, or a tenth and nine tenths, so long as the fractions add up to 1. You get the idea. For now, let’s just take the case with a half-half split.

So far we’ve been talking about a laser beam, but the beam is made up of many quanta of light. The quanta of light are the photons. What happens with the individual quanta when they hit the beam splitter? The quanta are each described by a wave-function. Did I just hear you sigh?

The Schrödinger equation tells you something complicated happens to this wave-function, but let’s forget about this and just look at the outcome. So we say, the beam splitter is a machine that does something to this wave-function. What does it do?

It’s not that the photon which comes out of the beam splitter goes one way half of the time and the other way the other half. Instead, here it comes, the photon itself is split into half, kind of. We can describe this by saying the photon goes in with a wave-function going in this direction. And out comes a wave-function that is a sum of both paths.

I already told you that the wave-function is a device from which we calculate probabilities. More precisely we do this by taking the absolute square of the weights in the wave-function. Since the probabilities are ½ for each possibility, this means the weight for each path in the wave-function is one over square root two. If the beam splitter doesn’t split the beam half-half but, say 1/3 and 2/3, then the weights are square root of 1/3 and square root of 2/3 and so on.
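
If you want to check the bookkeeping, this is all there is to it; here’s the 1/3 and 2/3 case:

```python
import numpy as np

psi = np.array([np.sqrt(1/3), np.sqrt(2/3)])  # the weights of the two paths
print(np.abs(psi)**2)   # -> [0.333..., 0.666...], probabilities that sum to 1
```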

We say that this sum of wave-functions is a “superposition” of both paths. That’s the simple part. The difficult part is the question whether the photon really is on both paths. I’ll not discuss this here, because we just talked about this some weeks ago, so check out my earlier video about this.

That the photon is now in a superposition of both paths tells you the probability to measure the particle on either path. But of course if you do measure the particle, you know for sure where it is. So this means if you measure the photon, it’s no longer in a superposition of both paths; the wave-function has “collapsed” on one of the paths like me after a long hike.

As long as you don’t measure the wave-function, this beam splitter also works backwards. If you turn around the directions of the two paths, for example with mirrors, they’ll recombine into a photon on one path. You can understand this by remembering that the photon is a wave, and waves can interfere constructively and destructively. So they interfere constructively on this output direction, but destructively on the other. Again, the nice thing here is that you don’t actually need to know this. The beam splitter is just a machine that converts some wave-functions into others.

Let’s look at something a little more useful. We’ll turn this around again, put two mirrors here and combine the two paths at another beam splitter. What happens? Well, this lower beam splitter is exactly the turned-around version of the upper beam splitter, which is what we just looked at. The superposition will recombine to one path, and the photon always goes into detector 2.

Well, actually, I should add this is only the case if the paths are exactly the same length. Because if you change the length of one path, that will shift the phase-relations, and so the interference may no longer be exactly destructive in detector 1. This means a device like this is extremely sensitive to changes in the lengths of the paths. It’s called an interferometer. If you change the orientation of those mirrors and move the second beam splitter near the first, then this is basically how gravitational wave interferometers work. If a gravitational wave comes through, this changes the relative lengths of the paths and that changes the interference pattern.
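
You can simulate the entire interferometer with two small matrices. Here’s a sketch using one common phase convention, in which the reflected amplitude picks up a factor i; the exact phases depend on the optics, but the pattern is the same:

```python
import numpy as np

# a 50/50 beam splitter; the reflected part picks up a factor i
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

def detector_probs(phase=0.0):
    P = np.diag([1, np.exp(1j * phase)])  # extra phase from a path-length change
    psi = B @ P @ B @ np.array([1, 0])    # splitter -> two paths -> splitter
    return np.abs(psi)**2                 # [detector 1, detector 2]

print(detector_probs(0.0))     # equal path lengths: ~[0, 1], always detector 2
print(detector_probs(np.pi))   # half a wavelength of difference flips it
```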

Okay, so we have an interferometer. It’s called a Mach-Zehnder interferometer by the way. Now let’s make this a little more complicated, which is what I said last time when I bought the 5000 piece puzzle that’s been sitting on the shelf for 5 years, but luckily we don’t need quite as many pieces.

We add two more beam splitters and another mirror. And then we need a third detector here. And, watch out, here’s an added complication. Those two outer beam splitters split a beam into fractions 1/3 and 2/3, and those two inner ones 1/2 each. Yeah, sorry about that, but otherwise it won’t work.

What happens if you send a photon into this setup? Well, this part that we just added here, that’s just another interferometer. So if something goes in up here, it’ll come out down here. So 2/3 chance the photon ends up in detector 3. And a 1/3 chance it goes down here, and then through the second beam splitter. And remember this splitter splits 2 to 1, so it’s 2/9 in detector 2 and 1/9 in detector 1.
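
In the “yes” case the inner interferometer behaves like a closed unit that routes everything downward, so the three detector probabilities are just products of fractions:

```python
from fractions import Fraction

p_bypass = Fraction(1, 3)   # first 1/3 : 2/3 splitter: path past the computer
p_inner = Fraction(2, 3)    # path into the inner interferometer

p_det3 = p_inner                    # "yes": the inner loop routes it all down
p_det2 = p_bypass * Fraction(2, 3)  # final splitter splits the rest 2 to 1
p_det1 = p_bypass * Fraction(1, 3)
print(p_det3, p_det2, p_det1)       # 2/3, 2/9, 1/9 -- they add up to 1
```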

Now comes the fun part. Suppose we have a computer, a really simple one. It can only answer questions with “yes” or no”. It’s a programmable device with some inner working that doesn’t need to concern us. It told me we can call it James, and it actually would prefer that we don’t ask any further questions. Only thing we need to know is that once you have programmed your computer, I mean James, you run it by inputting a single photon. If the answer is “yes” the photon goes right through, entirely undisturbed. If the answer is “no”, the photon just doesn’t come out. Keith Bowden suggested one could do this by creating a maze for the photon, where the layout of the maze encodes the program, though I’m not sure how James feels about this.

So let’s assume you have programmed the computer to once and for all settle the question whether it’s okay to put pineapple on pizza. Then you put your computer… here. What happens if you turn on your photon source this time? If the answer to the question is “yes, pineapple is okay” then nothing happens at the computer, and it’s just the same as we just talked about. The photon goes to detector 3 2/3 of the time, and in the other cases it splits up between detector 1 and 2.

But now suppose the answer is “no”. What happens then? Well, one thing that can happen is that the photon goes into the computer and doesn’t come out. Nothing ever appears in any detector, and you know the answer is “no, pineapples are bad, don’t put them on pizza”. This is the boring case and it happens 1/3 of the time, but at least you now know what to think about people who put pineapple on pizza.

Here is the more interesting case. If the photon is in the inner interferometer but does not go into the computer, where it would get stuck, then it goes along the upper path. But when it reaches the next beam splitter, it has nothing to recombine with. So it gets split up again into a superposition. It either goes into detector 3, which happens 1/6 of the time, or it goes down here and then recombines with the lower path from the outer interferometer. This happens in half of the cases, and if it happens, then the photon always goes to detector 2, and never to detector 1. This only comes out correctly if the beam splitters have the right ratios, which is why we need those particular ratios.
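
In the “no” case you have to track amplitudes rather than probabilities, because two paths interfere at the last beam splitter. Here’s a sketch with the relative signs chosen by hand so that the splitters stay consistent; the actual phases come from the optics:

```python
from math import sqrt

a_bypass = sqrt(1/3)             # first splitter: amplitude past the computer
a_inner = sqrt(2/3)              # amplitude entering the inner interferometer

a_blocked = a_inner * sqrt(1/2)  # inner 50/50 splitter: into the computer
print(a_blocked**2)              # -> 1/3: photon absorbed, no detector clicks

a_upper = a_inner * sqrt(1/2)    # survives on the upper path
a_det3 = a_upper * sqrt(1/2)     # second inner splitter: up to detector 3
a_down = a_upper * sqrt(1/2)     # or down to the final 1/3 : 2/3 splitter

a_det1 = a_down * sqrt(2/3) - a_bypass * sqrt(1/3)   # destructive: ~0
a_det2 = a_down * sqrt(1/3) + a_bypass * sqrt(2/3)   # constructive
print(a_det3**2, a_det1**2, a_det2**2)               # 1/6, ~0.0, 1/2
```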

Okay, so we see the maths is just adding up fractions, this is the simple part. But now let’s think about what this means. We have seen that the only way we can measure a photon in detector 1 is if the outcome of the computation is “yes”. But we have also seen that if the answer is “yes” and the photon actually goes through this inner part where the computer is located, it cannot reach detector 1. So we know that the answer is “yes” without ever having to run the computer. The photon that goes to detector 1 seems to know what would have happened, had it gone the other path. It knows its own counterfactual reality. In other words, if we had a quantum lawn then it still wouldn’t be mowed, but it’d know what my husband does when he doesn’t mow the lawn. I hope this makes sense now.

And no, this video isn’t a joke, at least not all of it. It’s actually true, you can compute with a computer that doesn’t compute. It’s called “counterfactual computation”. The idea was brought up in the late 1990s by Richard Jozsa and Graeme Mitchison. The example which we just discussed isn’t particularly efficient, because the case where you get your answer without running the computer happens so rarely that you’re better off guessing. But if you make the setup more complicated you can increase the probability of finding out what the computer did without running it.

That this indeed works was demonstrated in 2006, in an experiment where the computer performed a simple search algorithm known as Grover’s algorithm. This doesn’t tell you whether pineapple on pizza is okay, but if you have an unsorted database with different entries, this algorithm will tell you which entry is the same as your input value.

Now, let me be clear, this is a table-top experiment that doesn’t calculate anything of use to anybody. I mean, not unless you want to count the use of publishing a paper about it. The database they used for this experiment had four entries in terms of polarized photons. You might argue that you don’t need an entire laboratory to search for one among the stunning number of four entries, and I would agree. But this experiment has demonstrated that counterfactual computation indeed works.

The idea has led to a lot of follow-up works, which include counterfactual quantum cryptography, and how to use counterfactual computation to speed up quantum computers and so on. There is a lot of controversy in the literature about what this all means, but no disagreement on how it works or that it works. And that pretty much tells you what the current status of quantum mechanics is. We agree on how it works. We just don’t agree on what it all means.

If You Need a Break, Try Some Physics. (By which I really mean, please buy my new book.)


If I could, I would lock myself up in a cabin in the woods and not read any news for two weeks. But I find cabins in the woods creepy, and I’d miss the bunny pics on twitter. And in any case, I have something better to offer. 

If you want to take a step back from current affairs, why not fill your mind with some of the big mysteries of our existence? It works like a charm for my mental health. Why do we only get older and not younger? Are there copies of us in other universes? Can particles think? Has physics ruled out free will? Will we ever have a theory of everything? Does science have limits? Can information be destroyed? Will we ever know how the universe began? Is human behavior predictable? Ponder these mysteries for an hour a day and it’ll clear your head beautifully. I speak from experience.

I discuss all these questions and many more in my new book “Existential Physics: A Scientist’s Guide to Life’s Biggest Questions”, which will be on sale in the USA and Canada beginning next week, on August 9. I hope this book will help you separate what physicists know about those big questions from what they just speculate about.

You can buy a signed copy from Midtown Scholar here (but note that they ship only in the USA and Canada). The UK edition will be published on August 18. The publication date for the German translation has tentatively been set to March 28, 2023. There’ll be a couple more translations following next year. Some more info about the book (reviews etc) here.