Wednesday, September 28, 2022

I’ve said it all before but here we go again

[I didn't write the title and byline, and indeed didn't see it until it appeared online.]
For reasons I don’t fully understand, particle physicists have recently started picking on me again for allegedly being hostile, and have been coming at me with their usual ad hominem attacks.

What’s going on? I spent years trying to understand why their field isn’t making progress, analyzing the problem, and putting forward a solution. It’s not that I hate particle physics; rather to the contrary, I think it’s too important to let it die. But they don’t like to hear that their field urgently needs to change direction, so they attack me as the bearer of bad news.

But trying to get rid of me isn’t going to solve their problem. For one thing, it's not working. More importantly, everyone can see that nothing useful is coming out of particle physics, it’s just a sink of money. Lots of money. And soon enough governments are going to realize that particle physics is a good place to save money that they need for more urgent things. It would be in particle physicists’ own interest to listen to what I have to say.

And I have said this all many times before but I hate long twitter threads, so let me just summarize it in one blogpost:

a) Predictions for fundamentally new phenomena made from new theories in particle physics have all been wrong ever since the completion of the standard model in the 1970s. You have witnessed this ongoing failure in the popular science media. All their ideas were either falsified or turned into eternally amendable and, for all practical purposes, unfalsifiable models, like supersymmetry.

b) Saying that “it’s difficult” explains why they haven’t managed to find new phenomena, but it doesn’t explain why their predictions are constantly wrong. 

c) Scientists should learn from failure. If particle physicists’ method of theory-development isn’t working, they should analyze why, and change their methods. But this isn’t happening.

My answer to why their current method isn’t working is that their new theories (often in the form of new particles) do not solve any problems in the existing theories. They just add unnecessary clutter. When theoretical predictions were correct in the past, they solved problems of consistency (examples: the Higgs, antiparticles, neutrinos, general relativity, etc.).

Two common misunderstandings: Note that I do NOT say theorists in the past used this argument to make their predictions. I am merely noting in hindsight that’s what they did. It’s what the successful predictions have in common, and we should learn from history. Neither do I say that theoretical predictions were the ONLY way that progress happened. Of course not. Progress can also happen by experimental discoveries. But the more expensive new experiments become, the more careful we have to be about deciding which experiments to make, so we need solid theoretical predictions.

In many cases, particle physicists have made up pseudo-problems that they claim their new particles solve. Pseudo-problems are metaphysical misgivings, often a perceived lack of beauty. A typical example is the alleged problem with the Higgs mass being too small (that was behind the idea that the LHC should see supersymmetry). It’s a pseudo-problem because there is obviously nothing wrong with the Higgs mass being what it is, seeing that the standard model, with its Higgs as it is, makes perfectly good predictions.

(I sometimes see particle physicists claiming that supersymmetry “explains” the Higgs mass. This is flatly wrong. You cannot calculate the Higgs mass from supersymmetric models; it remains a free parameter.)

Other pseudo-problems are the baryon asymmetry and the smallness of the cosmological constant, etc. I have a list that distinguishes problems from pseudo-problems here.

So my recommendation is that theory development should focus on resolving inconsistencies, and stop wasting time on pseudo-problems. Real problems are, for example, the missing quantization of gravity, dark matter, the measurement problem in quantum mechanics, as well as several rather technical issues with quantum mechanics (see the above-mentioned list).

When I say “dark matter” I refer to the inconsistency between observation and theory. Note that to solve this problem one does NOT need details of the particles. That’s another point which particle physicists like to misunderstand. You fit the observations with an energy density and that’s pretty much it. You don’t need to fumble together entire “hidden sectors” with “portals” and other nonsense. Come on, people, wake up! This isn’t proper science!

There are several reasons why particle physicists can’t and don’t want to make this change. The most important one is that it would dramatically impede their capability to produce papers. And papers are what keeps grant cycles churning. This is a systemic problem. The next problem is that they can’t believe that what I say can possibly be correct, because they have grown up in a community that has taught them their current methods are good. That’s groupthink in action.

There are solutions to both of these problems, but they require changes from within the community.

Particle physicists, rather unsurprisingly, don’t like the idea that they have to change. Their responses are boringly predictable.

They almost all attack me rather than my argument. Typically they will make claims like I’m just “trying to sell books” or that I “want attention” or that I “like to be contrarian” or that, in one way or another, I don’t know what I am talking about. I have yet to find a particle physicist who actually engaged with the argument I made. Indeed, most of them never bother finding out what I said in the first place.

A novel accusation that I recently heard for the first time is that I allegedly refuse to argue with them. A particle physicist claimed on twitter that I had been repeatedly invited to give a seminar at CERN but declined, something she had been told by someone else. This is untrue. To the best of my knowledge, I have never declined an opportunity to talk to particle physicists, even though I have been yelled at repeatedly. I was never invited to give a seminar at CERN.

The particle physicist who made this claim actually went and asked the main seminar organizers at CERN, and they confirmed that I was never invited. She apologized. So it’s all good, except that it documents that they have been circulating lies about me in an attempt to question my expertise. (Another symptom of social reinforcement.)

There have also been several instances in the past where particle physicists called senior people at my workplace to complain about me, probably in the hope of intimidating me or getting me fired. It speaks much for my institution that the people in charge exerted no pressure on me. (In other words, don’t bother calling them, it’s not going to help.)

The only “arguments” I hear from particle physicists are misunderstandings that I have cleared up thousands of times in the past. Like the dumb claim that inventing particles worked for Dirac. Or that I’m “anti-science” because I think building a bigger collider isn’t a good investment right now.

You would think that scientists should be interested in finding out how their field can make progress, but particle physicists just desperately try to make me go away, as if I were the problem.

But hey, here’s a pro-tip: If you want to sell books, I recommend you don’t write them about theoretical high energy physics. It’s not a topic that has a huge market. Also, I have way more attention than I need or want. I don’t want attention, I want to see progress. And I don’t like being contrarian, I am just not afraid of being contrarian when it’s necessary.

As a consequence of these recent insults targeted at me, I wrote an opinion piece for the Guardian that appeared on Monday. Please note the causal order: I wrote the piece because particle physicists picked on me in a renewed attempt to justify continuing with their failed methods, not the other way round. 

It’s not that I think they will finally see the light. But yeah, I’m having fun for sure.

Monday, September 26, 2022

Book Review “The Biggest Ideas in the Universe: Space, Time, and Motion” by Sean Carroll

The Biggest Ideas in the Universe
Space, Time, and Motion
By Sean Carroll
Dutton, Sep 20, 2022

The first time I heard Sean Carroll speak was almost 20 years ago in Tucson, Arizona, where he gave a physics colloquium. He had just published his first book, a textbook on General Relativity. His colloquium was basically an introduction to modern cosmology, dark matter, dark energy, and the cosmic microwave background.

It was a splendidly delivered talk; the students loved it. But later I overheard several faculty members remarking they had found it “too simple” and that Sean didn’t seem to be doing much original work. To them, the only good talk was an incomprehensible one. Those remarks, I would later come to realize, are symptomatic of academia: You impress your colleagues by being incomprehensible.

Sean had begun blogging the same year I heard him speak in Tucson, 2004. I would begin blogging not much later, though for unrelated reasons (I originally didn’t intend to write about science), and naturally I kept track of what he was up to.

Since then, it has made me very happy to see Sean making a good career both in research and in science communication, on his own terms. I have met him a few times over the years, read most of his books, and reviewed a few. But I didn’t anticipate he’d pop up on YouTube in 2020, stuck at home during the first COVID lockdowns, like all of us. There he was, green screen as crappy as mine had been a year earlier, promising to cover “The Biggest Ideas in the Universe”, when I had just decided to put more effort into my own YouTube channel.

To my relief it became clear quickly that Sean’s YouTube ambitions were much different from mine. He went for the basics where I prioritized brevity. If my YouTube channel is a buffet, then his is the farmer’s market. And luckily his YouTube appearance remained temporary.

His newest book is the first of three that will summarize his YouTube series; it focuses on dynamical laws, space, and time. It gradually builds up from functions to equations of motion, to concepts like energy, velocity, and momentum, to space-time and its geometry, and finishes with black holes in General Relativity. He uses the most essential equations and explains how they work, but you can follow the explanation just by reading the text.

This isn’t your usual popular science book. It doesn’t discuss speculative new ideas, but it’ll give you the background to understand them. It’s a timeless book that I am sure will become a classic, a go-to reference for the interested non-expert who wants to see how the gears of the machinery turn underneath the superficial stories you find in popular science books.

When the three volumes are complete, they’ll presumably cover the classes you’d take for a master’s degree in physics. There aren’t many books like this, which fill the gap between textbooks and popular science books. The only other example that comes to my mind is the “Theoretical Minimum” series by Susskind, Hrabovsky, Friedman, and Cabannes. Sean’s is more focused on the essentials and somewhat lighter on the maths. I have also found Sean’s to be better written.

I’ve always admired Sean for ignoring the unwritten dictum of academia that inaccessibility makes you more valuable, and for his enthusiasm in helping people understand physics, despite the fact that, 20 years ago, most senior academics considered this a waste of time. Today the situation is entirely different. I think Sean was one of the people who changed this attitude.

Saturday, September 24, 2022

What is "Nothing"?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Like most videos on YouTube, this is a video about nothing. But we’re a science channel, so we’ll talk about nine levels of nothing. What are the nine levels of nothing? Can you really make a universe from them? And if someone asks you why there is something rather than nothing, what’s a good answer? That’s what we will talk about today.

First things first, what do we mean by “nothing”? A first attempt to define nothing is to look at how we use the word in everyday language. Suppose your birthday is coming up and you say “Oh, I want nothing.” So when I give you a box for your birthday, you expect it to be empty. It’s nothing, in the sense that it doesn’t contain any objects. We will call this the level 1 nothing. It’s a pre-science nothing, the nothing you might refer to before you’ve ever heard of physics.

But of course, you have heard of physics, and so you know that even a box full of level one nothing still contains air, and air is made of something. You wanted nothing for your birthday, and certainly you’ll be disappointed to get air instead. Let’s therefore pump all the air out of this box. We’ll call what’s left the level 2 nothing. It’s what was called a vacuum in the 17th century, no objects, and no air either.

Okay you might say, but we don’t live in the 17th century, and when you said you want nothing for your birthday you really meant it. If we just pump out the air, there’s still the occasional cosmic ray inside, or neutrinos, or dark matter particles, if they exist. So, we go one step further and remove all types of matter, this gives us the level 3 nothing. Indeed, since objects and air are made of particles, removing particles includes the previous two nothings.

But even if the box is closed, there would still be radiation in the box, for example in the infrared, which is maybe not much, but it’s something. And the magnetic field of the earth would also still go through the box. Therefore, we now also remove all types of radiation and all fields. Because you wanted nothing for your birthday and of course I want you to be happy. Now we have a level four nothing: no particles, no radiation, no fields. What you have left then is what you could call the 21st century vacuum.

The level 4 nothing is, however, still something. For one thing, many physicists argue that the vacuum has an energy density and pressure, and associate this with the cosmological constant. As I explained in this earlier video, I think this doesn’t make sense; the cosmological constant is just a constant of nature which determines the curvature of empty space. Empty space just isn’t necessarily flat. Talking about the curvature of empty space as if it was energy density and pressure is just a weird interpretation of geometry.

Even leaving aside the cosmological constant, the 21st century vacuum isn’t nothing because in quantum field theories, like the standard model of particle physics, the vacuum contains virtual particles that are created in pairs but quickly destroy each other again. They come out of the vacuum and disappear back into it. Virtual particle pairs are like couples you’ve never heard of that pop up in your news feed, destroy each other, and disappear back into nothing. Except with maths.

We can’t directly measure virtual particles, that’s why they’re called virtual. But we can infer their presence because we can measure their influence on other particles. Or we could, if we hadn’t removed those from the box already.

For example, if we look at the energy levels of electrons around an atomic nucleus, these are slightly shifted in the presence of virtual particles. This can be measured, and it has been measured. That’s one way we know virtual particles exist.

You could argue that the phrase “virtual particle” is really just a name for a mathematical expression that we use to calculate measurement outcomes, and I would agree. But be that as it may, we can observe their effects and nothing has no effects so it’s got to be something. And you wanted nothing for your birthday, not a box full of virtual particles. Besides, virtual particles can sometimes become real, for example near black holes, so they can actually kick us back from level four to level three.

To get to level 5 nothing we therefore remove the twenty-first century vacuum too. Now we have neither virtual nor real particles nor radiation nor fields and there’s also no way that any of them can reappear from the vacuum. What’s left in the box now? Well. There’s still space and time in it. And time is money, and money is the root of all evil, and that’s a terrible joke, but still something rather than nothing.

This is why for level 6 of nothing, things get decidedly weird because we remove space and time, too. And just to make sure, we will also remove all other equations and laws of nature that might give rise to space and time, such as strings or quantum gravity, or whatever other idea you believe in. Remove all of it. At this point there is nothing left from our theories of physics.

So why is there any physics at all? This question is one of the reasons we’ll never have a theory of everything, because even the best theory can’t explain its own existence. Scientific explanations end at this level, and it’s probably where this video should end, but I admit I enjoy talking about nothing, so let’s see what else there is to say.

I have taken inspiration for this video from an essay by Robert Lawrence Kuhn. He also talks about it in this video. My first six levels of nothing are similar to his, though not exactly the same because I’ve looked at it from the perspective of a physicist. But Kuhn doesn’t stop there, he has three more levels of nothing.

Taking away everything physical still leaves us with something in your birthday box because you might grant the existence of non-physical entities. For example, some people believe in god, or other religious ideas, like the belief that consciousness is non-physical. In level 7 we remove those, too. Theological explanations end at this level. If you think that god necessarily has to exist then you have to get off the bus at level 7 and accept that the question why god exists doesn’t have an answer.

Is the box finally empty? Not quite. There’s still mathematics that could be said to exist in some sense. That is, we have abstract ideas and objects, numbers, sets, logic, truths and falsehoods, and the entire platonic world of ideals. For the 8th level of nothing, we remove those too.

Has this finally removed everything? Are you finally happy with your birthday gift? Well, there’s still the possibility that something comes into existence even if that something doesn’t exist. And a possibility is something in and by itself. So, for level 9, we also remove all possibilities. This is Kuhn’s final level of nothing. It’s the best nothing I can give you for your birthday. I hope you’re happy now.

The ninth level of nothing leaves us with the always interesting question whether the absence of something is also something, which is why philosophers like to discuss whether holes in cheese exist. Personally, I’m more interested in the cheese. I guess that’s why I’m a physicist and not a philosopher, but I found Kuhn’s classification of nothings useful because it explains why we sometimes talk past each other apropos of nothing.

For example, “inflation” is a currently popular theory in physics according to which our universe was created by a quantum fluctuation from a vacuum. We have no evidence that this is correct, but let us leave this aside for today, and just ask what kind of creation this would be if it was correct. The idea of inflation is that you have a big space that’s filled with a quantum vacuum, and every once in a while a quantum fluctuation succeeds in becoming so large that it begins to grow. Indeed, it grows into an entire universe like ours, with cheese, and holes in it, and all.

In such a vacuum there are many fluctuations, and therefore the creation of a universe doesn’t happen only once, it happens over and over again. It’s a type of multiverse called “eternal inflation”. We just talked about this some weeks ago. The beginning of our universe in this eternal inflation would be a creation from a level four nothing.

Physics can get you a little further than this because you can write down a theory in which space and time is created from a state without space and time. It’s arguably somewhat hard to imagine what this means, but you can certainly write down mathematics for it.

You see, I just define a symbol for a state without space and time, and an operator that creates space and time, then I let the operator act on the state, and voila, I’ve created space and time. Ok, I have oversimplified this a little, but basically this is how it works. I really think people are way too respectful of all the stuff that physicists made up and get away with just because their maths is incomprehensible.
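Schematically, and purely as illustrative notation rather than any particular theory, the maneuver looks like this:

```latex
\hat{C}\,\lvert \text{no spacetime} \rangle = \lvert \text{spacetime} \rangle
```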

Lawrence Krauss’ book “A Universe from Nothing” is about this idea of creating space and time from nothing. And this would be a creation from a level 5 nothing. But even if you don’t believe in God, a level 5 nothing is still something. To begin with it has the mathematics that give rise to all the rest.

If physics doesn’t answer the question why there is something rather than nothing, then what could? Philosophers have discussed that back and forth. I’m not much of a philosopher and have nothing worthwhile to add. That’s a ninth level nothing. But just in case someone stops you on the street and asks “why is there something rather than nothing”, let me tell you the three most popular answers that I have come across.

The most popular answer at the moment seems to be that nothing is absurd. It doesn’t make sense in and by itself and can’t be. It’s just a confusion of human language that we have inflicted on ourselves. The difficulty becomes apparent if you try to explain what nothing is, because any statement about it requires something. I mean, if I can talk about nothing, then nothing is the thing that I talk about, and it’s therefore something?

Another answer is that no explanation is needed, or there is no explanation. God made it, que sera, sera, please move on, nothing to see here. See what I did there?

A third answer might be that our universe, or at least any universe, is in some sense the best option, and nothing doesn’t live up to the requirement because nothing can’t be any good.

If someone asked me on the street why there is something rather than nothing, I’d probably just shrug. I can’t think of any way to answer the question, and I also don’t see what difference it would make if we could answer it. I mean, suppose someone came tomorrow with a 2000 page proof that something must exist, what would it be good for? I guess I could do a video about it.

More seriously, just because it’s not a question that I want to spend my time on doesn’t mean I think no one should. In fact, I am glad that we are not all interested in the same questions and I’m happy to leave this one to philosophers. Do you have an answer that I didn’t mention? Let me know in the comments.

Saturday, September 17, 2022

The New Meta-Materials for Superlenses and Invisibility Cloaks

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Meta is the Greek prefix for “after”, and “metaphysics” got its name because, in collections of Aristotle’s writing, it was the stuff that came literally “after” the physics. Metaphysics is concerned with some of the most important questions we face at this critical moment in human history. Questions like whether the holes in cheese exist, whether cheese exists, or whether only the atoms that make up the cheese exist.

But this is not what we’ll talk about today. This video is about metamaterials which, I assure you, have nothing to do with cheese. Though, maybe, a little bit. Metamaterials are the next technological stage “after” materials. It’s a research area that has progressed incredibly quickly in the past decade, with applications that include superlenses, invisibility cloaks, earthquake protection, and also chocolate. What are metamaterials, and what are they good for? That’s what we’ll talk about today.

First things first, what are metamaterials? A linguistic approach might lead you to think a metamaterial is what comes after the material, so I guess that’d be the bill. But that’s not quite right. A metamaterial has custom-designed micro-structures which give a material new properties. These micro-structures are typically arrays that resonate at specific frequencies, and that interact either with acoustic waves or with electromagnetic waves. This way, metamaterials can be used to control sound, heat, light, and even earthquakes.

This sounds pretty abstract, so let us start with a concrete example, the superlens.

When you take an image of an object, with your eyes or with a camera, you use a lens to collect light that reflects off the surface of the object. Lenses work by “refraction”, which means they change the angle at which the light travels. If an object is too close to the lens, the refraction can no longer converge the light. For this reason, you can’t take images of things that are too close to the lens.

But not all the light that reflects from an object gets away. The part that gets away is called the far field, but there is another part of the light called the near field, which stays near the surface of the object. The electromagnetic waves in the near field are oscillating like usual, but they don’t travel into the distance, they decay exponentially. It’s also called an “evanescent wave”.
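Schematically, and in generic textbook form (with z the distance from the surface and κ the decay constant, nothing specific to cameras):

```latex
E_{\text{far}}(z,t) \propto \cos(kz - \omega t), \qquad E_{\text{near}}(z,t) \propto e^{-\kappa z}\cos(\omega t)
```

The far field oscillates as it travels; the near field just oscillates in place and dies off exponentially.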

This figure shows how waves enter a medium at a surface, which is the red line. The top image is a normal, refracted wave, which continues traveling through space but the angle changes when it enters the medium. The bottom image shows an evanescent wave, which decays with distance from the surface. The evanescent waves contain tiny details of the structure of the object, but since they don’t reach the camera, those details are lost. And you can’t get the camera arbitrarily close to the object, because then you couldn’t refocus the light. And that’s a shame because you might not be able to count the hairs in my eyebrows after all.

But in 2000, the British physicist Sir John Pendry of Imperial College London found a way to use the information in the near field. He said, it’s easy enough: you just use a material that has a negative refractive index.

What does it mean for a material to have a negative refractive index? Normal materials don’t have this, but metamaterials can. When a ray of light enters a medium, the refractive indices of the two media relate the angles of the incoming and outgoing rays. This is called Snell’s law. If the refractive index of the medium is negative, then the continuation of the ray in the medium is also mirrored about the normal to the surface. So, it bends back toward the direction it came from. What would that look like?
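As a quick toy calculation (my own sketch, with made-up numbers, just to illustrate the sign flip in Snell’s law):

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). A negative n2 flips
    the refracted ray to the other side of the surface normal."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection, no refracted ray
    return math.degrees(math.asin(s))

print(refraction_angle(1.0, 1.33, 30.0))   # ordinary water: about +22 degrees
print(refraction_angle(1.0, -1.33, 30.0))  # negative index: about -22 degrees
```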

Well, as I said, stuff that we normally encounter in daily life doesn’t have a negative refractive index, so I can’t show you a photo. But we can illustrate what it would look like. You probably remember the “broken pencil” illusion. If you put a pencil half into a glass of water, then the part in the water appears shifted to the side. That’s because the light is refracted in the water, but the brain interprets the visual input as if the light travels in straight lines. If the water had a negative refractive index, then the lower part of the pencil wouldn’t just seem shifted, it’d also be reflected to the other side.

Aaron Danner had the great idea to use a raytracer to create a 3-d image of a pool filled with water that has a negative refractive index. Here is the image of the pool with normal water. And here is the image with the negative refractive index. The thing to pay attention to is those three black lines, which indicate the corner of the pool. You’d normally expect this to be out of sight, but since this strange water mixes refraction with reflection, you can now see it. If there were fish in the pool they’d appear to be floating on top of the water. Which, I don't know if you know this, but it’s not what a fish is supposed to do.

What’s this got to do with lenses? Well, remember that you need lenses to collect rays of light. But if you put a sheet of a medium with negative refractive index between two with normal refractive index, that’ll basically turn the light rays around and effectively focus them. It acts like a lens. And, here comes the important bit, this also works for the evanescent waves which usually get lost. They get focused too, and are prevented from decaying. This is why metamaterials with a negative refractive index can reach a resolution that’s impossible to reach with normal lenses.

A superlens was built for the first time in 2005 by researchers at UC Berkeley. Their lens was made of a silver sheet that was merely 35 nanometers thick. In this case, the structure of the material comes from oscillations in the electron density in the silver which amplifies the evanescent waves coming from the object. You have to put the object directly into contact with the silver surface for that to work.

This image (A) is a lithograph taken with a focused ion beam, so this is the control image. This image (C) is the optical control without superlens. And this one (B) is the superlens image. You can clearly see that the superlens image has a higher resolution. This graph D shows the difference in accuracy between imaging with the superlens, that’s the blue curve, compared to imaging without the superlens, that’s the red curve.

Though this jump in resolution might sound good, these lenses are rather impractical. You have to put the metamaterial directly into contact with whatever you want to image and then your camera on top. So it does away with selfie sticks, but unfortunately also ruins your makeup. This is why, last year, a group of researchers from Iran and Switzerland published a paper in Scientific Reports, in which they propose to use a metamaterial to turn the near field into a far field, so you can put your camera elsewhere.

They call this device a “hyperlens”, which to me sounds like a superlens that’s had too much coffee, but what they mean is a grid of aluminum nanorods that resonate at wavelengths in the visible part of the spectrum. For now, this is just a computer simulation, but the idea is that the resonance converts the evanescent modes into propagating modes, so then you can capture them elsewhere. The researchers claim that, at least in their numerical simulations, this structure can image biological tissues with a resolution of a tenth of the wavelength of the light. The resolution limit of conventional lenses is about a quarter of a wavelength.
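To put rough numbers on those limits (my own back-of-the-envelope, using the fractions above rather than figures from the paper): take green light with a wavelength of 500 nanometers.

```python
wavelength_nm = 500.0                 # green light, an illustrative choice
conventional_nm = wavelength_nm / 4   # quarter-wavelength limit of ordinary lenses
hyperlens_nm = wavelength_nm / 10     # claimed tenth-of-a-wavelength resolution
print(conventional_nm, hyperlens_nm)  # 125.0 vs 50.0 nanometers
```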


Let’s then talk about what’s probably the best known application of metamaterials, the invisibility cloak. You may have read the headlines a few years ago about this. Metamaterials make invisibility cloaks possible because with a negative refractive index you can bend light in the opposite direction to what normal materials do. This means that, at least in theory, with the right combination of materials and metamaterials, you can bend light around an object. This appears to us as if the object isn’t there, again because the brain assumes that light travels in straight lines.

This sounds pretty cool, and indeed scientists have some things to show, or maybe in this case it’s better to say *not* show. Early experiments in the mid-2000s mainly used microwaves. But in 2015, a team of researchers from China made an invisibility cloak that works in the infrared. In this figure (Figure 1f) you see how the light is redirected. They used several triangles of germanium and put them in a very precise geometric configuration so that it creates a hidden area inside. You might say that this isn’t much of a metamaterial, but it’s the same idea: you custom-design structures to redirect waves as you want. Into this hidden region they put a mouse (Figure 2b). Then they took an image with and without the cloak (Figures 4a and 4b). Half of the mouse is gone!

Invisibility cloaks in the visible part of the spectrum haven’t yet been made, but some semi-invisibility shields exist, for example this one from a company named Hyperstealth Corp. These don’t work by bending the light around objects, but by spreading the light in the horizontal plane. If you have a narrow object, then its image will be overpowered by the light coming from the sides of the object, which blurs out what is behind it. This works particularly well when the background is uniform. However, it’s not really an invisibility shield. The easiest way to build an invisibility shield is to put a camera behind you and project its image on a screen in front of you.

You can also use metamaterials to manipulate electromagnetic fields that are not in the optical range. For example, as I explained in this earlier video, the main problem with wireless power transfer is that power decreases very rapidly with distance from the sender. A “magnetic superlens”, however, could extend this reach.

That this works was shown in a paper by a group of American researchers in 2014. This figure shows the difference between wireless power transfer using a magnetic superlens compared to wireless power transfer through free space. On the y-axis, we have wireless power transfer efficiency, and on the x-axis, we have distance in meters. The solid black line represents wireless power transfer through free space, which drops quickly to near-zero values as distance increases.

The colored lines represent wireless power transfer with the use of a magnetic superlens made up of metamaterials. You see that at best you can extend the reach by a few centimeters. And notice that the efficiency is in all cases in the single digits. So, nice idea, but in practice it doesn’t make much of a difference.

Another type of wave you can manipulate are acoustic waves. Acoustic metamaterials aren’t really a new thing. Sound absorption foam like this one uses basically the same idea. It has a lot of tiny holes. So you see, it’s kind of like cheese. The holes make it very difficult for sound waves at certain frequencies to bounce back, which basically kills echo. If I wrap this around my head, you’ll hear the difference. Wrapping your head into one of those will generally improve your experience of the world, highly recommended.

Metamaterials are more sophisticated versions of this. You can for example design them so that they only absorb particular frequencies, this is called a sonic or phononic crystal. Another thing you can do is to reflect the signal back without spreading it out. This was done by a team of researchers from China and the USA in 2018. The material they used was just a plastic dish with a spiral structure that effectively changes the refractive index. They say an application could be to make vehicles easier to detect. Though I suspect that their metamaterial would sell better if it made a car less easy to detect.

You can also use acoustic metamaterials to build an acoustic type of superlens, which has been done for ultrasound, but it’s the kind of solution still looking for a problem. And, as you can guess, they are trying to build acoustic invisibility shields. This has been done, for example, underwater with ultrasound, which is great if you want to hide from dolphins. And in 2014, a group from Duke University used a pyramid with a special surface structure that makes it reflect sound as if it was an empty plane. Here is how this pyramid would look if you could see sound. The pyramid is hollow, so you can hide stuff inside. Maybe they’ve finally figured out what the Egyptians were up to?

Another application of metamaterials is earthquake protection. Just as you can use structures in materials to change how light and sound propagate, you can change the properties of the ground to change how seismic waves propagate. For this you embed structures around or under buildings so that seismic waves are diverted around the building. You basically make the building invisible to earthquakes.

For example, a group at MIT’s Lincoln Lab uses arrays of boreholes that are either filled or empty to redirect seismic waves. They haven’t actually built a real-world example, but they have made measurements on downscaled physical models and they have done computer simulations.

This image is an illustration of how seismic barriers could work in theory. The green squiggly lines are the surface waves, the blue squiggly lines are P-waves, and the black arrows are the S-waves. All these waves get partly redirected and diffused.

At least in a computer simulation, the cloaking effect is quite impressive, as you can see in this image from a 2017 paper. For this, they used data from a real earthquake, the Hector Mine earthquake that happened in Southern California in 1999. It had a magnitude of 7.1. The metamaterial barriers effectively reduced it to an earthquake of magnitude 4.5.
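To get a sense of how much of a reduction that is: magnitudes are logarithmic, and radiated seismic energy scales roughly as 10 to the power of 1.5 times the magnitude. Here is my own back-of-the-envelope estimate (not a number from the paper):

```python
m_original, m_reduced = 7.1, 4.5

# Radiated seismic energy scales roughly as 10**(1.5 * magnitude),
# so the ratio depends only on the magnitude difference.
energy_ratio = 10 ** (1.5 * (m_original - m_reduced))
print(f"{energy_ratio:.0f}")  # ~7900: several thousand times less energy in the shaking
```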

And just a few months ago, a group from China proposed another metamaterial to dampen seismic waves. They want to use steel embedded with cylinders of foam. Image A of this figure shows an aerial view of a seismic wave moving through unprotected soil – without protection, the wave moves without losing energy, exposing any infrastructure atop the soil to the full power of the seismic wave. In Image B, the metamaterial array effectively neutralizes the wave. Here you see the effectiveness of the metamaterial array from a side view – in Image A, the seismic wave travels across the surface uninterrupted, while in Image B, the metamaterial array dissipates the wave at Line C. The authors claim that their system can dampen seismic surface waves in the range of 0.1 to 20 Hertz with up to 85 percent efficiency.

And as promised, a tasty example to finish. A team of researchers from the Netherlands has created an edible metamaterial. It’s made of chocolate in multiple S-shaped pieces that make the chocolate more or less crunchy, depending on the direction you chew it. And if you think about it, YouTubers do this too when they cut breaths out of their videos and zoom back and forth in every other sentence. These structural changes affect how you travel through a video. So we’re really doing meta-videos.

Metamaterials have opened a whole new dimension to material design, and as you can see, they are well on the way to application already. We will certainly come back to this topic in the future, so if you want to stay up to date, don’t forget to subscribe.

Saturday, September 10, 2022

The Multiverse: Science, Religion, or Pseudoscience?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Why do physicists believe there are universes besides our own? I get a lot of questions about the idea that we live in this “multiverse”. Is it science, religion, pseudoscience, or just wrong? That’s what we’ll talk about today.

The topic of this video is covered in more detail in my new book Existential Physics. 

First things first, what’s a multiverse? You may guess that’s a new form of poetry, and you wouldn’t be entirely wrong. A multiverse is a collection of universes, either infinitely many or a number so large no one’s even bothered giving it a name. It’s an idea that has sprung up in some esoteric corners of theoretical physics and has, not so surprisingly, caught the imagination of science fiction authors, script-writers, and also the public. And it is poetic somehow, isn’t it, all those universes out there.

There isn’t just one multiverse but several different ones, so multiple multiverses, if you wish. The multiverse shouldn’t be confused with the metaverse, which is what universes evolve into when they’ve been fed enough Zuckerberg candy.

How many different multiverses do we have? Well, Brian Greene has written a book in which he lists 9 different ones, but you know how scientists are, the moment the book came out they jumped up to complain about what wasn’t on his list. And I can totally understand that. I mean, everyone knows that a list needs ten items. Nine is just not right. So let me just briefly run through the three types of multiverse that you most often hear about.

1. Many Worlds

Probably the best-known and least controversial type of multiverse is the many worlds interpretation of quantum mechanics. If you remember, in quantum mechanics we can make predictions only for probabilities. We can say, for example, a particle goes left or right, each with a 50 percent chance. But then, when we measure the particle, we find it either left or right, and then we know where it is with 100 percent confidence. So, when we have measured the particle, what happened to the other possible outcome?

In the most common interpretation of quantum mechanics, often called the Copenhagen Interpretation, the moment you make a measurement you just update your probabilities because you got new information. The possibilities which you didn’t observe disappear because now you know they didn’t happen. This is called the measurement update, or sometimes the reduction or collapse of the wave-function.
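As a toy illustration of this measurement update, here is a minimal sketch of the bookkeeping in code (my own illustration; needless to say, nature doesn’t run Python):

```python
import random

# Amplitudes for a particle that can go "left" or "right", 50 percent each.
state = {"left": 2 ** -0.5, "right": 2 ** -0.5}

def measure(state):
    """Pick an outcome with probability |amplitude|^2 (the Born rule),
    then update the state so the observed outcome has probability 1."""
    outcomes = list(state)
    weights = [abs(a) ** 2 for a in state.values()]
    result = random.choices(outcomes, weights=weights)[0]
    collapsed = {o: (1.0 if o == result else 0.0) for o in outcomes}
    return result, collapsed

outcome, state = measure(state)
print(outcome, state)  # e.g. left {'left': 1.0, 'right': 0.0}
```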

In the many worlds interpretation, in contrast, one postulates that all possible outcomes of an experiment happen, each in a separate universe. It’s just that we live in only one of those universes and never see the other outcomes.

Of course then you have to explain why we don’t spread over all universes like the outcomes of experiments do. Mathematically, this works the same way as the sudden update of the wave-function. This means that, as far as observations are concerned, many worlds is identical to standard quantum mechanics. The difference is what you believe it means.

If you believe in the many worlds interpretation, then every time a quantum object is measured, the universe splits into as many different universes as there were possible outcomes of the measurement. And this doesn’t just happen in laboratories. A measurement in quantum mechanics doesn’t require an apparatus. Anything that’s large enough can cause a “measurement”; that may be a Geiger counter, but also a banana, or, well, you. This means that measurements happen all the time and everywhere. They constantly create new universes, and more are being created as we speak. Which means more bananas! And more yous!

For example, each time the wave-function of a photon spreads out in all directions and then hits your eye, the universe splits. In some universes, the photon arrived in your eye. In others, it hit the wall next to you; in some it went right through your head. And this could be happening to all photons. So, in some universes, an elephant is standing in front of you and you don’t see it. It’s unlikely, but, well, it’s possible, and according to the many worlds interpretation anything that’s possible is also real. I hope you make friends with the invisible elephant. I think that would be nice.

2. Eternal Inflation

We don’t know how our universe began and maybe we will never know. We just talked about this the other week. But according to a presently popular idea called “inflation”, our universe was created from a quantum fluctuation of a field called the “inflaton”. This field supposedly fills an infinitely large space and our universe was created from only a tiny patch of that, the patch where the fluctuation happened.

But the field keeps on fluctuating, so there are infinitely many other universes fluctuating into existence. This universe-creation goes on forever, which is why it’s called eternal inflation. Eternal inflation, by the way, lasts forever into the future, but still requires a beginning in the past, so it doesn’t do away with the Big Bang issue.

In Eternal Inflation, the other universes may contain the same matter as ours, but in slightly different arrangements, so there may be copies of you in them. In some versions you became a professional ballet dancer. In some you won a Nobel Prize. In yet another one, you are a professional ballet dancer who won a Nobel Prize and dated Elon Musk. And they’re all as real as this one.

Where did this inflaton field go that allegedly created our universe? Well, physicists say it has fallen apart into the particles that we observe now, so it’s gone and that’s why we can’t measure it. Yeah, that is a little sketchy.

3. The String Theory Landscape

String theory is an approach to a unification of gravity with the other forces of nature. Or maybe I should say it was, because it’s rapidly declined in popularity in the past decade. Why? It just didn’t lead anywhere.

String theorists originally hoped that one day it’d be possible to use their theory to calculate the values of the constants of nature, such as the masses of elementary particles and the strength by which they interact and so on. This didn’t work, so they gave up and just postulated that any value is possible. And since they couldn’t explain why we only observe a specific set of values they declared that they all exist.

And so this gives you another version of the multiverse. This collection of universes with all possible values for the constants of nature is called the string theory landscape. It contains universes with different types of matter or that have other laws of nature. For example, in some of them gravity is much weaker than it is in our universe. In some, radioactive decay happens much faster. And some universes expand so quickly that stars can’t form. If you believe in the string theory landscape, this isn’t just theoretically possible, it all actually happens.

You can combine these multiverses in any way you wish. So you can get married to Elon Musk hopping around at half the strength of gravity, with elephants in the room which you coincidentally can’t see. If you believe in the multiverse, then you have to believe this is possible.

There are some other multiverses which I didn’t talk about, like Max Tegmark’s mathematical universe in which all mathematics supposedly exists, or the simulation hypothesis, according to which our universe is a computer simulation. Because if you can simulate our laws of nature, why not simulate some others too? I don’t want to go through all the different multiverses because they all have the same problem.

The issue with all those different multiverses is that they postulate the existence of something you can’t observe, which is those other universes. Not only can you not see them, you can’t interact with them in any way. They are entirely disconnected from ours. There is no possible observation that you could make to infer their presence, not even in principle.

For this reason, postulating that the other universes exist is unnecessary to explain what we do observe, and therefore something that a scientist shouldn’t do. Making an unnecessary assumption is logically equivalent to postulating the existence of an unobservable god, or a flying spaghetti monster, or an omniscient dwarf who lives in your wardrobe. Fine if you do it in private, not so fine if you publish papers about it.

But. This does not mean that other universes do not exist. It merely means that science doesn’t say anything about whether or not they exist. If you postulate that they do not exist, that’s also unnecessary to explain what we observe, and therefore equally unscientific.

So now what, is the multiverse unscientific or pseudoscience or religion? Well, depends on what you do with it.

If you assume that unobservable universes exist and write papers about them, then that’s pseudoscience. Because this is exactly what we mean by pseudoscience: it pretends to be science but isn’t. If you accept that science doesn’t say anything about the existence of those other universes one way or another, and you just decide to believe in them, then that’s religion. Either way, multiverses are not science. They’re like Tinker Bell, basically: they exist if you believe in them.

You might find this whole multiverse idea rather silly. And I wouldn’t blame you. But some physicists are quite serious about it. They believe these other universes exist because they show up in their mathematics. You see, they have mathematics, and some of that describes what we observe. And then they claim therefore everything else that their mathematics describes must also exist. They are confusing mathematics with reality.

There are some standard “objections” that physicists always try on me. You have probably heard some of them too, so here’s how you can deal with them.

Objection 1: Black Holes

The first point that multiverse fans always bring up is that we say that the inside of a black hole exists, even though we can’t observe it. But that’s just wrong: You can observe the inside of a black hole, you just can’t come back to tell us what you observed. Besides, we know that black holes evaporate, so they eventually reveal their inside.

Objection 2: Cosmic Horizon

Second objection that I hear is that we can only observe a patch of our own universe because light needs time to travel, and it’s got only so far since the Big Bang. But certainly no one would say that therefore the universe stops existing outside of the part we can observe. No of course not. No one says if you can’t observe it, it doesn’t exist. The point is: if you can’t observe it, science says nothing about whether it exists or not.

Objection 3: Observable Multiverses

The third standard objection is that some physicists have tried to come up with cases in which the presence of other universes would be observable. For example, there has been the idea that another universe could have collided with ours in the past, leaving a specific pattern in the cosmic microwave background. Or our universe could have been entangled with another one. So, the Nobel-Prize-winning ballet dancer isn’t married to this Elon Musk but has a quantum connection to an Elon Musk in another universe. Again, this would leave a specific pattern in the CMB.

The answer to this objection is that people have looked for these patterns in the CMB, and they are just not there. But to be fair, the testable multiverse models are a different problem than the one I named above. The big problem with multiverse ideas is that physicists mistake mathematics for reality. The problem with the testable multiverse ideas is that their proponents think just because a hypothesis is testable, it is also scientific. This is not what Popper meant. He said if it isn’t testable, it isn’t science. Not “if it’s testable, then it’s science”.

Objection 4: It’s simple

The fourth and final objection is that the multiverse is good because it’s a simple theory. You see, multiverse fans argue that if you don’t make assumptions about what the values of the constants of nature are, but just say “they all exist,” then you have fewer assumptions in your theory. And a simpler theory is better, because Occam’s razor and all.

But look, if that argument was correct, then the best theory would be one with no assumptions at all. There’s just a little problem with that, which is that such a theory doesn’t explain anything. I mean, it literally isn’t a theory, it’s nothing. Just saying that it’s simple doesn’t make a scientific theory a good one. For a theory to be good, it still has to describe what we observe. Telling my hair to “please stay put” may be simple, but it doesn’t make for a good hair day.

And that’s exactly what happens in those multiverse theories: they’re too simple to be good for anything. If you don’t specify the values of the constants of nature, then you just can’t make predictions. To be fair, I would agree it’s simpler to not make predictions than to make them, but even in physics you can’t publish predictions you didn’t make. At least not yet. Which is why multiverse physicists always end up making assumptions for the values of those constants.

They don’t always do this directly, sometimes they instead postulate probability distributions from which they derive likely values of the constants. But that’s more difficult than just using the constants and certainly not simple.

Same issue with the many worlds interpretation. Those who work on it claim that their theory is simpler than standard quantum mechanics because it just doesn’t use the measurement update. But if you don’t update the wave-function upon measurement, then that just doesn’t describe what we observe. We don’t observe dead-and-alive cats; that was Schrödinger’s whole point.

Therefore, you have to add other assumptions to many worlds, about what a detector is and how the universes split and so on, which for all practical purposes amounts to the same as updating the wave-function. In most cases these prescriptions are actually more complicated than the measurement update. So multiverse theories are either simple but don’t make predictions, or they make predictions but are more complicated than the generally accepted theories.

Let me finish by saying I am not against the multiverse or poetry. I would like to apologize to all the poets watching this. It’s not like I think science is the only thing that matters. You may find the multiverse inspirational, or maybe comforting, or maybe just fun to talk about. And there’s nothing wrong with that – please enjoy your stories and articles and videos about the multiverse. But don’t mistake it for science.

Saturday, September 03, 2022

The Trouble with 5G

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Did you know that there are now more mobile devices in the world than people? Whether you knew this or not, you probably did know that mobile phones work with the next best thing to magic, which is physics. All that data flies through the air in the form of electromagnetic radiation. And since video resolution will soon be high enough for you to check whether I’ve plucked my eyebrows, wireless networks constantly have to be upgraded.

The fourth generation of wireless networks, 4G for short, is now being extended to 5G, and 6G is in planning. But 5G interferes with the weather forecast, and 6G brings more problems. What’s new about those wireless networks, and what’s the problem with them? That’s what we’ll talk about today.

The first four generations of wireless networks used frequency bands from 400 MHz to roughly 2 GHz. But there are limits to the amount of information you can transfer over a channel with a limited bandwidth. In fact there’s maths for this; it’s called the Shannon-Hartley theorem.

If you want to transfer more information through a channel with a fixed noise level, you have to increase either the bandwidth or the power. You can’t increase the 4G bandwidth, and there are safety limits on the power. The fifth generation tries to circumvent the problem by using new bands at higher frequencies, going up to about 50 GHz.
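To see what the Shannon-Hartley theorem says quantitatively, here’s a quick illustration with made-up but plausible numbers (these aren’t the actual channel specifications):

```python
import math

def capacity_bps(bandwidth_hz, snr):
    """Shannon-Hartley: C = B * log2(1 + S/N)."""
    return bandwidth_hz * math.log2(1 + snr)

# At the same signal-to-noise ratio, ten times the bandwidth
# buys ten times the maximum data rate.
print(capacity_bps(20e6, 100) / 1e6)   # ~133 Mbit/s for a 20 MHz channel
print(capacity_bps(200e6, 100) / 1e6)  # ~1332 Mbit/s for a 200 MHz channel
```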

Those frequencies correspond to wavelengths in the millimeter range, which is why they’re called millimeter waves. There’s a reason they haven’t previously been used for telecommunication, and it’s not because millimeter waves are also used as good-byes for in-laws. It’s because radiation at the previously used frequencies passes through obstacles largely undisturbed, unless maybe the obstacle is a mountain. But millimeter waves can get blocked by trees or buildings, which isn’t great if you like calling people who aren’t within line of sight. “Could you pass me the salt, please? Thank you so much.”
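By the way, the conversion between frequency and wavelength is just the speed of light divided by the frequency; at the top of the new bands:

```latex
\lambda = \frac{c}{f} = \frac{3\times 10^{8}\,\mathrm{m/s}}{50\times 10^{9}\,\mathrm{Hz}} = 6\,\mathrm{mm}
```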

So the idea of 5G is to collect the signals from nearby mobile phones in what’s called a small cell of the network, and pass them on at low power to a bigger antenna that sends them long-distance at higher power. The 5G network technology is currently being rolled out in most of the developed world. Cisco estimates that by next year 10 percent of mobile connections will use 5G.

5G is controversial because it’s the first generation to use millimeter waves, and the health effects have not been well studied. I already talked about this in a previous video, but let me be clear that I have no reason to think that 5G will have any adverse health effects. To the extent that research exists, it shows that millimeter waves will, at high power, warm up tissue, and that’s pretty much it.

However, the studies that have been done leave me wanting. Last year, one of the Nature journals published a review on 5G mobile networks and health. They looked at 107 experimental studies that investigated various effects on living tissue including genotoxicity, cell proliferation, gene expression, cell signaling, etc.

The brief summary is that none of those studies found anything of concern. However, this isn’t the interesting part of the paper. The interesting part is that the authors rated the quality of these 107 studies. Only two were rated well-designed, and only one got the maximum quality score. One. Out of 107.

The others all had significant shortcomings, anything from lack of blinding to small sample sizes to poor control of environmental parameters. In fact, the authors’ conclusion is not that five G is safe. Their conclusion is: “Given the low-quality methods of the majority of the experimental studies we infer that a systematic review of different bioeffects is not possible at present.”

Now, as I said, there’s no reason to think that five G is harmful. Indeed, there’s good reason to think it’s not, because millimeter waves have been used in medicine for a long time and for all we know they only enter the upper skin layers.

But I am a little surprised that there aren’t any good studies on the health effects of long-term radiation exposure in this frequency range. The 5G network has been in planning since 2008. That’s 14 years. That’s longer than it takes NASA to fly to Pluto!

So scientists say there’s nothing to worry about. Well, they also said that smoking is good for you, that alcohol doesn’t cross the placenta, and that copies of you live in parallel universes. As a scientist myself, I can confirm that scientists say a lot when the day is long, and I would much rather see data than just take their word for it. The only good thing I have to tell you on the matter is that the World Health Organization is working on its own review of the health risks of five G, which is supposed to come out by December.

Ok, while we wait to hear what the WHO says about the idea of irradiating much of the world population with millimeter waves, let’s talk about a known side-effect of five G: It’s a headache for atmospheric scientists, that’s meteorologists but also climate scientists. And yes, that means 5G affects the weather forecast.

You see, among the most important data that goes into weather and climate models is the amount of water vapor in the atmosphere. This is measured from satellites. This movie shows the average amount of water vapor in a column of atmosphere in a given month measured by NASA's Aqua satellite. Accurate measurements of the atmospheric water content are essential for weather forecasts.

You measure the amount of water vapor by measuring electromagnetic radiation that is emitted by the water molecules in the atmosphere. Each molecule emits radiation in particular frequency ranges and that allows you to count how many of those molecules there are. It’s the same method that’s used to detect phosphine in the atmosphere of Venus which we talked about in more detail in this earlier video. The frequency that satellites use to look for water is – you guessed it! – 23.80 Giga Hertz (Table 1, first line).

The issue is now that this water vapor signal is uncomfortably close to one of the 5G bands, which covers the range from 24 point 25 to 27 point 5 Giga Hertz. You might say that’s still about four hundred fifty Mega Hertz away from the water vapor measurements, and that’s right. But the five G band doesn’t abruptly stop at a particular frequency, it’s more that it tapers off. The emission outside of the assigned band is called leakage. That leakage creates noise. And this noise is the problem.

You see, the weather forecast today sensitively depends on data quality. In recent decades, weather forecasting has improved a lot. In this figure you see how much more you can trust the weather forecast today than you could a few decades ago. In 1980, a three-day forecast in the Northern Hemisphere was only correct about 85 percent of the time.

Today it’s correct more than 98 percent of the time. And this isn’t just about deciding whether to bring an umbrella, it’s relevant to warn people of dangerous weather events. A 72-hour hurricane warning today is more accurate than a 24-hour warning was 40 years ago.

The reasons for this improvement have been better computers, better models, but also weather satellites that collect more and better data. And that brings us back to the water vapor signal and the 5G troubles.

The water vapor signal is weak and the strongest contribution usually comes from low altitudes. That’s right, typically the biggest fraction of water vapor in the atmosphere isn’t in the clouds but close to the ground. If you took all the water in the atmosphere and put it on the ground you’d get about 2.5 cm. The clouds alone make up a mere tenth of a millimeter.

Those are average values and the details depend on the weather situation, but in any case it means that the water vapor signal is very sensitive to noise near the ground. It’s like trying to hear a whisper in a noisy room. To make matters worse, most of the 5G noise will come from densely populated areas so we’ll get the least accurate forecast where people actually live.

Meteorologists are not happy. This is partly because they had to put away the crystal balls. But the bigger reason is that in 2019, the National Oceanic and Atmospheric Administration in the United States, NOAA for short, did an internal study on the impacts of 5G. In a federal hearing, Neil Jacobs, a former NOAA Administrator, said that the current 5G regulations “would degrade the forecast skill by up to 30 percent, so, if you look back in time to see when our forecast skill was roughly 30 percent less than it was today, it’s somewhere around 1980. This would result in the reduction of hurricane track forecast lead time by roughly 2 to 3 days.”

The Secretary-General of the World Meteorological Organization, Petteri Taalas, is also concerned. He said: “Potential effects of this could be felt across multiple impact areas including aviation, shipping, agricultural meteorology and warning of extreme events as well as our common ability to monitor climate change in the future.” His organization calls for strict limits on the 5G leakage.

But well, as they say, there are two sides to every story. On the other side is for example Brad Gillen, executive vice president of the CTIA, that’s a trade association which represents the wireless communications industry in the United States.

He wrote a blogpost for the CTIA website claiming that the effect of five G on the weather forecast is “an absurd claim with no science behind it.” He says the study done by NOAA used an obsolete sensor that is not in operation. Then he pulls the China card: “The Trump Administration has already made its call, and it is time we all get on the same page as China and our other rivals most certainly are today.”

That wasn’t the end of the story. The atmospheric scientist Jordan Gerth from the University of Wisconsin-Madison pointed out that the reason the NOAA study mentioned a sensor that isn’t being used on satellites today is that this particular design was cancelled. It was, however, replaced by a very similar one, so the argument is a red herring.

In response, a different CTIA guy wrote another blogpost claiming that “5G traffic will be hundreds of megahertz away from the band used in weather data collection”, so he completely ignores the leakage problem and hopes his readers don’t know any better. On the other hand, NOAA didn’t publish their study, and that didn’t win them any favors either.

However, in 2020, researchers from Rutgers University did their own study. They modeled the leakage of five G into the water vapor signal and evaluated its impact on a weather forecast by using old data. They did a mock 12 hour forecast, one without 5G and then two with different levels of leakage power.

As you can see in these figures, they found that the 5G leakage can affect the forecast by up to zero point nine millimeters in precipitation and one point three degrees Celsius in temperature at two meters altitude. And it’s not just the value that changes but also the location. That’s a significant difference which would indeed degrade weather forecast accuracy noticeably. Maybe not as dramatic as the NOAA guy claimed, but certainly of concern.

What has happened since? In July 2021, the US Government Accountability Office released a report in which they just said that the arguments about the impact of 5G on the weather forecast were “highly contentious.” Despite the lack of consensus, the official US position became to adopt fairly weak rules on the power leakage. They were then adopted by the International Telecommunication Union, which is based in Geneva, Switzerland, and which writes the global rules.

But most countries in the European Union so far just haven’t auctioned off the troublesome frequency band. Maybe they’re waiting to see how things pan out in the USA, the guinea pig of countries.

And then there’s six G, the 6th generation of wireless networks. This is already being planned, and it’s supposed to use bands at even higher frequencies, above one hundred Giga Hertz and up into the Tera Hertz range. Six G is supposed to usher in the metaverse era with augmented and virtual reality and ultrahigh-definition video so we can finally watch live streams of squirrel feeders from New Zealand on our contact lenses.

According to the tech site Lifewire, “6G is just the natural progression towards faster and better wireless connectivity…Ultimately, whether it’s with 6G, 7G, or another “G”, we’ll have such incredibly fast speeds that no progress bars or wait times will be required for any normal amount of data, at least at today’s standards. Everything will just be available...instantly.” And who would not like that?

But of course the 6G range, too, is being used by scientists for measurements that could be compromised. For example, NASA measures ozone around 236 GigaHertz, and carbon monoxide at about 230.5 GigaHertz. So we can pretty much expect to see the entire 5G discussion repeat for 6G.

How can the situation be solved? For 5G, the World Meteorological Organization is trying to negotiate limits with the regulating agencies in different countries. They demand that cell towers operating close to weather satellite frequencies should be limited to transmit at minus 55 dBW (Decibel Watt) for out-of-band emission, so that’s the leakage.

The European Commission has agreed on –42 decibel watts for 5G base stations. The FCC in the US set a limit at –20 decibel watts. This is a logarithmic scale where every 10 decibels are a factor of ten, so the FCC limit is 35 decibels, more than a factor of three thousand, above the limit the meteorologists ask for.
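If you want to check the decibel arithmetic yourself, here’s a minimal sketch with the three limits quoted above:

```python
def dbw_to_watts(dbw):
    # dBW is a logarithmic power unit: power in watts = 10^(dBW / 10).
    return 10 ** (dbw / 10)

limits = {"WMO request": -55, "European Commission": -42, "FCC": -20}  # dBW

for name, dbw in limits.items():
    print(f"{name}: {dbw} dBW = {dbw_to_watts(dbw):.2e} W")

ratio = dbw_to_watts(limits["FCC"]) / dbw_to_watts(limits["WMO request"])
print(f"FCC limit / WMO request: factor {ratio:.0f}")
# -> factor 3162, that is 35 decibels or 3.5 orders of magnitude
```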

What do we learn from this? When a new technology is developed, scientists usually get there first. And when everyone else catches up, they’ll interfere with the scientists, often metaphorically but sometimes literally.

This isn’t a new story of course. You only have to worry about noise from railways if you have railways and there are actually trains running on them. But a high-tech society also relies on the accuracy of data, so this is a difficult trade-off. There are no easy ways to decide what to do, but I think everyone would be better off if the worries from scientists were taken more seriously in the design stage and not grudgingly acknowledged halfway through a global roll-out.

Saturday, August 27, 2022

We don't know how the universe began, and we will never know

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Did the universe come out of a black hole? Will the big bang repeat? Was the universe created from strings? Physicists have a lot of ideas about how the universe began, and I am constantly asked to comment on them. In this video I want to explain why you should not take these ideas seriously. Why not? That’s what we’ll talk about today.

The first evidence that the universe expands was discovered by Edwin Hubble who saw that nearby galaxies all move away from us. How this could happen was explained by none other than Albert Einstein. Yes, that guy again. His theory of general relativity says that space responds to the matter and energy in it by expanding.

And so, as time passes, matter and energy in the universe become more thinly diluted on average. I say “on average” because inside of galaxies, matter doesn’t dilute but actually clumps and space doesn’t expand. But in this video we’ll only look at the average over the entire universe.

So we know that the universe expands and on average matter in it dilutes. But if the universe expands today, this means if we look back in time the matter must have been squeezed together, so the density was higher. And a higher density of matter means a higher temperature. This tells us that in the early universe, matter was dense and hot. Really hot. At some point, matter must have been so hot that atoms couldn’t keep electrons around them. And even earlier, there wouldn’t even have been individual atomic nuclei, just a plasma of elementary particles like quarks and gluons and photons and so on. It’s like the alphabet soup of physics.

And before that? We don’t know. We don’t know because we have never tested what matter does at energy densities higher than those which the Large Hadron Collider can produce.

However, we can just ignore this difficulty, and continue using Einstein’s equations further back in time, assuming that nothing changes. What we find then is that the energy density of matter must once have been infinitely large. This is a singularity and it’s where our extrapolation into the past breaks down. The moment at which this happens is approximately thirteen point seven billion years in the past and it’s called the Big Bang.

The Big Bang didn’t happen at any particular place in space, it happened everywhere. I explained this in more detail in this earlier video.

Now, most physicists, me included, think that the Big Bang singularity is a mathematical artifact and not what really happened. It probably just means that Einstein’s theory stops working and we should be using a better one. We think that’s what’s going on, because when singularities occur in other cases in physics, that’s the reason. For example, when a drop of water pinches off a tap, then the surface curvature of the water has a singular point. But this happens only if we describe the water as a smooth fluid. If we took into account that it’s actually made of atoms, then the singularity would go away.

Something like that is probably also why we get the Big Bang singularity. We should be using a better theory, one that includes the quantum properties of space. Unfortunately, we don’t have the theory for this calculation. And so, all that we can reliably say is: If we extrapolate Einstein’s equations back in time, we get the Big Bang singularity. We think that this isn’t physically correct. So we don’t know how the universe began. And that’s it.

Then how come you constantly read about all those other ideas for how the universe began? Because you were sitting around at your dentist’s and had nothing else to do. Ok, but why do physicists put forward such ideas when the answer is that we just don’t know? Like, you may have recently seen some videos about how our universe was allegedly born from a black hole.

The issue is that physicists can’t accept the scientifically honest answer: We don’t know, and leave it at that. Instead, they change the extrapolation back in time by using a different set of equations. And then you can do all kinds of other things, really pretty much anything you want.

But wait, this is science, right? You don’t just get to make up equations. Unless possibly you’re decorating a blackboard for a TV crew. Though, actually, I did this once and later they asked me what those equations were and I had to tell them they don’t mean anything, which was really embarrassing. So even in this case my advice would be, you shouldn’t make up equations. But in cosmology they do it anyway. Here’s why.

Suppose you’re throwing a stone and you calculate where it falls using Newton’s laws. If I give you the initial position and velocity, you can calculate where the stone lands. We call the initial position and velocity the “initial state”, and the equation by which you calculate what happens the “evolution law”.

You can also use this equation the other way round: if you know the final state, that is, the position and velocity at the moment the stone landed, you can calculate where it’s been at any time in between, and where it came from. It’s kind of like when my kids have chocolate all over their fingers, I can deduce where that came from.
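To make the initial state and evolution law concrete, here’s a minimal sketch with a thrown stone; the numbers are made up for illustration. The point is that running the same law backwards from the final state recovers the initial state:

```python
G = 9.81  # gravitational acceleration in m/s^2

def evolve(x, y, vx, vy, t):
    # Evolution law: Newtonian projectile motion without air resistance.
    return (x + vx * t,
            y + vy * t - 0.5 * G * t**2,
            vx,
            vy - G * t)

# Initial state: position (m) and velocity (m/s) of the stone at the throw.
initial = (0.0, 2.0, 12.0, 8.0)

# Run the evolution law forward for 1.5 seconds to get the final state...
final = evolve(*initial, 1.5)

# ...then run the same law backward from the final state.
recovered = evolve(*final, -1.5)

print(final)      # where the stone ends up
print(recovered)  # the initial state again, up to floating-point rounding
```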

Okay, but you didn’t come here to hear me talk about stones, this video was supposedly about the universe. Well, in physics all theories we currently have work this way, even the one for the entire universe. The equations are more complicated, alright, but we still have an initial state and an evolution law. We put in some initial state, calculate what it would look like today, and compare that with our observations to see if it’s correct.

But wait. In this case we can only tell that the initial state and the equations *together give the correct prediction for the final state. How can we tell that the equations alone are correct?

Let’s look at the stone example again. You could throw many stones from different places with different initial velocities and check that they always land where the equations say. You could also, say, take a video of the flight of the stone and check that the position at any moment agrees with the equations. I don't think that video would kill it on TikTok, but you never know, people watch the weirdest shit.

But in cosmology we can’t do that. We have only one universe, so we can’t test the equations by changing the initial conditions. And we can’t take any snapshots in between because we’d have to wait 13 billion years. In cosmology we only have observations of the final state, that is, where the stone lands.

That’s a problem. Because then you can take whatever equation you want and use it to calculate what happened earlier. And for each possible equation there will be *some earlier state that, if you use the equation in the other direction, will agree with the final position and velocity that you observed. So it seems like in cosmology we can only test a combination of initial state and equation but not find out what either is separately. And then we can’t say anything about how the universe began.

That sounds bad. But the situation isn’t quite as bad for two reasons.

First: the equations we use for the entire universe have been confirmed by *other observations in which we *can use the standard scientific methods. There are many experiments which show that Einstein’s equations of General Relativity are correct, for example redshift in the gravitational field, or the perihelion precession of Mercury, and so on. We then take these well-confirmed equations and apply them to the entire universe.

This, however, doesn’t entirely solve the problem. That’s because in cosmology we use further assumptions besides Einstein’s equations. For example, we use the cosmological principle about which I talked in an earlier video, or we assume that the universe contains dark matter and dark energy and so on. So, saying that we trust the equations because they work in other cases doesn’t justify the current cosmological model.

But we have a second reason which *does justify it. It’s that Einstein’s equations together with their initial values in the early universe provide a simple *explanation for the observations we make today. When I say simple I mean simple in a quantitative way: you need few numbers to specify it. If you used a different equation, then the initial state would be more complicated. You’d need to put in more numbers. And the theory wouldn’t explain as much.
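As a toy illustration of what “simple means few numbers” looks like (my example, not part of the cosmology): two models can both reproduce the same data, but one of them needs far fewer numbers to specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Observations today": data generated from a simple law plus noise.
x = np.linspace(0, 10, 50)
data = 2.0 * x + 1.0 + rng.normal(0, 0.5, x.size)

# Model A needs 2 numbers; model B needs 10 numbers. Both fit the data.
simple_fit = np.polyfit(x, data, deg=1)
complex_fit = np.polyfit(x, data, deg=9)

print(f"Simple model: {len(simple_fit)} parameters")
print(f"Complex model: {len(complex_fit)} parameters")
# The simple model is the better explanation: it accounts for the same
# observations while requiring far fewer input numbers.
```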

Just think of the equations as a kind of machine. You put in some assumptions about how the universe began, do the maths, and you get out a prediction for what it looks like today. This is a good explanation if the prediction agrees with observations *and the initial state was simple. The simpler the better. And for this you only need the observations from today, you don’t need to wait some billion years. Unless of course you would like to. You know what? Let's just wait together.

Okay. How about you wait, and we talk again in 10 billion years.

While you wait, the cosmologists who aren’t patient enough justify using one particular equation and one particular initial state by showing that this *combination is a simple explanation in the sense that we can calculate a lot of data from it. The simplest explanation that we have found is the standard model of cosmology, which is also called LambdaCDM, and it’s based on Einstein’s equations.

This model explains for example how our observations of the cosmic microwave background fit together with our observations of galactic filaments. They came out of the same initial distribution of matter, the alphabet soup of the early universe. If we used a different equation, there’d still be some initial state, but it wouldn’t be simple any more.

The requirement that an explanation is simple is super important. And it’s not just because otherwise people fall asleep before you’re done explaining. It’s because without it we can’t do science at all. Take the idea that the Earth was created 6000 years ago with all dinosaur bones in place because god made it so. This isn’t wrong. But it isn’t simple, so it’s not a scientific explanation. Evolution and geology, in contrast, are simple explanations for how those dinosaur bones ended up where they are. I explained this in more detail in my new book Existential Physics which has just appeared.

That said, let us then look at what physicists do when they talk about different ideas for how the universe began. For this, they change the equations as we go back in time. Typically, the equations are very similar to Einstein’s equations at the present time, but they differ early in the universe. And then they also need a different initial state, so you might no longer find a Big Bang. As I said earlier, you can always do this, because for any evolution law there will be some initial state that will give you the right prediction for today.

The problem is that this makes a simple explanation more complicated, so these theories are not scientifically justifiable. They don’t improve the explanatory power of the standard cosmological model. Another way to put it is that all those complicated ideas for how the universe began are unnecessary to explain what we observe.

It’s actually worse. Because you might think we just have to wait for better observations and then maybe we’ll see that the current cosmological model is no longer the simplest explanation. But if there was an earlier phase of the universe that was indeed more complicated than the simple initial state that we use today, we couldn’t use the scientific method to decide whether it’s correct or not. The scientific method as we know it just doesn’t cover this case. Science fails!

Sure, making better observations can help us improve the current models a little more. But eventually we’ll run into this problem that more complicated explanations are always possible, and never scientifically justified.

So what’s with all those ideas about the early universe? Here’s one that’s been kind of popular recently, an idea that was put forward by Nikodem Poplawski. For this, you change general relativity by adding new terms to the equations called torsion. This removes the big bang singularity and replaces it with a bounce. Our universe then came out of a bottleneck that’s quite similar to a black hole, just without the singularity. Can you do this? You can certainly do this in the sense that there’s maths for it. But on that count you can do many other things. Like broccoli. There’s maths for broccoli. So why not make the universe out of broccoli?

I know this sounds crazy, but there are a lot of examples for this, like Penrose’s cyclic cosmology that we talked about some months ago. Or the ekpyrotic universe which starts with a collision of higher dimensional membranes. Or the idea that we came out of a 5-dimensional black hole which made headlines a few years ago. Or the idea that the universe began with a gas of strings which seems to never have been particularly popular. Or the no-boundary proposal which has it that the universe began with only space and no time, an idea put forward by Jim Hartle and Stephen Hawking. Or geometrogenesis, which is the idea that the universe began as a highly connected network that then lost most of its connections and condensed into something that is indistinguishable from the space we inhabit. And so on. 

Have you ever wondered how come there are so many different ideas for the early universe? It’s because by the method that physicists currently use, there are infinitely many stories you can invent for the early universe.

The physicists who work on this always come up with some predictions for observables. But since these hypotheses are already unnecessarily complicated anyway, you can make them fit to any possible observation. And even if you’d rule out some of them, there are infinitely many others you could make up.

This doesn’t mean that these ideas are wrong. It just means that we can’t tell if they’re right or wrong. My friend Tim Palmer suggested calling them ascientific. When it comes to the question of how the universe began, we are facing the limits of science itself. It’s a question I think we’ll never be able to answer. Just like we'll never be able to answer the question of why women pluck off their eyebrows and then paint them back on. Some questions defy answers.

So if you read yet another headline about some physicist who thinks our universe could have begun this way or that way, you should really read this as a creation myth written in the language of mathematics. It’s not wrong, but it isn’t scientific either. The Big Bang is the simplest explanation we know, and that is probably wrong, and that’s it. That’s all that science can tell us.

Saturday, August 20, 2022

No Sun, No Wind, Now What? Renewable Energy Storage

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Solar panels and wind turbines are great – so long as the sun shines and the wind blows. What if they don’t? You could try swearing at the sky, but that might attract your neighbor’s attention, so I’ll talk about the next best option: storing energy. But how? What storage do we have for renewable energy, how much do we need, how expensive is it, and how much does it contribute to the carbon footprint of renewables? That’s what we’ll talk about today.

I’ve been hesitating to do a video about energy storage because in all honesty it doesn’t sound particularly captivating, unless possibly you are yourself energy waiting to be stored. But I changed my mind when I learned the technical term for a cloudy and windless day. Dunkelflaute. That’s a German compound noun: “dunkel” means “dark” and “Flaute” means “lull”. So basically I made an entire video just to have an excuse to tell you this. But while you’re here we might as well talk about the problem with dunkelflaute…

The renewable energy source that currently makes the largest contribution to electricity production is hydropower with about 16%. Wind and solar together contribute about 9%. But this is electric energy only. If you include heating and transport in the energy needs, then all renewables together make it to only 11%. That’s right: We still use fossil fuels for more than 80% of our entire energy production.

The reason that wind and solar are so hotly discussed at the moment is that in the past two decades their contribution to electricity production has rapidly increased while the cost per kilo Watt hour has dropped. This is not the case for hydropower, where expansion is slow and costs have actually somewhat increased in the past decade. This isn’t so surprising: Hydropower works very well in certain places, but those places were occupied long ago. Solar and wind, in contrast, still have a lot of unused potential, and this is why many nations put their hopes on them.

But then there’s the dunkelflaute and its evil brother, cold dunkelflaute. That’s when the sun doesn’t shine and the wind doesn’t blow, and that happens in the winter. It’s a shame there aren’t any umlauts in the word, otherwise it’d make a great name for a metal band.

It’s no coincidence that Germans in particular go on about this because such weather situations are quite common in Germany. The German weather service estimates that on average twice each year, the power production from wind and solar in Germany drops below 10% of the expected average for at least 2 days. Every once in a while these situations can last a week or longer.

Of course this isn’t an issue just in Germany. This figure shows the average monthly hours of dunkelflaute for some European countries. As you can see, they are almost all in the winter. A recent paper in Nature Communications looked at how well solar and wind can meet electricity demand in 42 countries. They found that even with optimistic extension scenarios and technology upgrades, no country would be able to avoid the problem.

The color in this figure indicates the maximum reliability that can be achieved without storage. The darker the color, the worse the situation. As you can see, without storage it would be basically impossible to meet the demand reliably anywhere with wind and solar alone. Even Australia which reliably gets sunshine can’t eliminate the risk, and Europe is more at risk than North America.

The situation might actually be worse than that because climate change might weaken the wind in some places and make dunkelflaute a more frequent visitor. That’s because part of the global air circulation is driven by the temperature gradient between the equator and the poles. The poles heat up faster than the equator, which weakens the gradient. What this’ll do to the wind isn’t clear – the current climate models aren’t good enough to tell. But maybe, just maybe, banking on stable climate patterns is not a good idea if the problem you’re trying to address is that the climate changes. Just a thought.

Ok, so how can we deal with the dunkelflaute problem? There are basically two options. One is better connectivity of the power grid, so that the risk can be shared between several countries. However, this can be difficult because neighboring countries often have similar weather conditions. A recent study by Dutch researchers found that even connecting much of Europe wouldn’t eliminate the risk. And in any case, this leaves open the question whether countries who don’t have a problem at the time could possibly cover the demand for everyone else. I mean, the energy still has to come from somewhere.

And then there’s the problem that multi-national cooperation doesn’t always work as you want. Instead of being dependent on gas from Russia we might just end up being dependent on solar power from Egypt.

The other way to address the problem is storing the energy until we need it. First, some technical terms: The capacity of energy storage is measured in Watt hours. It’s the power that the battery can supply multiplied by the discharge time until it’s empty. For example, a battery system with an energy capacity of 20 Giga Watt hours can supply 5 Giga Watts for 4 hours before it’s empty. This number alone doesn’t tell you how long you can store energy until it starts leaking; that’s something else to keep in mind.

At the moment, the vast majority of energy storage is pumped hydro which means you use the energy you don’t need to pump water up somewhere, and when you need the energy, you let the water run back down and drive a turbine with it. Currently more than 90 percent of energy storage is pumped hydro. Problem is, there are only so many rivers in the world and to pump water up a hill you need a hill, which is more than some countries have. Much of the increase in storage capacity in the past years comes from lithium ion batteries. However, they still only make a small contribution to the total.

To give you a sense of the problem: At present we have 34 Giga Watt hours of energy storage capacity worldwide, not including pumped hydro. If you include pumped hydro, it’s 2 point 2 Tera Watt hours. We need to reach at least 1 Peta Watt hour, that’s about 500 times as much as the total we currently have. It’s an immense challenge.
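Here’s a minimal sketch of the storage arithmetic, using the numbers quoted above:

```python
# Energy capacity = power times discharge time, so a 20 Giga Watt hour
# battery system supplying 5 Giga Watts runs dry after 4 hours.
capacity_wh = 20e9  # 20 GWh, in Watt hours
power_w = 5e9       # 5 GW, in Watts
print(f"Discharge time: {capacity_wh / power_w:.0f} hours")

# The gap between what we have and what we need:
current_total_wh = 2.2e12  # 2.2 TWh, including pumped hydro
target_wh = 1e15           # 1 PWh
print(f"Scale-up needed: factor {target_wh / current_total_wh:.0f}")  # ~455
```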

So let us then have a look at how we could address this problem, other than swearing at the sky and at your neighbor and at the rest of the world while you’re at it. All energy storage systems have the same basic problem: if you put energy into storage, you’ll get less out. This means, if we combine an energy source with storage, then the efficiency goes down.

Pumped hydro which we already talked about has an efficiency between 78 percent and 82 percent for modern systems and can store energy for a long time. The total cost of this type of storage varies dramatically depending on location and the size of the plant, but has been estimated to be between 70-350 dollars per kilo Watt hour of energy storage.

Pumped hydro is really remarkable, and at least for now it’s the clear winner of energy storage. For example, in Bath County, Virginia, they store 24 Giga Watt hours this way. But pumped hydro also has its problems, because for some regions of the world, including the United States, climate change brings more drought, and you can’t pump water if you don’t have any.

A similar idea is what’s called a “gravitational energy battery”, which is basically pumped hydro but with solids. You pile concrete blocks on top of each other to store the gravitational energy, and when you let the blocks back down, you run a dynamo with it. Fun, right? These systems are very energy efficient, about 90%, and they store energy basically indefinitely. But they’re small compared to the enormous amounts of water in a reservoir.

The Swiss company EnergyVault is working on the construction of one such plant in China which they claim will have 100 Mega Watt hours of energy storage capacity. So, nice idea, but it isn’t going to make much of a difference. I totally think they should get a participation trophy for the effort, but keep in mind we need to reach 1 Peta Watt hour. That’d be about 10 million such plants.

A more promising approach is compressed air energy storage or liquefied air energy storage. As the name suggests, the idea is that you compress or liquefy air, put it aside, and if you need energy, you let the air expand to drive a generator. The good thing about this idea is that you can do it pretty much everywhere.

The efficiency has been estimated to lie between 40 and 70 percent, though it drops by about zero point two percent per day due to leakage, and that’s the optimistic estimate. The costs lie between 50 and 150 dollars per kilo Watt hour, so that’s a little less than pumped hydro and actually pretty good. This one gets the convenience award. The McIntosh Power Plant in Alabama is a very large one, with a capacity of almost three Giga Watt hours.

Another option is thermal energy storage. For this you heat a material, isolate it, and then when you need the energy you use the heat to drive a turbine, or you use it directly for heating. You can also do this by cooling a substance, then it’s called cryogenic energy storage.

The problem with thermal energy storage is that the efficiency is quite low; it typically ranges from only 30 percent to 60 percent. And since no insulation is perfect, the energy gets gradually lost. But being imperfect and losing energy is something we’re all familiar with, so this one gets the sympathy award.

In this video we’re looking into how to store solar and wind energy, but it’s worth mentioning that some countries use thermal energy storage to store heat directly for heating which is much more efficient. The Finnish company Helen Oy, for example, uses a cavern of 300 thousand cubic meters to store warm seawater in the summer which gives them about 11.6 Giga Watt hours. That’s a lot, and the main reason is that it’s just a huge volume.

As I mentioned previously, most of the expansion in energy storage capacity in the past decade has been in lithium-ion batteries. This one’s the runner-up after pumped hydro. They have a round trip efficiency of 80 to 95 percent, and a lifetime of up to 10 years.

But we currently have only a little more than 4 Giga Watt hours in lithium ion batteries, that’s a factor 500 less than what we have in pumped hydro. It isn’t cheap either. The cost in 2018 was estimated at about 469 dollars per kilo Watt hour. It’s expected to decrease to about 360 in 2025, but this is still much more expensive than liquefied air.

And then there’s hydrogen. Sweet, innocent hydrogen. Hydrogen has a very low round trip efficiency, between 25 and 45 percent, but it’s popular because it’s really cheap. The costs have been estimated at 2 to 20 dollars per kilo Watt hour, depending on where and how you store it. So even the most expensive hydrogen storage is ten times less expensive than lithium ion batteries. In total numbers however, we currently have very little hydrogen storage. In 2017 it was about 100 Mega Watt hours. I suspect though that this is going to change very quickly, and I give hydrogen the cheap-is-neat award.

Those are the biggest energy storage systems to date but there are a few fun ones that we should mention, for example flywheels. Contrary to what the name suggests, a flywheel is neither a flying wheel nor a gymnastic exercise for people who like being wheeled away in ambulances, but it’s basically a big wheel that stores energy in its rotation and keeps spinning because angular momentum is conserved.

Those flywheels only store energy up to 20 Mega Watt hours for a couple of minutes, so they’ll not solve the dunkelflaute problem. But they can reach efficiencies up to 95 percent, which is quite amazing really. They also don’t require much maintenance and have very long lifetimes, so they can be useful as short-term storage buffers.

There are also ultracapacitors which store electric energy like capacitors, just more of it. They have a high efficiency of 85-95 percent, but can store only small amounts of energy, and are ridiculously expensive, up to 60,000 dollars per kilo Watt hour. 

The difficulty of finding good energy storage technologies drives home just how handy fossil fuels are. Let me illustrate this with some numbers. A kilogram of gasoline gives you about 13 kilo Watt hours, a kilogram of coal a little less, about 8 kilo Watt hours. A lithium ion battery gives you only 0 point 2 kilo Watt hours per kilogram. A kilogram of water at one kilometer altitude is just 2.7 Watt hours, that’s almost another factor of a hundred less.

On the other hand, 1 kilo gram of Uranium 235 gives you 24 Giga Watt hours. And one kilogram of antimatter plus the same amount of matter would produce 25 Tera Watt hours. 25 Tera Watt hours! With a ton of it we would cover the electric needs of the whole world for a year.
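The water and antimatter numbers are easy to check from first principles; here’s a minimal sketch:

```python
G = 9.81             # gravitational acceleration in m/s^2
C = 2.998e8          # speed of light in m/s
JOULES_PER_WH = 3600.0

# 1 kilogram of water raised 1 kilometer: potential energy E = m*g*h.
water_wh = 1.0 * G * 1000.0 / JOULES_PER_WH
print(f"Water at 1 km: {water_wh:.1f} Wh per kilogram")  # about 2.7 Wh

# The mass-energy of 1 kilogram, E = m*c^2; the 25 Tera Watt hours
# quoted above corresponds to one kilogram of annihilated mass.
mass_energy_twh = 1.0 * C**2 / JOULES_PER_WH / 1e12
print(f"1 kg of mass-energy: {mass_energy_twh:.0f} TWh")  # about 25 TWh
```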

Okay, so we have seen energy storage isn’t cheap and it isn’t easy, and we need a lot of it, fast. In addition, putting energy into storage and getting it back out inevitably lowers the efficiency of the energy source. This already doesn’t sound particularly great, but does it at least help with the carbon footprint? After all, you have to build the storage facility and you need to get those materials from somewhere, and if it doesn’t last long you have to recycle it or rebuild it.

A paper in 2015 from a group of American researchers found that carbon dioxide emissions resulting from storage are substantial when compared to the emissions from electricity generation, ranging from 104 to 407 kilo gram per Mega Watt hour of delivered energy.

This number probably doesn’t tell you anything, so let me put this in context. Coal releases almost a ton of carbon dioxide per Mega Watt hour. But the upper limit of the storage range is very close to the lowest estimate for natural gas. And remember that you have to add the storage on top of the carbon dioxide emissions from the renewables. Plus, the need to store the energy makes them less efficient.

In the case of lithium-ion batteries, the numbers strongly depend on how well you can recharge the batteries, that is, how many cycles they survive. According to a back-of-the-envelope estimate by the chemical engineer Robert Rapier, for 400 cycles the emissions are about 330 kilogram carbon dioxide per Mega Watt hour, but assuming 3000 cycles the number goes down to 70 kilogram per Mega Watt hour.

A few thousand cycles seem possible for current batteries if you use them well. This estimate roughly agrees with a report that was published about two years ago by the Swedish Environmental Research Institute. So this means, depending on how often you use the batteries, the carbon footprint is somewhere between solar and natural gas.

How big the impact of storage is on the overall carbon dioxide emissions of wind and solar then depends on how much, how often and for how long you put energy into storage. But so long as it’s overall a small fraction of days this won’t impact the average carbon-dioxide emissions all that much.

Let’s put in some numbers. A typical estimate used in the literature is that you’d put energy into storage on something like 10% of days. If you take this, and one of the middle-of-the-pack values for the storage footprint, and assume the storage is 80 percent efficient, then the carbon footprint of wind would increase from about 10 to about 30 kilogram per Mega Watt hour and that of solar from about 45 to about 65. So, they are both clearly still much preferable to fossil fuels, but the need to store them also makes nuclear power increasingly look like a really good idea.
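Here’s a minimal sketch of that arithmetic; the 200 kilogram per Mega Watt hour storage footprint is my own middle-of-the-pack assumption from the 104 to 407 range quoted earlier, not a number from any specific study:

```python
def footprint_with_storage(generation_kg_per_mwh, stored_fraction=0.1,
                           efficiency=0.8, storage_kg_per_mwh=200.0):
    # Average CO2 footprint per MWh delivered when a fraction of the
    # energy takes a round trip through storage.
    direct = (1 - stored_fraction) * generation_kg_per_mwh
    # The stored share needs 1/efficiency as much generation because of
    # round-trip losses, plus the storage's own footprint on top.
    via_storage = stored_fraction * (generation_kg_per_mwh / efficiency
                                     + storage_kg_per_mwh)
    return direct + via_storage

print(f"Wind: {footprint_with_storage(10):.0f} kg CO2 per MWh")   # ~30
print(f"Solar: {footprint_with_storage(45):.0f} kg CO2 per MWh")  # ~66
```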

What do we learn from this? At least for me the lessons are that first, it makes sense to use naturally occurring opportunities for storage. Our planet has a lot of water, and, unlike me, water has a high heat capacity. Gravitational energy doesn’t leak, location matters, and storing stuff underground increases efficiency. Second, liquid air storage has potential. And third, there’s a lot of energy in uranium 235.

Did you come to different conclusions? Let us know in the comments, we want to hear what you think.