
Saturday, October 08, 2022

Cold Fusion is Back (there's just one problem)

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Cold fusion could save the world. It’d be a basically unlimited, clean source of energy. It sounds great. There’s just one problem: it’s not working. Indeed, most physicists think it can’t work even in theory. And yet, the research is making a comeback. So, what’s going on? What do we know about cold fusion? Is it the real deal, or is it pseudoscience? What’s cold fusion to begin with? That’s what we’ll talk about today.

If you push two small atomic nuclei together, they will form a heavier one. This nuclear fusion releases an enormous amount of energy. There’s just one problem: Atomic nuclei all have a positive electric charge, so they repel each other. And they do so very strongly. The closer they are, the stronger the repulsion. It’s called the Coulomb barrier, and it prevents fusion until you get the nuclei so close together that the strong nuclear force takes over. Then the nuclei merge, and boom.
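For orientation, here’s the standard textbook estimate of how high that barrier is for two deuterons (this isn’t from the video, it’s just the Coulomb potential of two elementary charges, evaluated at the few-femtometer distance where the strong force takes over):

```latex
% Coulomb barrier between two deuterons (charge +e each),
% evaluated where the strong nuclear force becomes relevant, r ~ 2 fm:
V(r) = \frac{e^2}{4\pi\epsilon_0\, r} \approx \frac{1.44~\mathrm{MeV\,fm}}{r}
\quad\Rightarrow\quad V(2~\mathrm{fm}) \approx 0.7~\mathrm{MeV}
```

Compare that to thermal energies at room temperature, which are a few hundredths of an electron volt, and you see why “cold” fusion sounds impossible at first.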

The sun does nuclear fusion with its enormous gravitational pressure. On earth, we can do it by heating a soup of nuclei to enormous temperatures, or by slamming the nuclei into each other with lasers. This is called “hot nuclear fusion”. And that indeed works. There’s just one problem: At least so far, hot fusion eats up more energy than it releases. We talked about the problems with hot nuclear fusion in this earlier video.

But nuclear fusion is possible at far lower energy, and then it’s called cold fusion. The reason this works is that atomic nuclei don’t normally float around alone but have electrons sitting in shells around the nucleus. These electrons shield the positive charges of the nuclei from each other and that makes it easier for the nuclei to approach each other.

There’s just one problem: If the atoms float around freely, the electron shells are really large compared to the size of the nucleus. If you bring these nuclei close together, then their electron shells will be much farther apart than the nuclei. So the electron shells don’t help with the fusion if the nuclei just float around.

One thing you can do is strip off the electrons and replace them with muons. Muons are basically heavier versions of electrons, and since they are heavier, their shells are closer to the nucleus. This shields the electric fields of the nuclei better from each other and makes nuclear fusion easier. It’s called “muon catalyzed fusion”.

Muon catalyzed fusion was theoretically predicted already in the 1940s and successfully done in experiments in the 1950s. It’s cold fusion that actually works. There’s just one problem: muons are unstable. They must be produced with particle accelerators and those take up a lot of energy. The muons then get mostly lost in the first fusion reaction so you can’t reuse them. There’s a lot more to say about muon catalyzed fusion, but we’ll save this for another time.

There’s another type of “cold fusion” that we know works, which is actually a method for neutron production. For this you send a beam of deuterium ions into a metal, for example titanium. Deuterium is a heavy isotope of hydrogen. Its nucleus is a proton with one neutron. At first, the beam just deposits a lot of deuterium in the metal. But when the metal is full of deuterium, some of those nuclei fuse. These devices can be pretty small. The piece of metal where the fusion happens may just be a few millimeters in size. Here is an example of such a device from Sandia Labs which they call the “neutristor”.

The major reason scientists do this is because the fusion releases neutrons, and they want the neutrons. It’s not just because lab life is lonely, and neutrons are better than no company. Neutrons can also be used for treating materials to make them more durable, or for making radioactive waste decay faster.

But the production of the neutrons is quite an amazing process. That’s because the beam of deuterium ions which you send into this metal typically has an energy of only 5-20 kilo electron Volts. But the neutrons you get out have almost a thousand times more energy, in the range of a few Mega electron Volts. It’s often called “beam-target fusion” or “solid-state fusion”. It’s a type of cold fusion, and again we know it works.

There’s just one problem: The yield of this method is really, really low. It’s only about one in a million deuterium nuclei that fuse, and the total energy you get out is far less than what you put in with the beam. So, it’s a good method to produce neutrons, but it won’t save the world.

However, when physicists studied this process of neutron production, they made a surprising discovery. When you lower the energy of the incoming particles, the fusion rates are higher than theoretically expected. Why is that? The currently accepted explanation is that the lattice of the metal helps shield the charges of the deuterium nuclei from each other. So, it lowers the Coulomb barrier, and that makes it more likely that the nuclei fuse when they’re inside the metal. This isn’t news, physicists have known about this since the 1980s.
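A common way to describe this shielding (a generic sketch, not a formula from any of the papers mentioned here) is to subtract a “screening energy” from the bare Coulomb potential:

```latex
% Electron screening inside the metal lattice, modeled as an effective
% reduction of the Coulomb barrier by a screening energy U_e:
V_{\mathrm{eff}}(r) \approx \frac{e^2}{4\pi\epsilon_0\, r} - U_e
```

The larger the screening energy, the lower the effective barrier, and the higher the tunneling probability at a given beam energy, which is how the enhanced fusion rates at low energies are usually parameterized.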

But if putting the deuterium into metal reduces the Coulomb barrier, maybe we can find some material in which it’s lowered even further? Maybe we can lower it so far that we create energy with it? This idea had been brought up already in the 1920s by researchers in the US and Germany. And it’s what Pons and Fleischmann claimed to have achieved in their experiment that made headlines in 1989.

Pons and Fleischmann used a metal called palladium. The metal was inside a tank of heavy water, so that’s water where the normal hydrogen is replaced with deuterium. Pons and Fleischmann then ran a current through the palladium and the heavy water. They claimed this created excess heat, so more than what you’d get from the current alone. They also said they’d seen some decay products of fusion reactions, notably neutrons and tritium. Everyone was very excited.

There was just one problem... Other laboratories were unable to reproduce the claims. It probably didn’t help that Pons and Fleischmann were both chemists, while nuclear fusion has traditionally been the territory of physicists. And physicists largely think that chemical reactions simply cannot cause nuclear fusion because the typical energies that are involved in chemical processes are far too low.

A few groups said they’d seen something similar to Pons and Fleischmann, but the findings were inconsistent, and it remained unclear why it would sometimes work and sometimes not. By the early nineties, the Pons and Fleischmann claim was largely considered debunked. Soon enough, no scientist wanted to touch cold fusion because they were afraid it would damage their reputation. The philosopher Huw Price calls it the “reputation trap”. In fact, while I was working on this video, I was warned that I, too, would be damaging my reputation.

Of course not everyone just stopped working on cold fusion. After all, it might save the world! Some carried on, and a few tried to capitalize on the hope.

One such case is that of Andrea Rossi, who already in the 1970s said he knew how to build a cold fusion device. In 1998, the Italian government shut down his company on charges of tax fraud and dumping toxic waste into the environment. In the mid-1990s, Rossi moved to the USA, and by 2011 he claimed to have a working cold fusion device that produced excess heat.

He tried to patent it, but the international patent office rejected the application arguing that the device goes “against the generally accepted laws of physics and established theories”. A rich Australian guy offered $1 million to Rossi if he could prove that the device produces net power. Rossi didn’t take up the offer and that’s the last we heard from him. There’s more than one problem with that.

In 2019, Google did a research project on cold fusion and they found that the observed fusion rate was 100 times higher than theoretically expected. But it wasn’t enough to create excess heat.

The allure of cold fusion hasn’t entirely gone away. For example, there are two companies in Japan, Technova Inc. and Clean Planet Inc, which claim to have produced excess heat. Clean Planet Inc has a very impressive roadmap on their website, according to which they’ll complete a model reactor for commercial application next year. There’s just one problem: No one has seen the world-saving machine, and no one has reproduced the results.

The people who still work on cold fusion have renamed it to “Low Energy Nuclear Reactions”, LENR for short. Part of the reason is that “cold” isn’t particularly descriptive. I mean, these devices may be cold compared to the interior of the sun, but they can heat up to some hundred degrees Celsius, and maybe that’s not everybody’s idea of cold. But no doubt the major reason for the rebranding is to get out of the reputation trap. So make no mistake, LENR is cold fusion reborn.

I admit that this doesn’t sound particularly convincing. But I think it’s worth looking a little closer at the details. First of all, there are two separate measurements that cold fusion folks usually look at. That’s the production of decay products from the nuclear fusion, and the production of excess heat.

An experiment that tried to shed light on what might be going on comes from a 2010 paper by a group in the United States. They used a setup very similar to that from Fleischmann and Pons but in addition they directed a pulsed laser at the palladium with specific frequencies. They claimed to see excess power generation for specific pulse frequencies, which suggests that phonon excitations have something to do with it. There’s just one problem: a follow-up experiment failed to replicate the result.

Edmund Storms, who has been working on this for decades, published a paper in 2016 claiming to have measured excess heat in a device that’s very similar to the original Pons and Fleischmann setup. In this figure you see how the deuterium builds up in the palladium, that’s the red dots, and the amount of power that Storms says he measured.

He claims that the reason that these experiments are difficult to reproduce is that the nuclear reactions happen at appreciable rates only in some regions of the palladium which have specific defects that he calls nano-cracks. These could be caused by the treatment of the metal, so some samples have them and others don’t, and this is why the experiments sometimes seem to work and sometimes not. At least according to Storms. There’s just one problem: No one’s been able to replicate his findings.

There is also a 2020 paper from the Japanese company Clean Planet Inc., which I already mentioned. They use a somewhat different setup with nanoparticles of certain metals that are surrounded by a gas that contains deuterium. The whole thing is put under pressure and heated. They claim that the resulting temperature increase is higher than you’d expect and that their device generates net power. In this figure you see the measured temperature increase in their experiment with helium gas and with a gas that contains deuterium. The helium gas serves as a control. As you see there’s more heating with the deuterium. There’s just one problem: No one’s been able to replicate this finding.

The issue with these heat measurements is that they’re incredibly difficult to verify. For this reason it’s much better to look at the decay products. Those are in and by themselves mysterious. In a typical nuclear fusion reaction, there is a very specific amount of energy that’s released, and so the energy distribution of the decay products is very sharply peaked. In deuterium fusion, the neutrons in particular should have an energy of 2.45 MeV. In those cold fusion reactions, however, they see a fairly broad distribution of neutron energies and at higher energies than expected.
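For reference, the 2.45 MeV comes from simple two-body kinematics of the neutron branch of deuterium fusion (standard textbook numbers, nothing specific to the experiments discussed here):

```latex
% Neutron branch of D-D fusion. For deuterons fusing essentially at rest,
% momentum conservation fixes how the released energy Q is shared:
\mathrm{D} + \mathrm{D} \;\to\; {}^{3}\mathrm{He} + n, \qquad Q \approx 3.27~\mathrm{MeV}
E_n \approx Q\,\frac{m_{^{3}\mathrm{He}}}{m_{^{3}\mathrm{He}} + m_n}
    \approx 3.27~\mathrm{MeV}\times\frac{3}{4} \approx 2.45~\mathrm{MeV}
```

A sharp line at that energy is what you expect if two essentially resting deuterons fuse; a broad spectrum at higher energies is not.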

Here is an example. The red bars show the number of neutrons as a function of energy, the black ones are the background. As you can see the spectrum looks nothing like the expected peak at about 2.5 MeV. Something is going on and we don’t know what. Forget saving the world for a moment, it’s much simpler, there’s an observation that we don’t understand.

In a recent paper, a group from MIT has put forward two different hypotheses that could explain why nuclear fusion happens more readily in certain metals than you’d naively assume. One is that there are some unknown nuclear resonances which can become excited and make fusion easier. The other one is that the lattice of the metal facilitates an energy transfer from the deuterium to some of the palladium nuclei. So then you have excited palladium nuclei, and those decay. Since the palladium nuclei have more decay channels than are typical for fusion outputs, this can explain why the energy distribution looks so weird. There’s just one problem: We don’t know whether that’s actually correct.

What are we to make of this? The major reason cold fusion has been discarded as pseudoscience is that most physicists think it can’t possibly be that chemical processes cause nuclear reactions. But I think they overestimate how much we know about both nuclear physics and chemistry.

Nuclear physics is dominated by the strong nuclear force which holds quarks and gluons together so that they form neutrons and protons. The strong nuclear force has the peculiar property that it becomes weaker at high energies. This is called asymptotic freedom. Arvin Ash recently did a great video about the strong nuclear force, so check this out for more details.

The Large Hadron Collider pumps a lot of energy into proton collisions. This is why understanding the strong nuclear force in LHC collisions is quite simple, by which I mean a PhD in particle physics will do. The difficult part comes after the collisions, when the quarks and gluons recombine into protons, neutrons, and other bound states such as pions and rhos and so on. It’s called hadronization, and physicists don’t know how to calculate this. They just extract the properties of these processes from data and parameterize them.

I am telling you this to illustrate that just because we understand the properties of the constituents of atomic nuclei doesn’t mean we understand atoms. We can’t even calculate how quarks and gluons hold together.

Another big gap in our understanding is material properties, because we often can’t calculate electron bands. That’s especially true for materials with irregularities that, according to Storms, are relevant for cold fusion. Indeed, if you remember, calculating material properties is one of those questions that physicists want to put on a quantum computer exactly because we can’t currently do the calculation. So, is it possible that there is something going on with the nuclei or electron bands in those metals that we haven’t yet figured out? I think that’s totally possible.

But, let me be honest, I find it somewhat suspicious that the power production in cold fusion experiments always just so happens to be very close to the power that goes in. I mean, there isn’t a priori any reason why this should be the case. If there is nuclear fusion going on efficiently, why doesn’t it just blow up the lab and settle the case once and for all?

So, well, I am extremely skeptical that we’ll see a working cold fusion device in the next couple of years. But it seems to me there’s quite convincing evidence that something odd is going on in these devices that deserves further study.

I’m not the only one who thinks so. In the past couple of years, research into cold fusion has received a big funding boost, and that’s already showing results. For example, in 1991, a small group of researchers proposed a method to produce palladium samples that generate excess heat more reliably. And, I hope you’re sitting down: research groups at NASA and at the US Navy have recently been able to reproduce those results.

A project at the University of Michigan is trying to reproduce the findings by the Japanese companies. The Department of Energy in the United States just put out a call for research projects on low energy nuclear reactions, and also the European Research Council has been caught in the act of supporting some cold fusion projects.

I think this is a good development. Cold fusion experiments are small and relatively inexpensive and given the enormous potential, it’s worth the investment. It’s a topic that we’ll certainly talk about again, so if you want to stay up to date, don’t forget to subscribe. Many thanks to Florian Metzler for helping with this video.

Wednesday, October 05, 2022

The First Ever Episode of Science News Without the Gobbledygook

One thing I miss about the blogging days is the ability to comment on current events on short notice. It's much harder with video than in writing. This is why, on my YouTube channel, we now have a weekly Science News episode.


I hope we will all have some fun with this :o)

I have 6 other people involved in the production of this weekly news show (it's more difficult than you might think). We will only be able to continue with this if we find sponsors, and we will only find sponsors if we have sufficiently many views. That is to say, if you like our science news and would like them to continue, please help us spread the word!

Saturday, October 01, 2022

Can we make flying "Green"?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



“Flight shaming” is a social movement that originated in Sweden a few years ago. Its aim is to discourage people from flying because it’s bad for the environment. But let’s be honest, we’re not going to give up flying just because some Swedes think we should. I mean, we already shop at IKEA, isn’t that Swedish enough?

But seriously, maybe the flight shamers have a point. If aliens come to visit us one day, how are we supposed to explain this mess? Maybe we should indeed try to do something about airplane emissions. What are airlines doing anyway, isn’t it their job? What are the technological options, and will any of them give you a plausible excuse if flight shamers come for you? That’s what we’ll talk about today.

Flying may be good for watching four movies in a row, but it really isn’t good for the planet. It’s the third biggest contribution to carbon emissions from individuals, after having children and driving a car. Altogether, flying currently accounts for around 2 point 5 percent of global carbon dioxide emissions, that’s about a billion tons each year. 81 percent of this comes from passenger flights, and 60 percent of that, so about half of the total, comes from international flights.

Most of the flights, not so surprisingly, come from high-income countries. If flying were a country itself, it would rank sixth in carbon dioxide emissions, and it would congratulate the new British Prime Minister by reminding her that “The closest emergency exit may be behind you.”

The total carbon dioxide emissions from flying have been increasing steeply in the past decades, but the relative contribution has remained stable at 2 point 5 percent. This is partly because everybody is emitting more with everything, but also because planes have become way more fuel-efficient. Planes consume today about half as much fuel as they did in the mid-1960s.

Carbon dioxide emissions are not the only way that flying contributes to climate change. It also adds some other greenhouse gasses, and it creates clouds at high altitude that trap heat. But in this video, we’ll focus on the carbon dioxide emissions because that’s the biggest issue, right after the length of this video.

There are four ways that airlines are currently trying to reduce their carbon emissions, that’s electric planes, hydrogen, biofuel, and synthetic fuel. We’ll talk about each of those starting with electric planes.

1. Electric planes

The idea of electric planes is pretty obvious, charge a battery with a carbon neutral source of energy, use the battery to drive a propeller, try not to fall out of the sky. Then you can partly recharge the battery when you’re landing.

And at first sight it does sound like a good idea. In 2016, the Swiss aircraft Solar Impulse 2 completed its first trip around the world. Its wings, with a wingspan comparable to that of a Boeing 747, are covered with solar cells.

A Boeing 747 typically flies at about one thousand kilometers per hour and carries 500 or so people. The Solar Impulse carries two, and it flies at about 70 kilometers per hour. At that speed it would take about 4 days to get from Frankfurt to New York, which requires more in-flight entertainment than the CDC recommends.
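If you want to check the “about 4 days” yourself, here’s a minimal back-of-the-envelope calculation; the Frankfurt to New York distance of roughly 6,200 kilometers is my assumption, the speeds are the ones quoted above:

```python
# Rough flight-time comparison: Boeing 747 vs Solar Impulse 2.
# The Frankfurt -> New York distance (~6,200 km) is an assumed round number.
distance_km = 6200

speed_747_kmh = 1000            # typical 747 cruise speed, as quoted above
speed_solar_impulse_kmh = 70    # Solar Impulse 2 cruise speed, as quoted above

print(f"747:           {distance_km / speed_747_kmh:.1f} hours")                 # ~6 hours
print(f"Solar Impulse: {distance_km / speed_solar_impulse_kmh / 24:.1f} days")   # ~3.7 days
```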

You might think the issue is the solar panels, but the bigger problem is that electric batteries are heavy, and you don’t need to be Albert Einstein to understand that something that’s supposed to fly better not be heavy.

One way to see the problem is to compare batteries to kerosene, which is illustrated nicely in this figure. On the vertical axis you have the energy per mass and on the horizontal axis the energy per volume. Watch out, both axes are logarithmic.

You want energy sources that are as much in the top right corner as possible. Kerosene is up here, and lithium-ion batteries down here. You can see that kerosene has 18 times more energy in the same volume as a typical lithium-ion battery and sixty times more energy in the same mass. This means it’s difficult to pack power onto an aircraft in the form of electric batteries. Consequently, electric planes are slow and don’t get far.
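To attach rough numbers to the figure (the values below are typical order-of-magnitude figures for jet fuel and lithium-ion cells that I’m assuming, not numbers read off the chart):

```python
# Rough energy comparison of kerosene (jet fuel) and lithium-ion batteries.
# Values are typical order-of-magnitude numbers; exact figures vary by source and cell type.
kerosene_mj_per_kg = 43.0   # specific energy of jet fuel
kerosene_mj_per_l  = 35.0   # energy density of jet fuel
li_ion_mj_per_kg   = 0.7    # roughly a 200 Wh/kg cell
li_ion_mj_per_l    = 2.0    # roughly a 550 Wh/l cell

print(f"per mass:   ~{kerosene_mj_per_kg / li_ion_mj_per_kg:.0f}x")   # ~60x
print(f"per volume: ~{kerosene_mj_per_l / li_ion_mj_per_l:.0f}x")     # ~18x
```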

For example, in 2020, the Slovenian aircraft company Pipistrel brought the first electric aircraft onto the market. It’s powered by lithium-ion batteries, can carry up to 200 kilograms, and flies for up to 50 minutes at a speed of about 100 kilometers per hour. It’s called Velis Electro, which sounds like a good name for a superhero. And indeed, carrying 200 kilograms at 100 kilometers per hour is great if you want to rescue the new British Prime Minister from an oncoming truck, I mean, lorry. Though there isn’t much risk of that happening because the lorries are stuck at the French border. Which, incidentally, is farther away from London than this plane can even fly.

Nevertheless, some other companies are working on electric planes too. The Swedish start-up Heart Aerospace plans to build the first electric commercial aircraft by 2026. They ambitiously want to reach 400 kilometers of range and hope it’ll carry up to 19 passengers. Presumably that’s 19 average Swedes, which is about the same weight as 2 average Germans.

So, unless there’s a really big breakthrough in battery technology, electric planes aren’t going to replace kerosene powered ones for any sizeable share of the market, though they might be used, for example, to train pilots. Train them to fly, that is, not to rescue prime ministers.

A plus point of electric planes however is that they are more energy-efficient than kerosene powered ones. An electric system has an efficiency of up to ninety percent, but kerosene engines only reach about fifty percent efficiency. To this you must add other inefficiencies in the gearbox and the mechanics of the propeller or fan and so on. The total efficiency is then around 70-75 percent for electric engines and between thirty and forty percent for kerosene engines.

The technological developments that are going to have the biggest impact on electric planes are new types of batteries that are either lighter or more efficient or, ideally, both. Lithium-sulfur and lithium-oxygen batteries are two examples that are currently attracting attention. They pack three to ten times more energy into the same mass than lithium-ion batteries.

2. Hydrogen

Let’s then talk about hydrogen. No, we don’t want to bring the Zeppelin back. We are talking about planes powered by liquid hydrogen. If we look back at this handy figure, you can see that liquid hydrogen really packs a lot of energy into a small amount of mass. However, it still takes up a fairly large volume compared to kerosene. And volume on an airplane means you must make the plane bigger, which makes it heavier, so the weight issue creeps back in.

Also, hydrogen usually isn’t liquid, so you have to either cool it or keep it under pressure. Cooling requires a lot of energy, which is bad for efficiency. But keeping hydrogen under pressure requires thick tanks which are heavy. It’s like there’s a reason fossil fuels became popular.

The downside of hydrogen is its low efficiency, which is only around 45 percent. Still, together with the high energy density, it’s not bad, and given that burning hydrogen doesn’t create carbon dioxide it’s worth a try.

Hydrogen-powered airplanes aren’t new. Hybrid airplanes that used hydrogen were being tested already in the 1950s. Airbus is now among the companies that are developing this technology further. Just a few months ago, they presented what they call the ZEROe demonstrator. It’s a hydrogen-powered engine that will be tested both on the ground and in flight, though in the test phase the plane will still be carried by standard engines.

They recently did a 4-hour test flight for the hydrogen engine. The plane they used was an A380, that’s a two-deck plane that can transport up to eight hundred passengers or so. They used this large plane because it has plenty of room for the hydrogen tanks plus the measurement equipment plus a group of engineers plus their emotional support turkeys. However, the intended use of the hydrogen engine is a somewhat smaller plane, the A350. Airbus wants to build the world’s first zero-emission commercial aircraft by 2035.

The Airbus competitor Boeing is not quite so enthusiastic about hydrogen. Their website explains that hydrogen “introduces certification, infrastructure, and operational challenges”. And just to clarify the technical terms, “challenge” is American English for “problem”. Because of those challenges, Boeing focusses on sustainable fuels. According to its CEO, sustainable fuels are “the only answer between now and 2050”. So let’s look at those next.

3. Biofuel

Bio-fuels are usually made from plants. And when I say plants, I mean plants that have recently deceased and not been underground for a hundred million years. Bio-fuels still create carbon dioxide when burned, but the idea is that it’s carbon dioxide you took out of the air when you grew the plants, so the carbon just goes around in a cycle. This means unlike regular jet fuel, which releases carbon dioxide that was long stored underground in oil, bio-fuels don’t increase the net amount of carbon dioxide in the atmosphere.

The most common bio-fuel is ethanol, which can be made for example from corn. It can be, and is being, used for cars. But ethanol isn’t a good choice for airplanes because it’s not energy dense enough; basically, it’s too heavy. In the figure we already looked at earlier, it’s up here.

Another issue with bio-fuel is that, to be used for aircraft, it must fulfil a lot of requirements, in particular it must continue to flow well at low temperatures. I’m not much of an engineer but even I can see that if the fuel freezes midflight that might be a challenge.

A bio jet fuel which fits the bill is synthetic paraffinic kerosene, which can be made from vegetable oils or animal fats, but also from sugar or corn. Paraffinic kerosene is in some sense better than fossil kerosene. For example, it generates less carbon dioxide and less sulfur.

The International Air Transport Association considers bio jet fuel a key element to get off fossil fuels. Indeed, some airlines are already using biofuels. The Brazilian company Azul Airlines has been using biofuel from sugarcane on some of their flights for a decade. British Airways has partnered with the fuel company LanzaJet to develop biofuels that are suitable for aircraft. The American airline United is also investing in biofuels. And the Scandinavian airline SAS has the goal to use 17 percent biofuel by 2025.

The problem, I mean challenge, is that bio jet fuels still cost three to six times more than conventional jet fuel. Moreover, researchers from the UK and the Netherlands have estimated that the start-up cost for a commercial bio jet fuel plant is upwards of 100 million dollars, which is a barrier to get things going.

But making the production of bio jet fuel easier and more affordable is presently a very active research area. An approach that’s attracting a lot of attention is using microalgae. They produce a lot of biomass, and they do so quickly. Microalgae reach about three to eight percent efficiency in transforming solar energy to chemical energy, while conventional biofuel crops stay below 1 percent. Take that, conventional biofuel crops!

Algae also generate more oil than most plants, and genetic engineering can further improve the yield. A few years ago, ExxonMobil partnered with Craig Venter’s Synthetic Genomics and they developed a new strain of algae using the gene-editing tool CRISPR. The gene-engineered algae had a fat content of 40-55 percent, compared to only 20 percent in the naturally occurring strain.

But bio fuels from algae also have a downside. They have a high nitrogen content so the fuel produced from them will release nitrogen oxides. If you remember, we talked about this earlier in our video on pollution from diesel engines. Then again, you can try to filter this out, at least to some extent.

Another way to make biofuels more affordable is to let the customer pay for it. SAS for example says if you pay more for the ticket, they’ll put more biofuel into the jet. So, it’s either more legroom or saving the planet, tough times for tall people.

4. Synthetic jet fuel

Finally, you can go for entirely synthetic jet fuel. For this, you take the carbon from the atmosphere using carbon capture, so you get rid of the plants in the production process. Instead, you use a renewable energy source to produce a chemical that’s similar to kerosene from the carbon dioxide and water.

The resulting fuels are not completely carbon neutral because of the production process, but compared to fossil fuels they have a small carbon footprint. According to some sources, it’s about 70 to 80 percent less than fossil fuels, though those numbers are at present not very reliable.

Synthetic kerosene is already in use. Since 2009, it can be blended with conventional jet fuel. The maximum blending ratio depends on the properties of the synthetic component, but it can be up to fifty percent. This restriction is just a precautionary requirement and it’s likely to be relaxed in the future. The problem, I mean challenge, is that at the moment synthetic kerosene is about 4 to 5 times more expensive than fossil kerosene.

Nevertheless, a lot of airlines have expressed interest in synthetic kerosene. For example, last October, Lufthansa agreed on an annual purchase of at least 25 thousand liters for at least five years. That isn’t an awful lot. Just for comparison, a single A380 holds up to three hundred twenty thousand liters. But it’s a first step to test if the synthetic stuff works. Qantas announced a few months ago that they’ll invest 35 million dollars in research and development for synthetic jet fuel. They hope to start using it in the early 2030s.

But let me give you some numbers to illustrate the… challenge. In 2020 the market for commercial jet fuel was about 106 billion gallons globally. Twenty-one billion gallons in the US alone. According to the US Energy Information Administration, it is expected to grow to 230 billion gallons globally by 2050.

At present, the global production of synthetic kerosene is about 33 million gallons per year. That’s less than a tenth of a percent of the total jet fuel. Still, the International Air Transport Association is tentatively hopeful. They recently issued a report, according to which current investments will expand the annual production of synthetic kerosene to 1 point 3 billion gallons by 2025. They say that production could reach eight billion gallons by 2030 with effective government incentives, by which they probably mean subsidies.

So, even if we’re wildly optimistic and pour a lot of money into it, we might be able to replace 5 percent of jet fuel with synthetic fuel by 2030. It isn’t going to save the planet. But maybe it’s enough to push transatlantic flight down on the sin-list below eating meat, so the Swedes can move on from flight-shaming to meat-shaming.
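If you want to check those percentages, here’s the arithmetic with the figures quoted above; the 2030 demand is a simple linear interpolation between the 2020 and 2050 numbers, which is my own assumption, not an official projection:

```python
# Sanity check of the synthetic-kerosene shares quoted above.
jet_fuel_2020_gal = 106e9   # global commercial jet fuel market in 2020
jet_fuel_2050_gal = 230e9   # projected global demand in 2050
synth_now_gal     = 33e6    # current synthetic kerosene production per year
synth_2030_gal    = 8e9     # optimistic 2030 production with government incentives

print(f"share today: {synth_now_gal / jet_fuel_2020_gal:.3%}")   # ~0.03%, less than a tenth of a percent

# Crude linear interpolation for 2030 demand -- an assumption, not an official figure.
jet_fuel_2030_gal = jet_fuel_2020_gal + (jet_fuel_2050_gal - jet_fuel_2020_gal) * (2030 - 2020) / (2050 - 2020)
print(f"optimistic share in 2030: {synth_2030_gal / jet_fuel_2030_gal:.1%}")   # ~5%
```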

Wednesday, September 28, 2022

I’ve said it all before but here we go again

[I didn't write the title and byline and indeed didn't see it until it appeared online.]
For reasons I don’t fully understand, particle physicists have recently started picking on me again for allegedly being hostile, and have been coming at me with their usual ad hominem attacks.

What’s going on? I spent years trying to understand why their field isn’t making progress, analyzing the problem, and putting forward a solution. It’s not that I hate particle physics, it’s rather to the contrary, I think it’s too important to let it die. But they don’t like to hear that their field urgently needs to change direction, so they attack me as the bearer of bad news. 

But trying to get rid of me isn’t going to solve their problem. For one thing, it's not working. More importantly, everyone can see that nothing useful is coming out of particle physics, it’s just a sink of money. Lots of money. And soon enough governments are going to realize that particle physics is a good place to save money that they need for more urgent things. It would be in particle physicists’ own interest to listen to what I have to say.

And I have said this all many times before but I hate long twitter threads, so let me just summarize it in one blogpost:

a) Predictions for fundamentally new phenomena made from new theories in particle physics have all been wrong ever since the completion of the standard model in the 1970s. You have witnessed this ongoing failure in the popular science media. All their ideas were either falsified or have been turned into eternally amendable and, for all practical purposes, unfalsifiable models, like supersymmetry.

b) Saying that “it’s difficult” explains why they haven’t managed to find new phenomena, but it doesn’t explain why their predictions are constantly wrong. 

c) Scientists should learn from failure. If particle physicists’ method of theory-development isn’t working, they should analyze why, and change their methods. But this isn’t happening.

My answer to why their current method isn’t working is that their new theories (often in the form of new particles) do not solve any problems in the existing theories. They just add unnecessary clutter. When theoretical predictions were correct in the past, they solved problems of consistency (examples: the Higgs, antiparticles, neutrinos, general relativity, etc.).

Two common misunderstandings: Note that I do NOT say theorists in the past used this argument to make their predictions. I am merely noting in hindsight that’s what they did. It’s what the successful predictions have in common, and we should learn from history. Neither do I say that theoretical predictions were the ONLY way that progress happened. Of course not. Progress can also happen by experimental discoveries. But the more expensive new experiments become, the more careful we have to be about deciding which experiments to make, so we need solid theoretical predictions.

In many cases, particle physicists have made up pseudo-problems that they claim their new particles solve. Pseudo-problems are metaphysical misgivings, often a perceived lack of beauty. A typical example is the alleged problem with the Higgs mass being too small (that was behind the idea that the LHC should see supersymmetry). It’s a pseudo-problem because there is obviously nothing wrong with the Higgs-mass being what it is, seeing that they can very well make predictions with the standard model and its Higgs as it is. 

(I sometimes see particle physicists claiming that supersymmetry “explains” the Higgs-mass. This is bluntly wrong. You cannot calculate the Higgs-mass from supersymmetric models, it remains a free parameter.)

Other pseudo-problems are the baryon asymmetry or the smallness of the cosmological constant etc. I have a list that distinguishes problems from pseudo-problems here.

So my recommendation is that theory development should focus on resolving inconsistencies, and stop wasting time on pseudo-problems. Real problems are, for example, the missing quantization of gravity, dark matter, the measurement problem in quantum mechanics, as well as several rather technical issues with quantum mechanics (see the above-mentioned list).

When I say “dark matter” I refer to the inconsistency between observation and theory. Note that to solve this problem one does NOT need details of the particles. That’s another point which particle physicists like to misunderstand. You fit the observations with an energy density and that’s pretty much it. You don’t need to fumble together entire “hidden sectors” with “portals” and other nonsense. Come on, people, wake up! This isn’t proper science!

There are several reasons why particle physicists can’t and don’t want to make this change. The most important one is that it would dramatically impede their capability to produce papers. And papers are what keeps grant cycles churning. This is a systemic problem. The next problem is that they can’t believe that what I say can possibly be correct because they have grown up in a community that has taught them their current methods are good. That’s groupthink in action.

There are solutions to both of these problems, but they require changes from within the community.

Particle physicists, rather unsurprisingly, don’t like the idea that they have to change. Their responses are boringly predictable.

They almost all attack me rather than my argument. Typically they will make claims like I’m just “trying to sell books” or that I “want attention” or that I “like to be contrarian” or that, in one way or another, I don’t know what I am talking about. I have yet to find a particle physicist who actually engages with the argument I made. Indeed most of them never bother finding out what I said in the first place.

A novel accusation that I recently heard for the first time is that I allegedly refuse to argue with them. A particle physicist claimed on twitter that I had been repeatedly invited to give a seminar at CERN but declined, something she had been told by someone else. This is untrue. I have to my best knowledge never declined an opportunity to talk to particle physicists, even though I have been yelled at repeatedly. I was never invited to give a seminar at CERN. 

The particle physicist who made this claim actually went and asked the main seminar organizers at CERN and they confirmed that I was never invited. She apologized. So it’s all good, except that it documents they have been circulating lies about me in the attempt to question my expertise. (Another symptom of social reinforcement.)

There have also been several instances in the past where particle physicists called senior people at my workplace to complain about me, probably in the hope of intimidating me or getting me fired. It speaks much for my institution that the people in charge exerted no pressure on me. (In other words, don't bother calling them, it’s not going to help.)

The only “arguments” I hear from particle physicists are misunderstandings that I have cleared up thousands of times in the past. Like the dumb claim that inventing particles worked for Dirac. Or that I’m “anti-science” because I think building a bigger collider isn’t a good investment right now.

You would think that scientists should be interested in finding out how their field can make progress, but particle physicists just desperately try to make me go away, as if I was the problem. 

But hey, here’s a pro-tip: If you want to sell books, I recommend you don’t write them about theoretical high energy physics. It’s not a topic that has a huge market. Also, I have way more attention than I need or want. I don’t want attention, I want to see progress. And I don’t like being contrarian, I am just not afraid of being contrarian when it’s necessary.

As a consequence of these recent insults targeted at me, I wrote an opinion piece for the Guardian that appeared on Monday. Please note the causal order: I wrote the piece because particle physicists picked on me in a renewed attempt to justify continuing with their failed methods, not the other way round. 

It's not that I think they will finally see the light. But yeah I’m having fun for sure.

Monday, September 26, 2022

Book Review “The Biggest Ideas in the Universe: Space, Time, and Motion” by Sean Carroll

The Biggest Ideas in the Universe
Space, Time, and Motion
By Sean Carroll
Dutton, Sep 20, 2022

The first time I heard Sean Carroll speak was almost 20 years ago in Tucson, Arizona, where he gave a physics colloquium. He had just published his first book, a textbook on General Relativity. His colloquium was basically an introduction to modern cosmology, dark matter, dark energy, and the cosmic microwave background.

It was a splendidly delivered talk; the students loved it. But later I overheard several faculty members remarking they had found it “too simple” and that Sean didn’t seem to be doing much original work. To them, the only good talk was an incomprehensible one. Those remarks, I would later come to realize, are symptomatic of academia: You impress your colleagues by being incomprehensible.

Sean had begun blogging the same year I heard him speak in Tucson, 2004. I would begin blogging not much later, though for unrelated reasons (I originally didn’t intend to write about science), and naturally I kept track of what he was up to.

Since then, it has made me very happy to see Sean making a good career both in research and in science communication, on his own terms. I have met him a few times over the years, read most of his books, and reviewed a few. But I didn’t anticipate he’d pop up on YouTube in 2020, stuck at home during the first COVID lockdowns, like all of us. There he was, green screen as crappy as mine had been a year earlier, promising to cover “The Biggest Ideas in the Universe”, when I had just decided to put more effort into my own YouTube channel.

To my relief it became clear quickly that Sean’s YouTube ambitions were much different from mine. He went for the basics where I prioritized brevity. If my YouTube channel is a buffet, then his is the farmer’s market. And luckily his YouTube appearance remained temporary.

His newest book is the first of three to summarize his YouTube series, focused on dynamical laws, space, and time. It gradually builds up from functions to equations of motion, to concepts like energy, velocity, and momentum, space-time and its geometry, and finishes with black holes in General Relativity. He uses the most essential equations and explains how they work, but you can follow the explanation just by reading the text.

This isn’t your usual popular science book. It doesn’t discuss speculative new ideas, but it’ll give you the background to understand them. It’s a timeless book that I am sure will become a classic, a go-to reference for the interested non-expert who wants to see how the gears of the machinery turn underneath the superficial stories you find in popular science books.

Once the three volumes are complete, they’ll presumably cover the classes you’d take for a master’s degree in physics. There aren’t many books like this, which fill the gap between textbooks and popular science books. The only other example that comes to my mind is the “Theoretical Minimum” series by Susskind, Hrabovsky, Friedman, and Cabannes. Sean’s is more focused on the essentials and somewhat lighter in the maths. I have also found Sean’s to be better written.

I’ve always admired Sean for ignoring the unwritten dictum of academia that inaccessibility makes you more valuable, and for his enthusiasm in helping people understand physics, despite the fact that, 20 years ago, most senior academics considered this a waste of time. Today the situation is entirely different. I think Sean was one of the people who changed this attitude.

Saturday, September 24, 2022

What is "Nothing"?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Like most videos on YouTube, this is a video about nothing. But we’re a science channel, so we’ll talk about nine levels of nothing. What are the nine levels of nothing? Can you really make a universe from them? And if someone asks you why there is something rather than nothing, what’s a good answer? That’s what we will talk about today.

First things first, what do we mean by “nothing”? A first attempt to define nothing is to look at how we use the word in everyday language. Suppose your birthday is coming up and you say “Oh, I want nothing.” So when I give you a box for your birthday, you expect it to be empty. It’s nothing, in the sense that it doesn’t contain any objects. We will call this the level 1 nothing. It’s a pre-science nothing, the nothing you might refer to before you’ve ever heard of physics.

But of course, you have heard of physics, and so you know that even a box full of level one nothing still contains air, and air is made of something. You wanted nothing for your birthday, and certainly you’ll be disappointed to get air instead. Let’s therefore pump all the air out of this box. We’ll call what’s left the level 2 nothing. It’s what was called a vacuum in the 17th century, no objects, and no air either.

Okay you might say, but we don’t live in the 17th century, and when you said you want nothing for your birthday you really meant it. If we just pump out the air, there’s still the occasional cosmic ray inside, or neutrinos, or dark matter particles, if they exist. So, we go one step further and remove all types of matter, this gives us the level 3 nothing. Indeed, since objects and air are made of particles, removing particles includes the previous two nothings.

But even if the box is closed, there would still be radiation in the box, for example in the infrared, which is maybe not much, but it’s something. And the magnetic field of the earth would also still go through the box. Therefore, we now also remove all types of radiation and all fields. Because you wanted nothing for your birthday and of course I want you to be happy. Now we have a level four nothing: no particles, no radiation, no fields. What you have left then is what you could call the 21st century vacuum.

The level 4 nothing, however, is still something. For one thing, many physicists argue that the vacuum has an energy density and pressure and associate this with the cosmological constant. As I explained in this earlier video, I think this doesn’t make sense, the cosmological constant is just a constant of nature which determines the curvature of empty space. Empty space just isn’t necessarily flat. Talking about the curvature of empty space as if it was energy density and pressure is just a weird interpretation of geometry.

Even leaving aside the cosmological constant, the 21st century vacuum isn’t nothing because in quantum field theories, like the standard model of particle physics, the vacuum contains virtual particles that are created in pairs but quickly destroy each other again. They come out of the vacuum and disappear back into it. Virtual particle pairs are like couples you’ve never heard of that pop up in your news feed, destroy each other, and disappear back into nothing. Except with maths.

We can’t directly measure virtual particles, that’s why they’re called virtual. But we can infer their presence because we can measure their influence on other particles. Or we could, if we hadn’t removed those from the box already.

For example, if we look at the energy levels of electrons around an atomic nucleus, these are slightly shifted in the presence of virtual particles. This can be measured, and it has been measured. That’s one way we know virtual particles exist.

You could argue that the phrase “virtual particle” is really just a name for a mathematical expression that we use to calculate measurement outcomes, and I would agree. But be that as it may, we can observe their effects and nothing has no effects so it’s got to be something. And you wanted nothing for your birthday, not a box full of virtual particles. Besides, virtual particles can sometimes become real, for example near black holes, so they can actually kick us back from level four to level three.

To get to level 5 nothing we therefore remove the twenty-first century vacuum too. Now we have neither virtual nor real particles nor radiation nor fields and there’s also no way that any of them can reappear from the vacuum. What’s left in the box now? Well. There’s still space and time in it. And time is money, and money is the root of all evil, and that’s a terrible joke, but still something rather than nothing.

This is why for level 6 of nothing, things get decidedly weird because we remove space and time, too. And just to make sure, we will also remove all other equations and laws of nature that might give rise to space and time, such as strings or quantum gravity, or whatever other idea you believe in. Remove all of it. At this point there is nothing left from our theories of physics.

So why is there any physics at all? This question is one of the reasons we’ll never have a theory of everything, because even the best theory can’t explain its own existence. Scientific explanations end at this level, and it’s probably where this video should end, but I admit I enjoy talking about nothing, so let’s see what else there is to say.

I have taken inspiration for this video from an essay by Robert Lawrence Kuhn. He also talks about it in this video. My first six levels of nothing are similar to his, though not exactly the same because I’ve looked at it from the perspective of a physicist. But Kuhn doesn’t stop there, he has three more levels of nothing.

Taking away everything physical still leaves us with something in your birthday box because you might grant the existence of non-physical entities. For example, some people believe in god, or other religious ideas, like the belief that consciousness is non-physical. In level 7 we remove those, too. Theological explanations end at this level. If you think that god necessarily has to exist then you have to get off the bus at level 7 and accept that the question why god exists doesn’t have an answer.

Is the box finally empty? Not quite. There’s still mathematics that could be said to exist in some sense. That is, we have abstract ideas and objects, numbers, sets, logic, truths and falsehoods, and the entire platonic world of ideals. For the 8th level of nothing, we remove those too.

Has this finally removed everything? Are you finally happy with your birthday gift? Well, there’s still the possibility that something comes into existence even if that something doesn’t exist. And a possibility is something in and by itself. So, for level 9, we also remove all possibilities. This is Kuhn’s final level of nothing. It’s the best nothing I can give you for your birthday. I hope you’re happy now.

The ninth level of nothing leaves us with the always interesting question whether the absence of something is also something, which is why philosophers like to discuss whether holes in cheese exist. Personally, I’m more interested in the cheese. I guess that’s why I’m a physicist and not a philosopher, but I found Kuhn’s classification of nothings useful because it explains why we sometimes talk past each other apropos of nothing.

For example, “inflation” is a currently popular theory in physics according to which our universe was created by a quantum fluctuation from a vacuum. We have no evidence that this is correct, but let us leave this aside for today, and just ask what kind of creation this would be if it was correct. The idea of inflation is that you have a big space that’s filled with a quantum vacuum, and every once in a while a quantum fluctuation succeeds in becoming so large that it begins to grow. Indeed, it grows into an entire universe like ours, with cheese, and holes in it, and all.

In such a vacuum there are many fluctuations, and therefore the creation of a universe doesn’t happen only once, it happens over and over again. It’s a type of multiverse called “eternal inflation”. We just talked about this some weeks ago. The beginning of our universe in this eternal inflation would be a creation from a level four nothing.

Physics can get you a little further than this because you can write down a theory in which space and time is created from a state without space and time. It’s arguably somewhat hard to imagine what this means, but you can certainly write down mathematics for it.

You see, I just define a symbol for a state without space and time, and an operator that creates space and time, then I let the operator act on the state, and voila, I’ve created space and time. Ok, I have oversimplified this a little, but basically this is how it works. I really think people are way too respectful of all the stuff that physicists made up and get away with just because their maths is incomprehensible.
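Schematically, the kind of expression I mean looks something like this; it’s a cartoon of the formalism, not a quote from any particular paper:

```latex
% Cartoon of "creating space and time from nothing":
% |\varnothing> stands for a state without space and time,
% \hat{U} is an operator whose action produces a state describing a geometry (and fields).
\hat{U}\,\lvert \varnothing \rangle \;=\; \lvert\, g_{\mu\nu},\ \text{fields},\ \dots \,\rangle
```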

Lawrence Krauss’ book “A Universe from Nothing” is about this idea of creating space and time from nothing. And this would be a creation from a level 5 nothing. But even if you don’t believe in God, a level 5 nothing is still something. To begin with it has the mathematics that give rise to all the rest.

If physics doesn’t answer the question why there is something rather than nothing, then what could? Philosophers have discussed that back and forth. I’m not much of a philosopher and have nothing worthwhile to add. That’s a ninth level nothing. But just in case someone stops you on the street and asks “why is there something rather than nothing”, let me tell you the three most popular answers that I have come across.

The most popular answer at the moment seems to be that nothing is absurd. It doesn’t make sense in and by itself and can’t be. It’s just a confusion of human language that we have inflicted on ourselves. The difficulty becomes apparent if you try to explain what nothing is, because any statement about it requires something. I mean, if I can talk about nothing, then nothing is the thing that I talk about, and it’s therefore something?

Another answer is that no explanation is needed, or there is no explanation. God made it, que sera, sera, please move on, nothing to see here. See what I did there?

A third answer might be that our universe, or at least any universe, is in some sense the best option, and nothing doesn’t live up to the requirement because nothing can’t be any good.

If someone asked me on the street why there is something rather than nothing, I’d probably just shrug. I can’t think of any way to answer the question, and I also don’t see what difference it would make if we could answer it. I mean, suppose someone came tomorrow with a 2000 page proof that something must exist, what would it be good for? I guess I could do a video about it.

More seriously, just because it’s not a question that I want to spend my time on doesn’t mean I think no one should. In fact, I am glad that we are not all interested in the same questions and I’m happy to leave this one to philosophers. Do you have an answer that I didn’t mention? Let me know in the comments.

Saturday, September 17, 2022

The New Meta-Materials for Superlenses and Invisibility Cloaks

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Meta is the Greek prefix for “after” and Aristotle used the phrase “metaphysics” for the stuff in his writing that came literally “after” he was done with the physics. Metaphysics is concerned with some of the most important questions we face at this critical moment in human history. Questions like whether the holes in cheese exist, whether cheese exists, or whether only the atoms that make up the cheese exist.

But this is not what we’ll talk about today. This video is about metamaterials which, I assure you, have nothing to do with cheese. Though, maybe, a little bit. Metamaterials are the next technological stage “after” materials. It’s a research area that has progressed incredibly quickly in the past decade, and that includes superlenses, invisibility cloaks, earthquake protection, and also chocolate. What are metamaterials, and what are they good for? That’s what we’ll talk about today.

First things first, what are metamaterials? A linguistic approach might lead you to think a metamaterial is what comes after the material, so I guess, that’d be the bill. But that’s not quite right. A metamaterial has custom-designed micro-structures which give a material new properties. These micro-structures are typically arrays that resonate at specific frequencies, and that interact either with acoustic waves or with electromagnetic waves. This way, metamaterials can be used to control sound, heat, light, and even earthquakes.

This sounds pretty abstract, so let us start with a concrete example, the superlens.

When you take an image of an object, with your eyes or with a camera, you use a lens to collect light that reflects off the surface of the object. Lenses work by “refraction”, which means they change the angle at which the light travels. If an object is too close to the lens, refraction can no longer bring the light to a focus. For this reason, you can’t take images of things that are too close to the lens.

But not all the light that reflects from an object gets away. The part that gets away is called the far field, but there is another part of the light, called the near field, which stays near the surface of the object. The electromagnetic waves in the near field oscillate like usual, but they don’t travel into the distance; they decay exponentially. Such a wave is also called an “evanescent wave”.

This figure shows how waves enter a medium at a surface, which is the red line. The top image is a normal, refracted wave, which continues traveling through space but the angle changes when it enters the medium. The bottom image shows an evanescent wave, which decays with distance from the surface. The evanescent waves contain tiny details of the structure of the object, but since they don’t reach the camera, those details are lost. And you can’t get the camera arbitrarily close to the object, because then you couldn’t refocus the light. And that’s a shame because you might not be able to count the hairs in my eyebrows after all.
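As an aside, the exponential decay can be made precise with a standard textbook argument, using generic symbols that are not tied to any particular figure: a wave component that varies along the surface with spatial frequency \(k_x\) has a perpendicular wave number

\[ k_z = \sqrt{\left(\frac{2\pi}{\lambda}\right)^2 - k_x^2}. \]

For coarse features \(k_x\) is small and \(k_z\) is real, so the wave propagates. For details finer than the wavelength, \(k_x > 2\pi/\lambda\), \(k_z\) becomes imaginary, and the field falls off like \(e^{-|k_z| z}\) with distance \(z\) from the surface. That is the evanescent wave.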

But in 2000, the British physicist Sir John Pendry of Imperial College in London found a way to use the information in the near field. He said, it’s easy enough, you just use a material that has a negative refractive index.

What does it mean for a material to have a negative refractive index? Normal materials don’t have this, but metamaterials can. When a ray of light enters a medium, the refractive indices of the two media relate the angle of the incoming ray to the angle of the refracted ray. This is called Snell’s law. If the refractive index of the medium is negative, the refracted ray ends up on the same side of the normal as the incoming ray, so it bends back toward the direction the light came from.
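For reference, Snell’s law in formula form, with both angles measured from the normal to the surface, reads

\[ n_1 \sin\theta_1 = n_2 \sin\theta_2. \]

If \(n_2\) is negative, then \(\sin\theta_2\) has to be negative too, which is just the flipped refraction described above.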

What would that look like? Well, as I said, stuff that we normally encounter in daily life doesn’t have a negative refractive index, so I can’t show you a photo. But we can illustrate what it would look like. You probably remember the “broken pencil” illusion. If you put a pencil half into a glass of water, then the part in the water appears shifted to the side. It’s because the light is refracted in the water but the brain interprets the visual input as if the light travels in straight lines. If the water had a negative refractive index, then the lower part of the pencil wouldn’t just seem shifted, it’d also be reflected to the other side.

Aaron Danner had the great idea to use a raytracer to create a 3-d image of a pool filled with water that has a negative refractive index. Here is the image of the pool with normal water. And here is the image with the negative refractive index. The thing to pay attention to is the three black lines, which indicate the corner of the pool. You’d normally expect this to be out of sight, but since this strange water mixes refraction with reflection, you can now see it. If there were fish in the pool they’d appear to be floating on top of the water. Which, I don't know if you know this, but it’s not what a fish is supposed to do.

What’s this got to do with lenses? Well, remember that you need lenses to collect rays of light. But if you put a sheet of a medium with negative refractive index between two with normal refractive index, that’ll basically turn the light rays around and effectively focus them. It acts like a lens. And, here comes the important bit, this also works for evanescent waves which usually get lost. They get focused too, and are prevented from decaying. This is why metamaterials with a negative refractive index can reach a resolution that’s impossible to reach with normal lenses.

A superlens was built for the first time in 2005 by researchers at UC Berkeley. Their lens was made of a silver sheet that was merely 35 nanometers thick. In this case, the structure of the material comes from oscillations in the electron density in the silver, which amplify the evanescent waves coming from the object. You have to put the object directly into contact with the silver surface for that to work.

This image (A) is a lithograph taken with a focused ion beam, so this is the control image. This image (C) is the optical control without superlens. And this one (B) is the superlens image. You can clearly see that the superlens image has a higher resolution. This graph D shows the difference in accuracy between imaging with the superlens, that’s the blue curve, compared to imaging without the superlens, that’s the red curve.

Though this jump in resolution might sound good, these lenses are rather impractical. You have to put the metamaterial directly into contact with whatever you want to image and then your camera on top. So it does away with selfie sticks, but unfortunately also ruins your makeup. This is why, last year, a group of researchers from Iran and Switzerland published a paper in Scientific Reports, in which they propose to use a metamaterial to turn the near field into a far field, so you can put your camera elsewhere.

They call this device a “hyperlens”, which to me sounds like it’s a superlens that’s had too much coffee, but they mean a grid of aluminum nanorods that resonate at wavelengths in the visible part of the spectrum. For now, this is just a computer simulation, but the idea is that the resonance converts the evanescent modes into propagating modes, so then you can capture them elsewhere. The researchers claim that, at least in their numerical simulations, this structure can image biological tissues with a resolution of a tenth of the wavelength of the light. The resolution limit of conventional lenses is about a quarter of a wavelength.
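To put rough numbers on those resolution claims, take green light with a wavelength of 500 nanometers, an illustrative value rather than one from the paper:

\[ \frac{\lambda}{4} = 125\ \text{nm} \quad \text{(conventional limit)}, \qquad \frac{\lambda}{10} = 50\ \text{nm} \quad \text{(claimed for the simulated hyperlens)}. \]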


Let’s then talk about what’s probably the best known application of metamaterials, the invisibility cloak. You may have read the headlines about this a few years ago. Metamaterials make invisibility cloaks possible because with a negative refractive index you can bend light in the opposite direction to what normal materials do. This means that, at least in theory, with the right combination of materials and metamaterials, you can bend light around an object. This appears to us as if the object isn’t there, again because the brain assumes that light travels in straight lines.

This sounds pretty cool, and indeed scientists have some things to show, or maybe in this case it’s better to say “not show”. Early experiments in the mid-2000s mainly used microwaves. But in 2015, a team of researchers from China made an invisibility cloak that works in the infrared. In this figure (Figure 1f) you see how the light is redirected. They used several triangles of germanium and put them in a very precise geometric configuration so that a hidden area is created inside. You might say that this isn’t much of a metamaterial, but it’s the same idea: you custom-design structures to redirect waves as you want. Into this hidden region they put a mouse (Figure 2b). Then they took an image with and without the cloak (Figures 4a and 4b). Half of the mouse is gone!

Invisibility cloaks in the visible part of the spectrum haven’t yet been made, but some semi-invisibility shields exist, for example this one from a company named Hyperstealth Corp. These don’t work by bending the light around objects, but by spreading the light in the horizontal plane. If you have a narrow object, then its image will be overpowered by the light coming from the sides of the object, which blurs out what is behind. This works particularly well when the background is uniform. However, it’s not really an invisibility shield. The easiest way to build an invisibility shield is to put a camera behind you and project the image on a screen in front of you.

You can also use metamaterials to manipulate electromagnetic fields that are not in the optical range. For example, as I explained in this earlier video, the main problem with wireless power transfer is that the power decreases very rapidly with distance from the sender. A “magnetic superlens”, however, could extend this reach.

That this works was shown in a paper by a group of American researchers in 2014. This figure shows the difference between wireless power transfer using a magnetic superlens compared to wireless power transfer through free space. On the y-axis, we have wireless power transfer efficiency, and on the x-axis, we have distance in meters. The solid black line represents wireless power transfer through free space, which drops quickly to near-zero values as distance increases.

The colored lines represent wireless power transfer with the use of a magnetic superlens made up of metamaterials. You see that at best you can extend the reach by a few centimeters. And notice that the efficiency is in all cases in the single digits. So, nice idea, but in practice it doesn’t make much of a difference.

Another type of wave you can manipulate is acoustic waves. Acoustic metamaterials aren’t really a new thing. Sound absorption foam like this one uses basically the same idea. It has a lot of tiny holes. So you see, it’s kind of like cheese. The holes make it very difficult for sound waves at certain frequencies to bounce back, which basically kills the echo. If I wrap this around my head, you’ll hear the difference. Wrapping your head into one of those will generally improve your experience of the world, highly recommended.

Metamaterials are more sophisticated versions of this. You can, for example, design them so that they only absorb particular frequencies; this is called a sonic or phononic crystal. Another thing you can do is to reflect the signal back without spreading it out. This was done by a team of researchers from China and the USA in 2018. The material they used was just a plastic dish with a spiral structure that effectively changes the refractive index. They say an application could be to make vehicles easier to detect. Though I suspect that their metamaterial would sell better if it made a car less easy to detect.

You can also use acoustic metamaterials to build an acoustic type of superlens, which has been done for ultrasound, but it’s the kind of solution still looking for a problem. And, as you can guess, researchers are trying to build acoustic invisibility shields. This has been done for example underwater with ultrasound, which is great if you want to hide from dolphins. And in 2014, a group from Duke University used a pyramid with a special surface structure that makes it reflect sound as if it were an empty plane. Here is how this pyramid would look if you could see sound. The pyramid is hollow, so you can hide stuff inside. Maybe they’ve finally figured out what the Egyptians were up to?

Another application of metamaterials is earthquake protection. Just as you can use structures in materials to change how light and sound propagate, you can change the properties of the ground to change how seismic waves propagate. For this you embed structures around or under buildings so that seismic waves are diverted around the building. You basically make the building invisible to earthquakes.

For example, a group at MIT’s Lincoln Lab uses arrays of boreholes, either filled or empty, to redirect seismic waves. They haven’t actually built a real-world example, but they have made measurements on downscaled physical models and they have done computer simulations.

This image is an illustration for how seismic barriers could work in theory. The green squiggly lines are the surface waves, the blue squiggly lines are P-waves, and the black arrows are the S-waves. All these waves get partly redirected and diffused.

At least in a computer simulation, the cloaking effect is quite impressive as you can see in this image from a 2017 paper. For this, they used data from a real earthquake, the Hector Mine earthquake that happened in Southern California in 1999. It had a magnitude of 7.1. The metamaterial barriers effectively reduced it to an earthquake of magnitude 4.5. And just a few months ago, a group from China proposed another metamaterial to dampen seismic waves. They want to use steel embedded with cylinders of foam.

Image A of this figure shows an aerial view of a seismic wave moving through unprotected soil – without protection, the wave moves without losing energy, exposing any infrastructure atop the soil to the full power of the seismic wave. In Image B, the metamaterial array effectively neutralizes the wave. Here you see the effectiveness of the metamaterial array from a side view – in Image A, the seismic wave travels across the surface uninterrupted, while in Image B, the metamaterial array dissipates the wave at Line C. The authors claim that their system can dampen seismic surface waves in the range of 0 point 1 to 20 Hertz with up to 85 percent efficiency.

And as promised, a tasty example to finish. A team of researchers from the Netherlands has created an edible metamaterial. It’s made of chocolate in multiple s-shaped pieces that make the chocolate more or less crunchy, depending on the direction in which you chew it. And if you think about it, YouTubers do this too when they cut breaths out of their videos and zoom back and forth every other sentence. These structural changes affect how you travel through a video. So we’re really doing meta-videos.

Metamaterials have opened a whole new dimension to material design, and as you can see, they are well on the way to application already. We will certainly come back to this topic in the future, so if you want to stay up to date, don’t forget to subscribe.

Saturday, September 10, 2022

The Multiverse: Science, Religion, or Pseudoscience?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Why do physicists believe there are universes besides our own? I get a lot of questions about the idea that we live in this “multiverse”. Is it science, religion, pseudoscience, or just wrong? That’s what we’ll talk about today.

The topic of this video is covered in more detail in my new book Existential Physics. 

First things first, what’s a multiverse? You may guess that’s a new form of poetry, and you wouldn’t be entirely wrong. A multiverse is a collection of universes, either infinitely many or a number so large no one’s even bothered giving it a name. It’s an idea that has sprung up in some esoteric corners of theoretical physics and has, not so surprisingly, caught the imagination of science fiction authors, script-writers, and also the public. And it is poetic somehow, isn’t it, all those universes out there.

There isn’t just one multiverse but several different ones, so multiple multiverses, if you wish. The multiverse shouldn’t be confused with the metaverse, which is what universes evolve into when they’ve been fed enough Zuckerberg candy.

How many different multiverses do we have? Well, Brian Greene has written a book in which he lists 9 different ones, but you know how scientists are, the moment the book came out they jumped up to complain about what wasn’t on his list. And I can totally understand that. I mean, everyone knows that a list needs ten items. Nine is just not right. So let me just briefly run through the three types of multiverse that you most often hear about.

1. Many Worlds

Probably the best-known and least controversial type of multiverse is the many worlds interpretation of quantum mechanics. If you remember, in quantum mechanics we can make predictions only for probabilities. We can say, for example, a particle goes left or right, each with a 50 percent chance. But then, when we measure the particle, we find it either left or right, and then we know where it is with 100 percent confidence. So, when we have measured the particle, what happened with the other possible outcome?

In the most common interpretation of quantum mechanics, often called the Copenhagen Interpretation, the moment you make a measurement you just update your probabilities because you got new information. The possibilities which you didn’t observe disappear because now you know they didn’t happen. This is called the measurement update, or sometimes the reduction or collapse of the wave-function.

In the many worlds interpretation, in contrast, one postulates that all possible outcomes of an experiment happen, each in a separate universe. It’s just that we live in only one of those universes and never see the other outcomes.

Of course then you have to explain why we don’t spread over all universes like the outcomes of experiments do. Mathematically, this works the same way as the sudden update of the wave-function. This means that as far as observations are concerned, many worlds is identical to standard quantum mechanics. The difference is what you believe it means.

If you believe in the many worlds interpretation, then every time a quantum object is measured, the universe splits into as many different universes as there were possible outcomes of the measurement. And this doesn’t just happen in laboratories. A measurement in quantum mechanics doesn’t require an apparatus. Anything that’s large enough can cause a “measurement”, that may be a Geiger counter, but also a banana, or, well, you. This means that measurements happen all the time and everywhere. They constantly create new universes, and more are being created as we speak. Which means more bananas! And more yous!

For example, the wave-function of a photon spreads in all directions, but then it hits your eye, and the universe splits. In some universes, the photon arrived in your eye. In others, it hit the wall next to you, and in some it went right through your head. And this could be happening to all photons. So, in some universes, an elephant is standing in front of you and you don’t see it. It’s unlikely, but, well, it’s possible, and according to the many worlds interpretation anything that’s possible is also real. I hope you make friends with the invisible elephant. I think that would be nice.

2. Eternal Inflation

We don’t know how our universe began and maybe we will never know. We just talked about this the other week. But according to a presently popular idea called “inflation”, our universe was created from a quantum fluctuation of a field called the “inflaton”. This field supposedly fills an infinitely large space and our universe was created from only a tiny patch of that, the patch where the fluctuation happened.

But the field keeps on fluctuating, so there are infinitely many other universes fluctuating into existence. This universe-creation goes on forever, which is why it’s called eternal inflation. Eternal inflation, by the way, lasts forever into the future, but it still requires a beginning in the past, so it doesn’t do away with the Big Bang issue.

In eternal inflation, the other universes may contain the same matter as ours, but in slightly different arrangements, so there may be copies of you in them. In some versions you became a professional ballet dancer. In some you won a Nobel Prize. In yet another one, you are a professional ballet dancer who won a Nobel Prize and dated Elon Musk. And they’re all as real as this one.

Where did this inflaton field go that allegedly created our universe? Well, physicists say it has fallen apart into the particles that we observe now, so it’s gone and that’s why we can’t measure it. Yeah, that is a little sketchy.

3. The String Theory Landscape

String theory is an approach to a unification of gravity with the other forces of nature. Or maybe I should say it was, because it’s rapidly declined in popularity in the past decade. Why? It just didn’t lead anywhere.

String theorists originally hoped that one day it’d be possible to use their theory to calculate the values of the constants of nature, such as the masses of elementary particles and the strength by which they interact and so on. This didn’t work, so they gave up and just postulated that any value is possible. And since they couldn’t explain why we only observe a specific set of values they declared that they all exist.

And so this gives you another version of the multiverse. This collection of universes with all possible values for the constants of nature is called the string theory landscape. It contains universes with different types of matter or that have other laws of nature. For example, in some of them gravity is much weaker than it is in our universe. In some, radioactive decay happens much faster. And some universes expand so quickly that stars can’t form. If you believe in the string theory landscape, this isn’t just theoretically possible, it all actually happens.

You can combine these multiverses in any way you wish. So you can get married to Elon Musk hopping around at half the strength of gravity, with elephants in the room which you coincidentally can’t see. If you believe in the multiverse, then you have to believe this is possible.

There are some other multiverses which I didn’t talk about, like Max Tegmark’s mathematical universe in which all mathematics supposedly exists, or the simulation hypothesis, according to which our universe is a computer simulation. Because if you can simulate our laws of nature, why not simulate some others too? I don’t want to go through all the different multiverses because they all have the same problem.

The issue with all those different multiverses is that they postulate the existence of something you can’t observe, which is those other universes. Not only can you not see them, you can’t interact with them in any way. They are entirely disconnected from ours. There is no possible observation that you could make to infer their presence, not even in principle.

For this reason, postulating that the other universes exist is unnecessary to explain what we do observe, and therefore something that a scientist shouldn’t do. Making an unnecessary assumption is logically equivalent to postulating the existence of an unobservable god, or a flying spaghetti monster, or an omniscient dwarf who lives in your wardrobe. Fine if you do it in private, not so fine if you publish papers about it.

But. This does not mean that other universes do not exist. It merely means that science doesn’t say anything about whether or not they exist. If you postulate that they do not exist, that’s also unnecessary to explain what we observe, and therefore equally unscientific.

So now what, is the multiverse unscientific or pseudoscience or religion? Well, depends on what you do with it.

If you assume that unobservable universes exist and write papers about them, then that’s pseudoscience. Because this is exactly what we mean by pseudoscience: it pretends to be science but isn’t. If you accept that science doesn’t say anything about the existence of those other universes one way or another, and you just decide to believe in them, then that’s religion. Either way, multiverses are not science. They’re like Tinker Bell, basically: they exist if you believe in them.

You might find this whole multiverse idea rather silly. And I wouldn’t blame you. But some physicists are quite serious about it. They believe these other universes exist because they show up in their mathematics. You see, they have mathematics, and some of that describes what we observe. And then they claim therefore everything else that their mathematics describes must also exist. They are confusing mathematics with reality.

There are some standard “objections” that physicists always try on me. You have probably heard some of them too, so here’s how you can deal with them.

Objection 1: Black Holes

The first point that multiverse fans always bring up is that we say that the inside of a black hole exists, even though we can’t observe it. But that’s just wrong: You can observe the inside of a black hole, you just can’t come back to tell us what you observed. Besides, we know that black holes evaporate, so they eventually reveal their inside.

Objection 2: Cosmic Horizon

The second objection that I hear is that we can only observe a patch of our own universe because light needs time to travel, and it has only gotten so far since the Big Bang. But certainly no one would say that therefore the universe stops existing outside of the part we can observe. No, of course not. No one says if you can’t observe it, it doesn’t exist. The point is: if you can’t observe it, science says nothing about whether it exists or not.

Objection 3: Observable Multiverses

The third standard objection is that some physicists have tried to come up with cases in which the presence of other universes would be observable. For example, there has been the idea that another universe could have collided with ours in the past, leaving a specific pattern in the cosmic microwave background. Or our universe could have been entangled with another one. So, the Nobel Prize winning ballet dancer isn’t married to this Elon Musk but has a quantum connection to an Elon Musk in another universe. Again this would leave a specific pattern in the CMB.

The answer to this objection is that people have looked for these patterns in the CMB and they are just not there. But to be fair, the testable multiverse models are a different problem than the one I named above. The big problem with multiverse ideas is that physicists mistake mathematics for reality. The problem with the testable multiverse ideas is that their proponents think that just because a hypothesis is testable it is also scientific. This is not what Popper meant. He said if it isn’t testable it isn’t science. Not “if it’s testable, then it’s science”.

Objection 4: It’s simple

The fourth and final objection is that the multiverse is good because it’s a simple theory. You see, multiverse fans argue that if you don’t make assumptions about what the values of the constants of nature are, but just say “they all exist,” then you have fewer assumptions in your theory. And a simpler theory is better, because Occam’s razor and all.

But look, if that argument was correct, then the best theory would be one with no assumptions at all. There’s just a little problem with that, which is that such a theory doesn’t explain anything. I mean, it literally isn’t a theory, it’s nothing. Just saying that it’s simple doesn’t make a scientific theory a good one. For a theory to be good, it still has to describe what we observe. Telling my hair to “please stay put” may be simple, but it doesn’t make for a good hair day.

And that’s exactly what happens in those multiverse theories, they’re too simple to be good for anything. If you don’t specify the values of the constants of nature, then you just can’t make predictions. To be fair, I would agree it’s simpler not to make predictions than to make them, but even in physics you can’t publish predictions you didn’t make. At least not yet. Which is why multiverse physicists always end up making assumptions for the values of those constants.

They don’t always do this directly, sometimes they instead postulate probability distributions from which they derive likely values of the constants. But that’s more difficult than just using the constants and certainly not simple.

Same issue with the many worlds interpretation. Those who work on it claim that their theory is simpler than standard quantum mechanics because it just doesn’t use the measurement update. But if you don’t update the wave-function upon measurement, then that just doesn’t describe what we observe. We don’t observe dead-and-alive cats, that was Schrödinger’s whole point.

Therefore, you have to add other assumptions to many worlds, about what a detector is and how the universes split and so on, which for all practical purposes amounts to the same as updating the wave-function. In most cases these prescriptions are actually more complicated than the measurement update. So multiverse theories are either simple but don’t make predictions, or they make predictions but are more complicated than the generally accepted theories.

Let me finish by saying I am not against the multiverse or poetry. I would like to apologize to all the poets watching this. It’s not like I think science is the only thing that matters. You may find the multiverse inspirational, or maybe comforting, or maybe just fun to talk about. And there’s nothing wrong with that – please enjoy your stories and articles and videos about the multiverse. But don’t mistake it for science.

Saturday, September 03, 2022

The Trouble with 5G

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]



Did you know that there are now more mobile devices in the world than people? Whether you knew this or not, you probably did know that mobile phones work with the next best thing to magic, which is physics. All that data flies through the air in the form of electromagnetic radiation. And since video resolution will soon be high enough for you to check whether I’ve plucked my eyebrows, wireless networks constantly have to be upgraded.

The fourth generation of wireless networks, four G for short, is now being extended to five G, and six G is in the planning stage. But five G interferes with the weather forecast, and six G brings more problems. What’s new about those wireless networks, and what’s the problem with them? That’s what we’ll talk about today.

The first four generations of wireless networks used frequency bands from four hundred Mega Hertz to roughly two Giga Hertz. But there are limits to the amount of information you can transfer over a channel with a limited bandwidth. In fact there’s maths for this; it’s called the Shannon-Hartley theorem.

If you want to transfer more information through a channel with a fixed noise-level, you have to increase either the bandwidth or the power. You can’t increase the four G bandwidth, and there are safety limits on the power. The five G generation tries to circumvent the problem by using new bands at higher frequencies, going up to about fifty Giga Hertz.
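For those who want to see the maths: the Shannon-Hartley theorem says that the maximum data rate \(C\) of a channel with bandwidth \(B\) and signal-to-noise ratio \(S/N\) is

\[ C = B \log_2\left(1 + \frac{S}{N}\right). \]

With purely illustrative numbers, not actual 5G specifications: a bandwidth of 100 Mega Hertz and a signal-to-noise ratio of 1000, that is 30 decibels, gives \(C \approx 10^8 \times \log_2(1001) \approx 1\) gigabit per second. Because the logarithm grows only slowly with the signal-to-noise ratio, widening the band is the more effective lever.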

Those frequencies correspond to wavelengths in the millimeter range, which is why they’re called millimeter waves. There’s a reason they haven’t been previously used for telecommunication, and it’s not because millimeter waves are also used as good-byes for in-laws. It’s because radiation at the previously used frequencies passes through obstacles largely undisturbed, unless maybe the obstacle is a mountain. But millimeter waves can get blocked by trees or buildings, which isn’t great if you like calling people who aren’t within line of sight. “Could you pass me the salt, please? Thank you so much.”
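In case you wonder where the millimeters come from: the conversion from frequency to wavelength is just \(\lambda = c/f\). At the upper end of the new bands, for example,

\[ \lambda = \frac{3 \times 10^8\ \text{m/s}}{50 \times 10^9\ \text{Hz}} = 6\ \text{millimeters}. \]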

So the idea of five G is to collect the signals from nearby mobile phones in what’s called a small cell of the network, and pass them on at low power to a bigger antenna that sends them long-distance at higher power. The five G network technology is currently being rolled out in most of the developed world. Cisco estimates that by next year 10 percent of mobile connections will use 5G.

Five G is controversial because it’s the first generation to use millimeter waves, and the health effects have not been well studied. I already talked about this in a previous video, but let me be clear that I have no reason to think that five G will have any adverse health effects. To the extent that research exists, it shows that millimeter waves will at high power warm up tissue, and that’s pretty much it.

However, the studies that have been done leave me wanting. Last year, one of the Nature journals published a review on 5G mobile networks and health. They looked at 107 experimental studies that investigated various effects on living tissue including genotoxicity, cell proliferation, gene expression, cell signaling, etc.

The brief summary is that none of those studies found anything of concern. However, this isn’t the interesting part of the paper. The interesting part is that the authors rated the quality of these 107 studies. Only two were rated well-designed, and only one got the maximum quality score. One. Out of 107.

The others all had significant shortcomings, anything from lack of blinding to small sample sizes to poor control of environmental parameters. In fact, the authors’ conclusion is not that five G is safe. Their conclusion is: “Given the low-quality methods of the majority of the experimental studies we infer that a systematic review of different bioeffects is not possible at present.”

Now, as I said, there’s no reason to think that five G is harmful. Indeed, there’s good reason to think it’s not, because millimeter waves have been used in medicine for a long time and for all we know they only enter the upper skin layers.

But I am a little surprised that there aren’t any good studies on the health effects of long-term radiation exposure in this frequency range. The 5G network has been in the planning since 2008. That’s 14 years. That’s longer than it takes NASA to fly to Pluto!

So scientists say there’s nothing to worry about. Well, they also said that smoking is good for you and alcohol doesn’t cross the placenta and that copies of you live in parallel universes. As a scientist myself, I can confirm that scientists say a lot when the day is long, and I would much rather see data than just take their word for it. The only good thing I have to tell you on the matter is that the World Health Organization is working on their own review of the health risks of five G, which is supposed to come out by December.

Ok, while we wait to hear what the WHO says about the idea of irradiating much of the world population with millimeter waves, let’s talk about a known side-effect of five G: It’s a headache for atmospheric scientists, that’s meteorologists but also climate scientists. And yes, that means 5G affects the weather forecast.

You see, among the most important data that goes into weather and climate models is the amount of water vapor in the atmosphere. This is measured from satellites. This movie shows the average amount of water vapor in a column of atmosphere in a given month measured by NASA's Aqua satellite. Accurate measurements of the atmospheric water content are essential for weather forecasts.

You measure the amount of water vapor by measuring electromagnetic radiation that is scattered by the water molecules in the atmosphere. Each molecule emits radiation in particular frequency ranges and that allows you to count how many of those molecules there are. It’s the same method that’s used to detect phosphine in the atmosphere of Venus which we talked about in more detail in this earlier video. The frequency that satellites use to look for water is – you guessed it! – 23.80 Giga Hertz (Table 1, first line).

The issue is now that this water vapor signal is uncomfortably close to one of the 5G bands, which covers the range from 24 point 25 to 27 point 5 Giga Hertz. You might say that’s still about four hundred fifty Mega Hertz away from the water vapor measurements, and that’s right. But the five G band doesn’t abruptly stop at a particular frequency, it’s more that it tapers off. The emission outside of the assigned band is called leakage. That leakage creates noise. And this noise is the problem.
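For the record, the gap in question is

\[ 24.25\ \text{GHz} - 23.80\ \text{GHz} = 0.45\ \text{GHz} = 450\ \text{MHz}, \]

which is not much of a buffer once the leakage is taken into account.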

You see, the weather forecast today sensitively depends on data quality. In recent decades, weather forecasting has improved a lot. In this figure you see how much more you can trust the weather forecast today than you could a few decades ago. In 1980, a three-day forecast in the Northern Hemisphere was only correct about 85 percent of the time.

Today it’s correct more than 98 percent of the time. And this isn’t just about deciding whether to bring an umbrella, it’s relevant to warn people of dangerous weather events. A 72-hour hurricane warning today is more accurate than a 24-hour warning was 40 years ago.

The reasons for this improvement have been better computers, better models, but also weather satellites that collect more and better data. And that brings us back to the water vapor signal and the 5G troubles.

The water vapor signal is weak and the strongest contribution usually comes from low altitudes. That’s right, typically the biggest fraction of water vapor in the atmosphere isn’t in the clouds but close to the ground. If you took all the water in the atmosphere and put it on the ground you’d get about 2.5 cm. The clouds alone make up merely a tenth of a millimeter.

Those are average values and the details depend on the weather situation, but in any case it means that the water vapor signal is very sensitive to noise near the ground. It’s like trying to hear a whisper in a noisy room. To make matters worse, most of the 5G noise will come from densely populated areas so we’ll get the least accurate forecast where people actually live.

Meteorologists are not happy. This is partly because they had to put away the crystal balls. But the bigger reason is that in 2019, the National Oceanic and Atmospheric Administration in the United States, NOAA for short, did an internal study on the impacts of 5G. In a federal hearing, Neil Jacobs, a former NOAA Administrator, said that the current 5G regulations “would degrade the forecast skill by up to 30 percent, so, if you look back in time to see when our forecast skill was roughly 30 percent less than it was today, it’s somewhere around 1980. This would result in the reduction of hurricane track forecast lead time by roughly 2 to 3 days.”

The Secretary-General of the World Meteorological Organization, Petteri Taalas, is also concerned. He said: “Potential effects of this could be felt across multiple impact areas including aviation, shipping, agricultural meteorology and warning of extreme events as well as our common ability to monitor climate change in the future.” His organization calls for strict limits on the 5G leakage.

But well, as they say, there are two sides to every story. On the other side is for example Brad Gillen, executive vice president of the CTIA, that’s a trade association which represents the wireless communications industry in the United States.

He wrote a blogpost for the CTIA website claiming that the effect of five G on the weather forecast is “an absurd claim with no science behind it.” He says the study done by NOAA used an obsolete sensor that is not in operation. Then he pulls the China card: “The Trump Administration has already made its call, and it is time we all get on the same page as China and our other rivals most certainly are today.”

That wasn’t the end of the story. The atmospheric scientist Jordan Gerth from the University of Wisconsin at Madison pointed out that the reason the NOAA study mentioned a sensor that isn’t being used on satellites today is that this particular design was cancelled. It was, however, replaced by a very similar one, so the argument is a red herring.

In response, a different CTIA guy wrote another blogpost claiming that “5G traffic will be hundreds of megahertz away from the band used in weather data collection”, so he completely ignores the leakage problem and hopes his readers don’t know any better. On the other hand, NOAA didn’t publish their study, and that didn’t win them any favors either.

However, in 2020, researchers from Rutgers University did their own study. They modeled the leakage of five G into the water vapor signal and evaluated its impact on a weather forecast by using old data. They did a mock 12 hour forecast, one without 5G and then two with different levels of leakage power.

As you can see in these figures, they found that the 5G leakage can affect the forecast by up to zero point nine millimeters in precipitation and one point three degrees Celsius in temperature at two meters above ground. And it’s not just the value that changes but also the location. That’s a significant difference which would indeed degrade weather forecast accuracy noticeably. Maybe not as dramatic as the NOAA guy claimed, but certainly of concern.

What has happened since? In July 2021, the American Government Accountability Office released a report in which they just said that the arguments about the impact of 5G on weather forecast were “highly contentious.” Despite the lack of consensus, the official US position became to adopt fairly weak rules on the power leakage. They were then adopted by the International Telecommunication Union which is based in Geneva, Switzerland and which writes the global rules.

But most countries in the European Union so far just haven’t auctioned off the troublesome frequency band. Maybe they’re waiting to see how things pan out in the USA, the guinea pig of countries.

And then there’s six G, the 6th generation of wireless networks. This is already being planned, and it’s supposed to use bands at even higher frequencies, above one hundred GigaHertz and up into the TeraHertz range. Six G is supposed to usher in the metaverse era with augmented and virtual-reality and ultrahigh-definition video so we can finally watch live streams of squirrel feeders from New Zealand on our contact lenses.

According to the tech site LiveWire “6G is just the natural progression towards faster and better wireless connectivity…Ultimately, whether it’s with 6G, 7G, or another “G”, we’ll have such incredibly fast speeds that no progress bars or wait times will be required for any normal amount of data, at least at today’s standards. Everything will just be available...instantly.” And who would not like that?

But of course the 6G range, too, is being used by scientists for measurements that could be compromised. For example, NASA measures ozone around 236 GigaHertz, and carbon monoxide at about 230.5 GigaHertz. So we can pretty much expect to see the entire 5G discussion repeat for 6G.

How can the situation be solved? For 5G, the World Meteorological Organization is trying to negotiate limits with the regulating agencies in different countries. They demand that cell towers operating close to weather satellite frequencies should be limited to transmit at minus 55 dBW (Decibel Watt) for out-of-band emission, so that’s the leakage.

The European Commission has agreed on –42 decibel watts for 5G base stations. The FCC in the US set a limit at –20 decibel watts. This is a logarithmic scale, so the FCC limit is more than three orders of magnitude above the limit the meteorologists ask for.
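To unpack the decibel arithmetic: a power level in decibel watts converts to watts via \(P = 10^{P_\text{dBW}/10}\) watts, so the three limits correspond to roughly

\[ -55\ \text{dBW} \approx 3.2\ \mu\text{W}, \qquad -42\ \text{dBW} \approx 63\ \mu\text{W}, \qquad -20\ \text{dBW} = 10\ \text{mW}. \]

The FCC limit is 35 decibels above the meteorologists’ request, and since every 10 decibels is one order of magnitude, that’s a factor of \(10^{3.5} \approx 3000\).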

What do we learn from this? When a new technology is developed, scientists usually get there first. And when everyone else catches up, they’ll interfere with the scientists, often metaphorically but sometimes literally.

This isn’t a new story of course. You only have to worry about noise from railways if you have railways and there are actually trains running on them. But a high-tech society also relies on the accuracy of data, so this is a difficult trade-off. There are no easy ways to decide what to do, but I think everyone would be better off if the worries of scientists were taken more seriously in the design stage and not grudgingly acknowledged halfway through a global roll-out.