Saturday, July 31, 2021

Are we made of math? Is math real?

[This is a transcript of the video embedded below.]


There’s a lot of mathematics in physics, as you have undoubtedly noticed. But what’s the difference between the math that we use to describe nature and nature itself? Is there any difference? Or could it be that they’re just the same thing, that everything *is* math? That’s what we’ll talk about today.

I noticed in the comments to my earlier video about complex numbers that many people said oh, numbers are not real. But of course numbers are real.

Here’s why. You probably think I am “real”. Why? Because the hypothesis that I am a human being standing in front of a green screen trying to remember that the “h” in “human” isn’t silent explains your observations. And it explains your observations better than any other hypothesis, for example, that I’m computer generated, in which case I’d probably be better looking, or that I’m a hallucination, in which case your subconscious speaks German, and that somehow doesn’t make sense, does it?

We use the same notion of “reality” in physics, that something is real because it’s a good explanation for our observations. I am not trying to tell you that this is The Right Way to define reality, it’s just, as far as I can tell, how we use the word. We can’t actually see elementary particles, like the Higgs-boson, with our own eyes. We say they are real because certain mathematical structures that we have come up with describe our observations. Same thing with gravitational waves, or black holes, or particle spin.

And numbers are just like that. Of course we don’t see numbers as objects walking around, but as attributes of objects, like the spin that is a property of certain particles, not a thing in and of itself. If you see three apples, three describes what you see, therefore it’s real. Again, if that is not a notion of reality you want to use, that’s totally okay, but then I challenge you to come up with a different notion that is consistent and agrees with how most people actually use the word.

Interestingly enough, not all numbers are real. The example I just gave was for integers. But for numbers with infinitely many digits after the decimal point, we don’t actually need all those digits to describe observations, because we cannot measure anything with infinite accuracy. In reality we only ever need a finite number of digits. Now, all these numbers with infinitely many digits are called the real numbers. Which means, odd as it may sound, we don’t know whether the real numbers are, erm, real.

But of course physics is more difficult than just numbers. For all we currently know, everything in the universe is made of 25 particles, held together by four fundamental forces: gravity, the electromagnetic force, and the strong and weak nuclear force. Those particles and their forces can be mathematically described by Einstein’s Theory of General Relativity and Quantum Field Theory, theories which have been remarkably successful in explaining what we observe.

As far as the science is concerned, I’d say that’s it. But people often ask me things like “what is space-time?” “what is a particle?” And I don’t know what to do with questions like this.

Space-time is a mathematical structure that we use in our theories. This mathematical structure is defined by its properties. Space-time is a differentiable manifold with Lorentzian signature, it has a distance measure, it has curvature, and so on. It’s a math thing. We call it “real” because it correctly describes our observations.
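In case you’d like to see what “distance measure with Lorentzian signature” looks like in formulas, here is the line element for the simplest case, flat space-time:

$$ \mathrm{d}s^2 \;=\; -\,c^2\,\mathrm{d}t^2 \;+\; \mathrm{d}x^2 \;+\; \mathrm{d}y^2 \;+\; \mathrm{d}z^2 $$

The relative minus sign between the time direction and the space directions is the “Lorentzian signature”; in a curved space-time, the coefficients in front of the coordinate differentials change from place to place.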

It’s a similar story for the particles. A particle is a vector in a Hilbert space that transforms under certain irreducible representations of the Poincare group. That’s the best answer we have to the question what a particle is. Again we call those particles “real” because they correctly describe what we observe.
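For those who want it a little more concrete: the irreducible representations of the Poincare group are labeled by the eigenvalues of two invariants, which physically correspond to the mass and the spin of the particle. This is the standard Wigner classification, schematically

$$ P^\mu P_\mu = m^2 c^2\,, \qquad W^\mu W_\mu = -\,m^2 c^2\, s(s+1)\,\hbar^2\,, $$

where P is the four-momentum, W is the Pauli-Lubanski vector, m the mass, and s the spin. So when a physicist says “electron,” the math behind it is “the representation with this particular mass and spin one half.”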

So when physicists say that space-time is real or the Higgs-boson is real, they mean that a certain mathematical structure correctly describes observations. But many people seem to find this unsatisfactory. Now that may partly be because they’re looking for a simple answer and there just isn’t one. But I think there’s another reason, it’s that they intuitively think there must be something more to space-time and matter, something that distinguishes the math from the physics. Something that makes the math real or, as Stephen Hawking put it, “breathes fire into the equations”.

But those mathematical structures in our theories already describe all our observations. This means, just going by the evidence, you don’t need anything more. It’s therefore possible that reality actually is math, that there is no distinction between them. This idea is not in conflict with any observation. The origin of this idea goes all the way back to Plato, which is why it’s often called Platonism, though Plato thought that the ideal mathematical forms are somehow beyond human recognition. The idea has more recently been given a modern formulation by Max Tegmark, who called it the Mathematical Universe Hypothesis.

Tegmark’s hypothesis is actually more, shall we say, grandiose. He doesn’t just claim that reality is math but that all math is real. Not just the math that we use in the theories that describe our observations, but all of it. The exponential function, Mandelbrot sets, the number 18, they’re all as real as you and I. If you believe Tegmark.

But should you believe Tegmark? Well, as we have seen earlier, the justification we have for calling some mathematical structures real is that they describe what we observe. This means we have no rationale for talking about the reality of mathematics that does not describe what we observe, therefore the mathematical universe hypothesis isn’t scientific. This is generally the case for all types of multiverse. The physicists who believe in this argue that unobservable universes are real because they are in their math. But just because you have math for something doesn’t mean it’s real. You can just assume it’s real, but this is unnecessary to describe what we observe and therefore unscientific.

Let me be clear that this doesn’t mean it’s wrong. It isn’t wrong to say the exponential function exists, or there are infinitely many other universes that we can’t see. It’s just that this is a belief-based statement, not supported by evidence. What’s wrong is to claim that science says so.

Then what about the question whether we are made of math? Well, you can’t falsify this hypothesis. Suppose you had an observation that you can’t describe by math. It could always be that you just haven’t found the right math. So the idea that we’re made of math is also not wrong but unscientific. You can believe it if you want. There’s no evidence for or against it.

I want to finish by saying I am not doing these videos to convince you to share my opinion. I just want to introduce you to some topics that I think are thought-stimulating, and give you a starting point, in the hope it will give you something interesting to think about.

Saturday, July 24, 2021

Can Physics Be Too Speculative?



Imagination and creativity are the heart of science. But look at the headlines in the popular science media and you can’t shake off the feeling that some physicists have gotten ahead of themselves. Multiverses, dark matter, string theory, fifth forces, and that asteroid which was supposedly alien technology. These ideas make headlines, but are then either never heard of again – like hundreds of hypothetical particles that were never detected, and tests of string theory that were impossible in the first place – or later turn out to be wrong – all reports of fifth forces disappeared, and that asteroid was probably a big chunk of nitrogen. Have physicists gone too far in their speculations?

The question of how much speculation is healthy differs from the question of where to draw the line between science and pseudoscience. That’s because physicists usually justify their speculations as work in progress, so they don’t have to live up to the standard we expect of fully-fledged scientific theories. It’s then not as easy as pointing out that string theory is for all practical purposes untestable, because its supporters will argue that maybe one day they’ll figure out how to test it. The same argument can be made about the hypothetical particles that make up dark matter or those fifth forces. Maybe one day they’ll find a way to test them.

The question we are facing, thus, is similar to the one that the philosopher Imre Lakatos posed: Which research programs make progress, and which have become degenerative? When speculation stimulates progress it benefits science, but when speculation leads to no insights for the description of nature, it eats up time and resources, and gets in the way of progress. Which research program is on which side must be assessed on a case-by-case basis.

Dark matter is an example of a research program that used to be progressive but has become degenerative. In its original form, dark matter was a simple parameterization that fit a lot of observations – a paradigmatic example of a good scientific hypothesis. However, as David Merritt elucidates in his recent book “A Philosophical Approach to MOND”, dark matter has trouble with more recent observations, and physicists in the area have taken to accommodating data, rather than making successful predictions.

Moreover, the abundance of specific particle models for dark matter that physicists have put forward is unnecessary to explain any existing observations. These models produce publications but they do not further progress. This isn’t so surprising, because guessing a specific particle from rather unspecific observations of its gravitational pull has an infinitesimal chance of working.

Theories for the early universe or fifth forces suffer from a similar problem. They do not explain any existing observations. Instead, they make the existing – and very well working – theories more complicated without solving any problem.

String theory is a different case. That’s because string theory is supposed to remove an inconsistency in the foundations of physics: the missing quantization of gravity. If successful, that would be progress in and of itself, even if it doesn’t result in testable predictions. But string theorists have pretty much given up on their original goal and never satisfactorily showed that the theory solves the problem to begin with.

Much of what goes by the name “string theory” today has nothing to do with the original idea of unifying all the forces. Instead, string theorists apply certain limits of their theory in an attempt to describe condensed matter systems. Now, in my opinion, string theorists vastly overstate the success of this method. But the research program is progressing and working towards empirical predictions.

Multiverse research concerns itself with postulating the existence of entities that are unobservable in principle. This isn’t scientific and should have no place in physics. The origin of the problem seems to be that many physicists are Platonists – they believe that their math is real, rather than just a description of reality. But Platonism is a philosophy and shouldn’t be mistaken for science.

What about Avi Loeb’s claim that the interstellar object `Oumuamua was alien technology? Loeb has justified his speculation by pointing towards scientists who ponder multiverses and extra dimensions. He seems to think his argument is similar. But Loeb’s argument isn’t degenerative science. It's just bad science. He jumped to conclusions from incomplete data.

It isn’t hard to guess that many physicists will object to my assessments. That is fine – my intention here is not so much to argue that this particular assessment is correct, but that this assessment must be done regularly, in collaboration between physicists and philosophers.

Yes, imagination and creativity are the heart of science. They are also the heart of science fiction. And we shouldn’t conflate science with fiction.

Saturday, July 17, 2021

What’s the Fifth Force?

[This is a transcript of the video embedded below.]


Physicists may have found a fifth force. Uh, that sounds exciting. And since it sounds so exciting, you see it in headlines frequently, so frequently you probably wonder how many of these fifth forces there are. And what’s a fifth force anyway? Could it really exist? If it exists, is it good for anything? That’s what we’ll talk about today.

Before we can talk about the fifth force, we have to briefly talk about the first four forces. To the best of our current knowledge, all matter in the universe is made of 25 particles. Physicists collect them in the “standard model”, which is kind of like the periodic table for subatomic particles. These 25 particles are held together by four forces. That’s 1) gravity, apples falling down and all that, 2) the electromagnetic force, that’s a combination of the electric and magnetic force which really belong together, 3) the strong nuclear force that holds together atomic nuclei against the electromagnetic force, and 4) the weak nuclear force that’s responsible for nuclear decay.

All other forces that we know, for example the van der Waals force that keeps atoms together in molecules, friction forces, muscle forces – these are all emergent forces. That they are emergent means that they derive from those four fundamental forces. And that those forces are fundamental means they are not emergent – they cannot be derived from anything else. Or at least we don’t presently know anything simpler that they could be derived from.

Now, if it seems to you that gravity is in the wrong company on this list, you have a point: Einstein taught us that gravity is not a force. Yes, that guy again. According to Einstein, gravity is the effect of a curved space-time. Looks like a force, but isn’t one. Indeed, that’s the reason why physicists, if they want to be very precise, will not speak of four fundamental *forces*, but of four fundamental interactions. But in reality, I hear them talk about the gravitational force all the time, so I would say if you want to call gravity a force, please go ahead, we all know what you mean.

As you can tell already from that, what physicists call a force doesn’t have a very precise definition. For example, the three forces besides gravity – the electromagnetic and the strong and weak nuclear force – are similar in that we know they are mediated by exchange particles. So that means if there is a force between two particles, like, say, a positively charged proton and a negatively charged electron, then you can understand that force as the exchange of another particle between them. For the case of electromagnetism, that exchange particle is the photon, the quantum of light. For the strong and weak nuclear force, we also have exchange particles. For the strong nuclear force, those are called “gluons” because they “glue” quarks together, and for the weak nuclear force, these are called the Z and W bosons.

Gravity, again, is the odd one out. We believe it has an exchange particle – that particle is called the “graviton” – but we don’t know whether that particle actually exists; it’s never been measured. And on the other hand, we have an exchange particle to which we don’t associate a force, and that’s the Higgs-boson. The Higgs-boson is the particle that gives masses to the other particles. It does that by interacting with those particles, and it acts pretty much like a force carrier. Indeed, some physicists *do* call the Higgs-exchange a force. But most of them don’t.

The reason is that the exchange particles of electromagnetism, the strong and weak nuclear force, and even gravity, hypothetically, all come out of symmetry requirements. The Higgs-boson doesn’t. That may not be a particularly good reason not to call it a force carrier, but that’s the common terminology. Four fundamental forces, among them gravity, which isn’t a force, but not the Higgs-exchange, which is a force. Yes, it’s confusing.

So what’s with that fifth force? The fifth force is a hypothetical, new, fundamental force for which we don’t yet have evidence. If we found it, it would be the biggest physics news in 100 years. That’s why it frequently makes headlines. There isn’t one particular fifth force, but there’s a large number of “fifth” forces that physicists have invented and that they’re now looking for.

We know that if a fifth force exists, it’s difficult to observe, because otherwise we’d already have noticed it. This means this force either only becomes noticeable at very long distances – so you’d see it in cosmology or astrophysics – or it becomes noticeable at very short distances, and it’s hidden somewhere in the realm of particle physics.

For example, the anomaly in the muon g-2 could be a sign of a new force carrier, so it could be a fifth force. Or maybe not. There is also a supposed anomaly in some nuclear transitions, which could be mediated by a new particle, called X17, which would carry a fifth force. Or maybe not. Neither of these anomalies is very compelling evidence; the most likely explanation in both cases is some difficult nuclear physics.

The most plausible case for a fifth force, I think, comes from the observations we usually attribute to dark matter. Astrophysicists introduce dark matter because they do see a force that’s acting on normal matter. The currently most widely accepted hypothesis for this observation is that this force is just gravity, so an old force, if you wish, but that instead there is some new type of matter. That doesn’t fit very well with all observations, so it could be instead that it’s actually not just gravity, but indeed a new force, and that would be a fifth force. Dark energy, too, is sometimes attributed to a fifth force. But this isn’t really necessary to explain observations, at least not at the moment.

If we found evidence for such a new force, could we do anything with it? Almost certainly not, at least not in the foreseeable future. The reason is, if such forces exist, their effects must be very, very small, otherwise we’d have noticed them earlier. So, you most definitely can’t use it for Yogic flying, or to pin your enemies to the wall. However, who knows, if we do find a new force, maybe one day we’ll figure out something to do with it. It’s definitely worth looking for.

So, if you read headlines about a fifth force, that just means there’s some anomalous observation which can be explained by a new fundamental interaction, most often a new particle. It’s a catchy phrase, but really quite vague and not very informative.

Saturday, July 10, 2021

How Dangerous are Solar Storms?

[This is a transcript of the video embedded below.]


On May twenty-third nineteen sixty-seven, the US Air Force almost started a war. It was during the most intense part of the Cold War. On that day, the American Missile Warning System, designed to detect threats coming from the Soviet Union, suddenly stopped working. Radar stations at all sites in the Northern Hemisphere seemed to be jammed. Officials of the U.S. Air Force thought that the Soviet Union had attacked their radar and began to prepare for war. Then they realized it wasn’t the Soviets. It was a solar storm.

What are solar storms, how dangerous are they, and what can we do about them? That’s what we will talk about today.

First things first, what is a solar storm? The sun is so hot that in it, electrons are not bound to atomic nuclei, but can move around freely. Physicists call this state a “plasma”. If electric charges move around in the plasma, that builds up magnetic fields. And the magnetic fields move more electric charges around, which increases the magnetic fields and so on. That way, the sun can build up enormous magnetic fields, powered by nuclear fusion.

Sometimes these magnetic fields form arcs above the surface of the sun, often in an area of sunspots. These arcs can rip and blast off and then two things can happen: First, a lot of radiation is released suddenly, that’s visible light but also ultraviolet light and up into the X-ray range. This is called a solar flare. The radiation is usually accompanied by some fast moving particles, called solar particles. And second, in some cases the flare comes with a shock wave that blasts some of the plasma into space. This is called a “coronal mass ejection,” and it can be billions of tons of hot plasma. The solar flare together with the coronal mass ejection is called a “solar storm”.

A solar storm can last from minutes to hours and can release more energy than humanity has used in its entire history. The activity of the sun has an 11-year cycle, and the worst solar storms often come in the years after the solar maximum. We’re currently just starting a new cycle and the next maximum of solar activity will be around twenty twenty-five. The statistically most dangerous years of the solar cycle will come after that.

Well, actually. The solar cycle is really 22 years, because after 11 years the magnetic field flips, and the cycle isn’t complete until it flips back. It’s just that as far as solar activity is concerned, 11 years is the relevant cycle.

How do these solar storms affect us? Space is big and most of these solar storms don’t go in our direction. If they do, the solar flare moves at the speed of light and takes about eight minutes to reach us. The radiation exposure that comes with it is a health risk for astronauts and pilots, and it can affect satellites in orbit. For example, during a solar storm in 2003 the Japanese weather satellite Midori 2 was permanently damaged, and many other satellites automatically shut down because their navigation systems were not working. This solar storm became known as the 2003 Halloween storm because it happened in October.

Down here on Earth we are mostly shielded from the flare. But not so with the coronal mass ejection. It comes after the flare with a delay of twelve hours to three days, depending on the initial velocity, and it carries its own magnetic field. When it reaches Earth, that magnetic field connects with that of Earth. One effect of this is that the aurora becomes stronger, can be seen closer to the equator, and can even change color to become red. During the Halloween storm, it could be seen as far south as the Mediterranean and also in Texas and Florida.

The aurora is pretty and mostly harmless, but the magnetic field causes a big problem. Because it changes so rapidly, it induces electric currents. The crust of Earth is not very conductive, but our electric grids are, by design, very conductive. This means that the magnetic field from the solar storm drives a lot of current through the electric grid, which can damage power plants and transformers, and cause power outages.

How big can solar storms get? The strength of solar storms is measured by the energy output of the solar flare. The smallest ones are called A-class and are near background levels, followed by B, C, M and X-class. This is a logarithmic scale, so each letter represents a 10-fold increase in energy output. There are no more letters after X; instead, one adds numbers after the X. X10, for example, is another 10-fold increase after X.
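In practice, this classification goes by the peak X-ray flux that the GOES satellites measure, in watts per square meter. Here’s a little script to show how the scale works – the thresholds are the standard GOES ones, and the number after the letter is just the flux in units of the class threshold:

```python
def flare_class(flux):
    """Convert a peak soft X-ray flux (in W/m^2) into the usual
    letter classification: A, B, C, M, X, each a 10-fold step."""
    classes = [("X", 1e-4), ("M", 1e-5), ("C", 1e-6), ("B", 1e-7), ("A", 1e-8)]
    for letter, threshold in classes:
        if flux >= threshold:
            # X17 means 17 times 10^-4 W/m^2, and so on.
            return f"{letter}{flux / threshold:.0f}"
    return "below A-class"

print(flare_class(1.7e-3))  # X17 -- where the sensors cut out in 2003
print(flare_class(4.5e-3))  # X45 -- roughly the Carrington event estimate
```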

What’s the biggest solar storm on record? It might have been the one from September 2nd, 1859. The solar flare on that day was observed coincidentally by the English astronomer Richard Carrington, which is why it’s known today as the “Carrington event”.

The coronal mass ejection after the flare travelled directly toward Earth. At the time there weren’t many power grids that could have been damaged, because electric lights wouldn’t become common in cities for another two decades or so. But they did have a telegraph system.

A telegrapher in Philadelphia received a severe electric shock when he was testing his equipment, and most of the devices stopped working because they couldn’t cope with the current. But some telegraphers figured out that they could continue using their device if they unplugged it, using just the current induced by the solar storm. The following exchange took place during the Carrington event between Portland and Boston:
    "Please cut off your battery entirely from the line for fifteen minutes."
    "Will do so. It is now disconnected."
    "Mine is disconnected, and we are working with the auroral current. How do you receive my writing?"
    "Better than with our batteries on. – Current comes and goes gradually."
    "My current is very strong at times, and we can work better without the batteries, as the Aurora seems to neutralize and augment our batteries alternately, making current too strong at times for our relay magnets. Suppose we work without batteries while we are affected by this trouble."


How strong was the Carrington event? We don’t really know. At the time two measurement stations in England were keeping track of the magnetic field on Earth. But those devices worked by pushing an inked pen around on paper, and during the peak of the storm, that pen just ran off the page. It’s been estimated by Karen Harvey to have had a total energy of up to 10³² erg, which puts it roughly in the category X45. You can read more about the Carrington event in Stuart Clark’s book “The Sun Kings”.

In twenty thirteen the insurance market Lloyd’s estimated that if a solar storm similar to the Carrington event took place today it would cause damage to the electric grid between zero point six and two point six trillion US dollars – for the United States alone. That’s about twenty times the damage of hurricane Katrina. Power outages could last from a couple of weeks to up to two years because so many transformers would have to be replaced.

The most powerful flare measured with modern methods was the 2003 Halloween storm. Again it was so powerful that it overloaded the detectors. The sensors cut out at X17. It was later estimated to have been X35, plus or minus five, so somewhat below the Carrington event.

How bad can solar storms get? The magnetic field of our planet shields us from the particles that constantly come from the sun, the so-called solar wind. It also prevents those solar particles from ripping the atmosphere off our planet. Mars, for example, once had an atmosphere, but since Mars has a weak magnetic field, its atmosphere was stripped away by the solar wind. A solar storm that overwhelms the protection we have from our magnetic field could leave us exposed to the plasma raining down and could, in the worst case, strip away some or all of our atmosphere. Can such strong solar storms happen?

Well, I hope you are sitting down, because for all I can tell the answer is not obviously “no”. The more energy a solar storm has, the less likely it is. But occasionally astrophysicists observe stars very similar to our sun that have flares so large they might put life in the habitable zone at risk. They don’t presently know whether such an event is possible for our sun, or how likely it is.

I didn’t know that when I began working on this video. Sorry for the bad news.

What can we do about it? Satellites in orbit can be shielded to some extent. Airplanes can be redirected to lower latitudes or altitudes to limit the radiation exposure of pilots and passengers. We can interrupt parts of the electric grid to prevent currents from moving around too easily. But besides that, the best we can do is prepare for what’s to come, maybe stock up on toilet paper. How well these preparations work depends crucially on how far ahead we know a solar storm is headed in our direction. That’s why scientists are currently working on solar weather forecasts that might warn us even before the flare happens.

And about those mega-storms. We don’t currently have the technology to do anything about them. So I think the best we can do is to invest in science, research and development, so that one day we’ll be able to protect ourselves.

Thanks for watching, don’t forget to subscribe, see you next week.

Saturday, July 03, 2021

Can we make a new universe?

[This is a transcript of the video embedded below.]


Some people dream of making babies, some dream of making baby universes. Seriously? Yes, seriously. How is that supposed to work? What does it take to make a new universe? And if we make one, what do we do with it? That’s what we’ll talk about today.

At first sight, it seems impossible to make a new universe, because where would you get all that stuff from, if not from the old universe? But it turns out you don’t need a lot of stuff to make a new universe. And we know that from Albert Einstein. Yes, that guy again.

First, Albert Einstein famously taught us that mass is really just a type of energy, E equals m c squared and all that. But more importantly, Einstein also taught us that space is dynamic. It can bend and curve, and it can expand. It changes with time. And if space changes with time, then energy is not conserved. I explained this in more detail in an earlier video, but here’s a brief summary.

The simplest example of energy non-conservation is the cosmological constant. The cosmological constant is the reason that the expansion of our universe gets faster. It has units of an energy-density – so that’s energy per volume – and as the name says, it’s constant. But if the energy per volume is constant, and the volume increases, then the total energy increases with the volume. This means in an expanding universe, you can get a lot of energy from nothing – if you just manage to expand space rapidly enough. I know that this sounds completely crazy, but this is really how it works in Einstein’s theory of General Relativity. Energy is just not conserved.
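If you want to see this in symbols: with rho_Lambda the constant energy density of the cosmological constant and a(t) the scale factor that measures the expansion of space, the total energy in an expanding volume grows like

$$ E(t) \;=\; \rho_\Lambda \, V(t)\,, \qquad V(t) \propto a(t)^3 \quad\Rightarrow\quad E(t) \propto a(t)^3\,. $$

The energy density stays constant, the volume grows, so the total energy grows with it.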

So, okay, we don’t need a lot of matter, but how do we make a baby universe that expands? Well, you try to generate conditions similar to those that created our own universe.

There’s a little problem with that, which is that no one really knows how our universe was created in the first place. There are many different theories for it, but none of them has observational support. However, one of those theories has become very popular among astrophysicists. It’s called “eternal inflation” – and while we don’t know that it’s right, it could be right.

In eternal inflation, our universe is created from the decay of a false vacuum. To understand what a false vacuum is, let’s first talk about what a true vacuum is. A true vacuum is in a state of minimal energy. You can’t get energy out of it, it’s stable. It just sits there. Because it already has minimal energy, it can’t do anything and you can’t do anything with it.

A false vacuum is one that looks like a true vacuum temporarily, but eventually it decays into a true vacuum because it has energy left to spare, and that extra energy goes into something else. For example, if you throw jelly at a wall, it’ll stick there for a moment, but then fall down. That moment when it sticks to the wall is kind of like a false vacuum state. It’s unstable and it will eventually decay into the true vacuum, which is when the jelly drops to the ground and the extra energy splatters it all over the place.
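For the mathematically minded: a standard toy model – not tied to any particular theory of inflation – is a scalar field with a potential that has two minima of slightly different depth, for example

$$ V(\phi) \;=\; \lambda\,\bigl(\phi^2 - v^2\bigr)^2 \;+\; \epsilon\,\phi\,. $$

For a small tilt epsilon, the field has one minimum near phi = +v and a slightly deeper one near phi = -v. Sitting in the higher minimum is the false vacuum – the jelly on the wall – and the decay to the lower minimum is the transition to the true vacuum.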

What does this have to do with the creation of our universe? Well, suppose you have a lot of false vacuum. In that false vacuum, there’s a patch that decays into a true vacuum. The true vacuum has a lower energy, but it can have higher pressure. If it has higher pressure, it’ll expand. That’s how our universe could have started. And in principle you can recreate this situation in the laboratory. You “just” have to create this false vacuum state. Then part of it will decay into a true vacuum. And if the conditions are right, that true vacuum will expand rapidly. While it expands it creates its own space. It does not grow into our universe, it makes a bubble.

This universe creation only works if you have enough energy, or mass, in the original blob of false vacuum. How much do you need? Depends on some parameters of the model which physicists don’t know for sure, but in the most optimistic case it’s about 10 kilograms. That’s what it takes to make a new universe. 10 kilograms.

But how do you create 10 kilograms of false vacuum? No one has any idea. Also, 10 kilograms might not sound like much if you’re a rocket scientist, but for particle physicists that’s a terrible lot. The mass equivalent that even the presently biggest particle collider, the Large Hadron Collider, works with is 10 to the minus twenty grams. Now, if you collide big atomic nuclei instead of protons, you can bring this up by some orders of magnitude, but 10 kilograms is not something that high energy physicists will work with in my lifetime. No one will create a new universe any time soon.
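You can check that number with E equals m c squared yourself. Here’s the back-of-the-envelope version, using the LHC’s collision energy of roughly 13 TeV:

```python
# Mass equivalent of one LHC proton-proton collision, m = E / c^2.
collision_energy_tev = 13                # roughly the LHC's collision energy
joule_per_ev = 1.602176634e-19           # exact, by definition of the electronvolt
c = 299_792_458.0                        # speed of light in m/s

energy_joule = collision_energy_tev * 1e12 * joule_per_ev
mass_kg = energy_joule / c**2
print(f"{mass_kg * 1e3:.1e} grams")      # about 2.3e-20 grams
```

Compare that to the 10 kilograms you’d need, and you see the gap is more than 20 orders of magnitude.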

But, well, in principle, theoretically, we could do it. If you believe this story with the false vacuum and so on. Let us just suppose for a moment that this is correct, what would we do with these universes? Would we potty train them and send them to cosmic kindergarten?

Well, no, because sadly, these little baby-universes don’t stay connected to their mother-universe for long. Their connection is like a wormhole throat; it becomes unstable and pinches off within a fraction of a second. So you’d give birth to these universes and kick-start their growth, but then, blip, they’re gone. From the outside they would look pretty much like small black holes.

By the way, this could be happening all the time without particle physicists doing anything. Because we don’t really understand the quantum properties of space. So, some people think that space undergoes a lot of quantum fluctuations. These fluctuations happen at distances so short we can’t see them, but it could be that sometimes they create one of these baby universes.

If you want to know more about this topic, Zeeya Merali has written a very nice book about baby universes called “A Big Bang in a Little Room”.

Saturday, June 26, 2021

How artificial intelligence reads minds

[This is a transcript of the video embedded below.]


Human communication works by turning thought into motion. Whether that’s body language, or speech, or writing – we use muscles in one way or another to get out the information that we want to share. But sometimes it would be really handy if we could communicate directly from our brain, to one another or with a computer. How far is this technology along? How does it work? And what’s next? That’s what we will talk about today.

Scientists currently have two ways to figure out what’s going on inside your brain. One way is to use functional Magnetic Resonance Imaging, the other is using electrodes.

Functional Magnetic Resonance Imaging or fMRI for short measures the flow of blood to different regions of the brain. The blood-flow is correlated with neural activity, so an fMRI tells you what parts of the brain are activated in a certain task. I previously made a video about Magnetic Resonance Imaging, so if you want to know how the physics works, check this out.

The problem with fMRIs is that they require people to lie in a big machine. It isn’t only that using this machine is expensive, it also takes some time to take an fMRI, which means that the temporal resolution isn’t great, typically a few seconds. So fMRI can’t tell you much about fast and temporary processes.

The other way to measure brain activity is electroencephalography, EEG for short, which measures tiny currents in electrodes that are placed on the skin on the head. The advantage of this method is that the temporal resolution is much better. The big disadvantage though is that it gives you only a rough idea about the region where the signal is coming from. A much better way is to put the electrodes directly on the surface of the brain, but this requires surgery.

Elon Musk has the idea that one day people might be willing to have electrodes implanted into their brain and he has put some money behind this with his “neuralink” project. But it’s difficult to get a research project approved if it requires drilling holes into other people’s heads, so most studies currently use fMRI – or people who already have holes in their head for one reason or another.

Before we talk about what recent studies have found, I want to briefly thank our tier four supporters on Patreon. Your support is of great help to keep this channel going. And you too can be part of the story, go check out our page on Patreon, the link is in the info below.

Let us then have a look at what scientists have found.

Researchers from Carnegie Mellon and other American universities have done a very interesting series of experiments using fMRI. In the first one, they put eleven trial participants in the MRI machine and showed them a word on a screen. The participants were asked to think of the concept related to a noun, for example an apple, a cat, a refrigerator, and so on. Then they gave the brain scans of 10 of these people to an artificial-intelligence software, together with the words that the people were prompted with. The AI looked for patterns in the brain activity that correlated with the words, and then guessed what the 11th person was thinking of from the brain scan alone. The program guessed correctly about three quarters of the time.
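To give you an idea of what “the AI looked for patterns” means in practice, here is a minimal sketch of such a leave-one-subject-out analysis. The data here is random noise and the classifier is a generic one, so this only illustrates the logic of the method, not what the researchers actually ran:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Pretend data: 11 subjects, 60 words each, every brain scan
# reduced to a vector of 200 features (e.g. voxel activations).
n_subjects, n_words, n_features = 11, 60, 200
scans = rng.normal(size=(n_subjects, n_words, n_features))
words = np.tile(np.arange(n_words), (n_subjects, 1))   # which word was shown

# Train on the first 10 subjects...
train_X = scans[:10].reshape(-1, n_features)
train_y = words[:10].reshape(-1)
clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

# ...and guess what the 11th subject was thinking of.
accuracy = clf.score(scans[10], words[10])
print(f"decoding accuracy: {accuracy:.0%}")  # near chance here, the data is noise
```

With real brain scans instead of random numbers, this is the kind of pipeline that gets you to the “three quarters correct” the researchers reported.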

Guessing correctly three quarters of the time is not particularly great, but it *is* better than chance – it’s a proof of principle. And along the way the researchers made a very interesting finding. The study had participants whose first language was either English or Portuguese, but their brain signatures were independent of that. Indeed, the researchers found that in the brain, the concept encoded by a word doesn’t have much to do with the word itself. Instead, the brain encodes the concept by assigning different attributes to it. They have identified three of these attributes:

1) Eating related. This brain pattern activates for words like “apple”, “tomato” or “lettuce”
2) Shelter related. This pattern activates for example for “house”, “closet”, or “umbrella”, and
3) A body-object interaction. For example, if the concept is “pliers” the brain also activates the part representing your hand using the pliers.

This actually allows the computer to predict, to some extent, what the signal of a concept will look like even if the computer hasn’t seen data on that concept before. The researchers checked this by combining different concepts into sentences such as “The old man threw the stone into the lake”. Out of 240 possible sentences, the computer could pick the right one in eighty-three percent of cases. It is not that the computer can tell what the whole sentence is, but it knows the basic components, the semantic elements.

The basic finding of this experiment, that the brain identifies concepts by a combination of attributes, has been confirmed by other experiments. For example, another 2019 study, which also used fMRI, asked participants to think of different animals and found that the brain roughly classifies them by attributes like size, intelligence, and habitat.

In the last decade there have also been several attempts to find out what a person sees from their brain activity. For example, in 2017 a team from Kyoto University published a paper in which they used deep learning – so, artificial intelligence again – to find out what someone was seeing from their fMRI signal. They trained the software to recognize general aspects of the image, like shapes, contrast, faces, etc. You can judge the results for yourself. Here you see the actual images that the trial participants looked at, and here the reconstruction by the artificial intelligence – I find it really impressive.

What about speech or text? In April 2019, researchers from UCSF published a paper in Nature reporting they had successfully converted brain activity directly to speech. They worked with epilepsy patients who already had electrodes on their brain surface for treatment. What the researchers looked for were the motor signals that correspond to producing the sounds of speech – the movements of the tongue, jaw, lips, and so on. Again, they let a computer figure out how to map the brain signal to speech. What you are about to hear is one of the participants reading a sentence and then what the software recreated just from the brain activity.

That’s pretty good, isn’t it? Unfortunately, it took weeks to decode the signals with that quality, so it’s rather useless in practice. But a new study that appeared just a few weeks ago has made a big leap forward for brain-to-text software by looking not at the movements related to producing sounds, but at the movements that come with handwriting.

The person they worked with is paralyzed from the neck down and has electrodes implanted on his brain already. He was asked to imagine writing the letters of the alphabet, which was used to train the software, and later the AI could reproduce the text from brain activity when the subject imagined writing whole sentences. And it could do that in real time. That allowed the paralyzed man to text at a speed of about 90 characters per minute, which is quite similar to what able-bodied people reach with text-messaging, about 135 characters per minute. The AI was able to identify characters with over 94% accuracy, and with autocorrect that went up to 99%.

So, as you can see, on the side of signal analysis, research has progressed quite rapidly in the past couple of years. But for technological applications the problem is that fMRIs are impractical, EEGs aren’t precise enough, and not everyone wants to have a USB port fused to their brain. Are there any other options?

Well, one thing that researchers have done is to genetically modify zebrafish larvae so that their neurons are fluorescent when active. That way you can measure brain activity non-invasively. And that’s nice, but even if you did that with humans, there’s still the skull in the way, so that doesn’t seem very promising.

More promising is an approach pursued by NASA, which is to develop an infrared system to monitor brain activity. That still requires users to wear sensors around their head, but it’s non-invasive. And several teams of scientists are trying to monitor brain activity by combining different non-invasive measurements: electrical, ultrasound, and optical. The US military, for example, has put 104 million dollars into the Next-generation Nonsurgical Neurotechnology Program, or N3 for short, which has the aim of controlling military drones.

We live in a momentous period in the history of human development. It’s the period when humans leave behind the idea that conscious thought is outside of science. So, all of a sudden, we can develop technologies to aid the conversion of thought into action. I find this incredibly interesting. I expect much to happen in this field in the coming years, and will update you from time to time, so don’t forget to subscribe.

Saturday, June 19, 2021

Asteroid Mining – A fast way to get rich?

[This is a transcript of the video embedded below.]


Asteroids are the new gold mines. Fly to an asteroid, dig up its minerals, become a billionaire. I used to think this is crazy and will never make financial sense. But after a lot of reading I’m now thinking maybe it will work – by letting bacteria do the digging. How do you dig with bacteria? Is it even legal to mine asteroids? And will it happen in your lifetime? That’s what we’ll talk about today.

Space agencies like NASA and ESA have found about 25,000 near-Earth asteroids. In 2020 alone, they discovered 3,000 new ones. About 900 of them measure 1 kilometer or more across.

What makes asteroids so interesting for mining is that their chemical composition is often similar to what you find in the core of our planet. Metals from the platinum group are very expensive because they are useful but rare in Earth’s crust. On an asteroid, they can be much easier to dig up. And that’s a straightforward way to get rich – very, very rich.

The asteroid Psyche, for example, has a diameter of about two-hundred kilometers and astrophysicists estimate it’s about ninety percent metal, mostly iron and nickel. Lindy Elkins-Tanton, NASA's lead scientist of the Psyche mission, estimated the asteroid is worth about 10 quintillion US dollars. That’s a 1 followed by 19 zeros. Now imagine that thing was made of platinum...

NASA, by the way, is planning a mission to Psyche that’s to be launched in 2022. Not because of the quintillions but because they want to study its composition to learn more about how planetary systems form.

How does one find an asteroid that’s good for mining? Well, first of all it shouldn’t take forever to get there, so you want one that comes reasonably close to earth every once in a while. You also don’t want it to spin too much because that’d make it very hard to land or operate on it. And finally you want one that’s cheap to get to, so that means there’s a small amount of acceleration needed during the flight, a small “Delta V” as it’s called.

How many asteroids are there that fit these demands? The astrophysicist Martin Elvis from Harvard estimated it using an equation that’s now called the Elvis equation. It’s similar to the Drake equation which one uses to estimate the number of extraterrestrial civilizations by multiplying a lot of factors. And like the Drake equation, the Elvis Equation depends a lot on the assumptions that you make.
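To show the spirit of such an estimate, here is a toy version in code. Every factor below is a made-up placeholder for illustration – these are not Elvis’s actual numbers:

```python
# A Drake-style estimate of commercially viable asteroids.
# All factors are invented placeholders, not measured values.
n_known = 25_000           # near-Earth asteroids found so far
f_low_delta_v = 0.03       # fraction cheap enough to reach
f_valuable = 0.05          # fraction with enough valuable ore
f_big_enough = 0.30        # fraction large enough to pay off
f_feasible = 0.90          # fraction where mining is technically doable

n_viable = n_known * f_low_delta_v * f_valuable * f_big_enough * f_feasible
print(f"roughly {n_viable:.0f} viable asteroids")   # ~10 with these guesses
```

The point is not the particular numbers, it’s that the result is exquisitely sensitive to factors nobody knows very well – which is exactly the problem with the Drake equation too.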

In any case, Elvis with his Elvis equation estimates that only about 10 of the known asteroids are worth mining. For the other ones, the cost-benefit ratio doesn’t work out, because they’re either too difficult to reach, don’t contain enough that’s worth mining, or are too small. In principle one could also think of catching small asteroids and bringing them back to Earth, but in practice that’s difficult: the small ones are hard to find and track. At least right now this doesn’t work.

So the first two problems with asteroid mining are finding an asteroid and getting there. The next problem is digging. The gravitational pull on these asteroids is so small that one can’t just drill into the ground, that would simply kick the spacecraft off the asteroid.

Maybe the most obvious way around this problem is to anchor the digging machine to the asteroid. Another solution that researchers from NASA are pursuing is shovels that dig in two opposite directions simultaneously, so there’s no net force to kick the machine off the asteroid. NASA is also looking into the option that, instead of using one large machine, one could use a swarm of small robots that coordinate their tasks.

Another smart idea is optical digging. For this, one uses mirrors and lenses to concentrate sunlight that heats up the surface. This can burn off the surface layer by layer, and the material can then be caught in bags.

And then there is mining with bacteria. Using bacteria for mining is actually not a new idea. It’s called “biomining” and according to some historians the Romans were doing it already 2000 years ago – though they almost certainly didn’t understand how it works, since they didn’t know of bacteria to begin with. But we know today that some bacteria eat and decompose minerals. And during their digestion process, they separate off the metal that you want to extract. So basically, the idea is that you ship the bacteria to your asteroid, let them eat dust, and wait for them to digest.

On Earth, biomining is responsible for approximately twenty percent of global copper production and five percent of global gold production. But how can bacteria survive on asteroids? You can’t very well put them into space suits!

For one thing, you wouldn’t directly dump the bacteria onto the asteroid, but put them into some kind of gel. Still, there are pretty harsh conditions on an asteroid and you need to find the right bacteria for the task. It’s not hopeless. Microbiologists know that some species of bacteria have adapted to temperatures that would easily kill humans. Some bacteria can for example live at temperatures up to one-hundred thirteen degrees Celsius and some at temperatures down to minus twenty-eight degrees Celsius. At low metabolic rates, they’ve been found to survive even at minus forty degrees. And some species of bacteria survive in vacuum at pressures as low as 10 to the minus five pascal, which should allow them to survive in the vicinity of a spacecraft.

What about radiation? Again, bacteria are remarkably resistant. The bacterium Deinococcus radiodurans, for example, can cope with ionizing radiation up to twenty kilogray. For comparison, in humans, acute radiation poisoning sets in at about zero point seven gray. The bacteria tolerate tens of thousands of times as much!

And while the perfect bacterium for space mining hasn’t yet been found, there’s a lot of research going on in this area. It looks like a really promising idea to me.

But, you may wonder now, is it even legal to mine an asteroid? Probably yes. This kind of question is addressed by the nineteen sixty-seven Outer Space Treaty, which has been signed by one hundred eleven countries including the United States, Russia, and almost all of Europe.

According to that treaty, celestial bodies may not be subject to “national appropriation”. However, the treaty does not directly address the extraction of “space resources”, that is stuff you find on those celestial bodies. Some countries have interpreted this to mean that commercial mining does not amount to national appropriation and is permitted.

For example, since 2015 American citizens have had the right to possess and sell space resources. Luxembourg established a similar legal framework in 2017. And Russia, too, is about to pass such legislation.

This isn’t the only development in the area. You can now get a university degree in space resources, for example at the Colorado School of Mines, the University of Central Florida, and the University of Luxembourg. And at the same time several space agencies are planning to visit more asteroids. NASA wants to fly not only to Psyche, but also to Bennu, which is expected to come close to Earth in September twenty twenty-three.

The Chinese National Space Administration has proposed a similar mission to retrieve a sample from the asteroid Kamo’oalewa. And there are several other missions on the horizon.

And then there’s the industry interest. Starting about a decade ago, a number of start-ups appeared with the goal of mining asteroids, such as Planetary Resources and Deep Space Industries. These companies attracted some investors when they appeared but since then they have been struggling to attract more money, and they have basically disappeared – they’ve been bought by other companies which are more interested in their assets than in furthering the asteroid mining adventure.

The issue is that asteroid mining is real business, but it’s business in which there’s still tons of research to do: how to identify which asteroid is a good target, how to get to the asteroid, how to dig on it. And let’s not forget that once you’ve managed to do that, you also have to get the stuff back to Earth. It’d take billions in up-front investment and decades of time to pay off even in the best case. So, while it’s promising, it looks unlikely to me that private investors will drive the technological development in this area. It will likely remain up to tax-funded space agencies to finance this research for some more time.

Saturday, June 12, 2021

2+2 doesn't always equal 4

[This is a transcript of the video embedded below.]



2 plus 2 makes 5 is the paradigmatic example of an obvious falsehood, a falsehood that everybody knows to be false. Because 2 plus 2 is equal to 1. Right? At the end of this video, you’ll know what I am talking about.

George Orwell famously used two plus two equals five in his novel nineteen eighty-four as an example for an obviously false statement that you can nevertheless make people believe in.

The same example was used already in seventeen eighty-nine by the French priest and writer Emmanuel Sieyès in his essay “What Is the Third Estate?”. At this time the third estate – the “bourgeoisie” – made up the big bulk of the population in France, but wasn’t allowed to vote. Sieyès wrote
“[If] it be claimed that under the French constitution two hundred thousand individuals out of twenty-six million citizens constitute two-thirds of the common will, only one comment is possible: it is a claim that two and two make five.” This was briefly before the French revolution.

So you can see there’s a heavy legacy to using two plus two is five as an example of an obvious untruth. And if you claim otherwise, that can understandably upset some people. For example, the mathematician Kareem Carr recently came under fire on Twitter for pointing out that 2+2 isn’t always equal to four.

He was accused of being “woke” because he supposedly excused obviously wrong math as okay. Even he was surprised at how upset some people got about it, because his point is of course entirely correct. 2+2 isn’t always equal to four. And I don’t just mean that you could replace the symbol “4” with the symbol “5”. You can do that of course, but that’s not the point. The point is that two plus two is a symbolic representation for the properties of elements of a group. And the result depends on what the 2s refer to and how the mathematical operation “+” is defined.

Strictly speaking, without those definitions 2+2 can be pretty much anything. That’s where the joke comes from that you shouldn’t let mathematicians sort out the restaurant bill, because they haven’t yet agreed on how to define addition.

To see why it’s important to know what you are adding and how, let’s back up for a moment to see where the “normal” addition law comes from. If I have two apples and I add two apples, then that makes four apples. Right? Right.

Ok, but how about this. If I have a glass of water with a temperature of twenty degrees and I pour it together with another glass of water at 20 degrees, then together the water will have a temperature of 40 degrees. Erm. No, certainly not.

If both glasses contain the same amount of water, the final temperature will be one half the sum of the temperatures, so that’d still be 20 degrees, which makes much more sense. Temperatures don’t add by the rule two plus two equals four. And why is that?

It’s because temperature is a measure of the average energy of particles, and averages don’t add the same way as apples. The average height of women in the United States is 5 ft 4 inches, and that of men 5 ft 9 inches, but that doesn’t mean that the average American has a height of 11 ft 1. You have to know what you’re adding to know how to add it.
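To write out the rule we just used: if you mix two amounts of water with masses m₁ and m₂ at temperatures T₁ and T₂ (assuming the same heat capacity throughout), the mixing temperature follows from adding energies, not temperatures:

$$ T \;=\; \frac{m_1 T_1 + m_2 T_2}{m_1 + m_2}\,. $$

For equal amounts, m₁ = m₂, that’s just the average, so twenty degrees and twenty degrees make twenty degrees.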

Another example. Suppose you switch on a flashlight. The light moves at, well, the speed of light. And as you know, the speed of light is the same for all observers. We learned that from Albert Einstein. Yes, that guy again. Now suppose I switch on the flashlight while you come running at me at, say, ten kilometers per hour. At what velocity is the light coming at *you*? Well, that’s the speed of light plus ten kilometers per hour. Right? Erm, no. Because that’d be faster than the speed of light. What’s going on?

What’s going on is that velocities don’t add like apples either. They only approximately do so if all the velocities involved are much smaller than the speed of light. But strictly speaking they have to be added using this formula.
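(The formula appears on screen in the video; for the text version, here it is – the standard relativistic velocity-addition law:)

$$ w \;=\; \frac{u + v}{1 + \dfrac{u\,v}{c^2}} $$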

Here u and v are the two velocities that you want to add and w is the result. c is the speed of light. You see immediately that if one of the velocities, say u, is the speed of light, then the resulting velocity stays the speed of light.

So, if you add something to the speed of light, the speed of light doesn’t change. If you come running at me, the light from my flashlight still comes at you with the speed of light.

Indeed, if you add the speed of light to the speed of light because maybe you want to know the velocity at which two light beams approach each other head on, you get c plus c equals c. So, in units of the speed of light, according to Einstein, 1+1 is 1.

Those were some examples from physics of quantities that just have different addition laws. Here is another one from mathematics. Suppose you want to add two numbers that are elements of a finite group; to keep things simple, say one with only three elements. We can give these elements the numbers zero, one, and two.

We can then define an addition rule on this group, which I’ll write as a plus with a circle around it, to make clear it’s not the usual addition. This new addition rule works like this. Take the usual sum of the two numbers, then divide the result by three and take the remainder.

So, for example 1+2 = 3, divide by three, the remainder is 0. This addition law is defined so that it keeps us within the group. And with this addition law, you have 1 plus 2 equals 0. By the same rule, 2 plus 2 equals one.

I know this looks odd, but it’s completely standard mathematics, and it’s not even very advanced mathematics, it just isn’t commonly taught in school. Taking the remainder after division is usually called the modulo operation. So this addition law can be written as: the plus with the circle equals the normal plus, mod 3. A set of numbers with this addition law is called a cyclic group.

You can do this not only with 3, but with any integer. For example, if you take the number 12, that just means that if adding numbers takes you above 12, you start over from zero again. That’s how clocks work, basically: 8+7=3, add another 12 and that gives 3 again. We’re fairly used to this.

Clocks are a nice visual example of how to add numbers in a cyclic group, but time-keeping itself is not an example of cyclic addition. That’s because the “real” physical time of course does not go in a circle. It’s just that on a simple clock we might not have an indicator for the time switching from am to pm or to the next day.
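
In code, this addition law is a one-liner. A minimal Python sketch (the function name cyclic_add is mine):

    def cyclic_add(a, b, n):
        """Addition in the cyclic group with n elements: usual sum, then remainder mod n."""
        return (a + b) % n

    print(cyclic_add(1, 2, 3))   # -> 0
    print(cyclic_add(2, 2, 3))   # -> 1
    print(cyclic_add(8, 7, 12))  # -> 3, the clock example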

So in summary: if you add numbers, you need to know what it is that you are adding and pick the addition law that describes what you are interested in. If you take two integers and use the standard addition law, then, yes, two plus two equals four. But there are many other things those numbers could stand for and many other addition laws, and depending on your definition, two plus two might be two or one or five or really anything at all. That’s not “woke”, that’s math.

Saturday, June 05, 2021

Why do we see things that aren't there?

[This is a transcript of the video embedded below.]

A few weeks ago, we talked about false discoveries, scientists who claimed they’d found evidence for alien life. But why is it that we get so easily fooled and jump to conclusions? How bad is it, and what can we do about it? That’s what we will talk about today.


My younger daughter had spaghetti for the first time when she was about two years old. When I put the plate in front of her she said “hair”.

The remarkable thing about this is not so much that she said this, but that all of you immediately understood why she said it. Spaghetti are kind of like hair. And as we get older and learn more about the world we find other things that also look kind of like hair. Willows, for example. Or mops. Even my hair sometimes looks like hair.

Our brains are pattern detectors. If you’ve seen one thing, they’ll tell you when you come across similar things. Psychologists call this apophenia: we see connections between unrelated things. These connections are not wrong, but they’re not particularly meaningful. That we see these connections therefore tells us more about the brain than about the things the brain connects.

The famous Rorschach inkblot test, for example, uses apophenia in the attempt to find out what connections the patient readily draws. Of course these tests are difficult to interpret because if you start thinking about it, you’ll come up with all kinds of things for all kinds of reasons. Seeing patterns in clouds is also an example of apophenia.

And there are some patterns we are particularly good at spotting, the ones most important for our survival, above all: faces. We see faces everywhere and in anything. Psychologists call this pareidolia.

The most famous example may be Jesus on a toast. But there’s also a Jesus on the butt of that dog. There’s a face on Mars, a face in this box, a face in this pepper, and this washing machine has had enough.

The face on Mars is worth a closer look to see what’s going on. In 1976, the Viking mission sent back images from its orbit around Mars. When one of them looked like a face, a guy by the name of Richard C. Hoagland went on TV to declare it was evidence of a lost Martian civilization. But higher resolution images of the same spot from later missions don’t look like faces to us anymore. What’s going on?

What’s going on is that, when we lack information, our brain fills in details with whatever it thinks is the best matching pattern. That’s also what happened, if you remember my earlier video, with the canals on Mars. There never were any canals on Mars. They were imaging artifacts, supported by vivid imagination.

Michael Shermer, the American science writer and founder of The Skeptics Society, explains this phenomenon in his book “The believing brain”. He writes: “It is in this intersection of non-existing theory and nebulous data that the power of belief is at its zenith and the mind fills in the blanks”.

He uses as an example what happened when Galileo first observed Saturn, in 1610. Galileo’s telescope at the time had poor resolution, so he couldn’t actually see the rings. But he could see there was something strange about Saturn; it didn’t seem to be round. Here is a photo that Jordi Busque took a few months ago with a resolution similar to what Galileo must have seen. What does it look like to you? Galileo claimed that Saturn was a triple planet.

Again, what’s happening is that the human brain isn’t just a passive data analysis machine. The brain doesn’t just look at an image and say: I don’t have enough data, maybe it’s noise or maybe it isn’t. No, it’ll come up with something that matches the noise, whether or not it has enough data to actually draw that conclusion reliably.

This makes sense from an evolutionary perspective. It’s better to see a mountain lion when there isn’t one than to not see a mountain lion when there is one. Can you spot the mountain lion? Pause the video before I spoil your fun. It’s here.

A remarkable experiment to show how we find patterns in noise was done in 2003 by researchers from Quebec and Scotland. They showed images of random white noise to their study participants. But the participants were told that half of those images contained the letter “S” hidden under the noise. And sure enough, people saw letters where there weren’t any.

Here’s the fun part. The researchers then took the images which the participants had identified as containing the letter “S” and overlaid them. And this overlay clearly showed an “S”.

What is going on? Well, if you randomly scatter points on a screen, then every once in a while they will coincidentally look somewhat like an “S”. If you then selectively pick random distributions that look a particular way, and discard the others, you indeed find what you were looking for. This experiment shows that the brain is really good at finding patterns. But it’s extremely bad at calculating the probability that this pattern could have come about coincidentally.
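
You can reproduce this selection effect in a few lines of code. The following Python sketch is my own toy version, not the original study’s method: generate pure noise images, keep only the ones that happen to correlate best with an “S” template, and average those.

    import numpy as np

    rng = np.random.default_rng(0)
    size = 16

    # A crude "S" template on a 16x16 grid (a stand-in for the letter).
    template = np.zeros((size, size))
    template[2, 3:12] = 1    # top bar
    template[2:8, 3] = 1     # upper left stroke
    template[7, 3:12] = 1    # middle bar
    template[7:13, 11] = 1   # lower right stroke
    template[12, 3:12] = 1   # bottom bar

    # Pure noise; a "sighting" happens when the noise matches the template well.
    images = rng.random((10000, size, size))
    scores = (images * template).sum(axis=(1, 2))
    chosen = images[scores > np.percentile(scores, 95)]  # keep the top 5% of "sightings"

    # Averaging only the selected noise makes a letter emerge where there never was one:
    overlay = chosen.mean(axis=0)
    print(overlay[template == 1].mean())  # ~0.6, brighter than...
    print(overlay[template == 0].mean())  # ~0.5, ...the background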

A final cognitive bias I want to mention, also built into our brains, is anthropomorphism: we assign agency to inanimate objects. That’s why, for example, we get angry at our phones or cars, though that makes absolutely no sense.

Anthropomorphism was first studied in 1944 by Fritz Heider and Marianne Simmel. They showed people a video in which squares and triangles were moving around. And they found the participants described the video as if the squares and triangles had intentions. We naturally make up such stories. This is also why we have absolutely no problem with animation movies whose “main characters” are cars, sponges, or potatoes.

What does this mean? It means that our brains have a built-in tendency to jump to conclusions and to see meaningful connections where there aren’t any. That’s why we have astrophysicists who yell “aliens” each time they have unexplained data, and why we have particle physicists who get excited about each little “anomaly” even though they should know full well that they are almost certainly wasting their time. And it’s why, if I hear Beatles songs playing on two different radio stations at the same time, I’m afraid Paul McCartney died.

Kidding aside, it’s also why so many people fall for conspiracy theories. If someone they know gets ill, they can’t put it down as an unfortunate coincidence. They will look for an explanation, and if they look, they will find one. Maybe that’s some kind of radiation, or chemicals, or the evil government. Doesn’t really matter, the brain wants an explanation.

So, this is something to keep in mind: Our brains come up with a lot of false positives. We see patterns that aren’t there, we see intention where there isn’t any, and sometimes we see Jesus on the butt of a dog.

Saturday, May 29, 2021

What does the universe expand into? Do we expand with it?

[This is a transcript of the video embedded below.]


If the universe expands, what does it expand into? That’s one of the most frequent questions I get, followed by “Do we expand with the universe?” And “Could it be that the universe doesn’t expand but we shrink?” At the end of this video, you’ll know the answers.

I haven’t made a video about this so far, because there are already lots of videos about it. But then I was thinking, if you keep asking, those other videos probably didn’t answer the question. And why is that? I am guessing it may be because one can’t really understand the answer without knowing at least a little bit about how Einstein’s theory of general relativity works. Hi Albert. Today is all about you.

So here’s that little bit you need to know about general relativity. First of all, Einstein took from special relativity the insight that time is a dimension, so we really live in a four dimensional space-time with one dimension of time and three dimensions of space.

Without general relativity, space-time is flat, like a sheet of paper. With general relativity, it can curve. But what is curvature? That’s the key to understanding space-time. To see what it means for space-time to curve, let us start with the simplest example, a two-dimensional sphere, no time, just space.

That image of a sphere is familiar to you, but really what you see isn’t just the sphere. You see a sphere in a three dimensional space. That three dimensional space is called the “embedding space”. The embedding space itself is flat, it doesn’t have curvature. If you embed the sphere, you immediately see that it’s curved. But that’s NOT how it works in general relativity.

In general relativity we are asking how we can find out what the curvature of space-time is while living inside it. There’s no outside. There’s no embedding space. So, for the sphere, that’d mean we’d have to ask how we’d find out it’s curved if we were living on the surface, maybe as ants crawling around on it.

One way to do it is to remember that in flat space the inner angles of triangles always sum to 180 degrees. In a curved space, that’s no longer the case. An extreme example is to take a triangle that has a right angle at one of the poles of the sphere, goes down to the equator, and closes along the equator. This triangle has three right angles. They sum to 270 degrees. That just isn’t possible in flat space. So if the ant measures those angles, it can tell it’s crawling around on a sphere.

There is another way the ant can figure out it’s in a curved space. In flat space, the circumference of a circle of radius R is 2πR. But that relation doesn’t hold in a curved space either. If our ant crawls a distance R from the pole of the sphere and then goes around in a circle, the circumference of that circle will be less than 2πR. This means measuring the circumference is another way to find out the surface is curved without knowing anything about the embedding space.
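
For the mathematically inclined: on a sphere of radius rho, a circle whose geodesic radius (the distance the ant crawls) is R has circumference 2π·rho·sin(R/rho), which is always less than 2πR. Here’s a quick Python check, my own illustration:

    import math

    def circumference_on_sphere(R, rho):
        """Circumference of a circle of geodesic radius R drawn on a sphere of radius rho.
        The ant can measure both R and the circumference without leaving the surface."""
        return 2 * math.pi * rho * math.sin(R / rho)

    rho = 1.0
    for R in (0.1, 0.5, 1.0):
        print(R, circumference_on_sphere(R, rho), 2 * math.pi * R)
    # the curved-space circumference always comes out smaller than the flat-space 2*pi*R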

By the way, if you try these two methods for a cylinder instead of a sphere you’ll get the same result as in flat space. And that’s entirely correct. A cylinder has no intrinsic curvature. It’s periodic in one direction, but it’s internally flat.

General Relativity now uses a higher dimensional generalization of this intrinsic curvature. So, the curvature of space-time is defined entirely in terms which are internal to the space-time. You don’t need to know anything about the embedding space. The space-time curvature shows up in Einstein’s field equations in these quantities called R.

Roughly speaking, to calculate those, you take all the angles of all possible triangles in all orientations at all points. From that you can construct an object called the curvature tensor that tells you exactly where space-time curves, how strongly, and in which direction. The things in Einstein’s field equations are sums over that curvature tensor.

That’s the one important thing you need to know about General Relativity, the curvature of space-time can be defined and measured entirely inside of space-time. The other important thing is the word “relativity” in General Relativity. That means you are free to choose a coordinate system, and the choice of a coordinate system doesn’t make any difference for the prediction of measurable quantities.

It’s one of these things that sounds rather obvious in hindsight. Certainly, if you make a prediction for a measurement and that prediction depends on an arbitrary choice you made in the calculation, like choosing a coordinate system, then that’s no good. However, it took Albert Einstein to convert that “obvious” insight into a scientific theory, first special relativity and then general relativity.

So with that background knowledge, let us then look at the first question. What does the universe expand into? It doesn’t expand into anything, it just expands. The statement that the universe expands is, as is any other statement we make in general relativity, about the internal properties of space-time. It says, loosely speaking, that the space between galaxies stretches. Think back to the sphere and imagine its radius increasing. As we discussed, you can figure that out by making measurements on the surface of the sphere. You don’t need to say anything about the embedding space surrounding the sphere.

Now you may ask, but can we embed our 4 dimensional space-time in a higher dimensional flat space? The answer is yes. You can do that. It takes in general 10 dimensions. But you could indeed say the universe is expanding into that higher dimensional embedding space. However, the embedding space is by construction entirely unobservable, which is why we have no rationale to say it’s real. The scientifically sound statement is therefore that the universe doesn’t expand into anything.

Do we expand with the universe? No, we don’t. Indeed, it’s not only that we don’t expand, but galaxies don’t expand either. It’s because they are held together by their own gravitational pull. They are “gravitationally bound”, as physicists say. The pull that comes from the expansion is just too weak. The same goes for solar systems and planets. And atoms are held together by much stronger forces, so atoms in intergalactic space also don’t expand. It’s only the space between them that expands.

How do we know that the universe expands and it’s not that we shrink? Well, to some extent that’s a matter of convention. Remember that Einstein says you are free to choose whatever coordinate system you like. So you can use a coordinate system that has yardsticks which expand at exactly the same rate as the universe. If you use those, you’d conclude the universe doesn’t expand in those coordinates.

You can indeed do that. However, those coordinates have no good physical interpretation. That’s because they mix space with time. So in those coordinates, you can’t stand still. Whenever you move forward in time, you also move sideways in space. That’s weird, and it’s why we don’t use those coordinates.

The statement that the universe expands is really a statement about certain types of observations, notably the redshift of light from distant galaxies, but also a number of other measurements. And those statements are entirely independent of what coordinates you choose to describe them. However, explaining them by saying the universe expands in this particular coordinate system is an intuitive interpretation.

So, the two most important things you need to know to make sense of General Relativity are, first, that the curvature of space-time can be defined and measured entirely within space-time. An embedding space is unnecessary. And second, you are free to choose whatever coordinate system you like. It doesn’t change the physics.

In summary: General Relativity tells us that the universe doesn’t expand into anything and that we don’t expand with it. And while you could say that the universe doesn’t expand but we shrink, that interpretation doesn’t make a lot of physical sense.

Thursday, May 27, 2021

The Climate Book You Didn’t Know You Need

The Physics of Climate Change
Lawrence Krauss
Post Hill Press (March 2021)

In recent years, media coverage of climate change has noticeably shifted. Many outlets have begun referring to it as a “climate crisis” or “climate emergency”, a mostly symbolic move, in my eyes, because those who trust that their readers will tolerate this nomenclature are those whose readers don’t need to be reminded of the graveness of the situation. Even more marked has been the move to no longer mention climate change skeptics and, moreover, to proudly declare the intention to no longer acknowledge even the existence of the skeptics’ claims.

As a scientist who has worked in science communication for more than a decade, I am of two minds about this. On the one hand, I perfectly understand the futility of repeating the same facts to people who are unwilling or unable to comprehend them – it’s the reason I don’t respond when someone emails me their home-brewed theory of everything. On the other hand, it’s what most science communication comes down to: patiently rephrasing the same thing over and over again. That science writers – who dedicate their life to communicating research – refuse to explain that very research, strikes me as an odd development.

This makes me suspect something else is going on. Declaring the science settled relieves news contributors of the burden of actually having to understand said science. It’s temptingly convenient and cheap, both literally and figuratively. Think about the last dozen or so news reports on climate change you’ve read. Earliest cherry blossom bloom in Japan, ice still melting in Antarctica, Greta Thunberg doesn’t want to travel to Glasgow in November. Did one of those actually explain how scientists know that climate change is man-made? I suspect not. Are you sure you understand it? Would you be comfortable explaining it to a climate change skeptic?

If not, then Lawrence Krauss’ new book “The Physics of Climate Change” is for you. It’s a well-curated collection of facts and data with explanations that are just about technical enough to understand the science without getting bogged down in details. The book covers historical and contemporary records of carbon dioxide levels and temperature, greenhouse gases and how their atmospheric concentrations change the energy balance, how we can tell one cause of climate change from another, and impacts we have seen and can expect to see, from sea level rise to tipping points.

To me, learning some climate science has been a series of realizations that it’s more difficult than it looks at first sight. Remember, for example, the explanation for the greenhouse effect we all learned in school? Carbon dioxide in the atmosphere lets incoming sunlight through, but prevents infrared light from escaping into space, hence raising the temperature. Alas, a climate change skeptic might point out, the absorption of infrared light is saturated at carbon dioxide levels well below the current ones. So, burning fossil fuels can’t possibly make any difference, right?

No, wrong. But explaining just why is not so simple...

In a nutshell, the problem with the greenhouse analogy is that Earth isn’t a greenhouse. It isn’t surrounded by a surface that traps light, but rather by an atmosphere whose temperature and density fall gradually with altitude. The reason that increasing carbon dioxide concentrations continue to affect the heat balance of our planet is that they move the average altitude from which infrared light can escape upwards. But in the relevant region of the atmosphere (the troposphere) higher altitude means lower temperature. Hence, the increasing carbon dioxide level makes it more difficult for Earth to lose heat. The atmosphere must therefore warm to get back into an energy balance with the sun. If that explanation was too brief, Krauss goes through the details in one of the chapters of his book.
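
To make the argument concrete, here’s a back-of-envelope version in Python. The numbers are rough textbook values I’m plugging in for illustration, not Krauss’: Earth radiates to space at an effective temperature of about 255 Kelvin, and in the troposphere the temperature drops by roughly 6.5 Kelvin per kilometer of altitude.

    T_EMISSION = 255.0  # K, effective temperature at which Earth radiates to space
    LAPSE_RATE = 6.5    # K per km, average temperature drop with altitude (troposphere)

    def surface_temperature(emission_altitude_km):
        """The surface is warmer than the emission level by lapse rate times altitude."""
        return T_EMISSION + LAPSE_RATE * emission_altitude_km

    print(surface_temperature(5.0))  # ~287.5 K, close to the observed average of ~288 K
    print(surface_temperature(5.2))  # raising the emission level 200 m warms the surface ~1.3 K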

There are a number of other stumbling points that took me some time to wrap my head around. Isn’t water vapor a much more potent greenhouse gas? How can we possibly tell whether global temperatures rise because of us or because of other, natural, causes, for example changes in the sun? Have climate models ever correctly predicted anything, and if so what? And in any case, what’s the problem with a temperature increase that’s hard to even read off the old-fashioned liquid thermometer pinned to our patio wall? I believe these are all obvious questions that everybody has at some point, and Krauss does a great job answering them.

I welcome this book because I have found it hard to come by a didactic introduction to climate science that doesn’t raise more questions than it answers. Yes, there are websites which answer skeptics’ claims, but more often than not they offer little more than reference lists. Well intended, I concur, but not terribly illuminating. I took Michael Mann’s online course Climate Change: The Science and Global Impact, which provides a good overview. But I know enough physics to know that Mann’s course doesn’t say much about the physics. And, yes, I suppose I could take a more sophisticated course, but there are only so many hours in a day. I am sure the problem is familiar to you.

So, at least for me, Krauss’ book fills a gap in the literature. To begin with, at under 200 pages in generous font size, it’s a short book. I have also found it a pleasure to read, for Krauss neither trivializes the situation nor pushes conclusions in the reader’s face. It becomes clear from his writing that he is concerned, but his main mission is to inform, not to preach.

I welcome Krauss’ book for another reason. As a physicist myself, I have been somewhat embarrassed by the numerous physicists who have put forward very – I am looking for a polite word here – shall we say, iconoclastic, ideas about climate change. I have also noticed personally, on several occasions, that physicists can have rather strong yet uninformed opinions about what climate models are good for. I am therefore happy that a physicist as well-known as Krauss counteracts the impression that physicists believe they know everything better. He sticks 100% with the established science and doesn’t put forward speculations of his own.

There are some topics, though, I wish Krauss had said more about. One particularly glaring omission is the uncertainty in climate trend projections due to our limited understanding of cloud formation. Indeed, Krauss says little about the shortcomings of current climate models aside from acknowledging that tipping points are difficult to predict, and nothing about the difficulties of quantifying the uncertainty. This is unfortunate, for it’s another issue that irks me when I read about climate change in newspapers or magazines. Every model has shortcomings, and when those shortcomings aren’t openly put on the table I begin to wonder if something’s being swept under the rug. You see, I’m chronically skeptical myself. Maybe it’s something to do with being a physicist after all.

I for one certainly wish there was more science in the news coverage of climate change. Yes, there are social science studies showing that facts do little to change opinions. But many people, I believe, genuinely don’t know what to think because without at least a little background knowledge it isn’t all that easy to identify mistakes in the arguments of climate change deniers. Krauss’ book is a good starting point to get that background knowledge.

Saturday, May 22, 2021

Aliens that weren't aliens

[This is a transcript of the video embedded below.]


The interstellar object ‘Oumuamua travelled through our solar system in 2017. Soon after it was spotted, the astrophysicist Avi Loeb claimed it was alien technology. Now it looks like it was just a big chunk of nitrogen.

This wasn’t the first time scientists mistakenly yelled “aliens”, and it certainly won’t be the last. So, in this video we’ll look at the history of supposed alien discoveries. What did astronomers see, what did they think it was, and what did it turn out to be in the end? And what are we to make of these claims? That’s what we’ll talk about today.

Let’s then talk about all the times when aliens weren’t aliens. In 1877, the Italian astronomer Giovanni Schiaparelli studied the surface of our neighbor planet Mars. He saw a network of long, nearly straight lines. At that time, astronomers didn’t have the ability to take photographs of their observations; the usual procedure was to make drawings and write down what they saw. Schiaparelli called the structures “canali” in Italian, a word which leaves their origin unspecified. In the English translation, however, the “canali” became “canals”, which strongly suggested an artificial origin. The better word would have been “channels”.

This translation blunder made scientific history. Even though the resolution of telescopes at the time wasn’t good enough to identify surface structures on Mars, a couple of other astronomers quickly reported they also saw canals. Around the turn of the 19th to the 20th century, the American astronomer Percival Lowell published three books in which he presented the hypothesis that the canals were an irrigation system built by an intelligent civilization.

The idea that there had once been, or maybe still was, intelligent life on Mars persisted until 1965. That year, the American space mission Mariner 4 flew by Mars and sent back the first photos of Mars’s surface to Earth. The photos showed craters but nothing resembling canals. The canals turned out to have been imaging artifacts, supported by vivid imagination. And even though the scientific community laid the idea of canals on Mars to rest in 1965, it took much longer for the public to get over the idea of life on Mars. I recall my grandmother was still telling me about the canals in the 1980s.

But in any case, the friends of ET didn’t have to wait long for renewed hope. In 1967, the Irish astrophysicist Jocelyn Bell Burnell noticed that the radio telescope in Cambridge she worked on at the time recorded a recurring signal that pulsed with a period of somewhat more than a second. She noted down “LGM” on the printout of the measurement curve, short for “little green men”.

The little green men were a joke, of course. But at the time, astrophysicists didn’t know any natural process that could explain Bell Burnell’s observations, so they couldn’t entirely exclude that it was a signal stemming from alien technology. However, a few years after the signal was first recorded it became clear that its origin was not aliens, but a rotating neutron star.

Rotating neutron stars can build up strong magnetic fields and then emit a steady but directed beam of electromagnetic radiation. And since the neutron star rotates, we only see this beam if it happens to point in our direction. This is why the signal appears to be pulsed. Such objects are now called “pulsars”.

Then in 1996, life on Mars had a brief comeback. That year, a group of American researchers reported that a meteorite found in Antarctica seemed to carry traces of bacteria. This rock was probably flung in the direction of our planet when a heavy meteorite crashed into the surface of Mars. Indeed, other scientists confirmed that the Antarctic meteorite most likely came from Mars. However, they concluded that the structures in the rock are too small to be of bacterial origin.

That wasn’t the end of the alien sightings. In 2015, the Kepler telescope found a star with irregular changes in its brightness. Officially the star has the catchy name KIC 8462852, but unofficially it’s been called WTF, which stands, as you certainly know, for “Where’s the Flux?”. The name that stuck in the end, though, was “Tabby’s star,” after the first name of its discoverer, Tabetha Boyajian.

At first astrophysicists didn’t have a good explanation for the odd behavior of Tabby’s star. And so it didn’t take long until a group of researchers from Penn State proposed that aliens were building a megastructure around their star.

Indeed, the physicist Freeman Dyson had argued already in the 1960s that advanced extraterrestrial civilizations would try to capture energy from their sun as directly as possible. To this end, Dyson speculated, they’d build a sphere around the star. It’s remained unclear how such a sphere would be constructed or remain stable, but, well, they are advanced, these civilizations, so presumably they’ve figured it out. And covering up a star to catch its energy could quite plausibly lead to a signal like the one observed from Tabby’s star.

Several radio telescopes scanned the area around Tabby’s star on the lookout for signs of intelligent life, but didn’t find anything. Further observations now seem to support the hypothesis that the star is surrounded by debris from a destroyed moon or other large rocks.

Then, in 2017, the Canadian astronomer Robert Weryk made a surprising discovery when he analyzed data from the Pan-STARRS telescope in Hawaii. He saw an object that passed closely by our planet, but it looked neither like a comet nor like an asteroid.

When Weryk traced back its path, the object turned out to have come from outside our solar system. “‘Oumuamua” the astronomers named it, Hawaiian for “messenger from afar arriving first”.

‘Oumuamua gave astronomers and physicists quite something to think about. It entered our solar system on a path that agreed with the laws of gravity, with no hints of any further propulsion. But as it got closer to the sun, it began to emit particles of some sort that gave it an acceleration.

This particle emission didn’t fit what’s usually observed from comets. Also, the shape of ‘Oumuamua is rather atypical for asteroids or comets. The shape that fits the data best is that of a disk, about 6 to 8 times as wide as it is high.

When ‘Oumuamua was first observed, no one had any good idea what it was, what it was made of, or what happened when it got close to the sun. The Astrophysicist Avi Loeb used the situation to claim that ‘Oumuamua is technology of an alien civilization. “[T]he simplest, most direct line from an object with all of ‘Oumuamua’s observed qualities to an explanation for them is that it was manufactured.”

According to a new study, it now looks like ‘Oumuamua is a piece of frozen nitrogen that split off a Pluto-like planet in another solar system. It remained frozen until it got close to our sun, when it began to partly evaporate. Though we will never know for sure, because the object has left our solar system for good, and the data we have now is all the data we will ever have.

And just a few weeks ago, we talked about what happened to the idea that there’s life on Venus. Check out my earlier video for more about that.

So, what do we learn from that? When new discoveries are made it takes some time until scientists have collected and analyzed all the data, formulated hypotheses, and evaluated which hypothesis explains the data best. Before that is done, the only thing that can reliably be said is “we don’t know”.

But “we don’t know” is boring and doesn’t make headlines. Which is why some scientists use the situation to put forward highly speculative ideas before anyone else can show they’re wrong. It’s also why headlines about possible signs of extraterrestrial life are certainly entertaining, but usually disappear after a few years.

Thanks for watching, don’t forget to subscribe, see you next week.

Saturday, May 15, 2021

Quantum Computing: Top Players 2021

[This is a transcript of the video embedded below.]


Quantum computing is currently one of the most exciting emergent technologies, and it’s almost certainly a topic that will continue to make headlines in the coming years. But there are now so many companies working on quantum computing, that it’s become really confusing. Who is working on what? What are the benefits and disadvantages of each technology? And who are the newcomers to watch out for? That’s what we will talk about today.

Quantum computers use units that are called “quantum-bits” or qubits for short. In contrast to normal bits, which can take on two values, like 0 and 1, a qubit can take on an arbitrary combination of two values. The magic of quantum computing happens when you entangle qubits.

Entanglement is a type of correlation, so it ties qubits together, but it’s a correlation that has no equivalent in the non-quantum world. There are a huge number of ways qubits can be entangled and that creates a computational advantage - if you want to solve certain mathematical problems.
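
For the coders among you, here’s a minimal Python sketch of what “arbitrary combination” and “entangled” mean for state vectors. This is textbook quantum mechanics written out with numpy, not any company’s API:

    import numpy as np

    # A qubit is a normalized complex 2-vector: amplitudes for the values 0 and 1.
    zero = np.array([1, 0], dtype=complex)
    one = np.array([0, 1], dtype=complex)
    plus = (zero + one) / np.sqrt(2)  # an equal combination of 0 and 1

    # Two independent qubits combine via the tensor (Kronecker) product...
    product_state = np.kron(plus, zero)

    # ...but an entangled state like this Bell state is NOT such a product:
    bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

    # Squared amplitudes give measurement probabilities: the outcomes 00 and 11
    # each occur half the time, and the two qubits are perfectly correlated.
    print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]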

Quantum computers can help, for example, to solve the Schrödinger equation for complicated molecules. One could use that to find out what properties a material has without having to synthetically produce it. Quantum computers can also solve certain logistics problems or optimize financial systems. So there is real potential for applications.

But quantum computing does not help for *all types of calculations, they are special purpose machines. They also don’t operate all by themselves, but the quantum parts have to be controlled and read out by a conventional computer. You could say that quantum computers are for problem solving what wormholes are for space-travel. They might not bring you everywhere you want to go, but *if they can bring you somewhere, you’ll get there really fast.

What makes quantum computing special is also what makes it challenging. To use quantum computers, you have to maintain the entanglement between the qubits long enough to actually do the calculation. And quantum effects are really, really sensitive to even the smallest disturbances. To be reliable, quantum computers therefore need to operate with several copies of the information, together with an error correction protocol. And to do this error correction, you need more qubits. Estimates say that for a quantum computer to do reliable and useful calculations that a conventional computer can’t do, we need about a million qubits.

The exact number depends on the type of problem you are trying to solve, the algorithm, and the quality of the qubits and so on, but as a rule of thumb, a million is a good benchmark to keep in mind. Below that, quantum computers are mainly of academic interest.

Having said that, let’s now look at what different types of qubits there are, and how far we are on the way to that million.

1. Superconducting Qubits

Superconducting qubits are by far the most widely used, and most advanced type of qubits. They are basically small currents on a chip. The two states of the qubit can be physically realized either by the distribution of the charge, or by the flux of the current.

The big advantage of superconducting qubits is that they can be produced with the same techniques that the electronics industry has used for the past five decades. These qubits are basically microchips, except, here it comes, they have to be cooled to extremely low temperatures, about 10 to 20 millikelvin. One needs these low temperatures to make the circuits superconducting; otherwise you can’t keep them in those neat two-level qubit states.

Despite the low temperatures, quantum effects in superconducting qubits disappear extremely quickly. This disappearance of quantum effects is measured by the “decoherence time”, which for superconducting qubits is currently a few tens of microseconds.

Superconducting qubits are the technology used by Google and IBM and also by a number of smaller companies. In 2019, Google was first to demonstrate “quantum supremacy”, which means they performed a task that a conventional computer could not have done in a reasonable amount of time. The processor they used for this had 53 qubits. I made a video about this topic specifically, so check this out for more. Google’s supremacy claim was later disputed by IBM. IBM argued that the calculation could actually have been performed within reasonable time on a conventional supercomputer, so Google’s claim was somewhat premature. Maybe it was. Or maybe IBM was just annoyed they weren’t first.

IBM’s quantum computers also use superconducting qubits. Their biggest one currently has 65 qubits, and they recently put out a roadmap that projects 1000 qubits by 2023. IBM’s smaller quantum computers, the ones with 5 and 16 qubits, are free to access in the cloud.

The biggest problem for superconducting qubits is the cooling. Beyond a few thousand qubits or so, it’ll become difficult to put them all into one cooling system, and that’s where it’ll get challenging.

2. Photonic quantum computing

In photonic quantum computing, the qubits are properties related to photons. That may be the presence of a photon itself, or the uncertainty in a particular state of the photon. This approach is pursued, for example, by the company Xanadu in Toronto. It is also the approach used a few months ago by a group of Chinese researchers, who demonstrated quantum supremacy for photonic quantum computing.

The biggest advantage of using photons is that they can be operated at room temperature, and the quantum effects last much longer than for superconducting qubits, typically some milliseconds but it can go up to some hours in ideal cases. This makes photonic quantum computers much cheaper and easier to handle. The big disadvantage is that the systems become really large really quickly because of the laser guides and optical components. For example, the photonic system of the Chinese group covers a whole tabletop, whereas superconducting circuits are just tiny chips.

The company PsiQuantum, however, claims they have solved the problem and have found an approach to photonic quantum computing that can be scaled up to a million qubits. Exactly how they want to do that, no one knows, but that’s definitely a development to keep an eye on.

3. Ion traps

In ion traps, the qubits are atoms that are missing some electrons and therefore have a net positive charge. You can then trap these ions in electromagnetic fields and use lasers to move them around and entangle them. Such ion traps are comparable in size to the superconducting qubit chips. They also need to be cooled, but not quite as much, “only” to temperatures of a few kelvin.

The biggest player in trapped-ion quantum computing is Honeywell, but the start-up IonQ uses the same approach. One advantage of trapped ions is their coherence times, which are longer than for superconducting qubits, up to a few minutes. The other advantage is that trapped ions can interact with more neighbors than superconducting qubits can.

But ion traps also have disadvantages. Notably, they are slower to react than superconducting qubits, and it’s more difficult to put many traps onto a single chip. However, they’ve kept up with superconducting qubits well.

Honeywell claims to have the best quantum computer in the world by quantum volume. What the heck is quantum volume? It’s a metric, originally introduced by IBM, that combines many different factors like errors, crosstalk, and connectivity. Honeywell reports a quantum volume of 64, and according to their website, they too are moving to the cloud next year. IonQ’s latest model contains 32 trapped ions sitting in a chain. They also have a roadmap according to which they expect quantum supremacy by 2025 and expect to be able to solve interesting problems by 2028.

4. D-Wave

Now what about D-Wave? D-Wave is so far the only company that sells commercially available quantum computers, and they also use superconducting qubits. Their 2020 model has a stunning 5600 qubits.

However, the D-Wave computers can’t be compared to the approaches pursued by Google and IBM, because D-Wave uses a completely different computation strategy. D-Wave computers can be used for solving certain optimization problems that are defined by the design of the machine, whereas the technology developed by Google and IBM aims at a programmable computer that can be applied to all kinds of different problems. Both are interesting, but it’s comparing apples and oranges.

5. Topological quantum computing

Topological quantum computing is the wild card. There isn’t currently any workable machine that uses this technique. But the idea is great: in topological quantum computers, information would be stored in conserved properties of “quasi-particles”, which are collective motions of particles. The great thing about this is that the information would be very robust to decoherence.

According to Microsoft, “the upside is enormous and there is practically no downside.” In 2018, their director of quantum computing business development told the BBC that Microsoft would have a “commercially relevant quantum computer within five years.” However, Microsoft had a big setback in February, when they had to retract a paper that claimed to demonstrate the existence of the quasi-particles they hoped to use. So much for “no downside”.

6. The far field

These were the biggest players, but there are two newcomers worth keeping an eye on.

The first is semiconducting qubits. They are very similar to superconducting qubits, but here the qubits are either the spin or the charge of single electrons. The advantage is that the temperature doesn’t need to be quite as low: instead of 10 millikelvin, one “only” has to reach a few kelvin. This approach is presently pursued by researchers at TU Delft in the Netherlands, supported by Intel.

The second is nitrogen-vacancy systems, where the qubits are places in a carbon crystal where a carbon atom has been replaced with nitrogen. The great advantage of those is that they’re both small and can be operated at room temperature. This approach is pursued by the Hanson lab at Qutech, some people at MIT, and a startup in Australia called Quantum Brilliance.

So far there hasn’t been any demonstration of quantum computation for these two approaches, but they could become very promising.

So, that’s the status of quantum computing in early 2021, and I hope this video will help you to make sense of the next quantum computing headlines, which are certain to come.

I want to thank Tanuj Kumar for help with this video.