Wednesday, December 30, 2020

Well, Actually. 10 Physics Answers.

[This is a transcript of the video embedded below.]


Today I will tell you how to be just as annoying as a real physicist. And the easiest way to do that is to insist on correcting people when it really doesn’t matter.

1. “The Earth Orbits Around the Sun.”

Well, actually the Earth and the Sun orbit around a common center of mass. It’s just that the location of the center of mass is very close to the center of the Sun because the Sun is so much heavier than Earth. To be precise, that’s not quite correct either because Earth isn’t the only planet in the solar system, so, well, it’s complicated.
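To see just how close, here is a quick back-of-the-envelope two-body estimate, using standard textbook values and ignoring the other planets:

```python
# Two-body estimate of the Earth-Sun barycenter (other planets ignored).
M_SUN = 1.989e30      # kg, mass of the Sun
M_EARTH = 5.972e24    # kg, mass of the Earth
DISTANCE = 1.496e11   # m, mean Earth-Sun distance (1 AU)
R_SUN = 6.96e8        # m, radius of the Sun

# Distance of the common center of mass from the Sun's center:
r_barycenter = M_EARTH * DISTANCE / (M_SUN + M_EARTH)

print(f"Barycenter: {r_barycenter / 1e3:.0f} km from the Sun's center")
print(f"That is {r_barycenter / R_SUN:.2%} of the Sun's radius")
```

The center of mass comes out about 450 km from the Sun’s center, far inside the Sun itself, whose radius is roughly 696,000 km.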

2. “The Speed of Light is constant.”

Well, actually it’s only the speed of light in vacuum that’s constant. The speed of light is lower when the light goes through a medium, and just what the speed is depends on the type of medium. The speed of light in a medium is also no longer observer-independent – as the speed of light in vacuum is – but instead it depends on the relative velocity between the observer and the medium. The speed of light in a medium can also depend on the polarization or color of the light, the former is called birefringence and the latter dispersion.
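As a small sketch, with illustrative textbook values for the refractive index n, the speed in a medium is v = c/n:

```python
# Speed of light in a medium, v = c / n (n values are illustrative
# textbook numbers; n also varies with color and polarization).
c = 299_792_458  # m/s, speed of light in vacuum

media = {"vacuum": 1.0, "water": 1.33, "window glass": 1.5, "diamond": 2.42}

for name, n in media.items():
    v = c / n
    print(f"{name:>12}: v = {v / 1e8:.2f} x 10^8 m/s")
```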

3. “Gravity Waves are Wiggles in Space-time”

Well, actually gravity waves are periodic oscillations in gases and fluids for which gravity is a restoring force. Ocean waves and certain clouds are examples of gravity waves. The wiggles in space-time are called gravitational waves, not gravity waves.

4. “The Earth is round.”

Well, actually the earth isn’t round, it’s an oblate spheroid, which means its diameter is somewhat larger at the equator than from pole to pole. That’s because it rotates and the centrifugal force is stronger for the parts that are farther away from the axis of rotation. In the course of time, this has made the equator bulge outwards. It is however a really small bulge, and to very good precision the earth is indeed round.
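For concreteness, here are the reference radii of the WGS 84 Earth model, which show how small the bulge is:

```python
# Size of the equatorial bulge, using WGS 84 reference radii.
r_equatorial = 6378.137  # km
r_polar = 6356.752       # km

bulge = r_equatorial - r_polar
flattening = bulge / r_equatorial

print(f"Bulge: {bulge:.1f} km")               # about 21.4 km
print(f"Flattening: 1/{1 / flattening:.0f}")  # about 1/298
```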

5. “Quantum Mechanics is a theory for Small Things”

Well, actually, quantum mechanics applies to everything regardless of size. It’s just that for large things the effects are usually so tiny you can’t see them.

6. “I’ve lost weight!”

Well, actually weight is a force that depends on the gravitational pull of the planet you are on, and it’s also a vector, meaning it has a direction. You probably meant you lost mass.

7. “Light is both a particle and a wave.”

Well, actually, it’s neither. Light, like everything else, is described by a wave-function in quantum mechanics. A wave-function is a mathematical object that can be sharply focused, in which case it looks pretty much like a particle. Or it can be very smeared out, in which case it looks more like a wave. But really it’s just a quantum-thing from which you calculate probabilities of measurement outcomes. And that’s, to our best current knowledge, what light “is”.

8. “The Sun is eight light minutes away from Earth.”

Well, actually, this is only correct in a particular coordinate system, for example one in which Planet Earth is at rest. If you move really fast relative to Earth, and use a coordinate system at rest relative to that fast motion, then the distance from Sun to Earth will undergo Lorentz contraction, and it will take light less time to cross the distance.
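Here is a small numerical sketch of both statements; the observer speed of 0.9c is just an illustrative choice:

```python
import math

c = 299_792_458      # m/s, speed of light
AU = 1.495978707e11  # m, mean Sun-Earth distance

# In the coordinate system in which Earth is at rest:
t = AU / c
print(f"Light travel time: {t / 60:.1f} minutes")  # about 8.3

# For an observer moving at 0.9c along the Sun-Earth line,
# the distance is Lorentz-contracted by the factor gamma:
v = 0.9 * c
gamma = 1 / math.sqrt(1 - (v / c) ** 2)
print(f"Contracted distance: {AU / gamma / 1e9:.1f} million km")
```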

9. “Water is blue because it mirrors the sky.”

Well, actually, water is just blue. No, really. If you look at the frequencies of electromagnetic radiation that water absorbs, you find that in the visual part of the spectrum the absorption has a dip around blue. This means water swallows less blue light than light of other frequencies that we can see, so more blue light reaches your eye, and water looks blue. 

However, as you have certainly noticed, water is mostly transparent. It generally swallows very little visible light and so, that slight tint of blue is a really tiny effect. Also, what I just told you is for chemically pure water, H two O, and that’s not the water you find in oceans, which contain various minerals and salt, not to mention dirt. So the major reason the oceans look blue, if they do look blue, is indeed that they mirror the sky.

10. “Black Holes have a strong gravitational pull.”

Well, actually the gravitational pull of a black hole with mass M is exactly as large as the gravitational pull of a star with mass M. It’s just that – if you remember Newton’s one-over-r-squared law – the gravitational pull depends on the distance to the object.
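A quick check with Newton’s formula makes the point: for one solar mass at a distance of 1 AU, the pull is computed by literally the same expression for both objects:

```python
# Gravitational acceleration g = G * M / r^2. At the same distance r,
# a black hole and a star of equal mass M pull exactly equally hard.
G = 6.674e-11  # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30   # kg, one solar mass
r = 1.496e11   # m, 1 AU

g_star = G * M / r**2
g_black_hole = G * M / r**2  # identical formula, identical result

print(f"Pull at 1 AU: {g_star:.2e} m/s^2")  # same for both objects
```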

The difference between a black hole and a star is that if you fall onto a star, you’re burned to ashes when you get too close. For a black hole you keep falling towards the center, cross the horizon, and the gravitational pull continues to increase. Theoretically, it eventually becomes infinitely large.

How many did you know? Let me know in the comments.


You can join the chat on this video tomorrow, Thursday Dec 31, at 6pm CET or noon Eastern Time here.

Saturday, December 26, 2020

What is radiation? How harmful is it?

[This is a transcript of the video embedded below.]


Did you know that sometimes a higher exposure to radiation is better than a lower one? And that some researchers have claimed low levels of radioactivity are actually beneficial for your health? Does that make sense? Are air purifiers that ionize air dangerous? And what do we mean by radiation to begin with? That’s what we will talk about today.

First of all, what is radiation? Radiation generally refers to energy transferred by waves or particles. So, if I give you a fully charged battery pack, that’s an energy transfer, but it’s not radiation because the battery is neither a wave nor a particle. On the other hand, if I shout at you and it makes your hair wiggle, that sound was radiation. In this case the energy was transferred by sound waves, which are periodic density fluctuations in the air.

Sound is not something we usually think of as radiation, but technically, it is. Really, all kinds of things are technically radiation. If you drop a pebble into water, for example, then the waves this creates are also radiation.

But what people usually think of, when they talk about radiation, is radiation that’s transferred by either (a) elementary particles, that is, particles which have no substructure for all we currently know, (b) small composite particles, such as protons, neutrons, or even small atomic nuclei, or (c) electromagnetic waves. But electromagnetic waves are, strictly speaking, also made of particles, which are the photons. So, really, all the types of radiation that we usually worry about are made of some kind of particle.

The only exception is gravitational radiation. That’s transferred in waves, and we believe these gravitational waves are made of particles, the gravitons, but we have no evidence for gravitons themselves. Of all the possible types of radiation, though, gravitational radiation is the one least likely to leave any noticeable trace. Therefore, with apologies, I will not consider gravitational waves in the following.

Having said that, if you want to know what radiation does, you need to know four things. First, what particle is it? Second, what’s the energy of the particle? Third, how many of these particles are there? And fourth, what do they do to the human body? We will go through these one by one.

First, the type of particle tells you how likely the radiation is to interact with you. Some particles come in huge amounts, but they basically never interact. They just go through stuff and don’t do anything. For example, the sun produces an enormous number of particles called neutrinos. Neutrinos are electrically neutral, have a small mass, and they just pass through walls and you and indeed, the whole earth. There are about one hundred trillion neutrinos going through your body every second. And you don’t notice.

It’s the same with the particles that make up dark matter. They should be all around us and going through us as we speak, but they interact so rarely with anything, we don’t notice. Or maybe they don’t exist in the first place. In any case, neutrinos and dark matter are particles you clearly don’t need to worry about.

However, other particles interact more strongly, especially if they are electrically charged. That’s because the constituents of all types of matter are also electrically charged. Charged particles in radiation are mostly electrons, which you all know, or muons. Muons are very similar to electrons, just heavier, and they are unstable; they decay into electrons and neutrinos. You can also have charged radiation made of protons, that’s one of the constituents of the atomic nucleus and it’s positively charged, or you can have radiation made of small atomic nuclei. The best known of those are Helium nuclei, which are also called alpha particles.

Besides protons, the other constituents of the atomic nucleus are neutrons. As the name says, they are electrically neutral. They are a special case because they can do a lot of damage even though they do not have an electric charge. That’s because neutrons can enter the atomic nucleus and make the nucleus unstable.

However, neutrons, curiously enough, are actually unstable if they are on their own. If neutrons are not inside an atomic nucleus, they live only for about 10 minutes, then they decay to a proton, an electron and an electron-anti-neutrino. For this reason you don’t encounter single neutrons in nature. So, that too is something you don’t need to worry about.
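For the numbers: the measured mean lifetime of a free neutron is about 879 seconds, and the roughly 10 minutes quoted above is the corresponding half-life:

```python
import math

TAU = 879.0  # s, measured mean lifetime of a free neutron

# Exponential decay: the half-life is tau * ln(2).
half_life = TAU * math.log(2)
print(f"Half-life: {half_life / 60:.1f} minutes")  # about 10.2

# Fraction of free neutrons still around after one hour:
surviving = math.exp(-3600 / TAU)
print(f"Surviving after 1 hour: {surviving:.1%}")  # under 2 percent
```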

Then there’s electromagnetic radiation, which we already talked about the other week. Electromagnetic radiation is made of photons, and photons can interact with anything that is electrically charged. And since atoms have electrically charged constituents, this means electromagnetic radiation can interact with any atom. But whether photons actually do that depends on the amount of energy per photon.

So, first you need to know what kind of particle is in the radiation, because that tells you how likely it is to interact. And then, second, to understand what the radiation can do if it interacts, you need to know how much energy the individual particles in the radiation have. If the energy of the particles in the radiation is large enough to break bonds between molecules, then they are much more likely to be harmful.

The typical binding energy of molecules is similar to the energy you need to pull an electron off an atom. This is called ionization, and radiation that can do that is therefore called “ionizing radiation”. The reason ionizing radiation is harmful is not so much the ionization itself, it’s that if radiation can ionize, you know it can also break molecular bonds.

Ionized atoms or molecules like to undergo chemical reactions. That may be a problem if it happens inside the body. But ionized molecules in the air are actually common, because sunlight can do this ionization, and not something you need to worry about. An air purifier, for example, ionizes some air molecules, usually O two or N two.

The idea is that these ionized molecules will bind to dust and then the dust is charged, so it will stick to the floor or other grounded surfaces. But this ionization in air purifiers does not require ionizing radiation, so it’s not a health risk. Except that air purifiers may also produce ozone, which is not healthy.

Where does ionizing radiation come from? Well, for one, ultraviolet sunlight has enough energy to ionize. But even higher energies can be reached by ionizing radiation that comes from outer space, the so-called cosmic rays.

Most ultraviolet radiation from the sun gets stuck in the stratosphere thanks to ozone. Most cosmic rays are also blocked or at least dramatically slowed down in the upper atmosphere, but part of them still reaches the ground. This already tells you that your exposure to ionizing radiation increases with altitude. In fact, average people like you and me tend to get the highest doses of ionizing radiation on airplanes.

The particles in the primary cosmic radiation are mostly protons, some are small ionized nuclei, and then there’s a tiny fraction of other stuff. Primary here means, it’s the thing that actually comes from outer space. But almost all of these primary cosmic particles hit air molecules in the upper atmosphere, which converts them into a lot of particles of lower energy, usually called a cosmic ray shower. This shower, which rains down on earth, is almost exclusively made of photons, electrons, muons, and neutrinos, which we’ve already talked about.

Ionizing radiation is also emitted by radioactive atoms. The radiation that atoms can emit is of three types: alpha, that’s Helium nuclei, beta, that’s electrons and positrons, and gamma, that’s photons. Radioactive atoms which emit these types of radiation occur naturally in air, rock, soil, and even food. So there is a small amount of background radiation everywhere on earth, no matter where you go, and what you touch.

This then brings us to the third point. If you know what particle it is, and you know what energy it has, you need to know how many of them there are. We measure this in the total energy per time, which is known as power. The power of the radiation is highest if you are close to the source of the radiation. That’s because the particles spread out into space, so the farther away you are, the fewer of them will hit you. The number of particles can also drop very quickly if some of the radiation is absorbed. And the radiation that is the most likely to interact with matter is the least likely to reach you. This is the case, for example, for alpha particles. You can block them with just a sheet of paper.
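The spreading into space follows a simple inverse-square law: an isotropic source of power P distributes its particles over a sphere of area 4πr². A sketch with a hypothetical 1-Watt source:

```python
import math

# Power per area from an isotropic source drops as 1/r^2,
# because the particles spread over a sphere of area 4*pi*r^2.
P = 1.0  # W, hypothetical source power

for r in (1, 2, 10):  # distance in meters
    intensity = P / (4 * math.pi * r**2)
    print(f"r = {r:>2} m: {intensity:.4f} W/m^2")
```

Doubling the distance cuts the power per area to a quarter; at ten times the distance it is down to one percent, and absorption only reduces it further.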

And then there’s the fourth point, which is the really difficult one. How much of that radiation is absorbed by the body and what can it do? There is no simple answer to this. Well, okay, one thing that’s simple is that high amounts of radiation, regardless of what type, can pump a lot of energy into the body, which is generally bad. Most countries therefore have radiation safety regulations that set strict limits on the maximum amount of radiation that humans should be exposed to. If you want to know details, I encourage you to check out these official guides, to which I leave links in the info below the video.

Interestingly enough, more radiation is not always worse. For example, you may remember that if there’s a nuclear accident, people rush to buy iodine pills. That’s because nuclear accidents can release a radioactive isotope of iodine, which may then enter the body through air or food. This iodine will accumulate in the thyroid gland and if it decays, that can damage cells and cause cancer. The pills of normal iodine basically fill up the storage space in your thyroid gland, which means the radioactive substance leaves the body faster.

But. Believe it or not, some people swallow large amounts of radioactive iodine as a medical treatment. This is actually rather common if someone has an overactive thyroid gland, which causes a long list of health issues. It can be treated by medication, but then patients have to take pills throughout their lives, and these pills are not without side effects.

Now, if you give those patients radioactive iodine, that kills off a significant fraction of the cells in the thyroid gland and can solve their problem permanently. This method has been in use since the 1940s, is very effective, and no, it does not increase the risk of thyroid cancer. The thing is that if the radiation dose is high enough, the cells in the thyroid gland will not just be damaged, mutate, and possibly cause cancer. They’ll just die.

Now, this is not to say that more radiation is generally better, certainly not. But it demonstrates that it’s not easy to find out what the health effects of a certain radiation dose are. The physics is simple. But the biology isn’t.

Indeed, some scientists have argued that low doses of ionizing radiation may be beneficial because they encourage the body to use cell-repair mechanisms. This idea is called “radiation hormesis”. Does that make sense? Well. It kind of sounds plausible. But the plausible ideas are the ones you should be most careful with. Several dozen studies have looked at radiation hormesis in the past 20 years. But so far, the evidence has been inconclusive and official radiation safety committees have not accepted it. This does not mean it’s wrong. It just means that, at the moment, it’s unclear whether, and if so under which circumstances, low doses of ionizing radiation may have health benefits, or at least not do damage.

So, I hope you learned something new today!



You can join the chat on this video today (Saturday) at 6pm CET/noon Eastern Time here.

Wednesday, December 23, 2020

How to speak English like Einstein


[This is a transcript of the video embedded above. Parts of the text won’t make sense without the accompanying audio.]

Hi everybody, I’ve been thinking really hard about why you are here. Of course theoretical physics is awesome, but in my experience, that opinion is, sadly enough, not widely shared among the general population. So while I am thrilled to see you’re all super excited about the square well potential in Schrödinger’s equation, I am secretly convinced you’re just here to hear me try to pronounce difficult English words with a German accent. So, today, we’ll have a special feature about How To Speak English Like Einstein.
Albert Einstein: “The scientific method itself would not have led anywhere, it would not even have been formed, without a passionate striving for a clear understanding. Perfection of means and confusion of goals seem, in my opinion, to characterize our age.”
Don’t worry if you don’t speak German, you don’t need to know a single word of German to understand this video. But before we get started, let’s have a look at Professor Einstein’s name, Albert Einstein.

How is that pronounced correctly? Most importantly, the German ST is not pronounced “st” as you would in English, for example in “first” or “start”. The “ST” in Einstein is pronounced “scht”. Einstein.

The German “sch” is similar, but not exactly identical, to the English “sh”. If you are familiar with phonetic spelling, that’s this thing that looks like an integral. You find it in words like “push” or “machine”. It’s a good first approximation to the German “sch”, but if you want to get the German sound right, you have to pull the tongue back in your mouth.

Listen, the English one is “push”. Now pull back your tongue, and you get pusch. Pusch. That’s the sound that goes into Einschtein.

The rest is details. All the German vowels are shifted relative to the English ones, long story, big headache, but just try not to rely on the spelling, just listen. It’s Albert Einstein. Don’t worry about the “r” in Albert, just make that an “a”. Everyone does it and it’ll sound just fine. Albeat. Albeat Einstein.

Ok, so now about that German accent. To speak with a German accent, you have to remember which English sounds do not exist in German. And that’s most importantly, the English “th”, the vanishing “w”, and the “r”. If you replace those with the next closest German sounds, you’ll immediately sound very German.

Let’s use this sentence as example “I remember in February we were still thinking that this would be over relatively soon.” I hope I pronounced this correctly.

Here’s the first step to a German accent. Replace all the “th”s, the “th” with a “z”. Why a “z”? Because that’s what comes out if you put your tongue in the wrong place. That’s what you mean, zat’s what it sounds like. So, you replace “this” with “zis”. And “either” with “eizer”. “Therefore” with “zerefore”, and so on. The example sentence then becomes:

“I remember in February we were still zinking zat zis would be over relatively soon.”
Mayday, mayday. Hello, can you hear us? Can you hear us? Over. We are sinking. We are sinking.

Hello. Ziz is ze German coast guard.

We’re sinking. We’re sinking.

What are you zinking about?
German humor.

Second step. The vanishing English “w”. As in “what” or “wonderful”. That sound doesn’t exist in German either, so you make it a “v”. What becomes vat. Wonderful becomes vonderful. Would becomes vould, and so on. With that, our example sentence now sounds like this:

“I remember in February ve vere still zinking zat zis vould be over relatively soon.”

The third step is probably the most difficult one if you’re an English native speaker. It’s to replace the English “r” with a German r. The German “r” is a short rolling r. Think of a happy cat, it’s purring, it goes “rrrrrr” “rrrr”. Comes from the back of your throat. Like if you’re snoring. Rrrr. Try that. I’ll wait.

Excellent. Now you launch from that into a word. Let’s take the word “right”. “rrrrrrrrrrrrrright” right. Right. There you have it. It sounds very German doesn’t it? We don’t, in German, actually do a lot of rolling with the r, so don’t make that too long. Right. Also, don’t trill the r at the tip of your tongue, like in trust me. No, don’t do that. It should be tRust me.

Some more examples. Friend becomes “fRiend”. Direction becomes diRection. It’s actually a terrible sound.

The example sentence is now: “I Remember in FebRuaRy ve vere still zinking zat zis vould be over Relatively soon.”

Repeat after me, I’ll pause.

“I Remember in FebRuaRy ve vere still zinking zat zis vould be over Relatively soon.”

Great. You are awesome. Have fun with your Einstein English, don’t forget to subscribe and check my Patreon page for more content. Zanks for vatching.

Saturday, December 19, 2020

All you need to know about 5G

The new 5G network technology is currently being rolled out in the United States, Germany, the United Kingdom, and many other countries all over the world. What’s new about it? Does it really use microwaves? Like in microwave ovens? Is that something you should worry about? I began looking into this fully convinced I’d tell you that nah, this is the usual nonsense about cellphones causing cancer. But having looked at it in some more detail, now I’m not so sure.


First of all, what is 5G? 5G is the fifth generation of wireless networks. The installation of antennas is not yet completed, and it will probably take at least several more years to complete, but in some places 5G is already operating, and you can now buy cellphones that use it. What’s it good for? 5G promises to deliver more data, faster, by up to a factor of one hundred, optimistically. It could catapult us into an era where driverless cars and the internet of things have become reality.

How is that supposed to work? 5G uses a variety of improvements in data routing that make it more efficient, but the biggest change, and the one that has attracted the most attention, is that 5G uses a frequency range that the previous generations of wireless networks did not use.

These are the millimeter waves. And, yes, these are the same waves that are being used in the scanners at airport security, the difference is that in the scanners you’re exposed for a second every couple of months or so, while with 5G you’d be sitting in it at low power but possibly for hours a day, depending on how close you live and work to one of the new antennas.

As the name says, millimeter waves have wavelengths in the millimeter range, and the ones used for 5G correspond to frequencies of twenty-four to forty-eight Giga-Hertz.

If that number doesn’t tell you anything, don’t worry, I will give you more context in a moment. For now, let me just say that the new frequencies are about a factor ten higher than the highest frequencies that were previously used for wireless networks.
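The conversion is simply wavelength = c / frequency, and the band edges indeed come out in the millimeter range:

```python
c = 299_792_458  # m/s, speed of light

# Wavelengths at the edges of the 5G high band:
for f_ghz in (24, 48):
    wavelength_mm = c / (f_ghz * 1e9) * 1e3
    print(f"{f_ghz} GHz -> wavelength {wavelength_mm:.1f} mm")
```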

Another thing that’s new about 5G are directional phased-array antennas. Complicated word that basically means the antennas don’t just radiate the signal off into all directions, but they can target a particular direction. And that’s an important difference, if you want to know how the signal strength drops with distance to the antenna. Roughly speaking, it becomes more difficult to know what’s going on.

Because of these new features, conspiracy theories have flourished around 5G and there have been about a hundred incidents, mostly in the Netherlands, Belgium, Ireland, and the UK, where people have burned down or otherwise damaged 5G telephone towers. Dozens of cities, counties, and nations have stopped the installation. There have been protests against the rollout of the 5G technology all over the world. And groups of concerned scientists have written open letters twice, once in 2017 and once in 2019. Each letter attracted a few hundred signatures from scientists. Not a terribly large number, but not nothing either.

Before we can move on, I need to give you some minimal background on the physics, so bear with me for a moment. Wireless technology uses electromagnetic radiation to encode and send information. Electromagnetic radiation is electric and magnetic fields oscillating around each other creating a freely propagating wave that can travel from one place to another. Electromagnetic radiation is everywhere. Light is electromagnetic radiation. Radio stations air music with electromagnetic radiation. If you open an oven and feel the heat, that’s also electromagnetic radiation. These seem to be different phenomena, but physically, they’re all the same thing. The only difference is the wavelength of the oscillation. Commonly, we use different names for electromagnetic radiation depending on that wavelength.

If we can see it, we call it light. Visible light with long wavelengths is red, and at even longer wavelengths, when we can no longer see it, we call it infrared. We can’t see infrared light, but we often still feel that it’s warm. At even longer wavelengths we call the radiation microwaves, and if the wavelengths are even longer, they are called radio waves.

On the other side of visible light, at wavelengths shorter than violet, we have the ultraviolet, and then the X-rays, and gamma-rays. The new millimeter waves are in the high frequency part of microwaves.

Now, we may call electromagnetic radiation a “wave” but those waves are actually quantized, which means they are made of small packs of energy. These small packs of energy are the particles of light, which are called “photons”. You may think it’s an unnecessary complication, to talk about quantization here, but knowing that electromagnetic radiation is made of these particles, the photons, is extremely helpful to understand what the radiation can do.

That’s because the energy of the photons is proportional to the frequency of the radiation, or equivalently, the energy is inversely proportional to the wavelength.

So, a high frequency means a short wavelength, and a large energy per photon. A small frequency means a long wavelength, which means small energy. Again that’s energy per photon.

That the frequency of electromagnetic radiation tells you the energy of the particles in the radiation is so useful because if you want to damage a molecule, you need a certain minimum amount of energy. You need this energy to break the bonds between the atoms that make up the molecule. And so, the most essential thing you need to know to gauge how harmful electromagnetic radiation is, is whether the energy per photon in the radiation is large enough to break molecular bonds, like the bonds that hold together the DNA.

Breaking molecular bonds is not the only way electromagnetic radiation can be harmful, and I will get to the other ways in a few minutes, but it *is* the most direct and important harm electromagnetic radiation can do.

So how much energy do you need to damage a molecule? Damage begins happening just above the high-energy end of visible light, with ultraviolet radiation. That’s the light that gives you a sunburn and that you’ve been told to avoid. It has wavelengths that are just a little bit shorter than visible light, or frequencies and energies that are just a little bit higher.

In terms of energy, ultraviolet radiation has about three to thirty electron volts per photon. An electron volt is just a unit of energy. If that’s unfamiliar to you, it doesn’t matter; you merely need to know that the binding energy of most molecules also lies in the range of a few electron volts.
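A quick sketch of E = h × f for a few frequencies; the 28 GHz and 2.45 GHz values are illustrative picks within the 5G high band and the microwave band:

```python
H_EV = 4.1357e-15  # eV * s, Planck's constant in electron volt seconds

# Energy per photon, E = h * f:
examples = {
    "5G high band, 28 GHz": 28e9,
    "microwave oven, 2.45 GHz": 2.45e9,
    "ultraviolet, 1e15 Hz": 1e15,
}
for name, f in examples.items():
    print(f"{name}: {H_EV * f:.2e} eV per photon")
```

Only the ultraviolet photon reaches the few-electron-volt range of molecular binding energies; the wireless-network photons fall short by more than four orders of magnitude.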

If you want to break a molecule, you need energies above that binding energy, so you need frequencies at or above the ultraviolet. That’s because the energy for the damage has to come with the individual photons in the radiation. If the individual photons do not have enough energy to actually damage the molecule, they either just go through or, sometimes, if they hit a resonance frequency, they’ll wiggle the molecule. If you wiggle molecules that means you warm them up.

So, what matters for the question whether you can damage a molecule is the energy per photon in the radiation, which means the frequency of the radiation, *not* the total energy of all the particles in the radiation, of which there could be many. If you take more particles, but *each* of them has an energy below what’s necessary for damaging a molecule, you’ll just get more wiggling.

All the radiation used for wireless networks, including 5G, uses frequencies way below those necessary to break molecular bonds. It is below even the infrared. So in this regard, there is clearly nothing to worry about.

But. As I mentioned, breaking molecular bonds is not the only way that electromagnetic radiation can harm living tissue. Because tissue is complicated. It’s not just physics. You can also harm tissue just by warming it.

And how much warming you can get from electromagnetic radiation is not determined by the energy per photon; it is determined by the total energy per time that is transferred by all the photons, and by the fraction that is absorbed by the tissue. That total energy transfer per time is called the “power” and it’s commonly measured in Watts. So: The frequency tells you the energy per photon. The power tells you the total energy in photons per time.

For example, if you look at your microwave oven, it probably operates at about 2 GigaHertz, which corresponds to a really small energy per photon, about a million times below the energy required to break molecular bonds.

But a microwave oven operates at maybe four hundred or up to a thousand Watts. And that’s high in terms of power. So, a lot of photons per time. On the other hand, if you have a wireless router at home, it quite possibly operates at a similar frequency as your microwave oven. But a wireless router typically uses something like one hundred milliwatts, that’s ten thousand times less than the microwave oven, and the router radiates into space, not into a closed cavity.
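Since power is just the number of photons per second times the energy per photon, the difference can be sketched like this, taking 800 Watts as an assumed mid-range oven power:

```python
H = 6.626e-34  # J * s, Planck's constant
f = 2.45e9     # Hz, typical frequency of both oven and router

e_photon = H * f  # energy per photon in Joules

# Photons emitted per second = power / energy per photon:
for name, power_watts in (("microwave oven", 800.0), ("wireless router", 0.1)):
    print(f"{name}: {power_watts / e_photon:.1e} photons per second")
```

Each photon carries the same tiny energy; the oven simply emits thousands of times more of them per second, and into a closed cavity rather than open space.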

That’s a relevant difference for a simple geometric reason. If the photons in the electromagnetic radiation distribute in all of the directions, as they do for antennas like your wireless router, then the density of particles will thin out, meaning the power will drop very quickly with distance to the sender. This is why, in wireless communication, the highest power you’ll be exposed to is if you are close to the sender and that is usually your cell phone, not an antenna, because the antennas tend to be on a roof or a mast or in any case, not on your ear.

Ok, to summarize: The frequency tells you the energy per particle and determines what type of damage is possible. The power tells you the number of particles, and it drops very quickly with distance to the source. The power alone does not tell you how much is absorbed by the human body.

Back to 5G. What the 5G controversy is about is whether the electromagnetic radiation from the new antennas poses a health risk.

5G actually uses electromagnetic radiation in three different parts of the spectrum, called the low band, the mid band, and the high band. The frequency of the radiation in all these bands is below that which is required to damage molecules. The frequency of the mid band is indeed comparable to the one your microwave oven is using, but actually, there’s nothing new about this, microwaves have been used by wireless networks for more than two decades.

The radiation in the high band is the new millimeter waves. This band has so far been largely unused for telecommunication purposes simply because it’s not very good for long-range transmission. The electromagnetic waves in this range do not travel very far and can get blocked by walls, trees, and even humans.

Therefore, the idea behind 5G is to use a short-range network, made of the so-called “small cells” for the millimeter waves. These small cells have to be distributed at distances of about one hundred meters or so.

The small cells communicate with macro cells that use the mid and low bands, with antennas that operate at higher power and do the long-range transmission. So, a fully functional 5G network is likely to increase exposure to millimeter waves, which have not previously been used for cell phones.

This means the people who cite the lack of correlation between cell phone use and cancer incidence in the past 20 years are missing the point. These studies don’t tell you anything about the 5G high band because that wasn’t previously in use.

Now the thing is, if you look at what is known about the health risks of long-term exposure to the new millimeter wave band, there are basically no studies. We know that millimeter waves cannot penetrate deeply into the human body, but we know that at high power, they warm the skin and irritate the eyes. Exactly what power is too much in the long run, no one knows, because there just hasn’t been enough research.

Here is, for example, a meta-review published about a year ago, which came to the conclusion:
“The available studies do not provide adequate and sufficient information for a meaningful safety assessment.”

And here we have Rob Waterhouse, vice president of a telecommunication company in the United States:
Waterhouse admits that although millimeter waves have been used for many different applications—including astronomy and military applications—the effect of their use in telecommunications is not well understood… “The majority of the scientific community does not think there’s an issue. However, it would be unscientific to flat out say there are no reasons to worry.”
That’s not very reassuring. And the World Health Organization writes:

“no adverse health effect has been causally linked with exposure to wireless technologies… but, so far, only a few studies have been carried out at the frequencies to be used by 5G.”

So the protests that you see against 5G, I am afraid to say, are not entirely unjustified. Don’t get me wrong, damaging other people’s property is certainly not a legitimate response. But I can understand the concern. We have no reason to think 5G is a health risk. Indeed, it is reasonable to think it is not a health risk, given that this radiation is of low energy and scatters in the upper layers of the skin, but there is very little data on what the effects of long-term exposure may be.

How should one proceed in such a situation? Depends on how willing you are to tolerate risk. And that’s not a question for science, that’s a question for politics. What do you think? Let me know in the comments.



You can join the chat on this week's topic on Saturday, Dec 19, at noon Eastern Time/6pm CET here.

Saturday, December 12, 2020

Are Singularities Real?

Last week we discussed whether infinity is real, and came to the conclusion it is not. This week I want to talk about a closely related topic, singularities. What are singularities, where do they appear in physics, and are they real?


A singularity is a place beyond which you cannot continue. But singularities in mathematics can be rather dull. In mathematics, a singularity may just be a location where an object, for example a function, is not defined. And it may be undefined simply because you didn’t define it.

If I define, for example, a piecewise function that has the value one for x strictly smaller than zero and for x strictly larger than zero, then that function is not defined exactly at zero. You can’t go from left to right. So, that’s a singularity. It is, however, a singularity that is easy to remove, just by filling in the missing point. Correspondingly, this is also called a removable singularity.

But many functions have singularities that are more interesting than that. The simplest example that’s still interesting is the function one over x, which has a singularity at zero. This singularity cannot be removed. There is no point you could fit in at zero that would make this function continuous. You won’t get from the left to the right.

For the function one over x, that’s because the function diverges when x gets close to zero, so the value of the function becomes infinitely large. However, and this is a really important point, a singularity does not necessarily have to come with anything infinite.

Take for example the function sine of one over x, which I have plotted here. This function has a singularity at x equals zero, but that’s not because the value of the function becomes infinitely large. It’s because there’s no such thing as the value of the sine function at one over zero.

For a mathematician, a function doesn’t even have to look odd to have a singularity. The best example is the function e to the minus one over x squared. This looks perfectly fine if you plot it. But this function has a really weird property. If you calculate the value of the function and the derivatives of the function at zero, you will find that they are all exactly zero.

What this means is that if you reach zero from one side, you don’t know how to continue the function. There are infinitely many ways to continue from there, all of which will perfectly fit to the other side. For example, you could continue with a function that’s zero everywhere and glue this onto the other side. Or you could take the function e to the minus pi over x squared. This type of singularity is called an “essential singularity”.
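How flat this function is near zero can be checked numerically; the values and a finite-difference derivative both vanish faster than any power of x. A quick sketch using only the standard library:

```python
import math

def f(x):
    """e^(-1/x^2), with the value at zero filled in by its limit."""
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Near zero the function is astonishingly flat: even at x = 0.1
# its value is e^(-100), far below anything a plot could show.
print(f(0.1))

# A centered finite-difference derivative at small x is equally tiny,
# consistent with all derivatives vanishing at zero.
h = 1e-6
x = 0.1
deriv = (f(x + h) - f(x - h)) / (2 * h)
print(deriv)

# Compare with the power x**10, which is "small" near zero but still
# enormously larger than f there.
print(0.1**10 / f(0.1))
```

The last line shows that even the tenth power of x dwarfs the function near zero, which is the numerical face of the essential singularity.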

Okay, so singularities are arguably a thing in mathematics. But do singularities appear in reality? Not for all we currently know. Physicists use mathematics to describe nature, and, yes, sometimes this mathematics contains singularities. But these singularities are in all known cases a signal that the theory has been applied in a range where it’s no longer valid.

Take for example water dripping off a tap. The surface of the droplet has a singularity where the drop pinches off. At this point the curvature becomes infinitely large. However, this happens only if you describe water as a fluid, which is an approximation in which you ignore that really the water is made of atoms. If you look closely at the pinch-off point of the droplet, then there is no singularity, there are just atoms.

There are other examples where singularities appear in physics. For example, in phase transitions, like the transition from a liquid to a solid, some quantities can become infinitely large. But again this is really a consequence of using certain bulk descriptions in approximation. If you actually look closely, there isn’t really anything singular at a phase transition.

There is one exception to this, and that’s black holes.

Black holes are solutions of Einstein’s theory of general relativity. They have a singularity in the center. Because it’s a common misunderstanding, let me emphasize that there is no singularity at the black hole horizon. There is actually nothing particularly interesting happening at the horizon. It’s just the boundary of a region from which you cannot get out. Instead, once you cross the horizon, you will inevitably fall into the singularity.

And in a nutshell, this is pretty much what Hawking and Penrose’s singularity theorems are about. That in general relativity you can get situations where singularities are unavoidable because all possible paths lead there.

But what happens if you fall into a black hole singularity? Well, you die before you reach the singularity because tidal forces rip you to pieces. But if your remainders reach the singularity, then that’s just the end. There’s no more space or time beyond this. There’s just nothing.

At least that’s what the mathematics says. So what does the physics say? Is the black hole singularity “real”? No one knows. Because we cannot see what happens inside of a black hole. Whatever happens there is really just pure speculation.

Most physicists believe that the singularity in black holes is not real, but that it is instead of the same type as the other singularities in physics. That is, it just signals that the theory, in this case general relativity, breaks down and to make meaningful predictions, one needs a better theory. For the black hole singularity, that better theory would be a theory for the quantum behavior of space and time, a theory of “quantum gravity” as it’s called.

Some of you may wonder now what’s with the technological singularity. The technological singularity usually refers to a point in time where machines become intelligent enough to improve themselves, creating a runaway effect which is supposedly impossible to predict. It’s called a singularity because of this impossibility to make a prediction beyond it, which is indeed very similar to the mathematical definition of a singularity.

But of course the technological singularity is not a real singularity. It may be in practice impossible to predict what happens afterwards but lots of things are in practice impossible to predict. There is nothing specifically unpredictable about the laws of nature at a technological singularity, if that ever happens in the first place.

In summary, singularities exist in mathematics, but we have no evidence that singularities also exist in nature. And given that, as we saw earlier, certain types of singularities do not even require any quantity to become infinite, it is not impossible that one day we may discover an actual singularity in nature. In contrast to infinity, singularities are not a priori unscientific.



You can join the chat about this week's topic on Saturday, Dec 12, 12PM EST / 6PM CET.

Saturday, December 05, 2020

Is Infinity Real?

[This is a transcript of the video embedded below]

Is infinity real? Or is it just mathematical nonsense that you get when you divide by zero? If infinity is not real, does this mean zero also is not real? And what does it mean that infinity appears in physics? That’s what we will talk about today.


Most of us encounter infinity the first time when we learn to count, and realize that you can go on counting forever. I know it’s not a terribly original observation, but this lack of an end to counting because you can always add one and get an even larger number is the key property of infinity. Infinity is the unbounded. It’s larger than any number you can think of. You could say it’s unthinkably large.

Okay, it isn’t quite that simple because, odd as this may sound, there are different types of infinity. The amount of natural numbers, 1,2,3 and so on is just the simplest type of infinity, called “countable infinity”. And the natural numbers are in a very specific way equally infinite as other sets of numbers, because you can count these other sets using the natural numbers.

Formally this means a set of numbers is similarly infinite as the natural numbers, if you have a one-to-one map from the natural numbers to that other set. If there is such a map, then the two sets are of the same type of infinity.

For example, if you add the number zero to the natural numbers – so you get the set zero, one, two, three, and so on – then you can map the natural numbers to this by just subtracting one from each natural number. So the set of natural numbers and the set of the natural numbers plus the number zero are of the same type of infinity.

It’s the same for the set of all integers Z, which is zero, plus minus one, plus minus two, and so on. You can uniquely assign a natural number to each integer, so the integers are also countably infinite.
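The pairing between natural numbers and integers can be written down explicitly. Here is one standard choice of such a map, sketched in code (the particular formula is just one of many possible bijections):

```python
def nat_to_int(n):
    """Map 0, 1, 2, 3, 4, ... to 0, 1, -1, 2, -2, ...: an explicit
    one-to-one pairing showing the integers are countably infinite."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

def int_to_nat(z):
    """The inverse map, confirming the pairing misses nothing."""
    return 2 * z - 1 if z > 0 else -2 * z

print([nat_to_int(n) for n in range(9)])

# Round-tripping through both maps returns every number unchanged,
# which is exactly what "one-to-one" means.
assert all(int_to_nat(nat_to_int(n)) == n for n in range(1000))
```

Since every natural number lands on a distinct integer and every integer is hit, the two sets are of the same type of infinity, just as the text says.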

The rational numbers, that is, the set of all fractions of integers, are also countably infinite. The real numbers, which contain all numbers with infinitely many digits after the point, are however not countably infinite. You could say they are even more infinite than the natural numbers. There are actually infinitely many types of infinity, but these two, those which correspond to the natural and real numbers, are the two most commonly used ones.

Now, that there are many different types of infinity is interesting, but more relevant for using infinity in practice is that most infinities are actually the same. As a consequence of this, if you add one to infinity, the result is still the same infinity. And if you multiply infinity with two, you just get the same infinity again. If you divide one by infinity, you get a number with an absolute value smaller than anything, so that’s zero. But you get the same thing if you divide two or fifteen or square root of eight by infinity. The result is always zero.

I hope there are no mathematicians watching this, because technically one should not write down these relations as equations. Really they are statements about the type of infinity. The first, for example, just means if you add one to infinity, then the result is the same type of infinity.

The problem with writing these relations as equations is that it can easily go wrong. See, you could for example try to subtract infinity on both sides of this equation, giving you nonsense like one equals zero. And why is that? It’s because you forgot that the infinity here really only tells you the type of infinity. It’s not a number. And if the only thing you know about two infinities is that they are of the same type, then the difference between them can be anything.

It’s even worse if you do things like dividing infinity by infinity or multiplying infinity with zero. In this case, not only can the result be any number, it could also be any kind of infinity.
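Floating-point arithmetic mimics exactly this behavior: Python’s `float("inf")` absorbs additions and multiplications, while the indeterminate combinations come out as “not a number”:

```python
import math

inf = float("inf")

# Adding to or scaling infinity gives back the same infinity,
# and dividing a finite number by it gives zero...
assert inf + 1 == inf
assert 2 * inf == inf
assert 1 / inf == 0.0
assert 15 / inf == 0.0

# ...but the indeterminate combinations are flagged as NaN,
# the floating-point way of saying "could be anything".
assert math.isnan(inf - inf)
assert math.isnan(inf / inf)
assert math.isnan(0 * inf)

print("all infinity rules check out")
```

The NaN results are the computer's version of the warning in the text: knowing only that two infinities are of the same type does not fix their difference or ratio.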

This whole infinity business certainly looks like a mess, but mathematicians actually know very well how to deal with infinity. You just have to be careful to keep track of where your infinity comes from.

For example, suppose you have a function like x squared that goes to infinity when x goes to infinity. You divide it by an exponential function that also goes to infinity with x. So you are dividing infinity by infinity. This sounds bad.

But in this case you know how you get to infinity and therefore you can unambiguously calculate the result. In this case, the result is zero. The easiest way to see this is to plot this fraction as a function of x, as I have done here.
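Instead of a plot, one can also just evaluate the fraction for growing x and watch the exponential win. A quick standard-library sketch:

```python
import math

def ratio(x):
    """x^2 divided by e^x: infinity over infinity as x grows, but
    with a perfectly well-defined limit of zero."""
    return x**2 / math.exp(x)

# The exponential outruns the polynomial: the ratio collapses to zero.
for x in (1, 10, 50, 100):
    print(x, ratio(x))
```

Because the exponential grows faster than any polynomial, this particular "infinity over infinity" unambiguously gives zero, just as the text describes.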

If you know where your infinities come from, you can also subtract one from another. Indeed, physicists do this all the time in quantum field theory. You may for example have terms like 1/epsilon, 1/epsilon squared, and the logarithm of epsilon. Each of these terms will give you infinity for epsilon to zero. But if you know that two terms are of the same infinity, so they are the same function of epsilon, then you can add or subtract them like numbers. In physics, usually the goal of doing this is to show that at the end of a calculation they all cancel each other and everything makes sense.

So, mathematically, infinity is interesting, but not problematic. As far as the math is concerned, we know how to deal with infinity just fine.

But is infinity real? Does it exist? Well, it arguably exists in the mathematical sense, in the sense that you can analyze its properties and talk about it as we just did. But in the scientific sense, infinity does not exist.

That’s because, as we discussed previously, scientifically we can only say that an element of a theory of nature “exists” if it is necessary to describe observations. And since we cannot measure infinity, we do not actually need it to describe what we observe. In science, we can always replace infinity with a very large but finite number. We don’t do this. But we could.

Here is an example that demonstrates how mathematical infinities are not measurable in reality. Suppose you have a laser pointer and you swing it from left to right, and that makes a red dot move on a wall in a far distance. What’s the speed by which the dot moves on the wall?

That depends on how fast you move the laser pointer and how far away the wall is. The farther away the wall, the faster the dot moves with the swing. Indeed, it will eventually move faster than light. This may sound perplexing, but note that the dot is not actually a thing that moves. It’s just an image which creates the illusion of a moving object. What is actually moving is the light from the pointer to the wall and that moves just with the speed of light.

Nevertheless, you can certainly observe the motion of the dot. So, we can ask then, can the dot move infinitely fast, and can we therefore observe something infinite?

It seems that for the dot to move infinitely fast you’d have to place the wall infinitely far away, which you cannot do. But wait. You could instead tilt the wall at an angle to you. The more you tilt it, the faster the dot moves across the surface of the wall as you swing the laser pointer. Indeed, if the wall is parallel to the direction of the laser beam, it seems the dot would be moving infinitely fast across the wall. Mathematically this happens because the value of the tangent function at pi over two is infinity. But does this happen in reality?
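The geometry here can be made concrete. For a wall at perpendicular distance d, a pointer swung at angular speed ω puts the dot at position d·tan(θ), so the dot’s speed is ωd/cos²(θ), which blows up as θ approaches π over two. A sketch with made-up numbers (ten kilometers to the wall, half a turn per second):

```python
import math

c = 3.0e8        # speed of light in m/s
d = 1.0e4        # distance to the wall in meters (made-up value)
omega = math.pi  # swing rate in rad/s, half a turn per second (made up)

def dot_speed(theta):
    """Speed of the dot: the time derivative of its position d*tan(theta),
    which is omega * d / cos(theta)^2."""
    return omega * d / math.cos(theta)**2

for deg in (0, 45, 80, 89, 89.9):
    v = dot_speed(math.radians(deg))
    print(f"{deg:6.1f} deg: {v:.3g} m/s = {v / c:.3g} c")
```

With these numbers the dot formally crosses the speed of light well before the beam runs parallel to the wall, and the formula diverges at ninety degrees, which is the tangent-function infinity mentioned in the text.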

In reality, the wall will never be perfectly flat, so there are always some points that will stick out and that will smear out the dot. Also, you could not actually measure that the dot is at exactly the same time on both ends of the wall because you cannot measure times arbitrarily precisely. In practice, the best you can do is to show that the dot moved faster than some finite value.

This conclusion is not specific to the example with the laser pointer, this is generally the case. Whenever you try to measure something infinite, the best you can do in practice is to say it’s larger than something finite that you have measured. But to show that it was really infinite you would have to show the result was larger than anything you could possibly have measured. And there’s no experiment that can show that. So, infinity is not real in the scientific sense.

Nevertheless, physicists use infinity all the time. Take for example the size of the universe. In most contemporary models, the universe is infinitely large. But this is a statement about a mathematical property of these models. The part of the universe that we can actually observe only has a finite size.

And the issue that infinity is not measurable is closely related to the problem with zero. Take for example the mathematical abstraction of a point. Physicists use this all the time when they deal with point particles. A point has zero size. But you would have to measure infinitely precisely to show that you really have something of zero size. So you can only ever show it’s smaller than whatever your measurement precision allows.

Infinity and zero are everywhere in physics. Even in seemingly innocent things like space, or space-time. The moment you write down the mathematics for space, you assume there are no gaps in it. You assume it’s a perfectly smooth continuum, made of infinitely many infinitely small points.

Mathematically, that’s a convenient assumption because it’s easy to work with. And it seems to be working just fine. That’s why most physicists do not worry all that much about it. They just use infinity as a useful mathematical tool.

But maybe using infinity and zero in physics brings in mistakes because these assumptions are not only not scientifically justified, they are not scientifically justifiable. And this may play a role in our understanding of the cosmos or quantum mechanics. This is why some physicists, like George Ellis, Tim Palmer, and Nicolas Gisin have argued that we should be formulating physics without using infinities or infinitely precise numbers.

You can join the chat on this video on

Saturday 12PM EST / 6PM CET
Sunday 2PM EST / 8PM CET

Saturday, November 28, 2020

Magnetic Resonance Imaging

[This is a transcript of the video embedded below. Some of the text may not make sense without the animations in the video.]

Magnetic Resonance Imaging is one of the most widely used imaging methods in medicine. A lot of you have probably had one taken. I have had one too. But how does it work? This is what we will talk about today.


Magnetic Resonance Imaging, or MRI for short, used to be called Nuclear Magnetic Resonance, but it was renamed out of fear that people would think the word “nuclear” has something to do with nuclear decay or radioactivity. But the reason it was called “nuclear magnetic resonance” has nothing to do with radioactivity, it is just that the thing which resonates is the atomic nucleus, or more precisely, the spin of the atomic nucleus.

Nuclear magnetic resonance was discovered already in the nineteen-forties by Felix Bloch and Edward Purcell. They received a Nobel Prize for their discovery in nineteen-fifty-two. The first human body scan using this technology was done in New York in nineteen-seventy-seven. Before I tell you how the physics of Magnetic Resonance Imaging works in detail, I first want to give you a simplified summary.

If you put an atomic nucleus into a time-independent magnetic field, it can spin. And if it does spin, it spins with a very specific frequency, called the Larmor frequency, named after Joseph Larmor. This frequency depends on the type of nucleus. Usually the nucleus does not spin, it just sits there. But if you, in addition to the time-independent magnetic field, let an electromagnetic wave pass by the nucleus at just exactly the right resonance frequency, then the nucleus will extract energy from the electromagnetic wave and start spinning.

After the electromagnetic wave has travelled through, the nucleus will slowly stop spinning and release the energy it extracted from the wave, which you can measure. How much energy you measure depends on how many nuclei resonated with the electromagnetic wave. So, you can use the strength of the signal to tell how many nuclei of a particular type were in your sample.

For magnetic resonance imaging in the human body one typically targets hydrogen nuclei, of which there are a lot in water and fat. How bright the image is then tells you basically the amount of fat and water. Though one can also target other nuclei and measure other quantities, so some magnetic resonance images work differently. Magnetic Resonance Imaging is particularly good for examining soft tissue, whereas for a broken bone you’d normally use an X-ray.

In more detail, the physics works as follows. Atomic nuclei are made of neutrons and protons, and the neutrons and protons are each made of three quarks. Quarks have spin one half each and their spins combine to give the neutrons and protons also spin one half. The neutrons and protons then combine their spins to give a total spin to atomic nuclei, which may or may not be zero, depending on the number of neutrons and protons in the nucleus.

If the spin is nonzero, then the atomic nucleus has a magnetic moment, which means it will spin in a magnetic field at a frequency that depends on the composition of the nucleus and the strength of the magnetic field. This is the Larmor frequency that nuclear spin resonance works with.

If you have atomic nuclei with spin in a strong magnetic field, then their spins will align with the magnetic field. Suppose we have a constant and homogeneous magnetic field pointing into direction z, then the nuclear spins will preferably also point in direction z. They will not all do that, because there is always some thermal motion. So, some of them will align in the opposite direction, though this is not energetically the most favorable state. Just how many point in each direction depends on the temperature. The net magnetic moment of all the nuclei is then called the magnetization, and it will point in direction z.

In an MRI machine, the z-direction points into the direction of the tube, so usually that’s from head to toe.

Now, if the magnetization does for whatever reason not point into direction z, then it will circle around the z direction, or precess, as the physicists say, in the transverse directions, which I have called x and y. And it will do that with a very specific frequency, which is the previously mentioned Larmor frequency. The Larmor frequency depends on a constant which itself depends on the type of nucleus, and is proportional to the strength of the magnetic field. Keep this in mind because it will become important later.
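For hydrogen nuclei, the constant in question (the gyromagnetic ratio divided by 2π) is about 42.58 megahertz per tesla, a standard literature value not quoted in the text. A quick sketch of the resulting resonance frequencies:

```python
# Larmor frequency f = gamma_bar * B for protons (hydrogen nuclei).
# gamma_bar ~ 42.58 MHz/T is the standard literature value, assumed here.
GAMMA_BAR_MHZ_PER_T = 42.58

def larmor_mhz(b_tesla):
    """Proton resonance frequency in MHz for a given field strength."""
    return GAMMA_BAR_MHZ_PER_T * b_tesla

# Common clinical and research field strengths, in tesla.
for b in (1.5, 3.0, 7.0):
    print(f"{b} T -> {larmor_mhz(b):.1f} MHz")
```

The results, roughly 64 to 300 megahertz, match the frequency range for MRI scanners quoted later in this post.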

The key feature of magnetic resonance imaging is now that if you have a magnetization that points in direction z because of the homogenous magnetic field, and you apply an additional, transverse magnetic field that oscillates at the resonance frequency, then the magnetization will turn away from the z axis. You can calculate this with the Bloch-equation, named after the same Bloch who discovered nuclear magnetic resonance in the first place. For the following I have just integrated this differential equation. For more about differential equations, please check my earlier video.

What you see here is the magnetization that points in the z-direction, so that’s the direction of the time-independent magnetic field. And now a pulse of an electromagnetic wave comes through. This pulse is not at the resonance frequency. As you can see, it doesn’t do much. And here is a pulse that is at the resonance frequency. As you see, the magnetization spirals down. How far it spirals down depends on how long you apply the transverse magnetic field. Now watch what happens after this. The magnetization slowly returns to its original direction.

Why does this happen? There are two things going on. One is that the nuclear spins interact with their environment, this is called spin-lattice relaxation and brings the z-direction of the magnetization back up. The other thing that happens is that the spins interact with each other, which is called spin-spin relaxation and it brings the transverse magnetization, the one in x and y direction, back to zero.

Each of these processes has a characteristic decay time, usually called T_1 and T_2. For soft tissue, these decay times are typically in the range of ten milliseconds to one second. What you measure in an MRI scan is then roughly speaking the energy that is released in the return of the nuclear spins to the z-direction and the time that takes. Somewhat less roughly speaking, you measure what’s called the free induction decay.
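A minimal version of the integration described above can be sketched with a forward-Euler step of the Bloch equations in the rotating frame, on resonance. The field strength, pulse length, and T1/T2 values below are made-up illustration numbers, not those used for the video’s animation:

```python
import math

def simulate(t1=0.5, t2=0.1, w1=2 * math.pi * 5, pulse_s=0.05,
             total_s=3.0, dt=1e-4):
    """Forward-Euler integration of the Bloch equations in the rotating
    frame, on resonance. During the pulse the magnetization rotates
    away from z; afterwards it relaxes back with time constants T1
    (spin-lattice) and T2 (spin-spin)."""
    mx, my, mz = 0.0, 0.0, 1.0   # start aligned with the static field
    t = 0.0
    while t < total_s:
        w = w1 if t < pulse_s else 0.0   # transverse field on, then off
        dmx = -mx / t2
        dmy = w * mz - my / t2
        dmz = -w * my + (1.0 - mz) / t1
        mx, my, mz = mx + dmx * dt, my + dmy * dt, mz + dmz * dt
        t += dt
    return mx, my, mz

mx, my, mz = simulate()
print(f"after relaxation: mz = {mz:.3f}")
```

With these numbers the pulse rotates the magnetization by about ninety degrees, after which the transverse components decay with T2 and the z-component recovers with T1 toward its rest value, which is the behavior the animation shows.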

Another way to look at this process of resonance and decay is to look at the curve which the tip of the magnetization vector traces out in three dimensions. I have plotted this here for the resonant case. Again you see it spirals down during the pulse, and then relaxes back into the z-direction.

So, to summarize, for magnetic resonance imaging you have a constant magnetic field in one direction, and then you have a transverse electromagnetic wave, which oscillates at the resonance frequency. For this transverse field, you only use a short pulse which makes the nuclear spins point in the transverse direction. Then they turn back to the z-direction, and you can measure this.

I have left out one important thing, which is how do you manage to get a spatially resolved image and not just a count of all the nuclei. You do this by using a magnetic field with a strength that slightly changes from one place to another. Remember that I pointed out the resonance frequency is proportional to the magnetic field. Because of this, if you use a magnetic field that changes from one place to another, you can selectively target certain nuclei at a particular position. Usually one does that by using a gradient for the magnetic field, so then the images you get are slices through the body.
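The selection works because the resonance condition picks out one position along the gradient. A sketch with made-up numbers (a 3 tesla main field, a 10 millitesla-per-meter gradient, and the standard proton constant):

```python
# Resonance frequency along a field gradient: f(z) = gamma_bar*(B0 + G*z).
# Only nuclei near the z where f(z) matches the pulse frequency respond.
# B0 and G are made-up illustration values; gamma_bar is the standard
# proton value.
GAMMA_BAR = 42.58e6   # Hz per tesla, protons
B0 = 3.0              # tesla, main field
G = 0.01              # tesla per meter, gradient strength

def resonance_hz(z):
    """Resonance frequency of protons at position z along the gradient."""
    return GAMMA_BAR * (B0 + G * z)

def selected_z(pulse_hz):
    """Invert the resonance condition: which slice a pulse frequency hits."""
    return (pulse_hz / GAMMA_BAR - B0) / G

# A pulse about 42.58 kHz above the center frequency selects a slice
# roughly 10 cm along the gradient direction.
f0 = resonance_hz(0.0)
print(selected_z(f0 + 42580.0))   # position in meters
```

Shifting the pulse frequency thus moves the selected slice through the body, which is how the scanner builds up images slice by slice.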

The magnetic fields used in MRI scanners for medical purposes are incredibly strong, typically a few Tesla. For comparison, that’s about a hundred thousand times stronger than the magnetic field of planet earth, and only a factor two or three below the strength of the magnets used at the Large Hadron Collider.

These strong magnetic fields do not harm the body, you just have to make sure to not take magnetic materials with you in the scanner. The resonance frequencies that fit to these strong magnetic fields are in the range of fifty to three-hundred Megahertz. These energies are far too small to break chemical bonds, which is why the electromagnetic waves used in Magnetic Resonance Imaging do not damage cells. There is however a small amount of energy deposited into the tissue by thermal motion, which can warm the tissue, especially at the higher frequency end. So one has to take care not to do these scans for too long.

So if you have an MRI taken, remember that it literally makes your atomic nuclei spin.

Saturday, November 21, 2020

Warp Drive News. Seriously!

[This is a transcript of the video embedded below.]

As many others, I became interested in physics by reading too much science fiction. Teleportation, levitation, wormholes, time-travel, warp drives, and all that, I thought was super-fascinating. But of course the depressing part of science fiction is that you know it’s not real. So, to some extent, I became a physicist to find out which science fiction technologies have a chance to one day become real technologies. Today I want to talk about warp drives because I think on the spectrum from fiction to science, warp drives are on the more scientific end. And just a few weeks ago, a new paper appeared about warp drives that puts the idea on a much more solid basis.


But first of all, what is a warp drive? In the science fiction literature, a warp drive is a technology that allows you to travel faster than the speed of light or “superluminally” by “warping” or deforming space-time. The idea is that by warping space-time, you can beat the speed of light barrier. This is not entirely crazy, for the following reason.

Einstein’s theory of general relativity says you cannot accelerate objects from below to above the speed of light because that would take an infinite amount of energy. However, this restriction applies to objects in space-time, not to space-time itself. Space-time can bend, expand, or warp at any speed. Indeed, physicists think that the universe expanded faster than the speed of light in its very early phase. General Relativity does not forbid this.

There are two points I want to highlight here: First, it is a really common misunderstanding, but Einstein’s theories of special and general relativity do NOT forbid faster-than-light motion. You can very well have objects in these theories that move faster than the speed of light. Neither does this faster-than-light travel necessarily lead to causality paradoxes. I explained this in an earlier video. Instead, the problem is that, according to Einstein, you cannot accelerate from below to above the speed of light. So the problem is really crossing the speed of light barrier, not being above it.

The second point I want to emphasize is that the term “warp drive” refers to a propulsion system that relies on the warping of space-time, but just because you are using a warp drive does not mean you have to go faster than light. You can also have slower-than-light warp drives. I know that sounds somewhat disappointing, but I think it would be pretty cool to move around by warping spacetime at any speed.

Warp drives were a fairly vague idea until in 1994, Miguel Alcubierre found a way to make them work in General Relativity. His idea is now called the Alcubierre Drive. The explanation that you usually get for how the Alcubierre Drive works, is that you contract space-time in front of you and expand it behind you, which moves you forward.

That didn’t make sense to you? Just among us, it never made sense to me either. Because why would this allow you to break the speed of light barrier? Indeed, if you look at Alcubierre’s mathematics, it does not explain how this is supposed to work. Instead, his equations say that this warp drive requires large amounts of negative energy.

This is bad. It’s bad because, well, there isn’t any such thing as negative energy. And even if you had this negative energy that would not explain how you break the speed of light barrier. So how does it work? A few weeks ago, someone sent me a paper that beautifully sorts out the confusion surrounding warp drives.

To understand my problem with the Alcubierre Drive, I have to tell you briefly how General Relativity works. General Relativity works by solving Einstein’s field equations. Here they are. I know this looks somewhat intimidating, but the overall structure is fairly easy to understand. It helps if you try to ignore all these small Greek indices, because they really just say that there is an equation for each combination of directions in space-time. More important is that on the left side you have these R’s. The R’s quantify the curvature of space-time. And on the right side you have T. T is called the stress-energy tensor and it collects all kinds of energy densities and mass densities. That includes pressure and momentum flux and so on. Einstein’s equations then tell you that the distribution of different types of energy determines the curvature, and the curvature in turn determines how the distribution of the stress-energy changes.
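For readers of this transcript who cannot see the graphics: in a standard textbook form (conventions for units and signs vary between texts), the field equations just described read

```latex
R_{\mu\nu} - \frac{1}{2} R \, g_{\mu\nu} = 8\pi G \, T_{\mu\nu}
```

where $R_{\mu\nu}$ and $R$ quantify the curvature, $g_{\mu\nu}$ is the space-time metric, $T_{\mu\nu}$ is the stress-energy tensor, and the indices $\mu, \nu$ run over the four directions of space-time.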

The way you normally solve these equations is to use a distribution of energies and masses at some initial time. Then you can calculate what the curvature is at that initial time, and you can calculate how the energies and masses will move around and how the curvature changes with that.

So this is what physicists usually mean by a solution of General Relativity. It is a solution for a distribution of mass and energy.

But. You can instead just take any space-time, put it into the left side of Einstein’s equations, and then the equations will tell you what the distribution of mass and energy would have to be to create this space-time.

On a purely technical level, these space-times will then indeed be “solutions” to the equations for whatever is the stress energy tensor you get. The problem is that in this case, the energy distribution which is required to get a particular space-time is in general entirely unphysical.

And that’s the problem with the Alcubierre Drive. It is a solution of General Relativity, but in and by itself, this is a completely meaningless statement. Any space-time will solve the equations of General Relativity, provided you assume that you have a suitable distribution of masses and energies to create it. The real question is therefore not whether a space-time solves Einstein’s equations, but whether the distribution of mass and energy required to make it a solution to the equations is physically reasonable.

And for the Alcubierre drive the answer is multiple no’s. First, as I already said, it requires negative energy. Second, it requires a huge amount of that. Third, the energy is not conserved. Instead, what you actually do when you write down the Alcubierre space-time, is that you just assume you have something that accelerates it beyond the speed of light barrier. That it’s beyond the barrier is why you need negative energies. And that it accelerates is why you need to feed energy into the system. Please check the info below the video for a technical comment about just what I mean by “energy conservation” here.

Let me then get to the new paper. The new paper is titled “Introducing Physical Warp Drives” and was written by Alexey Bobrick and Gianni Martire. I have to warn you that this paper has not yet been peer reviewed. But I have read it and I am pretty confident it will make it through peer review.

In this paper, Bobrick and Martire describe the geometry of a general warp-drive space-time. The warp-drive geometry is basically a bubble. It has an inside region, which they call the “passenger area”. In the passenger area, space-time is flat, so there are no gravitational forces. Then the warp drive has a wall of some sort of material that surrounds the passenger area. And then it has an outside region. This outside region has the gravitational field of the warp-drive itself, but the gravitational field falls off and in the far distance one has normal, flat space-time. This is important so you can embed this solution into our actual universe.

What makes this fairly general construction a warp drive is that the passage of time inside of the passenger area can be different from that outside of it. That’s what you need if you have normal objects, like your warp drive passengers, and want to move them faster than the speed of light. You cannot break the speed of light barrier for the passengers themselves relative to space-time. So instead, you keep them moving normally in the bubble, but then you move the bubble itself superluminally.

As I explained earlier, the relevant question is then, what does the wall of the passenger area have to be made of? Is this a physically possible distribution of mass and energy? Bobrick and Martire explain that if you want superluminal motion, you need negative energy densities. If you want acceleration, you need to feed energy and momentum into the system. And the only reason the Alcubierre Drive moves faster than the speed of light is that one simply assumed it does. Suddenly it all makes sense!

I really like this new paper because to me it has really demystified warp drives. Now, you may find this somewhat of a downer because really it says that we still do not know how to accelerate to superluminal speeds. But I think this is a big step forward because now we have a much better mathematical basis to study warp drives.

For example, once you know what the warped space-time looks like, the question comes down to how much energy you need to achieve a certain acceleration. Bobrick and Martire show that for the Alcubierre drive you can decrease the amount of energy by seating passengers next to each other instead of behind each other, because the amount of energy required depends on the shape of the bubble. The flatter it is in the direction of travel, the less energy you need. For other warp-drives, other geometries may work better. This is the kind of question you can really only address if you have the mathematics in place.

Another reason I find this exciting is that, while it may look now like you can’t do superluminal warp drives, this is only correct if General Relativity is correct. And maybe it is not. Astrophysicists have introduced dark matter and dark energy to explain what they observe, but it is also possible that General Relativity is ultimately not the correct theory for space-time. What does this mean for warp drives? We don’t know. But now we know we have the mathematics to study this question.

So, I think this is a really neat paper, but it also shows that research is a double-edged sword. Sometimes, if you look closer at a really exciting idea, it turns out to be not so exciting. And maybe you’d rather not have known. But I think the only way to make progress is to not be afraid of learning more. 

Note: This paper has not appeared yet. I will post a link here once I have a reference.




You can join the chat on this video on Saturday 11/21 at 12PM EST / 6PM CET or on Sunday 11/22 at 2PM EST / 8PM CET.

We will also have a chat on Black Hole Information loss on Tuesday 11/24 at 8PM EST / 2AM CET and on Wednesday 11/25 at 2PM EST / 8PM CET.

Wednesday, November 18, 2020

The Black Hole information loss problem is unsolved. Because it’s unsolvable.

Hi everybody, welcome and welcome back to science without the gobbledygook. I put in a Wednesday video because last week I came across a particularly bombastically nonsensical claim that I want to debunk for you. The claim is that the black hole information loss problem is “nearing its end”. So today I am here to explain why the black hole information loss problem is not only unsolved but will remain unsolved because it’s for all practical purposes unsolvable.


First of all, what is the black hole information loss problem, or paradox, as it’s sometimes called. It’s an inconsistency in physicists’ currently most fundamental laws of nature, that’s quantum theory and general relativity.

Stephen Hawking showed in the early nineteen-seventies that if you combine these two theories, you find that black holes emit radiation. This radiation is thermal, which means that, apart from the temperature, which determines the average energy of the particles, the radiation is entirely random.

This black hole radiation is now called Hawking Radiation and it carries away mass from the black hole. But the radius of the black hole is proportional to its mass, so if the black hole radiates, it shrinks. And the temperature is inversely proportional to the black hole mass. So, as the black hole shrinks, it gets hotter, and it shrinks even faster. Eventually, it’s completely gone. Physicists refer to this as “black hole evaporation.”
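Just to get a feeling for the numbers (this is my own illustration, not part of the video), the standard formulas for the Hawking temperature, T = ħc³/(8πGMk_B), and the evaporation time, t = 5120πG²M³/(ħc⁴), can be evaluated in a few lines. The constant values are the usual CODATA-style approximations.

```python
import math

# Physical constants (SI units, approximate)
hbar = 1.054571817e-34  # reduced Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # gravitational constant, m^3 / (kg s^2)
k_B = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.989e30        # solar mass, kg

def hawking_temperature(M):
    """Black hole temperature: inversely proportional to the mass M."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time(M):
    """Standard estimate of the evaporation time, proportional to M^3."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

T = hawking_temperature(M_sun)  # ~6e-8 K, far below the 2.7 K of the CMB
t = evaporation_time(M_sun)     # of order 10^67 years
print(f"T = {T:.2e} K, t = {t / 3.154e7:.2e} years")
```

As the code makes explicit, a smaller mass gives a higher temperature, which is why the shrinking black hole heats up and evaporates ever faster.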

When the black hole has entirely evaporated, all that’s left is this thermal radiation, which only depends on the initial mass, angular momentum, and electric charge of the black hole. This means that besides these three quantities, it does not matter what you formed the black hole from, or what fell in later, the result is the same thermal radiation.

Black hole evaporation, therefore, is irreversible. You cannot tell from the final state – that’s the outcome of the evaporation – what the initial state was that formed the black hole. There are many different initial states that will give the same final state.

The problem is now that this cannot happen in quantum theory. Processes in quantum theory are always time-reversible. There are certainly processes that are in practice irreversible. For example, if you mix dough. You are not going to unmix it, ever. But. According to quantum mechanics, this process is reversible, in principle.

In principle, one initial state of your dough leads to exactly one final state, and using the laws of quantum mechanics you could reverse it, if only you tried hard enough, for ten to the five-hundred billion years or so. It’s the same if you burn paper, or if you die. All these processes are for all practical purposes irreversible. But according to quantum theory, they are not fundamentally irreversible, which means a particular initial state will give you one, and only one, final state. The final state, therefore, tells you what the initial state was, if you have the correct differential equation. For more about differential equations, please check my earlier video.
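The “one initial state, one final state” property can be illustrated with a toy model (my own sketch, not from the video): in quantum theory, time evolution is a unitary operation, and a unitary matrix always has an inverse, namely its conjugate transpose. So any evolution can in principle be undone.

```python
import cmath

# Toy two-state quantum system: time evolution is a unitary matrix U.
# Unitarity means the inverse of U is its conjugate transpose, so the
# evolution is reversible -- the final state determines the initial one.

theta = 0.7  # some arbitrary evolution "angle"
U = [[cmath.cos(theta), -cmath.sin(theta)],
     [cmath.sin(theta),  cmath.cos(theta)]]

def apply(M, psi):
    """Multiply a 2x2 matrix M onto a 2-component state psi."""
    return [M[0][0]*psi[0] + M[0][1]*psi[1],
            M[1][0]*psi[0] + M[1][1]*psi[1]]

def dagger(M):
    """Conjugate transpose, which for a unitary M is its inverse."""
    return [[M[0][0].conjugate(), M[1][0].conjugate()],
            [M[0][1].conjugate(), M[1][1].conjugate()]]

psi0 = [3/5, 4j/5]             # normalized initial state
psi1 = apply(U, psi0)          # evolve forward in time
psi2 = apply(dagger(U), psi1)  # evolve backward again
print(psi2)                    # recovers psi0, up to rounding
```

Hawking evaporation breaks exactly this property: many different initial states end in the same thermal radiation, so no such inverse exists.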

So you set out to combine quantum theory with gravity, but you get something that contradicts what you started with. That’s inconsistent. Something is wrong about this. But what? That’s the black hole information loss problem.

Now, four points I want to emphasize here. First, the black hole information loss problem has actually nothing to do with information. John, are you listening? Really the issue is not loss of information, which is an extremely vague phrase, the issue is time irreversibility. General Relativity forces a process on you which cannot be reversed in time, and that is inconsistent with quantum theory.

So it would better be called the black hole time irreversibility problem, but you know how it goes with nomenclature, it doesn’t always make sense. Peanuts aren’t nuts, vacuum cleaners don’t clean the vacuum. Dark energy is neither dark nor energy. And black hole information loss is not about information.

Second, black hole evaporation is not an effect of quantum gravity. You do not need to quantize gravity to do Hawking’s calculation. It merely uses quantum mechanics in the curved background of non-quantized general relativity. Yes, it’s something with quantum and something with gravity. No, it’s not quantum gravity.

The third point is that the measurement process in quantum mechanics does not resolve the black hole information loss problem. Yes, according to the Copenhagen interpretation a quantum measurement is irreversible. But the inconsistency in black hole evaporation occurs before you make a measurement.

And related to this is the fourth point: even leaving aside the measurement, it does not matter whether you believe time-irreversibility is wrong. It’s a mathematical inconsistency. Saying that you do not believe one or the other property of the existing theories does not explain how to get rid of the problem.

So, how do you get rid of the black hole information loss problem? Well, the problem comes from combining a certain set of assumptions, doing a calculation, and arriving at a contradiction. This means any solution of the problem will come down to removing or replacing at least one of the assumptions.

Mathematically there are many ways to do that. Even if you do not know anything about black holes or quantum mechanics, that much should be obvious. If you have a set of inconsistent axioms, there are many ways to fix that. It will therefore not come as a surprise to you that physicists have spent the past forty years coming up with ever new “solutions” to the black hole information loss problem, yet they can’t agree which one is right.

I have already made a video about possible solutions to the black hole information loss problem, so let me just summarize this really quickly. For details, please check the earlier video.

The simplest solution to the black hole information loss problem is that the disagreement is resolved when the effects of quantum gravity become large, which happens when the black hole has shrunk to a very small size. This simple solution is incredibly unpopular among physicists. Why is that? It’s because we do not have a theory of quantum gravity, so one cannot write papers about it.

Another option is that the black holes do not entirely evaporate and the information is kept in what’s left, usually called a black hole remnant. Yet another way to solve the problem is to simply accept that information is lost and then modify quantum mechanics accordingly. You can also put information on the singularity, because then the evaporation becomes time-reversible.

Or you can modify the topology of space-time. Or you can claim that information is only lost in our universe but it’s preserved somewhere in the multiverse. Or you can claim that black holes are actually fuzzballs made of strings and information creeps out slowly. Or, you can do ‘t Hooft’s antipodal identification and claim what goes in one side comes out the other side, Fourier transformed. Or you can invent non-local effects, or superluminal information exchange, or baby universes, and that’s not an exhaustive list.

These solutions are all mathematically consistent. We just don’t know which one of them is correct. And why is that? It’s because we cannot observe black hole evaporation. For the black holes that we know exist the temperature is way, way too small to be observable. It’s below even the temperature of the cosmic microwave background. And even if it wasn’t, we wouldn’t be able to catch all that comes out of a black hole, so we couldn’t conclude anything from it.

And without data, the question is not which solution to the problem is correct, but which one you like best. Of course everybody likes their own solution best, so physicists will not agree on a solution, not now, and not in 100 years. This is why the headline that the black hole information loss problem is “coming to an end” is ridiculous. Though, let me mention that I know the author of the piece, George Musser, and he’s a decent guy and, the way this often goes, he didn’t choose the title.

What’s the essay actually about? Well, it’s about yet another proposed solution to the black hole information problem. This one is claiming that if you do Hawking’s calculation thoroughly enough then the evaporation is actually reversible. Is this right? Well, depends on whether you believe the assumptions that they made for this calculation. Similar claims have been made several times before and of course they did not solve the problem.

The real problem here is that too many theoretical physicists don’t understand or do not want to understand that physics is not mathematics. Physics is science. A theory of nature needs to be consistent, yes, but consistency alone is not sufficient. You still need to go and test your theory against observations.

The black hole information loss problem is not a math problem. It’s not like trying to prove the Riemann hypothesis. You cannot solve the black hole information loss problem with math alone. You need data, there is no data, and there won’t be any data. Which is why the black hole information loss problem is for all practical purposes unsolvable.

The next time you read about a supposed solution to the black hole information loss problem, do not ask whether the math is right. Because it probably is, but that isn’t the point. Ask what reason do we have to think that this particular piece of math correctly describes nature. In my opinion, the black hole information loss problem is the most overhyped problem in all of science, and I say that as someone who has published several papers about it.

On Saturday we’ll be talking about warp drives, so don’t forget to subscribe.

Saturday, November 14, 2020

Understanding Quantum Mechanics #8: The Tunnel Effect

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]

Have you heard that quantum mechanics is impossible to understand? You know what, that’s what I was told, too, when I was a student. But twenty years later, I think the reason so many people believe one cannot understand quantum mechanics is because they are constantly being told they can’t understand it. But if you spend some time with quantum mechanics, it’s not remotely as strange and weird as they say. The strangeness only comes in when you try to interpret what it all means. And there’s no better way to illustrate this than the tunnel effect, which is what we will talk about today.


Before we can talk about tunneling, I want to quickly remind you of some general properties of wave-functions, because otherwise nothing I say will make sense. The key feature of quantum mechanics is that we cannot predict the outcome of a measurement. We can only predict the probability of getting a particular outcome. For this, we describe the system we are observing – for example a particle – by a wave-function, usually denoted by the Greek letter Psi. The wave-function takes on complex values, and probabilities can be calculated from it by taking the absolute square.
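For illustration (a sketch with made-up numbers, not something from the video), here is how taking the absolute square turns a discretized wave-function into probabilities:

```python
# A discretized wave-function: a list of complex amplitudes, one per
# position on a grid. The probabilities are the absolute squares, and
# after normalization they sum to 1.
psi = [0.1 + 0.2j, 0.5 - 0.1j, 0.3 + 0.4j, -0.2 + 0.6j]

norm = sum(abs(a)**2 for a in psi) ** 0.5
psi = [a / norm for a in psi]          # normalize the state

probabilities = [abs(a)**2 for a in psi]
print(probabilities)                   # each value between 0 and 1
print(sum(probabilities))              # 1.0, up to rounding
```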

But how to calculate probabilities is only part of what it takes to do quantum mechanics. We also need to know how the wave-function changes in time. And we calculate this with the Schrödinger equation. To use the Schrödinger equation, you need to know what kind of particle you want to describe, and what the particle interacts with. This information goes into this thing labeled H here, which physicists call the “Hamiltonian”.

To give you an idea for how this works, let us look at the simplest possible case, that’s a massive particle, without spin, that moves in one dimension, without any interaction. In this case, the Hamiltonian merely has a kinetic part which is just the second derivative in the direction the particle travels, divided by twice the mass of the particle. I have called the direction x and the mass m. If you had a particle without quantum behavior – a “classical” particle, as physicists say – that didn’t interact with anything, it would simply move at constant velocity. What happens for a quantum particle? Suppose that initially you know the position of the particle fairly well, so the probability distribution is peaked. I have plotted here an example. Now if you solve the Schrödinger equation for this initial distribution, what happens is the following.

The peak of the probability distribution is moving at constant velocity, that’s the same as for the classical particle. But the width of the distribution is increasing. It’s smearing out. Why is that?

That’s the uncertainty principle. You initially knew the position of the particle quite well. But because of the uncertainty principle, this means you did not know its momentum very well. So there are parts of this wave-function that have a somewhat larger momentum than the average, and therefore a larger velocity, and they run ahead. And then there are some which have a somewhat lower momentum, and a smaller velocity, and they lag behind. So the distribution runs apart. This behavior is called “dispersion”.
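This dispersion can be quantified with the standard textbook formula for a free Gaussian wave packet, whose width grows as σ(t) = σ₀√(1 + (ħt/(2mσ₀²))²). A quick sketch (my own example numbers, not the ones from the animation):

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J s

def packet_width(sigma0, m, t):
    """Width of a free Gaussian wave packet of initial width sigma0 after time t.

    Standard textbook result: the better the initial position is known
    (small sigma0), the larger the momentum spread, and the faster the
    packet smears out.
    """
    return sigma0 * math.sqrt(1 + (hbar * t / (2 * m * sigma0**2))**2)

m_e = 9.109e-31  # electron mass, kg
# An electron localized to 1 nanometer spreads to several centimeters
# within a single microsecond:
print(packet_width(1e-9, m_e, 1e-6))
```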

Now, the tunnel effect describes what happens if a quantum particle hits an obstacle. Again, let us first look at what happens with a non-quantum particle. Suppose you shoot a ball in the direction of a wall, at a fixed angle. If the kinetic energy, or the initial velocity, is large enough, it will make it to the other side. But if the kinetic energy is too small, the ball will bounce off and come back. And there is a threshold energy that separates the two possibilities.

What happens if you do the same with a quantum particle? This problem is commonly described by using a “potential wall.” I have to warn you that a potential wall is in general not actually a wall, in the sense that it is not made of bricks or something. It is instead just any barrier that a classical particle would need an energy above a certain threshold to cross.

So it’s kind of like in the example I just showed with the classical particle crossing over an actual wall, but that’s really just an analogy that I have used for the purpose of visualization.

Mathematically, a potential wall is just a step function that’s zero everywhere except in a finite interval. You then add this potential wall as a function to the Hamiltonian of the Schrödinger equation. Now that we have the equation in place, let us look at what the quantum particle does when it hits the wall. For this, I have numerically integrated the Schrödinger equation I just showed you.

The following animations are in slow-motion compared to the earlier one, which is why you cannot see that the wave-function smears out. It still does, it’s just so little that you have to look very closely to see it. I did this because it makes it easier to see what else is happening. Again, what I have plotted here is the probability distribution for the position of the particle.

We will first look at the case when the energy of the quantum particle is much higher than the potential wall. As you can see, not much happens. The quantum particle goes through the barrier. It just gets a few ripples.

Next we look at the case where the energy barrier of the potential wall is much, much higher than the energy of the particle. As you can see, it bounces off and comes back. This is very similar to the classical case.

The most interesting case is when the energy of the particle is smaller than the potential wall but the potential wall is not extremely much higher. In this case, a classical particle would just bounce back. In the quantum case, what happens is this. As you can see, part of the wave-function makes it through to the other side, even though it’s energetically forbidden. And there is a remaining part that bounces back. Let me show you this again.

Now remember that the wave-function tells you what the probability is for something to happen. So what this means is that if you shoot a particle at a wall, then quantum effects allow the particle to sometimes make it to the other side, when this should actually be impossible. The particle “tunnels” through the wall. That’s the tunnel effect.
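For a rectangular barrier, the transmission probability even has a closed textbook formula: T = [1 + V₀² sinh²(κL)/(4E(V₀−E))]⁻¹ with κ = √(2m(V₀−E))/ħ. Here is a sketch in units with ħ = m = 1 (my own example values, not the ones used in the animation):

```python
import math

def transmission(E, V0, L):
    """Probability that a particle of energy E < V0 tunnels through a
    rectangular barrier of height V0 and width L (units with hbar = m = 1).

    Classically the particle would always bounce back; quantum
    mechanically the probability is small but nonzero, and it drops
    fast as the barrier gets wider.
    """
    kappa = math.sqrt(2 * (V0 - E))  # decay rate inside the barrier
    s = math.sinh(kappa * L)
    return 1 / (1 + V0**2 * s**2 / (4 * E * (V0 - E)))

print(transmission(E=1.0, V0=2.0, L=1.0))  # nonzero despite E < V0
print(transmission(E=1.0, V0=2.0, L=3.0))  # wider barrier: much smaller
```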

I hope that these little animations have convinced you that if you actually do the calculation, then tunneling is half as weird as they say it is. It just means that a quantum particle can do some things that a classical particle can’t do. But, wait, I forgot to tell you something...

Here you see the solutions to the Schrödinger equation with and without the potential wall, but for otherwise identical particles with identical energy and momentum. Let us stop this here. If you compare the position of the two peaks, the one that tunneled and the one that never saw a wall, then the peak of the tunneled part of the wave-function has traveled a larger distance in the same time.

If the particle was travelling at or very close to the speed of light, then the peak of the tunneled part of the wave-function seems to have moved faster than the speed of light. Oops.

What is happening? Well, this is where the probabilistic interpretation of quantum mechanics comes to haunt you. If you look at where the faster-than light particles came from in the initial wave-function, then you find that they were the ones which had a head-start at the beginning. Because, remember, the particles did not all start from exactly the same place. They had an uncertainty in the distribution.

Then again, if the wave-function really describes single particles, as most physicists today believe it does, then this explanation makes no sense. Because then only looking at parts of the wave-function is just not an allowed way to define the particle’s time of travel. So then, how do you define the time it takes a particle to travel through a wall? And can the particle really travel faster than the speed of light? That’s a question which physicists still argue about today.

This video was sponsored by Brilliant which is a website that offers interactive courses on a large variety of topics in science and mathematics. I hope this video has given you an idea how quantum mechanics works. But if you really want to understand the tunnel effect, then you have to actively engage with the subject. Brilliant is a great starting point to do exactly this. To get more background on this video’s content, I recommend you look at their courses on quantum objects, differential equations, and probabilities.

To support this channel and learn more about Brilliant, go to brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get 20 percent off their annual premium subscription.



You can join the chat on this week’s video here:
  • Saturday at 12PM EST / 6PM CET (link)
  • Sunday at 2PM EST / 8PM CET (link)

Saturday, November 07, 2020

Understanding Quantum Mechanics #7: Energy Levels

[This is a transcript of the video embedded below. Parts of the text will not make sense without the graphics in the video.]


Today I want to tell you what these plots show. Has anybody seen them before? Yes? Atomic energy levels, right! It’s one of the most important applications of quantum mechanics. And I mean important both historically and scientifically. Today’s topic is also a good opportunity to answer a question one of you asked on a previous video: “Why do some equations even actually need calculating, as the answer will always be the same?” That’s a really good question. I just love it, because it would never have occurred to me.

Okay, so we want to calculate what electrons do in an atom. Why is this interesting? Because what the electrons do determines the chemical properties of the elements. Basically, the behavior of the electrons explains the whole periodic table: Why do atoms come in particular groups, why do some make good magnets, why are some of them good conductors? The electrons tell you.

How do you find out what the electrons do? You use quantum mechanics. Quantum mechanics, as we discussed previously, works with wave-functions, usually denoted Psi. Here is Psi. And you calculate what the wave-function does with the Schrödinger equation. Here is the Schrödinger equation.

Now, the way I have written this equation here, it’s completely useless. We know what Psi is, that’s the thing we want to calculate, and we know how to take a time-derivative, but what is H? H is called the “Hamiltonian” and it contains the details about the system you want to describe. The Hamiltonian consists of two parts. The one part tells you what the particles do when you leave them alone and they don’t know anything of each other. So that would be in empty space, with no force acting on them, with no interaction. This is usually called the “kinetic” part of the Hamiltonian, or sometimes the “free” part. Then you have a second part that tells you how the particle, or particles if there are several, interact.
 
In the simplest case, this interaction term can be written as a potential, usually denoted V. And for an electron near an atomic nucleus, the potential is just the Coulomb potential. So that’s proportional to the charge of the nucleus, and falls with one over r, where r is the distance to the center of the nucleus. There is a constant in front of this term that I have called alpha, but just what it quantifies doesn’t matter for us today. And the kinetic term, for a slow-moving particle is just the square of the spatial derivatives, up to constants.

So, now we have a linear, partial differential equation that we need to solve. I don’t want to go through this calculation, because just how to solve it is not so relevant here; let me just say there is no magic involved. It’s pretty straightforward. But there are some interesting things to learn from it.

The first interesting thing you find when you solve the Schrödinger equation for electrons in a Coulomb potential is that the solutions fall into two different classes. The one type of solution is a wave that can propagate through all of space. We call these the “unbound states”. And the other type of solution is a localized wave, stuck in the potential of the nucleus. It just sits there while oscillating. We call these the “bound states”. The bound states have a negative energy. That’s because you need to put energy in to rip these electrons off the atom.
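For hydrogen, the bound-state energies that come out of this calculation follow the famous formula Eₙ = −13.6 eV/n². A quick sketch (standard textbook values, not from the video):

```python
# Bound-state energies of hydrogen from the Schrödinger equation with a
# Coulomb potential: E_n = -13.6 eV / n^2 (the Rydberg energy).
# Negative energy = bound; E approaches 0 as n grows, merging into the
# unbound states.
RYDBERG_EV = 13.605693  # ionization energy of hydrogen, eV

def energy_level(n):
    return -RYDBERG_EV / n**2

for n in range(1, 5):
    print(f"n={n}: E = {energy_level(n):+.3f} eV")

# Energy of the photon emitted in the n=3 -> n=2 transition:
print(energy_level(3) - energy_level(2))  # ~1.89 eV, red light (Balmer alpha)
```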

The next interesting thing you find is that the bound states can be numbered, so you can count them. To count these states, one commonly uses, not one, but three numbers. These numbers are all integers and are usually called n, l, and m.

“n” starts at 1 and then increases, and is commonly called the “principal” quantum number. “l” labels the angular momentum. It starts at zero, but it has to be smaller than n.

So for n equal to one, you have only l equal to zero. For n equal to 2, l can be 0 or 1. For n equal to three, l can be zero, one or two, and so on.

The third number “m” tells you what the electron does in a magnetic field, which is why it’s called the magnetic quantum number. It takes on values from minus l to l. And these three numbers, n l m, together uniquely identify the state of the electron.
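The counting rules for n, l, and m can be written out directly (a small sketch of my own, not from the video):

```python
# Enumerate the bound states for a given principal quantum number n:
# l runs from 0 to n-1, and m runs from -l to +l. Counting them shows
# that each shell holds n^2 states (2n^2 electrons once spin is included).
def states(n):
    return [(n, l, m) for l in range(n) for m in range(-l, l + 1)]

for n in range(1, 4):
    s = states(n)
    print(f"n={n}: {len(s)} states: {s}")
```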

Let me then show you what the solutions to the Schrödinger equation look like in this case, because there are more interesting things to learn from them. The wave-functions give you a complex value for each location, and the absolute square tells you the probability of finding the electron. While the wave-function oscillates in time, the probability does not depend on time.

I have here plotted the probability as a function of the radius, so I have integrated over all angular directions. This is for different principal quantum numbers n, but with l and m equal to zero.

You can see that the wave-function has various maxima and minima, but with increasing n, the biggest maximum, so that’s the place you are most likely to find the electron, moves away from the center of the atom. That’s where the idea of electron “shells” comes from. It’s not wrong, but also somewhat misleading. As you can see here, the actual distribution is more complicated.

A super interesting property of these probability distributions is that they are perfectly well-behaved at r equals zero. That’s interesting because, if you remember, we used a Coulomb potential that goes as 1 over r. This potential actually diverges at r equal zero. Nevertheless, the wave-function avoids this divergence. Some people have argued that something similar could prevent a singularity from forming in black holes. Please check the information below the video for a reference.
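In units of the Bohr radius, you can check this directly for the ground state (a sketch of the standard textbook result, not the plot from the video):

```python
import math

def radial_probability(r):
    """Radial probability density of the hydrogen ground state, in units
    of the Bohr radius (a0 = 1): P(r) = 4 r^2 exp(-2r).

    The factor r^2 comes from integrating over the angular directions,
    and it is what keeps the probability finite (in fact zero) at r = 0
    despite the diverging 1/r Coulomb potential.
    """
    return 4 * r**2 * math.exp(-2 * r)  # normalized to integrate to 1

# Scan for the most likely radius -- it comes out at exactly one Bohr radius:
rs = [i / 1000 for i in range(1, 5001)]
r_peak = max(rs, key=radial_probability)
print(r_peak)                    # ~1.0
print(radial_probability(0.0))   # exactly 0 at the center
```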

But these curves show only the radial direction; what about the angular direction? To show you what this looks like, I will plot the probability of finding the electron with a color code for slices through the sphere.

And I will start by showing you the slices for the cases whose radial curves you just saw, that is, different n, but with the other numbers at zero.

The more red-white the color, the more likely you are to find the electron. I have kept the radial scale fixed, which is why the orbitals with small n make only a small blip when we scan through the middle. Here you see it again. Note how the location of the highest probability moves to a larger radius with increasing n.

Then let us look at a case where l is nonzero. This is for example for n=3, l=1 and m equals plus minus 1. As you can see, the distribution splits up into several areas of high probability and now has an orientation. Here is the same for n=4, l=2, m equals plus minus 2. It may appear as if this is no longer spherically symmetric. But actually, if you combine all the quantum numbers, you get back spherical symmetry, as it has to be.

Another way to look at the electron probability distributions is to plot them in three dimensions. Personally I prefer the two-dimensional cuts because the color shading contains more information about the probability distribution. But since some people prefer the 3-dimensional plots, let me show you some examples. The surface you see here is the surface inside of which you will find the electron with a probability of 90%. Again you see that thinking of the electrons as sitting on “shells” doesn’t capture very well what is going on.

Now that you have an idea how we calculate atomic energy levels and what they look like, let me then get to the question: Why do we calculate the same things over and over again?

So, this particular calculation of the atomic energy levels was frontier research a century ago. Today students do it as an exercise. The calculations physicists now do in research in atomic physics are considerably more advanced than this example, because we have made a lot of simplifications here.
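For the hydrogen-like case, the textbook result of that exercise is the familiar formula E equals minus Z squared over 2 n squared in atomic units, and SymPy exposes it directly (the conversion factor below is the standard value of one Hartree in electron volts, quoted here to six decimals):

```python
from sympy.physics.hydrogen import E_nl

HARTREE_IN_EV = 27.211386  # 1 Hartree expressed in electron volts

for n in (1, 2, 3):
    E = E_nl(n)  # hydrogen (Z = 1) energy level in Hartree: -1/(2 n^2)
    print(f"n={n}: E = {E} Hartree = {float(E) * HARTREE_IN_EV:.2f} eV")
```

The n=1 line reproduces the well-known hydrogen ground-state energy of about minus 13.6 electron volts.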

First, we have neglected that the electron has a spin, though spin is fairly easy to include. More seriously, we have assumed that the nucleus is a point. It is not. The nucleus has a finite size, and it is neither perfectly spherically symmetric nor does it have a homogeneous charge distribution, which makes the potential much more complicated. Worse, nuclei themselves have energy levels and can wobble. Then, the electrons in the outer levels actually interact with the electrons in the inner levels, which we have ignored. There are further corrections from quantum field theory, which we have also ignored. Yet another thing we have ignored is that electrons in the outer shells of large atoms get corrections from special relativity. Indeed, fun fact: without special relativity, gold would not look gold.

And then, for most applications it's not the energy levels of atoms that we want to know, but the energy levels of molecules. This is a huge complication. The complication is not that we don't know the equation. It's still the Schrödinger equation. It's also not that we don't know how to solve it. The problem is that, with the methods we currently use, doing these calculations for even moderately sized molecules takes too long, even on supercomputers.

And that’s an important problem. Because the energy levels of molecules tell you whether a substance is solid or brittle, what its color is, how well it conducts electricity, how it reacts with other molecules, and so on. This is all information you want to have. Indeed, there’s a whole research area devoted to this question, which is called “quantum chemistry”. It is also one of the calculations physicists hope to speed up with quantum computers.

So, why do we continue solving the same equation? Because we are improving how good the calculations are, we are developing new methods to solve the equation more accurately and faster, and we are applying it to new problems. Calculating the energy levels of electrons is not yesterday’s physics, it’s still cutting edge physics today.

If you really want to understand how quantum mechanics works, I recommend you check out Brilliant, who have been sponsoring this video. Brilliant is a website that offers a large variety of interactive courses in mathematics and science, including quantum mechanics, and it’s a great starting point to dig deeper into the topic. For more background on what I just talked about, have a look for example at their courses on quantum objects, differential equations, and linear algebra.  

To support this channel and learn more about Brilliant go to Brilliant.org/Sabine and sign up for free. The first 200 subscribers using this link will get twenty percent off the annual premium subscription.

Thanks for watching, see you next week.



You can join the chat about this video today (Saturday, Nov 7) at 6pm CET or tomorrow at the same time.