Saturday, January 16, 2021

Was the universe made for us?

[This is a transcript of the video embedded below.]


Today I want to talk about the claim that our universe is especially made for humans, or fine-tuned for life. According to this idea it’s extremely unlikely our universe would just happen to be the way it is by chance, and the fact that we nevertheless exist requires explanation. This argument is popular among some religious people who use it to claim that our universe needs a creator, and the same argument is used by physicists to pass off unscientific ideas like the multiverse or naturalness as science. In this video, I will explain what’s wrong with this argument, and why the observation that the universe is this way and not some other way is evidence neither for nor against god or the multiverse.

Ok, so here is how the argument goes in a nutshell. The currently known laws of nature contain constants. Some of these constants are, for example, the fine-structure constant that sets the strength of the electromagnetic force, Planck’s constant, Newton’s constant, the cosmological constant, the mass of the Higgs boson, and so on.

Now you can ask, what would a universe look like, in which one or several of these constants were a tiny little bit different. Turns out that for some changes to these constants, processes that are essential for life as we know it could not happen, and we could not exist. For example, if the cosmological constant was too large, then galaxies would never form. If the electromagnetic force was too strong, nuclear fusion could not light up stars. And so on. There’s a long list of calculations of this type, but they’re not the relevant part of the argument, so I don’t want to go through the whole list.

The relevant part of the argument goes like this: It’s extremely unlikely that these constants would happen to have just exactly the values that allow for our existence. Therefore, the universe as we observe it requires an explanation. And then that explanation may be god or the multiverse or whatever is your pet idea. Particle physicists use the same type of argument when they ask for a next larger particle collider. In that case, they claim it requires explanation why the mass of the Higgs boson happens to be what it is. This is called an argument from “naturalness”. I explained this in an earlier video.

What’s wrong with the argument? What’s wrong is the claim that the values of the constants of nature that we observe are unlikely. There is no way to ever quantify this probability because we will never measure a constant of nature that has a value other than the one it does have. If you want to quantify a probability you have to collect a sample of data. You could do that, for example, if you were throwing dice. Throw them often enough, and you get an empirically supported probability distribution.
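To see what an empirically supported probability distribution looks like, here is a minimal sketch in Python. It is purely illustrative, not part of the argument itself: it builds a distribution from repeated dice throws, which is exactly what we cannot do for the constants of nature.

```python
import random
from collections import Counter

random.seed(42)

# Throw a single die many times and tally the outcomes.
throws = [random.randint(1, 6) for _ in range(60_000)]
counts = Counter(throws)

# The empirical probability of each face is its relative frequency.
empirical = {face: counts[face] / len(throws) for face in range(1, 7)}

# With enough throws, each frequency approaches 1/6, about 0.167.
for face in sorted(empirical):
    print(face, round(empirical[face], 3))
```

For a die we can repeat the experiment and check the distribution against data. For a constant of nature there is only one "throw", so no such distribution can ever be measured.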

But we do not have an empirically supported probability distribution for the constants of nature. And why is that? It’s because… they are constant. Saying that the only value we have ever observed is “unlikely” is a scientifically meaningless statement. We have no data, and will never have data, which allow us to quantify the probability of something we cannot observe. There’s nothing quantifiably unlikely, therefore, there’s nothing in need of explanation.

If you look at the published literature on the supposed “fine-tuning” of the constants of nature, the mistake is always the same. They just postulate a particular probability distribution. It’s this postulate that leads to their conclusion. This is one of the best known logical fallacies, called “begging the question” or “circular reasoning.” You assume what you need to show. And instead of showing that a value is unlikely, they pick a specific probability distribution that makes it unlikely. They could just as well pick a probability distribution that would make the observed values likely; it’s just that this doesn’t give the result they want to have.

And, by the way, even if you could measure a probability distribution for the constants of nature, which you can’t, then the idea that our particular combination of constants is necessary for life would still be wrong. There are several examples in the scientific literature for laws of nature with constants nothing like our own that, for all we can tell, allow for chemistry complex enough for life. Please check the info below the video for references.

Let me be clear though that fine-tuning arguments are not always unscientific. The best-known example of a good fine-tuning argument is a pen balanced on its tip. If you saw that, you’d be surprised. Because this is very unlikely to happen just by chance. You’d look for an explanation, a hidden mechanism. That sounds very similar to the argument for fine-tuning the constants of nature, but the balanced pen is a very different situation. The claim that the balanced pen is unlikely is based on data. You are surprised because you don’t normally encounter pens balanced on their tip. You have experience, meaning you have statistics. But it’s completely different if you talk about changing constants that cannot be changed by any physical process. Not only do we not have experience with that, we can never get any experience.

I should add there are theories in which the constants of nature are replaced with parameters that can change with time or place, but that’s a different story entirely and has nothing to do with the fine-tuning arguments. It’s an interesting idea though. Maybe I should talk about this some other time? Let me know in the comments.

And for the experts, yes, I have so far specifically referred to what’s known as the frequentist interpretation of probability. You can alternatively interpret the term “unlikely” using the Bayesian interpretation of probability. In the Bayesian sense, saying that something you observe was “unlikely” means you didn’t expect it to happen. But with the Bayesian interpretation, the whole argument that the universe was especially made for us doesn’t work. That’s because in that case it’s easy enough to find reasons for why your probability assessment was just wrong and nothing’s in need of explaining.

Example: Did you expect a year ago that we’d spend much of 2020 in lockdown? Probably not. You probably considered that unlikely. But no one would claim that you need god to explain why it seemed unlikely.

What does this mean for the existence of god or the multiverse? Both are assumptions that are unnecessary additions to our theories of nature. In the first case, you say “the constants of nature in our universe are what we have measured, and god made them”, in the second case you say “the constants of nature in our universe are what we have measured, and there are infinitely many other unobservable universes with other constants of nature.” Neither addition does anything whatsoever to improve our theories of nature. But this does not mean god or the multiverse do not exist. It just means that evidence cannot tell us whether they do or do not exist. It means, god and the multiverse are not scientific ideas.

If you want to know more about fine-tuning, I have explained all this in great detail in my book Lost in Math.

In summary: Was the universe made for us? We have no evidence whatsoever that this is the case.


You can join the chat on this video today (Saturday, Jan 16) at 6pm CET/noon Eastern Time here.

Saturday, January 09, 2021

The Mathematics of Consciousness

[This is a transcript of the video embedded below.]


Physicists like to think they can explain everything, and that, of course, includes human consciousness. And so in the last few decades they’ve set out to demystify the brain by throwing math at the problem. Last year, I attended a workshop on the mathematics of consciousness in Oxford. Back then, when we still met other people in real life, remember that?

I find it to be a really interesting development that physicists take on consciousness, and so, today I want to talk a little about ideas for how consciousness can be described mathematically, how that’s going so far, and what we can hope to learn from it in the future.

The currently most popular mathematical approach to consciousness is integrated information theory, IIT for short. It was put forward by the neuroscientist Giulio Tononi in two thousand and four.

In IIT, each system is assigned a number, that’s big Phi, which is the “integrated information” and supposedly a measure of consciousness. The better a system is at distributing information while it’s processing the information, the larger Phi. A system that’s fragmented and has many parts that calculate in isolation may process lots of information, but this information is not “integrated”, so Phi is small.

For example, a digital camera has millions of light receptors. It processes large amounts of information. But the parts of the system don’t work much together, so Phi is small. The human brain on the other hand is very well connected and neural impulses constantly travel from one part to another. So Phi is large. At least that’s the idea. But IIT has its problems.

One problem with IIT is that computing Phi is ridiculously time consuming. The calculation requires that you divide up the system which you are evaluating in any possible way and then calculate the connections between the parts. This takes up an enormous amount of computing power. Estimates show that even for the brain of a worm, with only three hundred neurons, calculating Phi would take several billion years. This is why measurements of Phi that have actually been done in the human brain have used incredibly simplified definitions of integrated information.
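To get a feeling for why the computation blows up, here is a small Python sketch. It only counts the number of ways to split a system into two non-empty parts, which is a crude lower bound on the bookkeeping the full Phi calculation requires, not the actual IIT definition:

```python
def bipartitions(n: int) -> int:
    """Number of ways to split n elements into two non-empty groups.

    Each element goes into one of two groups (2**n assignments); dividing
    by two removes the group-labeling, minus one removes the empty split.
    """
    return 2 ** (n - 1) - 1

# The count roughly doubles with every element you add:
for n in (10, 50, 300):
    print(n, bipartitions(n))
```

Already at three hundred elements the count exceeds ten to the eighty-nine, far more splits than any computer could ever evaluate, and the full Phi calculation has to do nontrivial work for each one.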

Do these simplified definitions at least correlate with consciousness? Well, some studies have claimed they do. Then again others have claimed they don’t. The magazine New Scientist for example interviewed Daniel Bor from the University of Cambridge and reports:
“Phi should decrease when you go to sleep or are sedated via a general anesthetic, for instance, but work in Bor’s lab has shown that it doesn’t. “It either goes up or stays the same,” he says.”
I contacted Bor and his group, but they wouldn’t come forward with evidence to back up this claim. I do not actually doubt it’s correct, but I do find it somewhat peculiar they’d make such a statement to a journalist and then not provide evidence for it.

Yet another problem for IIT is, as the computer scientist Scott Aaronson pointed out, that one can think of rather trivial systems that solve some mathematical problem and distribute information during the calculation in such a way that Phi becomes very large. This demonstrates that Phi in general says nothing about consciousness, and in my opinion this just kills the idea.

Nevertheless, integrated information theory was much discussed at the Oxford workshop. Another topic that received a lot of attention is the idea by Roger Penrose and Stuart Hameroff that consciousness arises from quantum effects in the human brain, not in synapses, but in microtubules. What the heck are microtubules? Microtubules are tiny tubes made of proteins that are present in most cells, including neurons. According to Penrose and Hameroff, in the brain these microtubules can enter coherent quantum states, which collapse every once in a while, and consciousness is created in that collapse.

Most physicists, me included, are not terribly excited about this idea because it’s generally hard to create coherent quantum states of fairly large molecules, and it doesn’t help if you put the molecules into a warm and wiggly environment like the human brain. For the Penrose and Hameroff conjecture to work, the quantum states would have to survive at least a microsecond or so. But the physicist Max Tegmark has estimated that they would last more like a femtosecond, that’s only ten to the minus fifteen seconds.

Penrose and Hameroff are not the only ones who pursue the idea that quantum mechanics has something to do with consciousness. The climate physicist Tim Palmer also thinks there is something to it, though he is more concerned with the origins of creativity specifically than with consciousness in general.

According to Palmer, quantum fluctuations in the human brain create noise, and that noise is essential for human creativity, because it can help us when a deterministic, analytical approach gets stuck. He believes the sensitivity to quantum fluctuations developed in the human brain because that’s the most energy-efficient way of solving problems, but it only becomes possible once you have small and thin neurons, of the types you find in the human brain. Therefore, Palmer has argued that low-energy transistors, which operate probabilistically rather than deterministically, might help us develop artificial intelligence that’s actually intelligent.

Another talk that I thought was interesting at the Oxford workshop was that by Ramon Erra. One of the leading hypotheses for how cognitive processing works is that it uses the synchronization of neural activity in different regions of the brain to integrate information. But Erra points out that during an epileptic seizure, different parts of the brain are highly synchronized.

In this figure, for example, you see the correlations between the measured activity of a hundred fifty or so brain sites. Red is correlated, blue is uncorrelated. On the left is the brain during a normal conscious phase, on the right is a seizure. So, clearly too much synchronization is not a good thing. Erra has therefore proposed that a measure of consciousness could be the entropy in the correlation matrix of the synchronization, which is low both for highly uncorrelated and highly correlated states, but large in the middle, where you expect consciousness.

However, I worry that this theory has the same problem as integrated information theory, which is that there may be very simple systems that you do not expect to be conscious but that nevertheless score highly on this simple measure of synchronization.

One final talk that I would like to mention is that by Jonathan Mason. He asks us to imagine a stack of compact disks, and a disk player that doesn’t know which order to read out the bits on a compact disk. For the first disk, you then can always find a readout order that will result in a particular bit sequence, that could correspond, for example, to your favorite song.

But if you then use that same readout order for the next disk, you most likely just get noise, which means there is very little information in the signal. So if you have no idea how to read out information from the disks, what would you do? You’d look for a readout process that maximizes the information, or minimizes the entropy, for the readout result for all of the disks. Mason argues that the brain uses a similar principle of entropy minimization to make sense of information.

Personally, I think all of these approaches are way too simple to be correct. In the best case, they’re first steps on a long way. But as they say, every journey starts with a first step, and I certainly hope that in the next decades we will learn more about just what it takes to create consciousness. This might not only allow us to create artificial consciousness and help us tell when patients who can't communicate are conscious, it might also help us make sense of the unconscious part of our thoughts so that we can become more conscious of them.

You can find recordings of all the talks at the workshop, right here on YouTube, please check the info below the video for references.


You can join the chat about this video today (Saturday, Jan 9) at noon Eastern Time or 6pm CET here.

Saturday, January 02, 2021

Is Time Real? What does this even mean?

[This is a transcript of the video embedded below.]


Time is money. It’s also running out. Unless possibly it’s on your side. Time flies. Time is up. We talk about time… all the time. But does anybody actually know what it is? It’s 3:30. That’s not what I mean. Then what do you mean? What does it mean? That’s what we will talk about today.

First things first, what is time? “Time is what keeps everything from happening at once,” as Ray Cummings put it. Funny, but not very useful. If you ask Wikipedia, time is what clocks measure. Which brings up the question, what is a clock. According to Wikipedia, a clock is what measures time. Huh. That seems a little circular.

Luckily, Albert Einstein gets us out of this conundrum. Yes, this guy again. According to Einstein, time is a dimension. This idea goes back originally to Minkowski, but it was Einstein who used it in his theories of special and general relativity to arrive at testable predictions that have since been confirmed countless times.

Time is a dimension, similar to the three dimensions of space, but with a very important difference that I’m sure you have noticed. We can stand still in space, but we cannot stand still in time. So time is not the same as space. But that time is a dimension means you can rotate into the time-direction, like you can rotate into a direction of space. In space, if you are moving in, say, the forward direction, you can turn forty-five degrees and then you’ll instead move into a direction that’s a mixture of forward and sideways.

You can do the same with a time and a space direction. And it’s not even all that difficult. The only thing you need to do is change your velocity. If you are standing still and then begin to walk, that does not only change your position in space, it also changes which direction you are going in space-time. You are now moving into a direction that is a combination of both time and space.

In physics, we call such a change of velocity a “boost” and the larger the change of velocity, the larger the angle you turn from time to space. Now, as you all know, the speed of light is an upper limit. This means you cannot turn from moving only through time and standing still in space to moving only in space and not in time. That does not work. Instead, there’s a maximal angle you can turn in space-time by speeding up. That maximal angle is by convention usually set to 45 degrees. But that’s really just convention. For the physics it matters only that it’s some angle smaller than ninety degrees.

The consequence of time being a dimension, as Einstein understood, is that time passes more slowly for you when you move, relative to when you are not moving. This is the “time dilation”.
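For concreteness, here is a small Python sketch of the time dilation factor, the standard Lorentz factor from special relativity:

```python
import math

c = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v: float) -> float:
    """Lorentz factor for speed v in m/s: 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# A moving clock ticks slower by this factor. At everyday speeds the
# factor is essentially 1; it only grows noticeably near light speed.
for frac in (0.1, 0.5, 0.9):
    print(frac, round(gamma(frac * c), 3))
```

At ninety percent of the speed of light, clocks run slower by a factor of about 2.3.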

How do we know this is correct? We can measure it. How do you measure a time-dimension? It turns out you can measure the time-dimension with – guess what – the things we normally call clocks. The relevant point here is that this definition is no longer circular. We defined time as a dimension in a specific theory. Clocks are what we call devices that measure this.

How do clocks work? A clock is anything that counts how often a system returns to the same, or at least very similar, configuration. For example, if the Earth orbits around the sun once, and returns to almost the same place, we call that a year. Or take a pendulum. If you count how often the pendulum is, say, at one of the turning points, that gives you a measure of time. The reason this works is that once you have a theory for space-time, you can calculate that the thing you called time is related to the recurrences of certain events in a regular way. Then you measure the recurrence of these events to tell the passage of time.
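As a sketch of how such a recurrence counter works, here is the textbook small-angle formula for a pendulum in Python:

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# A one-meter pendulum returns to the same turning point about every
# two seconds, which is why counting its swings makes a usable clock.
print(round(pendulum_period(1.0), 2))
```

The point is that the theory tells you the recurrence is regular, so counting recurrences measures the passage of time.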

But then what do physicists mean if they say time is not real, as for example Lee Smolin has argued. As I have discussed in a series of earlier videos, we call something “real” in scientific terms if it is a necessary ingredient of a theory that correctly describes what we observe. Quarks, for example, are real, not because we can see them – we cannot – but because they are necessary to correctly describe what particle physicists measure at the Large Hadron Collider. Time, for the same reason, is real, because it’s a necessary ingredient for Einstein’s theory of General Relativity to correctly describe observations.

However, we know that General Relativity is not fundamentally the correct theory. By this I mean that this theory has shortcomings that have so far not been resolved, notably singularities and the incompatibility with quantum theory. For this reason, most physicists, me included, think that General Relativity is only an approximation to a better theory, usually called “quantum gravity”. We don’t yet have a theory of quantum gravity, but there is no shortage of speculations about what its properties may be. And one of the properties that it may have is that it does not have time.

So, this is what physicists mean when they say time is not real. They mean that time may not be an ingredient of the to-be-found theory of quantum gravity or, if you are even more ambitious, a theory of everything. Time then exists only on an approximate “emergent” level.

Personally, I find it misleading to say that in this case, time is not real. It’s like claiming that because our theories for the constituents of matter don’t contain chairs, chairs are not real. That doesn’t make any sense. But leaving aside that it’s bad terminology, is it right that time might fundamentally not exist?

I have to admit it’s not entirely implausible. That’s because one of the major reasons why it’s difficult to combine quantum theory with general relativity is that… time is a dimension in general relativity. In Quantum Mechanics, on the other hand, time is not something you can measure. It is not “an observable,” as the physicists say. In fact, in quantum mechanics it is entirely unclear how to answer a seemingly simple question like “what is the probability for the arrival time of a laser signal”. Time is treated very differently in these two theories.

What might a theory look like in which time is not real? One possibility is that our space-time might be embedded into just space. But it has a boundary where time turns to space. Note how carefully I have avoided saying before it turns to space. “Before” is arguably a meaningless word if you have no direction of time. It would be more accurate to say that what we usually call “the early universe”, where we expect a “big bang”, may actually have been “outside of space-time”, and there might have been only space, no time.

Another possibility that physicists have discussed is that deep down the universe and everything in it is a network. What we usually call space-time is merely an approximation to the network in cases when the network is particularly regular. There are actually quite a few approaches that use this idea, the most recent one being Stephen Wolfram’s Hypergraphs.

Finally, I should mention Julian Barbour who has argued that we don’t need time to begin with. We do need it in General Relativity, which is the currently accepted theory for the universe. But Barbour has developed a theory that he claims is at least as good as General Relativity, and does not need time. Instead, it is a theory only about the relations between configurations of matter in space, which contain an order that we normally associate with the passage of time, but really the order in space by itself is already sufficient. Barbour’s view is certainly unconventional and it may not lead anywhere, but then again, maybe he is onto something. He has just published a new book about his ideas.

Thanks for your time, see you next week.


You can join the chat on this topic today (Jan 2nd) at 6pm CET/noon Eastern Time here.

Wednesday, December 30, 2020

Well, Actually. 10 Physics Answers.

[This is a transcript of the video embedded below.]


Today I will tell you how to be just as annoying as a real physicist. And the easiest way to do that is to insist on correcting people when it really doesn’t matter.

1. “The Earth Orbits Around the Sun.”

Well, actually the Earth and the Sun orbit around a common center of mass. It’s just that the location of the center of mass is very close to the center of the sun because the sun is so much heavier than Earth. To be precise, that’s not quite correct either because Earth isn’t the only planet in the solar system, so, well, it’s complicated.
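Here is a quick back-of-the-envelope check in Python, using rounded textbook values for the masses and the Sun-Earth distance:

```python
# Distance of the Sun-Earth center of mass from the Sun's center,
# ignoring all the other planets (rounded constants, so a sketch only):
m_sun = 1.989e30    # kg
m_earth = 5.972e24  # kg
a = 1.496e11        # mean Sun-Earth distance, meters

d = a * m_earth / (m_sun + m_earth)
print(round(d / 1000))  # barycenter offset from the Sun's center, in km
```

The offset comes out to roughly 450 kilometers, deep inside the Sun, whose radius is about 696,000 kilometers. That is why "the Earth orbits the Sun" is a perfectly fine thing to say.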

2. “The Speed of Light is constant.”

Well, actually it’s only the speed of light in vacuum that’s constant. The speed of light is lower when the light goes through a medium, and just what the speed is depends on the type of medium. The speed of light in a medium is also no longer observer-independent – as the speed of light in vacuum is – but instead it depends on the relative velocity between the observer and the medium. The speed of light in a medium can also depend on the polarization or color of the light, the former is called birefringence and the latter dispersion.
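The vacuum-versus-medium difference is just the refractive index. A small Python sketch with rounded textbook indices (and remember dispersion means the index actually varies with color):

```python
c = 299_792_458.0  # speed of light in vacuum, m/s

def speed_in_medium(n: float) -> float:
    """Phase velocity of light in a medium with refractive index n."""
    return c / n

# Rounded indices for visible light; real indices depend on wavelength.
for name, n in [("vacuum", 1.0), ("water", 1.33), ("glass", 1.5)]:
    print(name, round(speed_in_medium(n) / 1000), "km/s")
```

In water, light travels at only about three quarters of its vacuum speed.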

3. “Gravity Waves are Wiggles in Space-time”

Well, actually gravity waves are periodic oscillations in gases and fluids for which gravity is a restoring force. Ocean waves and certain clouds are examples of gravity waves. The wiggles in space-time are called gravitational waves, not gravity waves.

4. “The Earth is round.”

Well, actually the earth isn’t round, it’s an oblate spheroid, which means it’s somewhat thicker at the equator than from pole to pole. That’s because it rotates and the centrifugal force is stronger for the parts that are farther away from the axis of rotation. In the course of time, this has made the equator bulge outwards. It is however a really small bulge, and to very good precision the earth is indeed round.
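For the numbers, here is a quick Python check using the standard WGS84 reference radii:

```python
# Earth's equatorial vs polar radius (WGS84 reference values, in km):
r_equatorial = 6378.137
r_polar = 6356.752

bulge = r_equatorial - r_polar
flattening = bulge / r_equatorial
print(round(bulge, 1))       # the equatorial bulge, about 21.4 km
print(round(1 / flattening))  # flattening of roughly 1/298
```

A twenty-one kilometer bulge on a radius of over six thousand kilometers: a deviation of about a third of a percent, which is why "the Earth is round" is a very good approximation.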

5. “Quantum Mechanics is a theory for Small Things”

Well, actually, quantum mechanics applies to everything regardless of size. It’s just that for large things the effects are usually so tiny you can’t see them.

6. “I’ve lost weight!”

Well, actually weight is a force that depends on the gravitational pull of the planet you are on, and it’s also a vector, meaning it has a direction. You probably meant you lost mass.

7. “Light is both a particle and a wave.”

Well, actually, it’s neither. Light, as everything else, is described by a wave-function in quantum mechanics. A wave-function is a mathematical object that can be sharply focused, in which case it looks pretty much like a particle. Or it can be very smeared out, in which case it looks more like a wave. But really it’s just a quantum-thing from which you calculate probabilities of measurement outcomes. And that’s, to our best current knowledge, what light “is”.

8. “The Sun is eight light minutes away from Earth.”

Well, actually, this is only correct in a particular coordinate system, for example that in which Planet Earth is at rest. If you move really fast relative to Earth, and use a coordinate system at rest with that fast motion, then the distance from sun to earth will undergo Lorentz-contraction, and it will take light less time to cross the distance.
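A small Python sketch of the Lorentz contraction makes the point concrete (using the rounded eight light minutes from above):

```python
import math

def contracted(distance: float, v_over_c: float) -> float:
    """Length measured in a frame moving at speed v relative to the
    frame in which the distance is `distance`: L * sqrt(1 - v^2/c^2)."""
    return distance * math.sqrt(1.0 - v_over_c ** 2)

# Sun-Earth distance in light minutes, as seen at various speeds:
d = 8.0
for beta in (0.5, 0.9, 0.99):
    print(beta, round(contracted(d, beta), 2))
```

At ninety percent of the speed of light, the Sun-Earth distance shrinks to about three and a half light minutes.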

9. “Water is blue because it mirrors the sky.”

Well, actually, water is just blue. No, really. If you look at the frequencies of electromagnetic radiation that water absorbs, you find that in the visual part of the spectrum the absorption has a dip around blue. This means water swallows less blue light than light of other frequencies that we can see, so more blue light reaches your eye, and water looks blue. 

However, as you have certainly noticed, water is mostly transparent. It generally swallows very little visible light and so, that slight tint of blue is a really tiny effect. Also, what I just told you is for chemically pure water, H two O, and that’s not the water you find in oceans, which contain various minerals and salt, not to mention dirt. So the major reason the oceans look blue, if they do look blue, is indeed that they mirror the sky.

10. “Black Holes have a strong gravitational pull.”

Well, actually the gravitational pull of a black hole with mass M is exactly as large as the gravitational pull of a star with mass M. It’s just that – if you remember Newton’s one-over-r-squared law – the gravitational pull depends on the distance to the object.

The difference between a black hole and a star is that if you fall onto a star, you’re burned to ashes when you get too close. For a black hole you keep falling towards the center, cross the horizon, and the gravitational pull continues to increase. Theoretically, it eventually becomes infinitely large.
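A quick Python comparison with rounded constants: at the same distance the Newtonian pull is identical, the difference is only how close you can get:

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30   # one solar mass, kg
c = 299_792_458.0

def pull(r: float) -> float:
    """Newtonian gravitational acceleration at distance r from mass M,
    the same whether M is a star or a black hole."""
    return G * M / r ** 2

# At one astronomical unit, both give Earth's familiar acceleration
# toward the Sun, about 0.006 m/s^2:
print(pull(1.496e11))

# But a solar-mass black hole has a horizon radius of only ~3 km,
# so you can get very close, where the pull becomes enormous:
r_s = 2 * G * M / c ** 2
print(round(r_s / 1000, 1))
```

A star with the same mass has a radius of hundreds of thousands of kilometers, so you never experience the pull at those small distances without first hitting the star.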

How many did you know? Let me know in the comments.


You can join the chat on this video tomorrow, Thursday Dec 31, at 6pm CET or noon Eastern Time here.

Saturday, December 26, 2020

What is radiation? How harmful is it?

[This is a transcript of the video embedded below.]


Did you know that sometimes a higher exposure to radiation is better than a lower one? And that some researchers have claimed low levels of radioactivity are actually beneficial for your health? Does that make sense? Are air purifiers that ionize air dangerous? And what do we mean by radiation to begin with? That’s what we will talk about today.

First of all, what is radiation? Radiation generally refers to energy transferred by waves or particles. So, if I give you a fully charged battery pack, that’s an energy transfer, but it’s not radiation because the battery is neither a wave nor a particle. On the other hand, if I shout at you and it makes your hair wiggle, that sound was radiation. In this case the energy was transferred by sound waves, that are periodic density fluctuations in the air.

Sound is not something we usually think of as radiation, but technically, it is. Really all kind of things are technically radiation. If you drop a pebble into water, for example, then the waves this creates are also radiation.

But what people usually think of, when they talk about radiation, is energy that’s transferred by either (a) elementary particles, that is, particles which have no substructure for all we currently know, (b) small composite particles, such as protons, neutrons, or even small atomic nuclei, or (c) electromagnetic waves. But electromagnetic waves are strictly speaking also made of particles, which are the photons. So, really all these types of radiation that we usually worry about are made of some kind of particle.

The only exception is gravitational radiation. That’s transferred in waves, and we believe that these gravitational waves are made of particles, that’s the gravitons, yet we have no evidence for the gravitons themselves. But of all the possible types of radiation, gravitational radiation is the one that is the least likely to leave any noticeable trace. Therefore, with apologies, I will, in the following, not consider gravitational waves.

Having said that, if you want to know what radiation does, you need to know four things. First, what particle is it? Second, what’s the energy of the particle? Third, how many of these particles are there? And fourth, what do they do to the human body? We will go through these one by one.

First, the type of particle tells you how likely the radiation is to interact with you. Some particles come in huge amounts, but they basically never interact. They just go through stuff and don’t do anything. For example, the sun produces an enormous number of particles called neutrinos. Neutrinos are electrically neutral, have a small mass, and they just pass through walls and you and indeed, the whole earth. There are about one hundred trillion neutrinos going through your body every second. And you don’t notice.

It’s the same with the particles that make up dark matter. They should be all around us and going through us as we speak, but they interact so rarely with anything, we don’t notice. Or maybe they don’t exist in the first place. In any case, neutrinos and dark matter are particles you clearly don’t need to worry about.

However, other particles interact more strongly, especially if they are electrically charged. That’s because the constituents of all types of matter are also electrically charged. Charged particles in radiation are mostly electrons, which you all know, or muons. Muons are very similar to electrons, just heavier and they are unstable. They decay into electrons and neutrinos again. You can also have charged radiation made of protons, that’s one of the constituents of the atomic nucleus and it’s positively charged, or you can have radiation made of small atomic nuclei. The best known of those are Helium nuclei, which are also called alpha particles.

Besides protons, the other constituents of the atomic nucleus are neutrons. As the name says, they are electrically neutral. They are a special case because they can do a lot of damage even though they do not have an electric charge. That’s because neutrons can enter the atomic nucleus and make the nucleus unstable.

However, neutrons, curiously enough, are actually unstable if they are on their own. If neutrons are not inside an atomic nucleus, they live only for about 10 minutes, then they decay to a proton, an electron and an electron-anti-neutrino. For this reason you don’t encounter single neutrons in nature. So, that too is something you don’t need to worry about.
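To put a number on how fast free neutrons disappear, here is a little Python sketch. The mean lifetime of about 880 seconds is the measured value, which I am adding here; it corresponds to the roughly ten minute half-life mentioned above:

```python
import math

# Free-neutron decay: N(t) = N0 * exp(-t / tau), with a mean lifetime
# tau of about 880 seconds (measured value; the half-life is then
# tau * ln(2), roughly 10 minutes).
TAU = 880.0  # mean neutron lifetime in seconds

def surviving_fraction(t_seconds):
    """Fraction of free neutrons that have not yet decayed after t seconds."""
    return math.exp(-t_seconds / TAU)

half_life_min = TAU * math.log(2) / 60
print(f"half-life: {half_life_min:.1f} minutes")                      # ~10.2 minutes
print(f"fraction left after 1 hour: {surviving_fraction(3600):.3f}")  # ~0.017
```

So after one hour, fewer than two percent of any free neutrons are left, which is why you don’t encounter them in nature.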

Then there’s electromagnetic radiation, which we already talked about the other week. Electromagnetic radiation is made of photons, and they can interact with anything that is electrically charged. And since atoms have electrically charged constituents, this means electromagnetic radiation can interact with any atom. But whether photons actually do that depends on the amount of energy per photon.

So, first you need to know what kind of particle is in the radiation, because that tells you how likely it is to interact. And then, second, to understand what the radiation can do if it interacts, you need to know how much energy the individual particles in the radiation have. If the energy of the particles in the radiation is large enough to break bonds between molecules, then they are much more likely to be harmful.

The typical binding energy of molecules is similar to the energy you need to pull an electron off an atom. This is called ionization, and radiation that can do that is therefore called “ionizing radiation”. The reason ionizing radiation is harmful is not so much the ionization itself, it’s that if radiation can ionize, you know it can also break molecular bonds.
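To get a feeling for the energies involved, here is a quick Python estimate of the energy per photon at different wavelengths. The specific wavelengths are just illustrative picks; the point is that visible light sits below the few electron volts needed for ionization, while ultraviolet sits above it:

```python
H = 6.626e-34   # Planck's constant in Joule-seconds
C = 2.998e8     # speed of light in meters per second
EV = 1.602e-19  # one electron volt in Joules

def photon_energy_ev(wavelength_m):
    """Energy of a single photon, E = h*c/lambda, in electron volts."""
    return H * C / wavelength_m / EV

print(f"red light   (700 nm): {photon_energy_ev(700e-9):.2f} eV")  # ~1.8 eV
print(f"ultraviolet (200 nm): {photon_energy_ev(200e-9):.2f} eV")  # ~6.2 eV
```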

Ionized atoms or molecules like to undergo chemical reactions. That may be a problem if it happens inside the body. But ionized molecules in the air are actually common, because sunlight can do this ionization, and they are not something you need to worry about. An air purifier, for example, ionizes some air molecules, usually O two or N two.

The idea is that these ionized molecules will bind to dust and then the dust is charged, so it will stick to the floor or other grounded surfaces. But this ionization in air purifiers does not require ionizing radiation, so it’s not a health risk. Except that air purifiers may also produce ozone, which is not healthy.

Where does ionizing radiation come from? Well, for one, ultraviolet sunlight has enough energy to ionize. But even higher energies can be reached by ionizing radiation that comes from outer space, the so-called cosmic rays.

Most ultraviolet radiation from the sun gets stuck in the stratosphere thanks to ozone. Most cosmic rays are also blocked or at least dramatically slowed down in the upper atmosphere, but some of them still reach the ground. This already tells you that your exposure to ionizing radiation increases with altitude. In fact, average people like you and me tend to get the highest doses of ionizing radiation on airplanes.

The particles in the primary cosmic radiation are mostly protons, some are small ionized nuclei, and then there’s a tiny fraction of other stuff. Primary here means, it’s the thing that actually comes from outer space. But almost all of these primary cosmic particles hit air molecules in the upper atmosphere, which converts them into a lot of particles of lower energy, usually called a cosmic ray shower. This shower, which rains down on earth, is almost exclusively made of photons, electrons, muons, and neutrinos, which we’ve already talked about.

Ionizing radiation is also emitted by radioactive atoms. The radiation that atoms can emit is of three types: alpha, that’s Helium nuclei, beta, that’s electrons and positrons, and gamma, that’s photons. Radioactive atoms which emit these types of radiation occur naturally in air, rock, soil, and even food. So there is a small amount of background radiation everywhere on earth, no matter where you go, and what you touch.

This then brings us to the third point. If you know what particle it is, and you know what energy it has, you need to know how many of them there are. We measure this as the total energy per time, which is known as power. The power of the radiation is highest if you are close to the source of the radiation. That’s because the particles spread out into space, so the farther away you are, the fewer of them will hit you. The number of particles can also drop very quickly if some of the radiation is absorbed. And the radiation that is the most likely to interact with matter is the least likely to reach you. This is the case, for example, for alpha particles. You can block them with just a sheet of paper.
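The drop of power with distance can be made concrete with a small Python sketch. For an idealized source that radiates equally in all directions, the power spreads over a sphere, so the power per area falls with the square of the distance:

```python
import math

def power_density(source_watts, distance_m):
    """Power per area (W/m^2) at some distance from an idealized source
    that radiates equally in all directions: the total power spreads
    over a sphere of area 4*pi*r^2."""
    return source_watts / (4 * math.pi * distance_m ** 2)

# Doubling the distance quarters the power density:
near = power_density(100.0, 1.0)
far = power_density(100.0, 2.0)
print(f"power density at 1 m: {near:.2f} W/m^2")
print(f"power density at 2 m: {far:.2f} W/m^2")
print(f"ratio: {near / far:.1f}")  # 4.0
```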

And then there’s the fourth point, which is the really difficult one. How much of that radiation is absorbed by the body, and what can it do? There is no simple answer to this. Well, okay, one thing that’s simple is that high amounts of radiation, regardless of what type, pump a lot of energy into the body, which is generally bad. Most countries therefore have radiation safety regulations that set strict limits on the maximum amount of radiation humans should be exposed to. If you want to know details, I encourage you to check out the official guides; I leave links in the info below the video.

Interestingly enough, more radiation is not always worse. For example, you may remember that if there’s a nuclear accident, people rush to buy iodine pills. That’s because nuclear accidents can release a radioactive isotope of iodine, which may then enter the body through air or food. This iodine will accumulate in the thyroid gland and if it decays, that can damage cells and cause cancer. The pills of normal iodine basically fill up the storage space in your thyroid gland, which means the radioactive substance leaves the body faster.

But, believe it or not, some people swallow large amounts of radioactive iodine as a medical treatment. This is actually rather common if someone has an overactive thyroid gland, which causes a long list of health issues. It can be treated by medication, but then patients have to take pills throughout their lives, and these pills are not without side effects.

Now, if you give those patients radioactive iodine, it kills off a significant fraction of the cells in the thyroid gland, and that can solve their problem permanently. This method has been in use since the 1940s, is very effective, and no, it does not increase the risk of thyroid cancer. The thing is that if the radiation dose is high enough, the cells in the thyroid gland will not just be damaged, mutate, and possibly cause cancer. They’ll just die.

Now, this is not to say that more radiation is generally better, certainly not. But it demonstrates that it’s not easy to find out what the health effects of a certain radiation dose are. The physics is simple. But the biology isn’t.

Indeed, some scientists have argued that low doses of ionizing radiation may be beneficial because they encourage the body to use cell-repair mechanisms. This idea is called “radiation hormesis”. Does that make sense? Well. It kind of sounds plausible. But the plausible ideas are the ones you should be most careful with. Several dozen studies have looked at radiation hormesis in the past 20 years. But so far, the evidence has been inconclusive, and official radiation safety committees have not accepted it. This does not mean it’s wrong. It just means that, at the moment, it’s unclear whether or, if so, under which circumstances, low doses of ionizing radiation may have health benefits, or at least not do damage.

So, I hope you learned something new today!



You can join the chat on this video today (Saturday) at 6pm CET/noon Eastern Time here.

Wednesday, December 23, 2020

How to speak English like Einstein


[This is a transcript of the video embedded above. Parts of the text won’t make sense without the accompanying audio.]

Hi everybody, I’ve been thinking really hard about why you are here. Of course theoretical physics is awesome, but in my experience, that opinion is, sadly enough, not widely shared among the general population. So while I am thrilled to see you’re all super excited about the square well potential in Schrödinger’s equation, I am secretly convinced you’re just here to hear me try to pronounce difficult English words with a German accent. So, today, we’ll have a special feature about How To Speak English Like Einstein.
Albert Einstein: “The scientific method itself would not have led anywhere, it would not even have been formed, without a passionate striving for a clear understanding. Perfection of means and confusion of goals seem, in my opinion, to characterize our age.”
Don’t worry if you don’t speak German, you don’t need to know a single word of German to understand this video. But before we get started, let’s have a look at Professor Einstein’s name, Albert Einstein.

How is that pronounced correctly? Most importantly, the German ST is not pronounced “st” as you would in English, for example in “first” or “start”. The “ST” in Einstein is pronounced “scht”. Einstein.

The German “sch” is similar, but not exactly identical, to the English “sh”. If you are familiar with phonetic spelling, that’s this thing that looks like an integral. You find it in words like “push” or “machine”. It’s a good first approximation to the German “sch”, but if you want to get the German sound right, you have to pull the tongue back in your mouth.

Listen, the English one is “push”. Now pull back your tongue, and you get pusch. Pusch. That’s the sound that goes into Einschtein.

The rest is details. All the German vowels are shifted relative to the English ones, long story, big headache, but just try not to rely on the spelling, just listen. It’s Albert Einstein. Don’t worry about the “r” in Albert, just make that an “a”. Everyone does it and it’ll sound just fine. Albeat. Albeat Einstein.

Ok, so now about that German accent. To speak with a German accent, you have to remember which English sounds do not exist in German. And that’s most importantly, the English “th”, the vanishing “w”, and the “r”. If you replace those with the next closest German sounds, you’ll immediately sound very German.

Let’s use this sentence as an example: “I remember in February we were still thinking that this would be over relatively soon.” I hope I pronounced this correctly.

Here’s the first step to a German accent. Replace all the “th”s, the “th” with a “z”. Why a “z”? Because that’s what comes out if you put your tongue in the wrong place. That’s what you mean, zat’s what it sounds like. So, you replace “this” with “zis”. And “either” with “eizer”. “Therefore” with “zerefore”, and so on. The example sentence then becomes:

“I remember in February we were still zinking zat zis would be over relatively soon.”
Mayday, mayday. Hello, can you hear us? Can you hear us? Over. We are sinking. We are sinking.

Hello. Ziz is ze German coast guard.

We’re sinking. We’re sinking.

What are you zinking about?
German humor.

Second step. The vanishing English “w”. As in “what” or “wonderful”. That sound doesn’t exist in German either, so you make it a “v”. What becomes vat. Wonderful becomes vonderful. Would becomes vould, and so on. With that, our example sentence now sounds like this:

“I remember in February ve vere still zinking zat zis vould be over relatively soon.”

The third step is probably the most difficult one if you’re an English native speaker. It’s to replace the English “r” with a German “r”. The German “r” is a short rolling r. Think of a happy cat, it’s purring, it goes “rrrrrr” “rrrr”. It comes from the back of your throat. Like if you’re snoring. Rrrr. Try that. I’ll wait.

Excellent. Now you launch from that into a word. Let’s take the word “right”. “rrrrrrrrrrrrrright” right. Right. There you have it. It sounds very German doesn’t it? We don’t, in German, actually do a lot of rolling with the r, so don’t make that too long. Right. Also, don’t trill the r at the tip of your tongue, like in trust me. No, don’t do that. It should be tRust me.

Some more examples. Friend becomes “fRiend”. Direction becomes diRection. It’s actually a terrible sound.

The example sentence is now: “I Remember in FebRuaRy ve vere still zinking zat zis vould be over Relatively soon.”

Repeat after me, I’ll pause.

“I Remember in FebRuaRy ve vere still zinking zat zis vould be over Relatively soon.”

Great. You are awesome. Have fun with your Einstein English, don’t forget to subscribe and check my Patreon page for more content. Zanks for vatching.

Saturday, December 19, 2020

All you need to know about 5G

The new 5G network technology is currently being rolled out in the United States, Germany, the United Kingdom, and many other countries all over the world. What’s new about it? Does it really use microwaves? Like in microwave ovens? Is that something you should worry about? I began looking into this fully convinced I’d tell you that nah, this is the usual nonsense about cellphones causing cancer. But having looked at it in some more detail, now I’m not so sure.


First of all, what is 5G? 5G is the fifth generation of wireless networks. The installation of antennas is not yet completed, and it will probably take at least several more years to complete, but in some places 5G is already operating, and you can now buy cellphones that use it. What’s it good for? 5G promises to deliver more data, faster, by up to a factor of one hundred, optimistically. It could catapult us into an era where driverless cars and the internet of things have become reality.

How is that supposed to work? 5G uses a variety of improvements on data routing that make it more efficient, but the biggest change, which has attracted the most attention, is that 5G uses a frequency range that the previous generations of wireless networks did not use.

These are the millimeter waves. And, yes, these are the same waves that are being used in the scanners at airport security, the difference is that in the scanners you’re exposed for a second every couple of months or so, while with 5G you’d be sitting in it at low power but possibly for hours a day, depending on how close you live and work to one of the new antennas.

As the name says, millimeter waves have wavelengths in the millimeter range, and the ones used for 5G correspond to frequencies of twenty-four to forty-eight Giga-Hertz.

If that number doesn’t tell you anything, don’t worry, I will give you more context in a moment. For now, let me just say that the new frequencies are about a factor ten higher than the highest frequencies that were previously used for wireless networks.
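You can check the name “millimeter waves” yourself with one line of physics: the wavelength equals the speed of light divided by the frequency. A quick Python sketch:

```python
C = 2.998e8  # speed of light in meters per second

def wavelength_mm(freq_ghz):
    """Wavelength in millimeters for a frequency in GHz: lambda = c / f."""
    return C / (freq_ghz * 1e9) * 1e3

print(f"24 GHz -> {wavelength_mm(24):.1f} mm")  # ~12.5 mm
print(f"48 GHz -> {wavelength_mm(48):.1f} mm")  # ~6.2 mm
```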

Another thing that’s new about 5G is the directional phased-array antennas. That’s a complicated word which basically means the antennas don’t just radiate the signal off into all directions; they can target a particular direction. And that’s an important difference if you want to know how the signal strength drops with distance to the antenna. Roughly speaking, it becomes more difficult to know what’s going on.
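To see how a phased array targets a direction, here is a textbook toy model in Python, not the actual 5G antenna design. Each antenna element gets a phase shift, and the waves add up constructively only in the chosen direction:

```python
import cmath
import math

def array_intensity(theta_deg, n=8, spacing_wavelengths=0.5, steer_deg=0.0):
    """Relative intensity of an n-element linear phased array at angle
    theta. The per-element phase shifts are chosen so the waves add up
    in the steering direction (the standard array-factor formula)."""
    theta = math.radians(theta_deg)
    steer = math.radians(steer_deg)
    k_d = 2 * math.pi * spacing_wavelengths
    total = sum(cmath.exp(1j * i * k_d * (math.sin(theta) - math.sin(steer)))
                for i in range(n))
    return abs(total) ** 2 / n ** 2  # normalized so the peak is 1

# Steered to 30 degrees: full intensity there, essentially none at 0 degrees.
print(f"at 30 deg: {array_intensity(30, steer_deg=30):.3f}")  # 1.000
print(f"at  0 deg: {array_intensity(0, steer_deg=30):.3f}")   # 0.000
```

This is why the signal strength near such an antenna depends on where the beam currently points, which is what makes exposure estimates harder than for an antenna that radiates evenly.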

Because of these new features, conspiracy theories have flourished around 5G, and there have been about a hundred incidents, mostly in the Netherlands, Belgium, Ireland, and the UK, where people have burned down or otherwise damaged 5G telephone towers. Dozens of cities, counties, and nations have stopped the installation. There have been protests against the rollout of the 5G technology all over the world. And groups of concerned scientists have twice written open letters, once in 2017 and once in 2019. Each letter attracted a few hundred signatures from scientists. Not a terrible lot, but not nothing either.

Before we can move on, I need to give you some minimal background on the physics, so bear with me for a moment. Wireless technology uses electromagnetic radiation to encode and send information. Electromagnetic radiation is electric and magnetic fields oscillating around each other creating a freely propagating wave that can travel from one place to another. Electromagnetic radiation is everywhere. Light is electromagnetic radiation. Radio stations air music with electromagnetic radiation. If you open an oven and feel the heat, that’s also electromagnetic radiation. These seem to be different phenomena, but physically, they’re all the same thing. The only difference is the wavelength of the oscillation. Commonly, we use different names for electromagnetic radiation depending on that wavelength.

If we can see it, we call it light. Visible light with long wavelengths is red, and at even longer wavelengths, when we can no longer see it, we call it infrared. We can’t see infrared light, but we often still feel that it’s warm. At even longer wavelengths we call the radiation microwaves, and if the wavelengths are even longer, they are called radio waves.

On the other side of visible light, at wavelengths shorter than violet, we have the ultraviolet, and then the X-rays, and gamma-rays. The new millimeter waves are in the high frequency part of microwaves.

Now, we may call electromagnetic radiation a “wave” but those waves are actually quantized, which means they are made of small packs of energy. These small packs of energy are the particles of light, which are called “photons”. You may think it’s an unnecessary complication, to talk about quantization here, but knowing that electromagnetic radiation is made of these particles, the photons, is extremely helpful to understand what the radiation can do.

That’s because the energy of the photons is proportional to the frequency of the radiation, or equivalently, the energy is inversely proportional to the wavelength.

So, a high frequency means a short wavelength, and a large energy per photon. A small frequency means a long wavelength, which means small energy. Again that’s energy per photon.

That the frequency of electromagnetic radiation tells you the energy of the particles in the radiation is so useful because if you want to damage a molecule, you need a certain minimum amount of energy. You need this energy to break the bonds between the atoms that make up the molecule. And so, the most essential thing you need to know to gauge how harmful electromagnetic radiation is, is whether the energy per photon in the radiation is large enough to break molecular bonds, like the bonds that hold together the DNA.

Breaking molecular bonds is not the only way electromagnetic radiation can be harmful, and I will get to the other ways in a few minutes, but it *is the most direct and important harm electromagnetic radiation can do.

So how much energy do you need to damage a molecule? Damage begins happening just above the high-energy-end of visible light, with the ultraviolet radiation. That’s the light that gives you a sunburn and that you’ve been told to avoid. It has wavelengths that are just a little bit shorter than visible light, or frequencies and energies that are just a little bit higher.

In terms of energy, ultraviolet radiation has about three to thirty electron volts per photon. An electron Volt is just a unit of energy. If that’s unfamiliar to you, doesn’t matter, you merely need to know that the binding energy of most molecules also lies in the range of a few electron volts.

If you want to break a molecule, you need energies above that binding energy, so you need frequencies at or above the ultraviolet. That’s because the energy for the damage has to come with the individual photons in the radiation. If the individual photons do not have enough energy to actually damage the molecule, they either just go through or, sometimes, if they hit a resonance frequency, they’ll wiggle the molecule. If you wiggle molecules that means you warm them up.
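You can estimate the frequency where radiation starts to be able to break molecules directly from the photon energy, f = E/h. A short Python sketch, using a typical bond energy of four electron volts as an illustrative number:

```python
H = 6.626e-34   # Planck's constant in Joule-seconds
EV = 1.602e-19  # one electron volt in Joules

def threshold_frequency_hz(bond_energy_ev):
    """Minimum frequency whose photons carry enough energy to break
    a bond of the given energy: f = E / h."""
    return bond_energy_ev * EV / H

f_threshold = threshold_frequency_hz(4.0)
print(f"threshold frequency: {f_threshold:.2e} Hz")  # ~9.7e14 Hz, in the ultraviolet
```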

So, what matters for the question whether you can damage a molecule is the energy per photon in the radiation, which means the frequency of the radiation, *not the total energy of all the particles in the radiation, of which there could be many. If you take more particles, but *each of them has an energy below what’s necessary for damaging a molecule, you’ll just get more wiggling.

All the radiation used for wireless networks, including 5G, uses frequencies way below those necessary to break molecular bonds. It is below even the infrared. So in this regard, there is clearly nothing to worry about.

But. As I mentioned, breaking molecular bonds is not the only way that electromagnetic radiation can harm living tissue. Because tissue is complicated. It’s not just physics. You can also harm tissue just by warming it.

And how much warming you can get from electromagnetic radiation is not determined by the energy per photon; it is determined by the total energy per time that is transferred by all the photons, and by the fraction that is absorbed by the tissue. That total energy transfer per time is called the “power” and it’s commonly measured in Watts. So: The frequency tells you the energy per photon. The power tells you the total energy in photons per time.

For example, if you look at your microwave oven, that probably operates at about 2 GigaHertz, which is a really small energy per photon, about a million times below the energy required to break molecular bonds.

But a microwave oven operates at maybe four hundred or up to a thousand Watts. And that’s high in terms of power. So, a lot of photons per time. On the other hand, if you have a wireless router at home, it quite possibly operates at a similar frequency as your microwave oven. But a wireless router typically uses something like one hundred milli Watts, that’s ten thousand times less than the microwave oven, and the router radiates into space, not into a closed cavity.
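Putting the numbers from this example into Python shows both comparisons at once. The 2.45 GHz and the wattages are typical values I am assuming here, not specifications of any particular device:

```python
H = 6.626e-34   # Planck's constant in Joule-seconds
EV = 1.602e-19  # one electron volt in Joules

# Energy per photon at a typical microwave-oven frequency of ~2.45 GHz:
photon_ev = H * 2.45e9 / EV
print(f"oven photon energy: {photon_ev:.1e} eV")  # ~1e-5 eV

# A typical molecular bond is a few eV, so each photon falls short by
# a factor of a few hundred thousand:
print(f"factor below a 3 eV bond: {3.0 / photon_ev:.1e}")

# Power comparison: oven (up to ~1000 W) versus wireless router (~0.1 W):
print(f"power ratio: {1000 / 0.1:.0f}")  # 10000
```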

That’s a relevant difference for a simple geometric reason. If the photons in the electromagnetic radiation spread out in all directions, as they do for antennas like your wireless router, then the density of particles will thin out, meaning the power will drop very quickly with distance to the sender. This is why, in wireless communication, the highest power you’ll be exposed to comes from whatever sender you are closest to, and that is usually your cell phone, not an antenna, because the antennas tend to be on a roof or a mast or, in any case, not on your ear.

Ok, to summarize: The frequency tells you the energy per particle and determines what type of damage is possible. The power tells you the number of particles, and it drops very quickly with distance to the source. The power alone does not tell you how much is absorbed by the human body.

Back to 5G. What the 5G controversy is about is whether the electromagnetic radiation from the new antennas poses a health risk.

5G actually uses electromagnetic radiation in three different parts of the spectrum, called the low band, the mid band, and the high band. The frequency of the radiation in all these bands is below that which is required to damage molecules. The frequency of the mid band is indeed comparable to the one your microwave oven is using, but actually, there’s nothing new about this, microwaves have been used by wireless networks for more than two decades.

The radiation in the high band are the new millimeter waves. This band has so far been largely unused for telecommunication purposes simply because it’s not very good for long-range transmission. The electromagnetic waves in this range do not travel very far and can get blocked by walls, trees, and even humans.

Therefore, the idea behind 5G is to use a short-range network, made of the so-called “small cells” for the millimeter waves. These small cells have to be distributed at distances of about one hundred meters or so.

The small cells communicate with macro cells that use the mid and low bands with antennas that operate at higher power and that do the long range transmission. So, a fully functional 5G network is likely to increase the exposure to millimeter waves, which have not before been used for cell phones.

This means the people who are citing the lack of correlation between cell phone use and cancer incidence in the past 20 years missed the point. These studies don’t tell you anything about the 5G high band because that wasn’t previously in use.

Now the thing is, if you look at what is known about the health risks from long-term exposure to the new millimeter wave band, there are basically no studies. We know that millimeter waves cannot penetrate deeply into the human body, but we also know that at high power they warm the skin and irritate the eyes. Exactly what power is too much in the long run, no one knows, because there just hasn’t been enough research.

Here is, for example, a meta-review published about a year ago, which came to the conclusion:
“The available studies do not provide adequate and sufficient information for a meaningful safety assessment.”

And here we have Rob Waterhouse, vice president of a telecommunication company in the United States:
Waterhouse admits that although millimeter waves have been used for many different applications—including astronomy and military applications—the effect of their use in telecommunications is not well understood… “The majority of the scientific community does not think there’s an issue. However, it would be unscientific to flat out say there are no reasons to worry.”
That’s not very reassuring. And the World Health Organization writes:

“no adverse health effect has been causally linked with exposure to wireless technologies… but, so far, only a few studies have been carried out at the frequencies to be used by 5G.”

So the protests that you see against 5G, I am afraid to say, are not entirely unjustified. Don’t get me wrong, damaging other people’s property is certainly not a legitimate response. But I can understand the concern. We have no reason to think 5G *is a health risk. Indeed, it is reasonable to think it is *not a health risk, given that this radiation is of low energy and scatters in the upper layers of the skin, but there is very little data on what the effects of long-term exposure may be.

How should one proceed in such a situation? Depends on how willing you are to tolerate risk. And that’s not a question for science, that’s a question for politics. What do you think? Let me know in the comments.



You can join the chat on this week's topic on Saturday, Dec 19, at noon Eastern Time/6pm CET here.

Saturday, December 12, 2020

Are Singularities Real?

Last week we discussed whether infinity is real, and came to the conclusion it is not. This week I want to talk about a closely related topic, singularities. What are singularities, where do they appear in physics, and are they real?


A singularity is a place beyond which you cannot continue. But singularities in mathematics can be rather dull. In mathematics, a singularity may just be a location where an object, for example a function, is not defined. And it may be undefined simply because you didn’t define it there.

If I define, for example, a piecewise function that has the value one for x strictly smaller than zero and for x strictly larger than zero, then that function is not defined exactly at zero. You can’t go from left to right. So, that’s a singularity. It is, however, a singularity that is easy to remove, just by filling in the missing point. Correspondingly, this is also called a removable singularity.

But many functions have singularities that are more interesting than that. The simplest example that’s still interesting is the function one over x, which has a singularity at zero. This singularity cannot be removed. There is no point you could fit in at zero that would make this function continuous. You won’t get from the left to the right.

For the function one over x that’s because the function diverges when x gets close to zero, so the value of the function becomes infinitely large. However, and this is a really important point, a singularity does not necessarily have to come with anything infinite.

Take for example the function sine of one over x, which I have plotted here. This function has a singularity at x equals zero, but that’s not because the value of the function becomes infinitely large. It’s because there’s no such thing as the value of the sine function at one over zero.

For a mathematician, a function doesn’t even have to look odd to have a singularity. The best example is the function e to the minus one over x squared. This looks perfectly fine if you plot it. But this function has a really weird property. If you calculate the value of the function and the derivatives of the function at zero, you will find that they are all exactly zero.

What this means is that if you reach zero from one side, you don’t know how to continue the function. There are infinitely many ways to continue from there, all of which will perfectly fit to the other side. For example, you could continue with a function that’s zero everywhere and glue this onto the other side. Or you could take the function e to the minus pi over x squared. This type of singularity is called an “essential singularity”.
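You can see this remarkable flatness numerically. A small Python sketch of e to the minus one over x squared, extended by defining it to be zero at zero:

```python
import math

def f(x):
    """f(x) = exp(-1/x^2), extended by defining f(0) = 0."""
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

# The function is astonishingly flat near zero: at x = 0.1 its value is
# already about 3.7e-44. That is why every derivative at zero vanishes.
for x in [0.5, 0.2, 0.1]:
    print(f"f({x}) = {f(x):.3e}")

# A one-sided difference quotient at zero is also numerically zero:
h = 0.05
print((f(h) - f(0)) / h)  # about 4e-173, indistinguishable from zero
```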

Okay, so singularities are arguably a thing in mathematics. But do singularities appear in reality? Not for all we currently know. Physicists use mathematics to describe nature, and, yes, sometimes this mathematics contains singularities. But these singularities are in all known cases a signal that the theory has been applied in a range where it’s no longer valid.

Take for example water dripping off a tap. The surface of the droplet has a singularity where the drop pinches off. At this point the curvature becomes infinitely large. However, this happens only if you describe water as a fluid, which is an approximation in which you ignore that really the water is made of atoms. If you look closely at the pinch-off point of the droplet, then there is no singularity, there are just atoms.

There are other examples where singularities appear in physics. For example, in phase transitions, like the transition from a liquid to a solid, some quantities can become infinitely large. But again this is really a consequence of using certain bulk descriptions in approximation. If you actually look closely, there isn’t really anything singular at a phase transition.

There is one exception to this, and that’s black holes.

Black holes are solutions of Einstein’s theory of general relativity. They have a singularity in the center. Because it’s a common misunderstanding, let me emphasize that there is no singularity at the black hole horizon. There is actually nothing particularly interesting happening at the horizon. It’s just the boundary of a region from which you cannot get out. Instead, once you cross the horizon, you will inevitably fall into the singularity.

And in a nutshell, this is pretty much what Hawking and Penrose’s singularity theorems are about. That in general relativity you can get situations where singularities are unavoidable because all possible paths lead there.

But what happens if you fall into a black hole singularity? Well, you die before you reach the singularity, because tidal forces rip you to pieces. But if your remains reach the singularity, then that’s just the end. There’s no more space or time beyond this. There’s just nothing.

At least that’s what the mathematics says. So what does the physics say? Is the black hole singularity “real”? No one knows. Because we cannot see what happens inside of a black hole. Whatever happens there is really just pure speculation.

Most physicists believe that the singularity in black holes is not real, but that it is instead of the same type as the other singularities in physics. That is, it just signals that the theory, in this case general relativity, breaks down and to make meaningful predictions, one needs a better theory. For the black hole singularity, that better theory would be a theory for the quantum behavior of space and time, a theory of “quantum gravity” as it’s called.

Some of you may wonder now what’s with the technological singularity. The technological singularity usually refers to a point in time where machines become intelligent enough to improve themselves, creating a runaway effect which is supposedly impossible to predict. It’s called a singularity because of this impossibility to make a prediction beyond it, which is indeed very similar to the mathematical definition of a singularity.

But of course the technological singularity is not a real singularity. It may be in practice impossible to predict what happens afterwards but lots of things are in practice impossible to predict. There is nothing specifically unpredictable about the laws of nature at a technological singularity, if that ever happens in the first place.

In summary, singularities exist in mathematics, but we have no evidence that singularities also exist in nature. And given that, as we saw earlier, certain types of singularities do not even require any quantity to become infinite, it is not impossible that one day we may discover an actual singularity in nature. In contrast to infinity, singularities are not a priori unscientific.



You can join the chat about this week's topic on Saturday, Dec 12, 12PM EST / 6PM CET.

Saturday, December 05, 2020

Is Infinity Real?

[This is a transcript of the video embedded below]

Is infinity real? Or is it just mathematical nonsense that you get when you divide by zero? If infinity is not real, does this mean zero also is not real? And what does it mean that infinity appears in physics? That’s what we will talk about today.


Most of us first encounter infinity when we learn to count and realize that you can go on counting forever. I know it’s not a terribly original observation, but this lack of an end to counting, because you can always add one and get an even larger number, is the key property of infinity. Infinity is the unbounded. It’s larger than any number you can think of. You could say it’s unthinkably large.

Okay, it isn’t quite as simple because, odd as this may sound, there are different types of infinity. The set of natural numbers, 1, 2, 3, and so on, is just the simplest type of infinity, called “countable infinity”. And the natural numbers are, in a very specific way, just as infinite as some other sets of numbers, because you can count those other sets using the natural numbers.

Formally, this means a set of numbers is infinite in the same way as the natural numbers if you have a one-to-one map from the natural numbers to that other set. If there is such a map, then the two sets are of the same type of infinity.

For example, if you add the number zero to the natural numbers – so you get the set zero, one, two, three, and so on – then you can map the natural numbers to this by just subtracting one from each natural number. So the set of natural numbers and the set of the natural numbers plus the number zero are of the same type of infinity.

It’s the same for the set of all integers Z, which is zero, plus minus one, plus minus two, and so on. You can uniquely assign a natural number to each integer, so the integers are also countably infinite.

The rational numbers, that is, the set of all fractions of integers, are also countably infinite. The real numbers, which contain all numbers with infinitely many digits after the decimal point, are however not countably infinite. You could say they are even more infinite than the natural numbers. There are actually infinitely many types of infinity, but these two, those which correspond to the natural and real numbers, are the two most commonly used ones.
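The “counting” in countable infinity can be made concrete. Here is a small Python sketch, not from the video, of a one-to-one map from the counting numbers (starting the count at zero for convenience) onto all the integers:

```python
def nat_to_int(n):
    """Bijection from the counting numbers 0, 1, 2, ... onto all integers,
    enumerating them as 0, 1, -1, 2, -2, ... so every integer appears
    exactly once."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

print([nat_to_int(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
```

Since every integer is hit exactly once by some counting number, the integers are countably infinite, just as the text says.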

Now, that there are many different types of infinity is interesting, but more relevant for using infinity in practice is that most infinities are actually the same. As a consequence of this, if you add one to infinity, the result is still the same infinity. And if you multiply infinity with two, you just get the same infinity again. If you divide one by infinity, you get a number with an absolute value smaller than anything, so that’s zero. But you get the same thing if you divide two or fifteen or square root of eight by infinity. The result is always zero.

I hope there are no mathematicians watching this, because technically one should not write down these relations as equations. Really they are statements about the type of infinity. The first, for example, just means if you add one to infinity, then the result is the same type of infinity.

The problem with writing these relations as equations is that it can easily go wrong. See, you could for example try to subtract infinity on both sides of this equation, giving you nonsense like one equals zero. And why is that? It’s because you forgot that the infinity here really only tells you the type of infinity. It’s not a number. And if the only thing you know about two infinities is that they are of the same type, then the difference between them can be anything.

It’s even worse if you do things like dividing infinity by infinity or multiplying infinity with zero. In this case, not only can the result be any number, it could also be any kind of infinity.
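As it happens, the floating point numbers used by most programming languages follow exactly these rules, which makes for a quick demonstration:

```python
import math

inf = float("inf")        # IEEE floating point includes a value behaving like infinity
print(inf + 1 == inf)     # True: adding one gives the same infinity back
print(2 * inf == inf)     # True: so does multiplying by two
print(1 / inf)            # 0.0, and the same for 2/inf or 15/inf
print(inf - inf)          # nan: "not a number", the difference is undefined
print(inf / inf)          # nan
print(inf * 0)            # nan
```

The “nan” results are the computer’s way of saying what the text says: knowing only the type of two infinities does not determine their difference, ratio, or product with zero.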

This whole infinity business certainly looks like a mess, but mathematicians actually know very well how to deal with infinity. You just have to be careful to keep track of where your infinity comes from.

For example, suppose you have a function like x squared that goes to infinity when x goes to infinity. You divide it by an exponential function, which also goes to infinity with x. So you are dividing infinity by infinity. This sounds bad.

But in this case you know how you get to infinity and therefore you can unambiguously calculate the result. In this case, the result is zero. The easiest way to see this is to plot this fraction as a function of x, as I have done here.
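You can check this numerically; here is a quick sketch:

```python
import math

# x**2 and exp(x) both go to infinity, but the exponential wins:
# their ratio goes to zero as x grows
for x in [1, 10, 50, 100]:
    print(x, x**2 / math.exp(x))
```

The printed ratios shrink rapidly toward zero, which is the plotted behavior described in the text.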

If you know where your infinities come from, you can also subtract one from another. Indeed, physicists do this all the time in quantum field theory. You may for example have terms like 1/epsilon, 1/epsilon squared, and the logarithm of epsilon. Each of these terms will give you infinity as epsilon goes to zero. But if you know that two terms are of the same infinity, so they are the same function of epsilon, then you can add or subtract them like numbers. In physics, usually the goal of doing this is to show that at the end of a calculation they all cancel each other and everything makes sense.
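Here is a toy version of this bookkeeping, with made-up functions standing in for the terms of a real calculation:

```python
def a(eps):
    return 1.0 / eps + 2.0   # diverges like 1/eps as eps goes to zero

def b(eps):
    return 1.0 / eps - 1.0   # the same type of divergence

# each term blows up on its own, but because both diverge as the same
# function of eps, the 1/eps pieces cancel in the difference
for eps in [1e-2, 1e-4, 1e-6]:
    print(eps, a(eps) - b(eps))   # the difference stays at 3
```

Knowing that both infinities are the same function of epsilon is what licenses the subtraction; without that, the difference could have been anything.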

So, mathematically, infinity is interesting, but not problematic. As far as the math is concerned, we know how to deal with infinity just fine.

But is infinity real? Does it exist? Well, it arguably exists in the mathematical sense, in the sense that you can analyze its properties and talk about it as we just did. But in the scientific sense, infinity does not exist.

That’s because, as we discussed previously, scientifically we can only say that an element of a theory of nature “exists” if it is necessary to describe observations. And since we cannot measure infinity, we do not actually need it to describe what we observe. In science, we can always replace infinity with a very large but finite number. We don’t do this. But we could.

Here is an example that demonstrates how mathematical infinities are not measurable in reality. Suppose you have a laser pointer and you swing it from left to right, and that makes a red dot move on a wall in a far distance. What’s the speed by which the dot moves on the wall?

That depends on how fast you move the laser pointer and how far away the wall is. The farther away the wall, the faster the dot moves with the swing. Indeed, it will eventually move faster than light. This may sound perplexing, but note that the dot is not actually a thing that moves. It’s just an image which creates the illusion of a moving object. What is actually moving is the light from the pointer to the wall and that moves just with the speed of light.

Nevertheless, you can certainly observe the motion of the dot. So, we can ask then, can the dot move infinitely fast, and can we therefore observe something infinite?

It seems that for the dot to move infinitely fast you’d have to place the wall infinitely far away, which you cannot do. But wait. You could instead tilt the wall at an angle to you. The more you tilt it, the faster the dot moves across the surface of the wall as you swing the laser pointer. Indeed, if the wall is parallel to the direction of the laser beam, it seems the dot would be moving infinitely fast across the wall. Mathematically this happens because the value of the tangent function at pi over two is infinity. But does this happen in reality?
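Mathematically, at least, the numbers run away very quickly. With made-up values for the wall distance and swing rate:

```python
import math

d = 10.0       # distance from pointer to wall in meters (illustrative)
omega = 1.0    # swing rate of the pointer in radians per second (illustrative)
c = 3.0e8      # speed of light in meters per second

# the dot sits at x = d * tan(theta), so its speed along the wall is
# d * omega / cos(theta)**2, which blows up as theta approaches pi/2,
# i.e. as the wall becomes parallel to the beam
for theta in [0.0, 1.0, 1.5, 1.57, 1.5707]:
    v = d * omega / math.cos(theta) ** 2
    print(theta, v, v > c)
```

Close enough to pi over two, the computed dot speed exceeds the speed of light, and it grows without bound; but as the next paragraph explains, this divergence is not something you could ever measure.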

In reality, the wall will never be perfectly flat, so there are always some points that will stick out and that will smear out the dot. Also, you could not actually measure that the dot is at exactly the same time on both ends of the wall because you cannot measure times arbitrarily precisely. In practice, the best you can do is to show that the dot moved faster than some finite value.

This conclusion is not specific to the example with the laser pointer, this is generally the case. Whenever you try to measure something infinite, the best you can do in practice is to say it’s larger than something finite that you have measured. But to show that it was really infinite you would have to show the result was larger than anything you could possibly have measured. And there’s no experiment that can show that. So, infinity is not real in the scientific sense.

Nevertheless, physicists use infinity all the time. Take for example the size of the universe. In most contemporary models, the universe is infinitely large. But this is a statement about a mathematical property of these models. The part of the universe that we can actually observe only has a finite size.

And the issue that infinity is not measurable is closely related to the problem with zero. Take for example the mathematical abstraction of a point. Physicists use this all the time when they deal with point particles. A point has zero size. But you would have to measure infinitely precisely to show that you really have something of zero size. So you can only ever show it’s smaller than whatever your measurement precision allows.

Infinity and zero are everywhere in physics. Even in seemingly innocent things like space, or space-time. The moment you write down the mathematics for space, you assume there are no gaps in it. You assume it’s a perfectly smooth continuum, made of infinitely many infinitely small points.

Mathematically, that’s a convenient assumption because it’s easy to work with. And it seems to be working just fine. That’s why most physicists do not worry all that much about it. They just use infinity as a useful mathematical tool.

But maybe using infinity and zero in physics brings in mistakes because these assumptions are not only not scientifically justified, they are not scientifically justifiable. And this may play a role in our understanding of the cosmos or quantum mechanics. This is why some physicists, like George Ellis, Tim Palmer, and Nicolas Gisin have argued that we should be formulating physics without using infinities or infinitely precise numbers.

You can join the chat on this video on

Saturday 12PM EST / 6PM CET
Sunday 2PM EST / 8PM CET

Saturday, November 28, 2020

Magnetic Resonance Imaging

[This is a transcript of the video embedded below. Some of the text may not make sense without the animations in the video.]

Magnetic Resonance Imaging is one of the most widely used imaging methods in medicine. A lot of you have probably had one taken. I have had one too. But how does it work? This is what we will talk about today.


Magnetic Resonance Imaging, or MRI for short, used to be called Nuclear Magnetic Resonance, but it was renamed out of fear that people would think the word “nuclear” has something to do with nuclear decay or radioactivity. But the reason it was called “nuclear magnetic resonance” has nothing to do with radioactivity, it is just that the thing which resonates is the atomic nucleus, or more precisely, the spin of the atomic nucleus.

Nuclear magnetic resonance was discovered in the nineteen-forties by Felix Bloch and Edward Purcell. They received a Nobel Prize for their discovery in nineteen-fifty-two. The first human body scan using this technology was done in New York in nineteen-seventy-seven. Before I tell you how the physics of Magnetic Resonance Imaging works in detail, I first want to give you a simplified summary.

If you put an atomic nucleus into a time-independent magnetic field, it can spin. And if it does spin, it spins with a very specific frequency, called the Larmor frequency, named after Joseph Larmor. This frequency depends on the type of nucleus. Usually the nucleus does not spin, it just sits there. But if you, in addition to the time-independent magnetic field, let an electromagnetic wave pass by the nucleus at just exactly the right resonance frequency, then the nucleus will extract energy from the electromagnetic wave and start spinning.

After the electromagnetic wave has travelled through, the nucleus will slowly stop spinning and release the energy it extracted from the wave, which you can measure. How much energy you measure depends on how many nuclei resonated with the electromagnetic wave. So, you can use the strength of the signal to tell how many nuclei of a particular type were in your sample.

For magnetic resonance imaging in the human body one typically targets hydrogen nuclei, of which there are a lot in water and fat. How bright the image is then tells you basically the amount of fat and water. Though one can also target other nuclei and measure other quantities, so some magnetic resonance images work differently. Magnetic Resonance Imaging is particularly good for examining soft tissue, whereas for a broken bone you’d normally use an X-ray.

In more detail, the physics works as follows. Atomic nuclei are made of neutrons and protons, and the neutrons and protons are each made of three quarks. Quarks have spin one half each and their spins combine to give the neutrons and protons also spin one half. The neutrons and protons then combine their spins to give a total spin to atomic nuclei, which may or may not be zero, depending on the number of neutrons and protons in the nucleus.

If the spin is nonzero, then the atomic nucleus has a magnetic moment, which means it will spin in a magnetic field at a frequency that depends on the composition of the nucleus and the strength of the magnetic field. This is the Larmor frequency that nuclear spin resonance works with. If you have atomic nuclei with spin in a strong magnetic field, then their spins will align with the magnetic field. Suppose we have a constant and homogeneous magnetic field pointing into direction z, then the nuclear spins will preferably also point in direction z. They will not all do that, because there is always some thermal motion. So, some of them will align in the opposite direction, though this is not energetically the most favorable state. Just how many point in each direction depends on the temperature. The net magnetic moment of all the nuclei is then called the magnetization, and it will point in direction z.

In an MRI machine, the z-direction points into the direction of the tube, so usually that’s from head to toe.

Now, if the magnetization for whatever reason does not point in direction z, then it will circle around the z direction, or precess, as the physicists say, in the transverse directions, which I have called x and y. And it will do that with a very specific frequency, which is the previously mentioned Larmor frequency. The Larmor frequency depends on a constant which itself depends on the type of nucleus, and is proportional to the strength of the magnetic field. Keep this in mind because it will become important later.
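To put numbers on this: for hydrogen nuclei, the proportionality constant gamma over two pi is about 42.58 megahertz per tesla, so the Larmor frequencies at typical scanner field strengths come out as follows (a back-of-the-envelope sketch, not from the video):

```python
# Larmor frequency f = (gamma / 2 pi) * B
gamma_over_2pi = 42.58e6   # Hz per tesla, for hydrogen nuclei (protons)

for b in [1.5, 3.0, 7.0]:  # typical clinical and research field strengths in tesla
    print(b, "tesla ->", gamma_over_2pi * b / 1e6, "MHz")
```

These values land in the range of tens to hundreds of megahertz, consistent with the resonance frequencies mentioned later for medical scanners.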

The key feature of magnetic resonance imaging is now that if you have a magnetization that points in direction z because of the homogeneous magnetic field, and you apply an additional, transverse magnetic field that oscillates at the resonance frequency, then the magnetization will turn away from the z axis. You can calculate this with the Bloch equation, named after the same Bloch who discovered nuclear magnetic resonance in the first place. For the following I have just integrated this differential equation. For more about differential equations, please check my earlier video.

What you see here is the magnetization that points in the z-direction, so that’s the direction of the time-independent magnetic field. And now a pulse of an electromagnetic wave comes through. This pulse is not at the resonance frequency. As you can see, it doesn’t do much. And here is a pulse that is at the resonance frequency. As you see, the magnetization spirals down. How far it spirals down depends on how long you apply the transverse magnetic field. Now watch what happens after this. The magnetization slowly returns to its original direction.
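This kind of integration can be sketched in a few lines of code. The following is a minimal illustration, not the full calculation from the video: it uses made-up unit values for the field and the gyromagnetic ratio, leaves out the relaxation terms entirely, and integrates dM/dt = gamma M x B with a simple Euler scheme:

```python
import math

def bloch_flip(omega_rf, b0=1.0, b1=0.05, gamma=1.0):
    """Euler-integrate dM/dt = gamma * (M x B) for a magnetization in a
    constant field b0 along z plus a rotating transverse field of strength
    b1. Returns the z-component of M after a pulse of duration
    pi / (gamma * b1), which at resonance is a full 180-degree flip."""
    mx, my, mz = 0.0, 0.0, 1.0       # start aligned with the z-field
    dt = 0.001
    t = 0.0
    t_end = math.pi / (gamma * b1)
    while t < t_end:
        # circularly polarized drive field rotating in the x-y plane
        bx = b1 * math.cos(omega_rf * t)
        by = -b1 * math.sin(omega_rf * t)
        bz = b0
        # cross product M x B, component by component
        dmx = gamma * (my * bz - mz * by)
        dmy = gamma * (mz * bx - mx * bz)
        dmz = gamma * (mx * by - my * bx)
        mx += dmx * dt
        my += dmy * dt
        mz += dmz * dt
        # the exact dynamics preserves |M|; renormalize to tame Euler drift
        n = math.sqrt(mx * mx + my * my + mz * mz)
        mx, my, mz = mx / n, my / n, mz / n
        t += dt
    return mz

print(bloch_flip(1.0))   # at resonance (omega_rf = gamma * b0): spirals to near -1
print(bloch_flip(3.0))   # off resonance: the magnetization barely moves
```

Just as in the animation, a pulse at the resonance frequency spirals the magnetization down, while an off-resonance pulse hardly does anything.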

Why does this happen? There are two things going on. One is that the nuclear spins interact with their environment, this is called spin-lattice relaxation and brings the z-direction of the magnetization back up. The other thing that happens is that the spins interact with each other, which is called spin-spin relaxation and it brings the transverse magnetization, the one in x and y direction, back to zero.

Each of these processes has a characteristic decay time, usually called T_1 and T_2. For soft tissue, these decay times are typically in the range of ten milliseconds to one second. What you measure in an MRI scan is then roughly speaking the energy that is released in the return of the nuclear spins to the z-direction and the time that takes. Somewhat less roughly speaking, you measure what’s called the free induction decay.

Another way to look at this process of resonance and decay is to look at the curve which the tip of the magnetization vector traces out in three dimensions. I have plotted this here for the resonant case. Again you see it spirals down during the pulse, and then relaxes back into the z-direction.

So, to summarize, for magnetic resonance imaging you have a constant magnetic field in one direction, and then you have a transverse electromagnetic wave, which oscillates at the resonance frequency. For this transverse field, you only use a short pulse which makes the nuclear spins point in the transverse direction. Then they turn back to the z-direction, and you can measure this.

I have left out one important thing, which is how do you manage to get a spatially resolved image and not just a count of all the nuclei. You do this by using a magnetic field with a strength that slightly changes from one place to another. Remember that I pointed out the resonance frequency is proportional to the magnetic field. Because of this, if you use a magnetic field that changes from one place to another, you can selectively target certain nuclei at a particular position. Usually one does that by using a gradient for the magnetic field, so then the images you get are slices through the body.
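A toy calculation shows how a field gradient turns position into frequency. The gradient strength here is an illustrative number, not a value from any particular scanner:

```python
gamma_over_2pi = 42.58e6   # Hz per tesla, for hydrogen nuclei
b0 = 3.0                   # tesla, the homogeneous background field
g = 0.01                   # tesla per meter, an illustrative gradient strength

def resonance_freq(z):
    """Larmor frequency at position z (in meters) along the gradient axis."""
    return gamma_over_2pi * (b0 + g * z)

# nuclei one centimeter apart resonate at measurably different frequencies,
# which is what makes it possible to selectively target a slice
df = resonance_freq(0.01) - resonance_freq(0.0)
print(df, "Hz")   # about 4258 Hz
```

Measuring which frequencies resonate therefore tells you where along the gradient axis the signal came from.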

The magnetic fields used in MRI scanners for medical purposes are incredibly strong, typically a few Tesla. For comparison, that’s about a hundred thousand times stronger than the magnetic field of planet earth, and only a factor two or three below the strength of the magnets used at the Large Hadron Collider.

These strong magnetic fields do not harm the body, you just have to make sure not to take magnetic materials with you into the scanner. The resonance frequencies that fit to these strong magnetic fields are in the range of fifty to three hundred Megahertz. These energies are far too small to break chemical bonds, which is why the electromagnetic waves used in Magnetic Resonance Imaging do not damage cells. There is however a small amount of energy deposited into the tissue by thermal motion, which can warm the tissue, especially at the higher frequency end. So one has to take care not to run these scans for too long.

So if you have an MRI taken, remember that it literally makes your atomic nuclei spin.

Saturday, November 21, 2020

Warp Drive News. Seriously!

[This is a transcript of the video embedded below.]

Like many others, I became interested in physics by reading too much science fiction. Teleportation, levitation, wormholes, time travel, warp drives, and all that, I thought, were super fascinating. But of course the depressing part of science fiction is that you know it’s not real. So, to some extent, I became a physicist to find out which science fiction technologies have a chance of one day becoming real technologies. Today I want to talk about warp drives because I think that, on the spectrum from fiction to science, warp drives are on the more scientific end. And just a few weeks ago, a new paper appeared about warp drives that puts the idea on a much more solid basis.


But first of all, what is a warp drive? In the science fiction literature, a warp drive is a technology that allows you to travel faster than the speed of light or “superluminally” by “warping” or deforming space-time. The idea is that by warping space-time, you can beat the speed of light barrier. This is not entirely crazy, for the following reason.

Einstein’s theory of general relativity says you cannot accelerate objects from below to above the speed of light because that would take an infinite amount of energy. However, this restriction applies to objects in space-time, not to space-time itself. Space-time can bend, expand, or warp at any speed. Indeed, physicists think that the universe expanded faster than the speed of light in its very early phase. General Relativity does not forbid this.

There are two points I want to highlight here: First, it is a really common misunderstanding, but Einstein’s theories of special and general relativity do NOT forbid faster-than-light motion. You can very well have objects in these theories that move faster than the speed of light. Neither does this faster-than-light travel necessarily lead to causality paradoxes. I explained this in an earlier video. Instead, the problem is that, according to Einstein, you cannot accelerate from below to above the speed of light. So the problem is really crossing the speed of light barrier, not being above it.

The second point I want to emphasize is that the term “warp drive” refers to a propulsion system that relies on the warping of space-time, but just because you are using a warp drive does not mean you have to go faster than light. You can also have slower-than-light warp drives. I know that sounds somewhat disappointing, but I think it would be pretty cool to move around by warping spacetime at any speed.

Warp drives were a fairly vague idea until 1994, when Miguel Alcubierre found a way to make them work in General Relativity. His idea is now called the Alcubierre Drive. The explanation that you usually get for how the Alcubierre Drive works is that you contract space-time in front of you and expand it behind you, which moves you forward.

That didn’t make sense to you? Just among us, it never made sense to me either. Because why would this allow you to break the speed of light barrier? Indeed, if you look at Alcubierre’s mathematics, it does not explain how this is supposed to work. Instead, his equations say that this warp drive requires large amounts of negative energy.

This is bad. It’s bad because, well, there isn’t any such thing as negative energy. And even if you had this negative energy that would not explain how you break the speed of light barrier. So how does it work? A few weeks ago, someone sent me a paper that beautifully sorts out the confusion surrounding warp drives.

To understand my problem with the Alcubierre Drive, I have to tell you briefly how General Relativity works. General Relativity works by solving Einstein’s field equations. Here they are. I know this looks somewhat intimidating, but the overall structure is fairly easy to understand. It helps if you try to ignore all these small Greek indices, because they really just say that there is an equation for each combination of directions in space-time. More important is that on the left side you have these R’s. The R’s quantify the curvature of space-time. And on the right side you have T. T is called the stress-energy tensor and it collects all kinds of energy densities and mass densities. That includes pressure and momentum flux and so on. Einstein’s equations then tell you that the distribution of different types of energy determines the curvature, and the curvature in return determines how the distribution of the stress-energy changes.
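For readers who want to see the equations shown in the video written out, here they are in their common textbook form (without the cosmological constant term):

```latex
R_{\mu\nu} - \frac{1}{2}\, R\, g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

The left side is built from the curvature of space-time (the Ricci tensor and Ricci scalar, together with the metric), and the right side is the stress-energy tensor that collects the energy and mass densities.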

The way you normally solve these equations is to use a distribution of energies and masses at some initial time. Then you can calculate what the curvature is at that initial time, and you can calculate how the energies and masses will move around and how the curvature changes with that.

So this is what physicists usually mean by a solution of General Relativity. It is a solution for a distribution of mass and energy.

But. You can instead just take any space-time, put it into the left side of Einstein’s equations, and then the equations will tell you what the distribution of mass and energy would have to be to create this space-time.

On a purely technical level, these space-times will then indeed be “solutions” to the equations for whatever is the stress energy tensor you get. The problem is that in this case, the energy distribution which is required to get a particular space-time is in general entirely unphysical.

And that’s the problem with the Alcubierre Drive. It is a solution of General Relativity, but in and of itself, this is a completely meaningless statement. Any space-time will solve the equations of General Relativity, provided you assume that you have a suitable distribution of masses and energies to create it. The real question is therefore not whether a space-time solves Einstein’s equations, but whether the distribution of mass and energy required to make it a solution to the equations is physically reasonable.

And for the Alcubierre drive the answer is multiple no’s. First, as I already said, it requires negative energy. Second, it requires a huge amount of that. Third, the energy is not conserved. Instead, what you actually do when you write down the Alcubierre space-time, is that you just assume you have something that accelerates it beyond the speed of light barrier. That it’s beyond the barrier is why you need negative energies. And that it accelerates is why you need to feed energy into the system. Please check the info below the video for a technical comment about just what I mean by “energy conservation” here.

Let me then get to the new paper. The new paper is titled “Introducing Physical Warp Drives” and was written by Alexey Bobrick and Gianni Martire. I have to warn you that this paper has not yet been peer reviewed. But I have read it and I am pretty confident it will make it through peer review.

In this paper, Bobrick and Martire describe the geometry of a general warp-drive space-time. The warp-drive geometry is basically a bubble. It has an inside region, which they call the “passenger area”. In the passenger area, space-time is flat, so there are no gravitational forces. Then the warp drive has a wall of some sort of material that surrounds the passenger area. And then it has an outside region. This outside region has the gravitational field of the warp drive itself, but the gravitational field falls off and in the far distance one has normal, flat space-time. This is important so you can embed this solution into our actual universe.

What makes this fairly general construction a warp drive is that the passage of time inside of the passenger area can be different from that outside of it. That’s what you need if you have normal objects, like your warp drive passengers, and want to move them faster than the speed of light. You cannot break the speed of light barrier for the passengers themselves relative to space-time. So instead, you keep them moving normally in the bubble, but then you move the bubble itself superluminally.

As I explained earlier, the relevant question is then, what does the wall of the passenger area have to be made of? Is this a physically possible distribution of mass and energy? Bobrick and Martire explain that if you want superluminal motion, you need negative energy densities. If you want acceleration, you need to feed energy and momentum into the system. And the only reason the Alcubierre Drive moves faster than the speed of light is that one simply assumed it does. Suddenly it all makes sense!

I really like this new paper because to me it has really demystified warp drives. Now, you may find this somewhat of a downer because really it says that we still do not know how to accelerate to superluminal speeds. But I think this is a big step forward because now we have a much better mathematical basis to study warp drives.

For example, once you know what the warped space-time looks like, the question comes down to how much energy you need to achieve a certain acceleration. Bobrick and Martire show that for the Alcubierre drive you can decrease the amount of energy by seating passengers next to each other instead of behind each other, because the amount of energy required depends on the shape of the bubble. The flatter it is in the direction of travel, the less energy you need. For other warp drives, other geometries may work better. This is the kind of question you can really only address if you have the mathematics in place.

Another reason I find this exciting is that, while it may look now like you can’t do superluminal warp drives, this is only correct if General Relativity is correct. And maybe it is not. Astrophysicists have introduced dark matter and dark energy to explain what they observe, but it is also possible that General Relativity is ultimately not the correct theory for space-time. What does this mean for warp drives? We don’t know. But now we know we have the mathematics to study this question.

So, I think this is a really neat paper, but it also shows that research is a double-edged sword. Sometimes, if you look closer at a really exciting idea, it turns out to be not so exciting. And maybe you’d rather not have known. But I think the only way to make progress is to not be afraid of learning more. 

Note: This paper has not appeared yet. I will post a link here once I have a reference.




You can join the chat on this video on Saturday 11/21 at 12PM EST / 6PM CET or on Sunday 11/22 at 2PM EST / 8PM CET.

We will also have a chat on Black Hole Information loss on Tuesday 11/24 at 8PM EST / 2AM CET and on Wednesday 11/25 at 2PM EST / 8PM CET.