Saturday, March 26, 2022

These Experiments Could Prove Einstein Wrong

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


Einstein’s theory of general relativity has made countless correct predictions. And yet physicists are constantly trying to prove it wrong. Why? What would be the point of proving Einstein wrong? And how could it be done? That’s what we’ll talk about today. First of all, I have to clarify that when I say “proving Einstein wrong”, I mean proving Einstein’s theory of general relativity wrong. Einstein himself was actually wrong about his own theory, and not only once.

For example, he originally thought the universe was static, that it remained at a constant size. He changed his mind after learning of Hubble’s discovery that the light of distant galaxies is systematically shifted to the red, which is evidence that the universe expands. Einstein also at some point came to think that gravitational waves don’t exist, and argued that black holes aren’t physically possible. We have meanwhile found evidence for both.

I’m not telling you this to belittle Einstein. I’m telling you this because it’s such an amazing example of how powerful mathematics is. Once you have formulated the mathematics correctly, it tells you how nature works, and that may not be how even its inventor thought it would work. It also tells us that it can take a long time to really understand a theory.

General Relativity is now more than a century old, and so far its predictions have all held up. Light deflection by the sun, redshift in the gravitational field, expansion of the universe, gravitational waves, black holes, they were right, right, right, and right again, to incredibly high levels of precision. But still, most physicists are pretty convinced Einstein’s theory is wrong and that’s why they constantly try to find evidence that it doesn’t work after all.

The most important reason physicists think that general relativity must be wrong is that it doesn’t work together with quantum mechanics. General relativity is not a quantum theory, it’s instead a “classical” theory as physicists say. It doesn’t know anything about the Heisenberg uncertainty principle or about particles that can be in two places at the same time and that kind of thing. And this means we simply don’t have a theory of gravity for quantum particles. Even though all matter is made of quantum particles.

Let that sink in for a moment. We don’t know how matter manages to gravitate even though the fact that matter *does* gravitate is the most basic observation about physics that we make in our daily life.

This is why most physicists currently believe that general relativity has a quantum version, often called “quantum gravity”, just that no one has yet managed to write down the equations for it. Another reason that physicists think Einstein’s theory can’t be entirely correct is that it predicts the existence of singularities, inside black holes and at the big bang. At those singularities, the theory breaks down, so general relativity basically predicts its own demise.

Okay, so we have some reason to think general relativity is wrong, but how can we find out whether that’s indeed the case? The best way to do this is by testing the assumptions that Einstein based his theory on. The most important assumption is that the speed of light is the same in all directions and everywhere in the universe. To be precise, that refers to the speed of electromagnetic radiation at all frequencies, not just in the range of visible light, and it’s the speed in vacuum, usually denoted c. The speed of light in a medium depends on the rest frame of the medium.

According to Einstein, the speed of light in vacuum doesn’t depend on the energy of the light or its polarization. If the speed depends on the energy, that’s called dispersion, and if it depends on the polarization that’s called birefringence. We know that both of these effects exist in media. If we also saw them in vacuum, that would indeed mean Einstein was wrong.

The currently best experiments for this come from analyzing electromagnetic radiation from gamma ray bursts. This is mostly because gamma ray bursts are bright, short, and can be far away, often several billion light years. Moreover, they emit electromagnetic radiation up to really high energies. Since one knows that the light must have been emitted in the burst at about the same time regardless of its energy, one can then test whether it also arrives at the same time. If it doesn’t, that would be evidence that the speed of light depends on the energy.

Since the gamma ray bursts are so far away, even tiny differences in the speed of light can add up to a noticeable delay. The most recent data on this were published just a few months ago by a group from Oxford and Stockholm. So far there is no indication that Einstein was wrong. You already knew this of course because otherwise you’d have seen the headlines! But that’s one way it could happen.
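A quick back-of-the-envelope sketch of how tiny differences can add up over cosmological distances. Many quantum gravity models parametrize a possible energy dependence of the speed of light as v(E) ≈ c(1 − E/E_QG), where E_QG is some large energy scale, often taken near the Planck energy; that parametrization and the particular photon energy and distance below are illustrative assumptions, not values from the text.

```python
# Rough estimate of the arrival delay of a high-energy photon relative to a
# low-energy one, using the linear parametrization v(E) ~ c * (1 - E/E_QG).
# E_QG near the Planck energy is a model assumption.

E_PLANCK_GEV = 1.22e19      # Planck energy in GeV
SECONDS_PER_YEAR = 3.156e7

def arrival_delay(photon_energy_gev, distance_ly, e_qg_gev=E_PLANCK_GEV):
    """Extra travel time (seconds), to first order in E/E_QG."""
    travel_time_s = distance_ly * SECONDS_PER_YEAR  # light takes 1 yr per ly
    return (photon_energy_gev / e_qg_gev) * travel_time_s

# A 10 GeV photon from a gamma ray burst 5 billion light years away:
delay = arrival_delay(10.0, 5e9)
print(f"delay ~ {delay:.3f} s")
```

Even with the modification suppressed by the Planck energy, the delay comes out at a tenth of a second or so, which is why distant gamma ray bursts are such sensitive probes.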

In general relativity it further turns out that gravitational waves also move with the speed of light. This is quite difficult to test because it requires one to measure both gravitational waves and light from the same source. Even if you manage to do that, it’s difficult to tell whether they were really emitted simultaneously. There is really only one measurement of this at the moment, which is a gravitational wave event from August 2017.

This is believed to have been a merger of two neutron stars, and it was accompanied by an electromagnetic signal. The electromagnetic signal was detected by the Fermi and INTEGRAL spacecraft beginning at about 1.7 seconds after the gravitational wave event began.

This is compatible with what Einstein predicts. However, it is pretty much impossible to prove Einstein wrong this way. Because if the two signals do not arrive together you don’t know whether that’s because one arrived 5 years earlier, or will arrive 100 years later, or maybe because you just didn’t measure it because it was too weak. Indeed, so far none of the other observed gravitational wave events came with an electromagnetic counterpart, and no one’s claimed that this means Einstein was wrong.

So that’s not a very promising way to prove Einstein wrong. But gravitational waves offer another opportunity to do that. In Einstein’s theory of general relativity, the black hole horizon is not a physical thing. It’s just the location of a surface that, once you’re inside, you can’t get out. It’s really just a name we give to this boundary much like city limits. But if Einstein’s theory is not fundamentally correct, then the black hole horizon could have physical properties, for example created by quantum effects in that yet-to-be-found theory of quantum gravity.

If that were so, then the gravitational waves emitted from black hole mergers would look different from what Einstein predicts. Because if the horizon is a physical thing, then it can ring and that creates echoes, not of sound waves, but of gravitational waves. In the gravitational wave data, this would look like a regular repetition of the original signal with decreasing amplitude.

There are a number of people who have looked for those. Niayesh Afshordi and his group at Perimeter Institute, some people from the LIGO collaboration, and a few others. They actually did find a signal that looked like an echo in the previously mentioned gravitational wave event from August 2017. Depending on whom you ask, the statistical significance is between 2 and 4.2 sigma.

However, after analyzing the data some more, astrophysicists seem to have mostly agreed that the alleged echo didn’t have anything to do with the black hole horizon itself. Remember, this was a neutron star merger. People from Luciano Rezzolla’s group have argued that what happened is that the collapse to a black hole was somewhat delayed. This looks like an echo, but only once, and is also why the electromagnetic signal came 1.7 seconds after the gravitational wave signal had started.

In a new paper which just appeared a few weeks ago, Niayesh’s group claims again they’ve found a signal of a black hole echo. They just can’t give up trying to prove Einstein wrong. This time they say they found it in a different gravitational wave event and at 2.6 sigma, so that’s about a 1 in 200 chance of being a coincidence. Personally I think it’s very implausible that we will find evidence that Einstein was wrong in black hole signals, but it’s worth looking for.
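In case you want to check the sigma-to-odds conversion yourself, it’s just the tail probability of a normal distribution. Here’s a minimal sketch, assuming the one-sided convention (which is what makes 2.6 sigma come out near 1 in 200):

```python
# Convert a significance in sigma to a rough chance of coincidence:
# the one-sided tail probability of a standard normal distribution.
import math

def one_sided_p(sigma):
    """P(fluctuation >= sigma) for a standard normal distribution."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

p = one_sided_p(2.6)
print(f"p = {p:.4f}, i.e. about 1 in {round(1 / p)}")
```

For comparison, the 5 sigma threshold that particle physicists use for a "discovery" corresponds to about 1 in 3.5 million.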

Another way physicists try to find ways to prove general relativity wrong is by showing that it doesn’t correctly work together with quantum mechanics. The major challenge for doing this is that in the experiments we have been able to do so far, either we can measure quantum effects, but then the masses of the objects are so small that we can’t measure their gravitational field. Or we can measure the gravitational field, but then the objects are so massive that we can’t measure their quantum effects.

So one of the ways to prove Einstein wrong is to bring more massive objects into quantum superpositions and then measure their gravitational field. If the gravitational field is also in a quantum superposition, then that means general relativity is out and Einstein wrong. This avenue is pursued for example by the group of Markus Aspelmeyer in Vienna.

A related idea is to show that gravity can cause entanglement. Entanglement is a quantum effect and if it can be caused by gravity, then this means gravity must have quantum properties too, which it can’t in Einstein’s theory. So that too would prove Einstein wrong. This is a good idea in principle, but I suspect that in practice it will be very, very difficult to show that the entanglement didn’t come about in other ways.

Another rather straightforward test is to check whether the one-over-R-squared law holds at very short distances. Yes, that’s known as Newton’s law of gravity, but we also have it in general relativity. Whether this remains valid at short distances can be directly tested with high precision measurements. These are done for example by the group of Eric Adelberger at the University of Washington.

This image shows the key component of their measuring device. These two parts are rotated against each other while the gravitational attraction between them is being measured. This creates a periodically changing force, which is a really clever way to filter out noise. Their most precise measurement yet was published in 2020 and confirms that the one-over-R-squared law is correct all the way down to 57 micrometers. So again, they didn’t find anything out of the ordinary so far, but this is another way Einstein could turn out to be wrong.
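Such short-distance tests are commonly phrased as limits on a Yukawa-type correction to the Newtonian potential. Here’s a sketch of that standard parametrization; the particular values of the strength alpha and the range lambda below are illustrative assumptions, not the experimental limits:

```python
# Short-distance tests of Newton's law are usually expressed as limits on a
# Yukawa-type modification:
#   V(r) = -(G m1 m2 / r) * (1 + alpha * exp(-r / lam))
# alpha = 0 recovers pure 1/r. The alpha and lam values here are made up
# for illustration.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def yukawa_potential(r, m1, m2, alpha=0.0, lam=1e-4):
    return -(G * m1 * m2 / r) * (1.0 + alpha * math.exp(-r / lam))

# Fractional deviation from pure 1/r at 57 micrometers, for a correction
# of gravitational strength (alpha = 1) with a 100 micrometer range:
r = 57e-6
newton = yukawa_potential(r, 1.0, 1.0)
modified = yukawa_potential(r, 1.0, 1.0, alpha=1.0, lam=1e-4)
deviation = modified / newton - 1.0
print(f"fractional deviation: {deviation:.3f}")
```

The point of the exercise: at distances comparable to the assumed range lambda, such a correction would be an order-one effect, which is why measuring down to tens of micrometers constrains these models so strongly.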

Finally, one can test a key assumption underlying general relativity, which is the equivalence principle. The equivalence principle says, loosely speaking, that all objects should fall the same and, most importantly, that how fast they fall doesn’t depend on their mass. This is much easier to measure than the gravitational field of particles because when you test the equivalence principle you are looking for a difference.

You can make your life even easier by looking for a difference between two objects that are very similar except for their mass, like two different isotopes of the same atom. This has been done most recently by a group in Stanford, California who looked for a difference in how two isotopes of Rubidium fall in the gravitational field of Earth. Again you already know they didn’t find any violation of the equivalence principle because otherwise you’d have heard of it. But this too is a way that Einstein could turn out to be wrong.

What would proving Einstein wrong be good for? Well, first of all it would give us experimental guidance to develop a theory of quantum gravity, and that could help us understand the quantum properties of space and time, as well as what’s inside black holes or what happened at the big bang.

Many physicists also hope that it will shed light on other puzzles, such as dark matter and dark energy, or explain some nagging anomalous observations in cosmology, like the presence of too many large structures in the universe, which we talked about in an earlier video, or that different measurements of the Hubble rate don’t give the same results.

Personally I think the most promising way to prove Einstein wrong is the approach pursued by the group of Aspelmeyer. And if they succeed they’ll almost certainly win a Nobel Prize. But it’s quite possible that in the end the breakthrough will happen in a way that no one saw coming.

Saturday, March 19, 2022

These tiny robots could work inside your body

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


When I grew up, one of my favorite movies was “Innerspace”, in which a man is miniaturized and, by accident, ends up inside somebody else’s body. We’re not going to shrink people to make them fit into blood vessels any time soon. But injecting tiny remote controlled robots into the human body isn’t all that far-fetched. What tiny robots are scientists working on? How far along is the technology? And, aside from leaving Rohin unemployed, what could they be good for? That’s what we’ll talk about today.

First things first, “tiny robots” isn’t a technical term. Scientists like to be more precise and talk about microbots or nanobots, for robots of the size of micrometers or nanometers. Just for reference, the width of a human hair is about a tenth of a millimeter. A micrometer is one hundred times smaller than a hair width. And a nanometer is one hundred thousand times smaller. So, yeah, that’s really really tiny. But some “tiny” robots are up to a few millimeters in size. I guess we should call those millibots. And then there are xenobots. We’ll talk about those later.

You may think the problem with tiny robots is that they’re tiny. But actually that’s not the problem. Modern technology has been extremely successful at miniaturization, and I’m not talking about cellphones. Take a look at this image of a few gears next to a dust mite. That’s what I am talking about. We already have the technology to custom-build tiny things.

No, the problem with tiny robots is a different one. It’s that, regardless of whether the prefix is nano, micro, or xeno, at such small scales, different laws of physics become relevant. You can’t just take a human sized robot and scale it down, that makes no sense.

For tiny robots, forces like friction and surface tension become vastly more important than they are for us. That’s why insects can move in ways that humans can’t, like walking on water, or walking upside-down on the ceiling, or like, flying. Tiny robots can indeed fly entirely without wings. They just float on air like dust grains. Tiny robots need different ways of moving around, depending on their size and the medium they’re supposed to work in, or on.

There is another reason why tiny robots are more than just small versions of big robots, it’s that you need them in large numbers and they must be able to coordinate their tasks. Imagine for example you want to deliver drugs to cancer cells with robots of a size comparable to that of a cell. Well, then you need a lot of those robots just because there’s lots of cells in a tumor. A tumor of one cubic centimeter is typically made of some hundred million cells. That means, the production of these robots must be easy, cheap and fast.
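The hundred-million figure follows from simple arithmetic: divide the tumor volume by a typical cell volume. The 20-micrometer cell size below is a ballpark assumption for illustration:

```python
# Rough estimate of how many cells are in a 1 cm^3 tumor, and hence how
# many cell-sized robots a drug-delivery job might call for.
# Assumes a ~20 micrometer cell, approximated as a cube.
cell_diameter_cm = 20e-4                 # 20 micrometers in cm
cell_volume_cm3 = cell_diameter_cm ** 3  # crude cube approximation
tumor_volume_cm3 = 1.0

n_cells = tumor_volume_cm3 / cell_volume_cm3
print(f"~{n_cells:.1e} cells")
```

That comes out at roughly 10^8 cells, in line with the "some hundred million" quoted above.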

But enough talk about the problems, let’s look at some robots that engineers have built.

Tiny robots often rely on flexible materials that can change shape. Here is an example of a little robot that’s a few millimeters long. It was developed by a group of researchers from the Max Planck Institute in Stuttgart, Germany. They published their results in Nature magazine in 2018.

This tiny robot is basically an elastic piece of silicone with some magnetic material added. Because of the magnetic material, one can use a magnetic field to bend and move it, in quite a variety of ways for which the group has taken inspiration from worms, caterpillars and jellyfish. The magnetic field they used has a strength of typically 10 millitesla. That’s strong, but not super strong, a few hundred times weaker than what you need for an MRI.

A few millimeters are still pretty large if you want to move around the human body. Here is another recent example of a robot that’s less than a tenth of a millimeter. It was developed by researchers at Cornell University and the University of Pennsylvania. This tiny robot basically consists of a body that has a solar cell and four legs. If one illuminates the solar cell with laser pulses, then the legs move.

These robots can easily be mass produced using the same techniques that are currently used to mass produce microchips. The team estimates that for one US dollar, you could produce about a thousand of such robots, each equipped with a clock, sensors, and a programmable controller. To give you a sense of scale, these robots are so small that they can be sucked up and injected with a syringe needle, which I’m sure will delight conspiracy theorists and anti-vaxxers. The researchers hope that in the future their robots can operate with sunlight as power source.

That looks neat. Except, well, there isn’t a lot of sunlight in the human body. Indeed, how to get microbots in the desired place to do their job is maybe the biggest challenge at the moment. There are basically two ways to get it done. Either you use some kind of external control. That could be using magnetic or electric fields or ultrasound or lasers. Or you use some kind of self-propulsion mechanism.

An example of a self-propelled robot comes from two Japanese researchers who published a paper in 2019. Their tiny robot is basically a little tube with an anode and a cathode, and those act on organic molecules which you would also find inside the human body. The reaction products are expelled in one direction, so the robot moves forward because momentum is conserved. It’s all physics!

Another method to move around a tiny robot is by pushing it with bacteria that themselves can be controlled with magnetic fields. Yes, there are bacteria that respond to magnetic fields; they’re called magnetotactic bacteria. This idea was put forward in 2006 by a group from the NanoRobotics Laboratory in Quebec, Canada. The magnetic field they used is half a Gauss, roughly the strength of Earth’s magnetic field and a couple of orders of magnitude weaker than the field used to bend the flexible millibot. Again that’s not super strong. But this idea of bacteria-aided propulsion hasn’t been much further explored.

The major motivation for tiny robots is that one day they can be used to perform tasks in blood vessels or inside human tissue, to make surgeries less invasive or to avoid them entirely. They might also be used to deliver drugs to a specific target to make a treatment more effective and protect the surrounding tissue. Besides this, they could passively collect data about their environment with very exact time and location markers. But these aren’t the only things tiny robots could be good for.

A team of researchers from the Czech Republic has for example developed a microbot that can capture and destroy microplastics. This robot is star-shaped and only about 4 micrometers in size, so that’s smaller than all the previous ones we have looked at. It is also powered by sunlight. In a paper published a few weeks ago the researchers show how their tiny robot gets stuck onto microplastic bits as soon as it touches them. The robot then accelerates the degradation of the plastic.

Their test result numbers are not very impressive: In lab experiments, the robots reduced the weight of the plastics by only 3% in one week. Then again, this is just a proof of concept. Maybe one day we could release trillions of these robots at sea or wherever you want to clean and let them do the work for you.

Except, well, you might just swap microplastic pollution for microbot pollution. This is why Michael Sailor from the University of California San Diego has proposed to drill nanometer sized holes into tiny robots to make them less durable. They would then easily degrade within days or months, depending on the conditions, and decay into nontoxic silicon compounds.

Some researchers are thinking about robots differently. They combine biological materials, like parts of cells, with synthetic materials. These hybrid robots aren’t just promising because they allow researchers to use propulsion mechanisms that evolution has developed, but also because they can remedy another problem. It’s that robots in the human body may be attacked by the immune system. Hybrid robots which resemble cells or parts of cells can greatly alleviate this issue.

An example of a tiny hybrid robot that can swim through blood was developed by researchers at the University of California San Diego in 2018. Their goal was to use the robot to remove harmful bacteria and the toxins which the bacteria produce. Their robots have a size of about one micrometer, are powered by ultrasound, and can travel up to 35 micrometers per second.

They are made of gold nanowires coated with membranes from red blood cells and platelets. The gold nanowire responds to ultrasound, which allows the researchers to control where the robots go. The coating maintains much of the function of those cells, and therefore can absorb and neutralize toxins produced by bacteria.

The researchers have tested their tiny robots on blood samples contaminated with bacteria. After five minutes, the treated blood samples contained only a third as many bacteria and toxins as untreated samples. Again this isn’t technology that will become common use any time soon, but it’s a promising proof of principle.

Finally, let’s talk about the xenobots. Xenobots are robots the size of a few tenths of a millimeter made from frog embryo cells. The cells come from embryos of the frog Xenopus laevis, hence the name xenobots. This is a fairly new idea; it’s only been around since 2020.

The way it’s done is by using two types of cells: skin cells to create a barrier and heart muscle cells, which provide movement when they contract. Depending on how they are combined, the xenobot can perform different functions. This could be cleaning a medium of a certain substance, like microplastics or certain chemicals, or delivering drugs to a specific location, or clearing an obstructed artery.

A few weeks ago, a new paper appeared in the Proceedings of the National Academy of Sciences in which the authors presented for the first time xenobots capable of self-replication. The xenobots replicate by collecting cells and assembling them to new xenobots. Not really how we are used to self-replication from biology, but self-replication nevertheless.

But the production of these xenobots currently requires a lot of craftsmanship. It’s done by hand with a lot of cutting and twisting under the microscope, like an especially tiny form of microsurgery. This is clearly totally impractical for mass production, so until a cheap and easy technology has been developed to create xenobots, they’re not going to be of much use.

As you can see, tiny robots are a super-active research area at the moment, and the potential of this new technology is amazing. We will certainly come back with updates in the future, so don’t forget to subscribe.

Saturday, March 12, 2022

Is light pollution a real problem?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


For much of human history, light meant safety. Electric lights are today one of the hallmarks of civilization. But in recent years, activists have begun to complain about “light pollution” caused by too much or the wrong type of artificial light. In the French city of Rennes some young guys are running around at night turning off shop lights.

Is light pollution a real problem or just something that first world people complain about because they have nothing else to complain about? What could we do about it in any case? And how much of a problem are Elon Musk’s Starlink satellites really? That’s what we’ll talk about today.

This is a photo of the night sky in Patagonia. You can clearly see the Milky Way. If you took a photo with the same exposure at night in Barcelona, it would look like this. If you adjust the exposure so that it matches what we see with our own eyes, you’d see this. Artificial lights make the sky glow, you can’t see the Milky Way, and you barely see any stars.

And that’s what the night sky looks like now above most cities in the developed world. Many who live there have never seen the Milky Way. During a major blackout in Los Angeles in 1994, the police received numerous calls because people worried about this weird cloud in the night sky.

Alright, you may say, but most people have seen the Milky Way at some point. Yet few have seen the Zodiacal light or even know what it is. The Zodiacal light comes from sunlight that reflects off dust which floats around in the Solar System. The dust is probably mostly from Mars. The Zodiacal light, which you see in this photo, is sometimes called the “false dawn” because it can be mistaken for an upcoming sunrise. Fun fact: Brian May, lead guitarist of the band Queen, did his PhD in astronomy on the Zodiacal light.

Light pollution is now incredibly common in the developed world. According to a paper from the Light Pollution Science and Technology Institute in Italy, as of 2020, 60% of the population of Europe can’t see the Milky Way. In the USA that percentage is now at 78%. Light pollution is also the reason that astronomical observatories now must be located in isolated deserts, like the ones in northern Chile, or on islands in the middle of the ocean, like Hawaii or the Canary Islands.

Okay, you may say, too bad for astronomers, but who really cares. Most of us can live very well without seeing the Milky Way. Indeed, but light pollution doesn’t just obscure our view of the night sky. Too much light, or the wrong kind of light, affects our circadian rhythm, our inner clock that regulates biological functions.

The circadian rhythm strongly relies on the input we receive from a certain type of photoreceptor in our eyes. That’s not the cones and rods, which are responsible for day and night vision, respectively. It’s a third type of photoreceptor that was only discovered in the 1990s, called the intrinsically photosensitive retinal ganglion cell (ipRGC). It’s not involved in vision itself. Instead, it allows the body to use light as input to set the clock for the circadian rhythm. It is particularly sensitive to light at the blue end of the spectrum.

And if you mess with the input on those photoreceptors, the whole system gets messed up. According to a 2016 report from the American Medical Association scientists have found links between altered circadian rhythms and insomnia, depression, dementia, diabetes, heart disease and even cancer.

Now, that scientists have found links between this and that doesn’t always mean much, but some of those studies leave little doubt that light pollution has a major impact on our quality of life, and our health. For example, in a 2016 paper, researchers from Stanford University and NASA interviewed over 15 thousand Americans about the amount and quality of their sleep. Then they looked for correlations with nighttime light levels, using the GPS coordinates of the respondent’s homes and light data from satellites.

They found that living in areas with greater outdoor lights at night was associated with delayed bedtime and wake-up time, and increased daytime sleepiness. It also increased the dissatisfaction with sleep quantity and quality, all with a p-value smaller than 0.0001. That still wouldn’t count as a discovery in particle physics, but for medicine that’s an amazingly strong correlation.

In 2017, another American team of researchers published the results from a study that tracked the health of 100 thousand nurses for 22 years. They found that those who lived in places with more light at night were at an increased risk to develop breast cancer, even after accounting for individual and area risk. It was not a tremendously big increase, just 5% at 95% confidence level, but it wasn’t the first such finding.

A 2008 study in Israel had also found a link between light pollution and breast cancer. And a 2019 study in Spain, which followed several thousand people over 5 years, found that exposure to light pollution in the blue part of the spectrum specifically was associated with an increased risk of breast (+47%) and prostate cancer (+100%). It’s not that the light itself causes cancer, but rather, scientists think it’s that light pollution, by affecting the circadian rhythm, alters hormone levels, and the link between hormone levels and cancer risk is well established.

So too much light at night isn’t good for us, but it’s even worse for animals, especially birds. According to the United States Fish and Wildlife Service, light pollution kills between 5 and 50 million birds each year in North America alone. The major problem is that the glow above cities makes the birds think that sunrise is near, so they don’t get enough sleep, are chronically exhausted, and fall victim to predators or illness more easily.

Other animals that suffer from light pollution are sea turtles. When the baby turtles hatch on illuminated beaches, they walk towards the lights rather than towards the sea. There are also animals which hunt at night, like owls, which starve simply because their prey sees them coming.

And light pollution is constantly getting worse. A 2017 paper from researchers in Europe and America found that the area lighted at night increases by 2.2% per year worldwide, and in places where light pollution was already present its brightness increases also by 2.2% per year.
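To get a feeling for what 2.2% per year means when it compounds, here is a quick sketch of the doubling time and the growth over a few decades:

```python
# What a 2.2% annual increase in lighted area and brightness compounds to.
import math

rate = 0.022
doubling_time = math.log(2) / math.log(1 + rate)  # years to double
growth_30yr = (1 + rate) ** 30                    # growth factor over 30 years

print(f"doubling time ~ {doubling_time:.0f} years")
print(f"factor after 30 years ~ {growth_30yr:.2f}")
```

In other words, at the measured rate the night sky roughly doubles in artificial brightness about every three decades.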

Okay, so we have seen that light pollution negatively affects health and quality of life, it isn’t good for animals either, and it’s getting worse. Now, the world certainly has larger problems than that, but on balance light pollution is fairly easy to fix with a little advance planning. That basically means cities must pay attention to which lights they install.

In many places the old, yellowish light bulbs are now being replaced with modern white LEDs. They are cheap, energy efficient, and last a long time. Their light emission is also much more focused, so they can be directed downward, which reduces the upward glow. That’s better for birds and star lovers. So far, so good.

But unfortunately, the new LEDs also emit much more light in the blue spectrum. And according to the report from the American Medical Association which I mentioned previously, these blue-rich LED street lights appear to be five times more disruptive to our sleep cycle than the old street lights.

It would be better to use LEDs that emit more on the yellow end, such as narrow band amber LED (NAB-LED). Their emission spectrum is also, as the name says, fairly narrow, so it can be more easily filtered out of images which makes astronomers happy. At present these LEDs are much more expensive than the blue-white ones, so this is not a common choice. As so often, we’ll have to decide whether the improvement in life quality is worth the money.

Now what’s with Elon Musk’s Starlink satellites? While those also cause light pollution in some sense, they are really an entirely different problem. Musk’s company now has about 1900 of those satellites in orbit. They are aiming for a licence to make that as many as 30,000. These satellites provide worldwide internet access which covers literally the entire globe. But astronomers hate them.

The Starlink satellites are fairly small but they fly low, at an altitude of “just” 340 kilometers. Like the moon, they are visible because they reflect sunlight. And because they fly so low they are brighter than the satellites higher up. The biggest problem with the Starlink satellites however is that they can autonomously change their orbits, so astronomers can’t schedule observations to avoid them.

For example, in November 2019, astronomers at the Cerro Tololo Inter-American Observatory in Chile were capturing an image of the night sky. Or at least they were trying. That’s what came out. The satellites are so bright that they even show up in the observatory’s webcam!

Elon Musk isn’t the only one who wants to occupy low orbits, he’s just the first. There’s also Amazon’s Project Kuiper, Samsung, OneWeb, India Astrotech, and two dozen more companies. In the long run we might end up with as many as 100,000 satellites. Connie Walker from the International Astronomical Union said in a recent interview with the BBC: “By the end of a decade, more than 5,000 satellites will be above the horizon at any given time at a typical dark-sky observatory location. A few hundred to several thousand of these satellites will be illuminated by the Sun.”

In 2020, researchers from the European Southern Observatory published a paper in which they estimated the impact of these new satellite fleets on astronomical observations in the visible and infrared. The satellites are a real problem in the roughly two hours after sunset and before sunrise and, as you’d expect, are more of a problem for images that capture a large part of the sky than for those covering small parts. ESO estimates that between 1 and 40 percent of images taken during the first and last hours of the night will be affected.

Another observatory, the Zwicky Transient Facility in Southern California, reported that already in late 2020 about 6 percent of their images were affected. By August 2021, the share of affected images had increased to 18 percent, and they expect that by the time Starlink reaches 10,000 satellites, all their images will contain trails from the satellites.

Last month, the International Astronomical Union announced they’d founded the Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference. Its purpose is to lobby for legislation that makes the low earth orbit satellites less disruptive for astronomy. 

I am really torn on this issue. On the one hand, I think the global internet coverage which the satellites may bring is a blessing for many poor and remote regions of the planet. On the other hand, private companies should be a little more respectful of astronomy as a scientific and cultural good. What do you think? Let me know in the comments.

Saturday, March 05, 2022

Did the early universe inflate?

[This is a transcript of the video embedded below. Some of the explanations may not make sense without the animations in the video.]


One of the most amazing discoveries of the past century has been that the universe expands. This is one of the insights physicists derived from Einstein’s theory of General Relativity. Yes, that guy again! But after this discovery, physicists made the theory more complicated. They added the hypothesis that not only does the universe expand, but that early on, right after the big bang, it expanded exponentially, blowing up space by 30 orders of magnitude in a fraction of a second.
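To get a sense of the numbers: cosmologists measure exponential expansion in “e-folds,” the natural logarithm of the factor by which distances grow. Here is a back-of-the-envelope conversion of the 30 orders of magnitude quoted above (my own arithmetic, not a figure from the video):

```latex
a(t) \propto e^{H t}, \qquad N \equiv \ln\frac{a_{\rm end}}{a_{\rm start}}
% 30 orders of magnitude in the scale factor correspond to
N = \ln 10^{30} = 30 \ln 10 \approx 69 \text{ e-folds}
```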

This rapid exponential expansion in the early universe is called “inflation” and it does not follow from Einstein’s theory. Why did physicists add this complication? How does it work? And do we have any evidence that it’s actually correct? That’s what we’ll talk about today. Did the early universe inflate?

In the popular science media, inflation is sometimes presented as if it were established fact. It isn’t. Its status is similar to that of particle dark matter: both are unconfirmed hypotheses. But while most physicists agree that particle dark matter has yet to be empirically confirmed, opinions about inflation are extremely polarized.

On the one hand you have people like Alan Guth, one of the inventors of inflation theory, arguing that the theory has made many correct predictions and that evidence speaks for it. On the other hand, you have people like Paul Steinhardt, interestingly enough also one of the inventors of inflation, who argue that inflation doesn’t make any predictions and isn’t even science. In an essay some years ago, Steinhardt together with Anna Ijjas and Avi Loeb wrote “inflationary cosmology, as we currently understand it, cannot be evaluated using the scientific method.”

Which side is right? They’re both right and they’re both wrong. Stay with me for some minutes and I hope it’ll start making sense.

The major disagreement between the two sides is philosophical, and we have to get this out of the way before we can talk about the science.

Guth, and most of his colleagues really, argue that physicists have used models of inflation to make predictions which were later confirmed, such as some properties of the large scale structure and the cosmic microwave background, notably that the scalar spectral index is somewhat smaller than one, and that space is on average flat, to good precision. This agrees with observation, and they think it is evidence in favor of inflation. Steinhardt’s side holds against this that inflation models can be made to predict pretty much anything, so that *those* predictions which happened to fit observations can’t speak in favor of inflation.

On that count, Steinhardt’s people are clearly correct. Just because someone made a correct prediction doesn’t mean they have a good scientific theory. They may just have been lucky. And if you make sufficiently many different predictions, the chance that one of them later fits to observations is very high. Predictions are really overrated. Whether a scientific theory is good or not has nothing to do with the time at which one does a calculation with it. What matters instead is how much data you can correctly explain with it, so Guth’s argument doesn’t hold water.

What Steinhardt’s people are arguing, in a nutshell, is that inflation is such a flexible hypothesis that it can be made to fit any data. To see why they say this, let us have a look at how inflation works. You conjecture that in the early universe there was no matter, but a new type of field called the inflaton field. The inflaton field has a potential energy and an initial condition. This potential energy and the initial condition – here comes the problem – are described by a bunch of parameters and functions.

The potential energy gives rise to the exponential expansion of the universe. But as the universe expands, the field sheds its potential energy. And when it’s done with that, the field decays into the normal particles of the standard model. And dark matter, if you think it exists. Which it may not. In any case, the inflaton field disappears so we can’t see it today. And once the inflaton field is gone, you take what’s left and from this you calculate what we should observe.
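The dynamics sketched above can be written down compactly. This is a minimal single-field sketch, not any particular model from the literature: φ is the inflaton, V its potential energy, and H the expansion rate.

```latex
H^2 = \frac{8\pi G}{3}\left(\tfrac{1}{2}\dot\varphi^2 + V(\varphi)\right),
\qquad
\ddot\varphi + 3H\dot\varphi + V'(\varphi) = 0
% "Slow roll": while V(\varphi) dominates over \tfrac{1}{2}\dot\varphi^2,
% H is approximately constant and a(t) \propto e^{Ht}.
% When the field reaches the minimum of V, it oscillates and decays,
% "reheating" the universe with standard-model particles.
```

The choice of V(φ) and the initial values of φ and its time derivative are exactly the parameters and functions that the different models disagree about.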

So the way inflation works is that you put in some parameters, initial values, and functions and out come numbers for what we should measure. There are literally hundreds of models for inflation and each makes somewhat different predictions.

Steinhardt and his people now argue that, regardless of what we observe, you can always fumble together an inflationary model that fits the observations. Therefore, the idea has no predictive power.

Guth and his side have two answers to this, which actually contradict each other. First, you often hear them claim that inflation has made unambiguous predictions: as I said earlier, that the spectral index is somewhat smaller than one, and that the curvature density of space is small today, indeed so small that at present it’s consistent with zero.

Problem is, this is patently untrue. If you look at the old literature, before we had the data, it’s easy enough to find inflationary models that predicted a spectral index larger than one. And inflation doesn’t predict the curvature at all. Inflation merely decreases the initial value that you picked for the curvature. But for any value that we observe today, there is *some initial value.

The second answer you will hear from the defenders of inflation is that, yes, we have a lot of inflationary models and they can predict anything, thereby contradicting the first argument. But, they say, we just determine the correct parameters and the potential from observations, and that’s the same thing we do with the standard model of particle physics. For example, in a recent podcast, to which I leave you a link below, Alan Guth made the following claim.
“It is certainly true that there are many different versions of inflation which, as you describe well, depend on what you assume about the potential energy function for the inflaton field. There are models with many inflaton fields, more complicated potentials, and interactions between them. So there’s a large variety there. But that’s exactly the same situation as one has in quantum field theory and how it relates to the standard model of particle physics.”
Another example: in response to the SciAm essay by Steinhardt and coauthors, a group of cosmologists wrote a letter. This letter was signed by a lot of big shots in the field, Alan Guth, Andrei Linde, and David Kaiser, but also Steven Weinberg, Frank Wilczek, and Ed Witten. They wrote:
“the testability of a theory in no way requires that all its predictions be independent of the choice of parameters. If such parameter independence were required, then we would also have to question the status of the Standard Model, with its empirically determined particle content and 19 or more empirically determined parameters.”
It is correct of course that for a theory to be testable not all its predictions have to be independent of the parameters. But it does require that you predict more data points than you have parameters. A scientific theory requires that you get more out than you put in, otherwise you don’t explain anything, you’re overfitting data.
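To illustrate this counting argument with a toy example (the data below are made up purely for illustration): a model with as many free parameters as data points fits those data perfectly, yet tells you nothing about the next data point.

```python
def lagrange_fit(xs, ys):
    """Return the unique polynomial of degree len(xs)-1 that passes
    exactly through all the given (x, y) points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Five made-up noisy "observations" of an underlying law y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

# Five parameters for five data points: the fit is perfect ...
p = lagrange_fit(xs, ys)
print(all(abs(p(x) - y) < 1e-9 for x, y in zip(xs, ys)))  # True

# ... but the "perfect" model has no predictive power. At x = 6 the
# underlying law gives 13, while the fit extrapolates to about 34.4.
print(round(p(6.0), 1))
```

A straight-line fit, with two parameters for five data points, would get the new point roughly right: it compresses the data, which is what it means for a model to have explanatory power.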

And when it comes to fitting data, the situation for inflation is not remotely comparable to the standard model of particle physics. In particle physics, those 19 parameters explain literally terabytes of data. This means it’s a model with extraordinarily high explanatory power. But for inflation, the data you’re trying to predict comes down to a few numbers. In this case, the input of the models is actually more complicated than the output. This means they’re crappy models without explanatory power.

That you use the same fitting procedure as for the standard model is completely irrelevant. That, in my understanding, is what Steinhardt’s side claims, and they are clearly correct on this count, too. Inflation can predict anything, and no, this is not standard scientific methodology. Standard scientific methodology would require you to stick with models that have explanatory power.

Steinhardt, by the way, argued exactly the opposite 20 years ago. The reason he changed his mind seems to be that many cosmologists have argued that inflation leads to a multiverse, and Steinhardt doesn’t like the multiverse. So now he has made up his own alternative to inflation, a type of cyclic cosmology. This didn’t really do his argument any favors.

Not that Guth’s side did any better. Another “argument” which the defenders of inflation raised in their letter was this:
“According to the high-energy physics database INSPIRE, there are now more than 14,000 papers in the scientific literature, written by over 9,000 distinct scientists, that use the word “inflation” or “inflationary” in their titles or abstracts. By claiming that inflationary cosmology lies outside the scientific method, [Ijjas, Steinhardt, and Loeb] are dismissing the research of not only all the authors of this letter but also that of a substantial contingent of the scientific community.”
This argument sadly shows that social reinforcement is a real problem in physics. Some of the biggest names in the community signed up to what is basically an argument from popularity, clearly a logical fallacy. It’s because of arguments like this that people don’t trust scientists.

In any case, that’s it with the philosophy, now let’s talk about the science. I just told you why Steinhardt’s people are right, so now let me tell you why they’re wrong.

They’re wrong because, while it is correct that many physicists have fumbled together complicated models for inflation, that’s beside the point. Of course it means there’s a colossal waste of time and money going on. But as far as the science is concerned, you should really ask whether there is *any* simple model of inflation from which you get out more than you put in. And the answer to this is yes. You just have to look at the right models and the right data.

The most impressive data which simple inflationary models explain is a peculiar correlation in the cosmic microwave background, between the temperature and the E-modes, called the TE correlation. It doesn’t really matter if you don’t know exactly what this is; the point is that it’s something which has been observed, and it’s a non-trivial correlation in the data which you can calculate from a bunch of simple inflationary models. These models are good explanations for observations.

An example of such a simple model that fits all current data is Starobinsky inflation. In this figure you see that it’s right in the middle of the experimentally allowed region. But some other simple models are good, too.
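For the record, these are the standard textbook predictions of Starobinsky inflation (quoted from the general literature, not from the video): both observables follow from a single number, the number of e-folds N.

```latex
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12}{N^2}
% For N \approx 55 e-folds this gives
n_s \approx 0.964, \qquad r \approx 0.004
```

That is the sense in which the model gets more out than it puts in: one input, two measurable outputs, both compatible with the data.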

That there are also many other models which don’t work doesn’t really matter. Unless, I guess, you’re one of the 9,000 or so people who have published papers on them.

So, to summarize: Guth is right in saying that inflation is good science, but he is wrong about the reason why that’s the case. Steinhardt is right in pointing out that Guth’s argument doesn’t hold up, but his conclusion is wrong, because there are other reasons why inflation is good science.

However, that doesn’t mean inflation is right. Physicists have proposed many other theories for the early universe, for example cyclic cosmology, and those can also explain observations. And maybe in the end one of those other theories will be the better explanation. We’ll talk about some of those alternatives another time, so don’t forget to subscribe.