Saturday, May 29, 2021

What does the universe expand into? Do we expand with it?

[This is a transcript of the video embedded below.]


If the universe expands, what does it expand into? That’s one of the most frequent questions I get, followed by “Do we expand with the universe?” And “Could it be that the universe doesn’t expand but we shrink?” At the end of this video, you’ll know the answers.

I haven’t made a video about this so far, because there are already lots of videos about it. But then I was thinking, if you keep asking, those other videos probably didn’t answer the question. And why is that? I am guessing it may be because one can’t really understand the answer without knowing at least a little bit about how Einstein’s theory of general relativity works. Hi Albert. Today is all about you.

So here’s that little bit you need to know about General Relativity. First of all, Einstein took over from special relativity the insight that time is a dimension, so we really live in a four-dimensional space-time with one dimension of time and three dimensions of space.

Without general relativity, space-time is flat, like a sheet of paper. With general relativity, it can curve. But what is curvature? That’s the key to understanding space-time. To see what it means for space-time to curve, let us start with the simplest example, a two-dimensional sphere, no time, just space.

That image of a sphere is familiar to you, but really what you see isn’t just the sphere. You see a sphere in a three dimensional space. That three dimensional space is called the “embedding space”. The embedding space itself is flat, it doesn’t have curvature. If you embed the sphere, you immediately see that it’s curved. But that’s NOT how it works in general relativity.

In general relativity we are asking how we can find out what the curvature of space-time is while living inside it. There’s no outside. There’s no embedding space. So, for the sphere that’d mean we’d have to ask how we’d find out it’s curved if we were living on the surface, maybe as ants crawling around on it.

One way to do it is to remember that in flat space the interior angles of triangles always sum to 180 degrees. In a curved space, that’s no longer the case. An extreme example is to take a triangle that has a right angle at one of the poles of the sphere, goes down to the equator, and closes along the equator. This triangle has three right angles. They sum to 270 degrees. That just isn’t possible in flat space. So if the ant measures those angles, it can tell it’s crawling around on a sphere.
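
For completeness, here is the standard formula behind that example; it isn’t in the transcript, but it’s a textbook result. On a sphere of radius a, the interior angles of a triangle exceed 180 degrees by an amount proportional to the triangle’s area:

```latex
% Spherical excess on a sphere of radius a:
%   sum of interior angles = pi + (area of triangle) / a^2
\alpha + \beta + \gamma = \pi + \frac{A}{a^{2}}
% The pole-to-equator triangle above covers 1/8 of the sphere, so A = \pi a^2 / 2
% and the angle sum is \pi + \pi/2, i.e. 270 degrees.
```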

There is another way the ant can figure out it’s in a curved space. In flat space, the circumference of a circle is 2πR, where R is the radius of the circle. But that relation doesn’t hold in a curved space either. If our ant crawls a distance R from the pole of the sphere and then walks around in a circle, the circumference of that circle will be less than 2πR. This means that measuring the circumference is another way to find out the surface is curved, without knowing anything about the embedding space.
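
As a worked formula (again a standard result added for reference, not part of the original text): if the ant crawls a distance R from the pole of a sphere of radius a and then walks around in a circle, the circumference it measures is

```latex
% Circumference of a circle of crawled radius R on a sphere of radius a:
C = 2\pi a \sin\!\left(\frac{R}{a}\right) < 2\pi R
% In flat space C = 2\pi R; the shortfall is an intrinsic signature of curvature,
% measurable without ever leaving the surface.
```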

By the way, if you try these two methods for a cylinder instead of a sphere you’ll get the same result as in flat space. And that’s entirely correct. A cylinder has no intrinsic curvature. It’s periodic in one direction, but it’s internally flat.

General Relativity now uses a higher dimensional generalization of this intrinsic curvature. So, the curvature of space-time is defined entirely in terms which are internal to the space-time. You don’t need to know anything about the embedding space. The space-time curvature shows up in Einstein’s field equations in these quantities called R.

Roughly speaking, to calculate those, you take all the angles of all possible triangles in all orientations at all points. From that you can construct an object called the curvature tensor, which tells you exactly where space-time curves, how strongly, and in which direction. The things in Einstein’s field equations are sums over that curvature tensor.
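
For reference, here is how those R’s look in the usual textbook notation; this is standard and only spelled out here because the transcript doesn’t show it. The “sums over the curvature tensor” are the Ricci tensor and the Ricci scalar, and they enter the field equations as

```latex
% Einstein's field equations: R_{\mu\nu} is the Ricci tensor (a contraction of the
% Riemann curvature tensor), R is the Ricci scalar, g_{\mu\nu} the metric,
% and T_{\mu\nu} the stress-energy tensor.
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
```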

That’s the one important thing you need to know about General Relativity, the curvature of space-time can be defined and measured entirely inside of space-time. The other important thing is the word “relativity” in General Relativity. That means you are free to choose a coordinate system, and the choice of a coordinate system doesn’t make any difference for the prediction of measurable quantities.

It’s one of these things that sounds rather obvious in hindsight. Certainly if you make a prediction for a measurement and that prediction depends on an arbitrary choice you made in the calculation, like choosing a coordinate system, then that’s no good. However, it took Albert Einstein to convert that “obvious” insight into a scientific theory, first special relativity and then, general relativity.

So with that background knowledge, let us then look at the first question. What does the universe expand into? It doesn’t expand into anything, it just expands. The statement that the universe expands is, like any other statement that we make in general relativity, about the internal properties of space-time. It says, loosely speaking, that the space between galaxies stretches. Think back to the sphere and imagine its radius increases. As we discussed, you can figure that out by making measurements on the surface of the sphere. You don’t need to say anything about the embedding space surrounding the sphere.

Now you may ask, but can we embed our four-dimensional space-time in a higher dimensional flat space? The answer is yes. You can do that. It takes in general 10 dimensions. But you could indeed say the universe is expanding into that higher dimensional embedding space. However, the embedding space is by construction entirely unobservable, which is why we have no rationale to say it’s real. The scientifically sound statement is therefore that the universe doesn’t expand into anything.

Do we expand with the universe? No, we don’t. Indeed, it’s not only that we don’t expand, but galaxies don’t expand either. It’s because they are held together by their own gravitational pull. They are “gravitationally bound”, as physicists say. The pull that comes from the expansion is just too weak. The same goes for solar systems and planets. And atoms are held together by much stronger forces, so atoms in intergalactic space also don’t expand. It’s only the space between them that expands.

How do we know that the universe expands and it’s not that we shrink? Well, to some extent that’s a matter of convention. Remember that Einstein says you are free to choose whatever coordinate system you like. So you can use a coordinate system that has yardsticks which expand at exactly the same rate as the universe. If you use those, you’d conclude the universe doesn’t expand in those coordinates.

You can indeed do that. However, those coordinates have no good physical interpretation. That’s because they will mix space with time. So in those coordinates, you can’t stand still. Whenever you move forward in time, you also move sideways in space. That’s weird, and it’s why we don’t use those coordinates.

The statement that the universe expands is really a statement about certain types of observations, notably the redshift of light from distant galaxies, but also a number of other measurements. And those observations are entirely independent of just what coordinates you choose to describe them. However, explaining them by saying that the universe expands, in this particular coordinate system, is an intuitive interpretation.

So, the two most important things you need to know to make sense of General Relativity are, first, that the curvature of space-time can be defined and measured entirely within space-time. An embedding space is unnecessary. And second, you are free to choose whatever coordinate system you like. It doesn’t change the physics.

In summary: General Relativity tells us that the universe doesn’t expand into anything, we don’t expand with it, and while you could say that the universe doesn’t expand but we shrink, that interpretation doesn’t make a lot of physical sense.

Thursday, May 27, 2021

The Climate Book You Didn’t Know You Need

The Physics of Climate Change
Lawrence Krauss
Post Hill Press (March 2021)

In the past years, media coverage of climate change has noticeably shifted. Many outlets have begun referring to it as “climate crisis” or “climate emergency”, a mostly symbolic move, in my eyes, because those who trust that their readers will tolerate this nomenclature are those whose readers don’t need to be reminded of the graveness of the situation. Even more marked has been the move to no longer mention climate change skeptics and, moreover, to proudly declare the intention to no longer acknowledge even the existence of the skeptics’ claims.

As a scientist who has worked in science communication for more than a decade, I am of two minds about this. On the one hand, I perfectly understand the futility of repeating the same facts to people who are unwilling or unable to comprehend them – it’s the reason I don’t respond when someone emails me their home-brewed theory of everything. On the other hand, it’s what most science communication comes down to: patiently rephrasing the same thing over and over again. That science writers – who dedicate their lives to communicating research – refuse to explain that very research strikes me as an odd development.

This makes me suspect something else is going on. Declaring the science settled relieves news contributors of the burden of actually having to understand said science. It’s temptingly convenient and cheap, both literally and figuratively. Think about the last dozen or so news reports on climate change you’ve read. Earliest cherry blossom bloom in Japan, ice still melting in Antarctica, Greta Thunberg doesn’t want to travel to Glasgow in November. Did one of those actually explain how scientists know that climate change is man-made? I suspect not. Are you sure you understand it? Would you be comfortable explaining it to a climate change skeptic?

If not, then Lawrence Krauss’ new book “The Physics of Climate Change” is for you. It’s a well-curated collection of facts and data with explanations that are just about technical enough to understand the science without getting bogged down in details. The book covers historical and contemporary records of carbon dioxide levels and temperature, greenhouse gases and how their atmospheric concentrations change the energy balance, how we can tell one cause of climate change from another, and impacts we have seen and can expect to see, from sea level rise to tipping points.

To me, learning some climate science has been a series of realizations that it’s more difficult than it looks at first sight. Remember, for example, the explanation for the greenhouse effect we all learned in school? Carbon dioxide in the atmosphere lets incoming sunlight through, but prevents infrared light from escaping into space, hence raising the temperature. Alas, a climate change skeptic might point out, the absorption of infrared light is saturated at carbon dioxide levels well below the current ones. So, burning fossil fuels can’t possibly make any difference, right?

No, wrong. But explaining just why is not so simple...

In a nutshell, the problem with the greenhouse analogy is that Earth isn’t a greenhouse. It isn’t surrounded by a surface that traps light, but rather by an atmosphere whose temperature and density fall gradually with altitude. The reason that increasing carbon dioxide concentrations continue to affect the heat balance of our planet is that they move the average altitude from which infrared light can escape upwards. But in the relevant region of the atmosphere (the troposphere) higher altitude means lower temperature. Hence, the increasing carbon dioxide level makes it more difficult for Earth to lose heat. The atmosphere must therefore warm to get back into an energy balance with the sun. If that explanation was too short, Krauss goes through the details in one of the chapters of his book.
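
A back-of-the-envelope version of that argument, with round, illustrative numbers that are my addition and not taken from Krauss’ book:

```latex
% Earth radiates to space from an effective emission altitude z_e at temperature T_e:
%   (1 - \alpha)\,\frac{S}{4} = \sigma T_e^{4} \quad\Rightarrow\quad T_e \approx 255\,\mathrm{K}
% In the troposphere the temperature falls with altitude at the lapse rate \Gamma \approx 6.5\,\mathrm{K/km}:
%   T_\mathrm{surface} \approx T_e + \Gamma\, z_e
% More CO2 raises z_e. To keep radiating the same power at T_e, the whole profile,
% including the surface, has to warm:
%   \Delta T_\mathrm{surface} \approx \Gamma\, \Delta z_e
```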

There are a number of other stumbling points that took me some time to wrap my head around. Isn’t water vapor a much more potent greenhouse gas? How can we possibly tell whether global temperatures rise because of us or because of other, natural, causes, for example changes in the sun? Have climate models ever correctly predicted anything, and if so what? And in any case, what’s the problem with a temperature increase that’s hard to even read off the old-fashioned liquid thermometer pinned to our patio wall? I believe these are all obvious questions that everybody has at some point, and Krauss does a great job answering them.

I welcome this book because I have found it hard to come by a didactic introduction to climate science that doesn’t raise more questions than it answers. Yes, there are websites which answer skeptics’ claims, but more often than not they offer little more than reference lists. Well intended, I concur, but not terribly illuminating. I took Michael Mann’s online course Climate Change: The Science and Global Impact, which provides a good overview. But I know enough physics to know that Mann’s course doesn’t say much about the physics. And, yes, I suppose I could take a more sophisticated course, but there are only so many hours in a day. I am sure the problem is familiar to you.

So, at least for me, Krauss’ book fills a gap in the literature. To begin with, at under 200 pages in generous font size, it’s a short book. I have also found it a pleasure to read, for Krauss neither trivializes the situation nor pushes conclusions in the reader’s face. It becomes clear from his writing that he is concerned, but his main mission is to inform, not to preach.

I welcome Krauss’ book for another reason. As a physicist myself, I have been somewhat embarrassed by the numerous physicists who have put forward very – I am looking for a polite word here – shall we say, iconoclastic, ideas about climate change. I have also noticed personally, on several occasions, that physicists have rather strong yet uninformed opinions about what climate models are good for. I am therefore happy that a physicist as well-known as Krauss counteracts the impression that physicists believe they know everything better. He sticks 100% with the established science and doesn’t put forward his own speculations.

There are some topics, though, I wish Krauss had said more about. One particularly glaring omission is the uncertainty in climate trend projections due to our limited understanding of cloud formation. Indeed, Krauss says little about the shortcomings of current climate models aside from acknowledging that tipping points are difficult to predict, and nothing about the difficulties of quantifying the uncertainty. This is unfortunate, for it’s another issue that irks me when I read about climate change in newspapers or magazines. Every model has shortcomings, and when those shortcomings aren’t openly put on the table I begin to wonder if something’s being swept under the rug. You see, I’m chronically skeptical myself. Maybe it’s something to do with being a physicist after all.

I for one certainly wish there was more science in the news coverage of climate change. Yes, there are social science studies showing that facts do little to change opinions. But many people, I believe, genuinely don’t know what to think because without at least a little background knowledge it isn’t all that easy to identify mistakes in the arguments of climate change deniers. Krauss’ book is a good starting point to get that background knowledge.

Saturday, May 22, 2021

Aliens that weren't aliens

[This is a transcript of the video embedded below.]


The interstellar object ‘Oumuamua travelled through our solar system in 2017. Soon after it was spotted, the astrophysicist Avi Loeb claimed it was alien technology. Now it looks like it was just a big chunk of nitrogen.

This wasn’t the first time scientists mistakenly yelled “aliens” and it certainly won’t be the last. So, in this video we’ll look at the history of supposed alien discoveries. What did astronomers see, what did they think it was, and what did it turn out to be in the end? And what are we to make of these claims? That’s what we’ll talk about today.

Let’s then talk about all the times when aliens weren’t aliens. In 1877, the Italian astronomer Giovanni Schiaparelli studied the surface of our neighbor planet Mars. He saw a network of long, nearly straight lines. At that time, astronomers didn’t have the ability to take photographs of their observations, and the usual procedure was to make drawings and write down what they saw. Schiaparelli called the structures “canali” in Italian, a word which leaves their origin unspecified. In the English translation, however, the “canali” became “canals”, which strongly suggested an artificial origin. The better word would have been “channels”.

This translation blunder made scientific history. Even though the resolution of telescopes at the time wasn’t good enough to identify surface structures on Mars, a couple of other astronomers quickly reported that they also saw canals. Around the turn of the 19th to the 20th century, the American astronomer Percival Lowell published three books in which he presented the hypothesis that the canals were an irrigation system built by an intelligent civilization.

The idea that there had once been, or maybe still was, intelligent life on Mars persisted until 1965. In that year, the American space mission Mariner 4 flew by Mars and sent back the first photos of Mars’s surface to Earth. The photos showed craters but nothing resembling canals. The canals turned out to have been imaging artifacts, supported by vivid imagination. And even though the scientific community laid the idea of canals on Mars to rest in 1965, it took much longer for the public to get over the idea of life on Mars. I recall my grandmother was still telling me about the canals in the 1980s.

But in any case the friends of ET didn’t have to wait long for renewed hope. In 1967, the Irish astrophysicist Jocelyn Bell Burnell noticed that the radio telescope in Cambridge which she worked on at the time recorded a recurring signal that pulsed with a period of somewhat more than a second. She noted down “LGM” on the printout of the measurement curve, short for “little green men”.

The little green men were a joke, of course. But at the time, astrophysicists didn’t know any natural process that could explain Bell Burnell’s observations, so they couldn’t entirely exclude that it was a signal stemming from alien technology. However, a few years after the signal was first recorded it became clear that its origin was not aliens, but a rotating neutron star.

Rotating neutron stars can build up strong magnetic fields and then emit a steady, but directed, beam of electromagnetic radiation. And since the neutron star rotates, we only see this beam if it happens to point in our direction. This is why the signal appears to be pulsed. Such objects are now called “pulsars”.

Then in 1996, life on Mars had a brief comeback. That year, a group of American scientists reported that a meteorite found in Antarctica seemed to carry traces of bacteria. This rock was probably flung into the direction of our planet when a heavier meteorite crashed into the surface of Mars. Indeed, other scientists confirmed that the Antarctic meteorite most likely came from Mars. However, they concluded that the structures in the rock are too small to be of bacterial origin.

That wasn’t it with alien sightings. In 2015, the Kepler telescope found a star with irregular changes in its brightness. Officially the star has the catchy name KIC 8462852, but unofficially it’s been called WTF. That stands, as you certainly know, for “Where’s the flux?” The name which stuck in the end, though, was “Tabby’s star”, after the first name of its discoverer, Tabetha Boyajian.

At first astrophysicists didn’t have a good explanation for the odd behavior of Tabby’s star. And so, it didn’t take long until a group of researchers from Penn State University proposed that aliens were building a megastructure around the star.

Indeed, the physicist Freeman Dyson had argued already in the 1960s that advanced extraterrestrial civilizations would try to capture energy from their sun as directly as possible. To this end, Dyson speculated, they’d build a sphere around the star. It has remained unclear how such a sphere would be constructed or remain stable, but, well, they are advanced, these civilizations, so presumably they’ve figured it out. And if they’re covering up their star to catch its energy, that could quite plausibly lead to a signal like the one observed from Tabby’s star.

Several radio telescopes scanned the area around Tabby’s star on the lookout for signs of intelligent life, but didn’t find anything. Further observations now seem to support the hypothesis that the star is surrounded by debris from a destroyed moon or other large rocks.

Then, in 2017, the Canadian astronomer Robert Weryk made a surprising discovery when he analyzed data from the Pan-STARRS telescope in Hawaii. He saw an object that passed closely by our planet, but it looked neither like a comet nor like an asteroid.

When Weryk traced back its path, the object turned out to have come from outside our solar system. “‘Oumuamua” the astronomers named it, Hawaiian for “messenger from afar arriving first”.

‘Oumuamua gave astronomers and physicists quite something to think about. It entered our solar system on a path that agreed with the laws of gravity, with no hints of any further propulsion. But as it got closer to the sun, it began to emit particles of some sort that gave it an acceleration.

This particle emission didn’t fit what’s usually observed from comets. Also, the shape of ‘Oumuamua is rather atypical for asteroids or comets. The shape which fits the data best is that of a disk, about 6 to 8 times as wide as it is high.

When ‘Oumuamua was first observed, no one had any good idea what it was, what it was made of, or what happened when it got close to the sun. The Astrophysicist Avi Loeb used the situation to claim that ‘Oumuamua is technology of an alien civilization. “[T]he simplest, most direct line from an object with all of ‘Oumuamua’s observed qualities to an explanation for them is that it was manufactured.”

According to a new study it now looks like ‘Oumuamua is a piece of frozen nitrogen that was split off a nitrogen planet in another solar system. It remained frozen until it got close to our sun, when it began to partly evaporate. Though we will never know exactly because the object has left our solar system for good and the data we have now is all the data we will ever have.

And just a few weeks ago, we talked about what happened to the idea that there’s life on Venus. Check out my earlier video for more about that.

So, what do we learn from that? When new discoveries are made it takes some time until scientists have collected and analyzed all the data, formulated hypotheses, and evaluated which hypothesis explains the data best. Before that is done, the only thing that can reliably be said is “we don’t know”.

But “we don’t know” is boring and doesn’t make headlines. Which is why some scientists use the situation to put forward highly speculative ideas before anyone else can show they’re wrong. This is why headlines about possible signs of extraterrestrial life are certainly entertaining but usually, after a few years, disappear.

Thanks for watching, don’t forget to subscribe, see you next week.

Saturday, May 15, 2021

Quantum Computing: Top Players 2021

[This is a transcript of the video embedded below.]


Quantum computing is currently one of the most exciting emergent technologies, and it’s almost certainly a topic that will continue to make headlines in the coming years. But there are now so many companies working on quantum computing, that it’s become really confusing. Who is working on what? What are the benefits and disadvantages of each technology? And who are the newcomers to watch out for? That’s what we will talk about today.

Quantum computers use units that are called “quantum-bits” or qubits for short. In contrast to normal bits, which can take on two values, like 0 and 1, a qubit can take on an arbitrary combination of two values. The magic of quantum computing happens when you entangle qubits.

Entanglement is a type of correlation, so it ties qubits together, but it’s a correlation that has no equivalent in the non-quantum world. There are a huge number of ways qubits can be entangled and that creates a computational advantage - if you want to solve certain mathematical problems.
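
To make “arbitrary combination” and “entangled” a little more concrete, here is a minimal numpy sketch; it’s my own illustration and not tied to any particular quantum computer:

```python
import numpy as np

# A single qubit: a normalized complex combination of the basis states |0> and |1>.
alpha, beta = 0.6, 0.8j              # any pair with |alpha|^2 + |beta|^2 = 1
qubit = np.array([alpha, beta])
print(np.abs(qubit)**2)              # probabilities of measuring 0 or 1: [0.36, 0.64]

# Two entangled qubits, a Bell state: (|00> + |11>) / sqrt(2).
# Basis order: |00>, |01>, |10>, |11>.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(np.abs(bell)**2)               # [0.5, 0., 0., 0.5]
# The outcomes 01 and 10 never occur: whatever the first qubit gives, the second matches.
# This correlation cannot be factored into separate probabilities for each qubit.
```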

Quantum computers can help, for example, to solve the Schrödinger equation for complicated molecules. One could use that to find out what properties a material has without having to synthetically produce it. Quantum computers can also solve certain logistics problems or optimize financial systems. So there is real potential for application.

But quantum computing does not help for *all types of calculations, they are special purpose machines. They also don’t operate all by themselves, but the quantum parts have to be controlled and read out by a conventional computer. You could say that quantum computers are for problem solving what wormholes are for space-travel. They might not bring you everywhere you want to go, but *if they can bring you somewhere, you’ll get there really fast.

What makes quantum computing special is also what makes it challenging. To use quantum computers, you have to maintain the entanglement between the qubits long enough to actually do the calculation. And quantum effects are really, really sensitive even to the smallest disturbances. To be reliable, quantum computers therefore need to operate with several copies of the information, together with an error correction protocol. And to do this error correction, you need more qubits. Estimates say that the number of qubits we need for a quantum computer to do reliable and useful calculations that a conventional computer can’t do is about a million.

The exact number depends on the type of problem you are trying to solve, the algorithm, and the quality of the qubits and so on, but as a rule of thumb, a million is a good benchmark to keep in mind. Below that, quantum computers are mainly of academic interest.

Having said that, let’s now look at what different types of qubits there are, and how far we are on the way to that million.

1. Superconducting Qubits

Superconducting qubits are by far the most widely used, and most advanced type of qubits. They are basically small currents on a chip. The two states of the qubit can be physically realized either by the distribution of the charge, or by the flux of the current.

The big advantage of superconducting qubits is that they can be produced with the same techniques that the electronics industry has used for the past five decades. These qubits are basically microchips, except, here it comes, they have to be cooled to extremely low temperatures, about 10 to 20 millikelvin. One needs these low temperatures to make the circuits superconducting, otherwise you can’t keep them in these neat qubit states.

Despite the low temperatures, quantum effects in superconducting qubits disappear extremely quickly. This disappearance of quantum effects is measured by the “decoherence time”, which for superconducting qubits is currently a few tens of microseconds.

Superconducting qubits are the technology used by Google and IBM, and also by a number of smaller companies. In 2019, Google was first to demonstrate “quantum supremacy”, which means they performed a task that a conventional computer could not have done in a reasonable amount of time. The processor they used for this had 53 qubits. I made a video specifically about this topic, so check that out for more. Google’s supremacy claim was later disputed by IBM. IBM argued that the calculation could actually have been performed within a reasonable time on a conventional super-computer, so Google’s claim was somewhat premature. Maybe it was. Or maybe IBM was just annoyed they weren’t first.

IBM’s quantum computers also use superconducting qubits. Their biggest one currently has 65 qubits, and they recently put out a roadmap that projects 1000 qubits by 2023. IBM’s smaller quantum computers, the ones with 5 and 16 qubits, are free to access in the cloud.

The biggest problem for superconducting qubits is the cooling. Beyond a few thousand or so, it’ll become difficult to put all qubits into one cooling system, so that’s where it’ll become challenging.

2. Photonic quantum computing

In photonic quantum computing the qubits are properties related to photons. That may be the presence of a photon itself, or the uncertainty in a particular state of the photon. This approach is pursued for example by the company Xanadu in Toronto. It is also the approach that was used a few months ago by a group of Chinese researchers, which demonstrated quantum supremacy for photonic quantum computing.

The biggest advantage of using photons is that they can be operated at room temperature, and the quantum effects last much longer than for superconducting qubits, typically some milliseconds but it can go up to some hours in ideal cases. This makes photonic quantum computers much cheaper and easier to handle. The big disadvantage is that the systems become really large really quickly because of the laser guides and optical components. For example, the photonic system of the Chinese group covers a whole tabletop, whereas superconducting circuits are just tiny chips.

The company PsiQuantum however claims they have solved the problem and have found an approach to photonic quantum computing that can be scaled up to a million qubits. Exactly how they want to do that, no one knows, but that’s definitely a development to have an eye on.

3. Ion traps

In ion traps, the qubits are atoms that are missing some electrons and therefore have a net positive charge. You can then trap these ions in electromagnetic fields, and use lasers to move them around and entangle them. Such ion traps are comparable in size to the qubit chips. They also need to be cooled but not quite as much, “only” to temperatures of a few Kelvin.

The biggest player in trapped ion quantum computing is Honeywell, but the start-up IonQ uses the same approach. The advantages of trapped ion computing are longer coherence times than superconducting qubits – up to a few minutes. The other advantage is that trapped ions can interact with more neighbors than superconducting qubits.

But ion traps also have disadvantages. Notably, they are slower to react than superconducting qubits, and it’s more difficult to put many traps onto a single chip. However, they’ve kept up with superconducting qubits well.

Honeywell claims to have the best quantum computer in the world by quantum volume. What the heck is quantum volume? It’s a metric, originally introduced by IBM, that combines many different factors like errors, crosstalk, and connectivity. Honeywell reports a quantum volume of 64, and according to their website, they too are moving to the cloud next year. IonQ’s latest model contains 32 trapped ions sitting in a chain. They also have a roadmap according to which they expect quantum supremacy by 2025 and to be able to solve interesting problems by 2028.
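
In case the number 64 seems cryptic: quantum volume is defined so that its base-2 logarithm gives the width and depth of the largest “square” random circuit the machine runs reliably. Here is a one-line check of what the reported figure means (my sketch of the definition, not Honeywell’s internal benchmark):

```python
import math

quantum_volume = 64
n = int(math.log2(quantum_volume))
print(n)   # 6: circuits six qubits wide and six layers deep pass the benchmark
```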

4. D-Wave

Now what about D-Wave? D-wave is so far the only company that sells commercially available quantum computers, and they also use superconducting qubits. Their 2020 model has a stunning 5600 qubits.

However, the D-wave computers can’t be compared to the approaches pursued by Google and IBM because D-wave uses a completely different computation strategy. D-wave computers can be used for solving certain optimization problems that are defined by the design of the machine, whereas the technology developed by Google and IBM is good to create a programmable computer that can be applied to all kinds of different problems. Both are interesting, but it’s comparing apples and oranges.

5. Topological quantum computing

Topological quantum computing is the wild card. There isn’t currently any workable machine that uses the technique. But the idea is great: In topological quantum computers, information would be stored in conserved properties of “quasi-particles”, which are collective motions of particles. The great thing about this is that this information would be very robust to decoherence.

According to Microsoft, “the upside is enormous and there is practically no downside.” In 2018, their director of quantum computing business development told the BBC that Microsoft would have a “commercially relevant quantum computer within five years.” However, Microsoft had a big setback in February when they had to retract a paper that had claimed to demonstrate the existence of the quasi-particles they hoped to use. So much for “no downside”.

6. The far field

These were the biggest players, but there are two newcomers that are worth having an eye on.

The first is semiconducting qubits. They are very similar to superconducting qubits, but here the qubits are either the spin or the charge of single electrons. The advantage is that the temperature doesn’t need to be quite as low. Instead of 10 millikelvin, one “only” has to reach a few Kelvin. This approach is presently pursued by researchers at TU Delft in the Netherlands, supported by Intel.

The second is nitrogen-vacancy systems, where the qubits are places in the structure of a carbon crystal where a carbon atom is replaced with nitrogen. The great advantage of those is that they’re both small and can be operated at room temperature. This approach is pursued by the Hanson lab at QuTech, some people at MIT, and a startup in Australia called Quantum Brilliance.

So far there hasn’t been any demonstration of quantum computation for these two approaches, but they could become very promising.

So, that’s the status of quantum computing in early 2021, and I hope this video will help you to make sense of the next quantum computing headlines, which are certain to come.

I want to thank Tanuj Kumar for help with this video.

Saturday, May 08, 2021

What did Einstein mean by “spooky action at a distance”?

[This is a transcript of the video embedded below.]


Quantum mechanics is weird – I am sure you’ve read that somewhere. And why is it weird? Oh, it’s because it’s got that “spooky action at a distance”, doesn’t it? Einstein said that. Yes, that guy again. But what is spooky at a distance? What did Einstein really say? And what does it mean? That’s what we’ll talk about today.

The vast majority of sources on the internet claim that Einstein’s “spooky action at a distance” referred to entanglement. Wikipedia for example. And here is an example from Science Magazine. You will also find lots of videos on YouTube that say the same thing: Einstein’s spooky action at a distance was entanglement. But I do not think that’s what Einstein meant.

Let’s look at what Einstein actually said. The origin of the phrase “spooky action at a distance” is a letter that Einstein wrote to Max Born in March 1947. In this letter, Einstein explains to Born why he does not believe that quantum mechanics really describes how the world works.

He begins by assuring Born that he knows perfectly well that quantum mechanics is very successful: “I understand of course that the statistical formalism which you pioneered captures a significant truth.” But then he goes on to explain his problem. Einstein writes:
“I cannot seriously believe [in quantum mechanics] because the theory is incompatible with the requirement that physics should represent reality in space and time without spooky action at a distance...”

There it is, the spooky action at a distance. But just exactly what was Einstein referring to? Before we get into this, I have to quickly remind you how quantum mechanics works.

In quantum mechanics, everything is described by a complex-valued wave-function usually denoted Psi. From the wave-function we calculate probabilities for measurement outcomes, for example the probability to find a particle at a particular place. We do this by taking the absolute square of the wave-function.

But we cannot observe the wave-function itself. We only observe the outcome of the measurement. This means most importantly that if we make a measurement for which the outcome was not one hundred percent certain, then we have to suddenly “update” the wave-function. That’s because the moment we measure the particle, we know it’s either there or it isn’t. And this update is instantaneous. It happens at the same time everywhere, seemingly faster than the speed of light. And I think *that’s what Einstein was worried about because he had explained that already twenty years earlier, in the discussion of the 1927 Solvay conference.
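
Here is a tiny numerical illustration of those two steps, the Born rule and the update; it’s my own toy example, not anything Einstein or Born wrote down:

```python
import numpy as np

# A toy wave-function over five possible positions (complex amplitudes, normalized).
psi = np.array([0.1 + 0.2j, 0.5, 0.3j, 0.6, 0.2])
psi = psi / np.linalg.norm(psi)

# Born rule: probabilities are the absolute squares of the amplitudes.
probs = np.abs(psi)**2
print(probs, probs.sum())            # five probabilities that add up to 1

# Measurement: suppose the particle is found at position 3. The update is global and
# instantaneous: every other amplitude is set to zero at once, however far apart the
# positions are. This is the step Einstein objected to.
psi_updated = np.zeros_like(psi)
psi_updated[3] = 1.0
print(np.abs(psi_updated)**2)        # [0., 0., 0., 1., 0.]
```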

In 1927, Einstein used the following example. Suppose you direct a beam of electrons at a screen with a tiny hole and ask what happens with a single electron. The wave-function of the electron will diffract on the hole, which means it will spread symmetrically into all directions. Then you measure it at a certain distance from the hole. The electron has the same probability to have gone in any direction. But if you measure it, you will suddenly find it in one particular point.

Einstein argues: “The interpretation, according to which [the square of the wave-function] expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen.”

What he is saying is that somehow the wave-function on the left side of the screen must know that the particle was actually detected on the other side of the screen. In 1927, he did not call this action at a distance “spooky” but “peculiar” but I think he was referring to the same thing.

However, in Einstein’s electron argument it’s rather unclear what is acting on what, because there is only one particle. This is why Einstein, together with Podolsky and Rosen, later looked at the measurement for two particles that are entangled, which led to their famous 1935 EPR paper. So this is why entanglement comes in: because you need at least two particles to show that the measurement on one particle can act on the other particle. But entanglement itself is unproblematic. It’s just a type of correlation, and correlations can be non-local without there being any “action” at a distance.

To see what I mean, forget all about quantum mechanics for a moment. Suppose I have two socks that are identical, except the one is red and the other one blue. I put them in two identical envelopes and ship one to you. The moment you open the envelope and see that your sock is red, you know that my sock is blue. That’s because the information about the color in the envelopes is correlated, and this correlation can span over large distances.

There isn’t any spooky action going on though because that correlation was created locally. Such correlations exist everywhere and are created all the time. Imagine for example you bounce a ball off a wall and it comes back. That transfers momentum to the wall. You can’t see how much, but you know that the total momentum is conserved, so the momentum of the wall is now correlated with that of the ball.

Entanglement is a correlation like this, it’s just that you can only create it with quantum particles. Suppose you have a particle with total spin zero that decays into two particles that can each have spin either plus or minus one. One particle goes left, the other one right. You don’t know which particle has which spin, but you know that the total spin is conserved. So either the particle going to the right had spin plus one and the one going left minus one, or the other way round.

According to quantum mechanics, before you have measured one of the particles, both possibilities exist. You can then measure the correlations between the spins of both particles with two detectors on the left and right side. It turns out that the entanglement correlations can in certain circumstances be stronger than non-quantum correlations. That’s what makes them so interesting. But there’s no spooky action in the correlations themselves. These correlations were created locally. What Einstein worried about instead is that once you measure the particle on one side, the wave-function for the particle on the other side changes.

But isn’t this the same with the two socks? Before you open the envelope the probability was 50-50, and when you open it, it jumps to 100-0. But there’s no spooky action going on there. It’s just that the probability was a statement about what you knew, and not about what really was the case. Really, which sock was in which envelope was already decided at the time I sent them.

Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work. If you think that really it was decided already which spin went into which direction when they were emitted, that will not create sufficiently strong correlations. It’s just incompatible with observations. Einstein did not know that. These experiments were done only after he died. But he knew that using entangled states you can demonstrate whether spooky action is real, or not.
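
To put a number on “sufficiently strong”: for the spin-zero (singlet) state, quantum mechanics predicts that measurements along directions a and b are correlated as E(a,b) = −cos(a−b), and the CHSH combination of four such correlations reaches 2√2 ≈ 2.83, while any model in which the outcomes were “already decided at emission” stays at or below 2. The snippet below is my illustration of that textbook result, not a reconstruction of the actual experiments:

```python
import numpy as np

def E(a, b):
    # Singlet-state correlation between spin measurements along angles a and b (radians).
    return -np.cos(a - b)

# Standard CHSH measurement angles.
a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # 2.828..., above the bound of 2 that "decided at emission" models obey
```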

I will admit that I’m a little defensive of good old Albert Einstein because I feel that a lot of people too cheerfully declare that Einstein was wrong about quantum mechanics. But if you read what Einstein actually wrote, he was exceedingly careful in expressing himself, and yet most physicists dismissed his concerns. In April 1948, he repeats his argument to Born. He writes that a measurement on one part of the wave-function is a “physical intervention” and that “such an intervention cannot immediately influence the physical reality in a distant part of space.” Einstein concludes:
“For this reason I tend to believe that quantum mechanics is an incomplete and indirect description of reality which will later be replaced by a complete and direct one.”

So, Einstein did not think that quantum mechanics was wrong. He thought it was incomplete, that something fundamental was missing in it. And in my reading, the term “spooky action at a distance” referred to the measurement update, not to entanglement.

Saturday, May 01, 2021

Dark Matter: The Situation Has Changed

[This is a transcript of the video embedded below]


Hi everybody. We haven’t talked about dark matter for some time. Which is why today I want to tell you how my opinion about dark matter has changed over the past twenty years or so. In particular, I want to discuss whether dark matter is made of particles or if not, what else it could be. Let’s get started.

First things first, dark matter is the hypothetical stuff that astrophysicists think makes up eighty percent of the matter in the universe, or 24 percent of the combined matter-energy. Dark matter should not be confused with dark energy. These are two entirely different things. Dark energy is what makes the universe expand faster, dark matter is what makes galaxies rotate faster, though that’s not the only thing dark matter does, as we’ll see in a moment.

But what is dark matter? 20 years ago I thought dark matter is most likely made of some kind of particle that we haven’t measured so far. Because, well, I’m a particle physicist by training. And if a particle can explain an observation, why look any further? Also, at the time there were quite a few proposals for new particles that could fit the data, like some supersymmetric particles or axions. So, the idea that dark matter is stuff, made of particles, seemed plausible to me and like the obvious explanation.

That’s why, just among us, I always thought dark matter is not a particularly interesting problem. Sooner or later they’ll find the particle, give it a name, someone will get a Nobel Prize and that’s that.

But, well, that hasn’t happened. Physicists have tried to measure dark matter particles since the mid 1980s. But no one’s ever seen one. There have been a few anomalies in the data, but these have all gone away upon closer inspection. Instead, what’s happened is that some astrophysical observations have become increasingly difficult to explain with the particle hypothesis. Before I get to the observations that particle dark matter doesn’t explain, I’ll first quickly summarize what it does explain, which are the reasons astrophysicists thought it exists in the first place.

Historically, the first evidence for dark matter came from galaxy clusters. Galaxy clusters are made of a few hundred up to a thousand or so galaxies that are held together by their gravitational pull. They move around each other, and how fast they move depends on the total mass of the cluster. The more mass, the faster the galaxies move. It turns out that galaxies in galaxy clusters move way too fast to explain this with the mass that we can attribute to the visible matter. So Fritz Zwicky conjectured in the 1930s that there must be more matter in galaxy clusters, just that we can’t see it. He called it “dunkle Materie”, dark matter.
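
Zwicky’s argument fits in one line of arithmetic. The numbers below are round, illustrative values of my choosing, roughly typical for a rich cluster, not his original figures:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
Msun = 1.989e30      # solar mass, kg
Mpc = 3.086e22       # megaparsec, m

sigma = 1.0e6        # galaxy velocity dispersion, ~1000 km/s
R = 1.0 * Mpc        # rough cluster radius

# Order-of-magnitude virial estimate: the mass needed to keep galaxies
# moving that fast gravitationally bound within that radius.
M = sigma**2 * R / G
print(f"{M / Msun:.1e} solar masses")   # ~2e14, far more than the visible stars supply
```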

It’s a similar story for galaxies. The velocity of a star which orbits around the center of a galaxy depends on the total mass within this orbit. But the stars in the outer parts of galaxies just orbit too fast around the center. Their velocity should drop with distance to the center of the galaxy, but it doesn’t. Instead, the velocity of the stars becomes approximately constant at large distances from the galactic center. This gives rise to the so-called “flat rotation curves”. Again, you can explain that by saying there’s dark matter in the galaxies.
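
The same point as a short sketch, using Newtonian gravity as an approximation and made-up illustrative masses: for a circular orbit v(r) = sqrt(G·M(r)/r), so a fixed central mass gives a velocity that falls as 1/√r, while a flat rotation curve requires the enclosed mass M(r) to keep growing in proportion to r:

```python
import numpy as np

G = 6.674e-11                                   # m^3 kg^-1 s^-2
kpc = 3.086e19                                  # kiloparsec, m

def v_circular(M_enclosed, r):
    # Circular orbital speed at radius r around enclosed mass M_enclosed (Newtonian).
    return np.sqrt(G * M_enclosed / r)

r = np.array([1.0, 2.0, 4.0, 8.0]) * kpc        # sample radii
M_visible = 2e40                                # ~10^10 solar masses, for illustration

print(v_circular(M_visible, r))                 # falls off as 1/sqrt(r) -- not observed
print(v_circular(M_visible * r / r[0], r))      # M(r) growing like r gives a flat curve
```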

Then there is gravitational lensing. Gravitational lenses are galaxies or galaxy clusters which bend light that comes from an object behind them. This object behind them then appears distorted, and from the amount of distortion you can infer the mass of the lens. Again, the visible matter just isn’t enough to explain the observations.

Then there’s the temperature fluctuations in the cosmic microwave background. These fluctuations are what you see in this skymap. All these spots here are deviations from the average temperature, which is about 2.7 Kelvin. The red spots are a little warmer, the blue spots a little colder than that average. Astrophysicists analyze the microwave-background using its power spectrum, where the vertical axis is roughly the number of spots and the horizontal axis is their size, with the larger sizes on the left and increasingly smaller spots to the right. To explain this power spectrum, again you need dark matter.

Finally, there’s the large scale distribution of galaxies and galaxy clusters and interstellar gas and so on, as you see in the image from this computer simulation. Normal matter alone just does not produce enough structure on short scales to fit the observations, and again, adding dark matter will fix the problem.

So, you see, dark matter was a simple idea that fit to a lot of observations, which is why it was such a good scientific explanation. But that was the status 20 years ago. And what’s happened since then is that observations have piled up that dark matter cannot explain.

For example, particle dark matter predicts that the density in the cores of small galaxies should have a peak, whereas the observations say the distribution is flat. Dark matter also predicts too many small satellite galaxies, these being small galaxies that fly around a larger host. The Milky Way, for example, should have many hundreds, but actually only has a few dozen. Also, these small satellite galaxies are often aligned in planes. Dark matter does not explain why.

We also know from observations that the mass of a galaxy is correlated to the fourth power of the rotation velocity of the outermost stars. This is called the baryonic Tully Fisher relation and it’s just an observational fact. Dark matter does not explain it. It’s a similar issue with Renzo’s rule, that says if you look at the rotation curve of a galaxy, then for every feature in the curve for the visible emission, like a wiggle or bump, there is also a feature in the rotation curve. Again, that’s an observational fact, but it makes absolutely no sense if you think that most of the matter in galaxies is dark matter. The dark matter should remove any correlation between the luminosity and the rotation curves.

Then there are collisions of galaxy clusters at high velocity, like the bullet cluster or the el gordo cluster. These are difficult to explain with particle dark matter, because dark matter creates friction and that makes such high relative velocities incredibly unlikely. Yes, you heard that correctly, the Bullet cluster is a PROBLEM for dark matter, not evidence for it.

And, yes, you can fumble with the computer simulations for dark matter and add more and more parameters to try to get it all right. But that’s no longer a simple explanation, and it’s no longer predictive.

So, if it’s not dark matter then what else could it be? The alternative explanation to particle dark matter is modified gravity. The idea of modified gravity is that we are not missing a source for gravity, but that we have the law of gravity wrong.

Modified gravity solves all the riddles that I just told you about. There’s no friction, so high relative velocities are not a problem. It predicted the Tully-Fisher relation, it explains Renzo’s rule and satellite alignments, it removes the issue with density peaks in galactic cores, and solves the missing satellites problem.

But modified gravity does not do well with the cosmic microwave background and the early universe, and it has some issues with galaxy clusters.

So that looks like a battle between competing hypotheses, and that’s certainly how it’s been portrayed and how most physicists think about it.

But here’s the thing. Purely from the perspective of data, the simplest explanation is that particle dark matter works better in some cases, and modified gravity works better in others. A lot of astrophysicists reply to this: well, if you have dark matter anyway, why also have modified gravity? Answer: because dark matter has difficulties explaining a lot of observations. On its own, it’s no longer parametrically the simplest explanation.

But wait, you may want to say, you can’t just use dark matter for observations a, b, c and modified gravity for observations x, y, z! Well, actually, you can totally do that. Nothing in the scientific method forbids it.

But more importantly, if you look at the mathematics, modified gravity and particle dark matter are actually very similar. Dark matter adds new particles, and modified gravity adds new fields. But because of quantum mechanics, fields are particles and particles are fields, so it’s the same thing really. The difference is the behavior of these fields or particles. It’s the behavior that changes from the scales of galaxies to clusters to filaments and the early universe. So what we need is a kind of phase transition that explains why and under which circumstances the behavior of these additional fields, or particles, changes, so that we need two different sets of equations.

And once you look at it this way, it’s obvious why we haven’t made progress on the question of what dark matter is for such a long time. There are just the wrong people working on it. It’s not a problem you can solve with particle physics and general relativity. It’s a problem for condensed matter physics. That’s the physics of gases, fluids, solids, and so on.

So, the conclusion that I have arrived at is that the distinction between dark matter and modified gravity is a false dichotomy. The answer isn’t either – or, it’s both. The question is just how to combine them.

Google talk online now

The major purpose of the talk was to introduce our SciMeter project which I've been working on for a few years now with Tom Price and Tobias Mistele. But I also talk a bit about my PhD topic and particle physics and how my book came about, so maybe it's interesting for some of you.