Saturday, May 15, 2021

Quantum Computing: Top Players 2021

[This is a transcript of the video embedded below.]


Quantum computing is currently one of the most exciting emergent technologies, and it’s almost certainly a topic that will continue to make headlines in the coming years. But there are now so many companies working on quantum computing that it’s become really confusing. Who is working on what? What are the benefits and disadvantages of each technology? And who are the newcomers to watch out for? That’s what we will talk about today.

Quantum computers use units that are called “quantum-bits” or qubits for short. In contrast to normal bits, which can take on two values, like 0 and 1, a qubit can take on an arbitrary combination of two values. The magic of quantum computing happens when you entangle qubits.

Entanglement is a type of correlation, so it ties qubits together, but it’s a correlation that has no equivalent in the non-quantum world. There are a huge number of ways qubits can be entangled, and that creates a computational advantage, if you want to solve certain mathematical problems.
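To make these two notions a little more concrete, here is a minimal sketch in Python, using plain state vectors on an ordinary computer. This is, of course, just a simulation of the mathematics, not anything a real quantum processor runs.

```python
import numpy as np

# The two basis states of a qubit, analogous to the 0 and 1 of a normal bit
zero = np.array([1, 0], dtype=complex)
one  = np.array([0, 1], dtype=complex)

# A qubit can be in an arbitrary (normalized) combination of the two states
qubit = (zero + one) / np.sqrt(2)

# Two entangled qubits: a Bell state. It cannot be written as a state for the
# first qubit times a state for the second -- that's the correlation with no
# non-quantum equivalent.
bell = (np.kron(zero, zero) + np.kron(one, one)) / np.sqrt(2)

# Probabilities for measurement outcomes come from the squared amplitudes
print(np.abs(qubit) ** 2)  # [0.5, 0.5]
print(np.abs(bell) ** 2)   # [0.5, 0.0, 0.0, 0.5] -- only 00 and 11 ever occur
```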

Quantum computers can help, for example, to solve the Schrödinger equation for complicated molecules. One could use that to find out what properties a material has without having to synthetically produce it. Quantum computers can also solve certain logistic problems or optimize financial systems. So there is real potential for applications.

But quantum computing does not help for *all types of calculations; quantum computers are special purpose machines. They also don’t operate all by themselves: the quantum parts have to be controlled and read out by a conventional computer. You could say that quantum computers are for problem solving what wormholes are for space-travel. They might not bring you everywhere you want to go, but *if they can bring you somewhere, you’ll get there really fast.

What makes quantum computing special is also what makes it challenging. To use quantum computers, you have to maintain the entanglement between the qubits long enough to actually do the calculation. And quantum effects are really, really sensitive even to the smallest disturbances. To be reliable, quantum computers therefore need to operate with several copies of the information, together with an error correction protocol. And to do this error correction, you need more qubits. Estimates say that the number of qubits we need to reach before a quantum computer can do reliable and useful calculations that a conventional computer can’t do is about a million.

The exact number depends on the type of problem you are trying to solve, the algorithm, and the quality of the qubits and so on, but as a rule of thumb, a million is a good benchmark to keep in mind. Below that, quantum computers are mainly of academic interest.
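Just to illustrate where the million comes from, here is a rough back-of-envelope sketch. The numbers in it are my own illustrative assumptions, not figures from any particular roadmap; roughly a thousand physical qubits per error-corrected logical qubit is a commonly quoted ballpark for error correction overhead.

```python
# Rough back-of-envelope for the "million qubits" rule of thumb.
# Both numbers below are illustrative assumptions, not hard figures.
logical_qubits_needed = 1_000   # order of magnitude for a useful calculation
physical_per_logical  = 1_000   # assumed error-correction overhead per logical qubit

physical_qubits_needed = logical_qubits_needed * physical_per_logical
print(f"{physical_qubits_needed:,}")  # 1,000,000
```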

Having said that, let’s now look at what different types of qubits there are, and how far we are on the way to that million.

1. Superconducting Qubits

Superconducting qubits are by far the most widely used, and most advanced type of qubits. They are basically small currents on a chip. The two states of the qubit can be physically realized either by the distribution of the charge, or by the flux of the current.

The big advantage of superconducting qubits is that they can be produced by the same techniques that the electronics industry has used for the past five decades. These qubits are basically microchips, except, here it comes, they have to be cooled to extremely low temperatures, about 10 to 20 millikelvin. One needs these low temperatures to make the circuits superconducting; otherwise you can’t keep them in these neat two qubit states.

Despite the low temperatures, quantum effects in superconducting qubits disappear extremely quickly. This disappearance of quantum effects is measured by the “decoherence time”, which for superconducting qubits is currently a few tens of microseconds.

Superconducting qubits are the technology which is used by Google and IBM and also by a number of smaller companies. In 2019, Google was the first to demonstrate “quantum supremacy”, which means they performed a task that a conventional computer could not have done in a reasonable amount of time. The processor they used for this had 53 qubits. I made a video about this topic specifically, so check this out for more. Google’s supremacy claim was later disputed by IBM. IBM argued that actually the calculation could have been performed within reasonable time on a conventional super-computer, so Google’s claim was somewhat premature. Maybe it was. Or maybe IBM was just annoyed they weren’t first.

IBM’s quantum computers also use superconducting qubits. Their biggest one currently has 65 qubits, and they recently put out a roadmap that projects 1000 qubits by 2023. IBM’s smaller quantum computers, the ones with 5 and 16 qubits, are free to access in the cloud.

The biggest problem for superconducting qubits is the cooling. Beyond a few thousand qubits or so, it will become difficult to fit them all into one cooling system, and that is where this approach will run into trouble.

2. Photonic quantum computing

In photonic quantum computing the qubits are properties related to photons. That may be the presence of a photon itself, or the uncertainty in a particular state of the photon. This approach is pursued for example by the company Xanadu in Toronto. It is also the approach that was used a few months ago by a group of Chinese researchers, who demonstrated quantum supremacy for photonic quantum computing.

The biggest advantage of using photons is that they can be operated at room temperature, and the quantum effects last much longer than for superconducting qubits, typically some milliseconds, but they can go up to some hours in ideal cases. This makes photonic quantum computers much cheaper and easier to handle. The big disadvantage is that the systems become really large really quickly because of the laser guides and optical components. For example, the photonic system of the Chinese group covers a whole tabletop, whereas superconducting circuits are just tiny chips.

The company PsiQuantum however claims they have solved the problem and have found an approach to photonic quantum computing that can be scaled up to a million qubits. Exactly how they want to do that, no one knows, but that’s definitely a development to keep an eye on.

3. Ion traps

In ion traps, the qubits are atoms that are missing some electrons and therefore have a net positive charge. You can then trap these ions in electromagnetic fields, and use lasers to move them around and entangle them. Such ion traps are comparable in size to the qubit chips. They also need to be cooled but not quite as much, “only” to temperatures of a few Kelvin.

The biggest player in trapped ion quantum computing is Honeywell, but the start-up IonQ uses the same approach. One advantage of trapped ion computing is longer coherence times than for superconducting qubits, up to a few minutes. The other advantage is that trapped ions can interact with more neighbors than superconducting qubits.

But ion traps also have disadvantages. Notably, they are slower to react than superconducting qubits, and it’s more difficult to put many traps onto a single chip. However, they’ve kept up with superconducting qubits well.

Honeywell claims to have the best quantum computer in the world by quantum volume. What the heck is quantum volume? It’s a metric, originally introduced by IBM, that combines many different factors like errors, crosstalk and connectivity. Honeywell reports a quantum volume of 64, and according to their website, they too are moving to the cloud next year. IonQ’s latest model contains 32 trapped ions sitting in a chain. They also have a roadmap according to which they expect to reach quantum supremacy by 2025 and to be able to solve interesting problems by 2028.
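For what it’s worth, quantum volume as IBM defines it is, roughly speaking, two to the power of the size of the largest “square” random circuit, as many gate layers deep as it is qubits wide, that the machine can run with acceptable fidelity. So a quantum volume of 64 corresponds to circuits of about six qubits and six layers:

```python
import math

# Quantum volume, as IBM defines it, is 2**n, where n is the number of qubits
# in the largest square random circuit (n qubits, n gate layers) that the
# machine executes with sufficient fidelity.
quantum_volume = 64
n = int(math.log2(quantum_volume))
print(n)  # 6 -- roughly six qubits, six layers
```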

4. D-Wave

Now what about D-Wave? D-Wave is so far the only company that sells commercially available quantum computers, and they also use superconducting qubits. Their 2020 model has a stunning 5600 qubits.

However, the D-Wave computers can’t be compared to the approaches pursued by Google and IBM because D-Wave uses a completely different computation strategy. D-Wave computers can be used for solving certain optimization problems that are defined by the design of the machine, whereas the technology developed by Google and IBM aims at creating a programmable computer that can be applied to all kinds of different problems. Both are interesting, but it’s comparing apples and oranges.

5. Topological quantum computing

Topological quantum computing is the wild card. There isn’t currently any workable machine that uses this technique. But the idea is great: In topological quantum computers, information would be stored in conserved properties of “quasi-particles”, which are collective motions of particles. The great thing about this is that this information would be very robust to decoherence.

According to Microsoft, “the upside is enormous and there is practically no downside.” In 2018, their director of quantum computing business development told the BBC that Microsoft would have a “commercially relevant quantum computer within five years.” However, Microsoft had a big setback in February when they had to retract a paper that claimed to demonstrate the existence of the quasi-particles they hoped to use. So much for “no downside”.

6. The far field

These were the biggest players, but there are two newcomers that are worth keeping an eye on.

The first is semiconducting qubits. They are very similar to superconducting qubits, but here the qubits are either the spin or the charge of single electrons. The advantage is that the temperature doesn’t need to be quite as low: instead of 10 millikelvin, one “only” has to reach a few Kelvin. This approach is presently pursued by researchers at TU Delft in the Netherlands, supported by Intel.

The second is nitrogen-vacancy systems, where the qubits are places in the structure of a carbon crystal where a carbon atom is replaced with nitrogen. The great advantage of these is that they’re both small and can be operated at room temperature. This approach is pursued by the Hanson lab at QuTech, some people at MIT, and a startup in Australia called Quantum Brilliance.

So far there hasn’t been any demonstration of quantum computation for these two approaches, but they could become very promising.

So, that’s the status of quantum computing in early 2021, and I hope this video will help you to make sense of the next quantum computing headlines, which are certain to come.

I want to thank Tanuj Kumar for help with this video.

Saturday, May 08, 2021

What did Einstein mean by “spooky action at a distance”?

[This is a transcript of the video embedded below.]


Quantum mechanics is weird – I am sure you’ve read that somewhere. And why is it weird? Oh, it’s because it’s got that “spooky action at a distance”, doesn’t it? Einstein said that. Yes, that guy again. But what is this spooky action at a distance? What did Einstein really say? And what does it mean? That’s what we’ll talk about today.

The vast majority of sources on the internet claim that Einstein’s “spooky action at a distance” referred to entanglement. Wikipedia for example. And here is an example from Science Magazine. You will also find lots of videos on YouTube that say the same thing: Einstein’s spooky action at a distance was entanglement. But I do not think that’s what Einstein meant.

Let’s look at what Einstein actually said. The origin of the phrase “spooky action at a distance” is a letter that Einstein wrote to Max Born in March 1947. In this letter, Einstein explains to Born why he does not believe that quantum mechanics really describes how the world works.

He begins by assuring Born that he knows perfectly well that quantum mechanics is very successful: “I understand of course that the statistical formalism which you pioneered captures a significant truth.” But then he goes on to explain his problem. Einstein writes:
“I cannot seriously believe [in quantum mechanics] because the theory is incompatible with the requirement that physics should represent reality in space and time without spooky action at a distance...”

There it is, the spooky action at a distance. But just exactly what was Einstein referring to? Before we get into this, I have to quickly remind you how quantum mechanics works.

In quantum mechanics, everything is described by a complex-valued wave-function usually denoted Psi. From the wave-function we calculate probabilities for measurement outcomes, for example the probability to find a particle at a particular place. We do this by taking the absolute square of the wave-function.
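Written as a formula, the rule from the previous sentence reads

```latex
P(x) = |\Psi(x)|^2 ,
```

where P(x) is the probability to find the particle at position x.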

But we cannot observe the wave-function itself. We only observe the outcome of the measurement. This means most importantly that if we make a measurement for which the outcome was not one hundred percent certain, then we have to suddenly “update” the wave-function. That’s because the moment we measure the particle, we know it’s either there or it isn’t. And this update is instantaneous. It happens at the same time everywhere, seemingly faster than the speed of light. And I think *that’s what Einstein was worried about because he had explained that already twenty years earlier, in the discussion of the 1927 Solvay conference.

In 1927, Einstein used the following example. Suppose you direct a beam of electrons at a screen with a tiny hole and ask what happens with a single electron. The wave-function of the electron will diffract on the hole, which means it will spread symmetrically into all directions. Then you measure it at a certain distance from the hole. The electron has the same probability to have gone in any direction. But if you measure it, you will suddenly find it in one particular point.

Einstein argues: “The interpretation, according to which [the square of the wave-function] expresses the probability that this particle is found at a given point, assumes an entirely peculiar mechanism of action at a distance, which prevents the wave continuously distributed in space from producing an action in two places on the screen.”

What he is saying is that somehow the wave-function on the left side of the screen must know that the particle was actually detected on the other side of the screen. In 1927, he did not call this action at a distance “spooky” but “peculiar”, but I think he was referring to the same thing.

However, in Einstein’s electron argument it’s rather unclear what is acting on what, because there is only one particle. This is why Einstein, together with Podolsky and Rosen, later looked at the measurement for two particles that are entangled, which led to their famous 1935 EPR paper. So this is why entanglement comes in: because you need at least two particles to show that the measurement on one particle can act on the other particle. But entanglement itself is unproblematic. It’s just a type of correlation, and correlations can be non-local without there being any “action” at a distance.

To see what I mean, forget all about quantum mechanics for a moment. Suppose I have two socks that are identical, except that one is red and the other one blue. I put them in two identical envelopes and ship one to you. The moment you open the envelope and see that your sock is red, you know that my sock is blue. That’s because the information about the color in the envelopes is correlated, and this correlation can span large distances.

There isn’t any spooky action going on though because that correlation was created locally. Such correlations exist everywhere and are created all the time. Imagine for example you bounce a ball off a wall and it comes back. That transfers momentum to the wall. You can’t see how much, but you know that the total momentum is conserved, so the momentum of the wall is now correlated with that of the ball.

Entanglement is a correlation like this; it’s just that you can only create it with quantum particles. Suppose you have a particle with total spin zero that decays into two particles that can each have spin either plus or minus one. One particle goes left, the other one right. You don’t know which particle has which spin, but you know that the total spin is conserved. So either the particle going to the right had spin plus one and the one going left minus one, or the other way round.
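Written out schematically, and leaving out a possible relative phase because it doesn’t matter for the argument, the state of the pair with total spin zero is

```latex
|\Psi\rangle = \frac{1}{\sqrt{2}} \Big( |{+1}\rangle_{\text{left}} \, |{-1}\rangle_{\text{right}} + |{-1}\rangle_{\text{left}} \, |{+1}\rangle_{\text{right}} \Big) .
```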

According to quantum mechanics, before you have measured one of the particles, both possibilities exist. You can then measure the correlations between the spins of both particles with two detectors on the left and right side. It turns out that the entanglement correlations can in certain circumstances be stronger than non-quantum correlations. That’s what makes them so interesting. But there’s no spooky action in the correlations themselves. These correlations were created locally. What Einstein worried about instead is that once you measure the particle on one side, the wave-function for the particle on the other side changes.

But isn’t this the same with the two socks? Before you open the envelope the probability was 50:50, and then when you open it, it jumps to 100:0. But there’s no spooky action going on there. It’s just that the probability was a statement about what you knew, and not about what really was the case. Really, which sock was in which envelope was already decided at the time I sent them.

Yes, that explains the case for the socks. But in quantum mechanics, that explanation does not work. If you think that it really was already decided which spin went into which direction when the particles were emitted, that will not create sufficiently strong correlations. It’s just incompatible with observations. Einstein did not know that. These experiments were done only after he died. But he knew that using entangled states you can demonstrate whether spooky action is real, or not.
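To give an idea of what “sufficiently strong” means, here is a small numerical sketch in Python. It uses the standard spin-one-half (qubit) version of the two-particle setup, not the spin-one numbers from the example above, and computes the CHSH combination of measured correlations: for spins that were locally decided at emission this combination cannot exceed 2, while the entangled state reaches about 2.83.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin measurement along a direction at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Total-spin-zero (singlet) state of two spin-1/2 particles: (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation of the two spin measurements at angles a (left) and b (right)."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

# CHSH combination with the standard choice of measurement angles
a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # ~2.83, above the bound of 2 for locally pre-decided outcomes
```

That gap between 2 and roughly 2.83 is what the later experiments actually tested.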

I will admit that I’m a little defensive of good old Albert Einstein because I feel that a lot of people too cheerfully declare that Einstein was wrong about quantum mechanics. But if you read what Einstein actually wrote, he was exceedingly careful in expressing himself, and yet most physicists dismissed his concerns. In April 1948, he repeats his argument to Born. He writes that a measurement on one part of the wave-function is a “physical intervention” and that “such an intervention cannot immediately influence the physical reality in a distant part of space.” Einstein concludes:
“For this reason I tend to believe that quantum mechanics is an incomplete and indirect description of reality which will later be replaced by a complete and direct one.”

So, Einstein did not think that quantum mechanics was wrong. He thought it was incomplete, that something fundamental was missing in it. And in my reading, the term “spooky action at a distance” referred to the measurement update, not to entanglement.

Saturday, May 01, 2021

Dark Matter: The Situation Has Changed

[This is a transcript of the video embedded below]


Hi everybody. We haven’t talked about dark matter for some time. Which is why today I want to tell you how my opinion about dark matter has changed over the past twenty years or so. In particular, I want to discuss whether dark matter is made of particles or if not, what else it could be. Let’s get started.

First things first, dark matter is the hypothetical stuff that astrophysicists think makes up eighty percent of the matter in the universe, or 24 percent of the combined matter-energy. Dark matter should not be confused with dark energy. These are two entirely different things. Dark energy is what makes the universe expand faster, dark matter is what makes galaxies rotate faster, though that’s not the only thing dark matter does, as we’ll see in a moment.

But what is dark matter? 20 years ago I thought dark matter was most likely made of some kind of particle that we hadn’t measured so far. Because, well, I’m a particle physicist by training. And if a particle can explain an observation, why look any further? Also, at the time there were quite a few proposals for new particles that could fit the data, like some supersymmetric particles or axions. So, the idea that dark matter is stuff, made of particles, seemed plausible to me, and like the obvious explanation.

That’s why, just among us, I always thought dark matter is not a particularly interesting problem. Sooner or later they’ll find the particle, give it a name, someone will get a Nobel Prize and that’s that.

But, well, that hasn’t happened. Physicists have tried to measure dark matter particles since the mid 1980s. But no one’s ever seen one. There have been a few anomalies in the data, but these have all gone away upon closer inspection. Instead, what’s happened is that some astrophysical observations have become increasingly difficult to explain with the particle hypothesis. Before I get to the observations that particle dark matter doesn’t explain, I’ll first quickly summarize what it does explain, which are the reasons astrophysicists thought it exists in the first place.

Historically, the first evidence for dark matter came from galaxy clusters. Galaxy clusters are made of a few hundred up to a thousand or so galaxies that are held together by their gravitational pull. They move around each other, and how fast they move depends on the total mass of the cluster. The more mass, the faster the galaxies move. Turns out that galaxies in galaxy clusters move way too fast to explain this with the mass that we can attribute to the visible matter. So Fritz Zwicky conjectured in the 1930s that there must be more matter in galaxy clusters, just that we can’t see it. He called it “dunkle Materie”, dark matter.

It’s a similar story for galaxies. The velocity of a star which orbits around the center of a galaxy depends on the total mass within this orbit. But the stars in the outer parts of galaxies just orbit too fast around the center. Their velocity should drop with distance from the center of the galaxy, but it doesn’t. Instead, the velocity of the stars becomes approximately constant far away from the galactic center. This gives rise to the so-called “flat rotation curves”. Again, you can explain that by saying there’s dark matter in the galaxies.
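For a star on a roughly circular orbit this is just Newtonian gravity: the orbital velocity at radius r is set by the mass M(r) enclosed within the orbit,

```latex
v(r) = \sqrt{\frac{G\,M(r)}{r}} ,
```

so a rotation curve that stays flat at large r requires M(r) to keep growing roughly linearly with r, far beyond where the visible stars and gas give out.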

Then there is gravitational lensing. Gravitational lenses are galaxies or galaxy clusters that bend light coming from an object behind them. This object then appears distorted, and from the amount of distortion you can infer the mass of the lens. Again, the visible matter just isn’t enough to explain the observations.

Then there are the temperature fluctuations in the cosmic microwave background. These fluctuations are what you see in this skymap. All these spots here are deviations from the average temperature, which is about 2.7 Kelvin. The red spots are a little warmer, the blue spots a little colder than that average. Astrophysicists analyze the microwave background using its power spectrum, where the vertical axis is roughly the number of spots and the horizontal axis is their size, with the larger sizes on the left and increasingly smaller spots to the right. To explain this power spectrum, again you need dark matter.

Finally, there’s the large scale distribution of galaxies and galaxy clusters and interstellar gas and so on, as you see in the image from this computer simulation. Normal matter alone just does not produce enough structure on short scales to fit the observations, and again, adding dark matter will fix the problem.

So, you see, dark matter was a simple idea that fit a lot of observations, which is why it was such a good scientific explanation. But that was the status 20 years ago. And what’s happened since then is that observations have piled up that dark matter cannot explain.

For example, particle dark matter predicts that the density in the cores of small galaxies should peak, whereas the observations say the distribution is flat. Dark matter also predicts too many small satellite galaxies; these are small galaxies that fly around a larger host. The Milky Way, for example, should have many hundreds, but actually only has a few dozen. Also, these small satellite galaxies are often aligned in planes. Dark matter does not explain why.

We also know from observations that the mass of a galaxy is correlated with the fourth power of the rotation velocity of the outermost stars. This is called the baryonic Tully-Fisher relation, and it’s just an observational fact. Dark matter does not explain it. It’s a similar issue with Renzo’s rule, which says that if you look at the rotation curve of a galaxy, then for every feature in the curve of the visible emission, like a wiggle or a bump, there is a corresponding feature in the rotation curve. Again, that’s an observational fact, but it makes absolutely no sense if you think that most of the matter in galaxies is dark matter. The dark matter should remove any correlation between the luminosity and the rotation curves.
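Written out, the baryonic Tully-Fisher relation mentioned above is simply

```latex
M_b \propto v^4 ,
```

where M_b is the mass in normal (baryonic) matter and v is the rotation velocity of the outermost stars.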

Then there are collisions of galaxy clusters at high velocity, like the Bullet Cluster or the El Gordo cluster. These are difficult to explain with particle dark matter, because dark matter creates friction and that makes such high relative velocities incredibly unlikely. Yes, you heard that correctly: the Bullet Cluster is a PROBLEM for dark matter, not evidence for it.

And, yes, you can fiddle with the computer simulations for dark matter and add more and more parameters to try to get it all right. But that’s no longer a simple explanation, and it’s no longer predictive.

So, if it’s not dark matter then what else could it be? The alternative explanation to particle dark matter is modified gravity. The idea of modified gravity is that we are not missing a source for gravity, but that we have the law of gravity wrong.

Modified gravity solves all the riddles that I just told you about. There’s no friction, so high relative velocities are not a problem. It predicted the Tully-Fisher relation, it explains Renzo’s rule and satellite alignments, it removes the issue with density peaks in galactic cores, and solves the missing satellites problem.

But modified gravity does not do well with the cosmic microwave background and the early universe, and it has some issues with galaxy clusters.

So that looks like a battle between competing hypotheses, and that’s certainly how it’s been portrayed and how most physicists think about it.

But here’s the thing. Purely from the perspective of data, the simplest explanation is that particle dark matter works better in some cases, and modified gravity better in others. A lot of astrophysicists reply to this: well, if you have dark matter anyway, why also have modified gravity? Answer: Because dark matter has difficulties explaining a lot of observations. On its own, it’s no longer parametrically the simplest explanation.

But wait, you may want to say, you can’t just use dark matter for observations a, b, c and modified gravity for observations x, y, z! Well actually, you can totally do that. There is nothing in the scientific method that forbids it.

But more importantly, if you look at the mathematics, modified gravity and particle dark matter are actually very similar. Dark matter adds new particles, and modified gravity adds new fields. But because of quantum mechanics, fields are particles and particles are fields, so it’s the same thing really. The difference is the behavior of these fields or particles. It’s the behavior that changes from the scales of galaxies to clusters to filaments and the early universe. So what we need is a kind of phase transition that explains why and under which circumstances the behavior of these additional fields, or particles, changes, so that we need two different sets of equations.

And once you look at it this way, it’s obvious why we have not made progress on the question of what dark matter is for such a long time. It’s just that the wrong people are working on it. It’s not a problem you can solve with particle physics and general relativity. It’s a problem for condensed matter physics. That’s the physics of gases, fluids, solids, and so on.

So, the conclusion that I have arrived at is that the distinction between dark matter and modified gravity is a false dichotomy. The answer isn’t either – or, it’s both. The question is just how to combine them.

Google talk online now

The major purpose of the talk was to introduce our SciMeter project which I've been working on for a few years now with Tom Price and Tobias Mistele. But I also talk a bit about my PhD topic and particle physics and how my book came about, so maybe it's interesting for some of you.