Saturday, August 17, 2019

How we know that Einstein's General Relativity cannot be quite right

Today I want to explain how we know that the way Einstein thought about gravity cannot be correct.



Einstein’s idea was that gravity is not a force, but really an effect caused by the curvature of space and time. Matter curves space-time in its vicinity, and this curvature in turn affects how matter moves. This means that, according to Einstein, space and time are responsive. They deform in the presence of matter, and not only matter, but really all types of energy, including pressure and momentum flux and so on.

Einstein called his theory “General Relativity” because it’s a generalization of Special Relativity. Both are based on “observer-independence”, that is the idea that the laws of nature should not depend on the motion of an observer. The difference between General Relativity and Special Relativity is that in Special Relativity space-time is flat, like a sheet of paper, while in General Relativity it can be curved, like the often-named rubber sheet.

General Relativity is an extremely well-confirmed theory. It predicts that light rays bend around massive objects, like the sun, which we have observed. The same effect also gives rise to gravitational lensing, which we have also observed. General Relativity further predicts that the universe should expand, which it does. It predicts that time runs more slowly in gravitational potentials, which is correct. General Relativity predicts black holes, and it predicts just how the black hole shadow looks, which is what we have observed. It also predicts gravitational waves, which we have observed. And the list goes on.

So, there is no doubt that General Relativity works extremely well. But we already know that it cannot ultimately be the correct theory for space and time. It is an approximation that works in many circumstances, but fails in others.

We know this because General Relativity does not fit together with another extremely well confirmed theory, that is quantum mechanics. It’s one of these problems that’s easy to explain but extremely difficult to solve.

Here is what goes wrong if you want to combine gravity and quantum mechanics. We know experimentally that particles have some strange quantum properties. They obey the uncertainty principle and they can do things like being in two places at once. Concretely, think about an electron going through a double slit. Quantum mechanics tells us that the particle goes through both slits.

Now, electrons have a mass and masses generate a gravitational pull by bending space-time. This brings up the question: where does the gravitational pull go if the electron travels through both slits at the same time? You would expect the gravitational pull to also go to two places at the same time. But this cannot be the case in General Relativity, because General Relativity is not a quantum theory.
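One standard way to make this tension concrete, not spelled out above, is so-called semiclassical gravity, in which the classical curvature is sourced by the quantum expectation value of the energy (in units with c = 1):

$$G_{\mu\nu} = 8\pi G \,\langle \Psi|\hat T_{\mu\nu}|\Psi\rangle\,.$$

If the state describes the electron going through both slits, the right-hand side puts half of the gravitational pull at each slit; and once the wave-function collapses to one slit, the source jumps abruptly, which the geometry on the left-hand side cannot consistently follow.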

To solve this problem, we have to understand the quantum properties of gravity. We need what physicists call a theory of quantum gravity. And since Einstein taught us that gravity is really about the curvature of space and time, what we need is a theory for the quantum properties of space and time.

There are two other reasons why we know that General Relativity can’t be quite right. Besides the double-slit problem, there is the issue with singularities in General Relativity. Singularities are places where both the curvature and the energy-density of matter become infinitely large; at least that’s what General Relativity predicts. This happens, for example, inside of black holes and at the beginning of the universe.

In any other theory that we have, singularities are a sign that the theory breaks down and has to be replaced by a more fundamental theory. And we think the same has to be the case in General Relativity, where the more fundamental theory to replace it is quantum gravity.

The third reason we think gravity must be quantized is the trouble with information loss in black holes. If we combine quantum theory with general relativity but without quantizing gravity, then we find that black holes slowly shrink by emitting radiation. This was first derived by Stephen Hawking in the 1970s and so this black hole radiation is also called Hawking radiation.
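For reference, the standard result for the temperature of this radiation, for a non-rotating black hole of mass M, is

$$T_H = \frac{\hbar c^3}{8\pi G M k_B}\,,$$

so the temperature rises as the black hole loses mass, and the evaporation speeds up toward the end.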

Now, it seems that black holes can entirely vanish by emitting this radiation. Problem is, the radiation itself is entirely random and does not carry any information. So when a black hole is entirely gone and all you have left is the radiation, you do not know what formed the black hole. Such a process is fundamentally irreversible and therefore incompatible with quantum theory. It just does not fit together. A lot of physicists think that to solve this problem we need a theory of quantum gravity.

So this is how we know that General Relativity must be replaced by a theory of quantum gravity. This problem has been known since the 1930s. Since then, there have been many attempts to solve the problem. I will tell you about this some other time, so don’t forget to subscribe.

Tuesday, August 13, 2019

The Problem with Quantum Measurements


Have you heard that particle physicists want a larger collider because there is supposedly something funny about the Higgs boson? They call it the “Hierarchy Problem”: the Planck mass, which determines the strength of gravity, is about 17 orders of magnitude larger than the mass of the Higgs boson.

What is problematic about this, you ask? Nothing. Why do particle physicists think it’s problematic? Because they have been told as students it’s problematic. So now they want $20 billion to solve a problem that doesn’t exist.

Let us then look at an actual problem, namely that we don’t know how a measurement happens in quantum mechanics. The discussion of this problem today happens largely among philosophers; physicists pay pretty much no attention to it. Why not, you ask? Because they have been told as students that the problem doesn’t exist.

But there is a light at the end of the tunnel and the light is… you. Yes, you. Because I know that you are just the right person to both understand and solve the measurement problem. So let’s get you started.

Quantum mechanics is today mostly taught in what is known as the Copenhagen Interpretation and it works as follows. Particles are described by a mathematical object called the “wave-function,” usually denoted Ψ (“Psi”). The wave-function is sometimes sharply peaked and looks much like a particle, sometimes it’s spread out and looks more like a wave. Ψ is basically the embodiment of particle-wave duality.

The wave-function moves according to the Schrödinger equation. In its general form, this equation is compatible with Einstein’s Special Relativity, and it can be run both forward and backward in time. If I give you complete information about a system at any one time – i.e., if I tell you the “state” of the system – you can use the Schrödinger equation to calculate the state at all earlier and all later times. This makes the Schrödinger equation what we call a “deterministic” equation.

But the Schrödinger equation alone does not predict what we observe. If you use only the Schrödinger equation to calculate what happens when a particle interacts with a detector, you find that the two undergo a process called “decoherence.” Decoherence wipes out quantum-typical behavior, like dead-and-alive cats and such. What you have left then is a probability distribution for a measurement outcome (what is known as a “mixed state”). You have, say, a 50% chance that the particle hits the left side of the screen. And this, importantly, is not a prediction for a collection of particles or repeated measurements. We are talking about one measurement on one particle.

The moment you measure the particle, however, you know with 100% probability what you have got; in our example you now know which side of the screen the particle is on. This sudden jump of the probability is often referred to as the “collapse” of the wave-function, and the Schrödinger equation does not predict it. The Copenhagen Interpretation, therefore, requires an additional assumption called the “Measurement Postulate.” The Measurement Postulate tells you that the probability of whatever you have measured must be updated to 100%.

Now, the collapse together with the Schrödinger equation describes what we observe. But the detector is of course also made of particles and therefore itself obeys the Schrödinger equation. So if quantum mechanics is fundamental, we should be able to calculate what happens during measurement using the Schrödinger equation alone. We should not need a second postulate.

The measurement problem, then, is that the collapse of the wave-function is incompatible with the Schrödinger equation. It isn’t merely that we do not know how to derive it from the Schrödinger equation, it’s that it actually contradicts the Schrödinger equation. The easiest way to see this is to note that the Schrödinger equation is linear while the measurement process is non-linear. This strongly suggests that the measurement is an effective description of some underlying non-linear process, something we haven’t yet figured out.
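Here is the linearity argument in a minimal form; the pointer states $|D_L\rangle$, $|D_R\rangle$ are schematic labels of my own, not notation used above. If the Schrödinger evolution $U$ correlates each prepared state with a definite detector reading,

$$U\big(|L\rangle|D_0\rangle\big) = |L\rangle|D_L\rangle\,, \qquad U\big(|R\rangle|D_0\rangle\big) = |R\rangle|D_R\rangle\,,$$

then linearity forces

$$U\Big(\tfrac{1}{\sqrt{2}}\big(|L\rangle+|R\rangle\big)|D_0\rangle\Big) = \tfrac{1}{\sqrt{2}}\big(|L\rangle|D_L\rangle+|R\rangle|D_R\rangle\big)\,,$$

a superposition of both outcomes, never a single outcome with its probability updated to 100%. No linear evolution can produce the collapse.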

There is another problem. As an instantaneous process, wave-function collapse doesn’t fit together with the speed of light limit in Special Relativity. This is the “spooky action” that irked Einstein so much about quantum mechanics.

This incompatibility with Special Relativity, however, has (by assumption) no observable consequences, so you can try and convince yourself it’s philosophically permissible (and good luck with that). But the problem comes back to haunt you when you ask what happens with the mass (and energy) of a particle when its wave-function collapses. You’ll notice then that the instantaneous jump screws up General Relativity. (And for this quantum gravitational effects shouldn’t play a role, so mumbling “string theory” doesn’t help.) This issue is still unobservable in practice, all right, but now it’s observable in principle.

One way to deal with the measurement problem is to argue that the wave-function does not describe a real object, but only encodes knowledge, and that probabilities should not be interpreted as frequencies of occurrence, but instead as statements of our confidence. This is what’s known as a “Psi-epistemic” interpretation of quantum mechanics, as opposed to the “Psi-ontic” ones in which the wave-function is a real thing.

The trouble with Psi-epistemic interpretations is that the moment you refer to something like “knowledge” you have to tell me what you mean by “knowledge”, who or what has this “knowledge,” and how they obtain “knowledge.” Personally, I would also really like to know what this knowledge is supposedly about, but if you insist I’ll keep my mouth shut. Even so, for all we presently know, “knowledge” is not fundamental, but emergent. Referring to knowledge in the postulates of your theory, therefore, is incompatible with reductionism. This means if you like Psi-epistemic interpretations, you will have to tell me just why and when reductionism breaks down or, alternatively, tell me how to derive Psi from a more fundamental law.

None of the existing interpretations and modifications of quantum mechanics really solve the problem, which I can go through in detail some other time. For now let me just say that either way you turn the pieces, they won’t fit together.

So, forget about particle colliders; grab a pen and get started.

---

Note: If the comment count exceeds 200, you have to click on “Load More” at the bottom of the page to see recent comments. This is also why the link in the recent comment widget does not work. Please do not complain to me about this shitfuckery. Blogger is hosted by Google. Please direct complaints to their forum.

Saturday, August 10, 2019

Book Review: “The Secret Life of Science” by Jeremy Baumberg

The Secret Life of Science: How It Really Works and Why It Matters
Jeremy Baumberg
Princeton University Press (16 Mar. 2018)

The most remarkable thing about science is that most scientists have no idea how it works. With his 2018 book “The Secret Life of Science,” Jeremy Baumberg aims to change this.

The book is thoroughly researched and well-organized. In the first chapter, Baumberg starts by explaining what science is. He goes about this pragmatically and without getting lost in irrelevant philosophical discussions. In this chapter, he also introduces the terms “simplifier science” and “constructor science” to replace “basic” and “applied” research.

Baumberg suggests thinking of science as an ecosystem with multiple species and flows of nutrients that need to be balanced, an analogy that he comes back to throughout the book. This first chapter is followed by a brief chapter about the motivations to do science and its societal relevance.

In the next chapters, Baumberg then focuses on various aspects of a scientist’s work-life and explains how these are organized in practice: scientific publishing, information sharing in the community (conferences and so on), science communication (PR, science journalism), funding, and hiring. In this, Baumberg makes an effort to distinguish between research in academia and in business, and in many cases he also points out national differences.

The book finishes with a chapter about the future of science and Baumberg’s own suggestions for improvement. Except for the very last chapter, the author does not draw attention to existing problems with the current organization of science, though these will be obvious to most readers.

Baumberg is a physicist by training and, according to the book flap, works in nanotechnology and photonics. Like most physicists who do not work in particle physics, he is well aware that particle physics is in deep trouble. He writes:
“Knowing the mind of god” and “The theory of everything” are brands currently attached to particle physics. Yet they have become less powerful with time, attracting an air of liability, perhaps reaching that of a “toxic brand.” That the science involved now finds it hard to shake off precisely this layer of values attached to them shows how sticky they are.
The book contains a lot of concrete information, for example about salaries and grant success rates. I have generally found Baumberg’s analysis to be spot on, for example when he writes “Science spending seems to rise until it becomes noticed and then stops.” Or
Because this competition [for research grants] is so well defined as a clear race for money it can become the raison d’etre for scientists’ existence, rather than just what is needed to develop resources to actually do science.
On counting citations, he likewise remarks aptly:
“[The h-index rewards] wide collaborators rather than lone specialists, rewards fields that cite more, and rewards those who always stay at the trendy edge of all research.”
Unfortunately, I have to add that the book is not particularly engaging to read. Some of the chapters could have been shorter, Baumberg overuses the metaphor of the ecosystem, and the figures are not helpful. To give you an idea of why I say this, I challenge you to make sense of this illustration:


In summary, Baumberg’s is a useful book though it’s somewhat tedious to read. Nevertheless, I think everyone who wants to understand how science works in reality should read it. It’s time we get over the idea that science somehow magically self-corrects. Science is the way we organize knowledge discovery, and its success depends on us paying attention to how it is organized.

Wednesday, August 07, 2019

10 differences between artificial intelligence and human intelligence

Today I want to tell you what is artificial about artificial intelligence. There is, of course, the obvious, which is that the brain is warm, wet, and wiggly, while a computer is not. But more importantly, there are structural differences between human and artificial intelligence, which I will get to in a moment.


Before we can talk about this though, I have to briefly tell you what “artificial intelligence” refers to.

What goes by the name “artificial intelligence” today is mostly neural networks. A neural network is a computer algorithm that imitates certain functions of the human brain. It contains virtual “neurons” that are arranged in “layers” which are connected with each other. The neurons pass on information and thereby perform calculations, much like neurons in the human brain pass on information and thereby perform calculations.

In the neural net, the neurons are just numbers in the code, typically with values between 0 and 1. The connections between the neurons also have numbers associated with them, and those are called “weights”. These weights tell you how much the information from one layer matters for the next layer.

The weights of the connections are essentially the free parameters of the network; the values of the neurons are then computed from the input and the weights. By training the network you want to find those values of the parameters that minimize a certain function, called the “loss function”.

So it’s really an optimization problem that neural nets solve. In this optimization, the magic of neural nets happens through what is known as backpropagation. This means that if the net gives you a result that is not particularly good, you go back and adjust the weights of the connections in the direction that reduces the error. This is how the net can “learn” from failure. Again, this plasticity mimics that of the human brain.
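To make this concrete, here is a minimal sketch of a fully connected net trained by backpropagation, written in plain Python with NumPy. The layer sizes, learning rate, and XOR toy data are illustrative choices of mine, not taken from the text or the video:

```python
# Minimal sketch: a tiny fully connected network trained by backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem, inputs and target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases are the free parameters that training adjusts.
W1 = rng.normal(scale=1.0, size=(2, 4))   # input layer -> hidden layer
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden layer -> output layer
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(5000):
    # Forward pass: neuron values are computed layer by layer.
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations, values in (0, 1)
    out = sigmoid(h @ W2 + b2)     # network output

    # Loss function: mean squared error between output and target.
    loss = np.mean((out - y) ** 2)

    # Backpropagation: push the error backwards to get the gradient of the
    # loss with respect to each weight, then nudge the weights downhill.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = d_out @ W2.T * h * (1 - h)
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)

    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print("final loss:", loss)
print("predictions:", out.round(2).ravel())
```

The point to take away is that the “learning” here is nothing but gradient descent on the loss function.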

For a great introduction to neural nets, I can recommend this 20-minute video by 3Blue1Brown.

Having said this, here are the key differences between artificial and real intelligence.

1. Form and Function

A neural net is software running on a computer. The “neurons” of an artificial intelligence are not physical. They are encoded as bits on hard disks or silicon chips, and their physical structure looks nothing like that of actual neurons. In the human brain, in contrast, form and function go together.

2. Size

The human brain has about 100 billion neurons. Current neural nets typically have a few hundred or so.

3. Connectivity

In a neural net each layer is usually fully connected to the previous and next layer. But the brain doesn’t really have layers. It instead relies on a lot of pre-defined structure. Not all regions of the human brain are equally connected and the regions are specialized for certain purposes.

4. Power Consumption

The human brain is dramatically more energy-efficient than any existing artificial intelligence. The brain uses around 20 Watts, which is comparable to what a standard laptop uses today. But with that power the brain operates many orders of magnitude more neurons than any neural net running on such a laptop.

5. Architecture

In a neural network, the layers are neatly ordered and are addressed one after the other. The human brain, on the other hand, does a lot of parallel processing and not in any particular order.

6. Activation Potential

In the real brain neurons either fire or don’t. In a neural network the firing is mimicked by continuous values instead, so the artificial neurons can smoothly slide from off to on, which real neurons can’t.

7. Speed

The human brain is much, much slower than any artificially intelligent system. A standard computer performs some 10 billion operations per second. Real neurons, on the other hand, fire at a frequency of at most a thousand times per second.

8. Learning Technique

Neural networks learn by producing output, and if this output scores poorly on the loss function, the net responds by adjusting the weights of its connections. No one knows in detail how humans learn, but that’s not how it works.

9. Structure

A neural net starts from scratch every time. The human brain, on the other hand, has a lot of structure already wired into its connectivity, and it draws on models which have proved useful during evolution.

10. Precision

The human brain is much more noisy and less precise than a neural net running on a computer. This means the brain basically cannot run the same learning mechanism as a neural net and it’s probably using an entirely different mechanism.

A consequence of these differences is that artificial intelligence today needs a lot of training with a lot of carefully prepared data, which is very unlike how human intelligence works. Neural nets do not build models of the world; instead they learn to classify patterns, and this pattern recognition can fail with only small changes. A famous example is that you can add small amounts of noise to an image, amounts so small that your eyes will not see a difference, but an artificially intelligent system might be fooled into thinking a turtle is a rifle.

Neural networks are also presently not good at generalizing what they have learned from one situation to the next, and their success very strongly depends on defining just the correct “loss function”. If you don’t think about that loss function carefully enough, you will end up optimizing something you didn’t want. Like this simulated self-driving car trained to move at constant high speed, which learned to rapidly spin in a circle.

But neural networks excel at some things, such as classifying images or extrapolating data that doesn’t have any well-understood trend. And maybe the point of artificial intelligence is not to make it all that similar to natural intelligence. After all, the most useful machines we have, like cars or planes, are useful exactly because they do not mimic nature. Instead, we may want to build machines specialized in tasks we are not good at.

Tuesday, August 06, 2019

Special Breakthrough Prize awarded for Supergravity

Breakthrough Prize Trophy.
[Image: Breakthrough Prize]
The Breakthrough Prize is an initiative founded by billionaire Yuri Milner, now funded by a group of rich people which includes, besides Milner himself, Sergey Brin, Anne Wojcicki, and Mark Zuckerberg. The Prize is awarded in three different categories: Mathematics, Fundamental Physics, and Life Sciences. Today, a Special Breakthrough Prize in Fundamental Physics has been awarded to Sergio Ferrara, Dan Freedman, and Peter van Nieuwenhuizen for the invention of supergravity in 1976. The Prize of 3 million US$ will be split among the winners.

Interest in supergravity arose in the 1970s when physicists began to search for a theory of everything that would combine all four known fundamental forces into one. By then, string theory had been shown to require supersymmetry, a hypothetical new symmetry which implies that all the already known particles have – so far undiscovered – partner particles. Supersymmetry, however, initially only worked for the three non-gravitational forces, that is, the electromagnetic force and the strong and weak nuclear forces. With supergravity, gravity could be included too, thereby bringing physicists one step closer to their goal of unifying all the interactions.

In supergravity, the gravitational interaction is associated with a messenger particle – the graviton – and this graviton has a supersymmetric partner particle called the “gravitino”. There are several types of supergravity theories, because there are different ways of realizing the symmetry. Supergravity in the context of string theory always requires additional dimensions of space, which have not been seen. The gravitational theory one obtains this way is also not the same as Einstein’s General Relativity, because one gets additional fields that can be difficult to bring into agreement with observation. (For more about the problems with string theory, please watch my video.)

To date, we have no evidence that supergravity is a correct description of nature. Supergravity may one day become useful to calculate properties of certain materials, but so far this research direction has not led to much.

The works by Ferrara, Freedman, and van Nieuwenhuizen have arguably been influential, if by influential you mean that papers have been written about them. Supergravity and supersymmetry are mathematically very fertile ideas. They lend themselves to calculations that otherwise would not be possible, and that is how, in the past four decades, physicists have successfully built a beautiful, supersymmetric math-castle on nothing but thin air.

Awarding a scientific prize, especially one accompanied by so much publicity, for an idea that has no evidence speaking for it, sends the message that in the foundations of physics contact with observation is no longer relevant. If you want to be successful in my research area, it seems, what matters is that a large number of people follow in your footsteps, not that your work is useful to explain natural phenomena. This Special Prize doesn’t only signal to the public that the foundations of physics are no longer part of science, it also discourages people in the field from taking on the hard questions. Congratulations.

Update Aug 7th: Corrected the first paragraph. The earlier version incorrectly stated that each of the recipients gets $3 million.

Thursday, August 01, 2019

Automated Discovery

[Image: BuySellGraphic.com]

In 1986, Don Swanson from the University of Chicago discovered a discovery.

Swanson (who passed away in 2012) was an information scientist and a pioneer in literature analysis. In the 1980s, he studied the distribution of references in scientific papers and found that, on occasion, studies on two separate research topics would have few references between them, but would refer to a common, third, set of papers. He conjectured that this might indicate so-far unknown links between the separate research topics.

Indeed, Swanson found a concrete example for such a link. Already in the 1980s, scientists knew that certain types of fish oils benefit blood composition and blood vessels. So there was one body of literature linking circulatory health to fish oil. They had also found, in another line of research, that patients with Raynaud’s disease do better if their circulatory health improves. This led Swanson to conjecture that patients with Raynaud’s disease could benefit from fish oil. In 1993, a clinical trial demonstrated that this hypothesis was correct.

You may find this rather obvious. I would agree it’s not a groundbreaking insight, but this isn’t the point. The point is that the scientific community missed this obvious insight. It was right there, in front of their eyes, but no one noticed.

30 years after Swanson’s seminal paper, we have more data than ever about scientific publications. And just the other week, Nature published a new example of what you can do with it.

In the new paper, a group of researchers from California studied the materials science literature. They did not, like Swanson, look for relations between research studies by using citations, but they did a (much more computationally intensive) word-analysis of paper abstracts (not unlike the one we did in our paper). This analysis serves to identify the most relevant words associated with a manuscript, and to find relations between these words.

Previous studies have shown that words, treated as vectors in a high-dimensional space, can be added and subtracted. The most famous example is that the combination “King – Man + Woman” gives a new vector that turns out to be associated with the word “Queen”. In the new paper, the authors report finding similar examples in the materials science literature, such as “ferromagnetic − NiFe + IrMn”, which adds up to “antiferromagnetic”.

Even more remarkable, though, they noticed that a number of materials whose names are close to the word “thermoelectric” were never actually mentioned together with the word “thermoelectric” in any paper’s abstract. This suggests, so the authors claim, that these materials may be thermoelectric, but so far no one has noticed.
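As a toy illustration of both operations, here is a sketch with made-up four-dimensional word vectors; the numbers and the candidate material names are invented placeholders, not data from the paper:

```python
# Toy sketch of word-vector arithmetic and similarity ranking.
import numpy as np

# Hypothetical word vectors; the numbers here are invented placeholders.
vectors = {
    "ferromagnetic":     np.array([ 0.9,  0.1,  0.3, 0.0]),
    "antiferromagnetic": np.array([ 0.8, -0.7,  0.3, 0.1]),
    "NiFe":              np.array([ 0.5,  0.6,  0.2, 0.0]),
    "IrMn":              np.array([ 0.4, -0.2,  0.2, 0.1]),
    "thermoelectric":    np.array([ 0.1,  0.0,  0.9, 0.4]),
    "CandidateA":        np.array([ 0.2,  0.1,  0.8, 0.5]),
    "CandidateB":        np.array([ 0.7,  0.5, -0.1, 0.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# 1. Word arithmetic: "ferromagnetic - NiFe + IrMn" should land near
#    "antiferromagnetic" if the embedding captures the relation.
query = vectors["ferromagnetic"] - vectors["NiFe"] + vectors["IrMn"]
best = max((w for w in vectors if w not in {"ferromagnetic", "NiFe", "IrMn"}),
           key=lambda w: cosine(query, vectors[w]))
print("closest word to the combination:", best)

# 2. Ranking candidate materials by their similarity to "thermoelectric",
#    which is how unnoticed thermoelectrics could be flagged.
for name in ("CandidateA", "CandidateB"):
    print(name, round(cosine(vectors[name], vectors["thermoelectric"]), 3))
```

A real analysis would of course learn these vectors from the abstracts themselves, with word2vec-style training, rather than writing them down by hand.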

They have tested how well this works by making back-dated predictions for the discovery of new thermoelectric materials, using only papers published before a cut-off year between 2001 and 2018. For each of these historical datasets, they used the relations between words in the abstracts to predict the 50 materials most likely to be identified as thermoelectric in the future. And it worked! In the five years after the historical data-cut, the identified materials were on average eight times more likely to be studied as thermoelectrics than were randomly chosen unstudied materials. The authors have now also made real predictions for new thermoelectric materials. We will see in the coming years how those pan out.

I think that analyses like this have a lot of potential. Indeed, one of the things that keeps me up at night is the possibility that we might already have all the knowledge necessary to make progress in the foundations of physics, we just haven’t connected the dots. Smart tools to help scientists decide what papers to pay attention to could greatly aid knowledge discovery.

Sunday, July 28, 2019

The Forgotten Solution: Superdeterminism

Welcome to the renaissance of quantum mechanics. It took more than a hundred years, but physicists finally woke up, looked quantum mechanics in the face – and realized with bewilderment they barely know the theory they’ve been married to for so long. Gone are the days of “shut up and calculate”; the foundations of quantum mechanics are en vogue again.

It is not a spontaneous acknowledgement of philosophy that sparked physicists’ rediscovered desire; their sudden search for meaning is driven by technological advances.

With quantum cryptography a reality and quantum computing on the horizon, questions once believed ephemeral are now the bread and butter of the research worker. When I was a student, my prof thought it questionable that violations of Bell’s inequality would ever be demonstrated convincingly. Today you can take that as given. We have also seen delayed-choice experiments, marveled over quantum teleportation, witnessed decoherence in action, tracked individual quantum jumps, and cheered when Zeilinger entangled photons over hundreds of kilometers of distance. Well, some of us, anyway.

But while physicists know how to use the mathematics of quantum mechanics to make stunningly accurate predictions, just what this math is about has remained unclear. This is why physicists currently have several “interpretations” of quantum mechanics.

I find the term “interpretations” somewhat unfortunate. That’s because some ideas that go by “interpretation” are really theories which differ from quantum mechanics, and these differences may one day become observable. Collapse models, for example, explicitly add a process for wave-function collapse to quantum measurement. Pilot wave theories, likewise, can result in deviations from quantum mechanics in certain circumstances, though those have not been observed. At least not yet.

A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. But I agree with the philosopher Tim Maudlin that the measurement problem in quantum mechanics is a real problem – a problem of inconsistency – and requires a solution.

And how to solve it? Collapse models solve the measurement problem, but they are hard to combine with quantum field theory which for me is a deal-breaker. Pilot wave theories also solve it, but they are non-local, which makes my hair stand up for much the same reason. This is why I think all these approaches are on the wrong track and instead side with superdeterminism.

But before I tell you what’s super about superdeterminism, I have to briefly explain the all-important theorem by John Stewart Bell. It says, in a nutshell, that correlations between certain observables are bounded in every theory which fulfills certain assumptions. These assumptions are what you would expect of a deterministic, non-quantum theory – statistical locality and statistical independence (together often referred to as “Bell locality”) – and should, most importantly, be fulfilled by any classical theory that attempts to explain quantum behavior by adding “hidden variables” to particles.

Experiments show that the bound of Bell’s theorem can be violated. This means the correct theory must violate at least one of the theorem’s assumptions. Quantum mechanics is indeterministic and violates statistical locality. (Which, I should warn you, has little to do with what particle physicists usually mean by “locality.”) A deterministic theory that doesn’t fulfill the other assumption, that of statistical independence, is called superdeterministic. Note that this leaves open whether or not a superdeterministic theory is statistically local.
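For reference, the best-known version of this bound is the CHSH inequality, not spelled out in the post: with $E(a,b)$ the correlation of measurement outcomes for detector settings $a$ and $b$,

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b')\,, \qquad |S|\le 2\,,$$

holds in any theory satisfying both assumptions, while quantum mechanics allows values up to $2\sqrt{2}\approx 2.8$, which is what experiments find.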

Unfortunately, superdeterminism has a bad reputation, so bad that most students never get to hear of it. If mentioned at all, it is commonly dismissed as a “conspiracy theory.” Several philosophers have declared superdeterminism means abandoning scientific methodology entirely. To see where this objection comes from – and why it’s wrong – we have to unwrap this idea of statistical independence.

Statistical independence enters Bell’s theorem in two ways. One is that the detectors’ settings are independent of each other, the other one that the settings are independent of the state you want to measure. If you don’t have statistical independence, you are sacrificing the experimentalist’s freedom to choose what to measure. And if you do that, you can come up with deterministic hidden variable explanations that result in the same measurement outcomes as quantum mechanics.

I find superdeterminism interesting because the most obvious class of hidden variables are the degrees of freedom of the detector. And the detector isn’t statistically independent of itself, so any such theory necessarily violates statistical independence. It is also, in a trivial sense, non-linear, simply because the detector’s response to a superposition of prepared states is not the same as the superposition of its responses to the individual states. Since any solution of the measurement problem requires a non-linear time evolution, that seems a good opportunity to make progress.

Now, a lot of people discard superdeterminism simply because they prefer to believe in free will, which is where I think the biggest resistance to superdeterminism comes from. Bad enough that belief isn’t a scientific reason; worse, it rests on a misunderstanding of what is going on. It’s not like superdeterminism somehow prevents an experimentalist from turning a knob. Rather, it’s that the detectors’ states aren’t independent of the system one tries to measure. There just isn’t any setting the experimentalist could twiddle their knob to that would remove the correlation.

Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s unfalsifiable but useless, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified.

But that would be jumping to conclusions. How much detail you need to know about the initial state to make predictions depends on your model. And without writing down a model, there is really no way to tell whether it does or doesn’t live up to scientific methodology. It’s here where the trouble begins.

While philosophers on occasion discuss superdeterminism on a conceptual basis, there is little to no work on actual models. Besides me and my postdoc, I count Gerard ‘t Hooft and Tim Palmer. The former gentleman, however, seems to dislike quantum mechanics and would rather have a classical hidden variables theory, and the latter wants to discretize state space. I don’t see the point in either. I’ll be happy if the result solves the measurement problem and is still local the same way that quantum field theories are local, i.e., as non-local as quantum mechanics always is.*

The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to.


Recommended reading:
  • The significance of measurement independence for Bell inequalities and locality
    Michael J. W. Hall
    arXiv:1511.00729
  • Bell's Theorem: Two Neglected Solutions
    Louis Vervoort
    Foundations of Physics 43, 769–791 (2013), arXiv:1203.6587

* Rewrote this paragraph to better summarize Palmer’s approach.

Wednesday, July 24, 2019

Science Shrugs

Boris Johnson
The Michelson-Morley experiment of 1887 disproved the ether, a hypothetical medium that permeates the universe. By using an interferometer with perpendicular arms, Michelson and Morley demonstrated that the speed of light is the same regardless of how the direction of the light is oriented relative to our motion through the supposed ether. Their null result set the stage for Einstein’s theory of Special Relativity and is often lauded for heralding the new age of physics. At least that’s how the story goes. In reality, it was more complicated.

Thing is, Morley himself was not convinced of the results of his seminal experiment. Together with a new collaborator, Dayton Miller, he repeated the measurement a few years later. The two again got a negative result.

This seems to have settled the case for Morley, but Miller went on to build larger interferometers to achieve better precision.

Indeed, in the 1920s, Miller reported seeing an effect consistent with Earth passing through the ether! Though the velocity he inferred from the data didn’t match expectations, he remained convinced he had measured a partial drag caused by the ether.

Miller’s detection could never be reproduced by other experiments. It is today widely considered to be wrong, but just what exactly he measured has remained unclear.

And Miller’s isn’t the only measurement mystery.

In the 1960s, Joseph Weber built the first gravitational wave detectors. At a conference in 1969, he announced that he had measured two dozen gravitational wave events, and swiftly published his results in Physical Review Letters.

It is clear now that Weber did not measure gravitational waves – those are much harder to detect than anyone anticipated back then. So what then did he measure?

Some have argued that Weber’s equipment was faulty, his data analysis flawed, or that he simply succumbed to wishful thinking. But just what happened? We may never know.

Then, about 40 years ago, physicists at the Society for Heavy Ion Research (GSI) in Germany bombarded uranium nuclei with curium. They saw an excess emission of positrons that they couldn’t explain. In a 1983 paper, the group wrote that the observation “cannot be associated with established dynamic mechanisms of positron production” and that known physics is an “unlikely match to the data at a confidence level of better than 98%”.

This observation was never reproduced. We still have no idea if this was a real effect, caused by an odd experimental setup, or whether it was a statistical fluke.

Around the same time, in 1975, we saw the first detection of a magnetic monopole. Magnetic monopoles are hypothetical particles that should have been created in the early universe if the fundamental forces were once unified. The event in question was a track left in a detector sent to the upper atmosphere with a balloon. Some have suspected that the supposed monopole track was instead caused by a nuclear decay. But really, with only one event, who can tell? In 1982, a second monopole event was reported. It remained the last.*

Today we have a similar situation with the ANITA events. ANITA is the Antarctic Impulsive Transient Antenna, and its collaboration announced last year (to much press attention) that they have measured two upward-going cosmic ray events at high energy. Trouble is, according to the currently established theories, such events shouldn’t happen.

ANITA’s two events are statistically significant, and I have no doubt they actually measured something. But it’s so little data there’s a high risk this will remain yet another oddity, eternally unresolved. Though physicists certainly try to get something out of it.

In all of these cases it’s quite possible the observations had distinct causes, just that we do not know the circumstances sufficiently well and do not have enough data to make a sober inference. Science is limited in this regard: It cannot reliably explain rare events that do not reproduce, and in these cases we are left with speculation and story-telling.

How did the world end up with Donald Trump as President of the United States and Boris Johnson as Prime Minister of the United Kingdom? In politics as in physics, some things defy explanation.


* Rewrote this paragraph after readers pointed out the second reference, see comments.

Tuesday, July 23, 2019

When they ask us [I’ve been singing again]


Prompted by last week’s conference (sorry, I meant “unconference”) which saw a lot of climate-related talks, climate modeling, geoengineering, biodiversity, and so on. Wrote this on the plane back home. Loosely inspired by this and this. Enjoy, while it lasts.

Friday, July 19, 2019

M is for Maggot, N is for Nonsense

wormy apple
[image: pinclipart.com]
Imagine you bite into an apple and find a beheaded maggot. Yuck! But it could have been worse. Had you found only half a maggot, you’d have eaten more of it. Worse still, you may have found only a quarter of a maggot, or a hundredth, or a thousandth. Indeed, if you take the limit of maggot size to zero, the worst possible case must be biting into an apple and not finding a maggot.

Wait, what? That doesn’t make sense. Certainly a maggot-free apple is not maximally yucky. Where did our math fail us?

It didn’t, really. The beheaded maggot is an example of a discontinuous or “singular” limit and is originally due to Michael Berry*. You know you have a discontinuous limit if the function whose limit you are taking (that’s the increasing “yuck factor” of the maggot) does not approach the value of the function at the limit (unyucky).

A less fruity example is taking the y-th power of a number x and sending y to infinity. If x is any positive number smaller than 1, the limit is zero. If x is equal to one, every value of y gives back 1. If x is larger than one, the limit is infinity. If you plot the limit as a function of x, it’s discontinuous at x = 1.
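Written out, the limit as a function of $x$ is

$$\lim_{y\to\infty} x^y \;=\; \begin{cases} 0 & \text{for } 0<x<1\,,\\ 1 & \text{for } x=1\,,\\ \infty & \text{for } x>1\,,\end{cases}$$

which jumps at $x=1$.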

Such singular limits are not just mathematical curiosities. We have them in physics too.

For example in thermodynamics, when we take the limit in which the number of constituents of a system becomes infinitely large, we see phase transitions where some quantities, such as the derivative of specific heat, become discontinuous. This is, of course, strictly speaking an unrealistic limit because the number of constituents may become very large, but never actually infinite. However, the limit isn’t always unrealistic.

Take the example of massive gravity. In general relativity, gravitational waves propagate with the speed of light and the particle associated with them – the graviton – is massless. You can modify general relativity so that the graviton has a mass. However, if you then let the graviton mass go to zero, you do not get back general relativity. The reason is that a massive graviton has additional polarizations, and their effect does not go away when you send the mass to zero**.
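To spell this out (the van Dam-Veltman-Zakharov discontinuity, my gloss on the paragraph above): the amplitude for graviton exchange between two conserved sources $T_{\mu\nu}$ and $T'_{\mu\nu}$ is

$$\mathcal{A}_{m=0}\propto \frac{1}{k^2}\Big(T_{\mu\nu}T'^{\mu\nu}-\tfrac12\,T\,T'\Big)\,,\qquad \mathcal{A}_{m\neq0}\propto \frac{1}{k^2+m^2}\Big(T_{\mu\nu}T'^{\mu\nu}-\tfrac13\,T\,T'\Big)\,,$$

where $T=T^\mu{}_\mu$. The coefficient $1/3$ does not become $1/2$ as $m\to 0$, because the extra polarizations keep coupling to the trace of the source.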

The same issue appears if you have massless fields that can propagate in additional dimensions of space. This too gives rise to additional polarizations which don’t necessarily disappear even if you take the size of the extra dimensions to zero.

Discontinuous limits are often a sign that you have forgotten to keep track of global, as opposed to local, properties. If you, for example, take the radius of a sphere to infinity, the curvature will go to zero, but the result is not an infinitely extended plane. For this reason, there are certain solutions in general relativity that will not approximate each other as you think they should. In a space with a negative cosmological constant, for example, black hole horizons can be infinitely extended planes. But these solutions no longer exist if the cosmological constant vanishes. In this case, black hole horizons have to be spherical.

Why am I telling you that? Because discontinuous limits should make you skeptical about any supposed insights gained into quantum gravity by using calculations in Anti de Sitter space.

Anti-de Sitter (AdS) space, to remind you, is a space with a negative cosmological constant. It is popular among string theorists because they know how to make calculations in this space. Trouble is, the cosmological constant in our universe is positive. And there is no reason to think the limit of taking the cosmological constant from negative values to positive values is continuous. Indeed, it almost certainly is not, because the very reason that string theorists prefer calculations in AdS is that this space provides additional structure that exists for any negative value of the cosmological constant, and suddenly vanishes if the value is zero.

String theorists usually justify working with a negative cosmological constant by arguing it can teach us something about quantum gravity in general. That may be so or it may not be so. The case with the negative cosmological constant resembles that of finding a piece of a maggot in your apple. I find it hard to swallow.


* ht Tim Palmer
** there are ways to fix this limiting behavior so that you do get back general relativity.

Wednesday, July 10, 2019

Away Note

I will be away for a week to attend SciFoo 2019. Please expect blogging to be sparse and comments to be stuck in the queue longer than usual.

Tuesday, July 09, 2019

Why the multiverse is religion, not science.

This is the 5th and last part in my series to explain why the multiverse is not a scientific hypothesis. The other parts are: 1. Does the Higgs-boson exist? 2. Do I exist? 3. Does God exist? and 4. The multiverse hypothesis.

I put together these videos because I am frustrated that scientists dismiss the issue unthinkingly. This is not a polemical argument and it’s not meant as an insult. But believing in the multiverse is logically equivalent to believing in god, therefore it’s religion, not science.

To see why, let me pull together what I laid out in my previous videos. Scientists say that something exists if it is useful to describe observations. By “useful” I mean it is simpler than just collecting data. You can postulate the existence of things that are not useful to describe observations, such as gods, but this is no longer science.

Universes besides our own are logically equivalent to gods. They are unobservable by assumption, hence they can exist only in a religious sense. You can believe in them if you want to, but they are not part of science.

I know that this is not a particularly remarkable argument. But physicists seem to have a hard time following it, especially those who happen to work on the multiverse. Therefore, let me sort out some common misunderstandings.

First. The major misunderstanding is that I am saying the multiverse does not exist. But this is not what I am saying. I am saying science does not tell us anything about universes we cannot observe, therefore claiming they exist is not science.

Second. They will argue the multiverse is simple. Most physicists who are in favor of the multiverse say it’s scientific because it’s simpler to assume that all universes of a certain type exist than it is to assume that only one of them exists.

That’s a questionable claim. But more importantly, it’s beside the point. The simplest assumption is no assumption. And you do not need to make any statement about the existence of the multiverse to explain our observations. Therefore, science says, you should not. As I said, it’s the same with the multiverse as with god. It’s an unnecessary assumption. Not wrong, but superfluous.

You also do not need to postulate the existence of our universe, of course. No scientist ever does that. That would be totally ridiculous.

Third. They’ll claim the existence of the multiverse is a prediction of their theory.

It’s not. That’s just wrong. Just because you can write down a theory for something, doesn’t mean it exists*. We determine that something exists, in the scientific sense, if it is useful to describe observation. That’s exactly what the multiverse is not.

Fourth. But then you are saying that discussing what’s inside a black hole is also not science.

That’s equally wrong. Other universes are not science because you cannot observe them. But you can totally observe what’s inside a black hole. You just cannot come back and tell us about it. Besides, no one really thinks that the inside of a black hole will remain inaccessible forever. For these reasons, the situation is entirely different for black holes. If it were correct that the inside of black holes cannot be observed, this would indeed mean that postulating its existence is not scientific.

Fifth. But there are types of multiverses that have observable consequences.

That’s right. Physicists have come up with certain types of multiverses that can be falsified. The problem with these ideas is conceptually entirely different. It’s that there is no reason to think we live in such multiverses to begin with. The requirement that a hypothesis must be falsifiable is certainly necessary to make the hypothesis scientific, but not sufficient. I previously explained this here.

To sum it up. The multiverse is certainly an interesting idea and it attracts a lot of public attention. There is nothing wrong with that in principle. Entertainment has a value and so has thought-stimulating discussion. But do not confuse the multiverse with science, because it is not.



* Revised this sentence after two readers misunderstood the previous version.

Update: The video now has German and Italian subtitles. To see those, click on "CC" in the YouTube toolbar. Choose language under settings/gear icon.

Sunday, July 07, 2019

Because Science Matters

[Photo: Michael Sentef]

Another day, another lecture. This time I am in Hamburg, at DESY, Germany’s major particle physics center.

My history with DESY is an odd one, which is none, despite the fact that fifteen years ago I was awarded Germany’s most prestigious young researcher grant, the Emmy-Noether fellowship, to work in Hamburg on particle physics phenomenology. The Emmy-Noether fellowship is a five-year grant that not only pays the principal investigator but also comes with salaries for a small group. It’s basically the jackpot of German postdoc funding.

I declined it.

I hadn’t thought of this for a long time, but here I am in Hamburg, finally getting to see what my life might have looked like in that parallel world where I became a particle physicist. It looks like I’ll be late.

The taxi driver circles around a hotel and insists in a heavy Polish accent that this must be the right place because “there’s nothing after that”. To make his point he waves at trees and construction areas that stretch further up the road.

I finally manage to convince him that, really, I’m not looking for a hotel. A kilometer later he pulls into an anonymous driveway where a man in uniform asks him to stop. “See, this wrong!” the taxi-man squeaks and attempts to turn around when I spot a familiar sight: The cover of my book, on a poster, next to the entry.

“I’m supposed to give that talk,” I tell the man in uniform, “At two pm.” He looks at his watch. It’s a quarter past two.

I arrive at the lecture hall 20 minutes late, mostly due to a delayed train, but also, I note with a somewhat guilty conscience, because I decided not to stay for the night. With too much traveling in my life already, I have become one of these terrible people who arrive just before their talk and vanish directly afterwards. I used to call it the “In and Out Lecture”, inspired by an American fast food chain with the laxative name “In-N-Out Burger”. A friend of mine more aptly dubbed it “Blitzkrieg Seminar.”

The room is well-filled. I am glad to see the audience was kept in good mood with drinks and snacks. Within minutes, I am wired up and ready to speak about the troubles in the foundations of physics.

Shortly before my arrival, I learned that some particle physicists had complained I was even invited. This isn’t the first time this has happened. On another occasion some tried to un-invite me, albeit eventually unsuccessfully. They tend to be disappointed when it turns out I’m not a fire-spewing dragon but a middle-aged mother of two who just happens to know a lot about theory development in high energy physics.

Most of them, especially the experimentalists, don’t even find my argument all that disagreeable – at least at first sight. Relying on beauty has not historically worked well in physics, and it isn’t presently working, no doubt about this. To make progress, then, we should take a clue from history and focus on resolving inconsistencies in our present description of nature, either inconsistencies between theory and experiment, or internal inconsistencies. So far, they’re usually with me.

Where my argument becomes disagreeable is when I draw consequences. There is no inconsistency to be resolved in the energy range that a next larger collider could reach. It would measure some constants to better precision, all right, but that’s not worth $20 billion.

Those 20 billion dollars, by the way, are merely the estimated construction cost for CERN’s planned Future Circular Collider (FCC). They do not include operation cost. The facility would run for about 25 years. Operation costs of the current machine, the Large Hadron Collider (LHC) are about $1 billion per year already, and with the FCC, expenses for electricity and staff are bound to increase. That means the total cost for the FCC easily exceeds $40 billion.

That’s a lot of money. And the measurements this next larger collider could make would deliver information that won’t be useful in the next 100 or maybe 5000 years. Now is not the right time for this.

At the risk of oversimplifying an 80,000-word message, we have better things to do. Figure out what’s with dark matter, quantum gravity, or the measurement problem. There are breakthroughs waiting to be made. But we have to be careful with the next steps or risk delaying progress by further decades, if not centuries.

After my talk, in the question session, an elderly man goes on about his personal theory for something. He will later tell me about his website and complain that the scientific mainstream is ignoring his breakthrough insights.

Another elderly man insists that beauty is a good guide to the development of new natural laws. To support his point he quotes Steven Weinberg, because Weinberg, you see, likes string theory. In other words, it’s exactly the type of argument I just explained is both wrong and in the way of progress.

Another man, this one not quite as old, stands up to deliver a speech about how important particle colliders are. Several people applaud.

Next up, an agitated woman reprimands me for a typographical error on a slide. More applause. She goes on to explain the LHC has taught us a lot about inflation, a hypothetical phase of exponential expansion in the early universe. I refuse to comment. There is, I feel, no way to reason with someone who really believes this.

But hers is, I remind myself, the community I would have been part of had I accepted the fellowship 15 years ago. Now I wonder, had I taken this path, would I be that woman today, upset to learn the boat is sinking? Would I share her group’s narrative that made me their enemy? Would I, too, defend spending more and more money on larger and larger machines with less and less societal relevance?

I like to think I would not, but my reading about group psychology tells me otherwise. I would probably fight the outsider just like they do.

Another woman identifies as experimentalist and asks me why I am against diversifying experimental efforts. I am not, of course. But economic reality is that we cannot do everything we want to do. We have to make decisions. And costs are a relevant factor.

Finally, another man asks me what experiments physicists should do. As usual when I get this question, I refuse to answer it. This is not my call to make. I cannot replace tens of thousands of experts. I can only beg them to please remember that scientists are human, too, and human judgement is affected by group affiliation. Someone, somewhere, has to take the first step to prevent social bias from influencing scientific decisions. Let it be particle physicists.

A second round of polite applause and I am done here. A few people come to shake my hand. The room empties. Someone hands me a travel reimbursement form and calls me a taxi. Soon I am on the way back to the city center and on to six more hours on the train.

I check my email and see I will have to catch up on work over the weekend, again. Speaking about problems with the current organization of science doesn’t help my own research, and it’s no fun either. It’s no fun to hurt people, destroy hopes, and advocate decisions that would make their lives harder. And it’s no fun to have mud slung at me in return.

And so, as always, these trips end with me asking myself, why?, why am I doing this?

And as always, the answer I give myself is the same. Because it matters we get this right. Because progress matters. Because science matters.

Thanks for asking, I am fine. Keep it coming.

Saturday, July 06, 2019

No, we will not open a portal to a parallel universe

Colbert’s legendary quadruple facepalm.
The nutty physics story of the day comes to us thanks to Michael Brooks who reports for New Scientist that “We’ve seen signs of a mirror-image universe that is touching our own.” This headline has since spread to The Independent, according to which scientists are “attempting to open portal to a parallel universe” and the International Business Times, which wants you to believe that “Scientists Build A Portal To Find A Parallel Universe”.

Needless to say, we have not seen signs of a mirror universe, and we are not building portals to parallel universes. And if we did, trust me, you wouldn’t hear about it from New Scientist. To first approximation it is safe to assume that whatever you read in New Scientist is either not new or not science, or both.

This story is a case of both, neither new nor science. It is really – once again – about hypothetical particles that physicists have invented just because. In this case it’s particles which are exact copies of the ones that we already know, except for their handedness. These mirror-particles* do not interact with the normal particles, which is supposedly why we haven’t measured them so far. (You find instructions for how to invent particles yourself in my book, Chapter 9 in the section “Laws Like Sausages”.)

The idea of mirror-particles has been around since at least the 1960s. It’s not particularly popular among physicists, because what little we know about dark matter tells us exactly that it does not behave the same way as normal matter. So, to make mirror dark matter fit the data, you have to invent some reason for why, in the end, it is not a mirror copy of normal matter.

And then there is the problem that if the mirror matter really doesn’t interact with our normal matter you cannot measure it. So, if you want to get an experimental search funded, you have to postulate that it does interact. Why? Because otherwise you can’t measure it. Sounds like circular reasoning? That’s what it is.

Now, once you have postulated that the hypothetical particles may interact in a way that makes them measurable, you can set up an experiment and try to actually measure them. It is such a measurement that this story is about.

Concretely, it seems to be about the experiment laid out in this paper:
    New Search for Mirror Neutrons at HFIR
    arXiv:1710.00767 [hep-ex]
The authors propose to search for evidence of neutrons oscillating into mirror neutrons.

Now, look, this is exactly the type of ill-motivated experiment that I complained about the other day. Can you do this experiment? Sure. Will it help you solve any of the open problems in the foundations of physics? Almost certainly not. Why not? Because we have no reason to think that these particular particles exist and interact with normal matter in just the way necessary to measure them.

It is not a coincidence that we now see so many of these small-scale experiments; it is a strategic decision of the community. Indeed, you find this strategy quoted in the paper as justification: The 2014 Report of the Particle Physics Project Prioritization Panel (P5) stressed the importance of considering “every feasible avenue” to look for new types of dark matter particles.

Add to this that, some months ago, the Department of Energy announced a plan to provide $24 million for the development of new projects to study dark matter, which will undoubtedly fuel physicists’ enthusiasm for thinking up even more new particles.

This, folks, is only the beginning.

I cannot stress enough how idiotic this so-called “strategy” is. You will see million after million vanish into searches for particles invented simply because you can look for them.

If you do not understand why I say this is insanity and not proper science, please read my article in which I explain that falsifiability is necessary but not sufficient to make a hypothesis scientific. This strategy is based on a basic misunderstanding of the philosophy of science. It is an institutionalized form of motivated reasoning, a mistake that will cost taxpayers tens of millions.

The only good thing about this strategy is that hopefully the media will soon get tired of writing about each and every little lab’s search for non-existent particles.


* Not to be confused with supersymmetric partner particles. Different story entirely.

Thursday, July 04, 2019

Physicists still perplexed I ask for reasons to finance their research

Chad Orzel is a physics prof whose research is primarily in atomic physics. He also blogs next door and is a good-humored and eminently reasonable guy, so I hope he will forgive me if I pick on him a little.

Two weeks ago I complained about the large number of dark matter experiments that hunt for hypothetical particles, particles invented just because you can hunt for them. Chad’s response to this is “Physicists Gotta Physics” and “I don't know what else Hossenfelder expects the physicists involved to do.”

To which I wish to answer: If you don’t know anything sensible to do with your research funds, why should we pay you? Less flippantly:
Dear Chad,

I find it remarkable how many researchers think they are entitled to tax-money. I am saddened to see you are one of them. Really, as a science communicator you should know better. “We have to do something, so let us do anything” does not convince me, and I doubt it will convince anyone else. Try harder.
But I admit it is unfair to pick on Chad in particular, because his reaction to my blogpost showcases a problem I encounter with experimentalists all the time. They seem to not understand just how badly motivated the theories are that they use to justify their work.

By and large, experimentalists like to think that looking for those particles is business as usual, similar to how we looked for neutrinos half a century ago, or how we looked for the heavier quarks in the 1990s.

But this isn’t so. These new inventions are of considerably lower quality. We had theoretically sound reasons to think that neutrinos and heavy quarks exist, but there are no similarly sound reasons to think that these new dark matter particles should exist.

Philosophers would call the models strongly underdetermined. I would call them wishful thinking. They’re little more than guesses. Doing these experiments, therefore, is playing roulette on an infinitely large table: You will lose with probability 1. It is almost certain to waste time and money. And the big tragedy is that with some thinking, we could invest resources much better.

Orzel complains that I am exaggerating how specific these searches are, but let us look at some of those.

Like this one about using the Aharonov-Bohm effect. It proposes to search for a hypothetical particle called the dark photon which may mix with the actual photon and may form a condensate which may have excitations that may form magnetic dipoles which you may then detect. Or, more likely, just doesn’t exist.

Or let us look at this other paper, which tests for space-time varying massive scalar fields that are non-universally coupled to standard model particles. Or, more likely, don’t exist.

Some want to look for medium mass weakly coupled particles that scatter off electrons. But we have no reason to think that dark matter particles are of that mass, couple with that strength, or couple to electrons to begin with.

Some want to look for something called the invisible axion, which is a very light particle that couples to photons. But we have no reason to think that dark matter couples to photons.

Some want to look for domain walls, or weird types of nuclear matter, or whole “hidden sectors”, and again we have no reason to think these exist.

Fact is, we presently have no reason to think that dark matter particles affect normal matter in any other way than by the gravitational force. Indeed, we don’t even have reason to think it is a particle.

Now, as I previously said I don’t mind if experimentalists want to play with their gadgets (at least not unless their toys cost billions, don’t get me started). What I disapprove of is if experimentalists use theoretical fantasies to motivate their research. Why? Think about it for a moment before reading on.

Done thinking? The problem is that it creates a feedback cycle.

It works like this: Theorists get funding because they write about hypothetical particles that experiments can look for. Experimentalists get funding to search for the hypothetical particles, which encourages more theorists to write papers about those particles, which makes the particles appear more interesting, which gives rise to more experiments. Rinse and repeat.

The result is a lot of papers. It looks really productive, but there is no reason to think this cycle will converge on a theory that is an actually correct description of nature. More likely, it will converge on a theory that can be eternally amended so that one needs ever better experiments to find the particles. Which is basically what has been going on for the past 40 years.

So, Orzel asks perplexed, does Hossenfelder actually expect scientists to think before they spend money? I actually do.

The foundations of physics have seen 40 years of stagnation. Why? It is clearly neither a lack of theories nor a lack of experiments, because we have seen plenty of both. Before asking for money to continue this madness, everyone in the field should think about what is going wrong and what to do about it.

Wednesday, July 03, 2019

Job opening: Database specialist/Software engineer

I am looking for a database specialist to help with our SciMeter project. The candidate should be comfortable with python, SQL, and linux, and have experience with backend web programming. Text mining skills would come in handy.

This is paid contract work which has to be completed by the end of the calendar year. So, if you are interested in the job, you should have some time at hand in the coming months. You will be working with our team of three people. It does not matter to us where you are located as long as you communicate with us in a timely manner.

If you are interested in the job, please send a brief CV and a documentation of prior completed work to hossi@fias.uni-frankfurt.de with the subject "SciMeter Job 2019". I will explain details of the assignment and payment to interested candidates by email.

Friday, June 28, 2019

Quantum Supremacy: What is it and what does it mean?

Rumors are that later this year we will see Google’s first demonstration of “quantum supremacy”. This is when a quantum computer outperforms a conventional computer. It’s about time that we talk about what this means.


Before we get to quantum supremacy, I have to tell you what a quantum computer is. All conventional computers work with quantum mechanics because their components rely on quantum behavior, like electron bands. But the operations that a conventional computer performs are not quantum.

Conventional computers store and handle information in the form of bits that can take on two values, say 0 and 1, or up and down. A quantum computer, on the other hand, stores information in the form of quantum-bits or q-bits that can take on any combination of 0 and 1. Operations on a quantum computer can then entangle the q-bits, which allows a quantum computer to solve certain problems much faster than a conventional computer.
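To make this concrete, here is a minimal numpy sketch (my own toy illustration, not tied to any particular hardware): a q-bit is a normalized two-component complex vector, and a simple two-q-bit circuit produces an entangled state that cannot be split into two independent bits.

    import numpy as np

    # A classical bit is 0 or 1. A q-bit is a normalized complex combination
    # of the two basis states |0> and |1>.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)
    qbit = (ket0 + ket1) / np.sqrt(2)        # equal superposition of 0 and 1

    # Two q-bits live in a 4-dimensional space. A Hadamard gate followed by a
    # CNOT gate turns |00> into the entangled Bell state (|00> + |11>)/sqrt(2).
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    state = np.kron(ket0, ket0)              # start in |00>
    state = np.kron(H, np.eye(2)) @ state    # Hadamard on the first q-bit
    state = CNOT @ state                     # entangle the two q-bits
    print(np.round(state, 3))                # amplitudes 0.707 on |00> and |11>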

Calculating the properties of molecules or materials, for example, is one of those problems that quantum computers can help with. In principle, properties like conductivity or rigidity, or even color, can be calculated from the atomic build-up of a material. We know the equations. But we cannot solve these equations with conventional computers. It would just take too long.

To give you an idea of how much more a quantum computer can do, think about this: One can simulate a quantum computer on a conventional computer just by numerically solving the equations of quantum mechanics. If you do that, then the computational burden on the conventional computer increases exponentially with the number of q-bits that you try to simulate. You can do 2 or 4 q-bits on a personal computer. But already with 50 q-bits you need a cluster of supercomputers. Anything beyond 50 or so q-bits cannot presently be calculated, at least not in any reasonable amount of time.
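To see where this exponential wall comes from, count the memory a conventional computer needs just to store the full quantum state: 2^n complex amplitudes for n q-bits. A minimal sketch, assuming 16 bytes per amplitude in double precision:

    # Memory needed to store the full state of n q-bits on a conventional computer:
    # 2**n complex amplitudes, each taking 16 bytes in double precision.
    def memory_in_gigabytes(n_qbits, bytes_per_amplitude=16):
        return (2 ** n_qbits) * bytes_per_amplitude / 1e9

    for n in (4, 30, 50, 60):
        print(f"{n} q-bits: {memory_in_gigabytes(n):,.2f} GB")

    # 4 q-bits:  a few hundred bytes, trivial on a laptop
    # 30 q-bits: about 17 GB, a well-equipped personal computer
    # 50 q-bits: about 18 petabytes, supercomputer territory
    # 60 q-bits: about 18 exabytes, hopeless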

So what is quantum supremacy? Quantum supremacy is the event in which a quantum computer outperforms the best conventional computers on a specific task. It needs to be a specific task because quantum computers are really special-purpose machines whose powers help with particular calculations.

However, to come back to the earlier example, if you want to know what a molecule does, you need millions of q-bits and we are far away from that. So how then do you test quantum supremacy? You let a quantum computer do what it does best, that is being a quantum computer.

This is an idea proposed by Scott Aaronson. If you set up a quantum computer in a suitable way, it will produce probabilistic distributions of measurable variables. You can try and simulate those measurement outcomes on a conventional computer but this would take a very long time. So by letting a conventional computer compete with a quantum computer on this task, you can demonstrate that the quantum computer does something a classical computer just is not able to do.
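For the technically inclined, here is a toy version of the idea (my own sketch, not Google’s actual setup): build a small random circuit, compute its output distribution by brute force, and draw samples from it. The brute-force step is exactly what becomes hopeless on a conventional computer once the number of q-bits grows.

    import numpy as np

    rng = np.random.default_rng(1)

    def random_circuit_state(n_qbits, depth=10):
        """Brute-force statevector of a toy random circuit: random single-q-bit
        rotations followed by CZ gates on neighboring pairs, repeated 'depth' times."""
        dim = 2 ** n_qbits
        state = np.zeros(dim, dtype=complex)
        state[0] = 1.0                                  # start in |00...0>
        for _ in range(depth):
            for q in range(n_qbits):                    # random single-q-bit unitaries
                m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
                u, _ = np.linalg.qr(m)                  # QR of a random matrix gives a unitary
                full = np.array([[1.0 + 0j]])
                for k in range(n_qbits):
                    full = np.kron(full, u if k == q else np.eye(2))
                state = full @ state
            for q in range(n_qbits - 1):                # CZ gates entangle neighboring q-bits
                cz = np.eye(dim, dtype=complex)
                for idx in range(dim):
                    bits = format(idx, f"0{n_qbits}b")
                    if bits[q] == "1" and bits[q + 1] == "1":
                        cz[idx, idx] = -1
                state = cz @ state
        return state

    n = 5                                               # already 2**5 = 32 amplitudes
    probs = np.abs(random_circuit_state(n)) ** 2        # exact output distribution
    probs /= probs.sum()                                # guard against rounding drift
    samples = rng.choice(2 ** n, size=10, p=probs)      # simulated "measurements"
    print([format(s, f"0{n}b") for s in samples])

Each additional q-bit doubles the size of the statevector, which is why the same brute-force simulation becomes infeasible at around 50 q-bits.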

Exactly at which point someone will declare quantum supremacy is a little ambiguous because you can always argue that maybe one could have used better conventional computers or a better algorithm. But for practical purposes this really doesn’t matter all that much. The point is that it will show quantum computers really do things that are difficult to calculate with a conventional computer.

But what does that mean? Quantum supremacy sounds very impressive until you realize that most molecules have quantum processes that also exceed the computational capacities of present-day supercomputers. That is, after all, the reason we want quantum computers. And the generation of random variables that can be used to check quantum supremacy is not good for actually calculating anything useful. So that makes it sound as if the existing quantum computers are really just new toys for scientists.

What would it take to calculate anything useful with a quantum computer? Estimates about this vary between half a million and a billion q-bits, depending on just exactly what you think is “useful” and how optimistic you are that algorithms for quantum computers will improve. So let us say, realistically it would take a few million q-bits.

When will we get to see a quantum computer with a few million q-bits? No one knows. The problem is that the presently most dominant approaches are unlikely to scale. These approaches are superconducting q-bits and ion traps. In neither case does anyone have any idea how to get beyond a few hundred. This is both an engineering problem and a cost-problem.

And this is why, in recent years, there has been a lot of talk in the community about NISQ computers, that is, “noisy intermediate-scale quantum computers”. This is really a term invented to make investors believe that quantum computing will have practical applications in the coming decades. The trouble with NISQs is that while it is plausible that they will soon be practically feasible, no one knows how to calculate something useful with them.

As you have probably noticed, I am not very optimistic that quantum computers will have practical applications any time soon. In fact, I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.

Nevertheless, quantum supremacy is without doubt going to be an exciting scientific milestone.

Update June 29: Video now with German subtitles. To see those, click CC in the YouTube toolbar and choose the language under the settings/gear icon.

Wednesday, June 26, 2019

Win a free copy of "Lost in Maths" in French

My book “Lost in Math: How Beauty Leads Physics Astray” was recently translated to French. Today is your chance to win a free copy of the French translation! The first three people who submit a comment to this blogpost with a brief explanation of why they are interested in reading the book will be the lucky winners.

The only entry requirement is that you must be willing to send me a mailing address. Comments submitted by email or left on other platforms do not count because I cannot compare time-stamps.

Update: The books are gone.

Monday, June 24, 2019

30 years from now, what will a next larger particle collider have taught us?

The year is 2049. CERN’s mega-project, the Future Circular Collider (FCC), has been in operation for 6 years. The following is the transcript of an interview with CERN’s director, Johanna Michilini (JM), conducted by David Grump (DG).

DG: “Prof Michilini, you have guided CERN through the first years of the FCC. How has your experience been?”

JM: “It has been most exciting. Getting to know a new machine always takes time, but after the first two years we have had stable performance and collected data according to schedule. The experiments have since seen various upgrades, such as replacing the thin gap chambers and micromegas with quantum fiber arrays that have better counting rates and have also installed… Are you feeling okay?”

DG: “Sorry, I may have briefly fallen asleep. What did you find?”

JM: “We have measured the self-coupling of a particle called the Higgs-boson and it came out to be 1.2 plus minus 0.3 times the expected value which is the most amazing confirmation that the universe works as we thought in the 1960s and you better be in awe of our big brains.”

DG: “I am flat on the floor. One of the major motivations to invest into your institution was to learn how the universe was created. So what can you tell us about this today?”

JM: “The Higgs gives mass to all fundamental particles that have mass and so it plays a role in the process of creation of the universe.”

DG: “Yes, and how was the universe created?”

JM: “The Higgs is a tiny thing but it’s the greatest particle of all. We have built a big thing to study the tiny thing. We have checked that the tiny thing does what we thought it does and found that’s what it does. You always have to check things in science.”

DG: “Yes, and how was the universe created?”

JM: “You already said that.”

DG: “Well isn’t it correct that you wanted to learn how the universe was created?”

JM: “That may have been what we said, but what we actually meant is that we will learn something about how nuclear matter was created in the early universe. And the Higgs plays a role in that, so we have learned something about that.”

DG: “I see. Well, that is somewhat disappointing.”

JM: “If you need $20 billion, you sometimes forget to mention a few details.”

DG: “Happens to the best of us. All right, then. What else did you measure?”

JM: “Ooh, we measured many many things. For example we improved the precision by which we know how quarks and gluons are distributed inside protons.”

DG: “What can we do with that knowledge?”

JM: “We can use that knowledge to calculate more precisely what happens in particle colliders.”

DG: “Oh-kay. And what have you learned about dark matter?”

JM: “We have ruled out 22 of infinitely many hypothetical particles that could make up dark matter.”

DG: “And what’s with the remaining infinitely many hypothetical particles?”

JM: “We are currently working on plans for the next larger collider that would allow us to rule out some more of them because you just have to look, you know.”

DG: “Prof Michilini, we thank you for this conversation.”

Thursday, June 20, 2019

Away Note

I'll be in the Netherlands for a few days to attend a workshop on "Probabilities in Cosmology". Back next week. Wish you a good Summer Solstice!

Wednesday, June 19, 2019

No, a next larger particle collider will not tell us anything about the creation of the universe

LHC magnets. Image: CERN.
A few days ago, Scientific American ran a piece by a CERN physicist and a philosopher about particle physicists’ plans to spend $20 billion on a next larger particle collider, the Future Circular Collider (FCC). To make their case, the authors have dug up a quote from 1977 and ignored the 40 years after this, which is a truly excellent illustration of all that’s wrong with particle physics at the moment.

I currently don’t have time to go through this in detail, but let me pick the most egregious mistake. It’s right in the opening paragraph where the authors claim that a next larger collider would tell us something about the creation of the universe:
“[P]article physics strives to push a diverse range of experimental approaches from which we may glean new answers to fundamental questions regarding the creation of the universe and the nature of the mysterious and elusive dark matter.

Such an endeavor requires a post-LHC particle collider with an energy capability significantly greater than that of previous colliders.”

We previously encountered this sales-pitch in CERN’s marketing video for the FCC, which claimed that the collider would probe the beginning of the universe.

But neither the LHC nor the FCC will tell us anything about the “beginning” or “creation” of the universe.

What these colliders can do is create nuclear matter at high density by slamming heavy atomic nuclei into each other. Such matter probably also existed in the early universe. However, even collisions of large nuclei create merely tiny blobs of such nuclear matter, and these blobs fall apart almost immediately. In case you prefer numbers over words, they last about 10⁻²³ seconds.

This situation is nothing like the soup of plasma in the expanding space of the early universe. It is therefore highly questionable already that these experiments can tell us much about what happened back then.

Even optimistically, the nuclear matter that the FCC can produce has a density about 70 orders of magnitude below the density at the beginning of the universe.

And even if you are willing to ignore the tiny blobs and their immediate decay and the 70 orders of magnitude, then the experiments still tell us nothing about the creation of this matter, and certainly not about the creation of the universe.

The argument that large colliders can teach us anything about the beginning, origin, or creation of the universe is manifestly false. The authors of this article either knew this and decided to lie to their readers, or they didn’t know it, in which case they have begun to believe their own institution’s marketing. I’m not sure which is worse.

And as I have said many times before, there is no reason to think a next larger collider would find evidence of dark matter particles. Somewhat ironically, the authors spend the rest of their article arguing against theoretical arguments, but of course the appeal to dark matter is a bona-fide theoretical argument.

In any case, it pains me to see not only that particle physicists are still engaging in false marketing, but that Scientific American plays along with it.

How about sticking with the truth? The truth is that a next larger collider costs a shitload of money and will most likely not teach us much. If progress in the foundations of physics is what you want, this is not the way forward.

Tuesday, June 18, 2019

Brace for the oncoming deluge of dark matter detectors that won’t detect anything

Imagine an unknown disease spreads, causing temporary blindness. Most patients recover after a few weeks, but some never regain eyesight. Scientists rush to identify the cause. They guess the pathogen’s shape and, based on this, develop test strips and antigens. If one guess doesn’t work, they’ll move on to the next.

Doesn’t quite sound right? Of course it does not. Trying to identify pathogens by guesswork is sheer insanity. The number of possible shapes is infinite. The guesses will almost certainly be wrong. No funding agency would pour money into this.

Except they do. Not for pathogen identification, but for dark matter searches.

In the past decades, the searches for the most popular dark matter particles have failed. Neither WIMPs nor axions have shown up in any detector, of which there have been dozens. Physicists have finally understood this is not a promising method. Unfortunately, they have not come up with anything better.

Instead, their strategy is now to fund any proposed experiment that could plausibly be said to maybe detect something that could potentially be a hypothetical dark matter particle. And since there are infinitely many such hypothetical particles, we are now well on the way to building infinitely many detectors. DNA, carbon nanotubes, diamonds, old rocks, atomic clocks, superfluid helium, qubits, Aharonov-Bohm, cold atom gases, you name it. Let us call it the equal opportunity approach to dark matter search.

As it should be, everyone benefits from the equal opportunity approach. Theorists invent new particles (papers will be written). Experimentalists use those invented particles as motivation to propose experiments (more papers will be written). With a little luck they get funding and do the experiment (even more papers). Eventually, experiments conclude they didn’t find anything (papers, papers, papers!).

In the end we will have a lot of papers and still won’t know what dark matter is. And this, we will be told, is how science is supposed to work.

Let me be clear that I am not strongly opposed to such medium scale experiments, because they typically cost “merely” a few million dollars. A few millions here and there don’t put overall progress at risk. Not like, say, building a next larger collider would.

So why not live and let live, you may say. Let these physicists have some fun with their invented particles and their experiments that don’t find them. What’s wrong with that?

What’s wrong with that (besides the fact that a million dollars is still a million dollars) is that it will almost certainly lead nowhere. I don’t want to wait another 40 years for physicists to realize that falsifiability alone is not sufficient to make a hypothesis promising.

My disease analogy, as any analogy, has its shortcomings of course. You cannot draw blood from a galaxy and put it under a microscope. But metaphorically speaking, that’s what physicists should do. We have patients out there: all those galaxies and clusters which are behaving in funny ways. Study those until you have good reason to think you know what the pathogen is. Then, build your detector.

Not all types of dark matter particles do an equally good job of explaining structure formation and the behavior of galaxies and all the other data we have. And particle dark matter is not the only explanation for the observations. Right now, the community makes no systematic effort to identify the best model to fit the existing data. And, needless to say, that data could be better, both in terms of sky coverage and resolution.

The equal opportunity approach relies on guessing a highly specific explanation and then setting out to test it. This way, null-results are a near certainty. A more promising method is to start with highly non-specific explanations and zero in on the details.

The failures of the past decades demonstrate that physicists must think more carefully before commissioning experiments to search for hypothetical particles. They still haven’t learned the lesson.