Sunday, July 28, 2019

The Forgotten Solution: Superdeterminism

Welcome to the renaissance of quantum mechanics. It took more than a hundred years, but physicists finally woke up, looked quantum mechanics in the face – and realized with bewilderment that they barely know the theory they’ve been married to for so long. Gone are the days of “shut up and calculate”; the foundations of quantum mechanics are en vogue again.

It is not a spontaneous acknowledgement of philosophy that sparked physicists’ rediscovered desire; their sudden search for meaning is driven by technological advances.

With quantum cryptography a reality and quantum computing on the horizon, questions once believed ephemeral are now the bread and butter of working researchers. When I was a student, my professor thought it questionable that violations of Bell’s inequality would ever be demonstrated convincingly. Today you can take that as given. We have also seen delayed-choice experiments, marveled over quantum teleportation, witnessed decoherence in action, tracked individual quantum jumps, and cheered when Zeilinger entangled photons over hundreds of kilometers. Well, some of us, anyway.

But while physicists know how to use the mathematics of quantum mechanics to make stunningly accurate predictions, just what this math is about has remained unclear. This is why physicists currently have several “interpretations” of quantum mechanics.

I find the term “interpretations” somewhat unfortunate, because some ideas that pass as “interpretations” are really theories that differ from quantum mechanics, and these differences may one day become observable. Collapse models, for example, explicitly add a process for wave-function collapse to quantum measurement. Pilot wave theories, likewise, can result in deviations from quantum mechanics in certain circumstances, though those have not been observed. At least not yet.

A phenomenologist myself, I am agnostic about different interpretations of what is indeed the same math, such as QBism vs Copenhagen or the Many Worlds. But I agree with the philosopher Tim Maudlin that the measurement problem in quantum mechanics is a real problem – a problem of inconsistency – and requires a solution.

And how to solve it? Collapse models solve the measurement problem, but they are hard to combine with quantum field theory which for me is a deal-breaker. Pilot wave theories also solve it, but they are non-local, which makes my hair stand up for much the same reason. This is why I think all these approaches are on the wrong track and instead side with superdeterminism.

But before I tell you what’s super about superdeterminism, I have to briefly explain the all-important theorem from John Stewart Bell. It says, in a nutshell, that correlations between certain observables are bounded in every theory which fulfills certain assumptions. These assumptions are what you would expect of a deterministic, non-quantum theory – statistical locality and statistical independence (together often referred to as “Bell locality”) – and should, most importantly, be fulfilled by any classical theory that attempts to explain quantum behavior by adding “hidden variables” to particles.

Experiments show that the bound of Bell’s theorem can be violated. This means the correct theory must violate at least one of the theorem’s assumptions. Quantum mechanics is indeterministic and violates statistical locality. (Which, I should warn you, has little to do with what particle physicists usually mean by “locality.”) A deterministic theory that doesn’t fulfill the other assumption, that of statistical independence, is called superdeterministic. Note that this leaves open whether or not a superdeterministic theory is statistically local.
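The best-known bound of this type is the CHSH inequality. With two measurement settings on each side ($a, a'$ and $b, b'$) and correlations $E$ between the outcomes, any theory that fulfills both assumptions obeys

```latex
S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2,
```

while quantum mechanics allows values up to $|S| = 2\sqrt{2} \approx 2.83$ (the Tsirelson bound), and it is this violation that the experiments confirm.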

Unfortunately, superdeterminism has a bad reputation, so bad that most students never get to hear of it. If mentioned at all, it is commonly dismissed as a “conspiracy theory.” Several philosophers have declared superdeterminism means abandoning scientific methodology entirely. To see where this objection comes from – and why it’s wrong – we have to unwrap this idea of statistical independence.

Statistical independence enters Bell’s theorem in two ways. One is that the detectors’ settings are independent of each other, the other one that the settings are independent of the state you want to measure. If you don’t have statistical independence, you are sacrificing the experimentalist’s freedom to choose what to measure. And if you do that, you can come up with deterministic hidden variable explanations that result in the same measurement outcomes as quantum mechanics.

I find superdeterminism interesting because the most obvious class of hidden variables are the degrees of freedom of the detector. And the detector isn’t statistically independent of itself, so any such theory necessarily violates statistical independence. It is also, in a trivial sense, non-linear, simply because a detector that depends on a superposition of prepared states is not the same as a superposition of two measurement outcomes. Since any solution of the measurement problem requires a non-linear time evolution, that seems a good opportunity to make progress.

Now, a lot of people discard superdeterminism simply because they prefer to believe in free will, which is where I think the biggest resistance to superdeterminism comes from. Bad enough that such a belief isn’t a scientific argument, but worse, it rests on a misunderstanding of what is going on. It’s not that superdeterminism somehow prevents an experimentalist from turning a knob. Rather, it’s that the detectors’ states aren’t independent of the system one tries to measure. There just isn’t any setting the experimentalist could turn their knob to that would remove the correlation.

Where do these correlations ultimately come from? Well, they come from where everything ultimately comes from, that is from the initial state of the universe. And that’s where most people walk off: They think that you need to precisely choose the initial conditions of the universe to arrange quanta in Anton Zeilinger’s brain just so that he’ll end up turning a knob left rather than right. Besides sounding entirely nuts, it’s also a useless idea, because how the hell would you ever calculate anything with it? And if it’s unfalsifiable but useless, then indeed it isn’t science. So, frowning at superdeterminism is not entirely unjustified.

But that would be jumping to conclusions. How much detail you need to know about the initial state to make predictions depends on your model. And without writing down a model, there is really no way to tell whether it does or doesn’t live up to scientific methodology. It’s here where the trouble begins.

While philosophers on occasion discuss superdeterminism on a conceptual basis, there is little to no work on actual models. Besides me and my postdoc, I count Gerard ‘t Hooft and Tim Palmer. The former gentleman, however, seems to dislike quantum mechanics and would rather have a classical hidden variables theory, and the latter wants to discretize state space. I don’t see the point in either. I’ll be happy if the result solves the measurement problem and is still local the same way that quantum field theories are local, i.e. as non-local as quantum mechanics always is.*

The stakes are high, for if quantum mechanics is not a fundamental theory, but can be derived from an underlying deterministic theory, this opens the door to new applications. That’s why I remain perplexed that what I think is the obvious route to progress is one most physicists have never even heard of. Maybe it’s just a reality they don’t want to wake up to.

Recommended reading:
  • The significance of measurement independence for Bell inequalities and locality
    Michael J. W. Hall
  • Bell's Theorem: Two Neglected Solutions
    Louis Vervoort
    Found. Phys. 43, 769–791 (2013), arXiv:1203.6587

* Rewrote this paragraph to better summarize Palmer’s approach.

Wednesday, July 24, 2019

Science Shrugs

[Image: Boris Johnson]
The Michelson-Morley experiment of 1887 disproved the ether, a hypothetical medium that was thought to permeate the universe. By using an interferometer with perpendicular arms, Michelson and Morley demonstrated that the speed of light is the same regardless of the direction of the light relative to our motion through the supposed ether. Their null result set the stage for Einstein’s theory of Special Relativity and is often lauded for heralding the new age of physics. At least that’s how the story goes. In reality, it was more complicated.

Thing is, Morley himself was not convinced of the results of his seminal experiment. Together with a new collaborator, Dayton Miller, he repeated the measurement a few years later. The two again got a negative result.

This seems to have settled the case for Morley, but Miller went on to build larger interferometers to achieve better precision.

Indeed, in the 1920s, Miller reported seeing an effect consistent with Earth passing through the ether! Though the velocity he inferred from the data didn’t match expectations, he remained convinced he had measured a partial drag caused by the ether.

Miller’s detection could never be reproduced by other experiments. It is today widely considered to be wrong, but just what exactly he measured has remained unclear.

And Miller’s isn’t the only measurement mystery.

In the 1960s, Joseph Weber built the first gravitational wave detectors. At a conference in 1969, he announced that he had measured two dozen gravitational wave events, and swiftly published his results in the Physical Review Letters.

It is clear now that Weber did not measure gravitational waves – those are much harder to detect than anyone anticipated back then. So what then did he measure?

Some have argued that Weber’s equipment was faulty, his data analysis flawed, or that he simply succumbed to wishful thinking. But just what happened? We may never know.

Then, 40 years ago, physicists at the Society for Heavy Ion Research (GSI) in Germany bombarded uranium nuclei with curium. They saw an excess emission of positrons that they couldn’t explain. In a 1983 paper, the group wrote that the observation “cannot be associated with established dynamic mechanisms of positron production” and that known physics is an “unlikely match to the data at a confidence level of better than 98%”.

This observation was never reproduced. We still have no idea if this was a real effect, caused by an odd experimental setup, or whether it was a statistical fluke.

Around the same time, in 1975, we saw the first claimed detection of a magnetic monopole. Magnetic monopoles are hypothetical particles that should have been created in the early universe if the fundamental forces were once unified. The event in question was a track left in a detector sent to the upper atmosphere with a balloon. Some have suspected that the supposed monopole track was instead caused by a nuclear decay. But really, with only one event, who can tell? In 1982, a second monopole event was reported. It remained the last.*

Today we have a similar situation with the ANITA events. ANITA is the Antarctic Impulsive Transient Antenna, and its collaboration announced last year (to much press attention) that they have measured two upward-going cosmic ray events at high energy. Trouble is, according to the currently established theories, such events shouldn’t happen.

ANITA’s two events are statistically significant, and I have no doubt they actually measured something. But it’s so little data there’s a high risk this will remain yet another oddity, eternally unresolved. Though physicists certainly try to get something out of it.

In all of these cases it’s quite possible the observations had distinct causes, just that we do not know the circumstances sufficiently well and do not have enough data to make a sober inference. Science is limited in this regard: It cannot reliably explain rare events that do not reproduce, and in these cases we are left with speculation and story-telling.

How did the world end up with Donald Trump as President of the United States and Boris Johnson as Prime Minister of the United Kingdom? In politics as in physics, some things defy explanation.

* Rewrote this paragraph after readers pointed out the second reference, see comments.

Tuesday, July 23, 2019

When they ask us [I’ve been singing again]

Prompted by last week’s conference (sorry, I meant “unconference”) which saw a lot of climate-related talks, climate modeling, geoengineering, biodiversity, and so on. Wrote this on the plane back home. Loosely inspired by this and this. Enjoy, while it lasts.

Friday, July 19, 2019

M is for Maggot, N is for Nonsense

[Image: wormy apple]
Imagine you bite into an apple and find a beheaded maggot. Yuck! But it could have been worse. Had you found only half a maggot, you’d have eaten more of it. Worse still, you might have found only a quarter of a maggot, or a hundredth, or a thousandth. Indeed, if you take the limit of the maggot’s size to zero, the worst possible case must be biting into an apple and not finding a maggot at all.

Wait, what? That doesn’t make sense. Certainly a maggot-free apple is not maximally yucky. Where did our math fail us?

It didn’t, really. The beheaded maggot is an example of a discontinuous or “singular” limit, originally due to Michael Berry*. You know you have a discontinuous limit if the function whose limit you are taking (that’s the increasing “yuck factor” of the shrinking maggot) does not approach the value of the function at the limit (unyucky).

A less fruity example is taking the y-th power of x and sending y to infinity. If x is any positive number smaller than 1, taking y to infinity gives zero. If x is equal to 1, every value of y gives back 1. If x is larger than 1, taking y to infinity returns infinity. If you plot the limit y to infinity as a function of x, it’s discontinuous at x = 1.
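You can see this numerically with a minimal sketch (the function name and the sample points here are just illustration):

```python
# Approximate lim_{y -> infinity} x**y by evaluating at a large but finite y.
# For every finite y the function x**y is perfectly smooth in x;
# only the limit itself jumps at x = 1.

def power_limit(x, y=1000):
    """Stand-in for the limit y -> infinity, evaluated at large finite y."""
    return x ** y

for x in (0.99, 1.0, 1.01):
    print(x, power_limit(x))
```

For x = 0.99 the result is already tiny, for x = 1.0 it is exactly 1, and for x = 1.01 it is enormous: inputs arbitrarily close on either side of x = 1 end up arbitrarily far apart in the limit.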

Such singular limits are not just mathematical curiosities. We have them in physics too.

For example in thermodynamics, when we take the limit in which the number of constituents of a system becomes infinitely large, we see phase transitions where some quantities, such as the derivative of specific heat, become discontinuous. This is, of course, strictly speaking an unrealistic limit because the number of constituents may become very large, but never actually infinite. However, the limit isn’t always unrealistic.

Take the example of massive gravity. In general relativity, gravitational waves propagate with the speed of light and the particle associated with them – the graviton – is massless. You can modify general relativity so that the graviton has a mass. However, if you then let the graviton mass go to zero, you do not get back general relativity. The reason is that if the graviton mass is not zero, then it has additional polarizations and those are independent of the mass as long as the mass isn’t zero**.
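The root of this discontinuity can be stated as a counting of degrees of freedom (a standard counting, not specific to any particular model):

```latex
\text{massless spin-2 (graviton): } 2 \text{ polarizations}, \qquad
\text{massive spin-2: } \left.2s+1\right|_{s=2} = 5 \text{ polarizations}.
```

The extra modes, in particular the scalar one, keep a finite coupling to matter as the mass goes to zero, which is why the limit fails to reproduce general relativity. This is known as the vDVZ discontinuity.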

The same issue appears if you have massless fields that can propagate in additional dimensions of space. This too gives rise to additional polarizations which don’t necessarily disappear even if you take the size of the extra dimensions to zero.

Discontinuous limits are often a sign that you have forgotten to keep track of global, as opposed to local, properties. If, for example, you take the radius of a sphere to infinity, the curvature will go to zero, but the result is not an infinitely extended plane. For this reason, there are certain solutions in general relativity that will not approximate each other as you think they should. In a space with a negative cosmological constant, for example, black hole horizons can be infinitely extended planes. But these solutions no longer exist if the cosmological constant vanishes. In this case, black hole horizons have to be spherical.

Why am I telling you that? Because discontinuous limits should make you skeptical about any supposed insights gained into quantum gravity by using calculations in Anti de Sitter space.

Anti-de Sitter (AdS) space, to remind you, is a space with a negative cosmological constant. It is popular among string theorists because they know how to make calculations in this space. Trouble is, the cosmological constant in our universe is positive. And there is no reason to think the limit of taking the cosmological constant from negative values to positive values is continuous. Indeed, it almost certainly is not, because the very reason that string theorists prefer calculations in AdS is that this space provides additional structure that exists for any negative value of the cosmological constant, and suddenly vanishes if the value is zero.

String theorists usually justify working with a negative cosmological constant by arguing it can teach us something about quantum gravity in general. That may be so or it may not be so. The case with the negative cosmological constant resembles that of finding a piece of a maggot in your apple. I find it hard to swallow.

* ht Tim Palmer
** there are ways to fix this limiting behavior so that you do get back general relativity.

Wednesday, July 10, 2019

Away Note

I will be away for a week to attend SciFoo 2019. Please expect blogging to be sparse and comments to be stuck in the queue longer than usual.

Tuesday, July 09, 2019

Why the multiverse is religion, not science.

This is the 5th and last part in my series to explain why the multiverse is not a scientific hypothesis. The other parts are: 1. Does the Higgs-boson exist? 2. Do I exist? 3. Does God exist? and 4. The multiverse hypothesis.

I put together these videos because I am frustrated that scientists discard the issue unthinkingly. This is not a polemical argument and it’s not meant as an insult. But believing in the multiverse is logically equivalent to believing in god; therefore it’s religion, not science.

To see why, let me pull together what I laid out in my previous videos. Scientists say that something exists if it is useful to describe observations. By “useful” I mean it is simpler than just collecting data. You can postulate the existence of things that are not useful to describe observations, such as gods, but this is no longer science.

Universes besides our own are logically equivalent to gods. They are unobservable by assumption, hence they can exist only in a religious sense. You can believe in them if you want to, but they are not part of science.

I know that this is not a particularly remarkable argument. But physicists seem to have a hard time following it, especially those who happen to work on the multiverse. Therefore, let me sort out some common misunderstandings.

First. The major misunderstanding is that I am saying the multiverse does not exist. But this is not what I am saying. I am saying science does not tell us anything about universes we cannot observe, therefore claiming they exist is not science.

Second. They will argue the multiverse is simple. Most physicists who are in favor of the multiverse say it’s scientific because it’s simpler to assume that all universes of a certain type exist than it is to assume that only one of them exists.

That’s a questionable claim. But more importantly, it’s beside the point. The simplest assumption is no assumption. And you do not need to make any statement about the existence of the multiverse to explain our observations. Therefore, science says, you should not. As I said, it’s the same with the multiverse as with god. It’s an unnecessary assumption. Not wrong, but superfluous.

You also do not need to postulate the existence of our universe, of course. No scientist ever does that. That would be totally ridiculous.

Third. They’ll claim the existence of the multiverse is a prediction of their theory.

It’s not. That’s just wrong. Just because you can write down a theory for something, doesn’t mean it exists*. We determine that something exists, in the scientific sense, if it is useful to describe observation. That’s exactly what the multiverse is not.

Fourth. But then you are saying that discussing what’s inside a black hole is also not science.

That’s equally wrong. Other universes are not science because you cannot observe them. But you can totally observe what’s inside a black hole. You just cannot come back and tell us about it. Besides, no one really thinks that the inside of a black hole will remain inaccessible forever. For these reasons, the situation is entirely different for black holes. If it were correct that the inside of black holes cannot be observed, then indeed postulating anything about it would not be scientific.

Fifth. But there are types of multiverses that have observable consequences.

That’s right. Physicists have come up with certain types of multiverses that can be falsified. The problem with these ideas is conceptually entirely different. It’s that there is no reason to think we live in such multiverses to begin with. The requirement that a hypothesis must be falsifiable is certainly necessary to make the hypothesis scientific, but not sufficient. I previously explained this here.

To sum it up. The multiverse is certainly an interesting idea and it attracts a lot of public attention. There is nothing wrong with that in principle. Entertainment has a value and so has thought-stimulating discussion. But do not confuse the multiverse with science, because it is not.

* Revised this sentence after two readers misunderstood the previous version.

Update: The video now has German and Italian subtitles. To see those, click on "CC" in the YouTube toolbar. Choose language under settings/gear icon.

Sunday, July 07, 2019

Because Science Matters

[Foto: Michael Sentef]

Another day, another lecture. This time I am in Hamburg, at DESY, Germany’s major particle physics center.

My history with DESY is an odd one, which is to say none, despite the fact that fifteen years ago I was awarded Germany’s most prestigious young researcher grant, the Emmy Noether fellowship, to work in Hamburg on particle physics phenomenology. The Emmy Noether fellowship is a five-year grant that not only pays the principal investigator but also comes with salaries for a small group. It’s basically the jackpot of German postdoc funding.

I declined it.

I hadn’t thought of this for a long time, but here I am in Hamburg, finally getting to see what my life might have looked like in that parallel world where I became a particle physicist. It looks like I’ll be late.

The taxi driver circles around a hotel and insists, with a heavy Polish accent, that this must be the right place because “there’s nothing after that”. To make his point he waves at trees and construction areas that stretch further up the road.

I finally manage to convince him that, really, I’m not looking for a hotel. A kilometer later he pulls into an anonymous driveway where a man in uniform asks him to stop. “See, this wrong!” the taxi-man squeaks and attempts to turn around when I spot a familiar sight: The cover of my book, on a poster, next to the entry.

“I’m supposed to give that talk,” I tell the man in uniform, “At two pm.” He looks at his watch. It’s a quarter past two.

I arrive at the lecture hall 20 minutes late, mostly due to a delayed train, but also, I note with a guilty conscience, because I decided not to stay for the night. With too much traveling in my life already, I have become one of these terrible people who arrive just before their talk and vanish directly afterwards. I used to call it the “In and Out Lecture”, inspired by an American fast food chain with the laxative name “In and Out Burger”. A friend of mine more aptly dubbed it “Blitzkrieg Seminar.”

The room is well-filled. I am glad to see the audience was kept in good mood with drinks and snacks. Within minutes, I am wired up and ready to speak about the troubles in the foundations of physics.

Shortly before my arrival, I learned that some particle physicists had complained I was invited at all. This isn’t the first time this has happened. On another occasion some tried to un-invite me, albeit eventually unsuccessfully. They tend to be disappointed when it turns out I’m not a fire-spewing dragon but a middle-aged mother of two who just happens to know a lot about theory development in high energy physics.

Most of them, especially the experimentalists, don’t even find my argument all that disagreeable – at least at first sight. Relying on beauty has not historically worked well in physics, and it isn’t presently working, no doubt about this. To make progress, then, we should take a cue from history and focus on resolving inconsistencies in our present description of nature, either inconsistencies between theory and experiment, or internal inconsistencies. So far, they’re usually with me.

Where my argument becomes disagreeable is when I draw consequences. There is no inconsistency to be resolved in the energy range that a next larger collider could reach. It would measure some constants to better precision, all right, but that’s not worth $20 billion.

Those 20 billion dollars, by the way, are merely the estimated construction cost for CERN’s planned Future Circular Collider (FCC). They do not include operation cost. The facility would run for about 25 years. Operation costs of the current machine, the Large Hadron Collider (LHC) are about $1 billion per year already, and with the FCC, expenses for electricity and staff are bound to increase. That means the total cost for the FCC easily exceeds $40 billion.
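A back-of-envelope tally, using only the numbers quoted above (the annual operation cost is a lower bound, since expenses for electricity and staff are bound to increase):

```python
# Rough FCC lifetime cost estimate from the figures quoted in the text.
construction_cost = 20e9   # estimated construction cost, USD
annual_operation = 1e9     # at least LHC-level operating cost, USD per year
runtime_years = 25         # planned runtime of the facility

total_cost = construction_cost + annual_operation * runtime_years
print(f"${total_cost / 1e9:.0f} billion")  # prints "$45 billion"
```

Even with the operating cost frozen at today’s LHC level, the total already exceeds $40 billion.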

That’s a lot of money. And the measurements this next larger collider could make would deliver information that won’t be useful in the next 100 or maybe 5000 years. Now is not the right time for this.

At the risk of oversimplifying an 80,000 word message: we have better things to do. Figure out what’s going on with dark matter, quantum gravity, or the measurement problem. There are breakthroughs waiting to be made. But we have to be careful with the next steps or risk delaying progress by further decades, if not centuries.

After my talk, in the question session, an elderly man goes on about his personal theory for something. He will later tell me about his website and complain that the scientific mainstream is ignoring his breakthrough insights.

Another elderly man insists that beauty is a good guide to the development of new natural laws. To support his point he quotes Steven Weinberg, because Weinberg, you see, likes string theory. In other words, it’s exactly the type of argument I just explained is both wrong and in the way of progress.

Another man, this one not quite as old, stands up to deliver a speech about how important particle colliders are. Several people applaud.

Next up, an agitated woman reprimands me for a typographical error on a slide. More applause. She goes on to explain the LHC has taught us a lot about inflation, a hypothetical phase of exponential expansion in the early universe. I refuse to comment. There is, I feel, no way to reason with someone who really believes this.

But hers is, I remind myself, the community I would have been part of had I accepted the fellowship 15 years ago. Now I wonder, had I taken this path, would I be that woman today, upset to learn the boat is sinking? Would I share her group’s narrative that made me their enemy? Would I, too, defend spending more and more money on larger and larger machines with less and less societal relevance?

I like to think I would not, but my reading about group psychology tells me otherwise. I would probably fight the outsider just like they do.

Another woman identifies as experimentalist and asks me why I am against diversifying experimental efforts. I am not, of course. But economic reality is that we cannot do everything we want to do. We have to make decisions. And costs are a relevant factor.

Finally, another man asks me what experiments physicists should do. As usual when I get this question, I refuse to answer it. This is not my call to make. I cannot replace tens of thousands of experts. I can only beg them to please remember that scientists are human, too, and human judgement is affected by group affiliation. Someone, somewhere, has to take the first step to prevent social bias from influencing scientific decisions. Let it be particle physicists.

A second round of polite applause and I am done here. A few people come to shake my hand. The room empties. Someone hands me a travel reimbursement form and calls me a taxi. Soon I am on the way back to the city center and on to six more hours on the train.

I check my email and see I will have to catch up work on the weekend, again. Not only doesn’t it help my own research to speak about problems with the current organization of science, it’s no fun either. It’s no fun to hurt people, destroy hopes, and advocate decisions that would make their lives harder. And it’s no fun to have mud slung at me in return.

And so, as always, these trips end with me asking myself, why?, why am I doing this?

And as always, the answer I give myself is the same. Because it matters we get this right. Because progress matters. Because science matters.

Thanks for asking, I am fine. Keep it coming.

Saturday, July 06, 2019

No, we will not open a portal to a parallel universe

[Image: Colbert’s legendary quadruple facepalm]
The nutty physics story of the day comes to us thanks to Michael Brooks who reports for New Scientist that “We’ve seen signs of a mirror-image universe that is touching our own.” This headline has since spread to The Independent, according to which scientists are “attempting to open portal to a parallel universe” and the International Business Times, which wants you to believe that “Scientists Build A Portal To Find A Parallel Universe”.

Needless to say, we have not seen signs of a mirror universe, and we are not building portals to parallel universes. And if we had, trust me, you wouldn’t hear about it from New Scientist. To first approximation it is safe to assume that whatever you read in New Scientist is either not new or not science, or both.

This story is a case of both, neither new nor science. It is really – once again – about hypothetical particles that physicists have invented just because. In this case it’s particles which are exact copies of the ones that we already know, except for their handedness. These mirror-particles* do not interact with the normal particles, which is supposedly why we haven’t measured them so far. (You find instructions for how to invent particles yourself in my book, Chapter 9 in the section “Laws Like Sausages”.)

The idea of mirror-particles has been around since at least the 1960s. It’s not particularly popular among physicists, because what little we know about dark matter tells us exactly that it does not behave the same way as normal matter. So, to make mirror dark matter fit the data, you have to invent some reason for why, in the end, it is not a mirror copy of normal matter.

And then there is the problem that if the mirror matter really doesn’t interact with our normal matter you cannot measure it. So, if you want to get an experimental search funded, you have to postulate that it does interact. Why? Because otherwise you can’t measure it. Sounds like circular reasoning? That’s what it is.

Now, once you have postulated that the hypothetical particles may interact in a way that makes them measurable, you can make an experiment and try to actually measure them. It is such a measurement that this story is about.

Concretely, it seems to be about the experiment laid out in this paper:
    New Search for Mirror Neutrons at HFIR
    arXiv:1710.00767 [hep-ex]
The authors propose to search for evidence of neutrons oscillating into mirror neutrons.
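For orientation (this is my gloss on the general search concept, not taken from the paper): such searches treat the neutron and its mirror partner as a two-state system. If the two levels are degenerate, which requires compensating the magnetic field, the probability that a free neutron has converted after a flight time $t$ is

```latex
P_{n \to n'}(t) \;=\; \sin^2\!\left(\frac{t}{\tau_{nn'}}\right) \;\approx\; \left(\frac{t}{\tau_{nn'}}\right)^2 \quad \text{for } t \ll \tau_{nn'},
```

where $\tau_{nn'}$ is the oscillation time set by the mixing between the two states. The experiment then looks for a corresponding deficit or reappearance in the neutron counts.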

Now, look, this is exactly the type of ill-motivated experiment that I complained about the other day. Can you do this experiment? Sure. Will it help you solve any of the open problems in the foundations of physics? Almost certainly not. Why not? Because we have no reason to think that these particular particles exist and interact with normal matter in just the way necessary to measure them.

It is not a coincidence that we now see so many of these small-scale experiments, because this is a strategic decision of the community. Indeed, you find this strategy quoted in the paper as justification: “The 2014 Report of the Particle Physics Project Prioritization Panel (P5) stressed the importance of considering ‘every feasible avenue’” to look for new types of dark matter particles.

Adding to this, some months ago the Department of Energy announced a plan to provide $24 million for the development of new projects to study dark matter, which will undoubtedly fuel physicists’ enthusiasm for thinking up even more new particles.

This, folks, is only the beginning.

I cannot stress enough how idiotic this so-called “strategy” is. You will see million after million vanish into searches for particles invented simply because you can look for them.

If you do not understand why I say this is insanity and not proper science, please read my article in which I explain that falsifiability is necessary but not sufficient to make a hypothesis scientific. This strategy is based on a basic misunderstanding of the philosophy of science. It is an institutionalized form of motivated reasoning, a mistake that will cost taxpayers tens of millions.

The only good thing about this strategy is that hopefully the media will soon tire of writing about each and every little lab’s search for non-existent particles.

* Not to be confused with supersymmetric partner particles. Different story entirely.

Thursday, July 04, 2019

Physicists still perplexed that I ask for reasons to finance their research

Chad Orzel is a physics prof whose research is primarily in atomic physics. He also blogs next door and is a good-humored and eminently reasonable guy, so I hope he will forgive me if I pick on him a little.

Two weeks ago I complained about the large number of dark matter experiments that hunt for hypothetical particles, particles invented just because you can hunt for them. Chad’s response to this is “Physicists Gotta Physics” and “I don't know what else Hossenfelder expects the physicists involved to do.”

To which I wish to answer: If you don’t know anything sensible to do with your research funds, why should we pay you? Less flippantly:
Dear Chad,

I find it remarkable how many researchers think they are entitled to tax-money. I am saddened to see you are one of them. Really, as a science communicator you should know better. “We have to do something, so let us do anything” does not convince me, and I doubt it will convince anyone else. Try harder.
But I admit it is unfair to pick on Chad in particular, because his reaction to my blogpost showcases a problem I encounter with experimentalists all the time: They seem not to understand just how poorly motivated the theories are that they use to justify their work.

By and large, experimentalists like to think that looking for those particles is business as usual, similar to how we looked for neutrinos half a century ago, or how we looked for the heavier quarks in the 1990s.

But this isn’t so. These new inventions are of considerably lower quality. We had theoretically sound reasons to think that neutrinos and heavy quarks exist, but there are no similarly sound reasons to think that these new dark matter particles should exist.

Philosophers would call these models strongly underdetermined. I would call them wishful thinking. They’re little more than guesses. Doing these experiments, therefore, is like playing roulette on an infinitely large table: You will lose with probability 1. It is almost certain to waste time and money. And the big tragedy is that with some thinking, we could invest these resources much better.
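The roulette analogy can be made concrete with a toy calculation. Suppose there are N a-priori equally plausible particle models, of which at most k are actually realized in nature, and each experiment can rule in or out one model. (The numbers below are made up purely for illustration; this sketch is mine, not from any of the papers discussed.)

```python
import random

N = 10**6          # candidate models (hypothetical number)
k = 1              # models actually realized in nature
experiments = 100  # independent searches, each testing one model

random.seed(0)
true_models = set(random.sample(range(N), k))
tested = set(random.sample(range(N), experiments))
hits = len(true_models & tested)

# The chance that at least one search succeeds is roughly
# experiments * k / N, here 1e-4. As N grows without bound,
# this goes to zero: the "lose with probability 1" limit.
print("discoveries:", hits)
print("success probability ~", experiments * k / N)
```

The point of the sketch is only that when the space of guessable models is unconstrained by theory, adding more experiments barely moves the odds; the ratio k/N, not the experimental effort, dominates.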

Orzel complains that I am exaggerating how specific these searches are, but let us look at some of those.

Like this one about using the Aharonov-Bohm effect. It proposes to search for a hypothetical particle called the dark photon which may mix with the actual photon and may form a condensate which may have excitations that may form magnetic dipoles which you may then detect. Or, more likely, just doesn’t exist.

Or let us look at this other paper, which tests for space-time varying massive scalar fields that are non-universally coupled to standard model particles. Or, more likely, don’t exist.

Some want to look for medium mass weakly coupled particles that scatter off electrons. But we have no reason to think that dark matter particles are of that mass, couple with that strength, or couple to electrons to begin with.

Some want to look for something called the invisible axion, which is a very light particle that couples to photons. But we have no reason to think that dark matter couples to photons.

Some want to look for domain walls, or weird types of nuclear matter, or whole “hidden sectors”, and again we have no reason to think these exist.

Fact is, we presently have no reason to think that dark matter particles affect normal matter in any other way than by the gravitational force. Indeed, we don’t even have reason to think it is a particle.

Now, as I previously said I don’t mind if experimentalists want to play with their gadgets (at least not unless their toys cost billions, don’t get me started). What I disapprove of is if experimentalists use theoretical fantasies to motivate their research. Why? Think about it for a moment before reading on.

Done thinking? The problem is that it creates a feedback cycle.

It works like this: Theorists get funding because they write about hypothetical particles that experiments can look for. Experimentalists get funding to search for the hypothetical particles, which encourages more theorists to write papers about those particles, which makes the particles appear more interesting, which gives rise to more experiments. Rinse and repeat.

The result is a lot of papers. It looks really productive, but there is no reason to think this cycle will converge on a theory that is an actually correct description of nature. More likely, it will converge on a theory that can be eternally amended, so that one needs ever better experiments to find the particles. Which is basically what has been going on for the past 40 years.

So, Orzel asks, perplexed: Does Hossenfelder actually expect scientists to think before they spend money? I actually do.

The foundations of physics have seen 40 years of stagnation. Why? It is clearly neither a lack of theories nor a lack of experiments, because we have seen plenty of both. Before asking for money to continue this madness, everyone in the field should think about what is going wrong and what to do about it.

Wednesday, July 03, 2019

Job opening: Database specialist/Software engineer

I am looking for a database specialist to help with our SciMeter project. The candidate should be comfortable with Python, SQL, and Linux, and have experience with backend web programming. Text mining skills would come in handy.

This is paid contract work which has to be completed by the end of the calendar year. So, if you are interested in the job, you should have some time available in the coming months. You will be working with our team of three people. It does not matter to us where you are located as long as you communicate with us in a timely manner.

If you are interested in the job, please send a brief CV and documentation of prior completed work to with the subject "SciMeter Job 2019". I will explain details of the assignment and payment to interested candidates by email.