Friday, July 19, 2019

M is for Maggot, N is for Nonsense

[Image: wormy apple. Source: pinclipart.com]
Imagine you bite into an apple and find a beheaded maggot. Yuck! But it could have been worse. Had you found only half a maggot, you’d have eaten more of it. Worse still, you might have found only a quarter of a maggot, or a hundredth, or a thousandth. Indeed, if you take the limit of the maggot fraction to zero, the worst possible case must be biting into an apple and not finding a maggot at all.

Wait, what? That doesn’t make sense. Certainly a maggot-free apple is not maximally yucky. Where did our math fail us?

It didn’t, really. The beheaded maggot is an example of a discontinuous or “singular” limit, originally due to Michael Berry*. You know you have a discontinuous limit if the function whose limit you are taking (that’s the increasing “yuck factor” of the maggot) does not approach the value of the function at the limit (unyucky).

A less fruity example is taking the y-th power of x and sending y to infinity. If x is any positive number smaller than 1, the limit is zero. If x is equal to 1, every value of y gives back 1. If x is larger than 1, the limit is infinity. If you plot the limit of y to infinity as a function of x, it’s discontinuous at x equal to 1.
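To see the discontinuity numerically, here is a minimal Python sketch (my own illustration, not part of Berry’s argument):

    # The pointwise limit of x^y for y to infinity is discontinuous at x = 1.
    xs = [0.5, 0.9, 1.0, 1.1, 2.0]
    for y in [10, 100, 1000]:
        print(f"y = {y:4d}:", [f"{x**y:.3g}" for x in xs])

    # As y grows, the values approach 0 for x < 1, stay at 1 for x = 1,
    # and blow up for x > 1: the limit function jumps from 0 to 1 to infinity.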

Such singular limits are not just mathematical curiosities. We have them in physics too.

For example in thermodynamics, when we take the limit in which the number of constituents of a system becomes infinitely large, we see phase transitions where some quantities, such as the derivative of specific heat, become discontinuous. This is, of course, strictly speaking an unrealistic limit because the number of constituents may become very large, but never actually infinite. However, the limit isn’t always unrealistic.

Take the example of massive gravity. In general relativity, gravitational waves propagate with the speed of light and the particle associated with them – the graviton – is massless. You can modify general relativity so that the graviton has a mass. However, if you then let the graviton mass go to zero, you do not get back general relativity. The reason is that if the graviton mass is not zero, then it has additional polarizations and those are independent of the mass as long as the mass isn’t zero**.

The same issue appears if you have massless fields that can propagate in additional dimensions of space. This too gives rise to additional polarizations, which don’t necessarily disappear even if you take the size of the extra dimensions to zero.

Discontinuous limits are often a sign that you have forgotten to keep track of global, as opposed to local, properties. If you, for example, take the radius of a sphere to infinity, the curvature will go to zero, but the result is not an infinitely extended plane. For this reason, there are certain solutions in general relativity that will not approximate each other as you think they should. In a space with a negative cosmological constant, for example, black hole horizons can be infinitely extended planes. But these solutions no longer exist if the cosmological constant vanishes. In that case, black hole horizons have to be spherical.

Why am I telling you that? Because discontinuous limits should make you skeptical about any supposed insights gained into quantum gravity by using calculations in Anti-de Sitter space.

Anti-de Sitter (AdS) space, to remind you, is a space with a negative cosmological constant. It is popular among string theorists because they know how to make calculations in this space. Trouble is, the cosmological constant in our universe is positive. And there is no reason to think the limit of taking the cosmological constant from negative values to positive values is continuous. Indeed, it almost certainly is not, because the very reason that string theorists prefer calculations in AdS is that this space provides additional structure that exists for any negative value of the cosmological constant, and suddenly vanishes if the value is zero.

String theorists usually justify working with a negative cosmological constant by arguing it can teach us something about quantum gravity in general. That may be so or it may not be so. The case with the negative cosmological constant resembles that of finding a piece of a maggot in your apple. I find it hard to swallow.


* ht Tim Palmer
** there are ways to fix this limiting behavior so that you do get back general relativity.

Wednesday, July 10, 2019

Away Note

I will be away for a week to attend SciFoo 2019. Please expect blogging to be sparse and comments to be stuck in the queue longer than usual.

Tuesday, July 09, 2019

Why the multiverse is religion, not science.

This is the 5th and last part in my series to explain why the multiverse is not a scientific hypothesis. The other parts are: 1. Does the Higgs-boson exist? 2. Do I exist? 3. Does God exist? and 4. The multiverse hypothesis.

I put together these videos because I am frustrated that scientists dismiss the issue unthinkingly. This is not a polemical argument and it’s not meant as an insult. But believing in the multiverse is logically equivalent to believing in god, and therefore it’s religion, not science.

To see why, let me pull together what I laid out in my previous videos. Scientists say that something exists if it is useful to describe observations. By “useful” I mean it is simpler than just collecting data. You can postulate the existence of things that are not useful to describe observations, such as gods, but this is no longer science.

Universes besides our own are logically equivalent to gods. They are unobservable by assumption, hence they can exist only in a religious sense. You can believe in them if you want to, but they are not part of science.

I know that this is not a particularly remarkable argument. But physicists seem to have a hard time following it, especially those who happen to work on the multiverse. Therefore, let me sort out some common misunderstandings.

First. The major misunderstanding is that I am saying the multiverse does not exist. But this is not what I am saying. I am saying science does not tell us anything about universes we cannot observe, therefore claiming they exist is not science.

Second. They will argue the multiverse is simple. Most physicists who are in favor of the multiverse say it’s scientific because it’s simpler to assume that all universes of a certain type exist than it is to assume that only one of them exists.

That’s a questionable claim. But more importantly, it’s beside the point. The simplest assumption is no assumption. And you do not need to make any statement about the existence of the multiverse to explain our observations. Therefore, science says, you should not. As I said, it’s the same with the multiverse as with god. It’s an unnecessary assumption. Not wrong, but superfluous.

You also do not need to postulate the existence of our universe, of course. No scientist ever does that. That would be totally ridiculous.

Third. They’ll claim the existence of the multiverse is a prediction of their theory.

It’s not. That’s just wrong. Just because you can write down a theory for something, doesn’t mean it exists*. We determine that something exists, in the scientific sense, if it is useful to describe observation. That’s exactly what the multiverse is not.

Fourth. But then you are saying that discussing what’s inside a black hole is also not science.

That’s equally wrong. Other universes are not science because you cannot observe them. But you can totally observe what’s inside a black hole. You just cannot come back and tell us about it. Besides, no one really thinks that the inside of a black hole will remain inaccessible forever. For these reasons, the situation is entirely different for black holes. If it were correct that the inside of black holes cannot be observed, this would indeed mean that postulating its existence is not scientific.

Fifth. But there are types of multiverses that have observable consequences.

That’s right. Physicists have come up with certain types of multiverses that can be falsified. The problem with these ideas is conceptually entirely different. It’s that there is no reason to think we live in such multiverses to begin with. The requirement that a hypothesis must be falsifiable is certainly necessary to make the hypothesis scientific, but not sufficient. I previously explained this here.

To sum it up. The multiverse is certainly an interesting idea and it attracts a lot of public attention. There is nothing wrong with that in principle. Entertainment has a value and so has thought-stimulating discussion. But do not confuse the multiverse with science, because it is not.



* Revised this sentence after two readers misunderstood the previous version.

Update: The video now has German and Italian subtitles. To see those, click on "CC" in the YouTube toolbar. Choose language under settings/gear icon.

Sunday, July 07, 2019

Because Science Matters

[Photo: Michael Sentef]

Another day, another lecture. This time I am in Hamburg, at DESY, Germany’s major particle physics center.

My history with DESY is an odd one, which is to say there is none, despite the fact that fifteen years ago I was awarded Germany’s most prestigious young researcher grant, the Emmy-Noether fellowship, to work in Hamburg on particle physics phenomenology. The Emmy-Noether fellowship is a five-year grant that does not only pay the principal investigator but also comes with salaries for a small group. It’s basically the jackpot of German postdoc funding.

I declined it.

I hadn’t thought of this for a long time, but here I am in Hamburg, finally getting to see what my life might have looked like in that parallel world where I became a particle physicist. It looks like I’ll be late.

The taxi driver circles around a hotel and insists in a heavy Polish accent that this must be the right place because “there’s nothing after that”. To make his point he waves at trees and construction areas that stretch further up the road.

I finally manage to convince him that, really, I’m not looking for a hotel. A kilometer later he pulls into an anonymous driveway where a man in uniform asks him to stop. “See, this wrong!” the taxi-man squeaks and attempts to turn around when I spot a familiar sight: the cover of my book, on a poster, next to the entrance.

“I’m supposed to give that talk,” I tell the man in uniform, “At two pm.” He looks at his watch. It’s a quarter past two.

I arrive at the lecture hall 20 minutes late, mostly due to a delayed train, but also, I note with a guilty conscience, because I decided not to stay for the night. With too much traveling in my life already, I have become one of those terrible people who arrive just before their talk and vanish directly afterwards. I used to call it the “In and Out Lecture”, inspired by an American fast food chain with the laxative name “In and Out Burger”. A friend of mine more aptly dubbed it “Blitzkrieg Seminar.”

The room is well-filled. I am glad to see the audience was kept in a good mood with drinks and snacks. Within minutes, I am wired up and ready to speak about the troubles in the foundations of physics.

Shortly before my arrival, I learned that some particle physicists had complained I was even invited. This isn’t the first time this has happened. On another occasion some tried to un-invite me, albeit eventually unsuccessfully. They tend to be disappointed when it turns out I’m not a fire-spewing dragon but a middle-aged mother of two who just happens to know a lot about theory development in high energy physics.

Most of them, especially the experimentalists, don’t even find my argument all that disagreeable – at least at first sight. Relying on beauty has historically not worked well in physics, and it isn’t working at present, no doubt about this. To make progress, then, we should take a clue from history and focus on resolving inconsistencies in our present description of nature, either inconsistencies between theory and experiment, or internal inconsistencies. So far, they’re usually with me.

Where my argument becomes disagreeable is when I draw consequences. There is no inconsistency to be resolved in the energy range that a next larger collider could reach. It would measure some constants to better precision, all right, but that’s not worth $20 billion.

Those 20 billion dollars, by the way, are merely the estimated construction cost for CERN’s planned Future Circular Collider (FCC). They do not include operation costs. The facility would run for about 25 years. Operation costs of the current machine, the Large Hadron Collider (LHC), are about $1 billion per year already, and with the FCC, expenses for electricity and staff are bound to increase. That means the total cost for the FCC easily exceeds $40 billion.
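As a back-of-envelope check, here is that arithmetic spelled out (using only the figures above; the actual FCC operation costs are not known yet):

    construction = 20e9        # estimated FCC construction cost, USD
    operation_per_year = 1e9   # current LHC operation cost, USD/year, a lower bound for the FCC
    years = 25                 # planned runtime
    total = construction + operation_per_year * years
    print(f"total: at least ${total / 1e9:.0f} billion")  # 45 billion, hence "easily exceeds $40 billion"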

That’s a lot of money. And the measurements this next larger collider could make would deliver information that won’t be useful in the next 100 or maybe 5000 years. Now is not the right time for this.

At the risk of oversimplifying an 80,000-word message: we have better things to do. Figure out what’s with dark matter, quantum gravity, or the measurement problem. There are breakthroughs waiting to be made. But we have to be careful with the next steps or risk delaying progress by further decades, if not centuries.

After my talk, in the question session, an elderly man goes on about his personal theory for something. He will later tell me about his website and complain that the scientific mainstream is ignoring his breakthrough insights.

Another elderly man insists that beauty is a good guide to the development of new natural laws. To support his point he quotes Steven Weinberg, because Weinberg, you see, likes string theory. In other words, it’s exactly the type of argument I just explained is both wrong and in the way of progress.

Another man, this one not quite as old, stands up to deliver a speech about how important particle colliders are. Several people applaud.

Next up, an agitated woman reprimands me for a typographical error on a slide. More applause. She goes on to explain the LHC has taught us a lot about inflation, a hypothetical phase of exponential expansion in the early universe. I refuse to comment. There is, I feel, no way to reason with someone who really believes this.

But hers is, I remind myself, the community I would have been part of had I accepted the fellowship 15 years ago. Now I wonder, had I taken this path, would I be that woman today, upset to learn the boat is sinking? Would I share her group’s narrative that made me their enemy? Would I, too, defend spending more and more money on larger and larger machines with less and less societal relevance?

I like to think I would not, but my reading about group psychology tells me otherwise. I would probably fight the outsider just like they do.

Another woman identifies as an experimentalist and asks me why I am against diversifying experimental efforts. I am not, of course. But economic reality is that we cannot do everything we want to do. We have to make decisions. And costs are a relevant factor.

Finally, another man asks me what experiments physicists should do. As usual when I get this question, I refuse to answer it. This is not my call to make. I cannot replace tens of thousands of experts. I can only beg them to please remember that scientists are human, too, and human judgement is affected by group affiliation. Someone, somewhere, has to take the first step to prevent social bias from influencing scientific decisions. Let it be particle physicists.

A second round of polite applause and I am done here. A few people come to shake my hand. The room empties. Someone hands me a travel reimbursement form and calls me a taxi. Soon I am on the way back to the city center and on to six more hours on the train.

I check my email and see I will have to catch up on work over the weekend, again. Not only does it not help my own research to speak about problems with the current organization of science, it’s no fun either. It’s no fun to hurt people, destroy hopes, and advocate decisions that would make their lives harder. And it’s no fun to have mud slung at me in return.

And so, as always, these trips end with me asking myself, why?, why am I doing this?

And as always, the answer I give myself is the same. Because it matters we get this right. Because progress matters. Because science matters.

Thanks for asking, I am fine. Keep it coming.

Saturday, July 06, 2019

No, we will not open a portal to a parallel universe

Colbert’s legendary quadruple facepalm.
The nutty physics story of the day comes to us thanks to Michael Brooks who reports for New Scientist that “We’ve seen signs of a mirror-image universe that is touching our own.” This headline has since spread to The Independent, according to which scientists are “attempting to open portal to a parallel universe” and the International Business Times, which wants you to believe that “Scientists Build A Portal To Find A Parallel Universe”.

Needless to say, we have not seen signs of a mirror universe, and we are not building portals to parallel universes. And if we did, trust me, you wouldn’t hear about it from New Scientist. To first approximation it is safe to assume that whatever you read in New Scientist is either not new or not science, or both.

This story is a case of both, neither new nor science. It is really – once again – about hypothetical particles that physicists have invented just because. In this case it’s particles which are exact copies of the ones that we already know, except for their handedness. These mirror-particles* do not interact with the normal particles, which is supposedly why we haven’t measured them so far. (You find instructions for how to invent particles yourself in my book, Chapter 9 in the section “Laws Like Sausages”.)

The idea of mirror-particles has been around since at least the 1960s. It’s not particularly popular among physicists, because what little we know about dark matter tells us exactly that it does not behave the same way as normal matter. So, to make mirror dark matter fit the data, you have to invent some reason for why, in the end, it is not a mirror copy of normal matter.

And then there is the problem that if the mirror matter really doesn’t interact with our normal matter you cannot measure it. So, if you want to get an experimental search funded, you have to postulate that it does interact. Why? Because otherwise you can’t measure it. Sounds like circular reasoning? That’s what it is.

Now once you have postulated that the hypothetical particles may interact in a way that makes them measurable, then you can make an experiment and try to actually measure them. It is such a measurement that this story is about.

Concretely, it seems to be about the experiment laid out in this paper:
    New Search for Mirror Neutrons at HFIR
    arXiv:1710.00767 [hep-ex]
The authors propose to search for evidence of neutrons oscillating into mirror neutrons.

Now, look, this is exactly the type of ill-motivated experiment that I complained about the other day. Can you do this experiment? Sure. Will it help you solve any of the open problems in the foundations of physics? Almost certainly not. Why not? Because we have no reason to think that these particular particles exist and interact with normal matter in just the way necessary to measure them.

It is not a coincidence that we see so many of these small-scale experiments now; this is a strategic decision of the community. Indeed, you find this strategy quoted in the paper for justification: the 2014 Report of the Particle Physics Project Prioritization Panel (P5) stressed the importance of considering “every feasible avenue” to look for new types of dark matter particles.

Add to this that, some months ago, the Department of Energy announced a plan to provide $24 million for the development of new projects to study dark matter, which will undoubtedly fuel physicists’ enthusiasm for thinking up even more new particles.

This, folks, is only the beginning.

I cannot stress enough how idiotic this so-called “strategy” is. You will see million after million vanish into searches for particles invented simply because you can look for them.

If you do not understand why I say this is insanity and not proper science, please read my article in which I explain that falsifiability is necessary but not sufficient to make a hypothesis scientific. This strategy is based on a basic misunderstanding of the philosophy of science. It is an institutionalized form of motivated reasoning, a mistake that will cost taxpayers tens of millions.

The only good thing about this strategy is that hopefully the media will soon get tired of writing about each and every little lab’s search for non-existent particles.


* Not to be confused with supersymmetric partner particles. Different story entirely.

Thursday, July 04, 2019

Physicists still perplexed I ask for reasons to finance their research

Chad Orzel is a physics prof whose research is primarily in atomic physics. He also blogs next door and is a good-humored and eminently reasonable guy, so I hope he will forgive me if I pick on him a little.

Two weeks ago I complained about the large number of dark matter experiments that hunt for hypothetical particles, particles invented just because you can hunt for them. Chad’s response to this is “Physicists Gotta Physics” and “I don't know what else Hossenfelder expects the physicists involved to do.”

To which I wish to answer: If you don’t know anything sensible to do with your research funds, why should we pay you? Less flippantly:
Dear Chad,

I find it remarkable how many researchers think they are entitled to tax-money. I am saddened to see you are one of them. Really, as a science communicator you should know better. “We have to do something, so let us do anything” does not convince me, and I doubt it will convince anyone else. Try harder.
But I admit it is unfair to pick on Chad in particular, because his reaction to my blogpost showcases a problem I encounter with experimentalists all the time. They seem to not understand just how badly motivated the theories are that they use to justify their work.

By and large, experimentalists like to think that looking for those particles is business as usual, similar to how we looked for neutrinos half a century ago, or how we looked for the heavier quarks in the 1990s.

But this isn’t so. These new inventions are of considerably lower quality. We had theoretically sound reasons to think that neutrinos and heavy quarks exist, but there are no similarly sound reasons to think that these new dark matter particles should exist.

Philosophers would call the models strongly underdetermined. I would call them wishful thinking. They’re little more than guesses. Doing these experiments, therefore, is like playing roulette on an infinitely large table: You will lose with probability 1. It is almost certain to waste time and money. And the big tragedy is that with some thinking, we could invest resources much better.

Orzel complains that I am exaggerating how specific these searches are, but let us look at some of them.

Like this one about using the Aharonov-Bohm effect. It proposes to search for a hypothetical particle called the dark photon which may mix with the actual photon and may form a condensate which may have excitations that may form magnetic dipoles which you may then detect. Or, more likely, just doesn’t exist.

Or let us look at this other paper, which tests for space-time varying massive scalar fields that are non-universally coupled to standard model particles. Or, more likely, don’t exist.

Some want to look for medium mass weakly coupled particles that scatter off electrons. But we have no reason to think that dark matter particles are of that mass, couple with that strength, or couple to electrons to begin with.

Some want to look for something called the invisible axion, which is a very light particle that couples to photons. But we have no reason to think that dark matter couples to photons.

Some want to look for domain walls, or weird types of nuclear matter, or whole “hidden sectors”, and again we have no reason to think these exist.

Fact is, we presently have no reason to think that dark matter particles affect normal matter in any other way than by the gravitational force. Indeed, we don’t even have reason to think it is a particle.

Now, as I previously said I don’t mind if experimentalists want to play with their gadgets (at least not unless their toys cost billions, don’t get me started). What I disapprove of is if experimentalists use theoretical fantasies to motivate their research. Why? Think about it for a moment before reading on.

Done thinking? The problem is that it creates a feedback cycle.

It works like this: Theorists get funding because they write about hypothetical particles that experiments can look for. Experimentalists get funding to search for the hypothetical particles, which encourages more theorists to write papers about those particles, which makes the particles appear more interesting, which gives rise to more experiments. Rinse and repeat.

The result is a lot of papers. It looks really productive, but there is no reason to think this cycle will converge on a theory that is an actually correct description of nature. More likely, it will converge on a theory that can be eternally amended so that one needs ever better experiments to find the particles. Which is basically what has been going on for the past 40 years.

So, Orzel asks, perplexed, does Hossenfelder actually expect scientists to think before they spend money? I actually do.

The foundations of physics have seen 40 years of stagnation. Why? It is clearly neither a lack of theories nor a lack of experiments, because we have seen plenty of both. Before asking for money to continue this madness, everyone in the field should think about what is going wrong and what to do about it.

Wednesday, July 03, 2019

Job opening: Database specialist/Software engineer

I am looking for a database specialist to help with our SciMeter project. The candidate should be comfortable with Python, SQL, and Linux, and have experience with backend web programming. Text mining skills would come in handy.

This is paid contract work which has to be completed by the end of the calendar year. So, if you are interested in the job, you should have some spare time in the coming months. You will be working with our team of three people. It does not matter to us where you are located as long as you communicate with us in a timely manner.

If you are interested in the job, please send a brief CV and a documentation of prior completed work to hossi@fias.uni-frankfurt.de with the subject "SciMeter Job 2019". I will explain details of the assignment and payment to interested candidates by email.

Friday, June 28, 2019

Quantum Supremacy: What is it and what does it mean?

Rumors are that later this year we will see Google’s first demonstration of “quantum supremacy”. This is when a quantum computer outperforms a conventional computer. It’s about time that we talk about what this means.


Before we get to quantum supremacy, I have to tell you what a quantum computer is. All conventional computers work with quantum mechanics because their components rely on quantum behavior, like electron bands. But the operations that a conventional computer performs are not quantum.

Conventional computers store and handle information in the form of bits that can take on two values, say 0 and 1, or up and down. A quantum computer, on the other hand, stores information in the form of quantum-bits, or q-bits, that can take on any combination of 0 and 1. Operations on a quantum computer can then entangle the q-bits, which allows a quantum computer to solve certain problems much faster than a conventional computer.

Calculating the properties of molecules or materials, for example, is one of those problems that quantum computers can help with. In principle, properties like conductivity or rigidity, or even color, can be calculated from the atomic build-up of a material. We know the equations. But we cannot solve these equations with conventional computers. It would just take too long.

To give you an idea of how much more a quantum computer can do, think about this: One can simulate a quantum computer on a conventional computer just by numerically solving the equations of quantum mechanics. If you do that, then the computational burden on the conventional computer increases exponentially with the number of q-bits that you try to simulate. You can do 2 or 4 q-bits on a personal computer. But already with 50 q-bits you need a cluster of supercomputers. Anything beyond 50 or so q-bits cannot presently be calculated, at least not in any reasonable amount of time.
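To put numbers on this exponential growth, here is a small sketch (assuming a brute-force state-vector simulation with 16 bytes per complex amplitude):

    # A state of n q-bits is described by 2**n complex amplitudes.
    # At 16 bytes per amplitude, the memory alone grows exponentially:
    for n in [4, 20, 30, 50]:
        gib = 16 * 2**n / 2**30
        print(f"{n:2d} q-bits: {gib:.6g} GiB")

    # 30 q-bits already need 16 GiB; 50 q-bits need about 16 million GiB,
    # which is why brute-force simulation stops at around 50 q-bits.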

So what is quantum supremacy? Quantum supremacy is the event in which a quantum computer outperforms the best conventional computers on a specific task. It needs to be a specific task because quantum computers are really special-purpose machines whose powers help with particular calculations.

However, to come back to the earlier example, if you want to know what a molecule does, you need millions of q-bits and we are far away from that. So how then do you test quantum supremacy? You let a quantum computer do what it does best, that is being a quantum computer.

This is an idea proposed by Scott Aaronson. If you set up a quantum computer in a suitable way, it will produce probability distributions of measurable variables. You can try and simulate those measurement outcomes on a conventional computer, but this would take a very long time. So by letting a conventional computer compete with a quantum computer on this task, you can demonstrate that the quantum computer does something a classical computer just is not able to do.
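To make this concrete, here is a toy version of the idea: a minimal state-vector simulator that runs a small random circuit and then samples bit strings from the resulting distribution. This is my own sketch; real supremacy tests use far more q-bits and much deeper circuits:

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def apply_1q(state, gate, q, n):
        # Apply a 2x2 gate to q-bit q of an n-q-bit state vector.
        s = np.moveaxis(state.reshape([2] * n), q, 0)
        s = np.tensordot(gate, s, axes=([1], [0]))
        return np.moveaxis(s, 0, q).reshape(-1)

    def apply_cz(state, q1, q2, n):
        # Controlled-Z: flip the sign of amplitudes where both q-bits are 1.
        s = state.reshape([2] * n).copy()
        idx = [slice(None)] * n
        idx[q1] = idx[q2] = 1
        s[tuple(idx)] *= -1
        return s.reshape(-1)

    n = 5                                  # number of q-bits
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                         # start in |00000>

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for layer in range(8):                 # layers of random gates
        for q in range(n):
            theta = rng.uniform(0, 2 * np.pi)
            phase = np.array([[1, 0], [0, np.exp(1j * theta)]])
            state = apply_1q(state, H @ phase, q, n)
        for q in range(n - 1):             # entangle neighboring q-bits
            state = apply_cz(state, q, q + 1, n)

    probs = np.abs(state)**2               # the distribution to be sampled
    samples = rng.choice(2**n, size=10, p=probs)
    print([format(s, f"0{n}b") for s in samples])

The quantum hardware samples from this distribution natively; the classical simulation above has a cost that explodes with n, which is exactly the gap a supremacy demonstration exploits.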

Exactly at which point someone will declare quantum supremacy is a little ambiguous because you can always argue that maybe one could have used better conventional computers or a better algorithm. But for practical purposes this really doesn’t matter all that much. The point is that it will show quantum computers really do things that are difficult to calculate with a conventional computer.

But what does that mean? Quantum supremacy sounds very impressive until you realize that most molecules have quantum processes that also exceed the computational capacities of present-day supercomputers. That is, after all, the reason we want quantum computers. And the generation of random variables that can be used to check quantum supremacy is not good for actually calculating anything useful. So that makes it sound as if the existing quantum computers are really just new toys for scientists.

What would it take to calculate anything useful with a quantum computer? Estimates about this vary between half a million and a billion q-bits, depending on just exactly what you think is “useful” and how optimistic you are that algorithms for quantum computers will improve. So let us say, realistically it would take a few million q-bits.

When will we get to see a quantum computer with a few million q-bits? No one knows. The problem is that the presently most dominant approaches are unlikely to scale. These approaches are superconducting q-bits and ion traps. In neither case does anyone have any idea how to get beyond a few hundred. This is both an engineering problem and a cost-problem.

And this is why, in recent years, there has been a lot of talk in the community about NISQ computers, that is, “noisy intermediate-scale quantum computers”. This is really a term invented to make investors believe that quantum computing will have practical applications in the next decade or so. The trouble with NISQs is that while it is plausible that they soon will be practically feasible, no one knows how to calculate something useful with them.

As you have probably noticed, I am not very optimistic that quantum computers will have practical applications any time soon. In fact, I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.

Nevertheless, quantum supremacy is without doubt going to be an exciting scientific milestone.

Update June 29: Video now with German subtitles. To see those, click CC in the YouTube toolbar and choose language under the settings/gear icon.

Wednesday, June 26, 2019

Win a free copy of "Lost in Maths" in French

My book “Lost in Math: How Beauty Leads Physics Astray” was recently translated into French. Today is your chance to win a free copy of the French translation! The first three people who submit a comment to this blogpost with a brief explanation of why they are interested in reading the book will be the lucky winners.

The only entry requirement is that you must be willing to send me a mailing address. Comments submitted by email or left on other platforms do not count because I cannot compare time-stamps.

Update: The books are gone.

Monday, June 24, 2019

30 years from now, what will a next larger particle collider have taught us?

The year is 2049. CERN’s mega-project, the Future Circular Collider (FCC), has been in operation for 6 years. The following is the transcript of an interview with CERN’s director, Johanna Michilini (JM), conducted by David Grump (DG).

DG: “Prof Michilini, you have guided CERN through the first years of the FCC. How has your experience been?”

JM: “It has been most exciting. Getting to know a new machine always takes time, but after the first two years we have had stable performance and collected data according to schedule. The experiments have since seen various upgrades, such as replacing the thin gap chambers and micromegas with quantum fiber arrays that have better counting rates and have also installed… Are you feeling okay?”

DG: “Sorry, I may have briefly fallen asleep. What did you find?”

JM: “We have measured the self-coupling of a particle called the Higgs-boson and it came out to be 1.2 plus minus 0.3 times the expected value which is the most amazing confirmation that the universe works as we thought in the 1960s and you better be in awe of our big brains.”

DG: “I am flat on the floor. One of the major motivations to invest into your institution was to learn how the universe was created. So what can you tell us about this today?”

JM: “The Higgs gives mass to all fundamental particles that have mass and so it plays a role in the process of creation of the universe.”

DG: “Yes, and how was the universe created?”

JM: “The Higgs is a tiny thing but it’s the greatest particle of all. We have built a big thing to study the tiny thing. We have checked that the tiny thing does what we thought it does and found that’s what it does. You always have to check things in science.”

DG: “Yes, and how was the universe created?”

JM: “You already said that.”

DG: “Well isn’t it correct that you wanted to learn how the universe was created?”

JM: “That may have been what we said, but what we actually meant is that we will learn something about how nuclear matter was created in the early universe. And the Higgs plays a role in that, so we have learned something about that.”

DG: “I see. Well, that is somewhat disappointing.”

JM: “If you need $20 billion, you sometimes forget to mention a few details.”

DG: “Happens to the best of us. All right, then. What else did you measure?”

JM: “Ooh, we measured many many things. For example we improved the precision by which we know how quarks and gluons are distributed inside protons.”

DG: “What can we do with that knowledge?”

JM: “We can use that knowledge to calculate more precisely what happens in particle colliders.”

DG: “Oh-kay. And what have you learned about dark matter?”

JM: “We have ruled out 22 of infinitely many hypothetical particles that could make up dark matter.”

DG: “And what’s with the remaining infinitely many hypothetical particles?”

JM: “We are currently working on plans for the next larger collider that would allow us to rule out some more of them because you just have to look, you know.”

DG: “Prof Michilini, we thank you for this conversation.”

Thursday, June 20, 2019

Away Note

I'll be in the Netherlands for a few days to attend a workshop on "Probabilities in Cosmology". Back next week. Wish you a good Summer Solstice!

Wednesday, June 19, 2019

No, a next larger particle collider will not tell us anything about the creation of the universe

LHC magnets. Image: CERN.
A few days ago, Scientific American ran a piece by a CERN physicist and a philosopher about particle physicists’ plans to spend $20 billion on a next larger particle collider, the Future Circular Collider (FCC). To make their case, the authors have dug up a quote from 1977 and ignored the 40 years after this, which is a truly excellent illustration of all that’s wrong with particle physics at the moment.

I currently don’t have time to go through this in detail, but let me pick the most egregious mistake. It’s right in the opening paragraph where the authors claim that a next larger collider would tell us something about the creation of the universe:
“[P]article physics strives to push a diverse range of experimental approaches from which we may glean new answers to fundamental questions regarding the creation of the universe and the nature of the mysterious and elusive dark matter.

Such an endeavor requires a post-LHC particle collider with an energy capability significantly greater than that of previous colliders.”

We previously encountered this sales-pitch in CERN’s marketing video for the FCC, which claimed that the collider would probe the beginning of the universe.

But neither the LHC nor the FCC will tell us anything about the “beginning” or “creation” of the universe.

What these colliders can do is create nuclear matter at high density by slamming heavy atomic nuclei into each other. Such matter probably also existed in the early universe. However, even collisions of large nuclei create merely tiny blobs of such nuclear matter, and these blobs fall apart almost immediately. In case you prefer numbers over words, they last about 10⁻²³ seconds.

This situation is nothing like the soup of plasma in the expanding space of the early universe. It is therefore highly questionable already that these experiments can tell us much about what happened back then.

Even optimistically, the nuclear matter that the FCC can produce has a density about 70 orders of magnitude below the density at the beginning of the universe.

And even if you are willing to ignore the tiny blobs and their immediate decay and the 70 orders of magnitude, then the experiments still tell us nothing about the creation of this matter, and certainly not about the creation of the universe.

The argument that large colliders can teach us anything about the beginning, origin, or creation of the universe is manifestly false. The authors of this article either knew this and decided to lie to their readers, or they didn’t know it, in which case they have begun to believe their own institution’s marketing. I’m not sure which is worse.

And as I have said many times before, there is no reason to think a next larger collider would find evidence of dark matter particles. Somewhat ironically, the authors spend the rest of their article arguing against theoretical arguments, but of course the appeal to dark matter is a bona-fide theoretical argument.

In any case, it pains me to see not only that particle physicists are still engaging in false marketing, but that Scientific American plays along with it.

How about sticking with the truth? The truth is that a next larger collider costs a shitload of money and will most likely not teach us much. If progress in the foundations of physics is what you want, this is not the way forward.

Tuesday, June 18, 2019

Brace for the oncoming deluge of dark matter detectors that won’t detect anything

Imagine an unknown disease spreads, causing temporary blindness. Most patients recover after a few weeks, but some never regain eyesight. Scientists rush to identify the cause. They guess the pathogen’s shape and, based on this, develop test strips and antigens. If one guess doesn’t work, they’ll move on to the next.

Doesn’t quite sound right? Of course it does not. Trying to identify pathogens by guesswork is sheer insanity. The number of possible shapes is infinite. The guesses will almost certainly be wrong. No funding agency would pour money into this.

Except they do. Not for pathogen identification, but for dark matter searches.

In the past decades, the searches for the most popular dark matter particles have failed. Neither WIMPs nor axions have shown up in any detector, of which there have been dozens. Physicists have finally understood this is not a promising method. Unfortunately, they have not come up with anything better.

Instead, their strategy is now to fund any proposed experiment that could plausibly be said to maybe detect something that could potentially be a hypothetical dark matter particle. And since there are infinitely many such hypothetical particles, we are now well on the way to building infinitely many detectors. DNA, carbon nanotubes, diamonds, old rocks, atomic clocks, superfluid helium, qubits, Aharonov-Bohm, cold atom gases, you name it. Let us call it the equal opportunity approach to dark matter search.

As it should be, everyone benefits from the equal opportunity approach. Theorists invent new particles (papers will be written). Experimentalists use those invented particles as motivation to propose experiments (more papers will be written). With a little luck they get funding and do the experiment (even more papers). Eventually, experiments conclude they didn’t find anything (papers, papers, papers!).

In the end we will have a lot of papers and still won’t know what dark matter is. And this, we will be told, is how science is supposed to work.

Let me be clear that I am not strongly opposed to such medium-scale experiments, because they typically cost “merely” a few million dollars. A few million here and there don’t put overall progress at risk. Not like, say, building a next larger collider would.

So why not live and let live, you may say. Let these physicists have some fun with their invented particles and their experiments that don’t find them. What’s wrong with that?

What’s wrong with that (besides the fact that a million dollars is still a million dollars) is that it will almost certainly lead nowhere. I don’t want to wait another 40 years for physicists to realize that falsifiability alone is not sufficient to make a hypothesis promising.

My disease analogy, like any analogy, has its shortcomings of course. You cannot draw blood from a galaxy and put it under a microscope. But metaphorically speaking, that’s what physicists should do. We have patients out there: All those galaxies and clusters which are behaving in funny ways. Study those until you have good reason to think you know what the pathogen is. Then, build your detector.

Not all types of dark matter particles do an equally good job of explaining structure formation and the behavior of galaxies and all the other data we have. And particle dark matter is not the only explanation for the observations. Right now, the community makes no systematic effort to identify the best model to fit the existing data. And, needless to say, that data could be better, both in terms of sky coverage and resolution.

The equal opportunity approach relies on guessing a highly specific explanation and then setting out to test it. This way, null-results are a near certainty. A more promising method is to start with highly non-specific explanations and zero in on the details.

The failures of the past decades demonstrate that physicists must think more carefully before commissioning experiments to search for hypothetical particles. They still haven’t learned the lesson.

Sunday, June 16, 2019

Book review: “Einstein’s Unfinished Revolution” by Lee Smolin

Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum
By Lee Smolin
Penguin Press (April 9, 2019)

Popular science books cover a spectrum from exposition to speculation. Some writers, like Chad Orzel or Anil Ananthaswamy, stay safely on the side of established science. Others, like Philip Ball in his recent book, keep their opinions to the closing chapter. I would place Max Tegmark’s “Mathematical Universe” and Lee Smolin’s “Trouble With Physics” somewhere in the middle. Then, on the extreme end of speculation, we have authors like Roger Penrose and David Deutsch who use books to put forward ideas in the first place. “Einstein’s Unfinished Revolution” lies on the speculative end of this spectrum.

Lee is very upfront about the purpose of his writing. He is dissatisfied with the current formulation of quantum mechanics. It sacrifices realism, and he thinks this is too much to give up. In the past decades, he has therefore developed his own approach to quantum mechanics, the “ensemble interpretation”. His new book lays out how this ensemble interpretation works and what its benefits are.

Before getting to this, Lee introduces the features of quantum theories (superpositions, entanglement, uncertainty, the measurement postulate, etc.) and discusses the advantages and disadvantages of the major interpretations of quantum mechanics (Copenhagen, many worlds, pilot wave, collapse models). He deserves applause for also mentioning the Montevideo interpretation and superdeterminism, though clearly he doesn’t like either. I have found his evaluation of these approaches overall balanced and fair.

In the later chapters, Lee comes to his own ideas about quantum mechanics and how these tie together with his other work on quantum gravity. I have not been able to follow all his arguments here, especially not on the matter of non-locality.

Unfortunately, Lee doesn’t discuss his ensemble interpretation half as critically as the other approaches. From reading his book you may get away with the impression he has solved all problems. Let me therefore briefly mention the most obvious shortcomings of his approach. (a) To quantify the similarity of two systems you need to define a resolution. (b) This will violate Lorentz-invariance, which means it’s hard to make compatible with standard model physics. (c) You better not ask about virtual particles. (d) If a system gets its laws from precedents, where do the first laws come from? Lee tells me that these issues have been discussed in the papers he lists on his website.

Like all of Lee’s previous books, this one is well-written and engaging, and if you liked Lee’s earlier books you will probably like this one too. The book has the occasional paragraph that I think will go over many readers’ heads, but most of it should be understandable with little or no prior knowledge. I have found this book particularly valuable for spelling out the author’s philosophical stance. You may not agree with Lee, but at least you know where he is coming from.

I recommend this book to anyone who is dissatisfied with the current formulation of quantum mechanics, or who wants to understand why others are dissatisfied with it. It also serves well as a quick introduction to current research in the foundations of quantum mechanics.

[Disclaimer: free review copy.]

Thursday, June 13, 2019

Physicists are out to unlock the muon’s secret

Fermilab g-2 experiment. [Image: Glukicov/Wikipedia]
Physicists count 25 elementary particles that, for all we presently know, cannot be divided any further. They collect these particles and their interactions in what is called the Standard Model of particle physics.

But the matter around us is made of merely three particles: up and down quarks (which combine into protons and neutrons, which in turn combine into atomic nuclei) and electrons (which surround atomic nuclei). These three particles are held together by a number of exchange particles, notably the photon and gluons.

What’s with the other particles? They are unstable and decay quickly. We only know of them because they are produced when other particles bang into each other at high energies, something that happens in particle colliders and when cosmic rays hit Earth’s atmosphere. By studying these collisions, physicists have found out that the electron has two bigger brothers: The muon (μ) and the tau (τ).

The muon and the tau are pretty much the same as the electron, except that they are heavier. Of these two, the muon has been studied more closely because it lives longer – about 2 × 10⁻⁶ seconds.

The muon turns out to be... a little odd.

Physicists have known for a while, for example, that cosmic rays produce more muons than expected. This deviation from the predictions of the standard model is not hugely significant, but it has stubbornly persisted. It has remained unclear, though, whether the blame is on the muons, or the blame is on the way the calculations treat atomic nuclei.

Next, the muon (like the electron and tau) has a partner neutrino, called the muon-neutrino. The muon neutrino also has some anomalies associated with it. No one currently knows whether those are real or measurement errors.

The Large Hadron Collider has seen a number of slight deviations from the predictions of the standard model which go under the name lepton anomaly. They basically tell you that the muon isn’t behaving like the electron, which (all other things equal) really it should. These deviations may just be random noise and vanish with better data. Or maybe they are the real thing.

And then there is the gyromagnetic moment of the muon, usually denoted just g. This quantity determines how a muon’s spin precesses if you put it into a magnetic field. This value should be 2 plus quantum corrections, and the quantum corrections (the g-2) you can calculate very precisely with the standard model. Well, you can if you have spent some years learning how to do that, because these are hard calculations indeed. Thing is, though, the result of the calculation doesn’t agree with the measurement.
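For a sense of scale of those quantum corrections, the leading term, first computed by Schwinger, is α/2π. A quick sketch (rounded values; the full standard-model prediction contains many more terms, including the hadronic contributions that dominate the theoretical uncertainty):

    import math

    # Leading quantum correction to g: a = (g - 2)/2 = alpha/(2*pi).
    alpha = 1 / 137.035999           # fine-structure constant
    a = alpha / (2 * math.pi)
    print(f"a = {a:.9f}")            # ~0.001161410
    print(f"g = {2 * (1 + a):.9f}")  # ~2.002322819

    # The measured muon value of a is ~0.00116592, so the tension between
    # experiment and theory lives in the terms beyond this leading one.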

This is the so-called muon g-2 anomaly, which we have known about since the 1960s when the first experiments ran into tension with the theoretical prediction. Since then, both the experimental precision as well as the calculations have improved, but the disagreement has not vanished.

The most recent experimental data comes from a 2006 experiment at Brookhaven National Lab, and it placed the disagreement at 3.7σ. That’s interesting for sure, but nothing that particle physicists get overly excited about.

A new experiment is now following up on the 2006 result: the muon g-2 experiment at Fermilab. The collaboration projects that (assuming the mean value remains the same) their better data could increase the significance to 7σ, hence surpassing the discovery standard in particle physics (which is somewhat arbitrarily set to 5σ).
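That projected jump in significance is, to first approximation, just statistics. Assuming the uncertainty is statistics-dominated and shrinks with the square root of the event count while the central value stays put, the required amount of data follows from a one-line estimate (systematic errors, which the experiment must also control, are ignored here):

    # Significance scales like sqrt(N) if the error goes as 1/sqrt(N).
    factor = (7 / 3.7) ** 2
    print(f"~{factor:.1f}x the Brookhaven statistics")  # ~3.6x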

For this experiment, physicists first produce muons by firing protons at a target (some kind of solid). This produces a lot of pions (composites of a quark and an antiquark) which decay by emitting muons. The muons are then collected in a ring equipped with magnets, in which they circle until they decay. When the muons decay, they produce two neutrinos (which escape) and a positron that is caught in a detector. From the direction and energy of the positron, one can then infer the magnetic moment of the muon.

The Fermilab g-2 experiment, which reuses parts of the hardware from the earlier Brookhaven experiment, is already running and collecting data. In a recent paper, Alexander Keshavarzi, on behalf of the collaboration, reports that they successfully completed the first physics run last year. He writes that we can expect a publication of the results from the first run in late 2019. After some troubleshooting (something about an underperforming kicker system), the collaboration is now in the second run.

Another experiment to measure more precisely the muon g-2 is underway in Japan, at the J-PARC muon facility. This collaboration too is well on the way.

While we don’t know exactly when the first data from these experiments will become available, it is clear already that the muon g-2 will be much talked about in the coming years. At present, it is our best clue for physics beyond the standard model. So, stay tuned.

Wednesday, June 12, 2019

Guest Post: A conversation with Lee Smolin about his new book "Einstein’s Unfinished Revolution"

[Tam Hunt sent me another lengthy interview, this time with Lee Smolin. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics in Canada and adjunct professor at the University of Waterloo. He is one of the founders of loop quantum gravity. In the past decades, Smolin’s interests have drifted to the role of time in the laws of nature and the foundations of quantum mechanics.]

TH: You make some engaging and bold claims in your new book, Einstein’s Unfinished Revolution, continuing a line of argument that you’ve been making over the course of the last couple of decades and a number of books. In your latest book, you argue essentially that we need to start from scratch in the foundations of physics, and this means coming up with new first principles as our starting point for re-building. Why do you think we need to start from first principles and then build a new system? What has brought us to this crisis point?

LS: The claim that there is a crisis, which I first made in my book, Life of the Cosmos (1997), comes from the fact that it has been decades since a new theoretical hypothesis was put forward that was later confirmed by experiment. In particle physics, the last such advance was the standard model in the early 1970s; in cosmology, inflation in the early 1980s. Nor has there been a completely successful approach to quantum gravity or the problem of completing quantum mechanics.

I propose finding new fundamental principles that go deeper than the principles of general relativity and quantum mechanics. In some recent papers and the book, I make specific proposals for new principles.

TH: You have done substantial work yourself in quantum gravity (loop quantum gravity, in particular) and quantum theory (suggesting your own interpretation called the “real ensemble interpretation”), and yet in this new book you seem to be suggesting that you and everyone else in foundations of physics needs to return to the starting point and rebuild. Are you in a way repudiating your own work or simply acknowledging that no one, including you, has been able to come up with a compelling approach to quantum gravity or other outstanding foundations of physics problems?

LS: There are a handful of approaches to quantum gravity that I would call partly successful. These each achieve a number of successes, which suggest that they could plausibly be at least part of the story of how nature reconciles quantum physics with space, time and gravity. It is possible, for example that these partly successful approaches model different regimes or phases of quantum gravity phenomena. These partly successful approaches include loop quantum gravity, string theory, causal dynamical triangulations, causal sets, asymptotic safety. But I do not believe that any approach to date, including these, is fully successful. Each has stumbling blocks that after many years remain unsolved.

TH: You part ways with a number of other physicists in recent years who have railed against philosophy and philosophers of physics as being largely unhelpful for actual physics. You argue instead that philosophers have a lot to contribute to the foundations of physics problems that are your focus. Have you found philosophy helpful in pursuing your physics for most of your career or is this a more recent finding in your own work? Which philosophers, in particular, do you think can be helpful in this area of physics?

LS: I would first of all suggest we revive the old idea of a natural philosopher, which is a working scientist who is inspired and guided by the tradition of philosophy. An education and immersion in the philosophical tradition gives them access to the storehouse of ideas, positions and arguments that have been developed over the centuries to address the deepest questions, such as the nature of space and time.

Physicists who are natural philosophers have the advantage of being able to situate their work, and its successes and failures, within the long tradition of thought about the basic questions.

Most of the key figures who transformed physics through its history have been natural philosophers: Galileo, Newton, Leibniz, Descartes, Maxwell, Mach, Einstein, Bohr, Heisenberg, etc. In more recent years, David Finkelstein is an excellent example of a theoretical physicist who made important advances, such as being the first to untangle the geometry of a black hole, and recognize the concept of an event horizon, who was strongly influenced by the philosophical tradition. Like a number of us, he identified as a follower of Leibniz, who introduced the concepts of relational space and time.

The abstract of Finkelstein’s key 1958 paper on what were soon to be called black holes explicitly mentions the principle of sufficient reason, which is the central principle of Leibniz’s philosophy. None of the important developments of general relativity in the 1960s and 1970s, such as those by Penrose, Hawking, Newman, Bondi, etc., would have been possible without that groundbreaking paper by Finkelstein.

I asked Finkelstein once why it was important to know philosophy to do physics, and he replied, “If you want to win the long jump, it helps to back up and get a running start.”

In other fields, we can recognize people like Richard Dawkins, Daniel Dennett, Lynn Margulis, Steve Gould, Carl Sagan, etc. as natural philosophers. They write books that argue the central issues in evolutionary theory, with the hope of changing each other’s minds. But we the lay public are able to read over their shoulders, and so have front row seats to the debates.

There are also a number of excellent philosophers of physics working today, who contribute in important ways to the progress of physics. One example is a group, centred originally at Oxford, of philosophers who have been doing the leading work on attempting to make sense of the Many Worlds formulation of quantum mechanics. This work involves extremely subtle issues such as the meaning of probability. These thinkers include Simon Saunders, David Wallace, and Wayne Myrvold; and there are equally good philosophers who are skeptical of this work, such as David Albert and Tim Maudlin.

It used to be the case, half a century ago, that philosophers who opined about physics, such as Hilary Putnam, felt qualified to do so with a bare knowledge of the principles of special relativity and single-particle quantum mechanics. In that atmosphere my teacher Abner Shimony, who had two Ph.D.s – one in physics and one in philosophy – stood out, as did a few others who could talk in detail about quantum field theory and renormalization, such as Paul Feyerabend. Now the professional standard among philosophers of physics requires a mastery of Ph.D.-level physics, as well as the ability to write and argue with the rigour that philosophy demands. Indeed, a number of the people I just mentioned have Ph.D.s in physics.

TH: One of your suggested hypotheses, the next step you take after stating your first principles, is an acknowledgment that time is fundamental, real and irreversible, effectively goring one of the sacred cows of modern physics. You made your case for this approach in your book Time Reborn, and I’m curious whether you’ve seen a softening over the last few years, with physicists and philosophers becoming more open to the idea that the passage of time is truly fundamental. Also, why wouldn’t this hypothesis be instead a first principle, if time is indeed fundamental?

LS: In my experience, there have always been physicists and philosophers open to these ideas, even if there is no consensus among those who have carefully thought the issues through.

When I thought carefully about how to state a candidate set of basic principles, it became clear that it was useful to separate principles from hypotheses about nature. Principles such as sufficient reason and the identity of indiscernibles can be realized in formulations of physics in which time is either fundamental or secondary and emergent. Hence those principles are prior to the choice of a fundamental or emergent time. So I think it clarifies the logic of the situation to call the latter choice a hypothesis rather than a principle.

TH: How does viewing time as irreversible and fundamental mesh with your principle of background independence? Doesn’t a preferred spacetime foliation, which would provide an irreversible and fundamental time, provide a background?

LS: Background independence is an aspect of the two principles of Leibniz I just referred to: 1) sufficient reason (PSR) and 2) the identity of indiscernibles (PII). Hence it is deeper than the choice of whether time is fundamental or emergent. Indeed, there are background-independent theories built on either hypothesis about time. Julian Barbour, for example, is a relationalist who develops background-independent theories in which time is emergent. I am also a relationalist, but I make background-independent models of physics in which time and its passage are fundamental.

Viewing time as fundamental and irreversible doesn’t necessarily imply a preferred foliation; by the latter you mean a foliation of a pre-existing spacetime, specified kinematically in advance of the dynamical evolution. In our energetic causal set models there does arise a notion of the present, but this is determined dynamically by the evolution of the model and so is consistent with what we mean by background independence.

The point is that the solutions to background-independent theories can have preferred frames, so long as they are generated by solving the dynamics. This is, for example, the case with cosmological solutions to general relativity.

TH: You and many other physicists have focused for many years on finding a theory of quantum gravity, effectively unifying quantum mechanics and general relativity. In describing your preferred approach to achieving a theory of quantum gravity worthy of the name, you explain why you think quantum mechanics is incomplete and why general relativity is in some key ways likely wrong. Let’s look first at quantum mechanics, which you describe as “wrong” and “incomplete.” Why is the Copenhagen school of quantum mechanics (still perhaps the most popular version of quantum theory) wrong and incomplete?

LS: Copenhagen is incomplete because it is based on an arbitrarily chosen division of the world into a classical realm and a quantum realm. This reflects our practice as experimenters, but corresponds to nothing in nature. It is, in other words, an operational approach, which conflicts with the expectation that physics should offer a complete description of individual phenomena, with no reference to our existence, knowledge or measurements.

TH: Your objections just stated (what’s known generally as the “measurement problem”) seem to me, even as an obvious non-expert in this area, to be fairly apparent and accurate objections to Copenhagen. If that’s the case, why is Copenhagen still with us today? Why was it ever considered a serious theory?

LS: I don’t think there are many proponents of the Copenhagen view among people working in quantum foundations, or who have otherwise thought about the issues carefully. I don’t think there are many enthusiastic followers of Bohr left alive.

Meanwhile, what most physicists who are not specialists in quantum foundations practice and teach is a very pragmatic, operational set of rules, which suffices because it closely parallels the practice of actual experimenters. They can get on with the physics without having to take a stand on realism.

What Bohr had in mind was a much more radical rejection of realism and its replacement by a view of the world in which we and nature co-create phenomena. My sense is that most living physicists haven’t read Bohr’s actual writings. There are of course some exceptions, like Chris Fuchs’s QBism, which is, to the extent that I understand it, an even more radical view. Even if I disagree, I very much admire Chris for the clarity of his thinking and his insistence on taking his view to its logical conclusions. But, in the end, for a realist like me who sees the necessity of completing quantum mechanics by the discovery of new physics, the intellectual contortions of anti-realists are, however elegant, no help for my projects.

TH: Could this be a good example of why philosophical training could actually be helpful for physicists?

LS: I would agree, in some cases it could be helpful for some physicists to study philosophy, especially if they are interested in discovering deeper foundational laws. But I would never say that everyone should study philosophy, because it can be very challenging reading, and if someone is not inclined to think “philosophically” they are unlikely to get much from the effort. But I would say that if someone is receptive to the care and depth of the writing, it can open doors to new ideas and to a highly critical style of thinking, which could greatly aid their research.

The point I would like to make here is rather different. As I discussed in my earlier books, there are different periods in the development of science during which different kinds of problems present themselves. These require different strategies, different educations and perhaps even different styles of research to move forward.

There are pragmatic periods where the laws needed to understand a wide range of phenomena are in place and the opportunities of greatly advancing our understanding of diverse physical phenomena dominate. These kinds of periods require a more pragmatic approach, which ignores whatever foundational issues may be present (and indeed, there are always foundational issues lurking in the background), and focuses on developing better tools to work out the implications of the laws as they stand.

Then there are (to follow Kuhn) revolutionary periods in science, when the foundations are in question and the priority is to discover and express new laws.

The kinds of people and the kinds of education needed to succeed are different in these two kinds of periods. Pragmatic times require pragmatic scientists, and philosophy is unlikely to be important. But foundational periods require foundational people, many of whom will, as in past foundational periods, find inspiration from philosophy. Of course, what I just said is an oversimplification. At all times, science needs a diverse mix of research styles. We always need pragmatic people who are very good at the technical side of science. And we always need at least a few foundational thinkers. But the optimal balance is different in different periods.

The early part of the 20th century, through around 1930, was a foundational period. That was followed by a pragmatic period during which the foundational issues were ignored and many applications of quantum mechanics were developed.

Since the late 1970s, physics has been again in a foundational period, facing deep questions in elementary particle physics, cosmology, quantum foundations and quantum gravity. The pragmatic methods which got us to that point no longer suffice; during such a period we need more foundational thinkers and we need to pay more attention to them.

TH: Turning to general relativity, you also don’t mince your words and you describe the notion of reversible time, thought to be at the core of general relativity, as “wrong.” What does general relativity look like with irreversible and fundamental time?

LS: We posed exactly this question: can we invent an extension of general relativity in which time evolution is asymmetric under a transformation that reverses a measure of time? We found two ways to do this.
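[A schematic gloss, in my notation rather than that of Smolin’s papers: general relativity is time-reversal invariant, in the sense that reversing the time coordinate maps solutions to solutions,

\[
T:\; t \mapsto -t, \qquad g_{\mu\nu}(t,x) \;\to\; g^{T}_{\mu\nu}(-t,x) \quad (\text{with } g_{ti} \to -g_{ti}),
\]

where $g^{T}$ solves the field equations whenever $g$ does. A time-asymmetric extension is one whose equations of motion contain a term that changes sign under $T$, so that this implication fails and the two directions of time become dynamically distinguishable.]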

TH: You touched on consciousness as a physical phenomenon and a necessary ingredient in our physics in your book, Time Reborn (as have many other physicists over the last century, of course). You spend less time on consciousness in your new book — stating “Let us tiptoe past the hard question of consciousness to simpler questions” — but I’m curious if you’ve considered including as a first principle the notion that consciousness is a fundamental aspect of nature (or not) in your ruminations on these deep topics?

LS: I am thinking slowly about the problems of qualia and consciousness, in the rough direction set out in the epilogue of Time Reborn. But I haven’t yet come to conclusions worth publishing. An early draft of Einstein’s Unfinished Revolution had an epilogue entirely devoted to these questions, but I decided it was premature to publish; it also would have distracted attention from the central themes of that book.

TH: David Bohm, one of the physicists you discuss with respect to alternative versions of quantum theory, delved deeply into philosophy and spirituality in relation to his work in physics, as you discuss briefly in your new book. Do you find Bohm’s more philosophical notions such as the Implicate Order (the metaphysical ground of being in which the “explicate” manifest world that we know in our normal every day life is enfolded, and thus “implicate”) helpful for physics?

LS: I am afraid I’ve not understood what Bohm was aiming for in his book on the implicate order, or his dialogues with Krishnamurti, but it is also true that I haven’t tried very hard. I think one can admire greatly the practical and psychological knowledge of Buddhism and related traditions, while remaining skeptical of their more metaphysical teachings.

TH: Bohm’s Implicate Order has much in common with physical notions such as the (nonluminiferous) ether, which has been revived in today’s physics by some heavyweights such as Nobel Prize winner Frank Wilczek (The Lightness of Being: Mass, Ether, and the Unification of Forces) as another term for the set of space-filling fields that underlie our reality. Do you take the idea of reviving some notion of the ether as a physical/metaphysical background at all seriously in your work?

LS: The important part of the idea of the ether was that it is a smooth, fundamental, physical substance, which had the property that vibrations and stresses within it reproduced the phenomena described by Maxwell’s field theory of electromagnetism. It was also important that there was a preferred frame of reference associated with being at rest with respect to this substance.

We no longer believe any part of this. The picture we now have is that any such substance is made of a large collection of atoms. Therefore the properties of any substance are emergent and derivative. I don’t think Frank Wilczek disagrees with this; I suspect he is just being metaphorical.

TH: He doesn’t seem to be metaphorical, writing in a 1999 article: “Quite undeservedly, the ether has acquired a bad name. There is a myth, repeated in many popular presentations and textbooks, that Albert Einstein swept it into the dustbin of history. The real story is more complicated and interesting. I argue here that the truth is more nearly the opposite: Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics. And yet, there is serious reason to suspect it may not be the last word.” In his 2008 book mentioned above, he reframes the set of accepted physical fields as “the Grid” (which is “the primary world-stuff”) or ether. Sounds like you don’t find this reframing very compelling?

LS: What is true is that quantum field theory (QFT) treats all propagating particles and fields as excitations of a (usually unique) vacuum state. This is analogized to the ether, but in my opinion it’s a bad analogy. One big difference is that the vacuum of a QFT is invariant under all the symmetries of nature, whereas the ether breaks many of them by defining a preferred state of rest.
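[Schematically, the difference can be stated in one line: in a relativistic QFT the vacuum satisfies

\[
U(\Lambda)\,\lvert 0 \rangle = \lvert 0 \rangle \quad \text{for every Poincaré transformation } \Lambda,
\]

so no boost changes it and no rest frame is singled out, whereas an ether would be characterized by a four-velocity $u^{\mu}$ whose rest frame, $u^{\mu} = (1,0,0,0)$, is physically preferred. This is the standard textbook statement, not a quote from the interview.]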

TH: You consider Bohm’s alternative quantum theory in some depth, and say that “it makes complete sense,” but after further discussion you find it inadequate, because it is generally considered to be incompatible with special relativity, among other problems.

LS: This is not the main reason I don’t think pilot wave theory describes nature.

Pilot wave theory is based on two equations. The first, the Schrödinger equation, is the same as in ordinary quantum mechanics and propagates the wave-function; the second, the guidance equation, guides the “particles.” The first can be made compatible with special relativity, while the second cannot. But when one adds an assumption about probabilities, the averages of the guided particles follow the waves and so agree with both ordinary QM and special relativity. In this way you can say that pilot wave theory is “weakly compatible” with special relativity, in the sense that, while there is a preferred notion of rest, it cannot be measured.
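[For reference, here are the two equations in their simplest non-relativistic form, as they appear in standard textbooks rather than in Smolin’s words. The wave-function evolves under the Schrödinger equation,

\[
i\hbar\,\frac{\partial \psi}{\partial t} = \left( -\frac{\hbar^{2}}{2m}\nabla^{2} + V \right)\psi,
\]

while the particle position $Q(t)$ follows the guidance equation,

\[
\frac{dQ}{dt} = \frac{\hbar}{m}\,\mathrm{Im}\!\left( \frac{\nabla \psi}{\psi} \right)\Bigg|_{x=Q(t)}.
\]

The “assumption about probabilities” is the quantum equilibrium hypothesis, $\rho = \lvert\psi\rvert^{2}$, which the guidance equation preserves in time and which makes the predicted statistics agree with ordinary quantum mechanics.]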

TH: If one considers time to be fundamental and irreversible, isn’t there a relativistic version of Bohmian mechanics readily available by adopting some version of Lorentzian or neo-Lorentzian relativity (which are background-dependent)?

LS: Maybe — you are describing research to be done.

TH: Last, how optimistic are you that your view, that today’s physics needs some really fundamental re-thinking, will catch on with the majority of today’s physicists in the next decade or so?

LS: I’m not, but I wouldn’t expect any such call for a reconsideration of the basic principles to be popular until it has results that make the rethinking hard to avoid.

Monday, June 10, 2019

Sometimes giving up is the smart thing to do.

[likely image source]
A few years ago I signed up for a 10k race. It had an entry fee, it was a scenic route, and I had qualified for the first group. I was in great shape. The weather forecast was brilliant.

Two days before the race I got a bad cold. But that wouldn’t deter me. Oh, no, not me. I’m not a quitter. I downed a handful of pills and went anyway. I started with a fever, a bad cough, and a pounding head.

It didn’t go well. After half a kilometer I developed a chest pain. After one kilometer it really hurt. After two kilometers I was sure I’d die. Next thing I recall is someone handing me a bottle of water after the finish line.

Needless to say, my time wasn’t the best.

But the real problem began afterward. My cold refused to clear up properly. Instead I developed a series of respiratory infections. That chest pain stayed with me for several months. When winter came, each little virus the kids brought home knocked me down.

I eventually went to see a doctor. She sent me to have a chest X-ray taken on the suspicion of tuberculosis. When the X-ray didn’t reveal anything, she put me on a two-week regimen of antibiotics.

The antibiotics finally cleared out whatever lingering infection I had been carrying around. It took another month until I felt like myself again.

But this isn’t a story about the misery of aging runners. It’s a story about endurance sport of a different type: academia.

In academia we write Perseverance with a capital P. From day one, we are taught that pain is normal, that everyone hurts, and that self-motivation is the highest of virtues. In academia, we are all over-achievers.

This summer, as every summer for the past two decades, I receive notes about who is leaving. Leaving because they didn’t get funding, because they didn’t get another position, or because they’re just no longer willing to sacrifice their life for so little in return.

And this summer, as every summer for the past two decades, I find myself among the ones who made it into the next round, find myself sitting here, wondering if I’m worthy and if I’m in the right place doing the right thing at the right time. Because, let us be honest. We all know that success in academia has one or two elements of luck. Or maybe three. We all know it’s not always fair.

I’m writing this for the ones who have left and the ones who are about to leave. Because I have come within an inch of leaving half a dozen times and I have heard the nasty, nagging voice in the back of my head. “Quitter,” it says and laughs, “Quitter.”

Don’t listen. Of the people I know who left academia, few have regrets. And the few with regrets found ways to continue some research alongside their new profession. The loss isn’t yours; the loss is academia’s. I understand your decision and I think you chose wisely. Just because everyone you know is in a race to nowhere doesn’t mean going with them makes sense. Sometimes, giving up is the smart thing to do.

A year after my miserable 10k experience, I signed up for a half-marathon. A few kilometers into the race, I tore a muscle.

I don’t get a runner’s high, but running increases my pain tolerance to unhealthy levels. After a few kilometers, you could probably stab me in the back and I wouldn’t notice. I could well have finished that race. But I quit.

Saturday, June 08, 2019

Book Review: “Beyond Weird” by Philip Ball

Beyond Weird: Why Everything You Thought You Knew about Quantum Physics Is Different
By Philip Ball
University of Chicago Press (October 18, 2018)

I avoid popular science articles about quantum mechanics. It’s not that I am not interested, it’s that I don’t understand them. Give me a Hamiltonian, a tensor-product expansion, and some unitary operators, and I can deal with that. But give me stories about separating a cat from its grin, the many worlds of Wigner’s friend, or suicides in which you both die and don’t die, and I admit defeat by paragraph two.

Ball is guilty of some of that. I got lost halfway through his explanation of how a machine outputs plush cats and dogs when Alice and Bob put in quantum coins, and I still haven’t figured out why the seer’s daughter wanted to be wed to a man evidently more stupid than she.

But then, clearly, I am not the book’s intended audience, so let me instead tell you something more helpful.

Ball knows what he writes about, that’s obvious from page one. For all I can tell, the science in his book is flawless. It is also engagingly told, with some history but not too much, with some reference to current research but not too much, and with some philosophical discourse but not too much. Altogether, it is a well-balanced mix that should be understandable for everyone, even those without prior knowledge of the topic. And I entirely agree with Ball that calling quantum mechanics “weird” or “strange” isn’t helpful.

In “Beyond Weird,” Ball does a great job sorting out the most common confusions about quantum mechanics, such as that it is about discretization (it is not), that it defies the speed-of-light limit (it does not), or that it tells you something about consciousness (huh?). Ball also dispels the myths that Einstein hated quantum mechanics (he did not) and that Feynman dubbed the Copenhagen interpretation “shut up and calculate” (he did not; also, there isn’t really such a thing as the Copenhagen interpretation), and, best of all, clears away the idea that many worlds solves the measurement problem (it does not).

In Ball’s book, you will learn just what quantum mechanics is (uncertainty, entanglement, superpositions, (de)coherence, measurement, non-locality, contextuality, etc.), what the major interpretations of quantum mechanics are (Copenhagen, QBism, Many Worlds, collapse models, pilot waves), and what the currently discussed issues are (epistemic vs. ontic, quantum computing, the role of information).

As someone who still likes to read printed books, let me also mention that Ball’s is simply a pretty book. It’s a high-quality print in a generously spaced and very readable font, the chapters are short, and the figures are lovely, hand-drawn illustrations. I much enjoyed reading it.

It is also remarkable that “Beyond Weird” has little overlap with two other recent books on quantum mechanics which I reviewed: Chad Orzel’s “Breakfast With Einstein” and Anil Ananthaswamy’s “Through Two Doors At Once.” While Ball focuses on the theory and its interpretation, Orzel’s book is about applications of quantum mechanics, and Ananthaswamy’s is about experimental milestones in the development and understanding of the theory. The three books together make an awesome combination.

And luckily the subtitle of Philip Ball’s book turned out to be wrong. I would have been disturbed indeed had everything I thought I knew about quantum physics been different.

[Disclaimer: Free review copy.]

Related: Check out my list of 10 Essentials of Quantum Mechanics.

Wednesday, June 05, 2019

If we spend money on a larger particle collider, we risk stalling progress in physics.

[Image: CERN]
Particle physicists have a problem. For 40 years they have been talking about new particles that never appeared. The Large Hadron Collider was supposed to finally reveal them. It didn’t. This $10 billion machine has found the Higgs boson, thereby completing the standard model of particle physics, but no other fundamentally new particles.

With this, the Large Hadron Collider (LHC) has demonstrated that the arguments particle physicists used for the existence of new particles beyond those in the standard model were wrong. With these arguments now falsified, there is no reason to think that the next larger particle collider will do anything besides measure the parameters of the standard model to higher precision. And with the cost of the next larger collider estimated at $20 billion or so, that’s a tough sell.

Particle physicists have meanwhile largely given up spinning stories about discovering dark matter or recreating the origin of the universe, because it is clear to everyone now that this is marketing one cannot trust. Instead, they have a new tactic, which works like this.

First, they will refuse to admit anything went wrong in the past. They predicted all these particles, none of which was seen, but now they won’t mention it. They hyped the LHC for two decades, but now they act like it didn’t happen. The people who previously made wrong predictions cannot be bothered to comment. Except for those like Gordon Kane and Howard Baer, who simply make new predictions and hope you have forgotten they ever said anything else.

Second, in case they cannot get away with outright denial, they will try to convince you it is somehow interesting they were wrong. Indeed, it is interesting – if you are a sociologist. A sociologist would be thrilled to see such an amazing example of groupthink, leading a community of thousands of intelligent people to believe that relying on beauty is a good method to make predictions. But as far as physics is concerned, there’s nothing to learn here, except that beauty isn’t a scientific criterion, which is hardly a groundbreaking insight.

Third, they will sure as hell not touch the question of whether there might be better ways to invest the money, because that can only work to their disadvantage. So they will tell you vague tales about the need to explore nature, but never discuss whether other methods of exploring nature would advance science more.

But the fact is, building a larger particle collider presently has a high cost for little expected benefit. This money would be better invested in less costly experiments with higher discovery potential, such as astrophysical searches for dark matter (I am not talking about direct detection experiments), table-top searches for quantum gravity, 21cm astronomy, gravitational wave interferometers, and high-precision but low-energy measurements, just to mention a few.

And that is only considering the foundations of physics, leaving aside the overarching question of societal benefit. $20 billion that goes into a particle collider is $20 billion that does not go into nuclear fusion, drug development, climate science, or data infrastructure, all of which can reasonably be expected to have a larger return on investment. At the very least it is a question one should discuss.

Add to this that the cost of a larger particle collider could go down dramatically in the next 20-30 years with future technological advances, such as wake-field acceleration or high-temperature superconductors. In the current situation, with colliders so extremely costly, it makes more economic sense to wait and see whether one of these technologies reaches maturity. Who wants to spend some billions digging a 100km tunnel when that tunnel may no longer be necessary by the time the collider could be in operation?

Anyone who talks about building a larger particle collider, but does not mention the issues named above, demonstrates that they care neither about progress in physics nor about social responsibility. They do not want to have a sincere discussion. Instead, they are presenting a one-sided view. They are merely lobbying.

If you encounter any such person, I recommend you ask them the following: Why were all these predictions wrong and what have particle physicists learned from it? Why is a larger particle collider a good way to invest such large amounts of money in the foundations of physics now? What is the benefit of such an investment for society?

And do not accept as a response arguments about benefiting collaborations, scientific infrastructure, or education, because such arguments can be made in favor of any large investment in science. Such generic arguments do not explain why a particle collider in particular is the thing to do. I have a handy list with responses to further nonsense arguments here.

A prediction. If you give particle physicists money for the next larger collider, this is what will happen: The money will be used to hire more people who will tell you that particle physics is great. They will continue to invent new particles according to some new fad, and then claim they learned something when their expensive machine falsifies these inventions. In 40 years, we will still not know what dark matter is made of or how to quantize gravity. We will still not have a working fusion reactor, will still not have quantum computers, and will still have group-think in science. Particle physicists will then begin to argue that they need a larger collider. Rinse and repeat.

Of course it is possible that a larger collider will find something new. The only way to find out with certainty is to build it and look. But the same “Just Look” argument can be made about any experiment that explores new frontiers. The point is: Particle physicists have so far failed to come up with any reason why going to higher energies is currently a promising route forward. The conservative expectation therefore is that the next larger collider would be much like the LHC, but for twice the price and without the Higgs.

Particle physics is a large and very influential community. Do not fall for their advertisements. Ask the hard questions.