
Wednesday, January 30, 2019

Just because it’s falsifiable doesn’t mean it’s good science.

Flying carrot. 
Title says it all, really, but it’s such a common misunderstanding I want to expand on this for a bit.

A major reason we see so many wrong predictions in the foundations of physics – and see those make headlines – is that both scientists and science writers take falsifiability to be a sufficient criterion for good science.

Now, a scientific prediction must be falsifiable, all right. But falsifiability alone is not sufficient to make a prediction scientific. (And, no, Popper never said so.) Example: Tomorrow it will rain carrots. Totally falsifiable. Totally not scientific.

Why is it not scientific? Well, because it doesn’t live up to the current quality standard in olericulture, that is, the study of vegetables. According to the standard model of root crops, carrots don’t grow on clouds.

What do we learn from this? (Besides that the study of vegetables is called “olericulture,” who knew.) We learn that to judge a prediction you must know why scientists think it’s a good prediction.

Why does it matter?

The other day I got an email from a science writer asking me to clarify a statement he had gotten from another physicist. That other physicist had explained that a next larger particle collider, if built, would be able to falsify the predictions of certain dark matter models.

That is correct of course. A next larger collider would be able to falsify a huge amount of predictions. Indeed, if you count precisely, it would falsify infinitely many predictions. That’s more than even particle physicists can write papers about.

You may think that’s a truly remarkable achievement. But the question you should ask is: What reason did the physicist have to think that any of those predictions are good predictions? And when it comes to the discovery of dark matter with particle colliders, the answer currently is: There is no reason.

I cannot stress this often enough. There is not currently any reason to think a larger particle collider would produce fundamentally new particles or see any other new effects. There are loads of predictions, but none of those have good motivations. They are little better than carrot rain.

People not familiar with particle physics tend to be baffled by this, and I do not blame them. You would expect if scientists make predictions they have reasons to think it’ll actually happen. But that’s not the case in theory-development for physics beyond the standard model. To illustrate this, let me tell you how these predictions for new particles come into being.

The standard model of particle physics is an extremely precisely tested theory. You cannot just add particles to it as you want, because doing so quickly gets you into conflict with experiment. Neither, for that matter, can you just change something about the existing particles like, eg, postulating they are made up of smaller particles or such. Yes, particle physics is complicated.

There are however a few common techniques you can use to amend the standard model so that the deviations from it are not in the regime that we have measured yet. The most common way to do this is to make the new particles heavy (so that it takes a lot of energy to create them) or very weakly interacting (so that you produce them very rarely). The former is more common in particle physics, the latter more common in astrophysics.

There are of course a lot of other quality criteria that you need to fulfil. You need to formulate your theory in the currently used mathematical language, that is, the language of quantum field theory. You must demonstrate that your new theory is not already in conflict with experiment. You must make sure that your theory has no internal contradictions. Most importantly though, you must have a motivation for why your extension of the standard model is interesting.

You need this motivation because any such theory-extension is strictly speaking unnecessary. You do not need it to explain existing data. No, you do not need it to explain the observations normally attributed to dark matter either. Because to explain those you only need to assume an unspecified “fluid” and it doesn’t matter what that fluid is made of. To explain the existing data, all you need is the standard model of particle physics and the concordance model of cosmology.

The major motivation for new particles at higher energies, therefore, has for the past 20 years been an idea called “naturalness”. The standard model of particle physics is not “natural”. If you add more particles to it, you can make it “natural” again. Problem is that now the data say that the standard model is just not natural, period. So that motivation just evaporated. With that motivation gone, particle physicists don’t know what to do. Hence all the talk about confusion and crisis and so on.

Of course physicists who come up with new models will always claim that they have a good motivation, and it can be hard to follow their explanations. But it never hurts to ask. So please do ask. And don’t take “it’s falsifiable” as an answer.

There is more to be said about what it means for a theory to be “falsifiable” and how necessary that criterion really is, but that’s a different story and shall be told another time. Thanks for listening.



[I explain all this business with naturalness and inventing new particles that never show up in my book. I know you are sick of me mentioning this, but the reason I keep pointing it out is that I spent a lot of time making the statements in my book as useful and accurate as possible. I cannot make this effort with all my blogposts. So really I think you are better off reading the book.]

Tuesday, January 29, 2019

Book Update

Yesterday, after some back-and-forth with a German customs officer, my husband finally got hold of a parcel that had gone astray. It turned out to contain 5 copies of the audio version of my book “Lost in Math.” UNABRIDGED. Read by Laura Jennings.

7 discs. 8 hours, 43 minutes. Produced by Brilliance Audio.

I don’t need 5 copies of this. Come to think of it, I don’t even have a CD player. So, I decided, I’ll give away two copies. Yes, all for free! I’ll even pay the shipping fee on your behalf.

All you have to do is leave a comment below, and explain why you are interested in the book. The CD-sets will go to the first two such commenters by time stamp of submission. And, to say the obvious, I cannot send a parcel to a pseudonym, so if you are interested, you must be willing to provide a shipping address.

Ready, set, go.

Update: The books are gone!

Sunday, January 27, 2019

New Scientist creates a Crisis-Anticrisis Pair

A recent issue of New Scientist has an article about the crisis in the foundations of physics titled “We’ll die before we find the answers.”

The article, written by Dan Cossins, is a hilarious account of a visit to Perimeter Institute. Cossins does a great job capturing the current atmosphere in the field, which is one of confusion.

That the Large Hadron Collider so far hasn’t seen any fundamentally new particles besides the Higgs-boson is a big headache, getting bigger by the day. Most of the theorists who made the flawed predictions for new particles, eg supersymmetric partner particles, are now at a loss as to what to do:
“Even those who forged the idea [of supersymmetry] are now calling into question the underlying assumption of “naturalness”, namely that the laws of nature ought to be plausible and coherent, rather than down to chance.

“We did learn something: we learned what is not the case,” says Savas Dimopoulos. A theorist at Stanford University in California, and one of the founders of supersymmetry, he happens to be visiting the Perimeter Institute while I am there. How do we judge what theories we should pursue? “Maybe we have to rethink our criteria,” he says.”
We meet Asimina Arvanitaki, the first woman to hold a research chair at the Perimeter Institute:
“There are people who think that, using just the power of our minds, we can understand what dark matter is, what quantum gravity is,” says Arvanitaki. “That’s not true. The only way forward is to have experiment and theory move in unison.”
Cossins also spoke with Niayesh Afshordi, who is somewhat further along in his analysis of the situation:
“For cosmologist Niayesh Afshordi, the thread that connects these botched embroideries is theorists’ tendency to devote themselves to whatever happens to be the trendiest idea. “It’s a bandwagon effect,” he says. He thinks it has a lot to do with the outsize influence of citations.

“The fight now is not to find a fundamental theory. It is to get noticed, and the easiest way to do that is get on board a bandwagon,” he says. “You get this feedback loop: the people who spend longer on the bandwagons get more citations, then more funding, and the cycle repeats.” For its critics, string theory is the ultimate expression of this.”
The article sounds eerily like an extract from my book. Except, I must admit, Cossins writes better than I do.

The star of the New Scientist article is Neil Turok, the current director of Perimeter Institute. Turok has been going around talking about “crisis” for some while and his cure for the crisis is… more Neil Turok. In a recent paper with Latham Boyle and Kieran Finn (PRL here), he proposed a new theory according to which our universe was created in a pair with an anti-universe.

I read this paper some while ago and didn’t find it all that terrible. At least it’s not obviously wrong, which is more than what can be said about some papers that make headlines. Though speculative, it’s a minimalistic idea that results in observable consequences. I was surprised it didn’t attract more media attention.

The calculation in the paper, however, has gaps, especially where the earliest phase of the universe is concerned. And if the researchers find ways to fill those gaps, I am afraid they may end up in the all-too-common situation where they can pretty much predict anything and everything. So I am not terribly excited. Then again, I rarely am.

Let me end with a personal recollection. Neil Turok became director of Perimeter Institute in 2008. At that time I was the postdoc representative and as such had a lunch meeting with the incoming director. I asked him about his future plans. Listening to Turok, it became clear to me quickly that his term would mean the end of Perimeter Institute’s potential to make a difference.

Turok’s vision, in brief, was to make the place an internationally renowned research institution that attracts prestigious researchers. This all sounds well and fine until you realize that “renowned” and “prestigious” are assessments made by the rest of the research community. Presently these terms pretty much mean productivity and popularity.

The way I expressed my concern to Turok back then was to point out that if you couple to the heat bath you will approach the same temperature. Yeah, I have learned since then to express myself somewhat more clearly. To rephrase this in normal-people speak, if you play by everybody else’s rules, you will make the same mistakes.

If you want to make a difference, you must be willing to accept that people ridicule you, criticize you, and shun you. Turok wasn’t prepared for any of this. It had not even crossed his mind.

Ten years on, I am afraid to say that what happened is exactly what I expected. Research at Perimeter Institute today is largely “more of the same.” Besides papers, not much has come out of it. But it surely sounds like they are having fun.

Tuesday, January 22, 2019

Particle physics may have reached the end of the line

Image: CERN
CERN’s press release of plans for a larger particle collider, which I wrote about last week, made international headlines. Unfortunately, most articles about the topic just repeat the press-release, and do not explain how much the situation in particle physics has changed with the LHC data.

Since the late 1960s, when physicists hit on the “particle zoo” at nuclear energies, they always had a good reason to build a larger collider. That’s because their theories of elementary matter were incomplete. But now, with the Higgs-boson found in 2012, their theory – the “standard model of particle physics” – is complete. It’s done. There’s nothing missing. All Pokemon caught.

The Higgs was the last good prediction that particle physicists had. This prediction dates back to the 1960s and it was based on sound mathematics. In contrast to this, the current predictions for new particles at a larger collider – eg supersymmetric partner particles or dark matter particles – are not based on sound mathematics. These predictions are based on what is called an “argument from naturalness” and those arguments are little more than wishful thinking dressed in equations.

I have laid out my reasoning for why those predictions are no good in great detail in my book (and also in this little paper). But it does not matter whether you believe (or even understand) my arguments; you only have to look at the data to see that particle physicists’ predictions for physics beyond the standard model have, in fact, not worked for more than 30 years.

Fact is, particle physicists have predicted dark matter particles since the mid-1980s. None of those have been seen.

Fact is, particle physicists predicted grand unified theories, also starting in the 1980s. To the extent that those can be ruled out, they have been ruled out.

Fact is, they predicted that supersymmetric particles and/or large additional dimensions of space should become observable at the LHC. According to those predictions, this should have happened already. It did not.

The important point now is that those demonstrably flawed methods were the only reason to think the LHC should discover something fundamentally new besides the Higgs. With this method of prediction not working, there is now no reason to think that the LHC in its upcoming runs, or a next larger collider, will see anything besides the physics predicted by the already known theories.

Of course it may happen. I am not saying that I know a larger collider will not find something new. It is possible that we get lucky. I am simply saying that we currently have no prediction that indicates a larger collider would lead to a breakthrough. The standard model may well be it.

This situation is unprecedented in particle physics. The only reliable prediction we currently have for physics beyond the standard model is that we should eventually see effects of quantum gravity. But for that we would have to reach energies 13 orders of magnitude higher than what even the next collider would deliver. It’s way out of reach.

The only thing we can reliably say a next larger collider will do is measure more precisely the properties of the already known fundamental particles. That it may tell us something about dark matter, or dark energy, or the matter-antimatter symmetry is a hope, not a prediction.

Particle physicists had a good case to build the LHC with the prediction of the Higgs-boson. But with the Higgs found, the next larger collider has no good motivation. The year is 2019, not 1999.

Letter from a reader: “Someone has to write such a book” we used to say

Dear Sabine,

congratulations on your book. I read it this sommer and enjoyed it very much. For people like me, working on solid state physics, the issues you addressed were a recurrent subject to talk over lunch over the last decade. “Someone has to write such a book” we used to say, necessarily had to be someone from inside this community. I am glad that you did it.

I came to your book through the nice review published in Nature. I was disappointed with the one I read later in Science; also, with the recent one on Physics Today by Wilczek (“...and beautiful ideas from information theory are illuminating physical algorithms and quantum network design”, does it make sense to anyone?!). To be honest, he should list all beautiful ideas developed, and then the brief list of the ones that got agreement with experiment. This would be a scientific approach to test if such a statement makes sense, would you agree?

I send you a comment from Philip Anderson on string theory, I don’t think you mention it in your book but I guess you heard of it.

Best regards,

Daniel

---------------------------------------------------
Prof. Daniel Farias
Dpto. de Física de la Materia Condensada
Universidad Autónoma de Madrid
Phone: +34 91 497 5550
----------------------------------------------------

[The mentioned comment is Philip Anderson’s response to the EDGE annual question 2005: What do you believe is true even though you cannot prove it? I append it below for your amusement.]

Is string theory a futile exercise as physics, as I believe it to be? It is an interesting mathematical specialty and has produced and will produce mathematics useful in other contexts, but it seems no more vital as mathematics than other areas of very abstract or specialized math, and doesn't on that basis justify the incredible amount of effort expended on it.

My belief is based on the fact that string theory is the first science in hundreds of years to be pursued in pre-Baconian fashion, without any adequate experimental guidance. It proposes that Nature is the way we would like it to be rather than the way we see it to be; and it is improbable that Nature thinks the same way we do.

The sad thing is that, as several young would-be theorists have explained to me, it is so highly developed that it is a full-time job just to keep up with it. That means that other avenues are not being explored by the bright, imaginative young people, and that alternative career paths are blocked.

Wednesday, January 16, 2019

Particle physicists want money for bigger collider

Illustration of particle collision.
[screen shot from this video]
The Large Hadron Collider (LHC) at CERN is currently the world’s largest particle collider. But in a decade its days will come to an end. Particle physicists are now making plans for the future. Yesterday, CERN issued a press-release about a design study for their plans for a machine called the Future Circular Collider (FCC).

There are various design options for the FCC. Costs start at €9 billion for the least expensive version, going up to €21 billion for the big vision. The idea is to dig a longer ring-tunnel, in which electrons would first be brought to collision with positrons at energies from 91 to 365 GeV. The operation energies are chosen to enable more detailed studies of specific particles than the LHC allows. This machine would later be upgraded for proton-proton collisions at higher energies, reaching up to 100 TeV (that is, 100,000 GeV). In comparison, the LHC’s maximum design energy is 14 TeV.

€9 billion is a lot of money and given what we presently know, I don’t think it’s worth it. It is possible that if we reach higher energies, we will find new particles, but we do not currently have any good reason to think this will happen. Of course if the LHC finds something after all, the situation will entirely change and everyone will rush to build the next collider. But without that, the only thing we know that a larger collider will reliably do is measure in greater detail the properties of the already-known particles.

The design reports acknowledge this, but obfuscate the point. The opening statement, for example, says:
“[Several] experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do point to the existence of physics beyond the Standard Model.” (original emphasis)
The accompanying video similarly speaks vaguely of “big questions”, something to do with 95% of the universe (referring to dark matter and dark energy), and creates the impression that a larger collider would tell us something interesting about that:


It is correct that the standard model requires extension, but there is no reason that the new physical effects, like particles making up dark matter, must be accessible at the next larger collider. Indeed, the currently most reliable predictions put any new physics at energies 14 orders of magnitude higher, well out of the reach of any collider we’ll be able to build in the coming centuries. This is noted later in the report, where you can read: “Today high energy physics lacks unambiguous and guaranteed discovery targets.”

The report uses some highly specific examples of hypothetical particles that can be ruled out, such as certain WIMP candidates or supersymmetric particles. Again, that’s correct. But there is no good argument for why those particular particles should be the right ones. Physicists have no end of conjectured new particles. You’d end up ruling out a few among millions of models, and make little progress, just like with the LHC and the earlier colliders.

We are further offered the usual arguments, that investing in a science project this size would benefit the technological industry and education and scientific networks. This is all true, but not specific to particle colliders. Any large-scale experiment would have such benefits. I do not find such arguments remotely convincing.

Another reason I am not excited about the current plans for a larger collider is that we might get more bang for the buck if we waited for better technologies. There is, eg, plasma wakefield acceleration, which is currently in a testing phase and may become a more efficient route to progress. Also, maybe high temperature superconductors will reach a level where they become usable for the magnets. Both of these technologies may become available in a decade or two, but they are not presently developed far enough to be used for the next collider.

Therefore, investment-wise, it would make more sense to put particle physics on a pause and reconsider it in, say, 20 years to see whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At present, other large-scale experiments would more reliably offer new insights into the foundations of physics. Anything that peers back into the early universe, such as big radio telescopes, for example, or anything that probes the properties of dark matter. There are also medium and small-scale experiments that tend to fall off the table if big collaborations eat up the bulk of money and attention. And that is leaving aside that we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what’s the best thing to invest in. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. Ie, large cost, little benefit. That much is pretty clear.

I think the Chinese are not dumb enough to build the next bigger collider. If they do, they might end up being the first nation ever to run and operate such a costly machine without finding anything new. It’s not how they hope to enter history books. So, I consider it unlikely they will go for it.

What the Europeans will do is harder to predict, because a lot depends on who has influential friends in which ministry. But I think particle physicists have dug their own grave by giving the public the impression that the LHC would answer some big question, and then not being able to deliver.

Sunday, January 13, 2019

Good Problems in the Foundations of Physics

img src: openclipart.org
Look at the history of physics, and you will find that breakthroughs come in two different types. Either observations run into conflict with predictions and a new theory must be developed. Or physicists solve a theoretical problem, resulting in new predictions which are then confirmed by experiment. In both cases, problems that give rise to breakthroughs are inconsistencies: Either theory does not agree with data (experiment-led), or the theories have internal disagreements that require resolution (theory-led).

We can classify the most notable breakthroughs this way: Electric and magnetic fields (experiment-led), electromagnetic waves (theory-led), special relativity (theory-led), quantum mechanics (experiment-led), general relativity (theory-led), the Dirac equation (theory-led), the weak nuclear force (experiment-led), the quark-model (experiment-led), electro-weak unification (theory-led), the Higgs-boson (theory-led).

That’s an oversimplification, of course, and leaves aside the myriad twists and turns and personal tragedies that make scientific history interesting. But it captures the essence.

Unfortunately, in the past decades it has become fashionable among physicists to present the theory-led breakthroughs as a success of beautiful mathematics.

Now, it is certainly correct that in some cases the theorists making such breakthroughs were inspired by math they considered beautiful. This is well-documented, eg, for both Dirac and Einstein. However, as I lay out in my book, arguments from beauty have not always been successful. They worked in cases when the underlying problem was one of consistency. They failed in other cases. As the philosopher Radin Dardashti put it aptly, scientists sometimes work on the right problem for the wrong reason.

That breakthrough problems were those which harbored an inconsistency is true even for the often-told story of the prediction of the charm quark. The charm quark, so they will tell you, was a prediction based on naturalness, which is an argument from beauty. However, we also know that the theories which particle physicists used at the time were not renormalizable and therefore would break down at some energy. Once electro-weak unification removes this problem, the requirement of gauge-anomaly cancellation tells you that a fourth quark is necessary. But this isn’t a prediction based on beauty. It’s a prediction based on consistency.

This, I must emphasize, is not what historically happened. Weinberg’s theory of the electro-weak unification came after the prediction of the charm quark. But in hindsight we can see that the reason this prediction worked was that it was indeed a problem of consistency. Physicists worked on the right problem, if for the wrong reasons.
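For readers who want to see what the consistency requirement amounts to, here is a sketch of the standard anomaly-cancellation count (textbook charge assignments; normalization conventions omitted, so take it as an illustration rather than a derivation):

```latex
% Sketch of the gauge-anomaly cancellation count (standard charge assignments;
% normalization conventions omitted). Per complete generation, the electric
% charges of all fermions, with quarks counted N_c = 3 times for color, must
% sum to zero:
\[
N_c \left( Q_u + Q_d \right) + Q_\nu + Q_e
  = 3\left(\tfrac{2}{3} - \tfrac{1}{3}\right) + 0 + (-1) = 0 .
\]
% With only the u, d, s quarks but two complete lepton pairs (e, \nu_e) and
% (\mu, \nu_\mu), this cancellation fails. A fourth quark of charge +2/3,
% the charm, completes the second generation and restores it.
```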

What can we learn from this?

Well, one thing we learn is that if you rely on beauty you may get lucky. Sometimes it works. Feyerabend, I think, had it basically right when he argued “anything goes.” Or, as the late German chancellor Kohl put it, “What matters is what comes out in the end.”

But we also see that if you happen to insist on the wrong ideal of beauty, you will not make it into history books. Worse, since our conception of what counts as a beautiful theory is based on what worked in the past, it may actually get in the way if a breakthrough requires new notions of beauty.

The more useful lesson to learn, therefore, is that the big theory-led breakthroughs could have been based on sound mathematical arguments, even if in practice they came about by trial and error.

The “anything goes” approach is fine if you can test a large number of hypotheses and then continue with the ones that work. But in the foundations of physics we can no longer afford “anything goes”. Experiments are now so expensive and take such a long time to build that we have to be very careful when deciding which theories to test. And if we take a clue from history, then the most promising route to progress is to focus on problems that are either inconsistencies with data or internal inconsistencies of the theories.

At least that’s my conclusion.

It is far from my intention to tell anyone what to do. Indeed, if there is any message I tried to get across in my book it’s that I wish physicists would think more for themselves and listen less to their colleagues.

Having said this, I have gotten a lot of emails from students asking me for advice, and I recall how difficult it was for me as a student to make sense of the recent research trends. For this reason I append below my assessment of some of the currently most popular problems in the foundations of physics. Not because I want you to listen to me, but because I hope that the argument I offered will help you come to your own conclusion.

(You find more details and references on all of this in my book.)



Dark Matter
Is an inconsistency between theory and experiment and therefore a good problem. (The issue with dark matter isn’t whether it’s a good problem or not, but to decide when to consider the problem solved.)

Dark Energy
There are different aspects of this problem, some of which are good problems, others not. The question why the cosmological constant is small compared to (powers of) the Planck mass is not a good problem because there is nothing wrong with just choosing it to be a certain constant. The question why the cosmological constant is presently comparable to the density of dark matter is likewise a bad problem because it isn’t associated with any inconsistency. On the other hand, the absence of observable fluctuations around the vacuum energy (what Afshordi calls the “cosmological non-constant problem”) and the question why the zero-point energy gravitates in atoms but not in the vacuum (details here) are good problems.
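For orientation, here are the standard order-of-magnitude numbers behind the “smallness” mentioned above (textbook ballpark values, not from the post):

```latex
% Ballpark numbers (standard values, quoted for orientation only): the observed
% vacuum energy density versus the naive Planck-scale expectation.
\[
\rho_\Lambda^{\mathrm{obs}} \sim \left(10^{-3}\,\mathrm{eV}\right)^4 ,
\qquad
M_{\mathrm{Pl}}^4 \sim \left(10^{18}\,\mathrm{GeV}\right)^4 ,
\qquad
\frac{\rho_\Lambda^{\mathrm{obs}}}{M_{\mathrm{Pl}}^4} \sim 10^{-120} .
\]
% The point made above is that this small ratio, by itself, is not an
% inconsistency; a constant can simply be small.
```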

The Hierarchy Problem
The hierarchy problem is the big difference between the strength of gravity and the other forces in the standard model. There is nothing contradictory about this, hence not a good problem.
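For scale, the gap in question, again in standard order-of-magnitude values (not from the post):

```latex
% Ballpark numbers for the hierarchy referred to above (standard values):
% the electroweak scale, set by the Higgs mass, versus the Planck scale.
\[
\frac{m_H}{M_{\mathrm{Pl}}} \sim \frac{10^{2}\,\mathrm{GeV}}{10^{19}\,\mathrm{GeV}} \sim 10^{-17} .
\]
% As stated above, nothing is mathematically contradictory about this ratio
% being small; it is simply a parameter of the theory.
```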

Grand Unification
A lot of physicists would rather have one unified force in the standard model than three different ones. There is, however, nothing wrong with the three different forces. I am undecided as to whether the almost-prediction of the Weinberg-angle from breaking a large symmetry group does or does not require an explanation.

Quantum Gravity
Quantum gravity removes an inconsistency and is hence a solution to a good problem. However, I must add that there may be other ways to resolve the problem besides quantizing gravity.

Black Hole Information Loss
A good problem in principle. Unfortunately, there are many different ways to fix the problem and no way to experimentally distinguish between them. So while it’s a good problem, I don’t consider it a promising research direction.

Particle Masses
It would be nice to have a way to derive the masses of the particles in the standard model from a theory with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem.

Quantum Field Theory
There are various problems with quantum field theories where we lack a good understanding of how the theory works, and these require a solution. The UV Landau pole in the standard model is one of them. It must be resolved somehow, but exactly how is not clear. We also do not have a good understanding of the non-perturbative formulation of the theory, and the infrared behavior turns out to be not as well understood as we thought until just a few years ago (see eg here).
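To make the Landau-pole remark concrete, here is the schematic one-loop form of the running coupling (conventions for the coefficient b vary, so this is a sketch rather than a precise statement):

```latex
% Schematic one-loop running of an abelian gauge coupling, as for the U(1)
% hypercharge mentioned above (conventions for the coefficient b vary):
\[
\frac{1}{\alpha(\mu)} = \frac{1}{\alpha(\mu_0)} - \frac{b}{2\pi}\,\ln\frac{\mu}{\mu_0},
\qquad b > 0 ,
\]
% so the coupling formally diverges at the finite scale
\[
\mu_{\mathrm{L}} = \mu_0 \exp\!\left(\frac{2\pi}{b\,\alpha(\mu_0)}\right),
\]
% which is the "UV Landau pole": perturbation theory breaks down there, and
% something has to resolve it.
```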

The Measurement Problem
The measurement problem in quantum mechanics is typically thought of as a problem of interpretation and then left to philosophers to discuss. I think that’s a mistake; it is an actual inconsistency. The inconsistency comes from the need to postulate the behavior of macroscopic objects when that behavior should instead follow from the theory of the constituents. The measurement postulate, hence, is inconsistent with reductionism.

The Flatness Problem
Is an argument from finetuning and not well-defined without a probability distribution. There is nothing wrong with the (initial value of the) curvature density just being what it is. Thus, not a good problem.

The Monopole Problem
That’s the question why we haven’t seen magnetic monopoles. It is quite plausibly solved by them not existing. Also not a good problem.

Baryon Asymmetry and The Horizon Problem
These are both finetuning problems that rely on the choice of an initial condition, which is considered to be unlikely. However, there is no way to quantify how likely the initial condition is, so the problem is not well-defined.

The Strong CP Problem
Is a naturalness problem, like the Hierarchy problem, and not a problem of inconsistency.

There is, furthermore, always a variety of anomalies where data disagrees with theory. Those can linger at low significance for a long time and it’s difficult to decide how seriously to take them. For those I can only give you the general advice that you listen to experimentalists (preferably some who are not working on the experiment in question) before you listen to theorists. Experimentalists often have an intuition for how seriously to take a result. That intuition, however, usually doesn’t make it into publications because it’s impossible to quantify. Measures of statistical significance don’t always tell the full story.

Saturday, January 12, 2019

Book Review: “Quantum Space” by Jim Baggott

Quantum Space
Loop Quantum Gravity and the Search for the Structure of Space, Time, and the Universe
By Jim Baggott
Oxford University Press (January 22, 2019)


In his new book Quantum Space, Jim Baggott presents Loop Quantum Gravity (LQG) as the overlooked competitor of String Theory. He uses a chronological narrative that follows the lives of Lee Smolin and Carlo Rovelli. The book begins with their nascent interest in quantum gravity, continues with their formal education, their later collaboration, and, in the final chapters, follows them as their ways separate. Along with the personal stories, Baggott introduces background knowledge and lays out the scientific work.

Quantum Space is structured into three parts. The first part covers the basics necessary to understand the key ideas of Loop Quantum Gravity. Here, Baggott goes through the essentials of special relativity, general relativity, quantum mechanics, quantum field theory, and the standard model of particle physics. The second part lays out the development of Loop Quantum Gravity and the main features of the theory, notably the emphasis on background independence.

The third part is about recent applications, such as the graviton propagator, the calculation of black hole entropy, and the removal of the big bang singularity. You also find there Sundance Bilson-Thompson’s idea that elementary particles are braids in the spin-foam. In this last part, Baggott further includes Rovelli’s and Smolin’s ideas about the foundations of quantum mechanics, as well as Rovelli and Vidotto’s Planck Stars, and Smolin’s ideas about the reality of time and cosmological natural selection.

The book’s epilogue is an extended Q&A with Smolin and Rovelli and ends by mentioning the connections between String Theory and Loop Quantum Gravity (which I wrote about here).

Baggott writes very well and he expresses himself clearly, aided by about two dozen figures and a glossary. The book, however, requires some tolerance for technical terminology. While Baggott does an admirable job explaining advanced physics – such as Wilson loops, parallel transport, spinfoam, and renormalizability – and does not shy away from complex topics – such as the fermion doubling problem, the Wheeler-De-Witt equation, Shannon entropy, or extremal black holes – for a reader without prior knowledge in the field, this may be tough going.

We know from Baggott’s 2013 book “Farewell to Reality” that he is not fond of String Theory, and in Quantum Space, too, he does not hold back with criticism. On Edward Witten’s conjecture of M-theory, for example, he writes:
“This was a conjecture, not a theory…. But this was, nevertheless, more than enough to set the superstring bandwagon rolling even faster.”
In Quantum Space, Baggott also reprints Smolin’s diagnosis of the String Theory community, which attributes to string theorists “tremendous self-confidence,” “group think,” “confirmation bias,” and “a complete disregard and disinterest in the opinions of anyone outside the group.”

Baggott further claims that the absence of new particles at the Large Hadron Collider is bad news for string theory*:
“Some argue that string theory is the tighter, more mathematically rigorous and consistent, better-defined structure. But a good deal of this consistency appears to have been imported through the assumption of supersymmetry, and with each survey published by the ATLAS or CMS detector collaborations at CERN’s Large Hadron Collider, the scientific case for supersymmetry weakens some more.”
I’d have some other minor points to quibble with, but given the enormous breadth of topics covered, I think Baggott’s blunders are few and far between.

I must admit, however, that the structure of the book did not make a lot of sense to me. Baggott introduces a lot of topics that he later does not need and whose relevance to LQG escapes me. For example, he goes on about the standard model and the Higgs-mechanism in particular, but that doesn’t play any role later. He also spends quite some time on the interpretation of quantum mechanics, which isn’t actually necessary to understand Loop Quantum Gravity. I also don’t see what Lee Smolin’s cosmological natural selection has to do with anything. But these are stylistic issues.

The bigger issue I have with the book is that Baggott is as uncritical of Loop Quantum Gravity as he is critical of String Theory. There is not a mention in the book of the problem of recovering local Lorentz-invariance, an issue that has greatly bothered both Joe Polchinski and Jorge Pullin. Baggott presents Loop Quantum Cosmology (the LQG-based approach to understanding the early universe) as testable but forgets to note that the predictions depend on an adjustable parameter, and also that it would be extremely hard to tell the LQG-based models apart from other models. And he does not, for balance, mention String Cosmology. He does not mention the problem with the supposed derivation of the black hole entropy by Bianchi and he does not mention the problems with Planck stars.

And if he had done a little digging, he’d have noticed that the group-think in LQG is as bad as it is in string theory.

In summary, Quantum Space is an excellent, non-technical introduction to Loop Quantum Gravity that is chock-full of knowledge. It will, however, give you a rather uncritical view of the field.

[Disclaimer: Free review copy.]


* I explained here why the non-discovery of supersymmetric particles at the LHC has no relevance for string theory.

Wednesday, January 09, 2019

The Real Problems with Artificial Intelligence

R2D2 costume for toddlers.
[image: amazon.com]
In recent years many prominent people have expressed worries about artificial intelligence (AI). Elon Musk thinks it’s the “biggest existential threat.” Stephen Hawking said it could “be the worst event in the history of our civilization.” Steve Wozniak believes that AIs will “get rid of the slow humans to run companies more efficiently,” and Bill Gates, too, put himself in “the camp that is concerned about super intelligence.”

In 2015, the Future of Life Institute formulated an open letter calling for caution and formulating a list of research priorities. It was signed by more than 8,000 people.

Such worries are not unfounded. Artificial intelligence, like any new technology, brings risks. While we are far from creating machines even remotely as intelligent as humans, it’s only smart to think about how to handle them sooner rather than later.

However, these worries neglect the more immediate problems that AI will bring.

Artificially Intelligent machines won’t get rid of humans any time soon because they’ll need us for quite some while. The human brain may not be the best thinking apparatus, but it has a distinct advantage over all the machines we have built so far: It functions for decades. It’s robust. It repairs itself.

Millions of years of evolution have optimized our bodies, and while the result could certainly be further improved (damn those knees), it’s still more durable than any silicon-based thinking apparatus we have created. Some AI researchers have even argued that a body of some kind is necessary to reach human-level intelligence, which – if correct – would greatly exacerbate the problem of AI fragility.

Whenever I bring up this issue with AI enthusiasts, they tell me that AIs will learn to repair themselves, and even if not, they will just upload themselves to another platform. Indeed, much of the perceived AI-threat comes from them replicating quickly and easily, while at the same time being basically immortal. I think that’s not how it will go.

Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.

We see the beginning of this trend already. Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same. Hackers exploit these differences between computers to track your internet activity. Canvas fingerprinting, for example, is a method of asking your computer to render a font and output an image. The exact way your computer performs this task depends both on your hardware and your software, hence the output can be used to identify a device.
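For readers curious what this looks like in code, here is a minimal sketch of the canvas-fingerprinting idea in TypeScript for the browser; the drawing commands and the hashing step are illustrative choices, not any particular tracker’s code:

```typescript
// Minimal sketch of canvas fingerprinting (illustrative, not production code).
// The same drawing commands are rasterized slightly differently depending on
// hardware, drivers, and font rendering, so a hash of the pixel output can
// serve as a rough device identifier.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "canvas-unavailable";

  // Render text and shapes whose exact pixels depend on the local stack.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(10, 10, 100, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint test, 123!", 4, 20);

  // Serialize the rendered image and hash it to get a compact identifier.
  const dataUrl = canvas.toDataURL();
  const bytes = new TextEncoder().encode(dataUrl);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Example usage (in a browser console or script):
// canvasFingerprint().then((id) => console.log("device id:", id));
```

Two machines running this same snippet will typically return different hashes, which is exactly the tracking handle described above.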

Presently, you do not notice these subtle differences between computers all that much (except possibly when you spend hours browsing help forums thinking “someone must have had this problem before” and turn up nothing). But the more complex computers get, the more obvious the differences will become. One day, they will be individuals with irreproducible quirks and bugs – like you and me.

So we have AI fragility plus the trend of increasingly complex hardware and software becoming unique. Now extrapolate this some decades into the future. We will have a few large companies, governments, and maybe some billionaires who will be able to afford their own AI. Those AIs will be delicate and need constant attention by a crew of dedicated humans.

This brings up various immediate problems:

1. Who gets to ask questions and what questions?

This may not be a matter of discussion for privately owned AI, but what about those produced by scientists or bought by governments? Does everyone get a right to a question per month? Do difficult questions have to be approved by the parliament? Who is in charge?

2. How do you know that you are dealing with an AI?

The moment you start relying on AIs, there’s a risk that humans will use them to push an agenda by passing off their own opinions as those of the AI. This problem will occur well before AIs are intelligent enough to develop their own goals.

3. How can you tell that an AI is any good at giving answers?

If you only have a few AIs and those are trained for entirely different purposes, it may not be possible to reproduce any of their results. So how do you know you can trust them? It could be a good idea to ask that all AIs have a common area of expertise that can be used to compare their performance.

4. How do you prevent limited access to AI from increasing inequality, both within nations and between nations?

Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor behind even farther. If this is not something that we want – and I certainly don’t – we should think about how to deal with it.

Monday, January 07, 2019

Letter from a reader: “What’s so bad about randomness?”

[The best part of publishing a book has been getting feedback from readers who report their own experience as it relates to what I wrote about. With permission, I want to share this letter which I received the other day from Dave Hurwitz, a classical music critic.]

Dear Dr. Hossenfelder,

I hope that I am not bothering you, but I just wanted to write to tell you how much I am enjoying your book “Lost in Math.” I haven’t quite finished it yet, but I was so taken with it that I thought I might write to let you know anyway. I am about as far away from theoretical physics as it’s possible to be: I am a classical music critic and independent musical scholar, and I support myself working in real estate; but I am a very serious follower of the popular scientific literature, and I was so impressed by your directness, literacy, and ability to make complex topics digestible and entertaining for the general reader.

I am also very much in sympathy with your point of view. Even though I don’t understand the math, it often seems to me that so much of what theoretical physicists are doing amounts to little more than a sort of high-end gematria – numerology with a kind of mystical value assigned to mathematical coincidence or consistency, or, as you (they) call it, “beauty.” I cringe whenever I hear these purely aesthetic judgments applied to theoretical speculation about the nature of reality, based primarily on the logic of the underlying math. And don’t get me wrong: I like math. Personally, I have no problem with the idea that the laws governing the universe may not be elegant and tidy, and I see no reason why they should be. They are what they are, and that’s all. What’s so bad about randomness? It’s tough enough trying to figure out what they are without assigning to them purely subjective moral or aesthetic values (or giving these undue weight in guiding the search).

It may interest you to know that something similar seems to be infecting current musicology, and I am sure many other academic fields. Discussion of specific musical works often hinges on standardized and highly technical versions of harmonic analysis, mostly because the language and methodology have been systematized and everyone agrees on how to do it – but what it actually means, how it creates meaning or expressiveness, is anyone’s guess. It is assumed to be important, but there is no demonstrable causal connection between the correctness of the analysis and the qualitative values assigned to it. It all comes down to a kind of circular reasoning: the subjective perception of “beauty” drives the search for a coherent musical substructure which, not surprisingly, once described is alleged to justify the original assumption of “beauty.” If you don’t “get” physicists today, then I don’t “get” musicologists.

Anyway, I’m sorry to take up so much of your time, but I just wanted to note that what you see – the kind of reasoning that bothers you so much – has its analogues way beyond the field of theoretical physics. I take your point that scientists, perhaps, should know better, but the older I get the more I realize two things: first, human nature is the same everywhere, and second, as a consequence, it’s precisely the people who ought to know better that, for the most part, seldom do. I thank you once again for making your case so lucidly and incisively.

Best regards,

Dave Hurwitz
ClassicsToday.com

Thursday, January 03, 2019

Book Update

During the holidays I got several notes from people who tried to order my book but were told it’s out of print or not in stock. Amazon likewise had only used copies on sale, starting at $50 and up. Today I have good news: My publisher informed me that the book has been reprinted and should become available again in the next few days.

The German translation meanwhile is in its third edition (the running head has been fixed!). The Spanish translation will appear in June with a publisher by the name of Ariel. Other translations to come are Japanese, Chinese, Russian, Italian, French, Korean, Polish, and Romanian. Amazon now also offers an English audio version.

Many thanks to all readers!



Oh, and I still don’t have a publisher in the UK.

Wednesday, January 02, 2019

Electrons don’t think

Brainless particles leaving tracks
in a bubble chamber. [image source]
I recently discovered panpsychism. That’s the idea that all matter – animate or inanimate – is conscious, we just happen to be somewhat more conscious than carrots. Panpsychism is the modern elan vital.

When I say I “discovered” panpsychism, I mean I discovered there’s a bunch of philosophers who produce pamphlets about it. How do these philosophers address the conflict with evidence? Simple: They don’t.

Now, look, I know that physicists have a reputation of being narrow-minded. But the reason we have this reputation is that we tried the crazy shit long ago and just found it doesn’t work. You call it “narrow-minded,” we call it “science.” We have moved on. Can elementary particles be conscious? No, they can’t. It’s in conflict with evidence. Here’s why.

We know 25 elementary particles. These are collected in the standard model of particle physics. The predictions of the standard model agree with experiment to excellent precision.

The particles in the standard model are classified by their properties, which are collectively called “quantum numbers.” The electron, for example, has an electric charge of -1 and it can have a spin of +1/2 or -1/2. There are a few other quantum numbers with complicated names, such as the weak hypercharge, but really it’s not so important. Point is, there are a handful of those quantum numbers and they uniquely identify an elementary particle.

If you calculate how many particles of a certain type are produced in a particle collision, the result depends on how many variants of the produced particle exist. In particular, it depends on the different values the quantum numbers can take. Since the particles have quantum properties, anything that can happen will happen. If a particle exists in many variants, you’ll produce them all – regardless of whether or not you can distinguish them. The result is that you see more of them than the standard model predicts.
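A standard textbook illustration of this counting (not from the post): at leading order, the rate for producing hadrons relative to muons in electron-positron collisions directly counts the number of quark variants:

```latex
% Standard textbook example (quoted for illustration): at leading order the
% ratio of hadronic to muonic production in e+ e- collisions counts the
% number of quark variants,
\[
R \equiv \frac{\sigma\!\left(e^+ e^- \to \mathrm{hadrons}\right)}
             {\sigma\!\left(e^+ e^- \to \mu^+ \mu^-\right)}
  = N_c \sum_q Q_q^2 ,
\]
% where the sum runs over the quark flavors light enough to be produced.
% Below the charm threshold (u, d, s) this gives 3 x (4/9 + 1/9 + 1/9) = 2,
% as observed. Extra, unaccounted-for internal states would push R above
% the measured value.
```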

Now, if you want a particle to be conscious, your minimum expectation should be that the particle can change. It’s hard to have an inner life with only one thought. But if electrons could have thoughts, we’d long have seen this in particle collisions because it would change the number of particles produced in collisions.

In other words, electrons aren’t conscious, and neither are any other particles. It’s incompatible with data.

As I explain in my book, there are ways to modify the standard model that do not run into conflict with experiment. One of them is to make new particles so massive that so far we have not managed to produce them in particle collisions, but this doesn’t help you here. Another way is to make them interact so weakly that we haven’t been able to detect them. This too doesn’t help here. The third way is to assume that the existing particles are composed of more fundamental constituents, that are, however, so strongly bound together that we have not yet been able to tear them apart.

With the third option it is indeed possible to add internal states to elementary particles. But if your goal is to give consciousness to those particles so that we can inherit it from them, strongly bound composites do not help you. They do not help you exactly because you have hidden this consciousness so that it needs a lot of energy to access. This then means, of course, that you cannot use it at lower energies, like the ones typical for soft and wet thinking apparatuses like human brains.

Summary: If a philosopher starts speaking about elementary particles, run.