Wednesday, January 16, 2019

Particle physicists want money for bigger collider

Illustration of particle collision.
[screen shot from this video]
The Large Hadron Collider (LHC) at CERN is currently the world’s largest particle collider, but in a decade its days will come to an end. Particle physicists are now making plans for the future. Yesterday, CERN issued a press release about a design study for their plans, a machine called the Future Circular Collider (FCC).

There are various design options for the FCC. Costs start at €9 billion for the least expensive version and go up to €21 billion for the big vision. The idea is to dig a longer ring-tunnel, in which electrons would first be brought into collision with positrons at energies from 91 to 365 GeV. The operation energies are chosen to enable more detailed studies of specific particles than the LHC allows. This machine would later be upgraded for proton-proton collisions at higher energies, reaching up to 100 TeV (that is, 100,000 GeV). In comparison, the LHC’s maximum design energy is 14 TeV.

€9 billion is a lot of money and given what we presently know, I don’t think it’s worth it. It is possible that if we reach higher energies, we will find new particles, but we do not currently have any good reason to think this will happen. Of course if the LHC finds something after all, the situation will entirely change and everyone will rush to build the next collider. But without that, the only thing we know that a larger collider will reliably do is measure in greater detail the properties of the already-known particles.

The design reports acknowledge this, but obfuscate the point. The opening statement, for example, says:
“[Several] experimental facts do require the extension of the Standard Model and explanations are needed for observations such as the abundance of matter over antimatter, the striking evidence for dark matter and the non-zero neutrino masses. Theoretical issues such as the hierarchy problem, and, more in general, the dynamical origin of the Higgs mechanism, do point to the existence of physics beyond the Standard Model.” (original emphasis)
The accompanying video similarly speaks vaguely of “big questions”, something to do with 95% of the universe (referring to dark matter and dark energy), and gives the impression that a larger collider would tell us something interesting about that:


It is correct that the standard model requires extension, but there is no reason that the new physical effects, like particles making up dark matter, must be accessible at the next larger collider. Indeed, the currently most reliable predictions put any new physics at energies 14 orders of magnitude higher, well out of the reach of any collider we’ll be able to build in the coming centuries. This is noted later in the report, where you can read: “Today high energy physics lacks unambiguous and guaranteed discovery targets.”

The report uses some highly specific examples of hypothetical particles that could be ruled out, such as certain WIMP candidates or supersymmetric particles. Again, that’s correct. But there is no good argument for why those particular particles should be the right ones. Physicists have no end of conjectured new particles. You’d end up ruling out a few among millions of models and make little progress, just as with the LHC and the earlier colliders.

We are further offered the usual arguments that investing in a science project of this size would benefit the technology industry, education, and scientific networks. This is all true, but not specific to particle colliders. Any large-scale experiment would have such benefits. I do not find such arguments remotely convincing.

Another reason I am not excited about the current plans for a larger collider is that we might get more bang for the buck if we waited for better technologies. Plasma wakefield acceleration, for example, is now in a test phase and may become a more efficient route to progress. High-temperature superconductors may also reach a level where they become usable for the magnets. Both of these technologies may become available in a decade or two, but neither is presently developed enough to be used for the next collider.

Therefore, investment-wise, it would make more sense to put particle physics on pause and reconsider in, say, 20 years whether the situation has changed, either because new technologies have become available or because more concrete predictions for new physics have been made.

At present, other large-scale experiments would more reliably offer new insights into the foundations of physics: anything that peers back into the early universe, such as big radio telescopes, or anything that probes the properties of dark matter. There are also medium- and small-scale experiments that tend to fall off the table when big collaborations eat up the bulk of money and attention. And that leaves aside the possibility that we might be better off investing in other areas of science entirely.

Of course a blog post cannot replace a detailed cost-benefit assessment, so I cannot tell you what the best thing to invest in is. I can, however, tell you that a bigger particle collider is one of the most expensive experiments you can think of, and we do not currently have a reason to think it would discover anything new. In other words: large cost, little benefit. That much is pretty clear.

I think the Chinese are not dumb enough to build the next bigger collider. If they do, they might end up being the first nation ever to run and operate such a costly machine without finding anything new. It’s not how they hope to enter history books. So, I consider it unlikely they will go for it.

What the Europeans will do is harder to predict, because a lot depends on who has influential friends in which ministry. But I think particle physicists have dug their own grave by giving the public the impression that the LHC would answer some big question, and then not being able to deliver.

Sunday, January 13, 2019

Good Problems in the Foundations of Physics

img src: openclipart.org
Look at the history of physics, and you will find that breakthroughs come in two different types. Either observations run into conflict with predictions and a new theory must be developed. Or physicists solve a theoretical problem, resulting in new predictions which are then confirmed by experiment. In both cases, problems that give rise to breakthroughs are inconsistencies: Either theory does not agree with data (experiment-led), or the theories have internal disagreements that require resolution (theory-led).

We can classify the most notable breakthroughs this way: Electric and magnetic fields (experiment-led), electromagnetic waves (theory-led), special relativity (theory-led), quantum mechanics (experiment-led), general relativity (theory-led), the Dirac equation (theory-led), the weak nuclear force (experiment-led), the quark-model (experiment-led), electro-weak unification (theory-led), the Higgs-boson (theory-led).

That’s an oversimplification, of course, and leaves aside the myriad twists and turns and personal tragedies that make scientific history interesting. But it captures the essence.

Unfortunately, in the past decades it has become fashionable among physicists to present the theory-led breakthroughs as a success of beautiful mathematics.

Now, it is certainly correct that in some cases the theorists making such breakthroughs were inspired by math they considered beautiful. This is well documented, for example, for both Dirac and Einstein. However, as I lay out in my book, arguments from beauty have not always been successful. They worked in cases where the underlying problem was one of consistency. They failed in other cases. As the philosopher Radin Dardashti put it aptly, scientists sometimes work on the right problem for the wrong reason.

That breakthrough problems were those which harbored an inconsistency is true even for the often-told story of the prediction of the charm quark. The charm quark, so they will tell you, was a prediction based on naturalness, which is an argument from beauty. However, we also know that the theories which particle physicists used at the time were not renormalizable and would therefore break down at some energy. Once electro-weak unification removes this problem, the requirement of gauge-anomaly cancellation tells you that a fourth quark is necessary. But this isn’t a prediction based on beauty. It’s a prediction based on consistency.
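To spell out the consistency argument (this is standard textbook material, not a quote from the charm-quark literature): one way to state the gauge-anomaly cancellation condition is that, within each generation of fermions, the electric charges must sum to zero once every quark is counted three times for color,

$$3\,(Q_u + Q_d) + Q_\nu + Q_e \;=\; 3\left(\tfrac{2}{3} - \tfrac{1}{3}\right) + 0 + (-1) \;=\; 0\,.$$

With the strange quark but no fourth quark, the second generation cannot satisfy this sum; a partner with charge +2/3, the charm quark, is exactly what is needed to restore it.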

This, I must emphasize, is not what historically happened. Weinberg’s theory of the electro-weak unification came after the prediction of the charm quark. But in hindsight we can see that the reason this prediction worked was that it was indeed a problem of consistency. Physicists worked on the right problem, if for the wrong reasons.

What can we learn from this?

Well, one thing we learn is that if you rely on beauty you may get lucky. Sometimes it works. Feyerabend, I think, had it basically right when he argued “anything goes.” Or, as the late German chancellor Kohl put it, “What matters is what comes out in the end.”

But we also see that if you happen to insist on the wrong ideal of beauty, you will not make it into history books. Worse, since our conception of what counts as a beautiful theory is based on what worked in the past, it may actually get in the way if a breakthrough requires new notions of beauty.

The more useful lesson to learn, therefore, is that the big theory-led breakthroughs could have been based on sound mathematical arguments, even if in practice they came about by trial and error.

The “anything goes” approach is fine if you can test a large number of hypotheses and then continue with the ones that work. But in the foundations of physics we can no longer afford “anything goes”. Experiments are now so expensive and take such a long time to build that we have to be very careful when deciding which theories to test. And if we take a clue from history, then the most promising route to progress is to focus on problems that are either inconsistencies with data or internal inconsistencies of the theories.

At least that’s my conclusion.

It is far from my intention to tell anyone what to do. Indeed, if there is any message I tried to get across in my book it’s that I wish physicists would think more for themselves and listen less to their colleagues.

Having said this, I have gotten a lot of emails from students asking me for advice, and I recall how difficult it was for me as a student to make sense of the recent research trends. For this reason I append below my assessment of some of the currently most popular problems in the foundations of physics. Not because I want you to listen to me, but because I hope that the arguments I offer will help you come to your own conclusions.

(You can find more details and references on all of this in my book.)



Dark Matter
Is an inconsistency between theory and experiment and therefore a good problem. (The issue with dark matter isn’t whether it’s a good problem or not, but to decide when to consider the problem solved.)

Dark Energy
There are different aspects of this problem, some of which are good problems, others not. The question why the cosmological constant is small compared to (powers of) the Planck mass is not a good problem because there is nothing wrong with just choosing it to be a certain constant. The question why the cosmological constant is presently comparable to the density of dark matter is likewise a bad problem because it isn’t associated with any inconsistency. On the other hand, the absence of observable fluctuations around the vacuum energy (what Afshordi calls the “cosmological non-constant problem”) and the question why the zero-point energy gravitates in atoms but not in the vacuum (details here) are good problems.

The Hierarchy Problem
The hierarchy problem is the big difference between the strength of gravity and that of the other forces in the standard model. There is nothing contradictory about this, hence it is not a good problem.

Grand Unification
A lot of physicists would rather have one unified force in the standard model than three different ones. There is, however, nothing wrong with the three different forces. I am undecided as to whether the almost-prediction of the Weinberg angle from breaking a large symmetry group does or does not require an explanation.

Quantum Gravity
Quantum gravity removes an inconsistency and is hence a solution to a good problem. However, I must add that there may be other ways to resolve the problem besides quantizing gravity.

Black Hole Information Loss
A good problem in principle. Unfortunately, there are many different ways to fix the problem and no way to experimentally distinguish between them. So while it’s a good problem, I don’t consider it a promising research direction.

Particle Masses
It would be nice to have a way to derive the masses of the particles in the standard model from a theory with fewer parameters, but there is nothing wrong with these masses just being what they are. Thus, not a good problem.

Quantum Field Theory
There are various problems with quantum field theories where we lack a good understanding of how the theory works and which require a solution. The UV Landau pole in the standard model is one of them. It must be resolved somehow, but exactly how is not clear. We also do not have a good understanding of the non-perturbative formulation of the theory, and the infrared behavior turns out to be not as well understood as we thought just a few years ago (see, e.g., here).

The Measurement Problem
The measurement problem in quantum mechanics is typically thought of as a problem of interpretation and then left to philosophers to discuss. I think that’s a mistake; it is an actual inconsistency. The inconsistency comes from the need to postulate the behavior of macroscopic objects when that behavior should instead follow from the theory of the constituents. The measurement postulate, hence, is inconsistent with reductionism.

The Flatness Problem
Is an argument from finetuning and not well-defined without a probability distribution. There is nothing wrong with the (initial value of the) curvature density just being what it is. Thus, not a good problem.

The Monopole Problem
That’s the question why we haven’t seen magnetic monopoles. It is quite plausibly solved by them not existing. Also not a good problem.

Baryon Asymmetry and The Horizon Problem
These are both finetuning problems that rely on the choice of an initial condition, which is considered to be unlikely. However, there is no way to quantify how likely the initial condition is, so the problem is not well-defined.

In addition, there is always a variety of anomalies where data disagrees with theory. These can linger at low significance for a long time, and it’s difficult to decide how seriously to take them. For those I can only give you the general advice to listen to experimentalists (preferably some who are not working on the experiment in question) before you listen to theorists. Experimentalists often have an intuition for how seriously to take a result. That intuition, however, usually doesn’t make it into publications because it’s impossible to quantify. Measures of statistical significance don’t always tell the full story.

Saturday, January 12, 2019

Book Review: “Quantum Space” by Jim Baggott

Quantum Space
Loop Quantum Gravity and the Search for the Structure of Space, Time, and the Universe
By Jim Baggott
Oxford University Press (January 22, 2019)


In his new book Quantum Space, Jim Baggott presents Loop Quantum Gravity (LQG) as the overlooked competitor of String Theory. He uses a chronological narrative that follows the lives of Lee Smolin and Carlo Rovelli. The book begins with their nascent interest in quantum gravity, continues with their formal education, their later collaboration, and, in the final chapters, follows them as their ways separate. Along with the personal stories, Baggott introduces background knowledge and lays out the scientific work.

Quantum Space is structured into three parts. The first part covers the basics necessary to understand the key ideas of Loop Quantum Gravity. Here, Baggott goes through the essentials of special relativity, general relativity, quantum mechanics, quantum field theory, and the standard model of particle physics. The second part lays out the development of Loop Quantum Gravity and the main features of the theory, notably the emphasis on background independence.

The third part is about recent applications, such as the graviton propagator, the calculation of black hole entropy, and the removal of the big bang singularity. You also find there Sundance Bilson-Thompson’s idea that elementary particles are braids in the spin-foam. In this last part, Baggott further includes Rovelli’s and Smolin’s ideas about the foundations of quantum mechanics, as well as Rovelli and Vidotto’s Planck Stars, and Smolin’s ideas about the reality of time and cosmological natural selection.

The book’s epilogue is an extended Q&A with Smolin and Rovelli and ends by mentioning the connections between String Theory and Loop Quantum Gravity (which I wrote about here).

Baggott writes very well and expresses himself clearly, aided by about two dozen figures and a glossary. The book, however, requires some tolerance for technical terminology. Baggott does an admirable job explaining advanced physics – such as Wilson loops, parallel transport, spinfoam, and renormalizability – and does not shy away from complex topics – such as the fermion doubling problem, the Wheeler-DeWitt equation, Shannon entropy, or extremal black holes – but for a reader without prior knowledge of the field, this may be tough going.

We know from Baggott’s 2013 book “Farewell to Reality” that he is not fond of String Theory, and in Quantum Space, too, he does not hold back with criticism. On Edward Witten’s conjecture of M-theory, for example, he writes:
“This was a conjecture, not a theory…. But this was, nevertheless, more than enough to set the superstring bandwagon rolling even faster.”
In Quantum Space, Baggott also reprints Smolin’s diagnosis of the String Theory community, which attributes to string theorists “tremendous self-confidence,” “group think,” “confirmation bias,” and “a complete disregard and disinterest in the opinions of anyone outside the group.”

Baggott further claims that the absence of new particles at the Large Hadron Collider is bad news for string theory*:
“Some argue that string theory is the tighter, more mathematically rigorous and consistent, better-defined structure. But a good deal of this consistency appears to have been imported through the assumption of supersymmetry, and with each survey published by the ATLAS or CMS detector collaborations at CERN’s Large Hadron Collider, the scientific case for supersymmetry weakens some more.”
I’d have some other minor points to quibble with, but given the enormous breadth of topics covered, I think Baggott’s blunders are few and far between.

I must admit, however, that the structure of the book did not make a lot of sense to me. Baggott introduces a lot of topics that he later does not need and whose relevance to LQG escapes me. For example, he goes on about the standard model, and the Higgs mechanism in particular, but that doesn’t play any role later. He also spends quite some time on the interpretation of quantum mechanics, which isn’t actually necessary to understand Loop Quantum Gravity. I also don’t see what Lee Smolin’s cosmological natural selection has to do with anything. But these are stylistic issues.

The bigger issue I have with the book is that Baggott is as uncritical of Loop Quantum Gravity as he is critical of String Theory. There is no mention in the book of the problem of recovering local Lorentz invariance, an issue that has greatly bothered both Joe Polchinski and Jorge Pullin. Baggott presents Loop Quantum Cosmology (the LQG-based approach to understanding the early universe) as testable but neglects to note that the predictions depend on an adjustable parameter and that it would be extremely hard to tell the LQG-based models apart from other models. Nor does he, for balance, mention String Cosmology. He does not mention the problem with the supposed derivation of the black hole entropy by Bianchi, and he does not mention the problems with Planck stars.

And if he had done a little digging, he’d have noticed that the group-think in LQG is as bad as it is in string theory.

In summary, Quantum Space is an excellent, non-technical introduction to Loop Quantum Gravity that is chock-full of knowledge. It will, however, give you a rather uncritical view of the field.

[Disclaimer: Free review copy.]


* I explained here why the non-discovery of supersymmetric particles at the LHC has no relevance for string theory.

Friday, January 11, 2019

Letter from a reader: “We get trapped doing the same unproductive things over and over and over”

Dear Dr. Hossenfelder,

I read with great interest your book “Lost in Math: How Beauty Leads Physics Astray”. I would like to communicate regarding several points you touch upon in the book. The issues you raise are far more important and far-reaching than you may realize.

First, allow me to introduce myself. I am a U.S. practicing physician and researcher, and my clinical work is in Infectious Diseases. I also possess a substantial philosophy background (graduate student at Tufts University). My research career tracks a pathway you may relate to, given what you state in your book. At one time I was a basic scientist, and worked in the area of host defense against HIV, the virus that causes AIDS. My overall training, however, was in a different area, namely SEPSIS.

SEPSIS is infection sufficient to cause death, and it is a leading cause of mortality worldwide. One concept has driven research and clinical practice in this area (SEPSIS), and that is the idea that infection (usually bacterial) is followed by an exuberant host-derived inflammation that is sufficient to kill the host (patient). In a sense, the bacteria do not kill you, they make you kill yourself.

A fascinating development has emerged in the field of SEPSIS research. More than 250 clinical trials have failed to cure SEPSIS using anti-inflammatory drugs (at a cost of billions of US dollars). The response of the bioscience community to repeated failure has been to double-down…over and over and over again!!!

In your book you describe proliferation of theories in particle physics that do not have empirical implications that can be tested (what philosophers would call a lack of empirical adequacy), and so these become unfalsifiable in principle and function more like religion than science. They drain precious resources and have little, if any chance, to generate empirically testable results (this is the opposite of science). As you touch upon, the concept of theory falsification in science (Popper) is exaggerated, and contradictory data are not just tolerated, they are often ignored.

In my field (Medicine or bioscience) we have a similar crisis, but it is somewhat different from the situation in physics. In our case, there is simply NO THEORY whatever to guide experiment. Have you ever heard of a theory in medicine? I do not believe there is such a thing. In the absence of theoretical guidance, we perform endless experiments and get trapped doing the same unproductive things over and over and over (like 250 failed SEPSIS clinical trials).

The stakes here are higher than one might think, since the idea that inflammation is the cause of disease in general has spread to almost everything else in medicine (inflammation causes cancer... Alzheimer disease... heart attack... stroke... depression... etc, etc, etc). That is why we are told to take aspirin, fish oil, anti-oxidants, vitamin E, CoQ, curcumin, etc. Interestingly, all these applications of the hyperinflammation concept are wrong.

My response? I gave up my laboratory and decided to attempt to explore a novel direction of bioscience research. I work to generate a novel medical theory, and I believe I am currently the world’s only funded medical theoretician (as far as I know). I am attempting to overhaul the hyperinflammation concept of disease and replace it with a different approach beginning with a comprehensive novel theory of SEPSIS.

To conclude, I point out that the crisis you call attention to in physics is present elsewhere in science. In the case of physics, the problem is bad theory, in the case of biomedicine the problem is an absence of theory. The role of theory in science is absolutely pivotal, and really facilitates progress in all scientific fields (imagine physics without Newton).

I think the root of many current problems that block scientific progress is a lack of understanding of what is going on in science (as opposed to the ability to DO science). As a physicist, you are on the front lines of a special kind of science that impinges upon what underlies reality, and this is not the case in other kinds of science like bioscience. You may want to delve into the relationship between Platonism and Scientific Realism... I believe you will find much to consider that will answer your questions about what, exactly, has gone wrong with physics.

Lee Shapiro, M.D., F.A.C.P.

University of Colorado Anschutz Medical Center and Denver Veterans Affairs Medical Center

Wednesday, January 09, 2019

The Real Problems with Artificial Intelligence

R2D2 costume for toddlers.
[image: amazon.com]
In recent years many prominent people have expressed worries about artificial intelligence (AI). Elon Musk thinks it’s the “biggest existential threat.” Stephen Hawking said it could “be the worst event in the history of our civilization.” Steve Wozniak believes that AIs will “get rid of the slow humans to run companies more efficiently,” and Bill Gates, too, put himself in “the camp that is concerned about super intelligence.”

In 2015, the Future of Life Institute formulated an open letter calling for caution and formulating a list of research priorities. It was signed by more than 8,000 people.

Such worries are not unfounded. Artificial intelligence, like any new technology, brings risks. While we are far from creating machines even remotely as intelligent as humans, it’s only smart to think about how to handle them sooner rather than later.

However, these worries neglect the more immediate problems that AI will bring.

Artificially intelligent machines won’t get rid of humans any time soon because they’ll need us for quite a while yet. The human brain may not be the best thinking apparatus, but it has a distinct advantage over all the machines we have built so far: It functions for decades. It’s robust. It repairs itself.

Some millions of years of evolution have optimized our bodies, and while the result could certainly be further improved (damn those knees), it’s still more durable than any of the silicon-based thinking apparatuses we have created. Some AI researchers have even argued that a body of some kind is necessary to reach human-level intelligence, which – if correct – would vastly increase the problem of AI fragility.

Whenever I bring up this issue with AI enthusiasts, they tell me that AIs will learn to repair themselves, and even if not, they will just upload themselves to another platform. Indeed, much of the perceived AI-threat comes from them replicating quickly and easily, while at the same time being basically immortal. I think that’s not how it will go.

Artificial Intelligences at first will be few and one-of-a-kind, and that’s how it will remain for a long time. It will take large groups of people and many years to build and train an AI. Copying them will not be any easier than copying a human brain. They’ll be difficult to fix once broken, because, as with the human brain, we won’t be able to separate their hardware from the software. The early ones will die quickly for reasons we will not even comprehend.

We see the beginning of this trend already. Your computer isn’t like my computer. Even if you have the same model, even if you run the same software, they’re not the same. Hackers exploit these differences between computers to track your internet activity. Canvas fingerprinting, for example, is a method that asks your computer to render some text in a given font and output the result as an image. The exact way your computer performs this task depends on both your hardware and your software, so the output can be used to identify a device.
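For concreteness, here is a minimal sketch of the technique in browser TypeScript. It is illustrative only – the function name, canvas size, and drawn text are my own choices, not code from any real tracker – but the principle is the one described above: render something, read the result back, and hash it.

```typescript
// Minimal, illustrative sketch of canvas fingerprinting (assumed names,
// not a real tracking library). Draw some text, serialize the rendered
// pixels, and hash the result. Tiny rendering differences between graphics
// stacks make the hash differ between otherwise identical machines.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 220;
  canvas.height = 40;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "canvas-unavailable";

  // Render a colored rectangle and some text; anti-aliasing and font
  // rasterization differ slightly across hardware, drivers, and fonts.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 220, 40);
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint-test-1234567890", 2, 2);

  // Serialize the rendered image and hash it to get a compact identifier.
  const serialized = canvas.toDataURL();
  const bytes = new TextEncoder().encode(serialized);
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Run on two different machines, the same code will typically return different hashes, because font rasterization and anti-aliasing depend on the hardware and software underneath.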

Presently, you do not notice these subtle differences between computers all that much (except possibly when you spend hours browsing help forums thinking “someone must have had this problem before” and turn up nothing). But the more complex computers get, the more obvious the differences will become. One day, they will be individuals with irreproducible quirks and bugs – like you and I.

So we have AI fragility plus the trend for increasingly complex hardware and software to become unique. Now extrapolate this some decades into the future. We will have a few large companies, governments, and maybe some billionaires who will be able to afford their own AI. Those AIs will be delicate and need constant attention from a crew of dedicated humans.

This brings up various immediate problems:

1. Who gets to ask questions and what questions?

This may not be a matter of discussion for privately owned AI, but what about those produced by scientists or bought by governments? Does everyone get a right to a question per month? Do difficult questions have to be approved by the parliament? Who is in charge?

2. How do you know that you are dealing with an AI?

The moment you start relying on AIs, there’s a risk that humans will use them to push an agenda by passing off their own opinions as those of the AI. This problem will occur well before AIs are intelligent enough to develop their own goals.

3. How can you tell that an AI is any good at giving answers?

If you only have a few AIs, and those are trained for entirely different purposes, it may not be possible to reproduce any of their results. So how do you know you can trust them? It could be a good idea to require that all AIs have a common area of expertise that can be used to compare their performance.

4. How do you prevent limited access to AI from increasing inequality, both within nations and between nations?

Having an AI to answer difficult questions can be a great advantage, but left to market forces alone it’s likely to make the rich richer and leave the poor even farther behind. If this is not something that we want – and I certainly don’t – we should think about how to deal with it.

Monday, January 07, 2019

Letter from a reader: “What’s so bad about randomness?”

[The best part of publishing a book has been getting feedback from readers who report their own experience as it relates to what I wrote about. With permission, I want to share this letter which I received the other day from Dave Hurwitz, a classical music critic.]

Dear Dr. Hossenfelder,

I hope that I am not bothering you, but I just wanted to write to tell you how much I am enjoying your book “Lost in Math.” I haven’t quite finished it yet, but I was so taken with it that I thought I might write to let you know anyway. I am about as far away from theoretical physics as it’s possible to be: I am a classical music critic and independent musical scholar, and I support myself working in real estate; but I am a very serious follower of the popular scientific literature, and I was so impressed by your directness, literacy, and ability to make complex topics digestible and entertaining for the general reader.

I am also very much in sympathy with your point of view. Even though I don’t understand the math, it often seems to me that so much of what theoretical physicists are doing amounts to little more than a sort of high-end gematria – numerology with a kind of mystical value assigned to mathematical coincidence or consistency, or, as you (they) call it, “beauty.” I cringe whenever I hear these purely aesthetic judgments applied to theoretical speculation about the nature of reality, based primarily on the logic of the underlying math. And don’t get me wrong: I like math. Personally, I have no problem with the idea that the laws governing the universe may not be elegant and tidy, and I see no reason why they should be. They are what they are, and that’s all. What’s so bad about randomness? It’s tough enough trying to figure out what they are without assigning to them purely subjective moral or aesthetic values (or giving these undue weight in guiding the search).

It may interest you to know that something similar seems to be infecting current musicology, and I am sure many other academic fields. Discussion of specific musical works often hinges on standardized and highly technical versions of harmonic analysis, mostly because the language and methodology have been systematized and everyone agrees on how to do it – but what it actually means, how it creates meaning or expressiveness, is anyone’s guess. It is assumed to be important, but there is no demonstrable causal connection between the correctness of the analysis and the qualitative values assigned to it. It all comes down to a kind of circular reasoning: the subjective perception of “beauty” drives the search for a coherent musical substructure which, not surprisingly, once described is alleged to justify the original assumption of “beauty.” If you don’t “get” physicists today, then I don’t “get” musicologists.

Anyway, I’m sorry to take up so much of your time, but I just wanted to note that what you see – the kind of reasoning that bothers you so much – has its analogues way beyond the field of theoretical physics. I take your point that scientists, perhaps, should know better, but the older I get the more I realize two things: first, human nature is the same everywhere, and second, as a consequence, it’s precisely the people who ought to know better that, for the most part, seldom do. I thank you once again for making your case so lucidly and incisively.

Best regards,

Dave Hurwitz
ClassicsToday.com

Thursday, January 03, 2019

Book Update

During the holidays I got several notes from people who tried to order my book but were told it’s out of print or not in stock. Amazon likewise had only used copies on sale, starting at $50 and up. Today I have good news: My publisher informed me that the book has been reprinted and should become available again in the next few days.

The German translation meanwhile is in its third edition (the running head has been fixed!). The Spanish translation will appear in June with a publisher named Ariel. Other translations to come are Japanese, Chinese, Russian, Italian, French, Korean, Polish, and Romanian. Amazon now also offers an English audio version.

Many thanks to all readers!



Oh, and I still don’t have a publisher in the UK.

Wednesday, January 02, 2019

Electrons don’t think

Brainless particles leaving tracks in a bubble chamber. [image source]
I recently discovered panpsychism. That’s the idea that all matter – animate or inanimate – is conscious; we just happen to be somewhat more conscious than carrots. Panpsychism is the modern élan vital.

When I say I “discovered” panpsychism, I mean I discovered there’s a bunch of philosophers who produce pamphlets about it. How do these philosophers address the conflict with evidence? Simple: They don’t.

Now, look, I know that physicists have a reputation of being narrow-minded. But the reason we have this reputation is that we tried the crazy shit long ago and just found it doesn’t work. You call it “narrow-minded,” we call it “science.” We have moved on. Can elementary particles be conscious? No, they can’t. It’s in conflict with evidence. Here’s why.

We know of 25 elementary particles. These are collected in the standard model of particle physics. The predictions of the standard model agree with experiment to excellent precision.

The particles in the standard model are classified by their properties, which are collectively called “quantum numbers.” The electron, for example, has an electric charge of -1 and it can have a spin of +1/2 or -1/2. There are a few other quantum numbers with complicated names, such as the weak hypercharge, but the details are not so important. The point is, there is a handful of these quantum numbers and they uniquely identify an elementary particle.

If you calculate how many particles of a certain type are produced in a particle collision, the result depends on how many variants of the produced particle exist. In particular, it depends on the different values the quantum numbers can take. Since the particles have quantum properties, anything that can happen will happen. If a particle exists in many variants, you’ll produce them all – regardless of whether or not you can distinguish them. The result is that you would see more of them than the standard model predicts.

Now, if you want a particle to be conscious, your minimum expectation should be that the particle can change. It’s hard to have an inner life with only one thought. But if electrons could have thoughts, we’d long have seen this in particle collisions because it would change the number of particles produced in collisions.
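To put a number on the counting argument: production rates sum over a particle’s internal states, so, schematically, and assuming the hypothetical extra states couple like the known ones,

$$\sigma_{\rm production} \;\propto\; \sum_{\text{internal states}} 1 \;=\; N_{\rm states}\,,\qquad \frac{\sigma(N_{\rm states}=4)}{\sigma(N_{\rm states}=2)} \;=\; 2\,.$$

That is, if the electron carried even one extra hidden two-valued degree of freedom on top of its two spin states, pair production rates would come out a factor of two above the standard-model prediction. The factor of two is an illustrative toy number, not a measured quantity, but precision data leave no room for a discrepancy of that size.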

In other words, electrons aren’t conscious, and neither are any other particles. It’s incompatible with data.

As I explain in my book, there are ways to modify the standard model that do not run into conflict with experiment. One of them is to make the new particles so massive that so far we have not managed to produce them in particle collisions, but this doesn’t help you here. Another way is to make them interact so weakly that we haven’t been able to detect them. This doesn’t help here either. The third way is to assume that the existing particles are composed of more fundamental constituents which are, however, so strongly bound together that we have not yet been able to tear them apart.

With the third option it is indeed possible to add internal states to elementary particles. But if your goal is to give consciousness to those particles so that we can inherit it from them, strongly bound composites do not help you. They do not help you precisely because this consciousness would be hidden so deeply that it takes a lot of energy to access. This then means, of course, that you cannot use it at the low energies typical of soft and wet thinking apparatuses like human brains.

Summary: If a philosopher starts speaking about elementary particles, run.