Monday, October 07, 2019

What does the future hold for particle physics?

In my new video, I talk about why the Large Hadron Collider, LHC for short, has not found any fundamentally new particles besides the Higgs boson, and what this means for the future of particle physics. Below you will find a transcript with references.

Before the LHC turned on, particle physicists had high hopes it would find something new besides the Higgs boson, something that would go beyond the standard model of particle physics. There was a lot of talk about the particles that supposedly make up dark matter, which the collider might produce. Many physicists also expected it to find the first of a class of entirely new particles that were predicted based on a hypothesis known as supersymmetry. Others talked about dark energy, additional dimensions of space, string balls, black holes, time travel, making contact with parallel universes, or “unparticles”. That’s particles which aren’t particles. So, clearly, some wild ideas were in the air.

To illustrate the situation before the LHC began taking data, let me quote a few articles from back then.

Here is Valerie Jamieson writing for New Scientist in 2008:
“The Higgs and supersymmetry are on firm theoretical footing. Some theorists speculate about more outlandish scenarios for the LHC, including the production of extra dimensions, mini black holes, new forces, and particles smaller than quarks and electrons. A test for time travel has also been proposed.”
Or, here is Ian Sample for the Guardian, also in 2008:
“Scientists have some pretty good hunches about what the machine might find, from creating never-seen-before particles to discovering hidden dimensions and dark matter, the mysterious substance that makes up 25% of the universe.”
Paul Langacker in 2010, writing for the APS:

“Theorists have predicted that spectacular signals of supersymmetry should be visible at the LHC.” Michael Dine for Physics Today in 2007:
“The Large Hadron Collider will either make a spectacular discovery or rule out supersymmetry entirely.”
The Telegraph in 2010:
“[The LHC] could answer the question of what causes mass, or even surprise its creators by revealing the existence of a fifth, sixth or seventh secret dimension of time and space.”
A final one. Here is Steve Giddings writing in 2010 for
“LHC collisions might produce dark-matter particles... The collider might also shed light on the more predominant “dark energy,”... the LHC may reveal extra dimensions of space... if these extra dimensions are configured in certain ways, the LHC could produce microscopic black holes... Supersymmetry could be discovered by the LHC...”
The Large Hadron Collider has been running since 2010. It has found the Higgs boson. But why didn’t it find any of the other things?

This question is surprisingly easy to answer. There was never a good reason to expect any of these things in the first place. The more difficult questions are why so many particle physicists thought those were reasonable expectations, and why not a single one of them has told us what they have learned from their failed predictions.

To see what happened here, it is useful to look at the difference between the prediction of the Higgs boson and the other speculations. The standard model without the Higgs does not work properly. It becomes mathematically inconsistent at energies that the LHC is able to reach. Concretely, without the Higgs, the standard model predicts probabilities larger than one, which makes no sense.
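To put a rough number on where this inconsistency kicks in, here is a back-of-the-envelope estimate (my illustrative sketch, not part of the original argument; it assumes the textbook electroweak value v ≈ 246 GeV, and the order-one factor in the bound varies between derivations): without the Higgs, the amplitude for scattering longitudinal W bosons grows like E²/v², and probabilities exceed one roughly where that growth saturates the unitarity bound.

```python
import math

# Back-of-envelope estimate of where WW scattering without a Higgs
# violates unitarity. Assumption: electroweak vev v = 246 GeV, and
# the bound fails near E ~ sqrt(8*pi) * v (the O(1) factor depends
# on the derivation).
v = 246.0  # GeV
E_max = math.sqrt(8 * math.pi) * v  # GeV

print(f"Unitarity violated near {E_max / 1000:.1f} TeV")  # ~1.2 TeV
```

An energy scale of about a TeV is well within the LHC's reach, which is why something new had to show up there.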

We therefore knew, before the LHC turned on, that something new had to happen. It could have been something else besides the Higgs. The Higgs was one way to fix the problem with the standard model, but there are other ways. However, the Higgs turned out to be right.

All the other proposed ideas (extra dimensions, supersymmetry, time travel, and so on) are unnecessary. These theories have been constructed so that they are compatible with all existing observations. But they are not necessary to solve any problem with the standard model. They are basically wishful thinking.

The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix. I am afraid that many of them still believe this. This supposed problem is that the standard model is not “technically natural”.

This means the standard model contains one number that is small, but for which there is no explanation. This number is the mass of the Higgs boson divided by the Planck mass, which happens to be about 10^-15. The standard model works just fine with that number and it fits the data. But a small number like this, without explanation, is ugly, and particle physicists didn’t want to believe nature could be that ugly.
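For concreteness, here is the arithmetic behind that small number (my own quick check; it assumes a Higgs mass of about 125 GeV and a Planck mass of about 1.22×10^19 GeV; using the reduced Planck mass, or squared masses, shifts the exponent by a few orders of magnitude, but the ratio stays tiny either way).

```python
# The small, unexplained number in the naturalness argument.
# Assumed inputs: Higgs mass ~125 GeV, Planck mass ~1.22e19 GeV.
m_higgs = 125.0     # GeV
m_planck = 1.22e19  # GeV

ratio = m_higgs / m_planck
print(f"m_Higgs / m_Planck ~ {ratio:.0e}")  # ~1e-17
```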

Well, now they know that nature doesn’t care what physicists want it to be like.

What does this mean for the future of particle physics? This argument from “technical naturalness” was the only reason that physicists had to think that the standard model is incomplete and something to complete it must appear at LHC energies. Now that it is clear this argument did not work, there is no reason why a next larger collider should see anything new either. The standard model runs into mathematical trouble again at energies about a billion times higher than what a next larger collider could test. At the moment, therefore, we have no good reason to build a larger particle collider.

But particle physics is not only collider physics. And so it seems likely to me that research will shift to other areas of physics. A shift that has been going on for two decades already, and will probably become more pronounced now, is the move to astrophysics, in particular the study of dark matter and dark energy and also, to some extent, the early universe.

The other shift that we are likely to see is a move away from high-energy particle physics and towards high-precision measurements at lower energies, or towards tabletop experiments probing the quantum behavior of many-particle systems, where we still have much to learn.


  1. Agree that these are better avenues. Can you say anything about social motivations driving physics? For example subatomic physics enjoyed strong government support because of the "success" of developing nuclear bombs. Cosmology, in my view, taps the transcendental needs previously served by religions. Practical quantum mechanics like quantum computing, cryptography, maybe micro-teleportation could be strongly commercial. Comments?

    1. Yes, particle physics, like nuclear physics, enjoys much governmental support due to historical reasons. It's about time this changes. It's a waste of money and we have more important research to do.

  2. Sabine wrote:

    “The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix.”

    Going back to 1984, the LHC was mostly designed to find the Higgs and to study WW interactions and heavy quark physics. Technicolor and supersymmetry searches were minor goals.

    I do not think that there is any problem with the current standard model at foreseeable energies, and many (most?) particle physicists think the same. Still, I think that building a successor to the LHC would be highly valuable. Even if no “Beyond the Standard Model” physics is discovered by this new accelerator, it would give us better precision measurements than a thousand “tabletop” experiments.
    Just ruling out any new physics over a large energy range would be useful. I agree with you that astronomical and cosmological measurements are the most promising avenues for studying fundamental physics. But extracting fundamental physics from such experiments is very complicated and indirect. Any clues that we can extract from particle accelerators would be extremely valuable for understanding the cosmological data.

  3. Dear Sabine,

    I really admire your courage in repeating the same old story again and again. In my view, your arguments are pretty clear and understandable. Also, in my view, the discovery of the Higgs was not such a big thing. We already saw at LEP that something like a Higgs must exist; it had already shown up in the radiative corrections of the LEP precision data. There were really good reasons to build the LHC in order to study the Higgs in detail and to distinguish between an SM Higgs and possible SUSY versions or some other really weird possibilities.

    The current LHC data are, however, in perfect agreement with the SM; there is no hint at all that something unexpected might happen. This does not mean, however, that it would not be interesting to do further precision tests of the standard model itself, e.g. of the Higgs couplings or lepton universality.

    In my personal view as a layman, it would be really interesting to build a muon collider within the existing LHC tunnel after the termination of the LHC program. Clearly this technology is not available at the moment, but the LHC will still be up and running for several more years to come.

    Surely it would be a tremendous challenge for CERN R&D to provide the technology for building a muon collider. Today no one knows exactly whether this goal can be achieved or not. But this has always been the case in particle physics. And exactly this makes particle physics so exciting. How could one motivate students for HEP if no one really believed in discovering new and exciting phenomena?

    Muon universality has never been tested in great detail before. It seems to me that current (g-2) experiments are giving a hint that something really interesting might be discovered. Lepton colliders should also be able to measure the Higgs couplings with high precision.

    Is there anyone reading this blog who could provide me with a realistic estimate of my layman considerations?

    1. Former,

      Yeah, I know I've said this many times, but I figured I hadn't done a video about it. The audience on YouTube only slightly overlaps with the audience of this blog. Sorry about that. The problem with blogs is that it's basically impossible to get out of a niche, because the audience is self-selecting. Of course the problem also exists on YouTube (which copes badly with trying to "calculate" an audience for my music videos given the other content) but it's not as pronounced.

      In any case, I am afraid there will be more repetitions in the future as I try to feed blog content into YouTube.

    2. Sabine,

      I really like your videos. They are becoming better and better over time!

  4. I forgot this... Did you know that we cannot comment on your Facebook posts, because you have reached 5,000 friends and Facebook doesn't take comments from those who are not among the 5,000?

    1. That's not correct. You can follow me and comment on my public posts even if you are not on the friend list. Presumably you are referring to my personal Facebook page. Please note that this blog has a separate page (the link is in the sidebar).

  5. I must confess that when it comes to mathematics and physics I am dumb. So, in all humbleness, just for the love of physics and learning, here are some pieces of experimental evidence akin to what we have been discussing for the past several weeks:

    1. "Recording" at the quantum level, an experimental evidence for Quantum Darwinism. The experiments were done in Italy, China, and Germany.

    2. The observer influence or degrees of observer influence. Experimental evidence by Weizmann Institute of Science, 1998.

    3. How can the same thing present two different realities (Wigner's friend paradox)? Experimental evidence from the Heriot-Watt experiment, Edinburgh.

  6. I will take some exception with supersymmetry. The Coleman-Mandula theorem illustrates how unification of gauge fields and gravitation were not possible. Supersymmetry with its intertwining of Poincare symmetry with a transformation of quantum statistics, bosons ↔ fermions, provides a way out of this problem. This is one reason people have been so adamant about SUSY. However, I think that if SUSY is correct, it occurs in ways totally different from how it is canonically formulated.

    With precision measurements, the current effort to find an electric dipole moment of the electron is most salient for particle physics. The bound so far from the ACME group is |d_e| < 1.1×10^{-29} e·cm. This is a search for some structure to the electron. If such structure exists, there is an intrinsic breaking of parity and time-reversal symmetry. The classical radius of the electron is about 10^{-13} cm, and this result means that on this sphere any “bump” is less than 10^{-29} cm. This is a probe of distances approaching the string or Planck scale.
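    To put these length scales side by side (illustrative arithmetic added here, not part of the original comment; it assumes the standard values of about 2.8×10^{-13} cm for the classical electron radius and 1.6×10^{-33} cm for the Planck length):

```python
# Comparing the length scales in the electron-EDM discussion.
# Reading the ACME bound |d_e| < 1.1e-29 e*cm as a charge-displacement
# distance of 1.1e-29 cm:
displacement = 1.1e-29   # cm
r_classical = 2.8e-13    # cm, classical electron radius
l_planck = 1.6e-33       # cm, Planck length

print(f"displacement / classical radius: {displacement / r_classical:.1e}")
print(f"displacement / Planck length:    {displacement / l_planck:.1e}")
```

    So the bound sits some seventeen orders of magnitude below the classical radius, though still a few thousand Planck lengths above the Planck scale.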

    It seems plausible to detect particles that have extreme masses by performing measurements of spectral gaps similar to the Lamb shift. An electron has a cloud of virtual particles around it, at least in a standard Feynman-diagram perspective, and if there are particles in the 100 TeV range, these contribute to the virtual cloud around the electron and to its mass renormalization. We can “tune” the vacuum in a Casimir experimental setting and see how this changes the electron mass.

    I am not of a strong positive or negative mind when it comes to the proposed FCC. It may be that there is additional physics, such as the sphaleron, at 100 TeV. On the other hand, that is a bit of a heavy bet to make.

    1. "The Coleman-Mandula theorem illustrates how unification of gauge fields and gravitation were not possible. Supersymmetry with its intertwining of Poincare symmetry with a transformation of quantum statistics, bosons ↔ fermions, provides a way out of this problem."

      So? Serious question, think about it. What makes you think that this bears any relevance to the question whether it's a correct description of nature?

    2. I found it seems to be possible to achieve unification if we add fundamental falling to Poincare symmetry as phenomenon with translation, rotation and boost. Fundamental falling would replace virtual particles between opposite charges, boost via particles would stay between similar charges only...

    3. SUSY is not the only way around the Coleman-Mandula theorem.
      E.g. Garrett Lisi thinks he found one when he says “For the Coleman-Mandula theorem, there is no spacetime and thus no S-matrix until AFTER symmetry breaking, when gravitational so(3,1) and gauge fields separate ...” here.

      Or we could just take the theorem seriously and not try to unify them, but instead investigate further the relation between the theories – theory reductionism.
      Maybe we should let GR and QM just cooperate, instead of trying to force them into a unification.

      Spacetime in GR simply is not quantized, i.e. in the unitary evolution of QM, where superposition and entanglement of mass and energy/momentum reign, there simply cannot be a backreaction onto spacetime.
      But there has to be a backreaction.
      Let them cooperate in a quantized manner, e.g. whenever the tension gets too high a measurement could be triggered.
      Then non-linearity in GR and linearity in QM together would explain measurement and quantization.

      “This would imply that vacuum fluctuations do not couple to gravity and this can help explain why the cosmological constant is not 120 orders of magnitude bigger than it is,” as Tim Palmer says here - admittedly in a different context, but in a chapter that is titled “... The End of Particle Physics?”

    4. "sphaleron" Status Hypothetical [Wikipedia]

      So many hypotheticals (in particle physics), so little time (I have to read about them all).

    5. @ Sabine. There is nothing in the way of a proof of course. An answer to a theoretical problem is not necessarily an empirical solution. I do though still think SUSY is a reasonable construction to keep in one's toolbox.

      I tend to think in terms of a supersymmetry-protected topological order instead of the standard way people have applied it. The SUSY standard model work, which accounts for tens of thousands of published papers spanning four decades, appears likely to be headed for the landfill. I see few ways this can be rescued.

    6. @ Eusa: I guess I do not know what fundamental falling is. In general relativity gravitation is a manifestation of patching together local regions with Poincare symmetry and where the resulting chart has geodesic motion and curvature.

    7. Right, as I say in my public lecture, physicists know they shouldn't make these arguments, but they do it nevertheless. That's why I am convinced humans will go extinct in the next few hundred years.

    8. Lawrence, I mean fundamental falling replace virtual particles which obey E^2-(pc)^2<0 and difference come from the full conservation of causality among mediating particles in the field dynamics. The concept is connected to those two antipodal causal continuums I mentioned earlier.

    9. “... patching together local regions ...”
      Why is differential geometry in GR such a precise tool matching reality so well?
      Maybe nature itself does the patching of spacetime.
      Calculus is pure math with a limit to infinity and thus zero step size – maybe nature has limited resources and makes tiny measurable steps.
      Let’s go a step further beyond just local Lorentz symmetry.
      Let’s include measurement and even non-locally correlated ones.
      In EPR each of Alice’s and Bob’s local patches of spacetime have to be patched and glued to the previous local spacetime according to the non-locally related outcome of their massive “classical” measurement devices’ backreaction.
      Thus, spacetime tells QM matter how to free fall, QM matter gets entangled and superposed. And “localized”, i.e. measured QM tells spacetime how to curve.
      Averaging over the myriads of tiny measurements would reproduce the story of classical matter in GR.

    10. Reimond: “Why is differential geometry in GR such a precise tool matching reality so well?”

      Mathematics is an invented tool intended to match reality, from the beginning of arithmetic using pebbles to represent sheep.

      Why didn't Euclidean geometry work? Because it was an approximate model of reality. Of course differential geometry works, because we can make it a more accurate model of reality; we would have invented something else if it didn't.

      As we will probably have to invent something else to move forward in physics, since differential geometry fails to predict what happens at the infinitesimal scale, we learn that it too is an approximate model of reality, and that we need something that models reality more accurately.

      We just don't know what it is, yet. I doubt any simple extension of existing theory is going to work, I think this is what physicists have been trying for forty years without success.

    11. @Sabine

      "Right, as I say in my public lecture, physicists know they shouldn't make these arguments, but they do it nevertheless. That's why I am convinced humans will go extinct in the next few hundred years. "

      I have no idea what you are talking about. It does not make any sense to me. Could you further explain your considerations?

    12. Correction: “... QM tells ...” should read “... QM matter tells ...”

      And of course, how to translate the tiny amount of “localized” QM matter into a tiny T^{μν} would still be a question, but at least it is not in superposition after a triggered measurement.
      For GR the backreaction is tiny, of the order of the Planck scale. But for QM a Planck mass is huge - it takes a large number of entangled atoms to reach a tiny fraction of it. (*)
      The very relation between QM and GR, i.e. ℏ/G and a threshold would determine this process - theory reductionism at work.

      (*) This also would explain the transition Sabine mentions: “It’s only past the level of biochemistry that it starts getting simpler again.” (see also here).
      Further, proteins live in water, to which they can also get entangled. Entanglement happens because QM particles are exactly identical and therefore the state needs to be anti-/symmetrized.

    13. @Reimond: I found it funny that in Sabine's book she devotes a fair amount of space to interviewing Lisi, when after all Lisi's program is extremely mathematical. I read a couple of his early papers and found them to be interesting expositions on representations of Lie and exceptional algebras.

      With the standard approach to gravity-gauge theory unification there is a group of the form SO(n,1) for n > 3, a reasonable choice being n = 9. In this case there is a space or spacetime. This can in turn be embedded in E6, which in turn can be embedded in E8. With SO(9,1) there is a spacetime. This approach takes us into the whole Kaluza-Klein compactification scheme. That might be the case, but the business is so vast in the number of possible Calabi-Yau spaces that it is maybe not worth pursuing. Lisi's program of exceptional algebras is interesting, and I have a lot of work on the same with the Freudenthal determinant and the Jordan J^3(O^3) matrix on three E8s or octonions.

      I am though not entirely sure how he says there is no spacetime. The correspondence between quantum mechanics and spacetime might be a route towards that. Spacetime is emergent from entanglements, where Raamsdonk noted that a state

      |ψ⟩ = Σ_i e^{-E_i β/2} |E_i⟩ ⊗ |E_i⟩

      is an entanglement of states that correspond to holographic states on different horizons. This then defines the two black holes in the Penrose conformal diagram for the Schwarzschild metric. We might then say that spacetime is emergent, and in that scheme there may not be spacetime at unification. I am not sure whether Lisi is pursuing this line of effort. With AdS_4 = SO(4,2)/SO(4,1) and SU(2,2) ~ SO(4,2), the twistor geometry enters in with SU(2,2) = AdS_4×SO(4,1), and quantum states might then construct twistor geometry. I am though getting a bit afield and into things I am tinkering with. If quantum states are fundamental and we have the sort of scheme that Raamsdonk proposes, then for a Euclidean time τ ↔ 1/T there is some sort of quantum critical point here in a phase transition, or maybe a form of edge state. In this setting stochastic supersymmetry could play a role.

      As I have been saying, it is not so much supersymmetry that is heading for a burial; it is the minimal supersymmetric standard model (MSSM). There seems to be no way out for the MSSM at this point. Its upholders, following the Pied Piper of the MSSM, Gordon Kane, will make the case for some trans-TeV physics for the MSSM. This will make an argument for the FCC “hyper-collider.” I met Kane briefly many years ago at a conference and he seems like an affable guy, so I have to feel somewhat sorry for him. However, the MSSM is on serious life support and showing few signs of life these days. Supersymmetry, on the other hand, connects spacetime symmetry with quantum statistics in a way that is to my mind fascinating. It could of course still be wrong. I just think it has a lot to do with quantum gravitation and the emergence of spacetime; not so much with an IR physics of low-mass particles. In fact I suspected this many years ago.

    14. @Reimond and Castaldo

      Differential geometry and topology is what I did my MS grad work on. It is a vast topic. The main construction is the atlas of charts. Suppose there is a manifold of n dimensions, M^n, and in some local region U we define a map to the Euclidean space R^n, φ_i: U → P ⊂ R^n. This maps the local region U into some patch P in R^n. We do the same for a subset V, φ_j: V → P' ⊂ R^n, where U∩V ≠ ∅, i.e. non-empty. Then from V to U we have the total map φ_j^{-1}·φ_i, which will be labeled g_{ij}. Now we define differentials of this, which give connections, and from there we construct curvatures.
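      As a concrete toy example of this chart construction (an illustration added here, not part of the original comment): cover the unit circle S^1 with two angle charts and look at the transition map on their overlap.

```python
import math

# Two overlapping angle charts on the unit circle S^1 (a 1-manifold).
def chart_a(x, y):
    return math.atan2(y, x)  # angles in (-pi, pi]

def chart_b(x, y):
    theta = math.atan2(y, x)
    return theta if theta > 0 else theta + 2 * math.pi  # angles in (0, 2*pi]

def transition_ab(theta_a):
    """Transition map chart_b o chart_a^{-1} on the overlap."""
    x, y = math.cos(theta_a), math.sin(theta_a)  # invert chart A
    return chart_b(x, y)

# On the upper semicircle the transition is the identity; on the lower
# semicircle it is a constant shift by 2*pi. Each piece is smooth, which
# is what lets the two charts form one differentiable atlas.
print(transition_ab(0.5))   # 0.5
print(transition_ab(-0.5))  # -0.5 + 2*pi ~ 5.78
```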

      These chart maps φ_i can be of two sorts. The elements g_{ij} can describe a vector space on a bundle lifted off the manifold M^n. This describes an internal space, such as the circle at every point of spacetime that describes electromagnetism. So locally, on this chart, this is a product space P×M^n, where P is a vector space of dimension m. This is a principal bundle. The other case is where the elements g_{ij} are the manifold M^n itself, and this is a fiber bundle. This is the case for general relativity and the curvature of spacetime or the base manifold.

      This lays down the construction aspects of basic differential geometry. There are loads of developments from this. Certainly the physics aspects of this are with gauge theory and gravitation. There have been interesting mathematical results as well, in particular the Atiyah-Singer theorem and the theorems of Donaldson, Freedman and Uhlenbeck on the existence of exotic 4-manifolds that are homeomorphic but not diffeomorphic. The timelike singularities in the interiors of Kerr and Reissner-Nordström black holes have features similar to that. The Atiyah-Singer theorem proves that the dimension of the space of solutions to an elliptic differential system is equal to the topological index of the space. This connects to the Riemann-Roch theorem, the Chern-Gauss-Bonnet theorem and lots of other stuff.

      The Atiyah-Singer theorem in general involves operators of the form D_i = Σ_j Γ_{ij} ∂_j, which are called Dirac operators. The Dirac equation is a special form of this. On my home workbench, where I work on things unrelated to work (which is more engineering), I have a stack of papers on how this works with Jacobi θ-functions, ∂θ/∂t = (1/4π) ∂²θ/∂x² for θ = θ(x,it), which construct exceptional and sporadic groups. A Dirac operator form of this has interesting properties, where a spinor generalization for an octonion or E8 maps onto the three E8s of the Jordan J^3(O) in 26 dimensions. It also has supersymmetric aspects.

      There are deep things to think about here, and while Sabine is right that we should not believe a physical theory just because of the math, I do think that whatever foundations we might find will have deep structure of some sort. This is because physics is about conservation of quantities, or counting quantum numbers or degrees of freedom, etc. That takes one into mathematics, and we humans have been tracked along these lines since pebbles were used to represent numbers of cattle and we began counting.

    15. Former,

      "I have no idea what you are talking about. It does not make any sense to me. Could you further explain your considerations?"

      Particle physicists are a community that brings together some thousands of the most intelligent people on the planet. If they cannot understand that their opinions are influenced by the social feedback from the community they are part of, and that this leads to bad decisions, then the chances that any other group of people will understand this are basically zero.

      Why is that a worry? Well, this is the reason why we - by which I mean all of us - cannot solve the problems that we create on a global scale (unless maybe by accident, if we get very lucky), and this situation is not going to change. We are plainly too stupid to make decisions in large groups, and that will eventually put an end to humanity. It's only a matter of time.

      I know that you (as pretty much everybody else) will almost certainly think this is silly, but do me a favor and at least think about it for a moment before you laugh: We create problems. We have no mechanism to solve those problems. Pretty much no one understands that we need one. Sooner or later one of these problems will wipe us out.

      Climate change is the obvious example. All that talk will not lead to anything (except cosmetic fixes to attract voters) because the problem is simply too difficult to solve with the existing institutions. At the very best, we'll see some well-intended attempts that are, however, almost certain not to improve the situation.

      We'll not go extinct from climate change itself, of course. But it'll pull after it a long sequence of further problems that are bound to pile up.

      Hope that explains it.

    16. First a little erratum. I said above in this thread g_{ij} is a map from V to U, when I meant P' to P.

      There has been a fracturing of cultural communication for virtually a century. It is curious that this has happened while the technology for communication has expanded exponentially. However, we are now in a time where people can no longer communicate with each other at all. For instance, Christian fundamentalists in the US are as different from American scientists as maybe the Chinese and the Maya were in the 8th century, even though American scientists and fundamentalists speak the same language. In part this is driven by communication technology, which is a vehicle for advertisements that target ever more niche markets, and more of them. This fracturing impacts us so that we are no longer in functional communities; families have gone from being extended to so-called nuclear families, and increasingly we are being socially atomized down to the “self.” Social and even intimate relationships are now transient. As a result everything is now contentious, where politically trying to reason with a t'Rump supporter is hopeless. I suppose they feel the same way.

      The situation within science is not dissimilar. Trying to publish a paper is hard when you get a reviewer responding with “why did you not cite so and so,” when those are himself or herself or their associates and proteges. The amount of information is enormous; one can't take it all in, and so one simply can't be entirely connected to the whole community, even within a specialized area. As a result the social context of physics is not as complete as it used to be.

      When it comes to human survival, this disconnectedness or fracturing has led to a social situation where you have people with entirely different worldviews. There are those who know the science and the evidence pointing to global warming and other damage to the biosphere, while there are others whose mental landscape is filled with conspiracy narratives of wicked scientists trying to demolish capitalism. There are likely to be few conversions between the committed in either of the two camps. The denialism is fueled by money-powered propaganda from corporate owners and leaders interested in keeping profits up, expenses down, and other costs externalized from the company, and maybe pushed into the future. In spite of it all, those guys have the media dollars.

      When it comes to our potential demise, it is interesting that Musk and Bezos have ideas of space colonization. Musk has a mad scheme with a giant steel spaceship that will burn up entering any atmosphere. I really wonder WTF that guy is thinking! Bezos made a statement on how rationing and conserving is bad, and of course it is not surprising that a man valued at $100 billion would think so. However, exponential growth is pernicious. If humans do colonize the solar system we will use up all surface resources in 500 years, and if grand ideas of Dyson spheres etc. are possible we will use it all up in 1000 years. In fact, if by some “unobtanium” field or new physics we can travel faster than light, we would consume the entire observable universe in only 3 million years or so. I have doubts or questions about even just colonizing the moon or making habitats out of near-Earth asteroids.
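      The arithmetic behind the exponential-growth point is easy to check (a sketch added here with illustrative numbers, not the comment's own figures: 2% annual growth and a factor of 10^30, roughly the jump from planetary to universe-scale resource use):

```python
import math

# Years for consumption growing at 2% per year to increase by 10^30.
growth_rate = 0.02  # illustrative annual growth rate
factor = 1e30       # illustrative scale-up factor

years = math.log(factor) / math.log(1 + growth_rate)
print(f"about {years:.0f} years")  # ~3500 years
```

      Even with generous assumptions, a steady few-percent growth rate eats through any finite stock in thousands, not millions, of years.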

      So maybe human failure or self-extermination is not just unavoidable, it might even be seen as the best option.

    17. Sabine Hossenfelder: You are absolutely right in this assessment, and I do not think it silly in the least.

      But you are mistaken: there is one mechanism by which intelligent people can solve global-scale problems like climate change, and that is the market mechanism.

      If, for example, a chemist invented a scalable and cheap way to produce solar power at half the cost of fossil fuels, the market would ensure it conquered the world, simply because it's cheap as shit and nobody in the world wants to pay more for energy than they have to. It would be adopted worldwide quicker than cell phones.

      No consensus needed, no treaties, no lifestyle changes demanded, and virtually zero "marketing" required; it would go viral and the problem would be solved.

      That might seem like magical thinking, but aluminum used to cost twice the price of gold; today it is 1/45,000th the price of gold, thanks to the minds of about three people.

    18. Castaldo,

      You are both right and wrong. You are right in saying that a free market could provide the necessary mechanism. You are wrong in thinking that knowing this in and by itself will solve the problem, because we still need to make a decision to actually use that market, and to enforce the constraints that are necessary to make it work: pricing externalities, say, and breaking up monopolies, which address two well-known market failures. And this just returns us to the problem I was talking about: we do not have an institution to make such a decision.

    19. Sabine: Perhaps "market" was the wrong choice of words, what I really mean is that plain human greed can be relied upon. If the new solution is cheaper than the old solution on price alone, greed will drive its adoption worldwide, even in totalitarian states.

      One person could invent the solar collector and converter. With some selfish interest in saving civilization she could preemptively publish the method on YouTube for free, and it can go more viral than a cat video, worldwide. Here are the detailed instructions on how to build one at home, here are the list of materials, here are the results, check it out for yourself. No secrets kept.

      Presuming it were real, it would be adopted in nearly every country in the world as fast as they could build it.

      I do mean greed, not the broader "self-interest". We do all kinds of stuff not in our self-interest, out of laziness, stupidity, or lack of impulse control. But GREED is something we can count on! Heck, it's what's killing us now.

    20. Lawrence,

      “So maybe human failure or self-extermination is not just unavoidable, it might even be seen as the best option.”
      This too would definitely solve the purely logical Halting Problem, again via limited resources.
      (To kill exponential growth, a certain threshold and/or a coupled non-linearity always helps ;-)

      In the unitary evolution of QM, to keep the endless proliferation of entanglement down – just cut the seaweed from time to time.
      And from time to time you need to set stuff back on mass shell to make contact with conservation laws.
      Noether’s theorem is a purely on-mass-shell theorem.

    21. Dr Castaldo,

      Well, in this case I am afraid you are simply wrong. "Greed" does not optimize resource management without a properly configured market, which is what I said above. If someone comes up with a win-win solution of the type you mention, that would of course be good news. However, even in this case you still have to take into account the costs of widely deploying new technology. The cost of infrastructure changes can be considerable (like it or not, our current infrastructure relies on petroleum and gas), and this means GREED (combined with strong future discounting) stands in the way of making a switch even if it looks to you like a win-win -- presumably because you don't discount the future all that much.

      In other words, you have different preferences from other people, a different way of assessing risk, other values. The question is: how do we arrive at a common strategy despite that?

    22. @ Castaldo, Hossenfelder and Reimond --- three birds with one post

      Reimond: The universe in its entirety evolves to increase entanglement. When my carefully prepared quantum system enters into decoherence the quantum phase, whether that be a superposition or an entanglement within the system, is given up to the environment with a larger reservoir of states. This quantum phase will evolve so reservoir states are in partial entanglement with the system and each other, and quickly the quantum phase is lost you your prepared system. Since quantum states describe probability amplitudes the collapse or outcome happens accordingly. Quantum phase and entanglements then diffuse into the ever wider world. There is no definable endpoint or equilibrium.

      This has some analogy with black hole thermodynamics. A black hole with mass M and temperature T = 1/(8πM) (in Planck units) is set into an environment with a temperature equal to that of the black hole. The black hole may absorb or emit a boson, so the mass M → M ± δM, which will correspondingly decrease or increase the temperature T → T ∓ δT and push the black hole out of thermal equilibrium with the background. The black hole will, by statistical probability, either grow endlessly (in this idealization) or decay away quantum mechanically. There is no equilibrium.
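      The instability described here follows from the temperature formula alone: the heat capacity dM/dT is negative. A minimal sketch in Planck units (function names are mine; T = 1/(8πM) is the standard Schwarzschild Hawking temperature):

```python
import math

def hawking_temperature(M):
    """Hawking temperature of a Schwarzschild black hole, Planck units."""
    return 1.0 / (8.0 * math.pi * M)

def heat_capacity(M):
    """C = dM/dT = -8*pi*M**2: negative, so equilibrium with a
    fixed-temperature bath is unstable, as described in the comment."""
    return -8.0 * math.pi * M**2

M = 10.0
# Absorbing mass makes the hole colder (so it absorbs ever more);
# emitting mass makes it hotter (so it evaporates ever faster).
assert hawking_temperature(M + 0.1) < hawking_temperature(M)
assert heat_capacity(M) < 0
```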

      This also connects with the initial state of the cosmos, which appears to have been in a low-entropy configuration. This probably reflects how quantum states were entangled in a way that was not in equilibrium, and the quantum phase then evolved to diffuse. This then has some bearing on the relationship between a quantum extremal surface and an event horizon, both for a black hole and for the cosmos in general.

      Castaldo, Hossenfelder: The social theory that greed leads to the best outcome was something that Bentham came up with at the end of the 18th century. It has been an anthem for those in favor of laissez-faire economics. This has been carried forward by a number of notables, and last century by Ayn Rand, who in my judgment was rather sociopathic. I am not particularly anti-capitalist, but I am disturbed by the loud drumfire of propaganda these days in favor of it. There is a constellation of think tanks around the Washington DC beltway -- the American Enterprise Institute, Cato Institute, Heritage Foundation, Federalist Society and others -- that take many millions of $$$ from corporate donors to beat the free-market drum. The pundit shows in the US have a continual lineup of characters with pseudo-academic titles such as “senior fellow at ...” and the mantra of capitalism is a constant noise.

    23. continued due to char-limit

      Greed does not solve all. That is very clear. It solves a lot for those with a lot of money who want to make more, so you get the Koch brothers (now singular, as one of them died recently) and the rest who hold billions of dollars. People with that wealth don't just own lots of property; they control power. The Constitution of the US even states that one purpose of Congress is to pass legislation for the regulation of commerce. The founders of this nation, who were certainly not disposed to socialistic ideas, would be appalled by the right-wing idea of unregulated commerce.

      I will not go into this at great length, but as I see it there are four “crafts” that have historically been employed to organize society and control people. These are statecraft, warcraft, priestcraft and tradecraft. The words are fairly self-explanatory, and apply in the widest context beyond ideology. Tradecraft can pertain to either unregulated capitalism or Marxist state socialism. Virtually all societies in history have some mixture of these. For instance, the Western medieval period was organized primarily around priestcraft, followed by warcraft. The earliest settlements of humans have identifiable shrines of some ritual or religious importance and were clearly towns organized by some political class. These systems have been with us all along, they have a range of ideological points or maybe follies, and they are all incomplete and with failures. They are also a sort of brain meme that captures and controls us. Few people are really free of them.

    24. Sabine,

      "I know that you (as pretty much everybody else) will almost certainly think this is silly"

      I don't think this is silly at all. Indeed, I am becoming more and more aware of this topic, and I think it is quite interesting to look at it in more detail, at least once one starts to become aware of it. It's not natural science alone, but also psychology -- anyway, it is surely interesting, and it could be helpful to study such mechanisms in more detail.

      If one does not trust the feedback from one's community at all and no longer listens to what others have to say, one might end up in isolation.

      That would be more woeful than being influenced by wrong ideas from time to time. In that case one could at least learn from one's own experience about which mechanisms make us so credulous.

      And I am quite sure there should be more public attention on such mechanisms, to raise awareness of topics like group dynamics, manipulation techniques, etc.

      It seems to me that there might be something ongoing which I would personally even call an "information war", and from my point of view it would be helpful to understand in more detail what is really going on there.

      A few days ago you told us, e.g., about this really weird and nasty phenomenon when working with MS Windows. Starting a while ago, I am now observing more and more such strange behaviour myself, not only when using MS Windows but also with other software after receiving the most recent updates. I am quite puzzled by such phenomena, and I am starting to wonder about the growing number of such strange coincidences.

    25. Sabine: Photovoltaics are about 20% efficient (thus far). Suppose a German colleague of yours invented a PV panel (not silicon) that was cheap and got 75% efficiency, and gave away the patent as a public service to the world.

      I think that would take off in green friendly countries, trying to convert to green energy.

      I think that companies all over the world, including America, would seize upon it and start providing solar panels for homes that would actually cut their electric bills in half.

      A few people in my neighborhood (many retired people) already own electric cars, and spend nothing on gasoline. If electricity were much cheaper than gas, many would buy electric cars. No infrastructure needed. A 100 km range is about 8x the most I drive in a day.

      Disruptive technology doesn't take that long to penetrate, especially if it can be copied. Yes, our infrastructure relies on petroleum, but that industry relies on vehicles, and those wear out and get replaced quickly; the oil industry can shrink to a niche market for aircraft fuel as quickly as it rose, about 100 years ago. FWIW, it rose because it was cheaper than all the alternatives!

      I think you underestimate the power of "cheap", like the bookstore corporations that spent five years doing nothing about Amazon because they assumed nobody would buy books online just to save a few bucks. Until it was too late for them.

      But, to each their own. I also think any attempt to inform people is a waste of time; if the problem gets solved (which is unlikely) it will be by an individual, or a handful of experts inventing a disruptive technology. I don't hold out much hope, but there is that!

    26. To address rising CO2, our economist colleagues have had a robust answer for a long time: a carbon tax. Of course, such a tax would need to be set at an appropriate level and properly regulated, but both of those are easy, in an academic sense. "Political will", however, is lacking.

      A much more difficult challenge is how to stop the world's oceans from acidifying to the extent that oceanic shellfish are wiped out ...

    27. All this talk is depressing, and I agree that the outlook is grim. But I disagree with Sabine, and others here, about the nature of the system that has produced the depressing situation.

      Sabine, and others here, have previously said that people (who have produced the situation we are now in) have no free will: i.e. they are ruled by laws of nature, and the system is superdeterministic. In this view, the future is already in effect closed: set in stone by the laws of nature.

      But with something like a QBist view of the physics of the world, where subjective choice is a genuine feature of the physics of the world, the future is genuinely open.

    28. Castaldo,

      "I think that would take off in green friendly countries, trying to convert to green energy."

      Depends on the production costs, depends on how durable it is, depends on how difficult it is to use. Depends on how many people believe that it will work, which depends on how many people with how much influence lobby against it. Depends on how much people are afraid of it. Why isn't everyone using nuclear power? And in any case, I don't believe that a miracle solution will be laid at our feet to save us.

      No, I don't think it'll be any individual making such a change. I have previously written about how damaging this belief in hero tales is to science and it's similarly damaging to our societies. It's not individuals that bring change. It's their interaction. The biggest problem of all is therefore to organize the interaction of individuals so that change can happen. Most people still do not even understand this is the problem. They say -- exactly like you do -- oh, but someone will come up with something and all will be fine. That's possible. But with the current ways we make collective decisions, it is extremely unlikely to happen.

    29. JeanTate,

      Carbon pricing, more generally, not necessarily taxes. I don't think many free-market enthusiasts are in favor of taxes, but they seem to be the easiest mechanism at hand. In any case, the point is, as you say, and as I said above, we have known this for a long time. Yet we didn't do it. Why? Try to find an answer to that question and maybe you will see why I say the problem is not climate change; the problem is that we are too dumb to come to any decision about what to do about it.

    30. Lorraine,

      To say the obvious, the laws of nature are what they are. If you don't like them and prefer to think they are something they are not, that will not change them.

    31. Lawrence,

      “... quickly the quantum phase is lost you your prepared system.”
      Yes, absolutely - this is decoherence.
      But decoherence does not solve the measurement problem.

      “... the collapse or outcome happens accordingly”
      Yes, absolutely – this is what we observe. Now you need to fill in the details, otherwise it is just a sentence of the same quality as the MWI or the Copenhagen interpretation.

      And with thermodynamics you are already on the right track.

    32. “Greed does not solve all.” - This is a bit like with decoherence ;-)

      “They are also sort of brain memes that capture us and control us.”
      Yes, absolutely. And we also should not forget the ‘instant gratification monkey’, who messes with all our rational decisions.

    33. And I like the message at the end of the above TED talk:
      Without a deadline, a cutoff, nothing real ever happens.

    34. Sabine,
      Are you saying that the future is already (in effect) closed: i.e. the future is already (in effect) set in stone by the laws of nature?

    35. Lorraine,

      Quantum mechanics has (for all we presently know) an irreducibly random element, so the answer to your question is no, the future is only set in stone up to quantum randomness.

    36. JeanTate,

      CO2 and acidification are coupled: CO2 + 2 H2O ⇆ H2CO3 + H2O ⇆ H3O(+) + HCO3(-)
      Our body also regulates its pH via (shallow/deep) breathing.
      The same mechanism that is good in animals is now bad for the ocean - it always depends on the environment.
      And sometimes solving one problem can solve others.
      Nature is not malicious - we are just sometimes too shortsighted in this highly non-linear, coupled, complex system.
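      The CO2/carbonic-acid equilibrium in the comment can be turned into a rough number. The sketch below is my own illustration; the Henry's-law and effective first-dissociation constants are standard 25 °C textbook values (assumptions, not from the comment):

```python
import math

def rain_pH(pCO2_atm, K_H=3.3e-2, K1=4.45e-7):
    """Approximate pH of pure water in equilibrium with CO2 at partial
    pressure pCO2_atm (in atm).
    Henry's law: [CO2(aq)] = K_H * pCO2, K_H ~ 3.3e-2 mol/(L*atm).
    CO2(aq) + 2 H2O <-> H3O+ + HCO3-, effective K1 ~ 4.45e-7,
    with [H3O+] ~ [HCO3-] in otherwise pure water."""
    co2_aq = K_H * pCO2_atm
    h3o = math.sqrt(K1 * co2_aq)
    return -math.log10(h3o)

# ~400 ppm CO2 gives the familiar pH ~5.6 of unpolluted rain; doubling
# CO2 pushes the pH further down -- the acidification mechanism above.
print(round(rain_pH(400e-6), 2))
```

Real seawater is buffered by carbonate, so the ocean shifts far less than this pure-water estimate, but in the same direction.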

    37. @Reimond & Lorraine

      I sort of garbled this with “quantum phase is lost you your prepared system”, when it should have read “quantum phase is lost to your prepared system”. The properties of this are very similar to what occurs with the thermodynamics of quantum fields in curved spacetime. The entropy of an event horizon is S = A/4ℓ_p^2 + quantum corrections. The quantum corrections mean the horizon is dynamical, and it will adjust itself to approach a quantum extremal surface (QES). In the case of a black hole it reduces in area until it approaches the QES. At that stage maybe there is some sort of phase transition. That transition would shift the mode of quantum decay of the black hole into another channel or mechanism, which might avoid the firewall problem. In the case of a cosmology, the cosmological horizon, at radius r = √(3/Λ), will increase in size as particles accelerate away and escape.

      For a cosmology the initial entropy was very small, far smaller than the horizon scale. Maybe elementary particles are all redundant, where from a path-integral perspective there is only one electron on a path that zig-zags back and forth in time. All we see of electrons and positrons are the large number of electrons around us and few positrons, and this is a measure of the initial entropy of the universe. However, as the universe accelerates apart, quantum fields and matter escape from us until a horizon region has at most one particle in it. Supermassive black holes take 10^{110} years to quantum evaporate, and neutron stars can exist a whopping 10^{10^{70}} years, because they only decay by fluctuating into a black hole. So the continual dissipation of available energy, and the unbounded entanglement spread of systems around us, are all part of this process.

      Lorraine, QBism does not tell you what outcome will happen in a measurement. It is a form of quantum Bayesian statistics that prescribes Bayesian updates for further measurements after a measurement and your record of it. We might think of this with the polarization of light. Consider a toy model of a photon in a cavity with perfectly reflecting mirrored walls. A polarizer introduced with a particular orientation will polarize the photon in a certain direction, assuming it does not absorb the photon. Now if we reintroduce the polarizer at an angle θ relative to the previous measurement, the probability is cos^2θ that the photon will pass through the polarizer with a new polarization. So for an orientation of θ = 30° = π/6 we have the probability cos^2(π/6) = 3/4 = 0.75. A straight classical estimate would be p ≤ 2/3, and this is a form of Bell's inequality. It also tells us the quantum Bayesian update is not classical. This means if you put on those Ray-Ban polarizing sunglasses you are performing lots of quantum measurements!
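      The cos^2θ rule is easy to check numerically. A minimal sketch (function names are mine): for θ = π/6 the pass probability is cos^2(π/6) = 3/4, and a Monte Carlo run of repeated measurements converges to it.

```python
import math
import random

def pass_probability(theta):
    """Probability a photon passes a polarizer rotated by theta radians
    relative to its current polarization (Malus's law for single photons)."""
    return math.cos(theta) ** 2

def simulate_passes(theta, n, seed=0):
    """Empirical pass frequency over n independent photons."""
    rng = random.Random(seed)
    p = pass_probability(theta)
    return sum(rng.random() < p for _ in range(n)) / n

theta = math.pi / 6            # 30 degrees
assert abs(pass_probability(theta) - 0.75) < 1e-12
# Frequency over many trials converges to cos^2(theta):
assert abs(simulate_passes(theta, 100_000) - 0.75) < 0.01
```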

      The relationship between free will and quantum mechanics was examined by Conway and Kochen. The existence of a causally unconstrained agent, i.e. a free will, is not consistent with quantum mechanics. QBism is a ψ-epistemic interpretation that puts as primary the subjective choice of an observer in their estimate of a Bayesian prior. This means there may be some subtle issue QBism has with the Conway-Kochen result. However, the observer making a set of measurements of a system is guided by some rational choices for a Bayesian prior, say based on the Bell inequalities.

    38. Lawrence Crowell: What I wrote is to invent something insanely valuable and altruistically give it away for free, make it public knowledge, in the same spirit that academics used to employ: we make our insights available for free, to improve the state of knowledge and humanity. Anybody who wants to use them can.

      A rational purpose for that approach is to minimize the barrier to usage and maximize the flow of knowledge and potential serial collaboration in order to maximize total knowledge, and any advantages that may result from this more complete understanding of a field.

      From the inventor's POV, what I wrote is obviously not greed in any monetary sense motivating them, they get nothing from it (except perhaps credit for saving the planet).

      Like it or not, nobody feels they can afford to do anything about climate change personally, or that it would matter, or that there would be any point if everybody else in the world isn't doing anything about it. Which means the only solution is to appeal to something besides altruism: one of their immediate self-interests.

      Energy is a commodity, meaning the primary selling point for it is price. We have many self-interests: pleasure, entertainment, cachet, saving time, saving money (on the flip side, avoiding pain). For many, the cachet or personal satisfaction of using renewable energy means they will pay extra for it, like a brand name.

      But for the vast majority throughout the world, all that truly matters is price; they want the cheapest pain-free alternative, period, end of story, to hell with hidden costs or future costs.

      They don't feel they can afford anything else (or that it would do any good if they did). They don't feel any pain for choosing the cheapest alternative. Imaginary future pain isn't real to them. If you find "greed" pejorative, call it efficiency; they want the most bang for their buck.

      If we cannot make renewable energy cheaper than fossil fuels, it will take a great deal of actual pain for people to give up fossil fuels, and by the time that happens, due to the carbon cycle, IMO it will be too late.

      If it were up to me I'd organize the planet by democratic socialist principles, period, which already work in several countries. Anything important to life, health and maximizing human potential should NOT be a for-profit enterprise. Profit- and greed-driven capitalism is ONLY fine for things we can live without; if JK Rowling can write novels and make a billion dollars, good for her, I don't begrudge her wealth.

      But if you want to fix climate change, there is only one way I see; make clean energy cheaper than dirty energy, with a low enough barrier to entry that small groups can adopt clean energy just to save money.

      As for Sabine's comment about heroic science -- "It's not individuals that bring change. It's their interaction" -- that is partially true. Ideas begin in one mind, but generally need other minds to be brought to fruition. That said, it is still reasonable for a handful of people, fewer than ten, to find a way to bring a major change to society.

      The idea of the telephone began in one mind, as did the idea of the internal combustion engine, as did several of Edison's ideas (the light bulb, the phonograph, movies). Heck, include Einstein's ideas, although those were not physical inventions.

      None of these required government collaboration to bring to the world, or public funding.

    39. Sabine,

      From your replies, it seems to me that you and others are complaining that the only two causal elements in the world,
      i.e. the laws of nature and quantum randomness,
      have not produced, and are never going to produce, the outcomes you hoped for.

      The good and bad decisions, greed, lack of political will, and climate change itself, are all just outcomes of the laws of nature and quantum randomness. Even your thoughts and complaints are just outcomes of laws of nature and quantum randomness.

      With your view of the way the world works, you can never escape from always being an outcome of the laws and randomness: you can never actually DO anything.

    40. LEP Experimenter,

      No, it is not coincidence; there might be a whole new science behind it, but not for public eyes. Here in India, I am slowly proving it to myself . . . then to the public, perhaps.

    41. Lorraine,

      Your summary is correct except for the final sentence. Of course we do something. We constantly do something. Everything constantly does something. That something is to execute the laws of nature.

    42. PS: It's not "my view." It's the way the universe works, according to our best current scientific knowledge.

    43. Sabine and Lawrence,

      Surely, when it comes to climate change, the only relevant issue is whether or not people can ever escape from always being outcomes of things beyond their control; that is, whether people are agents who can genuinely affect the world.

      Physicist John Wheeler’s view from many years ago, and today’s QBist views, present alternative views of the way the world works to the views you hold:

      “John Wheeler… answered, ‘…Each elementary quantum phenomenon is an elementary act of "fact creation." That is incontestable.’” [1]

      “Quantum theory, thus, is no mirror image of what the world is, for “there is no one way the world is;” it is “still in creation, still being hammered out”. Rather the theory should be seen as a “user’s manual” that any agent can adopt for better coping with the world external to him. The agent uses the manual to help guide his little part and participation in the world’s ongoing creation.” [2]

      I’m saying that there are other views of the way the world works that are not quite as gloomy as the outlooks you hold.

      1. “Bohr, Einstein, and the strange lesson of the quantum”, J. A. Wheeler, in Mind in Nature: Nobel Conference XVII, 1982. Quoted in “Notwithstanding Bohr, the Reasons for QBism”, Christopher A. Fuchs.

      2. “Notwithstanding Bohr, the Reasons for QBism”, Christopher A. Fuchs.

    44. @Gokul,

      in my view, this science is not new at all. It's about marketing, psychology, group dynamics, memetics, etc.

      The enormous CPU power, deep learning algorithms, social networks, etc., seem to allow influencing people nowadays in a quite automated way.

      Profits can then be made by destabilizing markets and short-selling shares, or e.g. by quite cheap investments in economically stricken countries.

    45. Quite apart from a QBist view of the way the world works, there are other reasons to think the world is not superdeterministic (i.e. the outlook is not quite as gloomy as might be thought).

      The universe is a system, and a system is not fully describable by a set of equations (representing relationships) because a system needs something to jump at least some of the numbers, so that the other numbers are changed by virtue of the relationships. The number jumps are “hidden” in the delta symbols, but what is jumping the numbers seems to me (as a former systems analyst) to be an issue that physics is not facing.

      Obviously, what is jumping the numbers in the system is a different type of problem to the type of problems that physics has been concerned with, which are about symbolically representing the relationships in the system. But without introducing any new elements to the system (e.g. a new element would be to formally introduce a random number generator), I would think it would have to be matter itself that is causing the numbers to jump: i.e. a reasonable hypothesis is that matter is what knows and jumps the numbers.

  7. Dear Sabine,

    let me start off with a disclaimer: I'm no physicist, I'm doing -- well -- computational chemistry, nothing particularly complicated. Numerically intense, though.

    So, what I wanted to say is that if in a dimensionless theory there are constants of order 1 and there is one particular constant of order 1e-15... it's a pretty peculiar situation. People doing particle physics numerically probably can't even use standard floating-point arithmetic, which is standard for a reason. No wonder they want an explanation for this; it's very unusual. Just for this reason it can be considered a problem, no?

    Maybe, you are aware of an example of another physical theory, with such... hmmm... fine-tuning?

    Best regards,

    1. longrange,

      You use words like "peculiar" and "unusual". Explain why you think this is so, other than that you think it's ugly.

      Of course there are examples of this kind. The cosmological constant. Or the theta parameter. Both are technically unnatural. They just are.

      There are furthermore various historical examples documenting that arguments from naturalness are unreliable, as are arguments from beauty more generally. (I go through this in my book; the link is in the sidebar.)

    2. Yes-yes, but I don't think it's ugly, I find it exciting, almost beautiful ((: And I don't care about naturalness. (And I read the book, btw)

      You see, if you do numerical things and, say, solve an equation numerically, you hardly ever care about convergence to 1e-10, but here I'm told, 'wait, dude, it's not even a warm-up'. I mean, one more order of magnitude and you can't even start doing anything in double arithmetic -- it's just below machine zero.
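      The point about double precision can be made concrete. A small sketch (Python here, but these are the same IEEE 754 doubles any numerics code uses): the spacing of representable doubles near 1.0 is about 2.2e-16, so a term of order 1e-15 added to an O(1) quantity retains only a digit or two.

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable double.
eps = sys.float_info.epsilon          # ~2.22e-16
assert 2.0e-16 < eps < 2.5e-16

# A constant of order 1e-15 is barely representable next to 1.0 ...
assert (1.0 + 1e-15) != 1.0
# ... and one or two orders of magnitude smaller, it is silently absorbed.
assert (1.0 + 1e-17) == 1.0
```

This is why such hierarchies usually get factored out analytically (or handled in extended precision) rather than carried through naive O(1) arithmetic.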

      I don't know, what theta parameter you mean, but cosmological constant, as far as I understand, is no less peculiar.

      I don't know if the analogy is correct, but imagine that driving a hundred kilometers required you to do it with atomic precision (or, since, if I understand correctly, the constant is required to eliminate a divergence in one of the integrals, you end up at infinity). You don't do this every day.

      You've probably seen such arguments many times and are already fed up with them, but for me that's pretty exciting.

    3. The ratio m_h/m_p ≈ 10^{-15} is a measure of the gravitational coupling, written as α_g ≈ (m_h/m_p)^2, which is on the order of 10^{-30}. This is a way I like to think of the gravitational constant as a dimensionless parameter. There are all these strange scalar particles in theoretical physics, where the only one known is the Higgs particle, or really field. Similarly there is the dilaton field, which is a form of the inflaton, and there are the axion particles. These two may be related to each other, and in general there is likely a theory of scalar fields in some unified scheme of phase transitions. For Haldane chains and SPT systems of edge states, something of this form is emerging. The inflaton field likely had a mass of 10^{-3} to 10^{-2} m_p, so if the gravitational constant is most generally defined according to a ratio of a scalar field mass and the Planck mass, we may then have, at near Planck energy, a less extreme value of the gravitational coupling.

      These extreme situations can be seen with black hole Hawking radiation. A black hole emits quanta that are a Planck unit of mass-energy. These quanta, say if they are massless such as photons, climb out of the gravity well and are redshifted to much lower energy. For large black holes the redshifted photons are very low energy. For a near-quantum black hole this redshift is less and the radiation emitted is more UV. For particles with mass, their production involves a mass gap of 2mc^2, thinking of pair creation, and this happens with smaller black holes at higher temperatures. With small-mass particles there is a “smoother” spectrum of radiation a black hole emits. However, if many particles were near the Planck mass, Hawking radiation would not occur as readily, for a black hole would have to emit, near the horizon, many Planck units of energy to generate a near-Planck-mass quantum far removed.

      Arkani-Hamed, Huang and O'Connell, in a recent paper, “Kerr Black Holes as Elementary Particles”, illustrate how a Kerr black hole has properties similar to elementary particles. We may then think of elementary particles as mass-gapped, low-energy quanta of black holes, or Planck mass units, emitted by large black holes. That these can escape to “∞” is because there is this enormous gap between the Planck mass and particle masses. If the Higgs field, the Goldstone scalars and the Higgs particle are related to the inflaton field, then this huge gap might be a manifestation of how the false vacuum of inflation transitioned to the physical vacuum with a tiny cosmological constant and low-density so-called dark energy.

    4. longrange,

      Your comment depresses me immensely. I wrote a whole book *and* a paper *and* a seemingly endless series of blogposts explaining why your "excitement" is not a scientific criterion, but evidently it's all for nothing.

    5. Well, I don't recall myself calling it a scientific criterion.

      Ok-ok, I got it! 'Nothing to see here, move along'. 'Nature does not have to be this or that' -- fine. Indeed, it doesn't.

      After all, a lot of scientific discoveries were quite shocking. And we've been getting used to them all along. We'll get used to these 15 orders of magnitude as we got used to Copenhagen. Oh wait, not the best example. We'll get used to them as we got used to the round Earth.


    6. There seem to be a few different perspectives here. Physicists say the math of some hypothesis is beautiful. I don't see it; well, I'm not a physicist, but to me it looks complicated, not simple. To me it looks like there's an elaborate edifice that the practitioners like to tinker in. Informatics people like to tinker with the abstractions of programming languages, and in our field I think it's unproductive.

      A better angle might be whether a theory is compact, meaning low redundancy. Field theories have high redundancy, as a simple particle has to be modelled in all of space. If there was some compact theory with just the necessary degrees of freedom (momenta, spins, etc) and some algebra between them it might be a better theory. Why? Because a more compact theory has a chance to be more robust or explain more observations.

      Yet another angle that I find compelling is Arkani-Hamed's anthropic reasoning. You find some fine-tuned parameter. Then you postulate a way of trying out many "rolls" of that parameter, either ante (inflation) or post (evolution). Good thinking, but the value of fine tuning is as a clue to uncover or understand such mechanisms, not as a problem to be solved.

    7. The fact that m_h/m_{pl} ~ 10^{-15} should be seen as something of value. This means that a black hole can emit a Planck unit of energy near its horizon, or equivalently lose a Planck area from its horizon, and this can emerge at a distance as a massive particle with a small mass. If the gravitational constant were close to unity or m_h/m_{pl} ~ 1, then a Planck area lost on a black hole could only manifest itself as a massless particle such as a photon. In order to emit a near Planck mass particle out to "infinity" would require the black hole lose a large number of Planck areas of horizon, which has a small probability.

      The weakness of gravitation is due to the small masses of elementary particles. We can think of the Higgs particle mass as a sort of standard. The mass-gap between light particles and the Planck mass then connects elementary particles to black holes. This might be seen as a "naturalness argument" by some.

      What does nature have to be? If there is any criterion it might just be that it is not nonsensical. Conservation laws are a way nature has of being in some way consistent or regular. By Noether's theorem and related results these conservation principles are tied to symmetries. Since we exist here, it also seems likely nature must also not be so regular that nothing complex can emerge. So we have some sort of tightrope we must walk.

      We may all have ideas on how this should work out, say by being guided by beauty or mathematical structure and so forth. The caveat though is that nobody should think any particular scheme is how nature must be.

    8. Dear Pavlos,

      What people call beautiful is always a matter of taste. Some theoretical physicists are in love with extremely abstract concepts on a very high level. It's just a thrill, and people doing this are mostly considered to be quite clever and smart.

      Of course, in this case one is really in danger of losing contact with reality. Nowadays it is quite difficult to derive any measurable predictions from theories extending the standard model. In particular when you get "lost in math". But instead of making real-world predictions, these people prefer to dream of a "mathematical universe".

      It's quite attractive to be engaged in this kind of stuff, and I can understand that it is a lot of fun. It's simply a dream. When it comes to deriving some real-world predictions, the math behind it becomes less beautiful and sometimes pretty nasty hard work.

      Luckily, Feynman invented a method to write down physical processes in such a way that physicists can read them like a comic strip. The math behind these comic strips is really tough and quite nasty.

      Feynman himself however was not engaged too much in pure abstraction, his way of thinking was in deep connection with understanding phenomena of the real world. He was really clever and smart.

      The explicit calculation of these diagrams is quite tough. The calculations just for the 3rd-order QCD matrix elements were a real bone-breaking job and required a large number of experts about a decade, with plenty of blood, sweat and tears.

      Doing this job is not such beautiful and enjoyable math... And the names of these heroes will not enter the history books of quantum physics at all. Phenomenology is also not very attractive for these smart guys.

      It seems to be quite reasonable to favour theories making no predictions, or to construct them in such a flexible manner that they can easily be adapted after their falsification by the current detector generation. ;-)

    9. Dear LEP, I did miss that you were joking, thanks for letting me know. It's too easy to misinterpret someone else's words, sorry.

  8. When I first found your blog, one of the early comments I read by you that kept me coming back was “nature doesn’t have to be anything”, in response to some argument about beauty, symmetry, or naturalness, etc. What I find astonishing and bewildering is how many physicists don’t seem to get it. As a physicist how does one not pursue answers based firmly on evidence and probability rather than their sense of order?

    1. Exactly. Isn't it amazing? They just don't get it. Even while they admit it's unscientific they will insist on talking nonsense while the whole world is watching. It's stunning to say the least.

    2. Dear Louis Tagliaferro,

      I couldn't be in deeper disagreement with your statement. In order to see that nature is indeed very beautiful, you simply have to open your eyes. It seems, however, to be the case that nature does not care too much about what theoretical physicists consider to be beautiful. It's not the real world, it's just a theory, nothing else ;-)

    3. This seems so toxic to me! No pity for guys such as you. I will always oppose it, to my last breath.

      By worshiping "logic" and sacrificing respect for others, you want benefits...

      > I wrote a whole book *and* a paper *and* a seemingly endless series of blogposts explaining why your "excitement" is not a scientific criterion, but evidently it's all for nothing.

      Of course! I mean, no, that is an ill perspective... You made your contribution to society. There will just always exist people who are not you.

      If you do not agree with that, you can go to an asylum and write walls of books. There, only you exist and nobody can resist your words. There you really solved many problems, and only you were solving them (and only your words make perfect sense within those four walls).

    4. I think you misunderstand my view and comment; beauty is subjective, I see it and feel it about nature all the time (my eyes are open). However, mine or a physicist’s subjective sense of what is beautiful has no business being part of our basis to discover how nature works. The two strongest reasons I can think of are:

      1. As Sabine has often said “nature doesn’t care what we think”. A look at history shows how we continually muck up science with what we think rather than purely pursue where evidence and probability point.

      2. We all will disagree on exactly what is beautiful in the same way we can disagree on the meaning of a sentence or phrase. It’s too subjective to even have a common foundational basis from which to build on. It would be like an important physics paper containing just text and no math, even the foremost physicists of our time would disagree on its interpretation.

    5. Pu,

      I entirely understand that there are and will always be people who cannot comprehend even the simplest statements that I make. I was pointing out that I am entirely aware of the futility of the attempt to talk reason to people. We'll all fucking die from our own fucking dumbness and you can see it right here, in the comments on this blog.

    6. But is it really Reason?

      (Consequentialists blame morality, but do they really offer something in return?)

      You give the impression that tons of good theories get thrown out just because of the beauty of others... as if a theory based on more evidence won't win over beautiful ones. So it is more a speculation ("maybe there are more evidence-based theories", "maybe we should think about other theories more", "maybe superdeterminism will give a testable and good model"), like communism or something.

      > We'll all fucking die from our own fucking dumbness and you can see it right here, in the comments on this blog

      I read that "humans will go extinct in the next few hundred years". Do you always say that? It seems like irony (like a hint that something is not so serious).

    7. I forgot to note my reply above was to the comment by "A former LEP experimentalist"

      Dear Sabine, my observation is that the cause of "our own fucking dumbness" is our feelings, coupled with how oblivious we are to their enormous influence on what we think, do, and say.

    8. Louis,

      I am really sorry. Please do not take my comment too seriously; it was simply meant as a joke. Indeed, I'm in complete agreement with you. But in my view the facts are so obvious that only humour may help. I thought you would get the point.

    9. Dear Sabine,

      It seems to me that you are sometimes a bit too serious in broadcasting the problems of fundamental physics research to the world. I can completely understand this, considering the reactions of your opponents.

      I appreciate your work very much, but obviously you must be quite tough to fight against such deeply rooted misconceptions. In my view, arguments alone will not help here.

      You already have a quite large audience and public attention. Most experts in the field have heard your arguments, but a few days later they may have forgotten what you said. Or maybe they simply do not want to discuss this kind of stuff with their colleagues. Probably one does not make many friends in doing so. People may, e.g., fear losing their job or their reputation. It may be better not to think about it too much... Groupthink... Business as usual.

      I don't have any solution for this problem. But I have the idea that it would be better to have an even much larger audience. If these topics get real public attention, a satisfying answer to your questions cannot be denied any more.

      Personally, however, I'm not at all an expert in getting public attention. Other people know much more about this. Maybe YouTube is an appropriate platform.

      But if you would like to do it this way, simply repeating your arguments with the utmost precision would probably not help too much. It's merely a matter of presentation; you will be more in the entertainment business then. However, some of your videos are already quite entertaining... Maybe you will also find some support from the media in order to get more attention.

      Personally, I wish you all the best.

    10. @Pu14unkiihooiV

      "I wrote a whole book *and* a paper *and* a seemingly endless series of blogposts explaining why your "excitement" is not a scientific criterion, but evidently it's all for nothing."

      To be honest, it is impossible to learn anything from your comments. You blame all the other readers of this blog for not understanding your "genius publications". If you want other people to learn from your ideas, it is your own responsibility to present them in such a way that these people might get interested in what you have to say.

    11. @A former LEP experimentalist

      I am sorry, I was quoting Sabine! So who are you talking to?

  9. The big challenge for physics in the upcoming decade is to not let the gap between experiment and theory widen too much.

  10. In an 11 April 2018 article at Forbes, "Five Years After The Higgs, What Else Has The LHC Found?", Ethan Siegel points out that only 2% of the data the LHC will eventually accumulate over its operational lifetime has so far been gathered. Additionally, he worries that since only 1/10,000th of the collision data is ever saved, new physics might have been overlooked, or could be overlooked in future runs of the LHC. So perhaps there's some hope that, without building an entirely new accelerator but with improved collision-debris filtering, something unexpected could show up.

    1. Sure, there is always hope. But 2% of data does not mean 2% of discovery potential. After the upgrade the LHC will make more collisions, allowing for better statistics, but at the same energy as before.

      Yes, that much of the data is discarded (necessarily) is a worry and one of the reasons I sleep badly at night.

    2. Sabine: It would be less troubling if we had at least a 1% unfiltered sampling, but I have read there is a filtering code that decides what is worth saving.

      That hardly seems conducive to discovering unknown physics. It's like deciding you're only going to read words with an "e" in them.

      I mean, you could filter out 99% of events to make the case for what we know should be interesting, but give the discards a simple 1% chance of being stored as unfiltered data, so new methods could be applied to searching them, even decades from now.
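The 1%-of-the-discards idea above is, in trigger jargon, a prescale. Here is a minimal sketch of how such a pass-through could sit behind the physics filter (the filter and event stream are invented for illustration):

```python
import random

def triage(event, physics_filter, prescale=100):
    """Keep events passing the physics filter; additionally store a
    random 1/prescale fraction of the rejects, unfiltered, so future
    analyses can search an unbiased sample of what was thrown away."""
    if physics_filter(event):
        return "selected"
    if random.randrange(prescale) == 0:
        return "prescaled"
    return "discarded"

# Toy run: a filter that keeps one event in a thousand.
random.seed(42)
counts = {"selected": 0, "prescaled": 0, "discarded": 0}
for event in range(100_000):
    counts[triage(event, lambda e: e % 1000 == 0)] += 1
print(counts)
```

With a prescale of 100 the extra storage cost is roughly 1% of the rejected events, which is the trade-off the comment describes.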

    3. From memory (I wasn't able to find primary sources just now, for some reason) there have been a number of dedicated searches for various kinds of non-SM particles in/around the LHC (planned, certainly; done and results published, I don't know). Also, at one time at least (engineering runs? later?), nearly complete datasets were captured (i.e. ~no filtering/triggering), though only for a truly tiny fraction of all (subsequent) collisions.

    4. Jean, I believe I may have found an article that addresses your recollections regarding planned searches for non-SM particles. This article was very enlightening to me with respect to the possibility of the LHC having missed long-lived particles. Here's the URL:

    5. Thanks David Schroeder.

      I have found several proposed detectors, mainly for "long-lived particles": FASER (and FASER2), MATHUSLA, ANUBIS, all with at least arXiv documents (e.g. one for MATHUSLA). I lack the specialist knowledge to make much sense of any of these, other than to say that at least some are trigger-less, and that while there's some (unavoidable?) theoretical dressing, serendipitous discoveries do not seem to be excluded.

      There was also a Zooniverse-based citizen science project, Higgs Hunters, which included an explicit search for anomalous tracks, and one was found! (see the refs and cites)

      No progress on general, "all collisions", full datadumps though :(

    6. Jean Tate: Thank you for the links, particularly the one about citizen scientists tasked with looking for anomalies in LHC collision images. That reminded me of an experience in the early 1990s when, examining bubble chamber photos of 2nd and 3rd generation hadron decays, I noticed what appeared to be missing momentum, in not just one photo but seven or eight (out of ten separate photos). These photos were in the book “Introduction to High Energy Physics” by Donald Perkins (Third Edition, 1987). For one of these photos (pg. 115), showing a neutral lambda (point B) decaying to a proton and negative pion, it seemed quite evident that momentum was missing. I even went to the trouble of examining the photo under high magnification, carefully measuring track curvatures. When I did the math it confirmed the missing momentum. But as this was a 2D image, any displacement of the track along the camera axis isn't taken into account, which would skew my measurement results.

      At that time I had hit upon a naively simple idea to explain the origin of the two extra generations of fundamental particles. Aware of hypothetical magnetic monopoles, where perhaps at very high energies early in the Universe’s evolution pair production of north and south magnetic monopoles might have occurred, I wondered if the same thing could happen with the weak force. The big difference is that the weak force ranges only to 10^-16 cm, versus the infinite range of the electromagnetic force. So it intuitively made sense that reining in the far-flung electric and magnetic lines of force from electrical charges, to topologically reconfigure into a north-south magnetic monopole pair, would take vastly more energy than for the finite-range weak force. And, in fact, magnetic monopoles, if they exist, are expected to have enormous masses of around 10^16 GeV. But the weak force, ranging only to 10^-16 cm, might require vastly less energy for topological reconfiguration, and thus weak magnetic monopoles would be produced copiously. So the thought was that the muon-neutrino and tau-neutrino were actually weak south and weak north monopoles.

      These weak magnetic ‘charges’ being the lowest mass particles carrying this type of charge would be part of the final, stable, decay products of higher generation hadronic decays, and be preserved (thus my excitement about apparent missing momentum in the neutral lambda decay). But this meant that an additional pair of neutrinos, (for spin and lepton number conservation) would need to emerge from the decay vertexes of higher generation hadrons. Alas, that was the downfall of this simple idea, as four particles emerging from a single vertex has a vanishingly small probability of occurring. So, unfortunately, it was a flash in the pan idea.
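For anyone wanting to redo the curvature-to-momentum arithmetic in the bubble-chamber anecdote above, the standard rule of thumb is p_T[GeV/c] ≈ 0.3 · q · B[T] · r[m]; a sketch (the field value below is illustrative, not the actual chamber's):

```python
def pt_from_curvature(b_tesla, radius_m, charge=1):
    """Transverse momentum (GeV/c) of a charged track bending with
    radius r in a magnetic field B: p_T = 0.3 * q * B * r."""
    return 0.3 * charge * b_tesla * radius_m

# A track curving with a ~1.67 m radius in a 2 T field carries ~1 GeV/c.
print(pt_from_curvature(2.0, 1.667))
```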

    7. @Sabine,

      "Sure, there is always hope. But 2% of data does not mean 2% of discovery potential. After the upgrade the LHC will make more collisions, allowing for better statistics, but at the same energy of before."

      I am not an expert on this topic, but personally I believe an energy upgrade would do much better than a luminosity upgrade, in particular since most of the incoming data are discarded by highly automated first-level triggering mechanisms.

      But perhaps the experimental results from an energy-upgraded LHC would call the need for a larger collider with 100 km circumference even more into question.

    8. Former,

      In my understanding the only way to increase the energy at this point is to replace the magnets, which is prohibitively costly and makes little sense.

    9. Do you really think so? There's a lot of development going on in HT superconductivity nowadays. Of course I don't know any details, but in my view it could be possible that even the amount of liquid He could be reduced when going up to higher energies.

    10. Not sure where we disagree. LHC magnets are by now more than a decade old, the technology is even older. You can probably squeeze out a little more energy if you replace them with something newer. But that would be very expensive. That's all I am saying. Can you do it? Yes. Should you do it? In a world with infinite resources, yes. In the real world, no.

    11. @former LEP exp: HT superconductors save on the cooling. I am not aware of increases in magnetic field intensity by an order of magnitude. If that could happen, so that the RF cavity delta electric potential can be boosted similarly, then what you say would be an option.

    12. Your argument is perfectly valid, of course. But I don't believe that the LHC program will be stopped anyway. An energy upgrade, however, would be quite a bit more realistic and a much better idea than building a new 100 km circumference collider crossing the Swiss mountains. In my personal belief, that would really be of utmost stupidity. Apart from that, working within an LHC collaboration is quite an important experience for students, in order to acquire knowledge about how to handle complex technologies and how to collaborate within almost globally spread scientific working groups.

    13. Flavor-changing neutral currents (FCNCs) are highly suppressed in particle interactions due to the GIM mechanism. FCNCs entail the transformation of a quark or lepton from one generation to another via a neutral-current interaction mediated by a Z boson. For example, the transition of a b quark (3rd generation) to an s quark (2nd generation) was observed 24 times in an experiment described in the link below. The probability of such an occurrence is about one part in a million. Some FCNCs could be mediated by supersymmetric particles, as mentioned on the FCNC page at Wikipedia, but these have not yet been observed. But searches will probably continue, at CERN and elsewhere, for such interactions.

      In the idea mentioned above on October 11, a flavor change would always entail preservation of the flavor embodied in the originating particle, via transfer to the same generation neutrino. Until last December that quarter century old idea seemed unworkable due to the need for a 4 particle decay from a single vertex. Then a sudden inspiration came along – why not break up the decay of the neutral lambda into two stages so the probability of occurrence of each decay becomes normal? I was giddy with excitement, maybe the idea could be salvaged after all! There was even a mechanism for explaining CP violation. Surely this was a slam dunk.

      Playing around with Feynman diagrams, it seemed possible that the neutral lambda could decay to a positively charged charmed lambda and a negative pion in the 1st stage, a W-minus transferring negative charge to the pion. However, the charmed lambda is double the mass of the neutral lambda (2281 MeV versus 1115.6 MeV). But that seemed to work in favor of the idea, as invoking the Heisenberg uncertainty principle meant the intermediate-state charmed lambda would be so short-lived that its track would be indiscernible in the bubble chamber photo. And, indeed, looking at the photo on page 115, no short-lived charged-particle track is in evidence radiating from the intersection of the pion and proton.

      In stage 2, the charmed lambda would decay to the final end products: a (1st generation) electron-neutrino and a (2nd generation) muon-antineutrino via a Z-zero, plus the proton. But elation quickly turned into disappointment, as it was immediately obvious that the 2nd stage constituted an FCNC, which is highly suppressed by the GIM mechanism. I tried to rationalize that problem away by assuming such interactions had never been seen due to the neutrinos leaving no tracks. But that, of course, ignores their momentum contribution, and it seemed totally outlandish that analysis of millions of collider events could have missed that. An even more fatal objection is that to make the spins work out, the electron-neutrino would have to be right-handed and the muon-antineutrino left-handed. Neither right-handed leptons nor left-handed antileptons have ever been seen.

      The take-away for us amateurs, concocting ‘light bulb’ ideas, is that the Standard Model is very tightly constrained and not easily modified. Nevertheless, I keep a pad and pen handy before entering the dream realm in case another ‘flash bulb’ idea percolates up to consciousness, and doesn’t immediately fade away with further scrutiny.

  11. With regard to current theoretical physics, you seem to be continuing the themes brought forward in your book "Lost in Math". I find myself summing them up as: the map is not the territory. Gottlob Frege's work was part of my undergraduate education. Human mathematics, one may conclude, is primarily about relationships that may or may not reflect an underlying reality.

  12. Does anyone think about why the so-called complete standard model is utterly useless at calculating much beyond the simplest interactions in the weak sector? What good is QCD when it is essentially useless for calculating anything? It is little more than a successful taxonomy. I think one of the main obstacles to progress is the tendency to forget that the standard model is almost all phenomenology. It is easy to see why: a proper gauge theory must necessarily have massless gauge bosons, so you have to throw out the defining principle on page 1 in order to make any progress.

    In other words, there has always been a great deal to do, but no one has been doing it, because they have been distracted by theories that are too big to fail (string theory, cosmology as physics, etc.)


    1. Beautiful gauge field symmetries have certainly led everyone astray.

    2. What about the Higgs mechanism to provide a mass?

    3. How is QCD useless? It might be difficult to see the link between its Lagrangian and hadronic resonances at low energies, but it's incredibly predictive at high energies! The QCD portion of final states at the LHC contains a wealth of information that we can use to extract various theory inputs, like coupling constants and structure functions. In order to see hints of new physics, we need better predictions for the measurements we make at colliders. The hold up right now is QCD precision, which is why it's still an active research area.

    4. Not so - the SM is, as a classification scheme, a gigantic success. The problem is that, as with the measurement problem, the scheme is not enough. The main principle is abandoned out of necessity on page 1. A new idea, symmetry breaking, must be introduced, and there is no justification for it other than to make the gauge bosons acquire mass and so limit the range of the interaction. That implies missing information in the theoretical structure of quantum field theory. These problems go back to the very beginning. Even a proper gauge theory, QED alone, requires heroic efforts to construct a viable theory that can produce measurable numbers, at the expense of mathematical ambiguity that has never been resolved. It was wishful thinking to believe that broken gauge invariance would be any less ambiguous. But without gauge invariance there are no conservation laws. Even if string theory had managed to cough out the SM from its hopeless miasma of numberless vacua, that issue would remain. No, the problem seems to be, again, the attempt to reduce matter to isolated particles, strings, or whatever in otherwise empty space. Gauge invariance is as of now the only way to make such a scheme lead to local conservation laws. And without those there is no physics.


    5. What gives mass to the Higgs?
      (Mustn't there be an infinite regress?)

      What if observable particles that interact with us and our detectors have intrinsic mass?

      What if particles *are* inertial mass?

    6. "How is QCD useless?"

      I thought I said why - you cannot calculate anything with it. Even the simplest problem, the magnetic moment of the neutron, something whose analog in QED (the magnetic moment of the electron) you can calculate to 8 or 10 decimal places, cannot be calculated with any accuracy. Actual problems of nuclear structure are not only impossible to calculate in practice, they are impossible to calculate in principle in the finite space offered by the surface of the Earth, assuming it were covered with computing machinery operating at full speed, all the time, for millennia. The perturbative approach is simply useless, and alternative approaches, e.g. Monte Carlo methods on a lattice, are inherently inaccurate. If fluid dynamics were as intractable as QCD, then weather prediction could do little more than identify clouds by their shape, in a few cases of limited practical interest. The entire structure of QCD depends on a calculation scheme that is impossible to use in the real world.


    7. It is of course incorrect that one "cannot calculate anything" with QCD. One can do calculations with QCD just fine when the coupling constant is small, which means at high energies.
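The point about the coupling being small at high energies can be made concrete with the standard one-loop running of α_s, normalized to α_s(M_Z) ≈ 0.118 (a sketch; higher-loop and quark-threshold effects are ignored):

```python
import math

def alpha_s(q_gev, alpha_mz=0.118, m_z=91.19, n_f=5):
    """One-loop running of the strong coupling: large at low scales Q,
    where perturbation theory fails, small at high scales, where it works."""
    b0 = (33 - 2 * n_f) / (12 * math.pi)
    return alpha_mz / (1 + b0 * alpha_mz * math.log(q_gev ** 2 / m_z ** 2))

print(alpha_s(2.0))     # ~0.26: perturbation theory getting shaky
print(alpha_s(1000.0))  # ~0.09: perturbative QCD works well
```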

    8. @drl

      You are quite right that precise calculations are not possible applying QCD only. So far, experimental measurements are far less precise than in the EW sector. Also, when applying lattice calculations it is not possible to understand, e.g., the decay of a proton into two hadrons.

      But on the other hand, it's quite useful for understanding what's going on. It is like always when applying a theory: all theories are still preliminary, until they are replaced by "The Final TOE".

  13. Hi Sabine, one thing I do not understand is why people expect dark matter particles but never mention a dark energy particle.
    According to basic principles, it should also be a particle, after all. No?


    1. akidbelle,

      No, a particle (regardless of what type) would not create the effect that dark energy has. The name is unfortunate.

  14. One survival-critical feature of biological intelligence is its ability to find and 'factor out' environmental issues that change so little over time that they can mostly be treated as constants when calculating survival strategies. This allows biological intelligences to conserve use of their limited neural circuitry processing speed and capacities.

    That's a fancy way of saying this: Fish are so used to swimming in water they don't even know it exists until some fisherwoman pulls them out of it.

    So, given that biology observation, here's a physics question: What new data is needed for fundamental physics to break out of the largely results-free impasse of the past half century?

    Perhaps we are already swimming in it. That is, perhaps the most critical clues to getting to the next level of physics understanding are in the data we already have in hand, but have psychologically given up on and accepted as 'givens' in no need of further explanation.

    A few minor examples: Why are quarks ⅓ the charge of electrons? Why are there three spatial dimensions? Why does only one chirality of the fermion respond to the weak interaction? Why are neutrinos the only fermions biased towards a single chirality? Why are there three generations of fermions, but only one of bosons? Why does the extremely simple and absurdly arbitrary assumption behind MOND lead to such astonishingly good curve fits for a wide range of (but not all) observational data for galaxy-scale objects? Why is space mostly flat? How does the direction of time emerge? (Actually, I think Cortês and Smolin are making some impressive progress on solving that last one.)
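The MOND remark in the list above can be unpacked a little: in the deep-MOND regime the circular velocity satisfies v^4 = G·M·a_0, independent of radius, which is why one arbitrary acceleration scale a_0 fits so many flat rotation curves. A sketch with an illustrative galaxy mass:

```python
import math

G = 6.674e-11   # m^3 kg^-1 s^-2
A0 = 1.2e-10    # MOND acceleration scale, m s^-2

def v_newton(mass_kg, r_m):
    """Newtonian circular velocity: falls off as 1/sqrt(r)."""
    return math.sqrt(G * mass_kg / r_m)

def v_deep_mond(mass_kg):
    """Deep-MOND circular velocity: v**4 = G*M*a0, flat in radius."""
    return (G * mass_kg * A0) ** 0.25

M_GALAXY = 1e41  # kg, roughly 5e10 solar masses (illustrative)
print(v_deep_mond(M_GALAXY) / 1000)  # ~168 km/s, at any large radius
```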

    A few more: Can anyone explain simply why spin ½ even exists? I mean… really? Being blasé about the existence of spin ½ is almost exactly like discovering you can play skip rope using only half of a rope, then saying "meh… I guess that's just the way skip ropes work." Also, once you explain what spin ½ is, why is it associated uniquely with the asymmetric wave functions characteristic of fermions? Why do quantum probability functions resemble fully classical chaos-driven probability functions? Is it possible that at some level they are chaos-driven probabilities, perhaps even in a way that is testable in constrained experiments? Why do Regge trajectories, the basis of the original and quite real hadron-scale concept of string vibrations, do so amazingly well at predicting hadron excitation masses? How are the strong and electric forces related, since SU(5) failed miserably? (I have my own ironic and genuinely amusing attempt at an answer for that one, related to my earlier rishon comments.)

    Wikipedia has more good examples. Notice that with the exception of SU(5) I have not included 'problems' that are more akin to whining about why various theories have had no predictive success. That's not stating the data, that's just biasing the solution strategy. It's like insisting back in the early 1900s that the photoelectric effect 'must' have a classical resolution.

    Did you read my list and think 'Well duh Terry, that doesn't help much! Lots of very smart people have worked on those problems for decades without any success!'?

    Wow, are there ever some good comebacks for that! I'll try to be constructive though: When simple problems appear unresolvable, it is likely that like fish in water we are swimming in assumptions held so deeply that we cannot see them. We can't help it; we're built that way as part of our fundamental survival mechanisms. The trick is to go back a level or two deeper, e.g. as Cortês and Smolin have done regarding the nature of time, and begin questioning the meanings even of some of our simplest words.

    In short, good physics requires fighting not just reality, but biology.

    Alternatively, you could just build a bigger particle accelerator and hope to find a particle that somehow resolves all of this. Good luck!

    1. "So, given that biology observation, here's a physics question: What new data is needed for fundamental physics to break out of the largely results-free impasse of the past half century?"

      Pretty clear - same as it ever was - a new paradigm, a re-imagining of the conflicting data, as happened twice in the 20th century with relativity and quantum theory, both of which were themselves re-imagined before settling down into a coherent form. The particles-on-a-background interaction scheme has been pushed to its limit. It's not the first time the prevailing ideas have run out of steam. A new way of thinking about matter and spacetime together is needed. String theory had that part right, but it was in fact an extremely conservative theory that did not go far enough.

      "Why are quarks ⅓ the charge of electrons? ..." etc.

      This sort of question is not one that gets answered directly - rather, answers to questions like this typically come as a side benefit of answering more direct and pertinent questions. When a phenomenological theory transitions to a tight theoretical structure (example - the deformable electron of Lorentz vs. the new spacetime geometry of Einstein and Minkowski), the answers to the side questions show up automatically, as if by magic. That is one of the great appeals of physics as an intellectual endeavor.

      "Can anyone explain simply why spin ½ even exists?"

      Because that's implied by the simplest possible representation of the group action behind relativity. Geometry = group action (Klein) and the simplest representations are its bones so to speak (Weyl).
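      For readers who want that sketch spelled out, here is the standard textbook version of the group-theory point (added for illustration; the connection back to the skip-rope analogy is my paraphrase):

```latex
% Rotations of 3-space form SO(3); its universal double cover is SU(2).
% A rotation by angle \theta about axis \hat{n} acts on a spinor as
U(\theta) = e^{-i\,\theta\,\hat{n}\cdot\boldsymbol{\sigma}/2},
\qquad U(2\pi) = -\mathbb{1}, \qquad U(4\pi) = +\mathbb{1}.
% A full turn gives -1; only a double turn returns to the identity --
% the "half a skip rope" behavior of spin 1/2.  Relativistically the same
% thing happens: SL(2,\mathbb{C}) double-covers the Lorentz group
% SO(3,1), and its smallest (fundamental) representations are the
% two-component Weyl spinors -- the "bones" in Weyl's sense.
```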


    2. Spin 1/2 particles exist because they emit and absorb ("couple to") spin 1 photons.

      This is actually Physics in a Nutshell!

    3. Physically the electron charge has to balance the proton charge. If the quark model for the proton had been developed before the electron/proton relationship had been established we'd almost certainly be saying "the charge on an electron is 3 times that of a down quark" and not "the charge on a down quark is 1/3 that of the electron". In other words it's simply historical that, rather arbitrarily, the charge on e- was taken as the basis of charge measurement.

    4. drl, I greatly enjoyed your response! I think your point about the need to ask "more direct and pertinent questions" is spot on, and can also be used to help focus the backtrack search strategy around this thought:

      What are the critical questions that were never asked?

      To ask a new question is to open a new search path. If the current search branch has gone persistently fractal, it is time to look for new questions that are close enough to the foundations to open up entirely new avenues of exploration.

      Minkowski and spin ½, yes! Isn't it fascinating that the fundamental features of certain particles are so intimately intertwined with the most fundamental properties of spacetime that Dirac was able to derive the existence of the positron simply by peering closely at how to factor his own energy-momentum generalization of Einstein's equation, that is (in one form) (E/c)^2=p^2+(mc)^2?

      I love that simple three-term quadratic form equation! At one level it is nothing more than the algebraic expression of a right triangle. Yet it is incredibly rich in e.g. Clifford algebra type details when expanded just a bit, e.g. into three mutually orthogonal momenta axes, and when factors such as the spin-to-vector equivalence that is found only in 3-spaces are taken into account.
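      For concreteness, the factorization alluded to above can be written out (standard textbook material, added here as illustration):

```latex
% Einstein's relation, covariantly:
p^\mu p_\mu = (E/c)^2 - \mathbf{p}^2 = (mc)^2 .
% Dirac looked for a linear factorization of the quadratic form,
p^\mu p_\mu - (mc)^2
  = \left(\gamma^\mu p_\mu + mc\right)\left(\gamma^\nu p_\nu - mc\right),
% which works only if the coefficients obey the Clifford-algebra relation
\{\gamma^\mu, \gamma^\nu\} = 2\,\eta^{\mu\nu}\,\mathbb{1}.
% The smallest matrices satisfying this are 4x4, so the solutions carry
% four components: two spin states each for the particle and for a
% partner of opposite charge -- the positron, read off the factorization.
```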

      One wonders: Does the energy-momentum relationship go deeper than just electrons and positrons being required by special relativity? For example, might spacetime somehow be connected to all of the fermions — the full set of spin ½ electrons, quarks, and neutrinos — so that with the right mathematical framework, each becomes the definitional complement of the other? A fun thought to explore, even if tricky! One would have to look very carefully at each of those three terms, just as Prince de Broglie peered intently at the similarly simple photon momentum equation until he realized that it also applied to matter. Could some small detail, perhaps another expansion, have been overlooked in the finer details of how the SR energy-momentum equation works? In Dirac's day, the only fermions known were electrons and protons. What might he have come up with if he had had the Standard Model in front of him when he began that trek?

      Regarding string theory, here's a thought: String vibrations are fine as long as they are both experimentally real and defined in ways that are simpler than the particles they are attempting to represent. That's because fully elaborated vibrating strings have so much basic and solid-state physics embedded in them that if you are not very careful, they become a bit like trying to explain the earth's magnetic field by postulating that the earth's core contains millions of slightly used DeLorean engines driving old-style telephone generators. It may well be possible to configure such a hypothesis to produce the right observational outcome, but at what cost? (Full disclosure: I stole that analogy from Bloom County!)

      Greg Field, I like the succinctness of your comment. It captures a relationship that seems to go deep indeed. I suspect that as we eventually understand the full scope of how spacetime and matter are related, the juxtaposition of spin ½ and spin 1 will turn out to be an invariant that is inherent in this relationship.

    5. Terry. re "String vibrations are fine as long as they are both experimentally real and defined in ways that are simpler than the particles they are attempting to represent.".
      Taking one example, would Dirac's proposal re the positron (1928) not 'be fine' until Anderson's experimental proof in 1932? True, generally speaking, experimental proof is required for a theory to be 'accepted', but lack of experimental proof does not in itself render a theory false (does the theory exist in a superposition of true and false until measured?). Conversely, experimental 'proof' does not in itself render a theory true.
      Regarding complexity. That an idea/theory/conjecture is far more complex than the object it attempts to describe does not render that theory invalid. Rather it depends on the knowledge, skill and experience of the observer. A clay brick for example is easy to describe (try google) to a general audience, most of whom would consider the detailed chemical/physical explanation of why and how the individual atoms making up the brick interact highly complex and (other than they trust the scientists to get it right) akin to science fiction. In general our understanding progresses from the basic to the complex and then to (hopefully) simpler underlying principles. At a given time a theory can fit into that progression. It may be more complex than previous ideas but still a useful progression on the path to understanding even if proven later to be incorrect.
      Incidentally your example of DeLorean engines etc. driving the earth's magnetic field could be an example - follow the idea through to how/why the engines generate a magnetic field, chuck in a few observations (heat) and a bit of common sense and bingo.

    6. Very thought-provoking thread. There seems to be an inside/outside perspective. Working physicists ask extremely precise technical questions like "is there a structure to the electron" or "are there any spin 3/2 particles". I think Sabine is bemoaning that this is a very pedestrian and institution-friendly approach that's reached diminishing returns.

      Then you have a few rogues who ask "where do probabilities and quantization come from" or "how does a gathering of degrees of freedom manifest as curvature of spacetime"? Or "how likely is this universe with us in it, vs. the counterfactuals". For someone outside the field these questions sound much more fulfilling, but if physics is your job you wouldn't know where to start.

      I'm not a fan of string theory or multiverse because it seems they're taking the insider attitude and extrapolating to the latter kind of questions. That causes an explosion of hypotheses. Might be better to take the outside perspective and say how can we rethink the universe to reproduce this limited experience that we observe. Another loss of objectivity may be the price, like we discover an algebra of degrees of freedom but since there's no copy operator at most 1/2 of them are ever accessible to us.

    7. RGT:

      I admit it, I did not expect to see a mild defense of my DeLorean engines explanation of the earth's magnetic field. The imagery was of course just a bit of silliness on my part. Every geophysicist knows, via Fourier analysis of the last four centuries of magnetic pole drift combined with data on the core-to-surface energy transfer paths implied by the global surface heat signatures you just mentioned, that earth's magnetic field generation relies entirely on Wankel rotary engines… ;)

      More seriously: You make a great point that "always simpler" is just a heuristic, not an absolute rule! Based on that excellent feedback, I will attempt to sharpen my assertion:

      Beware of explanations that unintentionally incorporate the desired result.

      This issue hits hard on artificial intelligence, where it can be uncannily easy to make a system seem smarter by a process that turns out to be nothing more than using a large database of prerecorded responses, stored in some cryptic and highly factored form. Amusingly, almost all current efforts at 'deep learning' (a gross misnomer, it's actually broad-data-set perception training) simply accept this definition of machine smartness as a given. It's why robot perception tends to fray very quickly when you move the bot out of its training sweet spot, e.g. by letting self-driving cars trained to recognize pedestrians drive freely on Halloween night. (You did not hear that from me, nothing to see here, move along now…)

      One (modestly) famous older example was a theorem proving program that seemed to come up with human-like results. Someone complained to the author that he had inadvertently incorporated the needed math into his software. The author took the critique seriously, examined his own software more carefully… and discovered the critique to be correct. (Side note: By way of contrast, some recent generations of formula-from-data and game-playing software are scarily good at rederiving physics formulae and advanced gaming strategies truly on their own. That's a somewhat different situation, however.)

      So, back to string theory: The problem is that if your strings 'feel' almost exactly like generalizations of real strings in terms of how they vibrate mathematically, one must examine carefully how much of physics is implied simply by asserting the existence of 'vibrating strings'. Inertia, kinetics, solid-state cohesion, spatial localization (of some sort), and even simply the existence of 'space' and 'objects' are all implied by such an idea. Some folks (I am in this camp) suspect that assuming not much more than the existence of Minkowski spacetime ends up implicitly defining most of physics, including quantum and particle physics; it just does so in ways we do not yet fully understand.

      If the Minkowski-implies-all-of-physics (Miaop? Argh, I actually kind of like that!…) hypothesis is even partially true, then using string-like objects to 'explain' particles and fields could prove to be the ultimate in recursive explanation: Little vibrating strings explain most of physics because you must first assume most of physics simply to create little vibrating strings. The enormous number of possible string universes tends to support this, since it suggests that Planck scale strings are far too powerful and far too generic, much like using mega-suites of DeLorean (oops, Wankel!) engines to explain the earth's magnetic field.

    8. Pavlos Papageorgiou:

      [First, a side note: There are two 'delta' baryons whose lowest energy states contain three identical quarks: one uses 3 down quarks, and the other 3 up quarks. Their having three identical quarks first tipped physicists off to the need for three types of charge in the strong force, one for each quark. Fascinating particles, the deltas.]

      You said:
      "For someone outside the field [of physics, questions on foundation concepts such as why quantization exists] sound much more fulfilling, but if physics is your job you wouldn't know where to start."

      There is no easy answer to your excellent point, but one method that can help bridge between 'operational' science and more radical innovations is this: Look for the absolute simplest mathematical structure that supports your experimental data. When you find it, ponder hard on why such a simple structure works.

      For example, when the brilliant applied x-ray diffraction work by Rosalind Franklin and her graduate student Raymond Gosling established the helical structure of DNA (there were some other folks involved, I forget their names), it explained a curiously persistent datum: In any sample of DNA, the quantities of adenine and thymine, and also of guanine and cytosine, were always identical.

      Had other folks pondered this one datum more seriously, they likely could have found the double helix model long before Gosling and Franklin practically shouted it at them. Equal amounts simply meant that the two pairs were forming bonded units within DNA. Recognizing this, then applying weak-bonding models to the pairs and attaching the results to DNA backbones, almost certainly would have led more quickly to the double helix.

      But how about a physics example?

      OK, here's a very simple one: What is the most succinct mathematical structure that correctly describes the relationship between electrical charge and color charge for all known particles?

      The usual strategy for unifying the electric and strong forces is to apply higher dimensional structures called symmetry groups, which can get quite complicated. These worked very well for each force individually, and for electroweak unification, but proved disappointing for an electric-strong unification.

      But notice that this is not quite what I asked. My question was: What is the simplest structure that describes charge data for all known particles? That is a far less audacious goal than leaping directly to force dynamics.

      As with DNA, a potentially important datum is that color charge never occurs without an accompanying electric charge. That is odd, since electric charge occurs without associated color charges. So what is the simplest mathematical description that captures this charge asymmetry in fermions?

      Unexpectedly, it is to demote the electric force into a non-orthogonal sum of three color-charge unit displacements. That in turn suggests it is difficult to unify strong and electric because they never really split. Every electric charge in the universe becomes a 'colorless' vector combination of three strong color charges in rgb 3-space (not xyz space). Electric seems unique only because there is an anisotropic behavior in color 3-space, rather than in the charges themselves, that allows only this combination to act at infinite range.
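      A tiny numerical sketch of this 'colorless diagonal' bookkeeping (the vertex assignments below are my own illustrative choices for one fermion generation, not a construction taken from the thread):

```python
# Each fermion is modeled as a vector of rgb color-charge displacements;
# its electric charge is the projection onto the colorless diagonal,
# Q = (r + g + b) / 3.  Components here are illustrative, in {-1, 0, +1}.
FERMIONS = {
    "electron":   (-1, -1, -1),
    "down quark": (-1,  0,  0),   # one of its three color states
    "up quark":   ( 1,  1,  0),   # one of its three color states
    "neutrino":   ( 0,  0,  0),
}

def electric_charge(rgb):
    """Project an rgb displacement onto the colorless (1,1,1) diagonal."""
    r, g, b = rgb
    return (r + g + b) / 3

for name, rgb in FERMIONS.items():
    print(f"{name:10s} rgb={rgb} Q={electric_charge(rgb):+.3f}")
# electron Q=-1.000, down quark Q=-0.333, up quark Q=+0.667,
# neutrino Q=+0.000
```

      In this toy picture only the colorless projection is visible at long range, which matches the asymmetry noted above: color charge never appears without electric charge, while colorless ('purely electric') combinations do occur.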

      Oddly, this structure is not a new idea. The earliest mention I've found of it is in the 1979 Glashow paper "The Future of Elementary Particle Physics", in which Glashow shows it as a mnemonic for recalling the fermions in a family. The same model also indirectly explains why rishons, which are really vector triplets, are so predictive.

      However, prioritizing the idea that electric charge is just a special case of color charge — moving it from the 'subconscious' of the collective intelligence of physics thought to its higher priority 'consciousness', where it can be more seriously analyzed — has to the best of my knowledge never happened.

    9. @Terry Bollinger

      I sat down to try to write a general reply to you wrt preons which are mentioned in a post of yours in the previous blog article. But your newer post has a specific point about electric charge in relation to colour charge. I can refer to preons in my answer to this point, instead of making a more general reply.

      I have an amateur preon model which you can easily find via google. For preons, in my model, there is a direct dependence of electric charge on colour charge. But electric charge is so important to everyone that I retain it as a key property even though I regard it as a 'false' front runner as far as fundamentality is concerned.

      An obvious difficulty in accepting the dependence of electric charge on colour charge is that for quarks the electric charge is not derived from the colour. A red quark can be either negatively charged or positively charged electrically. But this difficulty is only a problem if preons are rejected.

      A red preon (R) has a negative electrical charge and an aggregate of RGB will make a colour-neutral electric charge. The electric charge of a quark will depend on preon aggregates such as RGB which can dominate the determination of total charge irrespective of the single preon colours which determine the colour charge of the quark.

      IMO this is an example of symmetry breaking. There is a dependence of the electric charge of a preon on its colour which is not true for quarks because a quark is an aggregate of preons.

      Austin Fearnley

    10. Austin,

      Interesting comments, thanks. I'm reading your second paragraph as saying that electric charge is not fundamental, but is nonetheless pragmatically and classically important enough to be worth calling out as a separate entity. I certainly would not disagree with that.

      I looked at three or four of your preon papers. As a former IEEE editor, I hope you won't mind a writing suggestion: Try to focus on just one or two main ideas per paper, and explain those ideas as fully as you can. There's a powerful temptation when looking at a topic for years to attempt to put everything into each paper, but the sad truth is that if a paper doesn't have a strong, simple, stand-alone message, most readers will never get beyond the abstract. This critique is probably true for about 95% of all published papers, peer reviewed or not.

      Regarding your model, wow, that's a lot of preons! Another suggestion: When the number of parts (preons) begins to exceed the number of things you are building (fermions and bosons), you need to ask yourself a pointed question: Are you really factoring the data to get at the smaller components, or are you instead creating a generic language for describing a much larger set of possibilities than the data actually calls for? For example, the vibration modes of string theory are extraordinarily generic, powerful, and thus language-like, by which I mean they are capable of describing almost anything. Thus boasting that the Standard Model can be expressed in string theory ends up not being much more impactful than saying that the Standard Model can be expressed in Spanish.

      From my brief look at your papers I am unsure whether you are talking about particle preons, or something more generic that you are just calling preons.

      My own current analysis, which relies more on information and cognitive theory search heuristics and less on traditional physics and physics math heuristics, leads me to make two assertions about preon-like theories:

      (1) There is almost certainly at least one more layer of structure underlying the Standard Model, simply because the particles of the Standard Model share so many features (e.g. charge and spin) that are not by themselves stand-alone entities.

      (2) That next lower level of structure is almost certainly not particles, strings, or membranes, but some set of more abstract entities (such as spaces) that are topologically simpler and farther away from Platonic object ideals.

      For the 'signals' that seem to suggest preons, I would assess with high (~95%) probability that these signals actually indicate the need for a 3-space generalization of the original Gauss-Maxwell-Heaviside concept of 1-space electric charge displacements (D). Electric charge displacements become identical to the 'colorless' diagonals of the all-positive and all-negative octants of an rgb 3-space. This diagonal would also be the axis of a profound anisotropy, since for any given rgb vector the only component of that vector that will be visible at infinite xyz range is its dot product with the colorless axis. This anisotropy is to me the most fascinating part of the rgb data model, since it suggests a deep link between charge and classical spacetime.

      A more precise theory would dispense with electromagnetism entirely and replace it with anisotropic rgb theory, in which the photon would become the (no longer) 'forbidden' ninth gluon. There is a potential for significant matrix simplification. Analogously with the quantum-to-classical transition, anisotropic rgb theory would quickly transform into Maxwell's equations at scales above nucleons. It could also imply that gluons have photon-like features (e.g. polarizations) that are not typically factored into QCD calculations, and thus might reduce calculation costs.

      Anisotropic rgb space is not a particle theory, since for example it does not explain neutrinos. It's just the simplest model that fully captures the color and electric charges of actual particle data.

    11. Terry

      I appreciate your comments especially as you are a former IEEE editor. Thank you. My last paper did try to narrow down the topic to mainly covering leptoquarks and their suggested preon composition.

      Only one denial: you say that I have made a lot of preons whereas in fact I only made four. The trouble is that I cannot see preons as indivisible and so I speculated about hexarks, and then septarks .... But at the fifth, or preon, level there are only four preons. If you discount the sixth and seventh layers of entities then there are very few components.

      You made two assertions about preons.

      (1) The idea of stand-alone entities has influenced me also. I am not sure that the ability to 'stand alone' is important for any layer of particles. IMO the SUSY ideas are to bridge the gap between two stand-alone entities that are the fermions and bosons. The fermions and bosons are quite distinct in the Standard Model using different maths to underpin them. SUSY connects them with the same maths. But with a preon model there is already a strong connection between fermions and bosons as they are made using the same tool box of preons. But working with the idea of preons has naturally made me sympathetic to the idea of SUSY, even though I am dubious about proposed SUSY sparticles.

      This stand-alone issue for me is more relevant to a sixth layer of particles: hexarks. My hexarks are numerous and each one has its own polar attitude to every quality (eg charge and spin). With no sitting on the fence of neutrality. And no skipping of qualities. I was influenced by LL Thurstone's work on poles of the mind from the 1920s or thereabouts. I do not see any problem with the large number of hexarks as IMO it is not necessarily fewer and fewer components per layer.

      (2) With respect to your second assertion, I see something akin to your rgb (colour) space as replacing the Kaluza-Klein fifth dimension (of electric charge), so the KK theory would need to be written in 8 dimensions rather than 5 dimensions. The electric dimension could vanish as it IMO depends entirely on the colour dimensions.

      Your point about the nature of spacetime. IMO the ability to generate spacetime is embedded in the particles. I have written elsewhere about how Rasch analysis is used to generate hypothetical abstract spaces. The inputted binary data need only be based on pairs of particles with a preference for one particle over another. In a spacetime context the binary outcomes could be 1 for entangled and 0 for not-entangled, taken one pair of particles at a time. So I see a possible role for entanglement in creating space/time.

      Austin Fearnley

    12. Austin,

      Thank you for an interesting and well-written response, and for your apt correction of my misreading of the number of primary components in your preon hierarchy. This has been an interesting sub-thread!


  15. (REWRITE of earlier comment).
    What follows (if I'm allowed the attempt) shows the likeness between particle physics and fusion physics.

    Thermodynamics predicts that fusion ignition is not possible in a closed system; the first law foretells that fusion ignition will not occur.

    The standard model predicted the Higgs boson; the LHC experiment showed that prediction was correct.

    Fusion physicists develop new ignition criteria to predict the onset of fusion ignition at greater and greater energies, yet in every laboratory experiment ignition fails to occur. Thermodynamics was correct.

    Q: "The more difficult question is why did so many particle physicists think those were reasonable expectations, and why has not a single one of them told us what they have learned from their failed predictions?"

    A: Obfuscating results to preserve honor is preferable to acknowledging failure. It is more lucrative to change the criterion than to halt experimentation. (Keep the ball moving, jog the goalpost.)

    Q: "The reason that many particle physicists believed in these speculations is that they mistakenly thought the standard model has another problem which the existence of the Higgs would not fix."

    A: In the analogy from fusion physics, Thermodynamics is not the barrier; experimentalists thought they needed a bigger hammer.

    Fusion occurs when the conditions for quantum tunneling are met, at the scale at which fusion happens on the sun; yet in the laboratory, fusion experimentalists do not pursue experiments at the scale required for quantum tunneling.

    Q: "I am afraid that many of them still believe this. This supposed problem is that the standard model is not 'technically natural'."

    A: Likewise in fusion physics perhaps the Schrodinger equation is not "technically natural" enough.

    Physics has entered the era of hard questions, and there is more money to be made speculating theories than discovering truth, and that is why in fusion physics, as in particle physics, we finance bigger and bigger hammers.

    My apologies for hijacking this post about the LHC, but I couldn't wait until Dr. Hossenfelder's post about fusion.

  16. Maybe the future of particle physics could start with going back to neglected basic questions, like the configuration of the EM field of the electron.
    Assuming a perfect point charge would mean infinite energy of the electric field alone, which is nonsense, as we know an upper bound for this energy: 511 keV.

    Looking for experimental confirmation of this common belief in a point electron leads to Dehmelt's 1988 extrapolation from two points (proton and triton) using a parabola (!): fig. 8 in
    In contrast, looking at electron-positron scattering and extrapolating down to the rest-mass energy to remove Lorentz contraction, we get a size of ~2 fm, in agreement with the deformation of a point charge's electric field required so that its energy does not exceed 511 keV.
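    As a back-of-envelope check of the numbers in this comment (a standard classical-electrostatics estimate, added for illustration; the constants are CODATA/SI values):

```python
import math

# Field energy stored OUTSIDE radius r around a point charge e:
#   U(r) = e^2 / (8 * pi * eps0 * r)
# Demanding U(r) <= 511 keV (the electron rest energy) forces a minimum r.
E_CHARGE = 1.602176634e-19     # C, exact SI value
EPS0     = 8.8541878128e-12    # F/m, CODATA 2018
REST_J   = 511.0e3 * E_CHARGE  # 511 keV converted to joules

def field_energy_outside(r):
    """Electrostatic energy (J) outside radius r for a point charge e."""
    return E_CHARGE**2 / (8 * math.pi * EPS0 * r)

# Radius at which the outside-field energy alone already reaches 511 keV:
r_min = E_CHARGE**2 / (8 * math.pi * EPS0 * REST_J)
print(f"minimum radius ~ {r_min * 1e15:.2f} fm")           # ~1.41 fm

# The classical electron radius uses the convention U = mc^2 with a
# factor-2 difference in the energy bookkeeping:
r_classical = 2 * r_min
print(f"classical electron radius ~ {r_classical * 1e15:.2f} fm")  # ~2.82 fm
```

    Both numbers land in the same ~1-3 fm ballpark as the ~2 fm scattering estimate mentioned above, which is the point of the 511 keV bound.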

    Is the electron a perfect point? What experimental evidence supports this belief?
    Shouldn't the "future of particle physics" also focus on such neglected fundamental questions?

  17. People confuse mathematical point particles with physical point particles.

    Remember Zeno

  18. SUSY could be saved in a fully antimatter universe, right?
    So, why not suggest that there is at least one antimatter universe nearby at a distance, equipped with an anti-chiral oscillating dark-energy vacuum system able to hold stability for antimatter?

  19. Just had a read of the comments on the UTube page/vid ... so many comments, so little content. Perhaps the most amusing (or scary?) are the ones promoting the anti-science ideas of "the Electric Universe" ...

    1. JeanTate, on the contrary. You may not agree with the hypothesis that electromagnetic fields and cosmic-scale plasmas constitute a primary factor in the large scale structure of the universe, but it is definitely science as such. Indeed the rejection out of hand of alternative hypotheses because they are not mainstream (to use an abused word) is what is actually, in my opinion, "anti-science". There are so many holes in the current cosmological paradigm that the only correct reaction of a healthy science is to consider alternatives. The current era of stagnation, groupthink, and heel-digging into infertile ground began around the same time that Halton Arp's very interesting ideas about intrinsic redshift were simply ignored and the man disparaged and ostracized simply for holding heterodox ideas, much as happened to Galileo.

      (I have not seen the video comments so you may be referring to something else.)


    2. drl, the "Electric Universe" includes loads of Velikovsky nonsense, such as a giant lightning bolt created the Grand Canyon and produced a giant comet, that all comets are solid rock, the Sun is powered by giant inter-galactic currents. The comments over in UTube explicitly avow this.

      There's some fringe science under the heading Plasma Cosmology, with A. Peratt being the most well-known current proponent.

      Halton Arp's intrinsic redshift ideas were most certainly NOT ignored! At the beginning - 1960s - they were treated very seriously. If you think any of those ideas are consistent with robust astronomical observations, from the 1970s onward, by all means write a paper and submit it for publication.

      Ditto any of the Plasma Cosmology ideas.

      Alternative ideas are all well and good, but they have to pass experimental/observational muster, and none of what you mention does.

    3. @JeanTate

      "..a giant lightning bolt created the Grand Canyon and produced a giant comet, that all comets are solid rock, the Sun is powered by giant inter-galactic currents. The comments over in UTube explicitly avow this."

      OMG :) No, I was referring to Peratt et al., not this drivel. It may be wrong but at least it's real science. And because EM is conformally invariant in the vacuum, the idea of cosmic-scale EM fields is not without interest.

      Martin Lopez-Corredoira has done very interesting, traditional astrophysics work on objects evincing discordant redshift. Some of his cases are so striking that the chance-alignment explanation, invoked repeatedly, beggars belief. Perhaps the most famous cases are NGC 4319 and NGC 7603. It is next to impossible to get telescope time to explicitly investigate these ideas. Physics is not the only area plagued by committee-fueled groupthink.


    4. Anthony Peratt's work is a panegyric for the plasma universe, the brainchild of Arp and Alfvén. Collections of charges, whether in cold neutral matter or in plasmas, tend to even out, and the long-range forces of plasmas are very small. This charge saturation, if we might call it that, is what nixes any of these ideas.

      The plasma universe is about as wrong as the Chaldean cosmos the Bible refers to. I am also not sure why, but my experience with plasma physicists is that they are a strange bunch and tend to be very contentious. Arp and Alfvén were no exception.

    5. @drl: Peratt has published few astrophysics papers, none recent (AFAIK). I think the best you could say is that later observations render his ideas moot; less charitably, his understanding of galaxies is (was) woeful. If you think you can make a Plasma Cosmology case, based on good observational data, please go ahead!

      If you remove Velikovsky and the "Electric Universe", there's no connection between Arp's discordant redshifts and "EM". So, apples and oranges.

      Are you familiar with the Zooniverse-based citizen science project, Galaxy Zoo (GZ; there are many iterations and variants)? If you think some of López-Corredoira's cases beggar belief, your mind will be totally blown by some of what is posted in the GZ fora. So, as someone who has oohed and aahed when checking out GZ's weird and wonderfuls, I am seriously underwhelmed by all of ML-C's cases.

      The best counter to "next to impossible to get telescope time" is, perhaps, the fact that these GZ citizen scientists got quite a lot of telescope time, on the Hubble; check one example out here:

    6. @drl: one more thing ... a long time ago, Arp made a suggestion re a program of (optical) astronomy observations, data from which he thought would clearly show strong evidence for his intrinsic redshift ideas.

      Fast forward a half century or so and today there are huge databases of astronomical observations, available for free, to anyone. Collectively they vastly surpass Arp's dream, in part because they cover much more than just the optical. Yet, with this vast wealth of data, no one has been able - yet - to show that Arp's intrinsic redshift ideas have legs.

      Maybe you'd like to try?

    7. @Lawrence Crowell: Alfvén got a Nobel for his plasma physics work, the (or a) foundation for today's space physics (the in situ study of the solar wind, comets, planetary magnetospheres, etc), and plasma astrophysics (e.g. trying to understand the jets and lobes associated with Active Galactic Nuclei).

      That said, Alfvén's forays into astrophysics were spectacularly unsuccessful, beyond the solar system anyway.

      AFAIK, Arp was never really interested in theory; back in the day he was a very good observational astronomer though.

    8. @JeanTate

      The idea I found compelling, and really the only one that interested me, was that the large-scale filamentary structures of intergalactic space might be associated with cosmic-scale magnetic fields. EM is conformally invariant, which implies that phenomena on one scale can be replicated at all scales. This instantly gives a natural explanation for those filaments. Of course it is rather difficult to send a probe out into intergalactic space to do measurements, but the IBEX ribbon was totally unsuspected, right? That, I would say, is a precedent.


    9. @JeanTate yes I am trying already - have been for a while :)


    10. @JeanTate: I remember Alfvén waves from second-semester E&M. I had forgotten that Arp was an astronomer and not a plasma maven. Alfvén was a contributor to space plasma physics in the early space age and was a Nobel laureate. He missed the boat on cosmology, though.

    11. @drl Good luck. I look forward to reading your paper(s) when it/they appear in arXiv's astro-ph section.

    12. More on the Electric Universe fans: my impression is that there's a pattern of behavior being repeated: some fans take the opportunity of UTube videos like this one to promote their ideas, primarily by "link spam" (i.e. lots of links to other UTube videos promoting the Electric Universe).

      It's like free advertising or free marketing: even if the individuals posting the links are not willfully engaged in marketing, the result is largely the same. So far, no such activity here, but perhaps it's just a matter of time ...

  20. Why String Theory is Wrong / PBS Space Time

    "So, where did things gone worng?.. in a sense, they were never really right"

    Maybe we have a chance! and it is not so bad

  21. I have an old friend who was one of the many authors writing about the LHC becoming a "black hole factory" ... he doesn't like to talk much about this anymore, and I don't blame him. I think when we are grasping at straws like those, it shows that the framework of foundational physics is unstable and ripe for revolution. We can only hope that the answers come sooner rather than later.

  22. Why should a tiny black hole not "eat" all the matter in its surroundings? If the matter does not move to the black hole, the black hole will just move to the matter, since there is always an asymmetry in the distribution. Fortunately, black holes and unicorns have much in common.

    1. weristdas:

      Please avoid off-topic questions like this. The gravitational attraction of a tiny black hole is also tiny. Consequently it only "eats" matter if it happens to directly hit that matter. The probability for this to happen is also tiny because the black hole is tiny. Quantitatively, the cross-section is even smaller than for neutrinos. Now add to this that these tiny black holes are unstable and decay, due to Hawking radiation, within less than 10^-20 seconds.
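The point about the negligible gravitational pull is easy to sanity-check. The sketch below is my own back-of-envelope estimate, not part of the original exchange: it takes a hypothetical black hole of mass ~10 TeV/c^2 (roughly the largest mass the LHC could conceivably produce) and evaluates Newton's a = GM/r^2 at a nuclear-scale distance.

```python
# Order-of-magnitude sketch (illustrative numbers, not from the thread):
# how weak is the gravity of a hypothetical LHC-scale black hole?

G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # electron volt, J

M = 1e13 * eV / c**2   # 10 TeV/c^2 expressed in kg (~1e-23 kg)
r = 1e-15              # m, roughly a proton radius

a = G * M / r**2       # Newtonian acceleration at distance r
print(f"M ~ {M:.1e} kg, a at r = 1 fm: {a:.1e} m/s^2")
```

Even at nuclear distances the pull comes out around 10^-3 m/s^2, utterly negligible compared to the electromagnetic and nuclear forces acting at that scale, which is why such a black hole only "eats" what it directly hits.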

    2. Hello Sabine,

      How would one detect a black hole in one of the LHC detectors? Is there any particular signature?

    3. Sabine Hossenfelder7:45 AM, October 12, 2019
      "Consequently it only "eats" matter if it happens to directly hit that matter. "

      I thought it was not a matter of chance but rather of gravitational forces. Is that idea naive?

    4. weristdas,

      Please read my full answer. The gravitational pull of a tiny black hole is tiny. It's entirely negligible.

    5. A black hole the size of an atom, about 10^{-8} cm, has around 10^{25} Planck units of mass, which is about 10^{17} kg. Now just compute the acceleration with Newton's equation, nothing fancy needed: a = GM/r^2. For r = 1 m this acceleration is around 10^6 m/s^2, so you would not want to be close to it. The temperature from Hawking radiation would be around T ≈ 10^6 K, which means Hawking radiation would be streaming out of it; a nasty thing to be near. In Wald's little book on the thermodynamics and QFT of black holes he writes about situations where the inflow and outflow of matter in some medium balance. For a black hole the size of a nucleon, 10^{-13} cm, the acceleration at 1 m would be some tens of m/s^2 and the Hawking temperature about 10^{11} K.

      For even smaller black holes the curvature close to the horizon becomes enormous, but the extent of that curvature is limited. So a Planck-mass black hole has negligible gravity even on the scale of a nucleus, and it is probably so unstable it would spontaneously erupt in about 10^9 J of energy, roughly the energy content of 50 liters of petrol. This also illustrates a problem with detecting the particle physics of quantum black holes, for they would produce a large number of daughter products that would be difficult to track. The data management problem is almost intractable.
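These back-of-envelope numbers are straightforward to check. The following sketch (my own verification, using only the standard formulas: the Schwarzschild relation M = r_s c^2 / 2G, Newtonian a = GM/r^2, and the Hawking temperature T = ħc^3 / 8πGMk_B, with the radii taken from the comment) reproduces the estimates to within an order of magnitude.

```python
# Order-of-magnitude check of the atom-sized and nucleon-sized
# black-hole estimates, using standard textbook formulas.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J s
k_B = 1.381e-23    # J/K

def mass_from_radius(r_s):
    """Mass (kg) of a black hole with Schwarzschild radius r_s (m)."""
    return r_s * c**2 / (2 * G)

def acceleration_at(M, r):
    """Newtonian acceleration (m/s^2) at distance r (m) from mass M (kg)."""
    return G * M / r**2

def hawking_temperature(M):
    """Hawking temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

# Atom-sized black hole, r_s ~ 1e-10 m:
M_atom = mass_from_radius(1e-10)
print(f"atom-sized:    M ~ {M_atom:.1e} kg, "
      f"a(1 m) ~ {acceleration_at(M_atom, 1.0):.1e} m/s^2, "
      f"T ~ {hawking_temperature(M_atom):.1e} K")

# Nucleon-sized black hole, r_s ~ 1e-15 m:
M_nuc = mass_from_radius(1e-15)
print(f"nucleon-sized: M ~ {M_nuc:.1e} kg, "
      f"a(1 m) ~ {acceleration_at(M_nuc, 1.0):.1e} m/s^2, "
      f"T ~ {hawking_temperature(M_nuc):.1e} K")
```

The atom-sized case comes out at a few times 10^16 kg, a few times 10^6 m/s^2 at one meter, and a few times 10^6 K; the nucleon-sized case at tens of m/s^2 and roughly 10^11 K.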

  23. I was just reading an old SciAm article from 2008 filled with excited prose about all the wonderful new physics the upcoming LHC would definitely find.

    And the website "has the large hadron collider destroyed the world yet dot com" is still around to protect us! :D (It actually has JavaScript code that tests to see.)

  24. OK, so the new collider may not be the best use of money. But your objection is based on an incorrect premise, which is that if the money is not spent on the collider, it will be spent in other areas of physics. That's just not true. The question is not "Where should the money be spent in physics?" Rather, the question is "Should money be spent on the new collider, or should we give tax breaks to the rich, or should we build a couple of submarines?" That is the reality. If you want money to be spent in areas of physics other than the collider, you have to make proposals that resonate with the government and the public. To be against the collider without suggesting alternatives is, in pragmatic terms, just an anti-science position, and no wonder your colleagues are annoyed with you.

  25. OK, to be honest, she has repeatedly said something on this, like dark matter research with cosmological/astrophysical observations.

    The major point of resistance, which has very strong argumentation behind it, is losing the experts in the field and some regressive fate for the field itself. Is that training more valuable than its very existence, or has the majority problem won out again?

    Anyway, there are more experts than ever, with fantastic imaginations indeed, wherever they are.

    So what is the real problem actually hiding in the dark corner, prior to the output, which depends on the research strategy (its accumulative state)?

    In that place we might see the money flow go mostly into the "IT" sector rather than physical science, in a global fashion.

    Creative people must be drained in a way, through a sense of reality/functionality, as a byproduct of the "string wars".

  26. And of course, isn't it vibrant enough: the quantum gravitational aspects of the mechanical revolution, like the electronic one!


