Friday, April 05, 2019

Does the world need a larger particle collider? [video]

Another attempt to explain myself. Transcript below.


I know you all wanted me to say something about the question of whether or not to build a new particle collider, one that is larger than even the Large Hadron Collider. And your wish is my command, so here we go.

There seem to be a lot of people who think I’m an enemy of particle physics. Most of those people happen to be particle physicists. This is silly, of course. I am not against particle physics, or against particle colliders in general. In fact, until recently, I was in favor of building a larger collider.

Here is what I wrote a year ago:
“I too would like to see a next larger particle collider, but not if it takes lies to trick taxpayers into giving us money. More is at stake here than the employment of some thousand particle physicists. If we tolerate fabricated arguments in the scientific literature just because the conclusions suit us, we demonstrate how easy it is for scientists to cheat.

Fact is, we presently have no evidence – neither experimental nor theoretical evidence – that a next larger collider would find new particles.”
And still in December I wrote:
“I am not opposed to building a larger collider. Particle colliders that reach higher energies than we probed before are the cleanest and most reliable way to search for new physics. But I am strongly opposed to misleading the public about the prospects of such costly experiments. We presently have no reliable prediction for new physics at any energy below the Planck energy. A next larger collider may find nothing new. That may be depressing, but it’s true.”
Before I tell you why I changed my mind, I want to tell you what’s great about high energy particle physics, why I worked in that field for some while, and why, until recently I was in favor of building that larger collider.

Particle colliders are really the logical continuation of microscopes: you build them to see small structures. Think of a light microscope: The higher the energy of the light, the shorter its wavelength, and the shorter the wavelength, the better the resolution of small structures. This is why you get better resolution with microscopes that use X-rays than with microscopes that use visible light.

Now, quantum mechanics tells us that particles have wavelengths too, and for particles higher energy also means better resolution. Physicists started this with electron microscopes, and it continues today with particle colliders.

So that’s why we build particle colliders that reach higher and higher energies, because that allows us to test what happens at shorter and shorter distances. The Large Hadron Collider currently probes distances of about one thousandth of the diameter of a proton.
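To put rough numbers on this: the distance scale a collision can resolve goes inversely with energy, roughly λ ≈ ħc/E. Here is a back-of-the-envelope sketch; the conversion constant is standard, but the energies and proton diameter are illustrative round numbers, and since the effective parton-level energy at a hadron collider is below the nominal beam energy, this naive estimate is optimistic:

```python
# Rough de Broglie estimate lambda = hbar*c / E for the distance scale
# a collision of energy E can resolve. Order of magnitude only.
HBAR_C_EV_M = 197.327e6 * 1e-15  # hbar*c = 197.327 MeV*fm, in eV*m

def probed_distance_m(energy_ev: float) -> float:
    """Order-of-magnitude distance scale resolved at collision energy E (in eV)."""
    return HBAR_C_EV_M / energy_ev

proton_diameter_m = 1.7e-15  # approximate

for label, e in [("LEP-scale (~0.2 TeV)", 0.2e12), ("LHC-scale (~13 TeV)", 13e12)]:
    d = probed_distance_m(e)
    print(f"{label}: ~{d:.1e} m, ~{d / proton_diameter_m:.0e} proton diameters")
```

Higher energy means a shorter wavelength, hence access to shorter distances.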

Now, if probing short distances is what you want to do, then particle colliders are presently the cleanest way to do this. There are other ways, but they have disadvantages.

The first alternative is cosmic rays. Cosmic rays are particles that come from outer space at high speed, which means if they hit atoms in the upper atmosphere, that collision happens at high energy.

Most of the cosmic rays are at low energies, but every once in a while one comes at high energy. The highest collision energies still slightly exceed those tested at the LHC.

But it is difficult to learn much from cosmic rays. To begin with, the highly energetic ones are rare, and happen far less frequently than the collisions you can produce with an accelerator. Few collisions mean poor statistics, which means limited information.
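The statistics point can be made concrete with Poisson counting: the relative uncertainty on a count of N events is 1/√N, so a handful of ultra-high-energy cosmic-ray events carries a far larger uncertainty than the millions of collisions an accelerator can produce. The event counts below are illustrative, not actual experimental rates:

```python
import math

def relative_uncertainty(n_events: int) -> float:
    """Poisson relative uncertainty sqrt(N)/N = 1/sqrt(N) on a count of N events."""
    return 1.0 / math.sqrt(n_events)

# Illustrative only: a collider can collect millions of events at a given
# energy, while the highest-energy cosmic-ray observations yield a handful.
for label, n in [("collider sample", 1_000_000), ("cosmic-ray sample", 10)]:
    print(f"{label}: N={n}, relative uncertainty ~{relative_uncertainty(n):.1%}")
```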

And there are other problems, for example we don’t know what the incoming particle is to begin with. Astrophysicists currently think it’s a combination of protons and light atomic nuclei, but really they don’t know for sure.

Another problem with cosmic rays is that the collisions do not happen in vacuum. Instead, the first collision creates a lot of secondary particles which collide again with other atoms and so on. This gives rise to what is known as a cosmic ray shower. This whole process has to be modeled on a computer and that again brings in uncertainty.

Then the final problem with cosmic rays is that you cannot cover the whole surface of the planet to catch the particles that were created. So you cover some part of it and extrapolate from there. Again, this adds uncertainty to the results.

With a particle collider, in contrast, you know what is colliding and you can build detectors directly around the collision region. That will still not capture all particles that are created, especially not in the beam direction, but it’s much better than with cosmic rays.

The other alternative to highly energetic particle collisions is high-precision measurements at low energies.

You can use high precision instead of high energy because, according to the current theories, everything that happens at high energies also influences what happens at low energies. It is just that this influence is very small.

Now, high precision measurements at low energies are a very powerful method to understand short distance physics. But interpreting the results puts a high burden on theoretical physicists. That’s because you have to be very, well, precise, to make those calculations, and making calculations at low energies is difficult.

This also means that if you should find a discrepancy between theory and experiment, then you will end up debating whether it’s an actual discrepancy or whether it’s a mistake in the calculation.

A good example for this is the magnetic moment of the muon. We have known since the 1960s that the measured value does not fit with the prediction, and this tension has not gone away. Yet it has remained unclear whether this means the theories are missing something, or whether the calculations just are not good enough.

With particle colliders, on the other hand, if there is a new particle to create above a certain energy, you have it in your face. The results are just easier to interpret.

So, now that I have covered why particle colliders are a good way to probe short distances, let me explain why I am not in favor of building a larger one right now. It’s simply because we currently have no reason to think there is anything new to discover at the next shorter distances, not until we get to energies a billion times higher than what even the next larger collider would reach. That, and the fact that the cost of a next larger particle collider is high compared to the typical expenses for experiments in the foundations of physics.

So a larger particle collider presently has a high cost but a low estimated benefit. It is just not a good way to invest money. Instead, there are other research directions in the foundations of physics which are more promising. Dark matter is a good example.

One of the key motivations for building a larger particle collider that particle physicists like to bring up is that we still do not know what dark matter is made of.

But we are not even sure that dark matter is made of particles. And if it’s a particle, we do not know what mass it has or how it interacts. If it’s a light particle, you would not look for it with a bigger collider. So really it makes more sense to collect more information about the astrophysical situation first. That means concretely better telescopes, better sky coverage, better redshift resolution, better frequency coverage, and so on.

Other research directions in the foundations that are more promising are those where we have problems in the theories that do require solutions; this is currently the case in quantum gravity and in the foundations of quantum mechanics. I can tell you something about this some other time.

But really my intention here is not to advocate a particular alternative. I merely think that physicists should have an honest debate about the evident lack of progress in the foundations of physics and what to do about it.

Since the theoretical development of the standard model was completed in the 1970s, there has been no further progress in theory development. You could say maybe it’s just hard and they haven’t figured it out. But the slow progress in and of itself is not what worries me.

What worries me is that in the past 40 years physicists have made loads and loads of predictions for physics beyond the standard model, and those were all wrong. Every. Single. One. Of them.

This is not normal. This is bad scientific methodology. And this bad scientific methodology has flourished because experiments have only delivered null results.

And it has become a vicious cycle: Bad predictions motivate experiments. The experiments find only null results. The null results do not help theory development, which leads to bad predictions that motivate experiments, which deliver null results, and so on.

We have to break this cycle. And that’s why I am against building a larger particle collider.

41 comments:

  1. So, imagine tomorrow the LHC shuts down.

    Well, it turns out that there is a recent 2.5 sigma excess in CMS in the search for gluinos in a GMSB scenario. In the exact same channel, ATLAS has a 2 sigma excess. Are you telling me that we shouldn't try to see whether it is real or not?

    It is simply wrong to say that the LHC will not find anything. At the end of it, it will necessarily have a few excesses here and there. You are promoting a scenario where we will not be able to know whether they are real or not, saying that we shouldn't keep looking at them, and of course that we shouldn't look at what is there at higher energies.

    You are advocating anti-science. Probably you don't care and you are just enjoying this circus around you.

    1. Marc,

      Your comment is ill-informed.

      "So, imagine tomorrow the LHC shuts down."

      The LHC is presently shut down - for a scheduled upgrade.

      "Well, it turns out that there is a recent 2.5 sigma excess in CMS in the search for gluinos in a GMSB scenario. In the exact same channel, ATLAS has a 2 sigma excess. Are you telling me that we shouldn't try to see whether it is real or not?"

      The LHC (and its HL upgrade) will continue to collect data for about 10 more years. I have not said, not here and not elsewhere, ever, that one should discontinue it.

      As to the fluctuations. If you stop taking data there will *always* be fluctuations at low significance that could be something. This is not a good argument for continuing to search - it could always be made for any experiment.

      "It is simply wrong to say that the LHC will not find anything."

      I did not say it would not. Stop fabricating things I did not say.

      "At the end of it, it will necessarily have a few excesses here and there. You are promoting a scenario where we will not be able to know whether they are real or not,...

      As I said, you will always have "a few excesses here and there", no matter at which point you stop taking data. This is not a valid argument to build a next larger experiment.

      "You are advocating anti-science."

      I am saying that we should invest money into research efficiently. You seem to have a problem with that.

      "Probably you don't care and you are just enjoying this circus around you."

      This is an ad-hominem attack combined with amateur psychology.



    2. Marc, it is anti-science to refuse to admit that your theories are wrong when the $10B LHC, which breathlessly promised to find evidence that the theories were correct and confidently asserted that if the evidence existed it would find it, then failed to find that evidence.

      Specifically, there is no evidence of extra dimensions, or super-symmetry particles, or dark matter particles or any other new physics.

      A scientist that cannot admit failure and just keeps revising the same theory into currently untestable regions is no longer a scientist, but a religionist. They believe in their theory regardless of how many experiments fail to confirm it.

      What's the overall track record on 2.5 and 2.0 sigma results in the LHC? What percentage of those turn into 5 sigma results? I'd wager the odds are not favorable.

      Unfortunately on the Normal curve we can't rule anything out, but the real question is about the odds of an experiment teaching us something significant (i.e. is actually evidence of something new), and that is where this debate should be focused.

      It seems pretty clear the overall odds of learning something significant with twenty new experiments is going to be higher than just continuing to collide particles in the hope that new physics will finally show up.
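For a sense of scale on those significances: the probability that a pure statistical fluctuation reaches a given sigma level can be computed directly from the Gaussian tail. This is a one-sided sketch that ignores the look-elsewhere effect, which makes real-world false alarms more common still:

```python
import math

def one_sided_p_value(sigma: float) -> float:
    """Probability that a Gaussian fluctuation exceeds `sigma` standard deviations."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

# A 2-2.5 sigma excess is expected by chance a few percent of the time in any
# single channel; 5 sigma corresponds to a chance of roughly 3 in 10 million.
for s in (2.0, 2.5, 5.0):
    print(f"{s} sigma: p ~ {one_sided_p_value(s):.2e}")
```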

  2. Bee - I tend to agree with you on the idea of another hadron collider. However, I would be interested to know your thoughts on eRHIC/CEBAF, the next linear collider (a few choices), and generally on facilities specializing in nuclear (as opposed to particle) physics. Best, and hope there is still snow where you are :)

  3. Doubled word, 'in', missing word, 'of'.

    >...until recently I was in in favor building that larger collider.

    >...until recently I was in favor of building that larger collider.

    1. Thanks for pointing out. I must have read this 100 times and didn't notice!

  4. Hi Sabine !
    Great video (and text).
    - Straight to the point.
    - some of my favorite teachers (if they were still alive) would certainly help you
    put the 'new' LHC debate to rest.
    - some would certainly say
    " If we could shave
    a little
    from these defense budgets,
    we could finance
    phenomenal scientific experiments (regardless of results) and alleviate some of the populace poverty issues
    - at the same time.
    ... if only.

    Once again.
    Love Your Work.
    All Love,

    1. A.C.
      There's a saying over here (UK) - "if wishes were pounds we'd all be millionaires". It may be politically astute for a Government to cut a budget item, but persuading their populace to switch money from defense to CERN, rather than say to health or education, would be a difficult sell to say the least.
      The Governments that support CERN both full and associate members do so from their overall science budgets. In the UK oversight of our contribution to CERN is by a committee made up of scientists, engineers and civil servants not - rightly - just particle physicists. I don't know but it seems reasonable to assume other countries have the same set up. CERN decides where to spend their money but they have to persuade their funding members and associates that their plans are feasible and potentially productive.
      I slightly disagree with Dr Hossenfelder - "physicists should have an honest and public debate about the evident lack of progress in the foundations of physics and what to do about it". Accepting that 99% of the public and 99.999% of politicians won't have a clue about the scientific arguments doesn't mean that the debate shouldn't be out in the open. The public, scientific and (putting the UK Brexit farrago to one side) political communities all have an interest in how budgets are set, what the expectations are and where their particular interests fit with other demands on financial resources.
      The decision on which components of the FCC proposal to fund—or whether to fund it at all—will be made by the CERN Council, shortly after the next European Strategy is finalised early next year. There's no real prospect that CERN funding will be withdrawn en-masse by member states but the council will have to persuade a wide and potentially sceptical audience that the decision they come to is correct.

    2. @ RGT,
      Ah,my friend, you're right. - It was just a lament.
      sometimes I think or feel aloud (it usually doesn't end well) lol
      I'll trade you a saying
      whence I come.
      - (scientifically),
      If every incorrect theory,
      failed experiment, wrong prediction, or even wind-blown fad that sends good scientists like barking dogs to a wood with nothing but 'wrong trees' were a shilling;
      the wealth would be
      unimaginable.
      Even this be true
      I find myself neither
      Rich nor downtrodden.
      - trust you the same.
      All the best.



  5. Bee,

    do your arguments against building a new, larger 100 km FCC collider apply to simply upgrading the LHC magnets to 16 tesla, while keeping the 27 km tunnel and reusing most of the LHC equipment, for a center-of-mass energy of 27 TeV?

    1. It would be too inefficient to accelerate protons in this configuration due to the higher synchrotron radiation.

    2. I know about synchrotron radiation, but then why are there serious CERN proposals for the HE-LHC, if they haven't solved the synchrotron radiation problem? The cost estimate is ~7 billion.

    3. I thought, though I could easily be wrong here, that synchrotron radiation in p-pbar accelerators was a factor of about 10^13 less than in e--e+ accelerators for a given beam energy. Since the upgrade from LEP to the LHC, I thought that magnetic field strength became the limiting factor in design energy.
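The factor quoted here follows from synchrotron radiation power scaling as 1/m^4 at fixed beam energy and bending radius, so the electron-to-proton ratio is (m_p/m_e)^4. A quick check with the standard mass ratio:

```python
# Synchrotron radiation power at fixed beam energy and bending radius scales
# as 1/m^4, so the electron/proton ratio is (m_p / m_e)^4.
M_P_OVER_M_E = 1836.15267  # proton-to-electron mass ratio (CODATA)

ratio = M_P_OVER_M_E ** 4
print(f"Electrons radiate ~{ratio:.2e} times more than protons "
      f"at the same energy and bending radius")
```

The result is indeed of order 10^13, consistent with the figure in the comment.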

  6. The writer is generally right insofar as her arguments cannot be disputed. She is right in arguing that public funds shouldn't be wasted. Why is a much bigger collider needed? Physicists in fact cannot so far provide any justification. Some private companies seem to encourage building expensive experimental infrastructure of no real use. I can list at least a dozen such centers. However, once my coming book is published it will show that we do not yet understand the most critical aspects of particles. We need additional new insights about the ultimate nature of the universe, and that requires knowledge of the ultimate nature of particles and forces.

  7. You’re fighting the good fight here Sabine. Hopefully without overwhelming you however, I’d like to suggest that you’re trying to fix just one element of a far larger problem. Of course we’d love it if you did get somewhere with this focused approach, but how far could that truly be if the issues that you raise are currently endemic to science itself? In soft sciences such as psychology for example, outright p-hacking is known to occur. Thus it may be that in order to fix the field of physics in the ways that you suggest, we’ll also need to fix science in general. Even conceptually, how might the institution of science itself become improved?

    Here it may be helpful to note that metaphysics, epistemology, and axiology, or the three branches of philosophy, exist at a more basic level than science itself exists. Thus in order to fix science wherever it remains “soft”, we may need a respected community of professionals with its own generally accepted principles of metaphysics, epistemology, and axiology from which to guide the institution of science itself. I’ve developed four such principles.

    It’s my second principle of epistemology that seems most applicable to your concerns however. It reads: “There is but one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (or evidence), and uses this to assess what it’s not so sure about (or a model). As evidence continues to remain consistent with a given model, there is increasing reason for it to become believed.”

    If such a principle were widely adopted by a respected community of professionals, far more than just “the soft side of physics” should benefit.

    1. Your '2nd principle' is too broad to be useful in science.

      You would need hard definitions of what counts as "evidence", as "consistent", and "increasing reason"; and these are very hard to devise.

      When do we decide that evidence NOT consistent with a model should make us STOP believing in a model? For example, the whole reason for developing the model of dark matter is because galactic rotational speeds do not obey the Einsteinian model of gravitational attraction.

      But that is evidence against that model, so why don't we reject it? Why invent some mystery matter, which we still cannot find, as a way to keep the Einsteinian model without any modifications, instead of just considering it disproved?

      Part of "figuring things out" in your model must include "unfiguring things out". As it stands, you present a monotonically increasing function; namely consistent evidence increases belief, but inconsistent evidence does nothing. All the evidence for believing in Newton's theory of gravitation still applies, but now we reject it.

      As presented, your principle for "belief" is a monotonically nondecreasing function, and you do not consider the weights of evidence. A single counter-factual can destroy belief in a system and outweigh all the evidence in favor of it.

      And the big problem with the physics of GR and QM right now seems to be, IMO, an utter lack of evidence; the systems are consistent with what evidence we have but not consistent with each other, and there is no clear evidence at all to help physicists figure out how to fix that problem.

      So we have a meta-modeling problem your 2nd principle does not address: With zero evidence to suggest that one approach A to produce evidence is superior to another approach B to produce evidence, how do we proceed? What do we pay for?

      Of the dozens of ideas and experiments that have no evidence, but might lead to evidence if funded, how do we conscious beings consciously decide where we should spend our time and money?

      And since we will be operating without any clear evidence, how do we decide when to give up on an idea, or decide that its money would be better spent elsewhere?

      Science cannot always proceed on a simple "If A, then B" model of discovery. That often works, but at the root of things, somewhere, there is some person who says, without any evidence or justification that anything will happen, "I wonder what happens if I do this..."

      They do an experiment, on paper or physically, out of curiosity. And that is a way of "figuring things out" that violates your 2nd principle. If you insist that it doesn't, then IMO your principle is too broad a claim to be usefully applied toward making scientific decisions.

    2. Thank you Dr. Castaldo for your involved reply. Sorry for the delay. I don’t usually hang out at physics blogs. It’s my tremendous esteem for Sabine that nevertheless forces me to occasionally stop by. So, is my EP2 too broad to be useful in science? Great question!

      Just because "evidence", "consistent", and "increasing reason" are difficult to assess, this doesn’t mean that such ideas must be cast aside in themselves. They are of course crucial to science regardless of any given set of epistemological standards. They are variables from which to assess what’s going on. And indeed, I propose that my EP2 itself is what we use to assess the validity of what qualifies for “evidence”, “consistent”, and “increasing reason” in any given instance. (This is to say that there is only one process by which anything conscious, consciously figures out whether or not something qualifies as “evidence”, as “consistent”, and as “increasing reason”. It takes what it thinks it knows (1), and uses this to assess what it’s not so sure about (2). As (1) remains consistent with (2), (2) tends to become more believed.)

      Furthermore I think that you’ve overlooked the implicit statements of my EP2. If I explicitly state that there is only one process by which anything conscious, consciously figures anything out, then there will naturally be an implicit statement that anything which doesn’t quite meet that standard, doesn’t quite consciously figure anything out.

      What we actually pay for seems to be what’s politically useful for us to pay for. But if the institution of science had more effective epistemology from which to work than it does today, then I believe it would become more politically effective to spend our money in more productive ways. My EP2 clearly supports evidence-based rather than “beautiful” theories. And if philosophers cannot agree upon any such effective principles from which to better found the institution of science, then scientists will need to develop some of their own. I propose two regardless of standard “science vs. philosophy” nonsense.

    3. Philosopher Eric: Your third paragraph is just a circular re-definition of the word 'consciousness' and what it means to 'figure things out'.

      You don't get to redefine words; nor do you get to claim that 'consciousness' works "in only one way" when all we have is a very general definition of consciousness as a phenomenon in the first place.

      I find that equivalent to claiming computer code works in "only one way", it is all a network of transistors switching each other on and off recursively. Great, and true enough, but it is far too broad a definition to help us predict anything at all about what comes next in computer science.

      Your 2nd postulate is worse than that. There is no generally accepted model of how consciousness works or figures things out that is consistent with all credible experiments on how real conscious people figure things out; nor do you provide such a model. You just assert 'consciousness' has this property of "working only one way," without evidence.

      I reject that assertion, from personal experience with consciousness and how I come to believe certain scientific principles are true, or come to reject scientific principles I believed were true. This has more to do with LOGIC than the weight of evidence; a single, repeatable, observable counter-factual is enough to kill a mountain of evidence. The two-slit experiment is an example of that: By logic, what I believed to be true about matter, due to the increasing evidential weight of many millions of waking experiences with matter, was overturned by one experiment, the outcome of which was definitely perceived by me consciously; thus your 2nd principle just doesn't work. It isn't about the weight of how many experiments confirm a belief; it is about logic.

    4. Dr. Castaldo,
      I can see that you and I would have all sorts of fun if I do start hanging out around here! Perhaps….

      I don’t see that the premise of my third paragraph mandated the conclusion. The point of it was that if something is explicitly stated, then there should naturally be certain implicit statements that go along. Apparently there were certain implicit statements of my EP2 that you had originally missed. Thus my clarification.

      It’s interesting that you mention that I don’t get to redefine words (not that I think I did so here). Actually my first principle of epistemology states that there are no true or false definitions for our terms, but rather just more and less useful ones in a given context. One of academia’s greatest flaws, I think, is that people today say things like “What is consciousness”, “time”, “life” and so on, as if true terms exist out there to potentially discover. Instead I believe that we must simply try to develop effective definitions for our terms. You could counter my EP1 by providing what you consider to be a true definition for any term.

      It’s interesting that you say my EP2 is too broad, and then also state that you have other ways to figure things out. If true that would actually make it too narrow. But have you? Is my EP2 too narrow? You could show this by means of an example. Try to find something that you’ve figured out, which doesn’t also reduce back to my EP2. Are there any models which you’re able to validate without evidence?

      Modern physicists are spending a great deal of money trying to do exactly this from the premise of “beauty”. The author of this site objects. My own suggestion however is that this is just one example of a far greater problem which infects all of science (though actually seems most problematic in softer varieties such as psychology). I believe that the cure will be to develop a respected community with its own accepted principles of philosophy from which to better found the institution of science. I propose one such principle of metaphysics, two of epistemology, and one of axiology.

    5. Philosopher Eric: Your third paragraph says there is only *one process* by which consciousness can figure things out; that is too narrow, then claims the inverse, that if anything is figured out by any other process, it isn't consciousness; that is too broad a claim. By this definition, you don't prove there is only one process; you simply assert it.

      It is like me saying all cars require gasoline to move under their own power, thus if it doesn't require gasoline to move under its own power, it isn't a car.

      Can you see I am then redefining what a "car" is, by assertion alone, without any proof that I am right? We have all electric cars that require zero gasoline to move under their own power.

      If you then try to prove your assertion that there is only one process, then (in my mind) that "process" must encompass so much brain activity we might as well call it "information processing" which is too broad to be of any use (like my transistor example).

      I think you also have an infinite descent problem in this claim, because sooner or later, somebody has to get the idea to invent something new that doesn't follow from the "evidence".

      In fact, people have to do that a lot. The microscope was invented when a spectacles maker put two lenses on each end of a pipe, and looked through it and realized magnification. Why did he do that?

      Since he was surprised by the result, we know it was likely just to see what would happen if he stacked lenses, and not because of any goal to accomplish microscopy or figure something out. Heck, he couldn't even know this would have any application; the discovery of cells and micro-organisms followed this invention, and by the same route of just wondering what would happen: a microscope was at hand and used to look into a drop of water. Or a cork, which is how 'cells' were discovered.

      It is after curiosity produces a surprise that the process of tuning lenses to make a sharp image begins, or the process of classifying micro-organisms, or classifying biological cells, begins. Following classification by form, we start to learn function and behavior. Most of the science of modern medicine (and much of astronomy) rests on a spectacle-grinder 470 years ago wondering what would happen if he stacked lenses.

      Throughout this process of science, we humans are 'figuring things out' NOT by consciously processing any evidence and building up beliefs, but by just intentionally (and thus consciously) trying random things to see what happens.

      If what happens is a surprise that doesn't kill us, then we begin the processes of classification, identifying generalizations of relationships between classes, and then using the generalizations as the fodder for theories of the relationships.

      But that ascent begins with an inciting novel experiment that surprised somebody. It is not the processing of data and building up a belief and confirmation of it. Random curiosity may often go awry, but it has to be part of how we consciously figure things out.

    6. Okay Dr. Castaldo, I think I understand what you’re saying. First you’re concerned that if anyone does come up with another way to figure something out, then in order to preserve the validity of my EP2 I’ll simply assert that it’s not “conscious figuring”. Unfortunately today “consciousness” is a very squishy term, so I do respect such pessimism. Panpsychism seems to keep getting more popular, for example. I’ve now decided something given your concern, however. I shall recuse myself as arbiter of “conscious” in these matters. In all cases that shall be for others to decide. If one does want to define “consciousness” generally enough, then my principle should tend to fail.

      Furthermore I’m pleased that you’ve proposed a competing way to figure things out, and I certainly do consider it “conscious”. This is exactly the sort of probing that I seek!

      When we try things out just for the hell of it (like the stacked lenses), or even notice interesting things accidentally (as Alexander Fleming did with penicillin), are things consciously figured out beyond evidence based models? Hmm…

      In the lens case, couldn’t it be said that one has an inkling that something interesting might happen by stacking them (or a model of at least that), with evidence being the person’s experience with the sorts of things that lenses do? If so then this could be assessed to fall under my EP2, and even if somewhat at a “proto” capacity. Of course if taken further we’d expect strong correspondence with my EP2, that is unless evidence were subverted for pleas to “beauty”.

      Regarding “dumb luck observations” like Fleming’s, here no model exists initially at all — just evidence. Thus once such evidence does emerge, modeling might begin as well. In order for any conscious understandings to exist however, both modeling and evidence should be necessary to some degree.

      Regardless of whether or not I’ve addressed your concerns sufficiently, let’s talk about the big picture as well. Do you consider it a waste of time to try to develop various generally accepted principles of epistemology from which to perhaps improve the institution of science? And if you do, what is your evidence? Can you make a convincing case that I also should not be hopeful?

      If you are conceptually interested in the improvement of science through various accepted principles of epistemology, I’d appreciate hearing about it!

      I’m not sure how to leave comment links in blogger, but if you can follow this you’ll see a 2015 comment I made at philosopher Massimo Pigliucci’s, which Sabine responded to a few comments down. You’ll also note how similar my message remains today.
      https://platofootnote.wordpress.com/2015/12/10/why-trust-a-theory-part-iii/comment-page-1/#comment-1477

      Delete
7. Philosopher Eric: I think it clear that "models" (made of neurons) are necessary to figure anything out at all; so I imagine the spectacles maker, even in 1540, had a fairly robust empirical mental model of how lenses work.

      In my view, his "wonder" moment might also have happened accidentally; seeing some pattern of light through lenses on his worktable, or might have been an idle curiosity because he realized his mental model didn't tell him what "recursive" lensing would do (i.e. looking at a lens through a lens).

      In the first sense (sun accidentally shining through multiple lenses), he would be doing the equivalent of discovering penicillin; in the second sense, he would be doing an experiment to resolve a gap in his mental model of lenses.

But in both cases, he is surprised; unlike your EP2, he has no existing model for what will happen. The mental "model" of what happens, and the microscope itself, appear full-blown upon his seeing the results of his experiment.

      I imagine the "Diet Coke and Mentos Fountain" was discovered exactly this way; someone did it for the hell of it, was astonished, and now billions of people have seen it.

On the big picture, a mind (from mice to men) is constantly developing and updating models; it is a learning machine. Primarily this is modeling the real world to anticipate the future (at any time scale) and potentially change it. (Only humans are capable of very long-scale anticipation, but that is a matter of degree; all neural minds anticipate at some scale, because even a single neuron is modeling something.)

      Understanding how we can learn and know things is just another model anticipating the future; in your case, to influence the future of scientific endeavors and make them more efficient and less prone to error. That is not a waste of time.

      That said, I invoke Einstein's Razor; that we should make our theories as simple as possible, but no simpler!

      I think science generally proceeds from simple classification, to similarity groupings, to theories. Darwin started out to make a catalog of God's Work, not to redefine and overthrow the existing science of biology (and disease and much medicine).

      Dr. Hossenfelder is busy cataloging, here are the biases, here is how they are deployed, here is how this desire for "beauty" is ruining science, here is how money is corrupting it.

      I would suggest, instead of trying to come up with a unified theory of how science is done, it might be more productive to catalog the ways science is done wrong, then group by similarities and/or figure out why scientists fall prey to this; and what can prevent that category of error. This "bottom up" approach may indeed lead to a unified theory of science, but even if it doesn't, it can have utility in understanding the individual elements of science.

      Delete
    8. Well at least grant me this Dr. Castaldo. Some valid objections to a given model warrant that the model be discarded outright. Other valid objections merely warrant model tweaking. It’s not like you’ve found all sorts of distinctly different ways to figure things out. Instead you’ve presented some points at the margins of my model which require accounting for.

One of them is that only fairly normal conceptions of “conscious” apply. Certainly nothing like panpsychism! Secondly if something is done just to see what will happen, this should still present an “evidence/model” framework of some sort. The model could be as simple as “something interesting might happen if (I drop Mentos into Diet Coke)...”, with evidence for such a simple model existing as curiosity about novel ways of treating various things. If such “proto-figuring” does result in anything interesting then we’d expect more detailed models to be assessed against associated evidence. Thirdly when something interesting is accidentally observed (such as the effects of penicillin or accidentally dropping Mentos into Diet Coke) this will not in itself be consciously figuring anything out. Such figuring might indeed begin once associated models are developed however.

      I’ve been bringing up this principle for about four years on the blogs, though no one has achieved near your level of critical analysis. Well done!

      On the big picture, I hadn’t quite made the association that Dr. Hossenfelder’s approach is “bottom up”, while my own is “top down”. That does seem reasonable. And it could be that her approach is best for her and the field of physics. My own project faces tremendous structural impediments and true allies seem rare. But let me give you a bigger big picture so that you might have a better sense of my perspective.

      Of course “natural philosophy” emerged from philosophy in recent centuries, which then became known as “science”. Hard forms of science like physics then went on to make the human into an extremely powerful animal. Conversely our mental and behavioral sciences remain extremely soft today. Even softer still is what’s left of philosophy. While soft scientists do find some common understandings, and at least dream of developing various general models from which to effectively describe our nature, philosophy is often considered by philosophers as more of an art to appreciate rather than an exploration of reality.

As I’ve said, I believe that the topics which remain under the domain of philosophy, that is metaphysics, epistemology, and axiology, exist as the premise from which all of science must build. (But if philosophy remains so troubled, then why have hard forms of science done so well? I think because they’re less susceptible given their empirical nature and that their subject matter doesn’t naturally threaten standard values, unlike human related sciences.)

      I’d like humanity to develop a community of professionals that has its own generally accepted principles of metaphysics, epistemology, and axiology, mainly so that our soft sciences will have a strong enough foundation from which to finally build effectively.

      (I shouldn’t admit this here, but I’m sinisterly pleased that modern physicists make appeals to beauty, thus giving the field a taste of how things work on the soft side. When not sufficiently founded, apparently even our greatest science can suffer!)

      I certainly appreciate people working on bottom up solutions. Given that my own project concerns all of science however, I believe that nothing less than a top down solution will suffice. And indeed, I consider success here pretty much inevitable at some point. Has science even reached puberty yet?

      Delete
    9. No problem, not every challenge to a theory is fatal. However, a theory should be a model of some aspect of reality against which it can be tested.

It should be able to predict what will happen, and what won't happen. In the case of probabilistic theories, its predictions over multiple trials should match the distribution. In primarily observation-only disciplines (e.g. astronomy, paleontology, archeology, forensics) a theory should explain what has happened, or at least narrow down what could have happened to lead to the observed data, and in that sense it rules out some things that did not happen.

      If 'tweaking' the theory destroys the model, so even unrealistic things or outright errors are 'explained' by the model, then the tweak was just a fallacy masquerading as a fix.

      One must also avoid the fallacy of "proving too much", i.e. offering an explanation that allows anything to happen (or have happened). Belief in all-powerful or magical beings or magical mechanisms (like Karma or a Grand Plan) are all in this category; they explain nothing because literally anything can happen.

      Science too can fall into this trap; e.g. I think based on reading that String Theory "proves too much", as does (IMO) the "multiverse" as a way to explain the constants (as drawn from a distribution) or the "Many Worlds" interpretations of wavefunction collapse like Hugh Everett's.

      A theory of science has to be a predictive model. It cannot exclude serendipity, it cannot exclude subconscious conclusions (My subconscious still solves problems for me a few times a year, I wake up (from an unconscious state) in the middle of the night with a solution in mind -- It is one of the ways this particular conscious being invents solutions to problems; and I know I am not the only one).

      I think (to go in circles) the major flaw in your theory is failing to account for "disproof by counter-example," in which a beloved theory can be overturned by a single piece of evidence. It isn't about "reinforcing a belief". An ant must be able to defeat a lion.

      As for philosophy being an art, I'd say have fun, but don't pretend it is science. If it doesn't fit real data and to a degree better than chance match what does happen and exclude what doesn't happen, IMO that isn't science. Also IMO, the odds of a top-down model doing that without referring to or being constrained by any real data seem very long indeed.

      Delete
    10. Dr. Castaldo,
Let’s consider my first principle of epistemology once again: “There is only one process by which anything conscious, consciously figures anything out. It takes what it thinks it knows (evidence), and uses this to assess what it’s not so sure about (a model). As a model continues to remain consistent with evidence, it tends to become progressively more believed.”

      There are various implicit statements which this model makes that I could tack on explicitly if not clear enough. For example I could conclude with “…progressively more believed rather than proven”. Why? Because the only thing that could ever truly be “proven” to you about reality, is that you yourself exist in some manner — all else will concern more and less supported belief. (Actually this might be a useful inclusion since many today seem not to grasp the profundity of Rene Descartes.)

      Regarding your "disproof by counter-example" concern, for that I could formally add something like: “When evidence does not remain consistent with a model, the model tends to become disbelieved”. But that’s already implicitly behind what’s explicitly stated. Even Einstein showed a preference for parsimony with his “Everything should be as simple as it can be, but not simpler!” razor.

      Regardless of how it might be most productive to tweak my EP1, observe that this is certainly not physics. The same could be said for the proposals that you’ve just provided. (Good stuff by the way!) So what does academia call the sort of things that we’ve been dickering with here? In a word this is “epistemology”. Rather than force each branch of science to figure out their own epistemology however, shouldn’t physicists be doing physics, chemists be doing chemistry, and so on? Shouldn’t there be a community of specialists tasked with sorting out epistemology for scientists to use in general?

That was the point of the comment that I linked to earlier, and I certainly didn’t expect Dr. Hossenfelder to actually weigh in! She did however, and decided that it deserved further consideration. Back then perhaps she was more hopeful that she’d be able to straighten her field out herself. Of course “Lost in Math” was still to come. Now that it’s out however, assessments must be made. Will a “bottom up” approach be sufficient? I suspect not.

      I believe that all of science is in need of a respected community of professionals with its own generally accepted principles of metaphysics, epistemology, and axiology, from which to guide the institution of science. Thus I believe that a new breed of philosopher must emerge that isn’t concerned with cultivating two and a half millennia of humanistic “art”, but rather is tasked with developing accepted principles from which to better found the institution of science. Thus there’d be two distinctly different varieties of philosopher roaming the halls of academia.

      It’s good to hear your thoughts on "Many Worlds" interpretations of wavefunction collapse. I haven’t explored that approach much, but if these physicists actually invent innumerable worlds in order to render human observations consistent, then I’ve got to laugh! WOW!

      This gives me an excuse to submit my single principle of metaphysics. It reads, “To the extent that causality fails, nothing exists to figure out”. I thus stand with Einstein as a perfect naturalist. I would however adjust his famous line to something a bit more epistemically responsible. It would instead read, “To the extent that God plays dice, nothing exists to discover”.

      Delete
    11. Phil. Eric: Friendly arguments.

      ...the model tends to become disbelieved.

      My problem with this is in the word "tends"; this is precisely the parallel of arguments in both Earth Science and Evolution of "Gradualism" versus "Punctuated Equilibrium" or "Catastrophism". The latter two are predominant. An asteroid struck the Earth, a catastrophe for all life on it, it changed both the landscape and evolution.

Some of evolution seems to be gradual adaptation; but we can't be certain. When we see some mutations that result, for example, in little people born from average-sized parents, that is not a gradual change in size. We know by the nature of genetics that changing a single nucleotide can cause a "catastrophic" change in a phenotype; thus we shouldn't expect evolution to always proceed gradually. Earth science is the same: a volcano appears in the relative blink of an eye, and we have seen the appearance of a new island in a single human lifetime. The Grand Canyon is gradual, but a single extended storm can make a lake overflow its banks, erode them away, and cause a flood and then the disappearance of the lake, all within days.

The same is true for science. Balloons can pop. A single counter-example doesn't "tend" to do anything; it is a catastrophe for the belief, alive one second, dead the next. Language like "tends to" implicitly endorses gradualism; and in a logic-based enterprise like science, almost nothing warrants gradualism. Either there is an infinite number of primes, or there is a finite number. As children learning to multiply and divide, we are all taught how to prove there are an infinite number of primes, and once we learn that simple proof, it is a "catastrophic" bit of knowledge. We don't "tend" to believe there are infinite primes, we absolutely effing know there is no largest prime.
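For readers who haven't seen it, the "catastrophic bit of knowledge" referred to here is Euclid's classic argument; a minimal sketch:

```latex
\textbf{Euclid's argument.} Suppose $p_1, p_2, \dots, p_n$ were
\emph{all} the primes. Consider
\[
  N = p_1 p_2 \cdots p_n + 1 .
\]
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But every integer $N > 1$ has some prime factor, which therefore cannot
be among $p_1, \dots, p_n$, contradicting the assumption that the list
was complete. Hence there is no largest prime.
```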

      on "Many Worlds" interpretations of wavefunction collapse.

      A quantum wavefunction can, for example, incorporate several states with probabilities for where a particle will be found; but when it collapses only one of these states, at random, becomes reality; the rest are discarded (This is what Einstein meant by "God does not play dice", he wouldn't believe the choice was non-deterministic). A series of such random selections creates a "history".

      The Many Worlds interpretation (by Hugh Everett, 1957) attempts to address Einstein's concern by claiming nothing gets discarded, all possible histories are realized in an infinite number of parallel universes (which we cannot see). Thus there is no randomness, and we don't have to explain it, or why our particular history is as it is. There was no "random choice" made, that is an illusion. To my mind the problem with this is it is by its nature not a testable theory, and it doesn't truly explain anything to us. If believed, it also halts any investigation into why the wavefunction collapses or how an eigenstate is determined; e.g. perhaps Penrose's suggestion that gravitational quantum super-position plays a role; which might be better clarified by a quantum theory of gravity.
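The "series of random selections" in the standard (non-Everett) picture can be sketched in a few lines of code. This is only an illustration of Born-rule sampling; the state, its labels, and its amplitudes are made up for the example:

```python
import random

def collapse(amplitudes):
    """Pick one outcome from a superposition, Born-rule style:
    each outcome is chosen with probability |amplitude|^2
    (the state is assumed normalized)."""
    labels = list(amplitudes)
    weights = [abs(amplitudes[k]) ** 2 for k in labels]
    return random.choices(labels, weights=weights)[0]

# A "history" is a sequence of such random selections; Many Worlds,
# by contrast, takes every branch of every selection to be realized.
state = {"up": 0.6, "down": 0.8}   # 0.6^2 + 0.8^2 = 1
history = [collapse(state) for _ in range(20)]
```

Over many trials the frequencies approach 36% "up" and 64% "down", which is exactly the sense in which a probabilistic theory's "predictions over multiple trials should match the distribution".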

      “To the extent that God plays dice, nothing exists to discover”

      Nope. That is patently false. God or not, the field of statistics is robust and there is plenty left to discover about things for which we have no simple or clear model. I could claim "God plays dice" in Evolution, and Evolution has "discovered" amazing machines. In physics, chemistry, statistical sociology and politics and law enforcement and medicine and Earth sciences (predicting earthquakes, floods, volcanic eruptions), there are distributions to discover.

      When the world gets stochastic, the statisticians get going. Well, they probably get going!


      Delete
    12. Dr. Castaldo,
      Yes friendly arguments not only seem the most enjoyable, but the most effective. I try to emulate Ben Kingsley’s portrayal of Gandhi for my own rhetoric.

      I understand your point on “tends”, though here I think you might be mixing ontology with epistemology a bit. I presume that there are various discrete states of existence out there to potentially describe. Human understandings beyond phenomenal experience however, or what Immanuel Kant called “noumena”, simply cannot be perfectly known — belief is all we have. “Proof” today may literally be termed “extremely well founded belief”.

      In any case note that if I were instead to write “…as a model stays consistent with evidence, it becomes believed” rather than ”tends to become believed”, then my principle would be easy to challenge. Surely even with extremely well supported evidence a given person might disbelieve a given model.

      You’ve mentioned “knowing” that there is no largest prime number. Furthermore you’re quite able to mathematically prove this to be the case and so can’t be wrong. But mathematics is merely a humanly fabricated language rather than “noumena”. Many assertions are true/false by definition simply given the nature of the languages that we’ve developed — tautologies. Surely BackReAction is the wrong place to let yourself get “Lost in Math”! :-)

I consider us quite aligned on the Hugh Everett QM interpretation. To me it seems utterly ridiculous to propose that each quantum state exists exactly as such in its own parallel universe, merely given that this would help support determinism. As you say, this is not a testable theory and doesn’t effectively explain anything. Still, as a mere human I must not conclude that this makes it wrong. But I can still say that the whole thing seems utterly ridiculous, even if somehow true. Regardless, I find it difficult to respect physicists who preach that sort of thing.

      We’ve delved strongly into my epistemology, though now we get into my metaphysics. I’m a perfect determinist like Einstein was. I have no idea how the wave function collapse might be perfectly mandated to happen exactly as it does happen, but I nevertheless presume full causality here given that otherwise there wouldn’t be founding reason from which to support such events. Without causality forcing things to function exactly as they do, they can effectively be termed “magical”. I don’t mind physicists supporting the notion of non-causal function, but with that I’d like them to acknowledge that they aren’t quite naturalists in this regard as well.

      A friend once gave me a reasonable argument that Bohr and Heisenberg initially presented their Copenhagen Interpretation epistemologically rather than ontologically. I consider this to be the most responsible way to go. Here it’s merely effective for the human to perceive QM non-deterministically, not to further presume an ultimate void in causality itself. But apparently with his God/dice challenge it was Einstein who took things in a blatantly ontological direction. So perhaps Bohr and Heisenberg were goaded into an ontological position as well?

      As I understand it, QM is the only topic in physics where many professionals have decided that causality itself fails. Otherwise it’s thought that causal dynamics do explain what’s going on in the end (even though we’re far too ill informed to specifically explain what will or has happened in various instances, and so must commonly rely upon probability distributions).

      Thus consider my only principle of metaphysics with perhaps a bit of care. If a failure in causality itself is what explains human uncertainty in a given instance (rather than standard ignorance), how might it be possible to figure out the function of such an event? Given an ontological void in causality itself, would anything even conceptually exist to figure out? What would we then be trying to grasp?

      Delete
13. Philosopher Eric: "Lost in Math" refers to seeking beauty in the mathematics used to model physics (i.e. simplicity of form with 'pretty' constants). I am not lost in math when talking about prime numbers. I just know the rules. Math IS invented by humans, a game with rules designed from the beginning to model the real world, counting livestock and measuring crops.

      >> ... how might it be possible to figure out the function of such an event?

      Statistics; which is what was used. Although slightly different, statistics was invented to address randomness in the real world; specifically the randomness of gambling; throwing dice, playing cards, flipping coins, racing horses, betting on games of chance.

      Although in those circumstances, we don't suspect the outcomes depend on anything truly random; it might as well be, because it would be impossible to measure all the starting conditions accurately enough (and non-destructively) to know in advance how the dice are going to land if a human throws them.

      But we can model the games of Craps or Roulette statistically, and although no game would follow the odds perfectly, in the long run casinos make money off these games. American casinos alone earn $6 Billion annually, just playing the odds.

      You don't have to figure out "the function" of such an event or why it happened, if you can characterize how often it happens, you can make accurate predictions about how many such events will turn out, and that can be very useful.

      This is not to say we shouldn't try to figure out the function, or try to narrow the distribution (a single deterministic answer is the narrowest "distribution"), but if those attempts all come to naught, the probabilistic answer remains useful.
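The "playing the odds" above can be made concrete with a back-of-envelope expected-value calculation. The numbers here are the textbook American-roulette single-number bet (38 pockets, 35:1 payout), used purely as illustration:

```python
from fractions import Fraction

def single_number_house_edge(pockets=38, payout=35):
    """Expected loss per unit wagered on one number of an American
    roulette wheel. A win pays `payout`:1; otherwise the unit is lost.
    Returns the fraction the house keeps on average (positive)."""
    p_win = Fraction(1, pockets)
    expected_return = p_win * payout - (1 - p_win)  # win 35, else lose 1
    return -expected_return

edge = single_number_house_edge()  # Fraction(1, 19), about 5.3%
```

No individual spin is predictable, yet this one number is enough for a casino to profit reliably in the long run, which is the point: characterizing how often events happen can be useful even with no model of why any particular one happens.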

      Delete
    14. Dr. Castaldo,
So “Lost in Math” didn’t address the common fallacy that mathematics is more than a humanly fabricated language such as “English”? The fallacy that it exists “of this world”, or beyond a mind that uses it? The fallacy that it needs to be explored as a science rather than as a conceptual tool? Thus platonism was left unscathed? And Max Tegmark’s mathematical universe hypothesis didn’t get royally bitched out? Well hopefully next edition…

      We seem agreed on the role of statistics. Furthermore I see that you haven’t complained about me accusing many physicists of believing in “magic” through their ontological interpretations of quantum mechanics. A number of physicists at other blogs have taken exception with me about this. First they belittle Einstein as clueless regarding QM, and then get upset when I say that Einstein displayed a full measure of naturalism in this regard, while their naturalism instead gets into a “super” variety.

      I suppose that we’re about tapped out for this one. But I do have an extensive set of models on the soft side of science that I’d love your thoughts on. This ranges from psychology to brain architecture and consciousness. Shoot me an email if you’re interested: thephilosophereric@gmail.com

      Delete
8. What if we were to do FCC as a joint program between CERN in Europe and the US DOE? Maybe the Chinese or Russians could even join in. If the cost is diluted enough around the world then it will not impact other scientific budgets as badly.

There is this suspicion I have that there is more to the standard model out at higher energy. If we ran at 100 TeV we would get more complete physics and measurement of symmetry recovery of QED and EW or QFT. The scalar Higgs field is an odd thing in my opinion.

    Cosmic rays are probably the long term future no matter what. Even with FCC we will want to look to even higher energy. At some point we will have nothing but cosmic rays to work with. The Earth's upper atmosphere is a sort of scintillation material, which is an older particle detector method. So high altitude detectors on heliostats or balloons can measure these events. The stats will never be as good of course.

    ReplyDelete
  9. I love your analogy of an electron microscope as a type of particle collider -- because that's just what it is! Many thanks!

    ReplyDelete
10. It sounds like you've been thinking about this for a long time and honing your arguments. There were good arguments for building the LHC, and there are good arguments for continuing its operation with upgrades. There are no good arguments for building an even larger accelerator of similar design. Sciences seem to alternate between eras of stamp collecting and theoretical consolidation. Particle accelerators haven't been turning up any interesting stamps lately. (The Higgs was right where we predicted decades ago, but that's just one stamp.) There are lots of other approaches to try.

    (You'd think bright young-ish physicists would be embracing a shift away from a new gigantic accelerator. Odds are there would be dozens, maybe hundreds, of new projects funded. That means lots of grants for more junior PIs. The career and empire building opportunities should have careerists drooling.)

    ReplyDelete
11. Spend as much time coming up with ideas as you do explaining why others are bad. You won't have to convince anybody if there is an obviously better way to go. Associate your name with something besides being against colliders. 'Cause it's getting old and I'm sure you have plenty of brilliant knowledge to share. You make plenty of interesting points, but you don't follow them to any type of conclusion or experiment. Your ideas presented here are support for your argument, not something we can take action on instead of a collider. Specifically make cases for projects, instead of listing all the interesting unsolved problems. I think the collider debate will work itself out if physicists focus on physics. Find something to argue for, instead of against. You specifically say you aren't advocating for an alternative. Why not!?! You don't like what's happening, but also have no proposed alternative. Telling a bunch of thinkers that they aren't thinking is a hard and lonely road. My 2 cents. Cheers!

    ReplyDelete
    Replies
    1. JB: I'm not speaking for Dr. Hossenfelder; but if my problem is that physicists are misleading the public into spending $20B by lying about what a new collider is likely to discover, then "coming up with my own ideas" does not address the problem of them lying.

      If what I want is for scientists to stop lying and telling half-truths and promising fantasies, and instead to have an honest debate about what we should fund and why, then claiming my own idea is "the best" idea would severely undermine my credibility and make it seem like I have a selfish agenda, and I'm just like the people I criticize.

      The problem at hand is dishonesty in the field. That cannot be addressed with more dishonesty, and it cannot be addressed by keeping quiet about the dishonesty.

      It also cannot be addressed with some alternative: If lies are allowed as justifications, then truthful proposals are at a severe disadvantage because they cannot promote fantasy and magical new physics; and how are the politicians and public funding the projects supposed to know the difference?

      If everybody shuts up about the lies, the liars win, and truthful proposals go unfunded because real science cannot compete with science fiction.

      The problem is trying to get $20B in funding by intentionally misleading political leaders and the public they serve about what is likely to be discovered; in particular "new physics".

      We don't solve the problem of defrauding the public by not informing the public they are being defrauded. Advocating for something else doesn't solve it, shutting up doesn't solve it, and explaining why their fraudulent arguments are bad ideas is precisely what needs to be done.

      As I said, I don't speak for Dr. Hossenfelder; nor am I a knee-jerk acolyte for whatever she says, but I have read her argument and book and agree with her on this issue. Explaining why their arguments are bad and misleading is both brave and the right thing to do.

      Delete
  12. The cycle cannot be broken if consistent null results don't lead to the development of a new theory that is fundamentally different from the old theory - essentially a competitor theory. To the extent that there are competing theories now, they are all variations on the same underlying assumptions.

    Tweaking parameters to reset expectations has not worked. Null results have to have consequences other than the "heads we win, tails we play again" set up currently in place. A model that has consistently produced null results needs to be reconsidered in its entirety.

    A theoretical exercise of potential value would be to simply construct alternatives of the two standard models by making alternative foundational assumptions. Rather than requesting that experimentalists "go look for new stuff", maybe theorists should try to rework those foundational assumptions. You know, just to see what's there.

    Such an effort should not, however, entail an open-ended commitment to exploring every far-fetched conjecture that the fertile imaginations of theorists might entertain. Science doesn't need another string hypothesis.

    ReplyDelete
  13. The old collider-junkie in me has a hard time with this prognosis, so I certainly have a psychological bias here. And so, unsurprisingly, I'd like to see plans moving forward for some newer collider.

    I easily agree with Bee that, especially as funding is tight and is likely to remain so, it's important to fund broad experimental research in the foundations of physics. Dumping most or all of the research money into one bigger collider doesn't seem wise.

    That said, here are some arguments for continuing to work on designing a new big machine:

    * As others have mentioned, there are existing anomalies to be explored, and cross-validation with other experiments beyond the LHC is essential. There will always be anomalies, I'm sure, so this argument is somewhat flawed.

    * To maintain and improve humanity's ability to build colliders and accelerate particles, we need to maintain a kind of ecosystem for physicists and accelerator scientists. For example, research into plasma wakefield acceleration.

    * It takes maybe 20 or 30 years to progress from design discussions to a fully-operational collider and detectors, or longer if funding is lower. If we can also continue to develop neutrino experiments, astronomical facilities, precision measurements at lower energies, etc then perhaps by the time a new collider is in later stages of construction we'll have worked out a lot more theories to test there.

    * It's still too soon to put the final nail in the coffin of supersymmetry. Yes, there is certainly less promise there, but be careful not to take early experimental results too seriously.

    Take a look at neutrino experiments. There was a period of almost three decades during which every single experimental result was later retracted. But today we do have interesting results on neutrinos, and these may already be providing us with the best evidence for physics beyond the standard model. I'm really saying two things here. One is that experimental physics has actually been very successful between the 1970s and today, and the standard model has had to be adapted since its inception. The other is simply to be more patient, because experiments are difficult.

    What we most need is more funding for basic science. It's not a waste of taxpayer money -- the vast majority of that money gets dumped into tech industries, and it also helps fuel higher education. How we spend the money we have has a technical component: What's the best bang for the buck? But it also has a political component: How can we get more people to support research? Big accelerators have pros and cons there. They make the expenses highly visible, but they also raise public awareness.

  14. I wanted to throw in some history on gravitational-wave detectors here too. There are some parallels to particle accelerators.

    Early gravitational-wave detectors were Weber bars, first constructed in the 1960s even though their sensitivities were many orders of magnitude too low. These early experiments were not terribly expensive; Bell Labs even had one! But their potential for discovery was probably greatly exaggerated.

    It took 50 years (and many retracted or discredited results) to get to the point where we are today, with multiple confirmed detections and even multi-messenger astronomy. Many people argued during this time that the money spent on these projects was being wasted, and many argued that sufficient sensitivity was impossible to achieve. The argument "you just have to look" was used a lot.

    I'm not saying we should be lying to the public with false hopes of discovery. That's not a sustainable means of politicking. But I understand the temptation. The public is just so astonishingly foolish! I am suggesting that sustained, long-term investment into accelerators seems to me to be likely to pay off eventually.

  15. Dear Sabine,

    Would you approve building a "Higgs factory" collider to study the properties of the last discovered particle?

  16. US Large Hadron Collider research is funded by the US Department of Energy Office of Science and the National Science Foundation.

    This source of funding for the LHC now looks uncertain. It looks like the US may be getting out of funding basic science research. For example, the United States government has just canceled funding for the JASON advisory group.

    The Pentagon's move to cancel the JASON contract appears to be part of a larger trend of federal agencies limiting independent scientific and technical advice. As Rep. Cooper noted at yesterday's congressional hearing, the Navy also recently terminated its longstanding Naval Research Advisory Committee.

    JASON is an independent group of elite scientists which advises the United States government on matters of science and technology, mostly of a sensitive nature. The group was first created as a way to get a younger generation of scientists—that is, not the older Los Alamos and MIT Radiation Laboratory alumni—involved in advising the government. It was established in 1960 and has somewhere between 30 and 60 members. Its work first gained public notoriety as the source of the Vietnam War's McNamara Line electronic barrier. Although most of its research is military-focused, JASON also produced early work on the science of global warming and acid rain. Current unclassified research interests include health informatics, cyberwarfare, and renewable energy.

    1. Axil: I'm sure some of that is Trumpism, and some of it is scientists taking money and not telling the US Govt what they want to hear about things like global warming, income inequality and other issues.

      Within that atmosphere, I imagine some part of getting out of LHC funding is the failure to deliver on promises. The DoD (Dept of Defense) would be happy to pay $billions for new physics they could then exploit, but that doesn't look promising now.

      At least, not as promising as funding supercomputers, AI and other info tech, including a big money push on anti-hacking and info security research that has transformed some university CS departments.

      I don't know about other countries, but the USA is pretty much a bottom-line operation in seeking applications. IMO the DoD isn't much interested in theory and knowledge for the sake of knowledge; they want new toys. :-)

  17. An article in Project Syndicate by Sabine Hossenfelder, Apr 18, 2019:

    https://www.project-syndicate.org/commentary/large-hadron-collider-mainly-null-results-by-sabine-hossenfelder-2019-04


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
