
Monday, November 06, 2017

How Popper killed Particle Physics

Popper, upside-down.
Image: Wikipedia.
Popper is dead. Has been dead since 1994, to be precise. But his philosophy, that a scientific idea needs to be falsifiable, is also dead.

And luckily so, because it was utterly impractical. In practice, scientists can’t falsify theories. That’s because any theory can be amended in hindsight so that it fits new data. Don’t roll your eyes – updating your knowledge in response to new information is scientifically entirely sound procedure.

So, no, you can’t falsify theories. Never could. You could still fit planetary orbits with a quadrillion epicycles or invent a luminiferous aether which just exactly mimics special relativity. Of course no one in their right mind does that. That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period. What happens instead of falsification is that scientists transition to simpler explanations.

To be fair, I think Popper in his later years backpedaled from his early theses. But many physicists not only still believe in Popper, they also opportunistically misinterpret the original Popper.

Even in his worst moments Popper never said a theory is scientific just because it’s falsifiable. That’s Popper upside-down and clearly nonsense. Unfortunately, upside-down Popper now drives theory-development, both in cosmology and in high energy physics.

It’s not hard to come up with theories that are falsifiable but not scientific. By scientific I mean the theory has a reasonable chance of accurately describing nature. (Strictly speaking it’s not an either/or criterion until one quantifies “reasonable chance” but it will suffice for the present purpose.)

I may predict for example, that Donald Trump will be shot by an elderly lady before his first term is over. That’s compatible with present knowledge and totally falsifiable. But chances it’s correct are basically zero and that makes it a prophecy, not a scientific theory.

The idea that falsifiability is sufficient to make a theory scientific is an argument I hear frequently from amateur physicists. “But you can test it!” they insist. Then they explain how their theory reworks the quantum or what have you. And post their insights in all-caps on my timeline. Indeed, as I am writing this, a comment comes in: “A good idea need only be testable,” says Uncle Al. Sorry, Uncle, but that’s rubbish.

You’d think that scientists know better. But two years ago I sat in a talk by Professor Lisa Randall who spoke about how dark matter killed the dinosaurs. Srsly. This was when I realized the very same mistake befalls professional particle physicists. Upside-down Popper is a widely-spread malaise.

Randall, you see, has a theory for particle dark matter with some interaction that allows the dark matter to clump within galaxies and form disks similar to normal matter. Our solar system, so the idea goes, periodically passes through the dark matter disk, which then causes extinction events. Or something like that.

Frankly I can’t recall the details, but they’re not so relevant. I’m just telling you this because someone asked “Why these dark matter particles? Why this interaction?” To which Randall’s answer was (I paraphrase): I don’t know, but you can test it.

I don’t mean to pick on her specifically, it just so happens that this talk was the moment I understood what’s wrong with the argument. Falsifiability alone doesn’t make a theory scientific.

If the only argument that speaks for your idea is that it’s compatible with present data and makes a testable prediction, that’s not enough. My idea that Trump will get shot is totally compatible with all we presently know. And it does make a testable prediction. But it will not enter the annals of science, and why is that? Because you can effortlessly produce some million similar prophecies.

In the foundations of physics, compatibility with existing data is a high bar to jump, or so they want you to believe. That’s because if you cook up a new theory you first have to reproduce all achievements of the already established theories. This bar you will not jump unless you actually understand the present theories, which is why it’s safe to ignore the all-caps insights on my timeline.

But you can learn how to jump the bar. Granted, it will take you a decade. But after this you know all the contemporary techniques to mass-produce “theories” that are compatible with the established theories and make eternally amendable predictions for future experiments. In my upcoming book, I refer to these techniques as “the hidden rules of physics.”

These hidden rules tell you how to add particles to the standard model and then make it difficult to measure them, or add fields to general relativity and then explain why we can’t see them, and so on. Once you know how to do that, you’ll jump the bar every time. All you have to do then is twiddle the details so that your predictions are just about to become measurable in the next, say, 5 years. And if the predictions don’t work out, you’ll fiddle again.

And that’s what most theorists and phenomenologists in high energy physics make a living from today.

There are so many of these made-up theories now that the chances any one of them is correct are basically zero. There are infinitely many “hidden sectors” of particles and fields that you can invent and then couple so lightly that you can’t measure them or make them so heavy that you need a larger collider to produce them. The quality criteria are incredibly low, getting lower by the day. It’s a race to the bottom. And the bottom might be at asymptotically minus infinity.

This overproduction of worthless predictions is the theoreticians’ version of p-value hacking. To get away with it, you just never tell anyone how many models you tried that didn’t work as desired. You fumble things together until everything looks nice and then the community will approve. It’ll get published. You can give talks about it. That’s because you have met the current quality standard.  You see this happen both in particle physics and in cosmology and, more recently, also in quantum gravity.

This nonsense has been going on for so long, no one sees anything wrong with it. And note how very similar this is to the dismal situation in psychology and the other life sciences, where abusing statistics had become so common it was just normal practice. How long will it take for theoretical physicists to admit they have problems too?

Some of you may recall the book by philosopher Richard Dawid who claimed that the absence of alternatives speaks for string theory. This argument is wrong of course. To begin with there are alternatives to string theory, it’s just that Richard conveniently doesn’t discuss them. But what’s more important is that there could be many alternatives that we do not know of. Richard bases his arguments on Bayesian reasoning and in this case the unknown number of unknown alternatives renders his no-alternative argument unusable.

But a variant of this argument illuminates what speaks against, rather than for, a theory. Let me call it the “Too Many Alternatives Argument.”

In this argument you don’t want to show that the probability for one particular theory is large, but that the probability for any particular theory is small. You can do this even though you still don’t know the total number of alternatives because you know there are at least as many alternatives as the ones that were published. This probabilistic estimate will tell you that the more alternatives have been found, the smaller the chances that any one of them is correct.

Really you don’t need Bayesian mysticism to see the logic, but it makes it sound more sciency. The point is that the easier it is to come up with predictions the lower their predictive value.
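To make that concrete, here is a minimal sketch, under two assumptions I am making explicit: the published alternatives are mutually exclusive, and at most one of them can be correct. If there is no reason to favor any particular one, the chance that a given model is the right one is at most one over the number of published alternatives – and the true number of alternatives can only be larger than the published count.

    # Minimal sketch of the "Too Many Alternatives" estimate.
    # Assumptions (mine, for illustration): the published models are mutually
    # exclusive, at most one of them is correct, and none is favored a priori.
    # Then P(any particular model is correct) <= 1/N, where N counts only the
    # published alternatives; the true number of alternatives is at least N.

    def bound_for_particular_model(n_published):
        return 1.0 / n_published

    for n in (2, 10, 100, 1000):
        print(n, "published alternatives -> P(this one) <=", bound_for_particular_model(n))

The more alternatives get published, the larger N and the smaller the bound – which is all the argument needs.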

Duh, you say. I hear you. How come particle physicists think this is good scientific practice? It’s because of upside-down Popper! They make falsifiable predictions – and they believe that’s enough.

Yes, I know. I’m well on the way to making myself the most-hated person in high energy physics. It’s no fun. But look, even psychologists have addressed their problems by introducing better quality criteria. If they can do it, so can we.

At least I hope we can.

129 comments:

  1. It has to be "falsifiability in context" (the subject being addressed), but it remains an important principle. Back in Copernicus' day Ptolemaic epicycles could not be falsified, but they can be today.

  2. I have always enjoyed reading your postings, as I do this one. So it is puzzling as to why you chose a particularly political and distasteful analogy, when SO many other options are within your extensive skill set. It has diminished the strength of your article. Thanks for the many years of clear writing on science.

    1. I liked the Trump analogy. It illustrated her point perfectly.

    2. @Bob, I didn't read it as political due to context. The discussion was on the justificatory framework behind a testable idea/prediction, not political ideology. Distaste and politics are more a reflection of the reader than of the writer in this case.

  3. bee:

    falsifiability is a necessary, but not sufficient, requirement for a scientific theory. actually, it might be better to say that testability, rather than falsifiability, is a minimal requirement. there are additional criteria that can be used, such as simplicity (either mathematical or conceptual), compatibility with other theories, etc.
    btw - trump getting shot is more of a wish than a prediction. and as the saying goes "be careful what you wish for". If Trump is out (one way or another) we'll have Pence to deal with and he could actually be even worse (https://www.newyorker.com/magazine/2017/10/23/the-danger-of-president-pence)

    richard

  4. I would love to hear your thoughts on Kuhn in this context.

  5. This discussion reminds me of the question of what makes a good mathematical conjecture. It's not hard at all to come up with a statement that nobody has any idea how to prove or disprove, and therefore that is (i) consistent with the evidence we have so far and (ii) testable (in the sense that somebody might one day come up with a proof or disproof). But that doesn't make it a good conjecture. A good conjecture is one that makes surprising and testable predictions. For example, you might have some conjecture about a mathematical structure that would imply a remarkable formula for some quantity, and then discover that that formula is correct for the first few values of n.

    To be slightly more precise, a conjecture is good if it has unexpected consequences that you can test. The reason that's better than having entirely expected consequences is that if the consequences are unexpected, then there's an interesting phenomenon to try to uncover and explain, whereas if the consequences are what you'd expect, then it's difficult to get your hands on anything.

    Maybe I should say slightly more: a conjecture is good if it has already made surprising predictions that have been confirmed, and has the capacity to make more such predictions.

  6. bob,

    Quite possibly just so everyone could go on and complain it's political and distasteful...

  7. You say: "That's because any theory can be amended in hindsight so that it fits new data."

    That means there was no reason to change from Newtonian Physics to Einstein's relativity in 1919; Newtonian Physics merely needed updating to fit the new data.

    And as I have pointed out to people: Newtonian Physics was updated in 1758 by Boscovich's theory but that gets omitted from Physics degree courses.

    1. Updated? Was Boscovich's theory of points of force confirmed? How did it lend clarity to the formulation of equations of motion?

  8. Roger,

    I suggest you also read the paragraph after that.

    1. Sabine,
      That only creates the problem of what you mean by "simpler"; as far as I am concerned, Newtonian physics (even when updated) is "simpler", thus no need to transition from Newton to Einstein!

    2. No, what you say is wrong. As I have said over and over and over again, "simpler" is a statement about a theory's ability to accurately explain data. It is meaningless to praise a theory itself for being "simple" because there's no merit in having a "simple" theory if it does not describe Nature. It is very, very difficult to explain currently available data with Newtonian physics alone, as I am sure you have heard.

  9. Greetings friends & colleagues!

    Epic article!

    In all these discussions on what science is and ought to be, coupled with the observation of today's fallen state of science, the most remarkable thing is the absence of the words and thoughts of the likes of Einstein, Newton, Feynman, Faraday, Maxwell, Born, Bohr, Schrodinger et al.

    While Einstein, Planck, Newton, Feynman, Maxwell, Born, Bohr, Schrodinger et al. all advanced science, neither Lisa Randall nor Lee Smolin ever have. And thus before listening to Lisa and Lee, should we not turn towards the Greats who advanced science?

    I mean what if a multimillionaire multiverse maniac receives tens of millions of dollars over numerous decades to "lead" and shape science? Will they not recreate science in their own image, as a field of click-baiting, TV-preaching, grant-seeking, handwaving politics, where they remain perched atop, parceling out peanuts (buying silence of possible critics) from on high?

    Let us celebrate the words of Newton and Einstein which the multimillionaire multiverse maniac denies!

    Albert Einstein: The development of Western Science is based on two great achievements, the invention of the formal logical system (in Euclidean geometry) by the Greek philosophers, and the discovery of the possibility to find out causal relationships by systematic experiment (Renaissance).

    Not only does String Theory violate the two pillars of Western Science—formal logic and empirical observations—but it also violates Newton’s “Rules of Reasoning,” as set forth in his marvelous Principia:

    Rule 1: We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances.

    Rule 2: Therefore to the same natural effects we must, as far as possible, assign the same causes.

    Rule 3: The qualities of bodies, which admit neither intensification nor remission of degrees, and which are found to belong to all bodies within the reach of our experiments, are to be esteemed the universal qualities of all bodies whatsoever.

    Rule 4: In experimental philosophy we are to look upon propositions inferred by general induction from phenomena as accurately or very nearly true, notwithstanding any contrary hypothesis that may be imagined, till such time as other phenomena occur, by which they may either be made more accurate, or liable to exceptions.

    Today’s scientists need to study the thoughts and philosophies of the world’s greatest scientists.

    Einstein: But before mankind could be ripe for a science which takes in the whole of reality, a second fundamental truth was needed, which only became common property among philosophers with the advent of Kepler and Galileo. Pure logical thinking cannot yield us any knowledge of the empirical world; all knowledge of reality starts from experience and ends in it. Propositions arrived at by purely logical means are completely empty as regards reality. Because Galileo saw this, and particularly because he drummed it into the scientific world, he is the father of modern physics—indeed, of modern science altogether. (Albert Einstein, Ideas and Opinions)

    Schrodinger: . . .in the end exact science should aim at nothing more than the description of what can really be observed.

    Albert Einstein: Time and again the passion for understanding has led to the illusion that man is able to comprehend the objective world rationally by pure thought without any empirical foundations — in short, by metaphysics (string theory/multiverse mania).

  10. Sorry. I think this post was a waste of everyone's time.

  11. The problem with particle theories of dark matter is that none of them have ever had any empirical support, which is a whole different category of error. Dark matter as black holes is finally getting a decent number of peer reviewed publications in support, and has actual observational evidence consistent with it.

  12. So I am trying to understand your point.

    >>OF COURSE<< most theoretical proposals are wrong. And >>OF COURSE<< the theories have knobs you can twiddle to "save" the theory until the next measurement.

    This is well known and embraced by the community. But how is that wrong? As long as the theory is in agreement with existing data and not falsified by anything, it remains a possible explanation.

    Now it is true that a lot of these hidden sector theories are implausible, as are extra dimensions and lots of theories - many of which you understand better than I do. And they are a bit outlandish. But what metric would you use to kill them other than suffocating them with data, one by one?

    And yes, Lisa's idea about self-interacting dark matter and the death of T-Rex sounds unsettlingly similar to many of the emails we both get that combine the idea of quark colors as a deep link to physics and a justification of using a color identification for personalities and corporate hiring.

    On the other hand, self-interacting dark matter, while unsupported by data, has an intrinsic plausibility, given the rich litany of structures and forces observed in Standard Model matter. (I'm not claiming I believe in these ideas, but given that we haven't ruled them out, how can you be sure?)

    I'm very much looking forward to receiving your book.

  13. 1. Wouldn't it be nice if an old lady verified your idea and did send Trump to Popper. Sooner please.
    2. Bob, epicycles have not been falsified, if I remember correctly. You just have to have enough of them to make them fit the data. Indeed, because they don't rely on gravity, they can fit the data better than Newton, because you don't have to bend space-time! Epicycles were dumped because we like to attribute cause to effect, and gravity does that nicely - except for accounting for the odd rotation of galaxies - but that only affects all of the galaxies and everything in them. Some say that we dumped epicycles because of Occam's razor, but have you studied General Relativity? In that context epicycles are much simpler, so maybe we should listen to Ptolemy and cut Einstein loose! Ha!

  14. Oops. When I said 'Bob' I meant 'Matthew'. All these epicycles are making me dizzy. Sorry.

  15. "I may predict for example, that Donald Trump will be shot by an elderly lady before his first term is over. That’s compatible with present knowledge and totally falsifiable. But chances it’s correct are basically zero and that makes it a prophecy, not a scientific theory."

    5 sitting US presidents have been shot (4 dying): https://en.wikipedia.org/wiki/List_of_United_States_presidential_assassination_attempts_and_plots out of 44 presidents. Trump is probably going to be a one-termer, so we'll ignore a second term. He is at the tail of his first year (arguably, his first year just finished), so he is 1/4 of the way through, for an assassination risk of 1/4 * 5/44. Assassins are usually male, as is violence in general; women are much less likely to commit murders, with estimates usually being something like 10% of murderers (despite being ~51% of the population), so 1/5 there. Elderly people are also less prone to violence; however, as Las Vegas reminds us, even elderly people still commit crimes and violence. 'Elderly' is unspecified here, but reasonable values would also give us something around 1/5. So it all would be: 1/4 * 5/44 * 1/5 * 1/5 = 0.1% (or perhaps 1 in 1000)

    0.1% is very different from 0. You would not get into a car if it had a 0.1% risk of killing you, and it's similar to the death rate for even the safest invasive surgeries.
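
    Written out, the same back-of-the-envelope product (a sketch only; the variable names are merely illustrative and the fractions are the rough estimates above, not measured rates):

        # Reproduces the arithmetic above; the inputs are rough estimates, not data.
        p_shot_in_office   = 5 / 44   # sitting US presidents shot, out of 44
        fraction_of_term   = 1 / 4    # portion of the first term considered
        p_female_assailant = 1 / 5    # rough share of murders committed by women
        p_elderly          = 1 / 5    # rough share attributable to the elderly

        p = fraction_of_term * p_shot_in_office * p_female_assailant * p_elderly
        print(round(100 * p, 2), "%")  # about 0.11%, i.e. roughly 1 in 1000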

  16. Falsifiability is problematic; a vestige of empiricism. What if there was a wholly rationally derived theory that began with a necessary principle, from which a universal model was implied and derived, that did not seem to match the world as we measure it? Given the derivation, the theory can't be wrong. By Descartes's Method of Doubt, the theory is epistemologically and ontologically superior to theory derived from experiments, because they are affected by Kant's 'lens of the mind'. The investigator is then bound as a matter of intellectual honesty to accept that in some way we are misinterpreting the data, or perhaps there is some hidden aspect.
    I mention this because I am working on such a theory, and it matches with the Bekenstein-Hawking information bound, and matches quite a few important criteria, for example explaining why the entropy of the universe began with a very low value. But protons or quarks seem to begin with very low mass that then evolves, so that can't be right, can it? It gets confusing, but 'has to be' correct. Damn you Popper! (Writing this makes me realise that an evolving mass, but steady charge, would change the way atoms work, and this may be similar to conditions in the early universe - back to the old drawing board - thank you Popper!).

  17. I think you misunderstand the HEP landscape a bit and the general feeling around these "let me find a non-excluded point in phase space" theories. No one likes this game.

    Regarding the text, you should check BAH! Fest.

  18. Randall's dark disk is meant to explain the disk of dwarf galaxies seen around Andromeda and elsewhere. There is a vertical component to the sun's motion around the galaxy, and when it passes through this very narrow band of dark matter, objects in the Oort cloud would be perturbed and some would fall into the inner solar system. This type of dark matter would have to be self-interacting, and Randall and collaborators are trying to constrain what type of interactions are consistent with the scenario and other data.

  19. "the easier it is to come up with predictions the lower their predictive value." That's a really cool idea. I'll have to chew on it for a little while to understand exactly what it means, but I'm into it.

    @gowers: that's an interesting criterion, too. We might say that a good conjecture today needs to have at least a slightly interesting past and a plausibly interesting future.

  20. So if falsifiability is not good enough then, in your opinion, what would make a good scientific theory? I feel that you implied that Occam's razor should come into play at some point. Would this be enough?

  21. Sabine,
    What in the next paragraph is supposed to be relevant to the point I was making? You say “you can’t falsify theories” – which, in the context of my point, would mean that one cannot falsify Newtonian Physics. And when you say “scientists transition to simpler explanations,” maybe you are thinking of the transition from Newtonian Physics to Einstein's relativity as a transition to something simpler; I and many others would disagree that relativity is simpler than Newtonian Physics. If you are thinking in those terms, how do you show when a theory is "simpler"?

  22. Very interesting piece, and I respect your forthrightness in stating your view that falsifiability is not actually a criterion for modern physics, even though that is of course the mythology. However, does anyone seriously state that testability/falsifiability is the only criterion of a well-crafted hypothesis? I don't know of any examples personally. Also, if simplicity is your substitute for falsifiability, what are your principles for judging simplicity? Judging simplicity is in my view not that simple. For example, while Lorentz's earlier version of relativity yields the same predictions as Special Relativity (because they both use the Lorentz transformations) it is widely believed today that SR is simpler and thus to be preferred over Lorentzian relativity. But this is arguably not the case b/c of all of the issues that SR leads to in terms of explaining many obvious features of empirical time, such as the arrow of time, our experience of a present moment, etc. (among other difficulties of SR). That's just one example, but when we get into more complex theories like quantum gravity what would you suggest as our guideposts for judging simplicity? Last, we can look to Kuhn for counterexamples to your thesis in his reconstruction of how paradigms in physics have changed. He wouldn't agree that simplicity has been much of a factor I believe.

  23. What do you suggest instead of testability to raise the bar? Some kind of censorship, where some new ideas are allowed and some are not? Who is going to decide that? It is much better to have the current "academic democracy" or diversity. If someone has a crazy new idea that is consistent with all present data and interesting enough, it deserves the right to be out for discussion.

  24. I'm in agreement with Richard/naivetheorist. Non-falsifiability makes a theory non-scientific. I'd always interpreted Popper's intent as that, nothing more, nothing less. Then the "Popper upside-down" issue you're referring to boils down to theorists falling prey to a version of the "A implies B, therefore B implies A" fallacy, if they're claiming a theory is scientific because it's falsifiable.

    Falsification is about consistency (or lack thereof) with experimental evidence. I'm a fan of falsifiability, as long as it's viewed as just one of several criteria (the existence of a mechanism, simplicity as per Occam's razor,...) and we always remember that a) our current best theories are just the ones that fit these other criteria and haven't been falsified _yet_, and b) we can only ever falsify a theory (or fail to falsify it) to the sensitivity of our experiments, and so theories can at best be declared "probably falsified" or "not probably falsified".

    Sundance

  25. If protons decay, the half-life is greater than 10^32 years. That’s 100 billion trillion times the current age of the universe. From a pure physics point of view (not cosmology, of course), I would say that the proton decay hypothesis has been refuted.

    The speed of neutrinos as demonstrated by SN 1987A is very close to c. This is an interesting problem and maybe an important clue that particle physics has missed an important property beyond the standard model. At least physicists can occasionally actually detect neutrinos, although not normally at the LHC. Have we ever detected slow neutrinos, where v << c, which should be theoretically possible if they really have mass?

    I hope physics gets back some of its skepticism.

  26. Well, I'm certainly looking forward to reading your book . . . Trump's not going to get shot, he's going to get beheaded for treason . . .

  27. This says so concisely what I've been trying to explain to people for ages - and it applies to all of us, not only cranks!

    "In the foundations of physics, compatibility with existing data is a high bar to jump, or so they want you to believe. That’s because if you cook up a new theory you first have to reproduce all achievements of the already established theories. This bar you will not jump unless you actually understand the present theories, which is why it’s safe to ignore the all-caps insights on my timeline."

  28. I like your thinking. String theory is suspect since it is too easy to come up with string theories that sort of work and make testable predictions, but that are mutually exclusive. Only one of them can be true, so the odds of any one being the right one is small. The whole multiverse thing seems to be even more of a cop out. It tries to let all the string theories be true, just not all at once.

    I'm looking forward to your book, and I'm looking forward to the ruckus that seems to be in the offing.

  29. An interesting critique of Popper (and one that I absolutely agree with.) Have you read Lakatos? I was reminded of the "soft shell" of his theory with your discussion of revision.

  30. Nice post. I am reminded of the people who post every possible prediction on Twitter for, e.g., the outcome of the 2020 presidential election, then delete all but the one that turned out to be correct after the event and try to convince people that they were psychic.

  31. "That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period."

    Well, that succinctly describes particle physics and cosmology alright. Both disciplines are an unscientific compendium of mathematical fantasias that, in the aggregate, bear only a glancing resemblance to physical reality. And it is that lack of resemblance that makes them both so horrible in scientific terms. Observed reality does not contain the features that are prominent in the standard models.

    Of course, to a mathematicist (like Max Tegmark for instance), this does not present a problem because the mathematics is thought to underlie and be determinate of reality. If their mathematical models require fractionally charged particles and dark matter, then such must exist. The absence of evidence is not evidence of absence, as the popular sophistry has it. Reality is deficient, not the models which are, by definition, always correct or at least always correctable.

    While Tegmark may be an extreme example, it could be argued that mathematicism is the dominant paradigm in the scientific academy and has been for the better part of the past century. Empirical science no longer constitutes an open ended investigation into the nature of physical reality but is now a mere adjunct of theory, dispatched to remote realms in search of confirmation, no matter how threadbare, for the preferred standard models. For evidence of this look no further than the LHC and LIGO, where grandiose claims, of models triumphant, are spun from minuscule evidence that has been lovingly massaged from enormous piles of carefully pawed over data. Science has become a lab assistant in the department that bears its name.

    Sabine, you and Lee Smolin are right to be uneasy with the current situation and it is undoubtedly brave of you to speak up, especially since it has rendered your employment situation difficult. Many Worlds, Parallel Universes and similar vapid theoretical concepts are the direct offspring of mathematicism. But mathematicism is essentially just a kind of modern day secular mysticism. Its objects of concern lie in the supernatural realm of the human imagination, far beyond the reach of proper scientific inquiry.

    May I suggest that to defeat mathematicism, if that is your purpose, you need only rise to the defense of empiricism and logic as the foundational elements of science. Math will consequently reacquire its proper relevance as a branch of logic, and an essential modeling tool alongside qualitative analysis. Mathematicism will then be free to slink off to the philosophy department, if they'll have it.

    Best of luck. Your blog is a pleasure to read.

  32. Another bit of dogma that I was taught in my Philosophy of Science course, and have come to doubt, is the deductive (Good) / inductive (Bad) distinction. Famously, of course, Michelson and Morley weren't trying to disprove the ether, but measure it. Or Rutherford, on the famous nucleus-scattering experiment: "[Geiger reported] 'We have been able to get some of the α-particles coming backwards...' It was quite the most incredible event that has ever happened to me in my life. It was almost as incredible as if you fired a 15-inch shell at a piece of tissue paper and it came back and hit you" (1938, p. 68). Or Planck's Law, where he first "discovered the empirically fitting function", then "constructed a physical derivation of this law". It's possible to argue that, well, it doesn't matter whether the experiments support your model or reject it. But in these cases (and many others), the tests were done, as it were, for completely misguided reasons. And that's leaving out fields like astronomy or ethnology, which seem to be almost completely inductive.

  33. The Dinosaur extinction already has a good explanation which is the consensus theory. Also I don't see how a dark matter collision could kill most large creatures without disrupting the solar system in some identifiable way. I think her idea fails in a way similar to phlogiston - there is a better theory with more evidence. (Although she is much smarter than I am so my opinion is not worth much.)

    Other than that usual scientific standard, I can't think of a good, non-subjective way to weed out bad types of research in advance. Just as in biological evolution, I think there will be thousands (or more) of bad ideas for every good idea, and the good ideas are not always identifiable until they prove out.

    Perhaps the problem is that it has become harder and harder to get new data. Would we have had Relativity without the Michelson–Morley experiment? Maybe, but would it have been accepted? Hubble, other satellites, LHC, and LIGO - those are the kinds of science we most need.



  34. I agree with richard the naivetheorist. Falsifiability alone cannot make a theory (or hypothesis) scientific. But if a theory is not falsifiable, using your example, one cannot even amend that theory after a test that is not consistent with its prediction. An unfalsifiable theory cannot contribute to scientific progress. I am sure you would agree with this.

    I believe that if you abandoned the falsifiability criterion, you probably would need to replace it with something else to address the demarcation problem. Do you agree, Sabine?

  35. Huh, I read Popper and others as an angsty freshman. They sort of all made some sense, but mostly didn't seem to have any idea about how science is/was done. If pushed I'd be more an Inductivist (Inductivism from wiki.). We gather data and make ideas, sometimes a smart guy/gal comes along and says, "this way of thinking about the data makes more sense." And everyone (eventually) agrees and we use that idea. (Because it works better.)

    I have a pretty mundane view of the scientific method. It's how I troubleshoot my tractor, and debug my latest instrument. I make guesses as to what is wrong, I make changes or tests based on the guess and see if that's right. Debugging a new instrument or circuit is the most fun, because there's no guarantee that it will work. It could be that there is a flaw in my thinking about the world, and not the execution/fabrication.
    The problem with particle physics and such is there is not enough data. Or not enough weird data. (to state the obvious) (Have the neutrino masses been explained/measured?)

  36. Goldbach's Conjecture is tantalizingly simple, but makes no predictions that are surprising.

  37. We need a catchy name for this phenomenon to spread the discussion wider. I propose "The Reducibility Crisis in Theoretical Physics", to play off the replicability crisis and extend your p-hacking analogy.

  38. Don,

    OF COURSE. That's what I wrote. I didn't say it's wrong and I don't know how you came to think that's what I said. What I said was that just being falsifiable doesn't make a theory scientific.

  39. Unknown,

    I didn't say people like the game, and I don't know why you think so. I don't understand the rest of your comment.

  40. Roger,

    If you ask anyone who actually works on gravity they'll all agree that general relativity is simpler. Simpler of course not to describe the particular case of an apple falling off a tree but simpler - quantifiably simpler - to describe the vast number of gravitational phenomena that we have observed. The amount of fudging and fiddling you'd have to do with post-post-post Newtonian approximations to beat General Relativity is evidently (I dare to say) not something anyone wants to cope with. Best,

    B.

  41. Tam,

    No, but that's what they act like, to first approximation. All I'm saying is it's a weak quality criterion.

  42. Kaleberg,

    Wow, that is some misunderstanding of string theory. You should rethink this.

  43. Brian,

    I am not criticizing Popper. I'm saying Popper alone isn't enough.

  44. Unknown,

    I previously called it the "overproduction crisis".

  45. Love this post--a nice companion to your hit on inflation a few weeks ago. You might really enjoy this guy's insights about revolutionary ideas in science: http://amasci.com/weird/vindac.html

  46. Interesting _view of things_ from the perspective of High Energy Physics.

    I think your mentions of Popper and scientificness ("scientificality"?) may be distracting the reader from possibly more interesting questions. At times I'd say that physicists have a tendency to take bigger shortcuts than usual in terrains outside of physics/science, so the apparent problem with falsifiability you raise would be more with misinterpretation of Popper than with Popper's argument itself, case in point, Randall. Idem with scientificness. If we'd push this argument, namely, "_testability alone doesn't make it a science_", it seems to me we'd be leaving out of Science many fields pursued in academia as such (Biology, Psychology, Archeology,...) After all, isn't science a sociological construct? In this view, Science would be what we agree upon that science is.

    For me, the interesting part comes after the middle, when you come to the current situation in High Energy Physics. It'd seem you then stop just shy of asking whether this would signal the need for new guiding principles, both methodological as well as physical.

    Is that what you have in mind? or am I reading too much into all of it?

  47. Ivan,

    First things first, there's nothing democratic about science. More importantly, I just want people to face reality. This nonsense is caused by pressure to publish and everyone knows it. I think it's time they take a stand against it. I'm not going to tell anyone what to do. To begin with, no one would listen to me anyway, so I might as well save my breath. But also, it's incompatible with my faith in expert knowledge. I'm sure if the will was there they'd be able to come up with some sensible quality criteria.

  48. Sundance/Unknown/naivetheorist,

    Non-falsifiability "in principle" should be killed by Occam's razor - you don't need Popper for that. Non-falsifiability in practice doesn't make a theory non-scientific, because there's nothing wrong with a theory that has free parameters. That's perfectly normal and justified theory-development.

    I really think that Popper has done more harm than good to theorists by making it exceedingly unpopular to work on theoretical problems that will not lead to predictions in the short term. There are many hard problems lingering in the math of qft which pretty much nobody ever works on, like Haag's theorem or the non-convergence of the perturbative expansion. I think these are issues that could actually move us forward if resolved. But that would take a long time and it's therefore basically impossible to work on this in the present paper-production craze.

    (Just to prevent misunderstandings, this isn't to say that I want to work on this. It's really not my terrain. I'm very much a GR/QG person. These are just two examples that spring to my mind.)

    Best,

    B.

  49. @Sabine

    The crisis indeed has several symptoms, and practical causes. But underneath it all, I think a change in our educational system is required in the long run, both in high schools and universities.

    We could start by analysing how Finland is developing a totally renewed way of educating students (and teachers!), with tangible results. Google and you will find dozens of articles on it.

    And I'm very curious to see if this overhaul will also have effects on physics research from Finland, say in 20 years from now.

    Best, Koenraad

  50. I read this post yesterday and was too boggled by it to comment. I'm still confused today, but allowing it to sink in.

    I think to be fair to Popper one must situate him historically. The falsifiability criterion was a response to the Logical Positivists. They argued that a proposition can only be true if it is verified. And Popper countered with the now famous black swan argument. Philosophical truth cannot be sought by verification; we can only show that something is not true by finding counterexamples. There may always be a black swan waiting to come along and falsify something believed to be true.

    Of course Popper did not allow for retrospectively changing one's prediction to fit new data. And maybe in retrospect, that was a mistake. And science doesn't really seek the truth, IMO it seeks accuracy of explanation and prediction (some scientists believe that this amounts to truth, but naive realism is another story).

    I think I knew that Popper was at least incomplete because of Higgs. The LHC was not made to falsify the predictions of Peter Higgs. It was made to "search for the Higgs boson". To verify the prediction. A lot of scientists are apparently still logical positivists.

    Thanks for making me *think*!

    BTW the field in which this process has the largest impact is not physics, but economics. The standard economic models constantly fail to predict the real world, but are tweaked to fit the data retrospectively. Economists believe that if they can do this then they understand what is going on. They fail to predict events like the deepest and longest recession in living memory, but keep their jobs anyway. Because their models can be endlessly tweaked.

  51. Sean Carroll and others have argued that we should give up falsifiability as an important criterion in science. I disagree. Any scientific theory has to be falsifiable in principle, pretty much by definition. Some things might not be falsifiable in the short term, but that is no reason not to work on them. But one shouldn't equate these with things which are not falsifiable, even in principle.

    I think Kuhn has done much more harm than Popper. :-|

  52. Sabine...

    I guess then I am unclear on what the metric is whereby a theory becomes scientific.

    Falsifiability is one criterion. As you say, Lisa Randall's idea passes that criterion. But what criterion is it missing? You said " By scientific I mean the theory has a reasonable chance of accurately describing nature."

    So what metric are you using to relegate her idea to the "no reasonable chance" category? I'll grant you that her idea also triggered my "you gotta be kidding me" detector, but since I don't know what the next advance in science will be, how can I reject her hypothesis?

  53. Don,

    That's what my "Too many alternatives argument" was supposed to illuminate. The more models you can think up, the lower the predictive power of any one of them. I don't think that's a particularly deep insight.

    As to what's a better criterion. Excellent question of course to which I have no answer. Even if I had I don't think it would matter because what do I know? This is a question particle physicists must answer for themselves. Better quality criteria are necessary, but which?

    As to Lisa, I think you misunderstood that. I didn't say and didn't mean to say that her idea (which I didn't even look at in detail) has no reasonable chance of being correct. I merely said that her reply to the question I mentioned didn't explain why that would be so. Maybe there is a better reason.

    Best,

    B.

  54. It helps to think through a practical example. How would one falsify general relativity?

    Imagine, for example, that we discovered evidence against general relativity. Just spitballing here, but just imagine we discovered that instead of seeing distant objects pulled toward each other by gravity, that they were actually *accelerating* away from each other? Crazy right? So what would scientists do?

    A) Consider relativity falsified?

    or

    B) Explain away the exception, positing a previously unknown force called "dark energy"?

    The point being, there is no clear criterion for falsifiability, because you can always posit an explanation for the result. This isn't bad science. New science is often found in the exceptions.

  55. I think if a hypothesis is falsifiable in theory but not in practice, because we don't yet have the technology, that's not ideal but it's fair enough, it's still scientific. We need to distinguish those hypotheses from the hypotheses which are not even falsifiable in theory - like the multiverse. That's not scientific.

  56. First, it has to be noted that Popper's falsifiability condition should be put in historical context. Popper was mainly in discussion with the Vienna Circle philosophy of science, and with Wittgenstein, and he focused mainly on a critique of scientific positivism. Positivism says that the only thing you need for scientific research is facts; everything else should be treated as mere decoration. Popper's criticism pointed out that a book of facts is not science, and that various theoretical and abstract elements of a theory, like the wavefunction for quantum systems, are unavoidable. And instead of focusing on agreement with facts – which was so trivially achievable in the 19th century (aether), in the 2nd century BC (epicycles), and today (as in "This nonsense has been going on for so long, no one sees anything wrong with it") – he proposed additional criteria like falsification.
    So Popper, as stupid as it may appear, was not so stupid as it may seem at first.
    It's just that today we do not remember how prominent positivism once was in physics....

  57. I think this is more a reflection of the usage of the term "theory" than anything else. I'd say usage of the word theory has changed a bit over the last 50 years or so.

    I don't think there's many scientists that disagree here that the SU(5) GUT theory is a falsifiable theory. And indeed the SU(5) GUT was falsified.

    But if theory is supposed to mean something much looser, like "framework" or "approach" (think string theory, inflation theory, etc.), then a theory is not necessarily falsifiable, simply because a framework or approach does not make clear, unambiguous empirical predictions that can be tested.

  58. I always enjoy reading your blog. Seeing the dialogue between you and Don I was disappointed to learn you didn’t have a better criterion. I anticipated reading one deeper in and none materialized. You did note in a reply to Ivan the problem is exacerbated by a pressure to publish. Had that been prominent in the article it would have been much more constructive, and I agree pressure to publish is likely a significant cause for the “overproduction of worthless predictions”.

  59. I guess I am confused. I don't think anyone (at least now) is claiming that just being falsifiable is a sufficient condition for being scientific, simply that it is a necessary condition.

  60. Roger: Most new theories are only later found to be "simpler".

  61. Sabine: if some field of physics is not falsifiable, it just ain't science. End of story. The fact that something falsifiable might not be science doesn't change that. But I agree that if people are twiddling and fiddling that ain't science either. Especially if the end result is hyping some model with "discoveries" that are inferred rather than observed, protecting it with propaganda and censorship, and treating the public like fools. Which is perhaps why physics funding has been struggling of late.

  62. Sorry, but in addition to taking the contrapositive, you're making 'amending the theory' do a lot of heavy lifting here. A specific example (yours): you claim that a theory of the aether that incorporated the Lorentz-Fitzgerald contraction would be the 'same' theory, albeit 'amended'. I'd say the original theory had been falsified and replaced with a new one, albeit one still employing the concept of aether.

    Shorter: until you can explain, precisely, what is the difference between a 'new' theory and an old one that has merely been 'amended', there is not a lot of there, there.

  63. Without a clear and grounded picture/conception/reconstruction of a theory (its components, structures, functions and properties), the debates about whether something called "a theory" is scientific or not will be endless.
    There are many more or less partial conceptions/reconstructions of a theory in the modern philosophy of science: conceptual, standard/propositional/sentential, structuralist, semantic, instrumentalist/operationalist, problem-solving/erotetic, structure-nominative etc. See the overview of some of these reconstructions: Burgin, Kuznetsov, Scientific Problems and Questions from a Logical Point of View // Synthese, 1994, 100,1: 1-28. DOI 10.1007/BF01063918
    Each of the reconstructions mentioned has proposed its own criteria of scientificality.
    It seems that Sir Karl had an oversimplified and, ironically, very influential (among scientists and philosophers of science) conception of a theory. It may be described as an informal and incomplete reduction of the standard reconstruction. According to it, a scientific theory is akin to an empirical proposition (or a coherent collection of such propositions) that can simply be verified or falsified by a single observation.

  64. As Scott Aaronson put it very nicely:
    You can't say your theory about how many angels dance on a pinhead is scientific because God might show up on a cloud one day and say you are wrong. :)

  65. Falsifiability cannot be a scientific theory itself, because if it were, there would have to be a scientific theory which is not falsifiable, which would mean that scientific theories do not have to be falsifiable.

    To have any validity at all, falsifiability must stand entirely outside of science, where it would either have to be an isolated and unimpeachable truth, or part of a system of thought which is prior to all science and can never be interrogated by science, and yet which all science depends upon for any claim of truth.

  66. What's a better criterion? Isn't that data? Science says we have to judge our ideas against the real world. Making up our 'own' criteria is silly IMHO. No insult intended.

  67. Louis,

    Sorry to disappoint, but I prefer to be consistent with myself. You see, I think that the best we can do in science is to let experts freely decide what to do. It's pretty much an argument from deregulation, necessary to make optimization efficient. But in the present circumstances researchers are not free to decide what to do.

    This isn't only because of publication pressure, it's more importantly because of self-created peer pressure and unaddressed cognitive biases. It's a self-made problem. Now, if I were to tell anyone what to do, that wouldn't be consistent with my conviction that the experts know best themselves. Therefore what we really should be doing is remove any incentives for people to work on what everybody works on already. (I have a rather detailed list with practical recommendations in the appendix of the book.)

    Let me add that I'm not just guessing that many researchers would change to a different topic if they could, I actually did a survey on this. Though it was some years ago, it turns out that the fraction of physicists who would change their research topic if only they could is 1/3 to 1/2, depending on the field (it's the highest in cond-mat/hep and the lowest in mathematical physics).

    Best,

    B.

  68. George,

    Your comment demonstrates a serious misunderstanding about how science works in practice. Most of the scientific enterprise is dedicated to identifying "good ideas". Every field has its own quality criteria for that, though they are rarely written down. I have previously referred to this as "hypothesis pre-selection" and it's pretty much what the whole "ivory tower" is all about. It's to keep the crap out, to put it bluntly.

    I know that Popper was after an axiomatic demarcation of pseudo-science from real science but that's very 19th century. Today we speak of emergent community ethics and being a child of the 20th century, that's what makes sense to me. The community creates their own demarcation. This means the responsibility to come up with quality criteria for what constitutes good research lies within the communities.

    I just think most scientists don't understand that.

    Best,

    B.

  69. Unfortunately Sabine, the ivory tower is keeping the crap in.

  70. "Imagine, for example, that we discovered evidence against general relativity. Just spitballing here, but just imagine we discovered that instead of seeing distant objects pulled toward each other by gravity, that they were actually *accelerating* away from each other? Crazy right? So what would scientists do?"

    A really, really, really bad example. Apart from the fact that the cosmological constant is part of GR. (Einstein's "biggest blunder" remark notwithstanding; the only evidence for this is a quip by Gamow, who was something of a jokester; Barrow has some history-of-science reasoning which demonstrates that he probably never made this remark; in any case, Einstein was not always right and, historically, one could have had the cosmological constant from the beginning, rather than adding it based on observations (which turned out to be misinterpreted), as Einstein did. Reality is independent of the way by which we come to understand it.)

    It is easy to falsify GR: just show that bodies of different mass don't fall at the same speed or that bodies of the same mass but different composition don't fall at the same speed. I'm not saying that this is impossible, just that it would falsify GR. (Of course, it would falsify it like GR falsified Newtonian gravitation; the latter is still fine for many things and represents a limit (weak fields, slow speeds, etc).) In fact, there are many reasons to believe that GR will have to be modified, but as Sabine has pointed out many times, neither dark matter nor dark energy in any way compromise GR.

  71. @Sabine

    " Therefore what we really should be doing is remove any incentives for people to work on what everybody works on already. "

    Exactly, totally agree, that's the essential part. Diversity is the mother of intelligent problem solving.
    And the current 'feudal' structure (sorry for that slightly disrespectful word, but it hits the mark) of physics education and research greatly discourages this diversity. Better to have 10 people working on 10 concepts than 1000 people on one idea. Because the whole point is to recombine parts of concepts with other parts, it's a game of replacing bits of DNA until you get that unique combination which yields added value. Bad ideas are usually just ideas in the wrong context, it's a basic truth of design.

    Best, Koenraad

  72. @Tom Miller "It helps to think through a practical example. How would one falsify general relativity?" Violate the Equivalence Principle (EP). GR becomes a special case. EP-optional theory (e.g., Einstein-Cartan) is Officially ignored for “good” (postulated!) reasons. Stalemate.

    GR predictions are validated against all measurable property and field divergences through ~16 significant figures – classical, quantum mechanical, relativistic, and gravitational (strong EP). Non-measurable observables exist re Popper.

    Target postulates. Euclid contains neither Earth's surface geodesic paths nor nuclear plant cooling towers. Both are non-Euclidean.

  73. Of course you're right about removing "incentives for people to work on what everybody works on already". One of my frustrations in following scientific progress is seeing how many scientists don't recognize the extent to which their self-interest influences their own work; thus the "scientific method" is not as purely scientific as they believe.

  74. So am I right that on a Venn diagram, "falsifiable theory" would be a big circle and completely within it would be a smaller circle, "scientific theory"?

    So that:
    scientific implies falsifiable;
    non-falsifiable implies not scientific;
    but:
    not scientific does not imply non-falsifiable;
    falsifiable does not imply scientific.
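    In other words (a toy sketch with placeholder names, just to spell out the implications):

      # Placeholder sets, only to make the subset picture explicit:
      falsifiable = {"A", "B", "C"}   # theories that make testable predictions
      scientific = {"A"}              # the subset with a reasonable chance of being right

      assert scientific <= falsifiable                     # scientific implies falsifiable
      assert "B" in falsifiable and "B" not in scientific  # falsifiable does not imply scientific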

  75. Roger,

    Well, as I said, you can't in practice falsify theories. As I wrote elsewhere, what you do instead is that you "implausify" them. Even leaving that aside, your picture is overly rosy because theories might linger for a long time in the category "not yet falsifiable". The problem being that it can take a long time until one actually understands the consequences of a theory. Best,

    B.

  76. Sabine,
    Respect: you don't shy away from making enemies when convinced of a very contentious issue. I agree with your reasoning. Some idea of how to rate theories would probably improve acceptance anyway.
    The situation you describe somewhat reminds me of the prolific invention of ad-hoc models to calculate phonon frequencies around 1980. These were effectively only interpolations of the well-measured frequencies and, as was well known, infinitely arbitrary. Nevertheless, the plausible models and their parameters were often discussed as if proven. In that case the arbitrariness could simply be lifted by including eigenvectors (once they became available), and the models were later replaced by microscopic (mostly density functional) calculations. The first microscopic calculations had difficulty being accepted, because the results were much less accurate than those of the accustomed ad-hoc fit models.
    In particle theory, perhaps new breakthrough theories will face similar difficulties, because they will initially appear inferior at reproducing measurements.

  77. The name of that elderly lady? Hillary Clinton.

  78. Thanks for the fine tuning. That all makes sense.

  79. The great Bayesian theorist E. T. Jaynes refuted Popper and Dawid years ago on page 310 of his book PROBABILITY THEORY: THE LOGIC OF SCIENCE (Cambridge University Press). (Jaynes called Popper an irrationalist.) In brief, Jaynes showed that probability theory can be used only to decide between KNOWN theories; it is meaningless to assign probabilities to unknown theories.
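    As a toy illustration of that point (a coin-flip example added here for concreteness, not one taken from Jaynes' book): probability theory happily compares two hypotheses once both are explicitly on the table, and it says nothing about theories nobody has stated.

      # Compare two *known* hypotheses for n coin flips with k heads:
      #   H1: the coin is fair (p = 1/2)
      #   H2: the bias p is unknown, uniform on [0, 1]
      # The Bayes factor P(data|H1) / P(data|H2) decides between them; no probability
      # is ever assigned to a hypothesis that was never written down.
      from math import comb

      def bayes_factor_fair_vs_unknown(heads, flips):
          p_h1 = comb(flips, heads) * 0.5**flips   # binomial likelihood under H1
          p_h2 = 1.0 / (flips + 1)                 # uniform prior on the bias integrates to 1/(n+1)
          return p_h1 / p_h2

      print(bayes_factor_fair_vs_unknown(52, 100))   # ~7: mild support for "fair"
      print(bayes_factor_fair_vs_unknown(90, 100))   # ~1e-15: overwhelming support for "biased"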

    Frank J. Tipler, Professor of Mathematical Physics

  80. Sabine, thanks for the response. I think I misunderstood what you and Don were talking about. I thought your criteria were for picking good theories/ideas, after the fact.
    (If someone discovers a dark matter particle, that will refine our ideas.)
    Not the good ideas that should be funded or pursued. (My time in academic research was mostly as a solid state experimentalist.)

    As for picking good research. I guess for me that's a much more personal thing.
    Once or twice a month, I'll lay aside what I should be doing, take part of a
    Friday and go do something I've been wanting to do. (playing with diode laser,
    building some circuit, testing the limits of some part.)

    I'm not a theorist, but I can imagine you doing something similar.

    As for the funding of science research, this is alas a human endeavor and subject to all our fads and foibles.

  81. Sir Karl Popper's main aim was to question the scientific pretensions of chiliastic Marxism, which in his time appeared to be an irresistible force. In time the critique took in Freudian psychoanalysis and Darwinian evolution, basically any theory that absorbed contradictory phenomena by expanding its narrative. He didn't have much to say about applications to physics, since he usually deferred to physicists in matters concerning falsifiability.

  82. Frank Tipler:

    Jaynes' comment is evidently directed at a certain sort of Bayesian. Since Popper was about as far from a Bayesian as is imaginable, refusing even to assign probabilities to theories, that comment cannot have been directed at Popper if Jaynes knew what he was about.

  83. All this is old news, right? Kuhn noted that "falsification is not how science really works" back in the 70's. That inspired Lakatos, who a bit later came up with a better model of scientific development that still takes empirical confrontation into account. The too-easy-to-fit-data problem and the unconceived alternatives to our theories that you mention are just the old debate on underdetermination and on epistemic values beyond empirical adequacy (simplicity, fruitfulness, explanatory power, ...), which had been largely anticipated by Duhem in the... 19th century!

    Perhaps scientists still like Popper because at least he tried to make science "special" (not like psychoanalysis or Marxism), whereas there's probably a continuum between crazy doctrines and well-established scientific theories. I think Lakatos mostly has it right on how science works, and on his criterion of fruitful vs. degenerating research programmes.
    To me, the ability to unify old theories that address different types of phenomena (electromagnetism + mechanics) into a single framework (relativity), the ability of this unification to make new predictions and thus extend the scope of old theories to new domains, and the confirmation of these new predictions, is the hallmark of good theorising. This is what classical mechanics, relativity and quantum theory did. I mean this as a criterion for why theories become widely accepted (not as a method, investment strategy or anything).

  84. Apparently, Popper once collaborated with some physicists in a veritable train wreck of irrationalism. An idiotic psi-ontic version of the CI:

    "The most widely known interpretation of quantum mechanics is the Copenhagen interpretation put forward by Niels Bohr and his school. It maintains that observations lead to a wavefunction collapse, thereby suggesting the counter-intuitive result that two well separated, non-interacting systems require action-at-a-distance."

    crashing into an idiotic insistence on observer independence:

    "Popper argued that such non-locality conflicts with common sense, and would lead to a subjectivist interpretation of phenomena, depending on the role of the 'observer'."

  85. David Deutsch presents his interpretation of Popper and makes quite a good argument about what constitutes a good explanation (scientific theory) in his book "The Beginning of Infinity". Falsifiability alone really is not enough, as he shows even in the first chapters.

  86. bee:
    it should be noted that Popper (and others) only discuss the philosophy of physics, not the philosophy of science. they ignore chemistry, which most people consider to be a science (Newton certainly did), but some of its goals are quite different from those of physics. (see e.g. https://sciblogs.co.nz/molecular-matters/2012/10/08/what-does-a-synthetic-organic-chemist-do/) so when physicists claim (as Dirac did when he said that "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble.") that chemistry is a subset of physics, they are exposing either their ignorance or their arrogance (probably both). Similarly, the science of biology has its own goals, which are not the same as those of physics (or of chemistry), and the theoretical physics of biological systems is as profound as that of so-called fundamental physics (one could even argue that entropy plays an even greater role in biology than it does in physics).
    richard

  87. Paul Hayes:
    It is very hard to determine from your comment what "idiotic" refers to. We know from the PBR Theorem that any adequate account of quantum theory must be psi-ontic. Insofar as Popper was criticizing a completely psi-epistemic understanding, he was right on target.

  89. Tim Maudlin,

    It isn't hard and we don't know that. We know that the PBR theorem at most rules out the 'realist' psi-epistemic interpretations. Furthermore, as Matt Leifer put it in a blog post on that result, probably "no theorem on Earth could rule out" the 'vanilla' psi-epistemic interpretations.

  90. Sabine

    I think to "implausify" a theory is as problematic as to "falsify" it; for instance, before the Copernican revolution it would have been an implausible idea to most academics that the earth moved. The transition from thinking an idea implausible to accepting it as plausible seems very subjective. In the case of Einstein's relativity, it is usually accepted that it changed our understanding of space and time etc., violating what were ordinary commonsense feelings on such issues, and that makes Einstein's relativity implausible. Back to what I was pointing out: there was in a sense no need for the change to Einstein's relativity; we could just have stayed with updating Newtonian physics, and indeed it was updated by Boscovich. (Of course, you could say that in a sense Einstein's relativity was an update to Newtonian physics, but I think that is problematic, in that it is not precisely clear what that update was supposed to be, on such issues as what Lorentz was saying versus what Einstein said.)
    Roger

  91. "Frank J Tipler said... "

    I would have expected you to chime in on the fine-tuning and anthropic-principle discussion following another recent post on this blog. (After all you did, with the famous writer John D. Barrow, literally write the book on the anthropic cosmological principle.)

  92. Frank Tipler

    Yes, Bayesian inference is valid only if based on frequency distributions of empirical data. You cannot assign probabilities a priori. That would be akin to the Delphi method in decision theory in management science. (I hesitate to call 'management' a science.) Delphi is an appropriate name for the method, after the ancient Oracle of Delphi, who inhaled hallucinogenic fumes before making her prophecies: not hallucinations but "visions of the future".

    BTW I read your book Physics of Immortality many years ago. I still don't understand how the Omega Point becomes an all-powerful intelligent being. Isn't the Omega Point a singularity at the end of the universe? Perhaps super-intelligent machines in the future can impart intelligence to a black hole?

  93. Lisa Randall has tested her dark matter dinosaur extinction theory. I read somewhere, maybe in Scientific American, that the confidence level of the data supporting her theory is something like 60 or 70%. In most empirical sciences, 95% is the bar for rejecting the null hypothesis, and it is much higher in physics. I'm sure she is aware of this. It's just an interesting theory to sell her book; it's the fault of readers if they took it seriously. It's like how people are still analyzing the true meaning of the poem The Hunting of the Snark. Clue: Lewis Carroll was a mathematician and logician. Maybe he's just testing the intelligence of the readers: can they really make sense out of nonsense?
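    For scale, here is a rough conversion between such confidence levels and the Gaussian "sigmas" physicists quote (the 60-70% figure above is only a recollection, so treat the first entry as illustrative):

      # Two-sided confidence level -> equivalent number of Gaussian standard deviations.
      from scipy.stats import norm

      def sigmas(confidence):
          return norm.ppf(0.5 + confidence / 2.0)

      for c in (0.70, 0.95, 0.9999994):   # ~70%, the usual 95% bar, and the "5 sigma" discovery bar
          print(f"{c:.7f} -> {sigmas(c):.2f} sigma")
      # prints roughly 1, 2 and 5 sigma respectively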

  94. Thank you for this, and many posts before it. For the past decade or so, extreme physics has read like storytelling, not science, to the point that physicists have lost credibility with me.

    You are restoring that credibility. I do enjoy speculation, but it's been annoying as heck that some people present it as true.

  95. "BTW I read your book Physics of Immortality many years ago."

    See Sean Carroll's take on this, including the comments and links to book reviews.

  96. How do theoretical particle physicists respond to Sabine Hossenfelder's "How Popper Killed Particle Physics"?

    Answer by Michelle Kathryn McGee

    Hossenfelder's argument is complicated to the point of being so convoluted that it misses its own point. In her attempt to create dramatic cross-currents, she does not settle into the existential clarity needed...

    Read More at

    https://www.quora.com/How-do-theoretical-particle-physicists-respond-to-Sabine-Hossenfelders-How-Popper-Killed-Particle-Physics/answer/Michelle-Kathryn-McGee?srid=3sYMZ

  97. Michelle,

    I appreciate your effort, but you are missing the point. Since you seem to think my argument is "convoluted" allow me to simplify it: "Particle physicists excuse fruitless model-building by claiming anything falsifiable is good science."

    Having said that, you are definitely right that I have not "settled into existential clarity." Best,

    B.

  98. @Sabine

    "Particle physicists excuse fruitless model-building by claiming anything falsifiable is good science."

    Generalization / straw-man argument. What are your best examples?

  99. 'All you have to do then is twiddle the details so that your predictions are just about to become measureable in the next, say, 5 years.'
    From my POV there is nothing against creating possible theories to solve known problems, and of course they have to be falsifiable. But choosing them as you describe, more or less intentionally, would be the scientific analog of a conspiracy theory: you can always construct a theory staying just under the radar of observability (and of course adjust it as often as needed).
    But I guess at least part of the distinction from 'good science' lies in the intention: is it to clarify as much as possible, e.g. by categorizing which kinds of theories can be constructed and how (far) these would be recognizable as meaningful or excluded? It would not make much sense in such a case to single out one of these theories specifically, if that is a fine-tuning only possible after definitive measurements allowing for such discrimination are available. I doubt that there is a sharp criterion; only the extreme cases are rather clear.
    I think the problem is the need for, and incentive toward, (many) publications. For that it is more appropriate, and more easily accepted, to create many rather meaningless theories than to try to clarify a whole sector of possible theories (hidden rules) or the like, especially when forced to finish something in reasonable time. From that perspective string theory, GUTs etc. have of course been bold attempts, unfortunately not working out. The next big ideas and/or guiding experiments are needed.

  100. I can't see what exactly your proposal for the new gold standard is. The requirement for a theory to make new predictions is also artificial in principle, because you implicitly assume that there are new discoveries waiting to be made. OK, it does not seem to be the moment right now, but the ultimate theory will by definition be fruitless for exactly that reason: nothing more to explain. On the other hand, you have the religion-based theories, which have perfect explanatory power (it is exactly so and so because God wanted it that way), but zero ability to predict new facts. So what exactly do you want to change?

  101. Is this discussion more nuanced than saying that falsification itself is scale-dependent: that the universe will only reveal itself in a coarse-grained manner through our inherently inductive theories? So, as already instantiated, General Relativity without the cosmological constant/parameter is but a less effective effective field theory than one incorporating such a freely available degree of freedom, the latter theory alone being sensitive to the apparent phenomenon of deceleration between superclusters at large length separations.

    As part of the reductionist program, drilling through running (coupling) scales, anyone of monist spirit has to admit the emergence of irreducibility, and by extension the falsification of their theory, at some scale. That the scale of the falsification regime is determined by our technological innovation makes implausifying a theory discovery-dependent.

  102. Theories are data compression algorithms. When they get bigger rather than smaller for a given set of data points you are going in the wrong direction.
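    A crude way to make that slogan concrete (the coding scheme and numbers below are ad-hoc assumptions, nothing rigorous): score a model by the bits needed to state its parameters plus the bits needed to encode what it gets wrong, and see whether extra parameters pay for themselves.

      # Two-part-code toy: description length = parameter bits + residual-coding bits.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 50)
      y = 2.0 * x + 0.3 + rng.normal(0.0, 0.05, x.size)   # data from a simple "law" plus noise

      def description_length(degree, bits_per_param=32):
          coeffs = np.polyfit(x, y, degree)
          resid = y - np.polyval(coeffs, x)
          model_bits = bits_per_param * (degree + 1)
          data_bits = 0.5 * x.size * np.log2(max(resid.var(), 1e-12))  # Gaussian code length, up to a constant
          return model_bits + data_bits

      for d in (1, 4, 9):
          print(d, round(float(description_length(d)), 1))
      # The straight line usually scores lowest: extra parameters cost more bits
      # than they save in residuals.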

  103. I completely agree: falsifiability is an interesting idea in the philosophy of science, but to bang on about it as though it's the only idea is bad philosophy. Personally speaking, it seems to me that string theory is like Ptolemaic epicycles heaped upon one another. Also, when science began in Greek antiquity, the ideas bounced around simply weren't falsifiable at the time, but they shaped scientific discourse for the next two millennia.

  104. Sabine,

    You did an admirable job of knocking down Popper-reversing straw persons and calling out formulaic papers. However, if a theory is falsified, or a theorem counter-exampled, then it is flawed. If easily patched, great; if patched in an ugly way, not great.

    Are you aware that Popper came up with a reasonable hypothesis H, and a test that confirms H, but instead of H gaining greater credibility, it is now more likely to be believed false?

  105. Truth is what’s left after the rest has been proven false. But instead, in 1934, Popper said that science is what can be shown to be potentially false. It had a disastrous effect on physics.

    Popper: "In so far as a scientific statement speaks about reality, it must be falsifiable; and in so far as it is not falsifiable, it does not speak about reality." How does Popper falsify reality? By being God? Did Popper believe he was God? Is a lion non-falsifiable? Does lack of falsifiability make a lion's claw unreal?

    After proposing the heliocentric theory, using his concept of inertia, circa 1350 CE, Buridan observed that the heliocentric theory could not be experimentally distinguished (yet) from the geocentric theory, and thus, one may as well believe the latter, as “Scripture” said so.

    It was definitively proven that Venus turned around the Sun (Sol) more than three centuries after Buridan wrote, when telescopes became powerful enough to observe the phases of Venus (how the Sun illuminated Venus). So the question of falsifiability is not new.

    Even earlier, more than two millennia ago, the ancient Greeks demonstrated the atomic theory by observing the perpetual motion of small particles (what we now call, after an Englishman, Brownian motion, because nearly everything was discovered by Englishmen, say the English).

    Popper believed that a scientific theory should be “falsifiable”. As he wrote: "A theory is falsifiable, as we saw in section 23, if there exists at least one non-empty class of homotypic basic statements which are forbidden by it; that is, if the class of its potential falsifiers is not empty."
    Popper, The Logic of Scientific Discovery, p. 95

    Pure mumbo-jumbo. (Popper's mumbo-jumbo would make the epicycles theory "scientific"... as Tycho found; epicycles partisans could have fixed that with more cycles…)

    Popper’s mumbo-jumbo enabled Popper to speak of science, while avoiding the concept of truth. Under the cover of sounding scientific (thus honorable). If science itself was not about truth, nor induction, neither was society in need to be about truth… or induction (so no revolution). That could only please an establishment put in place by the history of privilege. So Popper became Sir Karl, got plenty of honors, and part of the elite. That was good for Sir Karl. After all, if there is no truth, there is still the Queen.

    On the face of it, believing, as Popper affected to, that one should be able to prove that a theory could be false in order to make it true enables us to make zombies "scientific" (they could be false!). To be true, something just has to potentially be false.

    God is not falsifiable, because God can’t “conceivably” be false (at least to the believer in said God). Thus, if God exists, that makes God true, yet unfalsifiable. So we would have the problem of a God which is true, yet non-scientific.

    The more general problem is that, how could something which is true be falsifiable?

    Popper himself threw the science-as-falsifiability theory under the bus in his later years: "Science may be described as the art of systematic over-simplification — the art of discerning what we may with advantage omit." The Open Universe: An Argument for Indeterminism (1992), p. 44.
    "Science must begin with myths, and with the criticism of myths." Ch. 1, "Science: Conjectures and Refutations", Section VII.

    Lisa Randall made a profitable theory, and proved it experimentally: she found that Dark Matter sells books.

    In truth, the dinosaurs had been in bad trouble for millions of years (as the fossil record of the number of species shows), because the Deccan Traps hyper-volcanism had been acting up for millions of years, smothering the planet, heating it from CO2 and cooling it fast from sulfates, while acidifying the oceans. Warm-blooded animals and those who burrow survived. Such hyper-volcanism cools the planet's radioactive core, and happens every 200 million years or so. I didn't even bother opening the book. But I will buy yours.
    https://patriceayme.wordpress.com/2009/11/21/trapped-by-super-traps/

  106. That's unfair to Popper. Popper's criterion of empirical content is not only absolute (a theory has to be falsifiable) but also relative: if a theory can be falsified by an experiment which cannot falsify the other, competing theory, but not the other way around, then the first theory has higher empirical content and thus has to be preferred.

    Simply adding yet another field makes a theory more complex, and usually adds some parameters, which can be used to fit evidence that falsifies the original theory without the additional field. So, following Popper, the unmodified theory has to be preferred until it is falsified.

    Popper also pointed out that most of the scientific preference for simple theories can be explained in this way: a simple theory is easier to falsify, thus it has higher empirical content.

    All this is, as far as I recall, already in The Logic of Scientific Discovery, thus no backpedaling.

    (And, as a side remark, nobody tries to invent a luminiferous aether which just exactly mimics special relativity for a quite simple reason: it has already been done by Lorentz, and is simply the original Lorentz ether interpretation of special relativity. To mimic GR is also quite trivial: take harmonic coordinates (famous for simplifying the Einstein equations) and interpret them as continuity and Euler equations for the ether. Only one iteration is necessary: to preserve the Lagrange formalism, introduce them by adding a term which enforces them.)

  107. Scientific concepts must satisfy two criteria: empirical import and theoretical significance. A concept has empirical import if it has clear and unambiguous criteria of application: for each concept there must be a rule, or set of rules, which can be used to determine whether or not to predicate the concept of any given spatiotemporal point. These rules themselves must use, or be translatable into, an unproblematic language; a language is unproblematic if those who use it virtually never make incorrigible mistakes in its use, and if anyone with normal intelligence can be taught that use. A concept has theoretical significance to the degree that it fits into laws and theories; theoretical significance is thus a measure of a concept's scientific utility, assessed by the statements in which the concept appears and by the degree, level, and scope of those statements.

  108. Of course, one must try to falsify a theory.
    A theory should be testable, falsifiable, coherent, concise and if possible compatible with older theories.
    What is wrong with falsifiability? It surely is not enough for a theory but it’s surely one of the starting points.
    Of course, you can falsify a theory claiming that the world is made of little ice-cream triangles or another one claiming that the earth is a flat pizza with mozzarella at the edges.
    Trying to amend these kinds of theories will lead nowhere. They are just wrong.

  109. Scientists are clearly not philosophers because Popper fought against induction! Do they know that? The problem that science has is that it wants to entertain crazy theories rather than entertain reasonable ones based on what we already know. David Stove completely debunked Popper. Science is about induction and not just deduction!

  110. Very interesting. I shall just comment on the following statement: "That's because if you cook up a new theory you first have to reproduce all achievements of the already established theories." I think it is generally a good idea, but as we know, quantum mechanics does not seem to explain some effects of general relativity, and yet it was accepted. This means the principle is not fully general, even if it would be a very strong point. When Einstein unified mechanics and electromagnetism with special relativity, he did almost that, but he did not include gravitation; he needed more work to construct general relativity in order to include it. It is possible that some new theories may be very useful and yet not explain everything explained by general relativity. That's why it is so complicated to predict which new theory is a good one, and it generally takes some time to be accepted.

    Replies
    1. But once a theory is amended, it is a new theory.

  111. I think falsifiability is a good demarcation criterion; the problem is that not every scientific hypothesis is equally parsimonious, and that's not something to blame falsifiability or Popper for.

  112. The discussion has now been continuing for three months without attempts to explicate 1) what Popper really meant by a scientific theory; 2) how Popper's understanding fits real scientific theories; and 3) what the disputants mean by a scientific theory, given the plurality of its reconstructions in the philosophy of science.

  113. Volodymyr,

    Luckily most other readers understood that this blogpost isn't about Popper.

  114. Falsification is a silly criterion. Newtonian mechanics and quantum mechanics are NOT even falsifiable, yet they are some of our most useful and important models.

  115. Hello, I'm new to this blog, so apologies if I missed some previous posts about my question.
    I agree with your opinion about the present criterion for a scientific theory; however, it makes me wonder: what, in your opinion, classifies as a good hypothesis/theory?

    Replies
    1. A good hypothesis, loosely speaking, is one that either explains more, or that needs less to explain the same data to the same accuracy.

      I say "loosely" because there are also theoretical advances that one could call improvements of user-friendliness which are enormously important to the practice of science but sometimes go underappreciated. Eg, Feynman diagrams were not actually a new hypothesis. They were "merely" a convenient notation to do difficult calculations in an already known theory. But they were enormously useful. So, well, I guess what I am saying is that fitting data isn't all there is to theory development.
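      As a toy illustration of the "needs less" half (an invented example, not anything from the post): two datasets that can be fit either by one shared law or by a separate law each; a standard penalized score prefers the shared law when the fits are comparably good.

      # Toy model comparison: one shared slope for two datasets vs. one slope each.
      import numpy as np

      rng = np.random.default_rng(1)
      x = np.linspace(0.0, 1.0, 30)
      y1 = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # two "phenomena" secretly
      y2 = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # governed by the same slope

      def aic(rss, n_points, n_params):
          # Gaussian AIC up to an additive constant: fit quality plus a parameter penalty
          return 2 * n_params + n_points * np.log(rss / n_points)

      # Hypothesis A: one shared slope (1 parameter)
      xs, ys = np.concatenate([x, x]), np.concatenate([y1, y2])
      slope = np.sum(xs * ys) / np.sum(xs**2)
      rss_a = np.sum((ys - slope * xs) ** 2)

      # Hypothesis B: an independent slope per dataset (2 parameters)
      s1 = np.sum(x * y1) / np.sum(x**2)
      s2 = np.sum(x * y2) / np.sum(x**2)
      rss_b = np.sum((y1 - s1 * x) ** 2) + np.sum((y2 - s2 * x) ** 2)

      print(aic(rss_a, ys.size, 1), aic(rss_b, ys.size, 2))   # A typically scores lower (better)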

  116. It would be easy to construct an example of a theory that elegantly and efficiently explains one phenomenon but is utterly incompatible with theories explaining related phenomena. No theory is an island. It is one piece in a giant jigsaw puzzle called Science, the goal of which is to "paint" a complete and integrated picture of the world. (I said "goal", so don't bother to point out that we're not done yet.)


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
