
A hypothesis that is not falsifiable through observation is optional. You may believe in it or not. Such hypotheses belong in the realm of religion. That much is clear, and I doubt any scientist would disagree. But trouble starts when we begin to ask just what it means for a theory to be falsifiable. One runs into the following issues:

**1. How long should it take to make a falsifiable prediction (or postdiction) with a hypothesis?**

If you start out working on an idea, it might not be clear immediately where it will lead, or even if it will lead anywhere. That could be because mathematical methods to make predictions do not exist, or because crucial details of the hypothesis are missing, or just because you don’t have enough time or people to do the work.

My personal opinion is that it makes no sense to require predictions within any particular time, because such a requirement would inevitably be arbitrary. However, if scientists work on hypotheses without even trying to arrive at predictions, such a research direction should be discontinued. Once you allow this to happen, you will end up funding those scientists forever, because falsifiable predictions become an inconvenient career risk.

**2. How practical should a falsification be?**

Some hypotheses are falsifiable in principle but not in practice. Or testing them might take so long that, for all practical purposes, they're unfalsifiable. String theory is the obvious example. It is testable, but no experiment in the foreseeable future will be able to probe its predictions. A similar consideration applies to the detection of quanta of the gravitational field. You can measure those, in principle. But with existing methods, you will still be collecting data when the heat death of the universe chokes off your ambitious research agenda.

Personally, I think predictions for observations that are not presently measurable are worthwhile, because you never know what future technology will enable. However, it makes no sense to work out the details of futuristic detectors. That belongs in the realm of science fiction, not science. I do not mind if scientists on occasion engage in such speculation, but it should be the exception rather than the norm.

**3. What even counts as a hypothesis?**

In physics we work with theories. The theories themselves are based on axioms, which are mathematical requirements or principles, e.g., symmetries or functional relations. But neither theories nor principles by themselves lead to predictions.

To make predictions you always need a concrete model, and you need initial conditions. Quantum field theory, for example, does not make predictions – the standard model does. Supersymmetry also does not make predictions – only supersymmetric models do. Dark matter is neither a theory nor a principle, it is a word. Only specific models for dark matter particles are falsifiable. General relativity does not make predictions unless you specify the number of dimensions and choose initial conditions. And so on.

In some circumstances, one can arrive at predictions that are “model-independent”, which are the most useful predictions you can have. I scare-quote “model-independent” because such predictions are not really independent of the model, they merely hold for a large number of models. Violations of Bell’s inequality are a good example. They rule out a whole class of models, not just a particular one. Einstein’s equivalence principle is another such example.

Troubles begin when scientists attempt to falsify principles by producing large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed, because that is a waste of time. Instead, scientists need to find ways to arrive at more strongly determined predictions. This can be done, e.g., by looking for model-independent predictions, or by focusing on inconsistencies in the existing theories.

This is not currently happening because it would make it more difficult for scientists to produce predictions, and hence decrease their paper output. As long as we continue to think that a large number of publications is a signal of good science, we will continue to see wrong predictions based on useless models.

**4. Falsifiability is necessary but not sufficient.**

A lot of hypotheses are falsifiable but just plain nonsense. Really, arguing that a hypothesis must be science just because you can test it is typical crackpot thinking. I previously wrote about this here.

**5. Not all aspects of a hypothesis must be falsifiable.**

It can happen that a hypothesis which makes some falsifiable predictions also leads to unanswerable questions. An often-named example is that certain models of eternal inflation seem to imply that, besides our own universe, there exists an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics. If you take the theory at face value, then the question of what a particle does before you measure it is not answerable.

There is nothing wrong with a hypothesis that generates such problems; it can still be a good theory, and its non-falsifiable predictions certainly make for good after-dinner conversations. However, debating non-observable consequences does not belong in scientific research. Scientists should leave such topics to philosophers or priests.

This post was brought on by Matthew Francis’ article “Falsifiability and Physics” for Symmetry Magazine.

You might find interesting Lee McIntyre's book The Scientific Attitude (see my review: http://popsciencebooks.blogspot.com/2019/04/the-scientific-attitude-lee-mcintyre.html) which spends quite a lot of time on the demarcation issue (either between science and non-science or science and pseudoscience).

Still, if a proposed mechanism were especially odd as a hypothesis, but there were no other reasonable explanation yet, would even crazy ideas be considered science? (Susskind's words, adapted.)

If the crazy ideas pass experimental verification, then those ideas are considered proven science.

How would you classify an analysis of this kind? A nice argument to account for the Born rule within MWI (https://arxiv.org/abs/1903.12027, "Less is More: Born's Rule from Quantum Frequentism"). I believe it's fair to say the paper concludes that the experimental validity of the Born rule implies the universe is necessarily infinite. If we never observe a violation of the Born rule, would this hypothesis qualify as science?

This seems to be a perfect case of falsifiability. If a Born rule violation is never observed, the theory is not proven, but there is a confidence level, maybe some form of statistical support, for it. If the Born rule is found to be violated, the theory is false, or false outside some domain of observation. Since a quantum gravity vacuum is so far not well defined, and there are ambiguities such as with Boulware vacua, it could be that the Born rule is violated in quantum gravity.

Science has defied categories since the start. If anyone is responsible for defining science in the modern context it is probably Galileo. Yet we have different domains of science with different criteria for what is meant by testable. A paleontologist never directly experiences the evolution of life in the past, but these time capsules called fossils lead to natural selection as the most salient understanding of speciation. Astronomy studies objects and systems at great distances, of which we will only ever visit some tiny epsilon of the nearest with probes. So we have to make enormous inferences about things. From the parallax of stars, to Cepheid variables, to the redshift of galaxies, to the luminosity of type Ia supernovae, we have this chain of meter sticks to measure the scale of the universe. We measure not the Higgs particle or the top quark, but the daughter products from which we infer the existence of these particles and fields. We do not make observations that are as direct as some purists would like.

As Eusa and Helbig point out, there are aspects of modern theories which are unobservable. Susskind does lean heavily on the idea of theories that are of the nature "it can't be any other way." General relativity predicts a lot of things about black hole interiors. That is a big toughy. No one will ever get close to a black hole that could be entered before being pulled apart; the closest is Sgr A* at 27k light years away. Even if theoretical understanding of black hole interiors were confirmed in such a venture, that would remain a secret held by those who entered the black hole. It is plausible that aspects of black hole interiors can have some indirect physics with quantum black holes, but we will not be generating quantum black holes any day soon.

Testability and falsifiability are the gold standard of science. Theories that have their predictions confirmed are at the top. Quantum mechanics is probably the most confirmed modern physics. General relativity has a good track record, and the detection of gravitational radiation is a big feather in the GR war bonnet. Other physics, such as supersymmetry, are really hypotheses and not theories in a rigorous sense. Supersymmetry is also a framework that one puts phenomenology on. So far all that phenomenology of light SUSY partners looks bad. When I started graduate school I was amazed that people were interested in SUSY at accelerator energies. At first I thought it was properly an aspect of quantum gravitation. I still think this may be the case. At best some form of the split SUSY Arkani-Hamed proposes may play a role at low energy, which I think might be 1/8th SUSY or something. So these ideas are an aspect of science, but they have not risen to the level of a battle-tested theory. IMO string theory really should be called the string hypothesis; it is not a theory, even if I might think there may be some stringy aspect to nature.

There is a certain character in a small country sandwiched between Austria, Germany and Poland who has commented on this and ridicules the idea of falsifiability. I just checked his webpage, and sure enough he has an entry on this. I suppose his pique is because he holds the idea that putting 35 billion tons of carbon as CO_2 into the atmosphere annually has no climate influence. He upholds a stance that has been falsified; the evidence for AGW is simply overwhelming, and by now any scientific thinker should have abandoned climate denialism. Curious how religion and ideology can override reason, even among the best educated.

Lawrence Crowell wrote:

“General relativity predicts a lot of things about black hole interiors. That is a big toughy. No one will ever get close to a black hole that could be entered before being pulled apart, the closest is SgrA* at 27k light years away. Even if theoretical understanding of black hole interiors is confirmed in such a venture that will remain a secret held by those who entered a black hole. It is plausible that aspects of black hole interiors can have some indirect physics with quantum black holes, but we will not be generating quantum black holes any day soon.”

This assumption may not hold. The theory of Hawking radiation has been verified in sonic (acoustic) analog black holes in the lab. Yes, entangled virtual excitations have been extracted from the vacuum and made real.

The point to be explored, among the assumptions that underlie science, is whether such a system using Hawking radiation could be engineered to greatly amplify the realization of virtual energy, to the point where copious energy is extracted from nothing. When does such a concept become forbidden to consider as real, as a violation of the conservation of energy? In this forbidden case, it is not so much the basic science of the system, but the point where the quantity of its energy production becomes unthinkable, since the conservation of energy is inviolate.

There are optical analogues of black holes and Hawking radiation. Materials that trap light can be made to appear black hole like. This property can be tuned with a reference beam of some type. There is no "something from nothing" here. The energy comes from the energy employed to establish the BH analogue. Black holes have a time-like Killing vector, which in a Noether theorem sense means there is a constant of motion for energy. Mass-energy is conserved.


"It can happen that a hypothesis which makes some falsifiable predictions leads to unanswerable questions. An often named example is that certain models of eternal inflation seem to imply that besides our own universe there exist an infinite number of other universes. These other universes, however, are unobservable. We have a similar conundrum already in quantum mechanics."

Another example: GR says a lot about what goes on inside the event horizon of a black hole, which (classically) is by definition non-observable. But of course this is not a mark against GR. Similarly, the unobservability of other universes in (some types of) the multiverse is not a mark against the theories which have the multiverse as a consequence, as long as they are testable in other ways.

"GR says a lot about what goes on inside the event horizon of a black hole, which (classically) is by definition non-observable. But of course this is not a mark against GR."

It is not GR per se that is responsible for the event horizon (or the singularity) of the modern 'relativistic' black hole. Rather, it is the Schwarzschild solution to the GR field equations that produces both of those characteristics. If Schwarzschild had incorporated the known fact that the speed of light varies with position in a gravitational field, we probably wouldn't be talking about black holes.

Here is another culprit: the renormalisation group itself, as David Tong says here (pdf p. 62): "The renormalisation group isn't alone in hiding high-energy physics from us. In gravity, cosmic censorship ensures that any high curvature regions are hidden behind horizons of black holes while, in the early universe, inflation washes away any trace of what took place before. Anyone would think there's some kind of conspiracy going on...."

Phillip,

I believe I have said this before, but here we go again:

1) What happens inside a black hole horizon is totally observable. You just cannot come back and tell us about it.

2) We have good reason to think that the inside of a black hole does play a role for our observations and that, since the black hole evaporates, it will not remain disconnected.

For these reasons the situation with black holes is very different from that of postulating other universes which you cannot visit and that are and will remain forever causally disconnected.

I think other pocket cosmologies and black hole interiors are actually fairly comparable. The interior of a black hole probably has some entanglement role with the exterior world. We might have some nonlocal phenomena with other pocket worlds, or these pockets may interact.

There is some data coming out that could upend a fair amount of physics and cosmology. The CMB data are compatible with a Hubble parameter H = 67 km/s/Mpc, while data from galaxies out to z > 8 indicate H = 74 km/s/Mpc. The error bars on these data sets do not overlap. Something odd is happening. This could mean possibly three things, four if I include something completely different we have no clue about.

One possibility is that the universe is governed by phantom energy. The evolution of the vacuum energy is dρ/dt = -3H(p + ρ), with p = wρ, so for w < -1 we have dρ/dt = -3H(1 + w)ρ > 0. This means the observable universe will in time cease to expand merely exponentially; the scale factor instead diverges in finite time. This is the big rip.
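The sign flip at w = -1 is easy to see numerically. Below is a toy Euler integration (my own illustration, not from the comment) of the Friedmann equation H² ∝ ρ with ρ ∝ a^{-3(1+w)}: a phantom fluid (w < -1) drives the scale factor to infinity in finite time, while w = -1 gives ordinary exponential (de Sitter) expansion. Units, step sizes, and the cutoff are arbitrary choices for the sketch.

```python
# Toy integration of the Friedmann equation for a phantom fluid (w < -1).
# H^2 ∝ rho and rho ∝ a^{-3(1+w)}; for w < -1 the density GROWS as the
# universe expands, so the scale factor diverges in finite time (big rip).
# Arbitrary units; this is an illustration, not a cosmology fit.

def scale_factor_history(w, dt=1e-4, t_max=10.0):
    a, t = 1.0, 0.0
    history = [(t, a)]
    while t < t_max and a < 1e6:     # stop once a blows up
        H = a ** (-1.5 * (1.0 + w))  # H ∝ sqrt(rho) ∝ a^{-3(1+w)/2}
        a += a * H * dt              # da/dt = a * H
        t += dt
        history.append((t, a))
    return history

phantom = scale_factor_history(w=-1.5)    # diverges well before t_max
de_sitter = scale_factor_history(w=-1.0)  # plain exponential growth

print("phantom blow-up time ~", phantom[-1][0])
print("de Sitter a(t_max)   ~", de_sitter[-1][1])
```

For constant w < -1 the analytic solution is a ∝ (t_rip - t)^{2/(3(1+w))}, so the numerical blow-up time approximates t_rip.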

A second possibility is that this pocket world interacted with another at some point. If the two regions had different vacuum energies, then maybe some of the vacuum energy from the other pocket spilled into this world. The region we observe, out to around 12 billion light years and beyond the cosmic horizon, then had this extra vacuum energy fill in sometime in the first few hundred million years of this observable world.

A third possibility is that quantum states in our pocket world have some entanglement with quantum states in the inflationary region or in other pocket regions. There may then be some process, similar to the teleportation of states, that is increasing the vacuum energy of this pocket. It might be that this happens generally, or that it occurs depending on the conditions the pocket is in within the inflationary spacetime. Susskind talks about entangled black holes, and I think more realistically there might be entanglement of a few quantum states on one black hole with some quantum states on another, maybe in another pocket world or cosmology, and then another set entangled with a BH elsewhere, with a general partition of these states similar to an integer partition. If so, then it is not so insane to think of the vacuum in this pocket world as entangled with vacua elsewhere.

The fourth possibility is one that no one has thought of. At any rate, we are at the next big problem in cosmology. This discrepancy in the Hubble parameter from CMB and from more recent galaxies is not going away.

Regarding the fourth possibility...

The CMB tells us about the state the universe was in when it was very young. There is no reason to assume that the expansion of the universe is constant. The associated projections about the proportions of the various types of matter and energy that existed at that early time are no longer reliable, since the expansion rate of the universe has increased. It is likely that the proportions of the various types of matter and energy that exist now have changed from the primordial CMB state. This implies that there is a vacuum-based variable process in place, one that affects these proportions as an ongoing activity, has always existed, and has caused the Hubble parameter derived from the CMB to differ from its current measured value.

We ultimately get back to this problem of what we mean by energy in general relativity. I wrote the following on Stack Exchange on how a restricted version of FLRW dynamics can be derived from Newton's laws:

https://physics.stackexchange.com/questions/257476/how-did-the-universe-shift-from-dark-matter-dominated-to-dark-energy-dominate/257542#257542

The ADM space-plus-time approach to general relativity results in the constraints NH = 0 and N^iH_i = 0, which are the Hamiltonian and momentum constraints respectively. The Hamiltonian constraint, or what is energy on a contact manifold, means there is no definition of energy in general relativity for most spacetimes. The only spacetimes where energy is explicitly defined are those with an asymptotically flat region, such as black holes or Petrov type D solutions. In a Gauss's law setting for a general spacetime there is no naturally defined surface where one can identify mass-energy. Either the surface can never contain all mass-energy, or the surface has diffeomorphic freedom that makes it inappropriate (coordinate dependent, non-covariant, etc.) for defining an observable such as energy.

The FLRW equations, though, are a case with H = 0, with kinetic and potential parts

E = 0 = ½ m ȧ² - (4πG/3) m ρ a²

for a the scale factor in the distance x = a x_0, where x_0 is some ruler distance chosen by the analyst and not by nature. Further, ȧ = da/dt for time t in the Hubble frame. From there the FLRW equations can be seen. The density has various dependencies: for matter ρ ~ a^{-3}, for radiation ρ ~ a^{-4}, and for the vacuum ρ is generally assumed constant.
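For completeness, dividing this energy constraint by ½ m a² (a standard textbook step, written here in the comment's notation) recovers the first Friedmann equation:

```latex
E = 0 = \tfrac{1}{2} m \dot{a}^2 - \frac{4\pi G}{3}\, m \rho a^2
\quad\Longrightarrow\quad
\left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\,\rho .
```

Inserting ρ ∝ a^{-3} (matter), ρ ∝ a^{-4} (radiation), or ρ = const (vacuum) then gives the familiar expansion histories.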

The question is then what we mean by a vacuum. The Hamiltonian constraint has the quantum mechanical analogue in the Wheeler-DeWitt equation HΨ[g] = 0, which looks sort of like the Schrödinger equation HΨ[g] = i∂Ψ/∂t, but with i∂Ψ/∂t = 0. The time-like Killing vector is K_t = K∂/∂t, and we can think of this as a case where the timelike Killing vector is zero. This is generally the case; the notable cases where K_t is not zero are black holes. We can, however, adjust the WDW equation by including a scalar field φ, extending the Hamiltonian to HΨ[g, φ] = 0, such that there is a local oscillator term with a local meaning of time. This, however, does not extend everywhere, unless one is happy with pseudotensors. The FLRW equation is sort of such a case; it is appropriate for the Hubble frame. One needs a special frame, usually tied to the global symmetry of the spacetime, to identify this.

However, transformations can lead to troubles. Even with black holes there are Boulware vacua, and one has no clear definition of what is a quantum vacuum. I tend to think this may be one thing that makes quantum gravitation different from other quantum fields.

But isn't it advantageous for proponents of something like string theory to not have anything that is falsifiable, and to continue with the hope that "results" are just around the corner, and for the $$ to keep flowing, forever?

>1. How long should it take to make a falsifiable prediction or postdiction ...

>2. How practical should a falsification be?

It doesn't make much difference how long it takes; the real question is how much work, and/or time, and/or money it should take to develop an *executable* falsifiable outcome.

In the final analysis this comes down to whether we should pay person A, B, or C to work on hypotheses X, Y or Z. It is a relative-value question, and at times it is very difficult to rank hypotheses in a way that lets us sort them.

This is especially true when "beauty" and "naturalness" can generate enthusiasm among researchers; these can render the people who know the most about the prospects for hypotheses X, Y or Z incapable of properly ranking them: their bias is to vote for the hypothesis that would be most pleasing if it were true, instead of the hypothesis most likely to be true, or most testable, or that would take the fewest personnel-hours to pursue.

In the end there is a finite amount of money-per-year, thus a finite number of personnel-hours, equipment, lab space, computer time and engineering support. In the end it is going to be portioned out, one way or another.

The problem is in judging the unknowns:

1) How many $ are we away from an executable falsifiable proposal?

2) How much time and money will it cost?

3) How *likely* is a proof/refutation?

4) How much impact will a proof/refutation of the hypothesis have on the field in question?

Ultimately we need stats we are unlikely to ever develop!

In such a case, one solution is to sidestep the reasoning and engage in something like the (old) university model: Professors get paid to work on whatever they feel like, as long as they want, in return for spending half their week teaching students. That can include some amount for experimentation and equipment. "Whatever they want" can include the work of other researchers; so they can collaborate and pool resources. This kind of low-level "No Expectations" funding can be provided by governments.

Additional funding would not be provided until the work has developed to the point that the above "unknowns" are plausibly answered; meaning when they DO know how to make a falsifiable proposal for an experiment.

As for the thousands of dead-ends they might pursue: that's self-regulating; they would still like to work on something relevant to experiments. But if their goal is just to invent new mathematics or whatever else bears no relationship to the real world, that's fine. Not all knowledge requires practical application.

It's nice that you toy in your own terms with the familiar philosophical notion of under-determination by experience (remarked on by the physicist Duhem as early as the 19th century, and leveraged against Popper and positivist philosophies in the 1950s). Maybe the problem is more widespread than you think, and I would be tempted to add a (6): coming up with clear-cut falsification criteria requires assuming an interpretative and methodological framework.

To take just the most extreme cases, one should exclude the possibilities that experimenters are systematically hallucinating, and other radical forms of skepticism. But this also includes a set of assumptions that are part of scientific methodology on how to test hypotheses, what kinds of observations are robust, what statistical analysis or inductive inferences are warranted, etc.

These things are shared by scientists because they belong to the same culture, and the general success of science brings confidence in them. Yet all these methodological and interpretative principles are not, strictly speaking, falsifiable.

Now the problem is: if what counts as falsification rests on non-falsifiable methodological assumptions, how can anything be absolutely falsifiable? And I think the answer is that nothing is strictly falsifiable, but only falsifiable relative to a framework that is acceptable for its general fruitfulness.

" one should exclude the possibilities that experimenters are systematically hallucinating,"

Yes, we're all hallucinating that our computers, which confirm the quantum behaviour of the electron quadrillions of times a second, are working; and that our car satnavs, which confirm time dilation in GR trillions of times a second, are working.

*Real* scientists are the only people who are *not* hallucinating.

Steven Evans,

You're missing the point. I'm talking about everything that you have to implicitly assume to trust experimental results; in your example, the general reliability of computers and the fact that they indeed do what you claim they do.

I personally don't doubt it. It seems absurd of course. The point is that any falsification ultimately rests on many other assumptions; there's no falsification simpliciter.

@Steven Evans maybe you're under the impression that I'm making an abstract philosophical point that is not directly relevant to how science works or should work. But no: take the OPERA experiment that apparently showed that neutrinos travel faster than light. It took several weeks for scientists to understand what went wrong, and why relativity was not falsified. If anything, this shows that falsifying a theory is not a simple recipe, not just a matter of observing that the theory is false. (And the bar can be more or less high depending on how well the theory is established, so pragmatic epistemic cost considerations enter into the picture.)

My point is simply this: what counts as falsification is not a simple matter; a lot of assumptions and pragmatic aspects come in. Do you disagree with this?

@Quentin Ruyant

"maybe you're under the impression that I'm making an abstract philosophical point that is not directly relevant to how science works or should work."

You are. Take 1 kilogram of matter and turn it into energy. Does the amount of energy equal mc^2? Put an atomic clock on an orbiting satellite. Does it run faster than an atomic clock on the ground by the amount predicted by Einstein? Building the instruments is presumably tricky; checking the theories, not so much. OPERA was a mistake, and everybody knew it was a mistake.
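As a rough sanity check of the satellite-clock example (my own back-of-envelope numbers, assuming a circular GPS-like orbit and ignoring Earth's rotation), the predicted clock offset takes only a few lines to compute:

```python
# Back-of-envelope check of the satellite-clock test mentioned above:
# how much faster does a GPS clock run per day, combining the special-
# and general-relativistic terms? (Simplified: circular orbit, ground
# clock at mean Earth radius, non-rotating Earth.)
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 299792458.0       # speed of light, m/s
r_ground = 6.371e6    # mean Earth radius, m
r_orbit = 2.6571e7    # GPS orbital radius (~20,200 km altitude), m
day = 86400.0         # seconds per day

# Gravitational term: higher potential -> satellite clock runs faster.
grav = (GM / c**2) * (1/r_ground - 1/r_orbit) * day * 1e6  # microsec/day

# Velocity term: orbital speed -> satellite clock runs slower.
v2 = GM / r_orbit     # v^2 for a circular orbit
vel = (v2 / (2 * c**2)) * day * 1e6                        # microsec/day

net = grav - vel
print(f"gravitational: +{grav:.1f} us/day")  # roughly +46
print(f"velocity:      -{vel:.1f} us/day")   # roughly -7
print(f"net:           +{net:.1f} us/day")   # roughly +38
```

Left uncorrected, an offset of this size would accumulate into position errors of kilometers per day, which is why the GPS system builds the relativistic correction in.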

Where there is an issue, it is not a subtle point about falsifiability; it is far more mundane: people telling lies about there being empirical evidence to support universal fine-tuning or string theory, or people claiming the next-gen collider is not a hugely expensive punt. The people saying this are frauds. In the medical or legal professions they would be struck off and unable to practise further.

"To make predictions you always need a concrete model..."

The problem is that qualitative (concrete) modeling is a lost art in modern theoretical physics. All of the emphasis is on quantitative modeling (math). The result is this:

"...large numbers of models that all make different predictions. This is, unfortunately, the current situation in both cosmology and particle physics. It documents that these models are strongly underdetermined. In such a case, no further models should be developed because that is a waste of time."

It *is* a waste of time to develop more quantitative model variants on the same old concrete models, but what is desperately needed are new qualitative models. All of the existing quantitative models are variations on qualitative models that have been around for the better part of a century (the big bang and quantum theory). The qualitative models are the problem.

Unfortunately, with its mono-focus on quantitative analysis, modern theoretical physics does not appear to have a curriculum or an environment conducive to properly evaluating and developing new qualitative models.

I want to be clear that I am not suggesting the abandonment of quantitative for qualitative reasoning. What is crucial is a rebalancing between the two approaches, such that in reflecting back on one another, the possibility of beneficial, positive and negative feedback loops is introduced.

The difficulty in achieving such a balance lies in the fact that qualitative modeling is not emphasized, if taught at all, in the scientific academy. Every post-grad can make new mathematical models. Nobody even seems to think it necessary to consider, let alone construct, new qualitative models.

At minimum, if the qualitative assumptions made a century ago aren't subject to reconsideration, "the crisis in physics" will continue.

Thanks for stating this so clearly.

Some non-falsifiable hypotheses are not optional. These are known as axioms or assumptions (aka religion), and no science is possible without them. For instance, cosmology would be dead without the unverifiable assumption (religious belief) that the laws of physics are universal in time and space.

ReplyDeleteScience = Observation + Assumptions, Facts Selection, Extrapolations, Interpretations…

Assumptions, Facts Selection, Extrapolations, Interpretations… = Sum of Axiomatic Beliefs

Sum of Axiomatic Beliefs = Religion …therefore,

Science = Observation + Religion

The demarcation problem of science versus pseudoscience was of course pondered long before Karl Popper. Aristotle, for one, was quite interested in solving it. There is no indication that this quandary will ever be satisfactorily resolved. Though I do consider Popper's "falsifiability" heuristic to be reasonably useful, I'm not hopeful about the project in general.

I love it when scientists remove their science hats in order to put on philosophy hats! It's an admission that failure in philosophy causes failure in science. And why does failure in philosophy cause failure in science? Because philosophy exists at a more fundamental level of reality exploration than science does. Without effective principles of metaphysics, epistemology, and value, science lacks an effective place to stand. (Apparently "hard" forms of science are simply less susceptible than "personal" fields such as psychology, though physics suffers here as well, given that we're now at the outer edges of human exploration in this regard.)

I believe that it would be far more effective to develop a new variety of philosopher rather than try to define a hard difference between "science" and "pseudoscience". The sole purpose of this second community of philosophers would be to develop what science already has: respected professionals with their own generally accepted positions.

Though small initially, if scientists were to find this community's principles of metaphysics, epistemology, and value useful places from which to develop scientific models, this new community should become an essential part of the system, or what might then be referred to as "post-puberty science".

One problem with this proposal is the use of the word "metaphysics". To me this carries connotations of God, religion, angels, demons and magic. It means "beyond physics," and in the world today it is synonymous with the "supernatural" (i.e. beyond natural) and used to indicate faith which is "beyond testable or verifiable or falsification".

I hear "metaphysics" and I run for the hills.

Unless their position on metaphysics is "there are no metaphysics", I cannot imagine why I would have any professional respect for them. Their organization would be founded on a cognitive error.

I think it is likely possible to develop a "science of science" by categorizing and then generalizing what we think are the failures of science, and why. From those one might derive or discover useful new axioms of science, self-evident claims upon which to rest additional reasoning about what is and is not "science".

Part of the problem may indeed be that we have not made such axioms explicit; and instead we rely on instinct and absorption of what counts as self-evident. That is obviously an approach ripe for error, and difficult to correct without formal definitions. Having something equivalent to the family tree of logical fallacies could be useful in this regard.

But that effort would not be separate from science, it would just be a branch of science: science modeling itself. That should not cause a problem of recursiveness or infinite descent; and we have an example of this in nature: each of us contains a neural model of ourself, which we use for everything from planning our movements to deciding what we'd enjoy for dinner, or what clothing we should buy, or what career to pursue.

Science can certainly model science, without having to appeal to anything above or beyond science. To some extent this has already been done. Those efforts could be revisited, revised, and expanded.

Dr. Castaldo,

I think you’d enjoy my good friend Mike Smith’s blog. After reading this post of Sabine’s he wrote an extensive post on the matter as well, and did so even before I was notified that Sabine had put up this one! I get the sense that you and he are similarly sharp. Furthermore I think you’d enjoy extensively delving into the various mental subjects which are his (and I think my) forte. Anyway I was able to submit the same initial comment to both sites. He shot back something similarly dismissive of philosophy btw.

https://selfawarepatterns.com/2019/04/25/the-relationship-between-usefulness-and-falsifiability/comment-page-1/#comment-29423

On metaphysics, I had the same perspective until a couple years ago. (I only use the “philosopher” modifier as a blogging pseudonym.) Beyond the standard speech connotation, I realized that “metaphysics” is technically meant to refer to what exists before one can explore physics… or anything really. A given person’s metaphysics might be something spiritual for example, and thus faith based. My own metaphysics happens to be perfectly causal. The metaphysics of most people seems to fluctuate between the two.

Consider again my single principle of metaphysics, or what I mean to be humanity’s final principle of metaphysics: “To the extent that causality fails (in an ontological sense rather than just epistemically mind you), nothing exists for the human to discover.”

All manner of substance dualists populate our soft sciences today. Furthermore, many modern physicists seem to consider wave function collapse to ontologically occur outside of causality, another instance of supernaturalism. I don’t actually mind any of this however. Some of them may even be correct! But once (or if) my single principle of metaphysics becomes established, these people would then find themselves in a club which resides outside of standard science. In that case I’m pretty sure that the vast majority of scientists would change their answer in order to remain in our club. (Thus I suspect that very few physicists would continue to take an ontological interpretation of wave function collapse, and so we disciples of Einstein should finally have our revenge!)

Beyond this clarification for the “metaphysics” term, I’m in complete agreement. Science needs a respected community of professionals with their own generally accepted principles of how to do science. It makes no difference if these people are classified as “scientist”, “philosopher”, or something else. Thus conscientious scientists like Sabine would be able to get back to their actual jobs. Or they might become associated professionals if they enjoy this sort of work. And there’s plenty needed here since the field is currently in need of founders! I hope to become such a person, and by means of my single principle of metaphysics, my two principles of epistemology, and my single principle of axiology.

Philosopher Eric:

“To the extent that causality fails (in an ontological sense rather than just epistemically mind you), nothing exists for the human to discover.”

I don't get the distinction. There is much to be said for the "shut up and compute" camp; though I don't like the name.

It is an approach that works, and has worked for millennia. We never had to know the cause of gravity in order to compute the rules of gravity. We may still not know the cause of gravity; there may be no gravitons, and I admit I am not that clear on how a space distortion translates into an acceleration.

Certainly when ancient humans were building and sculpting monoliths, they ran a "shut up and compute" operation; i.e. it makes no difference why this cuts stone, it does. The investigation can stop there.

Likewise, I don't have to believe in magic or the supernatural to believe the wavefunction collapses for reasons that appear random to me, or truly are random, or are in principle predictable but would require so much information to predict that prediction is effectively impossible.

That last is the case in predicting the outcome of a human throwing dice: Gathering all the information necessary to predict the outcome before the throw begins would be destructive to the human, the dice, and the environment!

"Shut up and compute" says ignore why; just treat the wavefunction collapse as randomized according to some distribution described by the evolution equations, and produce useful predictions of the outcomes.

Just like we can ignore why gravity is the way it is, why steel or titanium is the way it is, why granite is the way it is. We can test all these things to characterize what we need to know about them in order to build a skyscraper. Nor do we need to know why earthquakes occur. We can characterize their occurrence and strength statistically and successfully use that to improve our buildings.

Of course I am not dissing the notion of investigating underlying causations and developing better models of what contributes to material strength, or prevents oxidation, or lets us better predict earthquakes or floods.

But I am saying that real science does not demand causality; it can and has progressed without it. Human brains are natural modeling machines. I don't need a theory of why animals migrate on certain paths to use that information to improve my hunting success, and thus my survival chances. We didn't need to be botanists or geneticists to understand enough to start the science of farming and selective breeding for yields. It is possible to know that some things work reliably without understanding why they work reliably.

To my mind, it is simply false to claim that without causality there is nothing to know. There is plenty to know, and a true predictive science can be (and has been) built resting on foundations of "we don't know why this happens, but it does, and apparently randomly."

Well let’s try this, Dr. Castaldo. I’d say that there are both arrogant and responsible ways to perceive wave function collapse. The arrogant way is essentially the ontological stance, or “This is how reality itself IS”. The responsible way is instead epistemological, or “This is how we perceive reality”. The first makes absolute causal statements while the second does not. Thus the first may be interpreted as “arrogant”, with the second “modest”.

I’m sure that there are many here who are far more knowledgeable in this regard than I am, and so could back me up or refute me as needed, but I’ve been told that in the Copenhagen interpretation of QM, essentially written by Bohr and Heisenberg, they did try to be responsible. This is to say that they tried to be epistemic rather than ontological. But apparently the great Einstein would have none of it! He went ontological with the famous line, “God does not play dice”. So what happens in a psychological capacity when we’re challenged? We tend to double down and get irresponsible. That’s where the realm of physics seems to have veered into a supernatural stance, or that things happen in an ontological capacity, without being caused to happen.

So my understanding is that this entire bullshit dispute is actually the fault of my hero Einstein! Regardless, I’d like to help fix it by means of my single principle of metaphysics. Thus to the extent that “God” does indeed play dice, nothing exists to discover. And more importantly, if it becomes generally accepted, then the supernaturalists who reside in science today would find that they need to build themselves a club which is instead populated by their own kind! :-)

@Philosopher Eric

You seem to have an inflated view of philosophy and philosophers. I fully agree with you in so far as one ought not ignore what philosophers do and say. Excelling in other fields constrains one from interrogating the work of philosophers. Those who make that choice ought to accept their decision and refrain from the typical contemptuous language seen so often.

I have spent the last thirty years studying the foundations of mathematics. To be quite frank about it, I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience. From what I can tell, the main emphasis of philosophers in this arena over the last century has been to justify science as a preferred world view by crafting mathematics in the image of their belief systems.

Their logicians are even more pathetic. Hume's account of skepticism is good philosophy. It is also unproductive. To represent a metaphysical point of view and then invoke a distinction between syntax and semantics to claim one is not doing metaphysics is simply deceptive. We have a great deal of progress with no advancement.

You are correct that such matters cannot be sorted out without digging into the philosophical development of the subject matter. But what you are likely to find are people running around saying, "I don't believe that!". So what one has are contradictory points of view and different agendas.

That is what philosophers and their logicians have given to mathematics.

Should you disagree with me, what is logic without truth? One can claim that one is only studying "forms". But once one believes they have identified a correct form, one defends one's claims from the standpoint of belief. Philosophers and their logicians can never get away from metaphysics whether they care to admit it or not. But their pretensions to the contrary are simply lies.

Science fails because of naive beliefs with respect to truth, reality, and the inability to accept epistemic limitations. Philosophers have shown just as much willingness to fail along those same lines.

mls,

Thanks for your reply. I’ve dealt with a number of professional philosophers online extensively, and from that can assure you that they don’t consider me to inflate them. Unfortunately most would probably say the opposite, and mind you that I try to remain as diplomatic with them as possible. Your disdain for typical contemptuous language is admirable. They’re a sensitive bunch. Aren’t we all?

What I believe must be liberated in order to improve the institution of science is merely the subject matter which remains under the domain of philosophy. Thus apparently we’ll need two distinct forms of “philosophy”. One would be the standard cultural form for the artist in us to appreciate. But we must also have a form that’s all about developing a respected community with its own generally accepted understandings from which to found the institution of science.

So you’re a person of mathematics, and thus can’t stand how various interests defile this wondrous language, this monument of human achievement, by weaving it into their own petty interests? I hear you there. But then consider how inconsistent it would be if mathematics were instead spared. I believe that defiling things to our own interests needs to be acknowledged as our nature. I think it’s standard moralism which prevents us from understanding ourselves.

I seek to “fix science” not for that reason alone, but rather so that it will be possible for the human to effectively explore the nature of the human. I’d essentially like to help our soft sciences harden. Once we have a solid foundation from which to build, which is to say a community of respected professionals with their own associated agreements, I believe that many of your concerns would be addressed.

What is logic without truth? That’s exactly what I have. I have various tools of logic (such as mathematics) but beyond just a single truth, I have only belief. The only truth that I can ever have about Reality is that I exist. It is from this foundation that I must build my beliefs as effectively as I can.

" I am exhausted by the lunacy of both philosophers and scientists who think mathematics has any relationship to reality beyond one's subjective cognitive experience."

Wiles proved, via an isomorphism between modular forms and semi-stable elliptic curves, that there are no positive integer solutions to x^3 + y^3 = z^3.

Now, back in "reality", take some balls arranged into a cube, and some more balls arranged into another cube, put them all together and arrange them into a single cube. You can't. Why is that do you think?

Steven,

It seems to me that two equally sized cubes stacked do not, by definition, form a cube. Nor do three. Four of them however, do. It’s simple geometry. But I have no idea what that has to do with mls’ observation about the lunacy of people who believe that mathematics exists beyond subjective experience, or the mathematical proof that you’ve referred to. I agree entirely with mls: I consider math to merely be a human invented language rather than something that exists in itself (as platonists and such would have us believe). Do you agree as well? And what is the point of your comment?

@Steven Evans

Should you take the time to learn about my views, you would find that I am far more sympathetic with core mathematics than not. Get a newsgroup reader and load headers for sci.logic back to January 2019. Look for posts by "mitch".

I doubt that you will have much respect for what you read, but, you will find an account of truth tables based upon the affine subplane of a 21-point projective plane. Since there is a group associated with this affine geometry, this basically marries Klein's Erlangen program with symbolic logic in the sense of well-formedness criteria (that is, logical constants alone do not make a logical algebra). But this is precisely the kind of thing committed logicists will reject.

Now, Max Black presented a critical argument against mathematical logicians based upon a "symmetric universe". My constructions are similarly based upon symmetry considerations -- except that I am using tetrahedra oriented with labeled vertices.

Who knew that physicists had been inventing all sorts of objects on the basis of similar ideas, although they use continuous groups because they must ultimately relate to physical measurement?

For the last two weeks I have been associating collineations in that geometry with finite rotations in four dimensions using Petrie polygon projections of tesseracts.

And, as other posts in that newsgroup show, any 16 element group which carries a 2-(16,6,2) design can be mapped into this affine geometry.

So, I happen to think that logicians and philosophers have turned left and right into true and false. You must forgive me for criticizing physicists who publish cool mathematics as science without a single observation to back it up.

@Philosopher Eric

I don't mean stack the cubes(!), I mean take 2 cubes of balls of any size, take all the balls from both cubes and try to rearrange them into a single cube of balls. You can't, whatever the sizes of the original 2 cubes. The reason we know you can't do this is because of Wiles' proof of Fermat's Last Theorem: there are no positive integer solutions to x^3 + y^3 = z^3.

The point is that this is maths existing in reality, in contradiction to what you wrote: whether you know Wiles' theorem or not, you can't take 2 cubes-worth of balls and arrange them into a single cube.

There are 2 reasons that this theorem applies to reality:

1) The initial abstraction that started mathematics was the abstraction of number. So it is not a surprise when mathematical theorems, like Wiles', can be reapplied to reality.

2) Wiles' proof depends on 350 years-worth of abstractions upon abstractions (modular forms, semi-stable elliptic curves) from the time of Fermat, but the reason Wiles' final statement is still true is because mathematics deals with precise concepts. (Contrast with philosophy, which largely gets nowhere because they try to write "proofs" in natural language - stupid idea.)

Tl;DR Maths often applies to reality because it was initially an abstraction of a particular characteristic of reality.
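For what it's worth, the cubes-of-balls claim is easy to sanity-check by brute force for small numbers. Here is a quick illustrative sketch (my own, not from the comment thread; the function name is invented for illustration):

```python
# Brute-force sanity check of the claim: no two cubes of positive
# integers sum to another cube (the n = 3 case of Fermat's Last Theorem).
def cube_sum_counterexamples(limit):
    """Return all (a, b, c) with a^3 + b^3 = c^3 and 1 <= a <= b, c <= limit."""
    cubes = {c ** 3: c for c in range(1, limit + 1)}
    hits = []
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            total = a ** 3 + b ** 3
            if total in cubes:
                hits.append((a, b, cubes[total]))
    return hits

print(cube_sum_counterexamples(200))  # → [] : no way to merge two cubes of balls into one
```

A finite search of course proves nothing about all integers; that is exactly why Wiles' proof matters. But it does illustrate the correspondence between the theorem and the ball-arranging fact.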

"You must forgive me for critcizing physicists who publish cool mathematics as science without a single observation to back it up."

Fair criticism, and it is the criticism of the blog author's "Lost In Math" book. But that's not what you wrote originally. You wrote originally that it was lunacy to consider any maths as being real. O.K., arrange 3 balls into a rectangle. How did it go? Now try it with 5 balls, 7 balls, 11 balls, 13 balls,.. What shall we call this phenomenon in reality that has nothing to do with maths? Do you think there is a limit to the cases where the balls can't be arranged into a rectangle? My money is on not. But maths has nothing to do with reality. Sure.
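The balls-into-rectangles observation is primality in disguise: n balls form an r-by-c rectangle with both sides at least 2 exactly when n is composite. A minimal sketch of that correspondence (my own illustration; the helper name is invented):

```python
def can_form_rectangle(n):
    """True if n balls can be arranged into an r x c rectangle with r, c >= 2,
    i.e. if n has a nontrivial divisor (n is composite)."""
    return any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))

for n in (3, 5, 7, 11, 13, 12, 15):
    print(n, can_form_rectangle(n))  # the primes print False; 12 and 15 print True
```

Since Euclid showed there are infinitely many primes, the "my money is on not" bet is safe: the cases where the balls cannot be arranged into a rectangle never run out.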

Okay Steven, I think that I now get your point. You’re saying that because an idea expressed in mathematics is also displayed in our world, maths must exist in reality, and thus be more than a human construct. And actually you didn’t need to reference an esoteric proof in order to make your point. The same could be said of a statement like “2 + 2 = 4”. There is no case in our world where 2 + 2 does not equal 4. It’s true by definition.

But this is actually my point. Mathematics exists conceptually through a conscious mind, and so is what it is by means of definition rather than by means of causal dynamics of this world. It’s independent of our world. This is to say that in a universe that functions entirely differently from ours, our mathematics would still function exactly the same. In such a place, by definition 2 + 2 would still equal 4.

We developed this language because it can be useful to us. Natural languages such as English and French are useful as well. It’s interesting to me how people don’t claim that English exists independently of us, even though just as many “true by definition” statements can be made in it.

I believe it was Dr. Castaldo who recently implied to me that “Lost in Math” doesn’t get into this sort of thing. (My own copy of the book is still on its way!) In that case maybe this could be another avenue from which to help the physics community understand what’s wrong with relying upon math alone to figure out how our world works?

" that maths must exist in reality,"

You've got it the wrong way round. Maths is an abstraction of a property in physical reality. Even before humans appeared, it was not possible to arrange 5 objects into a rectangle.

" And actually you didn’t need to reference an esoteric proof "

The point is that modular forms and elliptic curves are still related to reality, because the axioms of number theory are based on reality.

"2 + 2 would still equal 4."

The concept might not arise in another universe. In this universe, the only one we know, 2+2=4 represents a physical fact.

"what’s wrong with relying upon math alone to figure out how our world works"

It's a trivial question. Competent physicists understand you need to confirm by observation.

Steven,

If you’re not saying that maths exists in reality beyond us, but rather as an abstraction of a physical property, then apparently I had you wrong. I personally just call maths a language and don’t tie it to my beliefs about the physical, though I can see how one might want to go that way. As long as you consider it an abstraction of reality then I guess we’re square.

The title of this column and the second paragraph appear to conflate theories and hypotheses. Theories can generate hypotheses, and hopefully do, but it is the hypothesis that should be falsifiable, and the question remains whether even a robustly falsified hypothesis has any impact on the validity of a theory. Scientists work in the real world, and in that real world, historically, countless hypotheses have been falsified -- or have failed tests -- yet the theories behind them were preserved, and in some cases (one thinks immediately of Pasteur and the spontaneous generation of life) the theory remains fundamental to this day.

At the same time, I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal. As Maxwell noted, in a strict Popperian test, you'd have to find an immortal human to falsify the hypothesis, and you'll wait a looooong time for that.

" I always remember philosopher Grover Maxwell's wonderful example of a very useful hypothesis that is not falsifiable: all humans are mortal."

And yet no-one so far has made it past about 125 years old, even on a Mediterranean diet. What useful people philosophers are.

I don't understand how 'all humans are mortal' is a useful hypothesis. It is pretty obvious to anybody reaching adulthood that other humans are mortal, and to most that they themselves can be hurt and damaged, by accident if nothing else. We see people get old, sick and die. We see ourselves aging. I don't understand how this hypothesis is useful for proving anything. It would not even prove that all humans die on some timescale that matters. It doesn't tell us how old a human can grow to be; it doesn't tell us how long an extended life we could live with technological intervention.

DeleteA hypothesis, by definition, is a supposition made as a starting point for further investigation. Is this even a hypothesis, or only

claimedto be a hypothesis?I will say, however, that in principle it is a

verifiablehypothesis; because it doesn't demand that all humans that will ever exist be mortal, and there are a finite number of humans alive today. So we could verify this hypothesis by bringing about the death of every human on Earth, and then killing ourselves; and thus know that indeed every human is mortal. Once a hypothesis is confirmed, then of course it cannot be falsified. That is true of every confirmed hypothesis; and the unfalsifiability ofconfirmedhypotheses is not something that worries us.Dr. Castaldo: "useful to prove anything" is not a relevant criterion for being good science. That said, much of human existence entails acting on the assumption that all humans are mortal, so I think that Maxwell's tongue-in-cheek example is of a hypothesis that us extremely useful. Your comment about how the hypothesis is in principle verifiable (because there are a finite number of humans) is, forgive me, somewhat bizarre -- the classic examples of good falsifiable hypotheses, such as "all swans are white" would be equally verifiable for the same reason, yet those examples were invented to show that it is the logical form of the hypothesis that Popper and falsificationists appeal to, not the practicalities of testing. Moreover, while it could be arguable that the number of anything in the universe is finite, one issue with humans (and swans) is that the populations are indefinite in number -- as Maxwell commented, you don't know if the next baby to be born will be immortal, or the 10 millionth baby to be born.

@Steven Evans: while your observation about human longevity is true (so far), Maxwell's humorous point -- which, by the way, was a critique of Popper -- was that you cannot be absolutely certain that the next child born will be mortal, just as Popper insisted that the next swan he encountered could, just possibly, be black. Maxwell's point was about how you would establish a test of this hypothesis. In Popper's strange world of absolutes, you'd have to find an immortal human. Maxwell noted that here in the real world of actual science, no one would bother, especially since markers of mortality pile up over the lifespan.

@DKP: I am not the one that claimed it was a useful hypothesis. Once that claim is made, it should be provable: What is it useful for? The only thing a hypothesis can be useful for is to prove something true or false if it holds true or fails to hold true; I am interested in what that is: Otherwise it is not a useful hypothesis. In other words, it must have consequences or it is not a hypothesis at all.

Making a claim that is by its nature unprovable does not make it a hypothesis. I can't even claim every oxygen atom in the universe is capable of combining with two hydrogen atoms, in the right conditions, to form a molecule of water. I can't claim that as a hypothesis; I can't prove it true for every oxygen atom in the universe without that also being a very destructive test, UNLESS I rely on accepted models of oxygen and hydrogen atoms, and their assertions that these apply everywhere in the universe, which they also cannot prove conclusively.

Maxwell's "hypothesis" is likewise logically flawed; but if we resort to the definition of what it is to be human, then it is easily proven, because it is not a hypothesis at all but a statement of an inherent trait of being human; just like binding with hydrogen is a statement of an inherent trait of the atom we call oxygen.

I know Maxwell's point was about how you would establish a test of this hypothesis; MY point was that Maxwell's method is not the only method, is it? If all living humans should die, then there will be no future humans, and we will have proved conclusively that all humans are mortal. In fact, in principle, my method of confirming the truth of the hypothesis is superior to Maxwell's method of falsifying it, because mine can be done in a finite amount of time (since there are a finite number of humans alive at any given time, and it takes a finite amount of time to kill each one of us). And confirmation would obviously eliminate the need for falsification.

Of course, I am into statistics and prefer the statistical approach; I imagine we (humanity, collectively throughout history) have exceeded an 8-sigma confirmation by now on the question of whether all humans are mortal; so I vote against seeking absolute confirmation by killing everyone alive.
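As a numerical aside on what an "8 sigma" figure would mean: under the usual one-sided Gaussian convention (my assumption; the comment gives no details), the tail probability can be computed directly from the complementary error function. A rough sketch:

```python
import math

def one_sided_tail(sigma):
    """Gaussian probability of a fluctuation at least `sigma` standard
    deviations above the mean (one-sided convention)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

print(f"{one_sided_tail(5):.2e}")  # ~2.87e-07, the particle-physics 'discovery' threshold
print(f"{one_sided_tail(8):.2e}")  # ~6.2e-16, less than one chance in a quadrillion
```

So an 8-sigma confirmation corresponds to a misfire probability on the order of 10^-16, which makes the proposed destructive verification rather hard to justify.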

@DKP

" In Popper's strange world of absolutes, you'd have to find an immortal human. "

Or kill all humans.

The point is that you can apply falsifiability in each instance - run a test that confirms the quantum behaviour of the electron. Then carry out this test 10^100000000000 times and you now have an empirical fact, which is certainly sufficient to support a business model for building a computer chip based on the quantum behaviour of the electron.

By the standards of empirical science, there will never be an immortal human as the 2nd law will eventually get you, even if you survive being hit by a double-decker:

https://www.youtube.com/watch?v=21uuo8Nsj18

As a society, we would be better off giving most "philosophers" a brush and telling them to go and sweep up leaves in the park. They could still ponder immortal humans and other irrelevant, inane questions while doing something actually useful.

@Steven Evans: Perhaps you missed the point of Maxwell's example, which was to suggest that at least one particular philosopher was irrelevant, by satirizing his simplistic notion of falsification. As a scientist myself, and not a philosopher, I found myself in agreement with Maxwell, and 50 years later I still find historians of science to offer more insight into the multiple ways in which "science" has worked and evolved -- while philosophers still wrestle, as Maxwell satirized, with simplistic absolutes.

More seriously, your proposed test of the behavior of the electron makes the point I started with in my first comment: theories are exceedingly difficult to falsify in the way that Sabine's article here suggests; efforts at falsification focus on hypotheses.

There is an intriguing name (proposal) for a new book by science writer Jim Baggott (@JimBaggott): A Game of Theories. Theory-making does seem to form a kind of game, with 'falsifiability' just one of the cards (among many) to play.

And today (April 26) is Wittgenstein's (language games) birthday.

Very manipulative article. All the traditional attempts of theoreticians to dodge the question are there.

But to me it was even more amusing to see an attempt to bring in Popper and not to oppose Marx. But since Popper was explicitly arguing against Marx' historicism they had to make up "Stalinist history" (what would it even be?).

I mentioned neither Popper nor Marx.

Ah, not you of course. The original article.

Hi Sabine, you claim that string theory makes predictions; which prediction do you have in mind? Peter Woit often claims that string theory makes no predictions ... "zip, zero, nada" in his words.

Thanks

Jan

Jan,

It's in the FAQ.

Thanks, that FAQ #1 is a little short on specifics. As a result I am still puzzled. As far as string cosmology goes, I would question whether it is so flexible you can get just about anything you want out of it.

String cosmology is not string theory. You didn't ask for specifics.

DeleteSabine Said…

ReplyDeletedebating non-observable consequences does not belong into scientific research. Scientists should leave such topics to philosophers or priests.Of course you are correct, I’m wondering if you’ve also gotten the impression some scientists may even be using non-observable interpretations as a basis for their research?

Thank you. Your writing is clear and amusing, as usual.

I'm glad to see that you allow for some nuance when it comes to falsifiability. There is a distinction between whether or not a non-falsifiable hypothesis is "science", and whether or not the practice of a particular science requires falsifiability at every stage of its development, even over many decades. I am glad string theory was pursued. I am also glad, but only in retrospect, that I left theoretical physics after my undergraduate degree and did not waste my entire career breaking my brain doing extremely difficult math for its own sake. Others, of course, would not see this as a waste. But how much of this will be remembered?

Or to quote Felix Klein:

"When I was a student, abelian functions were, as an effect of the Jacobian tradition, considered the uncontested summit of mathematics and each of us was ambitious to make progress in this field. And now? The younger generation hardly knows abelian functions."

Dr. Hossenfelder,

So a model, e.g. string cosmology, is a prediction?

Korean War,

A model is not a prediction. You make a prediction with a model. If the prediction is falsified, that excludes the model. Of course the trouble is that if you falsify one model of string cosmology, you can be certain that someone finds a fix for it and will continue to "research" the next model of string cosmology. That's why these predictions are useless: It's not the model itself that's at fault, it's that the methodology to construct models is too flexible.

Dr. Hossenfelder,

Thanks for your response; I thought that was the case. If this comment just shows ignorance, please don't publish it.

If it might be of use, my question arose because Jan Reimera asked for a specific string theory prediction to refute Peter Woit's claim that none exist. After reading the FAQ, I couldn't see that it does this, unless the string cosmology model is either sufficient in itself or can be assumed to reference already published predictions.

String theory generically predicts string excitations, which is a model-independent prediction. Alas, these are at too high energies to actually be excited at energies we can produce, etc etc. String cosmology is a model. String cosmology is not the same as string theory.

Hi, Sabine.

I trust that everything is going well.

I enjoyed your post.

All things considered,

I don't know how you do that.

Anyway,

I'll be plain,

I'm an experimental scientist.

The term ' falsification '

(for me) belongs to the realm

of theory.

when I run an experiment

the result is obvious

- and observable.

When you brought up Karl (Popper), were you making a statement on 'critical rationalism'? I hope not.

(in the quantum realm, you will find a maze)

At any rate, You struck me

with the words ' I start working on an idea

and then ...

You know me. (2 funny)

In parting, for You

I have a new moniker

for Your movement.

- as a # , tee-shirts,

etc.

ready?

it's -- DISCERN.

(a play on words)

not to mean 'Dis-respect'

CERN

In the true definition

of the word:

' to be able to tell the

difference.'

say, ... between a good idea

- and a bad one.

Once again,

Love Your Work.

- All Love,

I did not "bring up Popper." How about reading what I wrote before commenting?

Wasn't the argument that atomism, arguably one of the most productive theories of all time, wasn't falsifiable? Of course it was ultimately confirmed, which is not quite the same thing - it just took 2000-plus years.

ReplyDelete@Lawrence: Off the top of my head: Perhaps the statistical distributions are wrong, and thus the error bars are wrong. I don't know anything about how physicists have come to conclusions on distributions (or have devised their own), but I've done work on fitting about 3 dozen different statistical distributions, particularly for finding potential extreme values; and without large amounts of data it is easy to mistakenly think we have a good fit for one distribution when we know the test data was generated by another. Noise is another factor, if the data being fitted is noisy in any dimension; including time.

For example, in the generalized extreme value distribution, used in real-life engineering to predict the worst wind speeds, flood levels, or, in aviation, the extent of crack growth in parts due to aviation stressors (and thus time to failure), minor stochastic errors in the values can change things like the shape parameter in ways that wildly skew the predictions.

Even computing something like a 100-year flood level means sorting 100 samples of the worst flood per year. The worst of all would be assigned the rank index 100/101 (i/(N+1) is its expected value on the probability axis), but that can be wrong.

The worst flood in 1000 years may have occurred in the last 100 years. There is considerable noise in both dimensions; the rank values and the measured values, even if we fit the correct distribution.
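The rank-index point above can be sketched in a few lines (a toy simulation; the distribution, seed, and sample size are my own choices for illustration):

```python
import random

# Toy simulation of the plotting-position point: the largest of N
# annual maxima is assigned probability i/(N+1), but the event it
# represents can be much rarer (or more common) than that.
random.seed(1)
N = 100
annual_maxima = [random.expovariate(1.0) for _ in range(N)]

# i/(N+1) plotting positions for the sorted sample.
positions = [i / (N + 1) for i in range(1, N + 1)]

# The worst observed flood gets rank probability 100/101 ~ 0.990,
# i.e. it is *treated* as (roughly) the 100-year event...
print(positions[-1])
# ...even though its true exceedance probability is unknown: the
# worst flood in 1000 years may have occurred in these 100 years.
```

With only 100 block maxima, the top rank is always pinned near 0.99, regardless of how rare the underlying event actually was.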

There is also the problem of using the wrong distribution; I believe I have seen this in medical literature. Weibull distributions can look very much like a normal curve, but they are skewed and have a lower limit (a reverse Weibull has an upper limit). They are easily confused with Fréchet distributions. But they can give very different answers on exactly where your confidence levels (and thus error bars) are for 95%, 99%, or 99.9%.

A fourth possibility is that the assumption of what the statistical distribution should even be is in error. It may depend upon initial conditions in the universe, or there may be too much noise in the fitting, or too few samples to rule out other distributions prevailing. In general, the assumptions made in order to compute the error bars may be in error.

I can't comment too much on the probability and statistics. To be honest, this has been my least favorite area of mathematics since my early years. I know just the basic stuff, enough to get me through.

With Hubble data this trend has been there for decades. Telescope redshift data have for decades been in the 72 to 74 km/sec/Mpc range. The most recent Hubble result is 74.03±1.42 km/sec/Mpc. The CMB value is now based on the ESA Planck spacecraft data, which is consistent with the prior NASA WMAP spacecraft data, and this is very significantly lower, around 67.66±0.42 km/sec/Mpc. Other data tend to follow a similar trend. Over the last 5 to 10 years this gap between the two has been growing.

I would imagine there are plenty of statistics pros who eat the subject for lunch. I question whether some sort of error has gotten through their work.

Each brain creates a model of the inside and the outside. Each of us calls that model reality. But it's just a model. Now we create models of parts of the model that might or might not fit the first model. It's a bit of a conundrum.

Personally I believe that it's all about information. Whoever makes a theory that takes all of that into account will reap the Nobel Prize. That's the next step.

"An hypothesis that is not falsifiable through observation is optional. You may believe in it or not."

One has no reason to think it is true as an empirical fact. Believing it in this case is just delusion (see religion).

The issue is simply honesty. People who claim there is empirical evidence for string theory, or fine-tuning of the universe, or the multiverse, or people who claim that the next gen collider at CERN is anything but a massively expensive, completely unprecedented punt are simply liars. It's easy to see when you compare with actual empirical facts, which in physics are often being confirmed quintillions of times a second in technology (quantum behaviour of electron in computer chips, time dilation in satnav, etc.)

How can someone honestly claim that universal fine-tuning is physics just like E=mc^2? They can't - they are lying. Where taxpayers' money is paying for these lies, it is criminal fraud.

I realise that the notion of "model" is too subtle for someone like me, fixated as I am on playing with parsing guaranteed Noether symmetries, with Ward-like identities, upon field equations from action principles... So the Equivalence Principle is in itself predictive, in that it need not be supplemented with constitutive equations (like the model of the susceptibility of the medium in which a Maxwell field source resides, say, or the model of a star) to describe the materiality of the inertial source?

ReplyDelete@Philosopher Eric

Nice response. I think your initial remarks sparked a reaction rather than a response on my part. Your last paragraph expresses an essential problem. One's first assumption, then, ought to be that one is not alone. And, science as a community enterprise requires something along the lines of Gricean maxims.

This is completely undermined when, for the sake of a logical calculus, philosophers pretend that words are to be treated as mere parameters. Tarski explicitly rejected this methodology in his paper on the semantic conception of truth. Yet, those who invoke the distinction between semantics and syntax as some inviolable principle regularly invoke Tarski as the source of their views (one should actually look to Carnap as the source of such extreme views).

This is the kind of thing I find so disturbing where philosophy, logic, and mathematics intersect. There is a great deal of misinformation in the literature.

There is a great deal that needs "fixing". But the received paradigms are largely defensible. It is not as if they are not the product of highly intelligent practitioners.

The difficulty of detecting gravitons raises a related question: what counts as a detection? Saying that it must be detected in a conventional particle physics experiment is a rather ad hoc criterion. If all the knowledge we have today already implies the existence of the graviton, then that should count as it having been detected.

ReplyDeleteThe same can be said about LIGO's detection of gravitational waves. The existence of gravitational waves was already implied by the detection of the decaying orbits of orbiting pulsars. Or one may argue that this was in turn a prediction of GR which had ample observational support before the observation of the orbiting pulsars.

Sean Carroll wrote a blogpost about this:

https://www.preposterousuniverse.com/blog/2018/01/17/beyond-falsifiability/

He is not a crackpot. Maybe you two could have a podcast / youtube-discussion about it?

In practice, calls to remove falsifiability are intended to support string theory, fine-tuning and the multiverse as physics. They are not physics, merely speculation, and the people claiming they are physics *are* crackpots. Remove falsifiability and just watch all the loonies swarm in with their ideas that "can't be disproved" and are "compatible with observations". There's nothing wrong with speculation but it is important that one is aware it is speculation otherwise you end up with the situation as in string theory where too much money and too many careers have been wasted on it. (Or philosophy where several thousand years have been wasted.)

@Steven Evans

I assure you that we are, for the most part, on the same side of these issues. Your arguments, however, are very much like those of the foundations community, who challenge dissent by demanding that a contradiction to their views be shown. In 1999, Pavicic and Megill (the latter known for the metamath program) showed that propositional logic is not categorical and that the model faithful to the syntactic structure of the logic is not Boolean. So the contradiction demand is silly and simplistic.

You are making arguments on the basis of 'abstractions'. Where exactly do these abstractions reside in time and space? Or, as many philosophers do, are you speaking of a realm of existence beyond time and space? Indeed, Tarski's semantic conception of truth properly conveys the intentions we ascribe to correspondence theories of truth. So, if we state that some abstraction is meaningful with respect to the truth of our scientific theories, we must account for the existence of the objects denoted by our language terms.

Either you are claiming realms of existence which I shall not concede to you, or, you can show me "the number one" as an existent individual.

Most of my acquaintances do not have formal education. When they ask me to explain my interest, I remind them of just how often one hears that "mathematics is the language of science". So, in a very crude sense, what is true in science depends on the nature of truth in mathematics. I expect that you will disagree with that view. But, I do not think you will be able to demonstrate the substantive existence of the abstractions you are invoking to challenge me.

You may have problems with the very publications I mentioned because we share a similar sense of what constitutes science. But I see the kernel of the problem in the very statements you are making about the nature of mathematics.

It is not so much that I disagree with you, it is that your positions are not defensible. You need to stipulate a theory of truth. You need to stipulate which conception of truth is applied under that theory. You need to stipulate logical axioms. You need to stipulate axioms for your mathematical theory. You need to decide whether or not you are following a formalist paradigm. If not, you will have to accommodate substitutions in the calculus with a strategy to warrant substitutions. If so, you will be faced with the problem of non-categoricity.

Dr. Hossenfelder discussed this last problem in her book when considering Tegmark's suggestion that all interpretations be taken as meaningful.

It is just not as simple as you would like it to be.

I've no idea what the correct logical terms are, but arithmetic is a physical fact. I can do arithmetic with physical balls, add them, subtract them, show prime numbers, show what it means for sqrt(2) to be irrational, etc., etc. This maths exists physically, and it is this physical maths that is the basis of abstract maths. Physical arithmetic obeys the axioms of arithmetic and the logical steps used to prove theorems are also embodied in physical arithmetic. Of course - because arithmetic and logic are observed in the physical world, that's where the ideas come from.

Of course, philosophers can witter on at length about theoretical issues with what I have written, but they will never be able to come up with a concrete counter-example. They will nit-pick. I will re-draft what I have written. They will nit-pick some more, I will re-draft some more. And 2,000 years later we will have got nowhere, yet still it will be physically impossible to arrange a prime number of balls into a rectangle.
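For what it's worth, the rectangle claim is easy to check mechanically (a small sketch; `forms_rectangle` is my own illustrative helper):

```python
# n objects can be arranged into a proper rectangle (both sides >= 2)
# exactly when n = a * b for some a, b >= 2, i.e. when n is composite.
# So a prime number of balls never forms one.
def forms_rectangle(n: int) -> bool:
    """True if n has a divisor pair a * b = n with a, b >= 2."""
    return any(n % a == 0 for a in range(2, int(n**0.5) + 1))

non_rectangular = [n for n in range(2, 20) if not forms_rectangle(n)]
print(non_rectangular)  # -> [2, 3, 5, 7, 11, 13, 17, 19], the primes
```

The "physical arithmetic" here is just the observation that primality and non-rectangularity are the same property of a count.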

" So, in a very crude sense, what is true in science depends on the nature of truth in mathematics."

Again, you've got it the wrong way round. Maths comes from the physical.

Anyway, the issue of this blog post, falsifiability, is in practice an issue with people trying to suspend falsifiability to support string theory, fine-tuning and the multiverse. In more extreme cases, it is about philosophers and religious loonies claiming they can tell us about the natural world beyond what physics tells us. These people trying to suspend falsifiability are all dishonest cranks. That is why falsifiability is important, not because of any subtleties. There are straight-up cranks, even amongst trained physicists, who want to blur the line between "philosophy"/"religion" and physics and claim Jesus' daddy made the universe. Falsifiability stops these cranks getting their lies on the physical record.

Hi Sabine,

Sorry for a late reply.

(everyone's busy)

All apologies for the misunderstanding.

1) I did read your post.

2) I know you didn't mention him by name, but

(in my mind) I don't see how one can speak of

'falsifiability' and not

'bring up' Karl Popper.

3) In the intro to your post you said " I don't know why we should even be talking about this".

I agreed.

... and then wondered why

we were.

I thought you might be making a separate statement of some kind.

At any rate,

I'm off to view your new video while I have time.

(can't wait)

Once again,

Love Your Work

All love,

A.C.

Maths does exist in reality beyond us. Of course it does, because Maths comes from a description of reality. 5 objects can't be arranged into a rectangle whether human mathematicians exist or not.

Steven,

I'm not going to say that you're wrong about that. If you want to define maths to exist beyond us given that various statements in it are true of this world (such as that 5 points cannot form a rectangle, which I certainly agree with), then yes, maths does indeed exist. I'm not sure that your definition of "exists" happens to be all that useful, however. In that case, notice that English and French also exist beyond us, given that statements can be made in these human languages which are true of this world.

The term “exists” may be defined in an assortment of ways, though when people start getting platonic with our languages, I tend to notice them developing all sorts of silly notions. Max Tegmark would be a prominent example of this sort of thing.

Sabine Hossenfelder posted (Thursday, April 25, 2019):

"In physics we work with theories. The theories themselves are based on axioms, that are mathematical requirements or principles, e.g. symmetries or functional relations."

Are theories also based on principles for how to obtain empirical evidence, such as, famously, Einstein's requirement that

»All our space-time verifications invariably amount to a determination of space-time coincidences {... such as ...} meetings of two or more of these material points.«?

"To make predictions you always need a concrete model, and you need initial conditions."

As far as this refers to experimentally testable predictions, this is a very remarkable (and, to me, agreeable and welcome) statement, contrasting with (wide-spread) demands that "scientific theories ought to make experimentally testable predictions", and claims that certain theories did make experimentally testable predictions.

However: is there a principal reason for considering "[concrete] initial conditions" as separate from "a [or any] concrete model", and not as part of it?

Sabine Hossenfelder wrote (2:42 AM, April 27, 2019):

"A model is not a prediction. You make a prediction with a model."

Are concrete, experimentally falsifiable predictions part of models?

"if you falsify one model [...] someone [...] will continue to 'research' the next model"

I find this description perfectly agreeable and welcome; yet it also seems very remarkable because it appears to contrast with (wide-spread) demands that "scientific theories ought to be falsifiable", and claims that certain theories had been falsified.

"That's why these predictions are useless: [...]"

Any predictions may still be used as rationales for economic decisions, or bets.

mls: "Where exactly do these abstractions reside in time and space?"

Originally the abstractions were embodied in mental models, made of neurons. Now they are also on paper, in textbooks, as a way to program and recreate such neural models.

Math is just recursively abstracting abstractions. When I count my goats, each finger stands for a goat. If I have a lot of goats, each hash mark stands for one finger. When I fill a "hand", I use the thumb to cross four fingers, and start another hand. Abstractions of abstractions.

Math is derived from reality, and built to model reality; but the rules of math can be extended, by analogy, beyond anything we see in reality. We can extend our two dimensional geometry to three dimensions, and then to any number of dimensions; I cluster mathematical objects in high dimensional space fairly frequently; it is a convenient way to find patterns. But I don't think anybody is proposing that reality has 143 dimensions, or that goats exist in that space.

So math can be used to describe reality, or because the abstractions can be extended beyond reality, it can also be used to describe non-reality.

If you are looking for "truth", that mixed bag is the wrong place to look. Even a simple smooth parabolic function describing a thrown object falling to earth is an abstraction.

If all the world is quantized, there is no such thing: the smooth function is just an estimator of something taking quantum jumps in a step-like fashion, even though the steps are very tiny in time and space, so the progress appears to be perfectly smooth.

To find truth, we need to return to reality, and prove the mathematics we are using describes something observable. That is how we prove we are not using the parts of the mixed bag of mathematics that are abstractions extended beyond reality.

@Steven Evans

In response to David Hume's "An Enquiry Concerning Human Understanding" Kant offered an account of objective knowledge grounded in the subjective experience of individuals. He distinguished between mathematics (sensible intuition) and logic (intelligible understanding). But to take this as his starting point he had to deny the actuality of space and time as absolute concepts. He took space to correspond with geometry and time to correspond with arithmetic. The relation to sensible intuition he claimed for these correspondences is expressed in the sentences,

"Time is the form of inner sense."

"Space, by all appearances, is the form of outer sense."

The qualification in the second statement reflects the fact that the information associated with what we do not consider as part of ourselves is conditioned by our sensory apparatus before it can be called a spatial manifold. Hence, external objects are only known through "appearances".

This certainly provides a framework by which mathematics can be understood in terms of descriptions related to the reality of experience. But it does not provide for a reality outside of our own. This, of course, is why I acknowledged Philosopher Eric's knowledge claim in his response to me. You seem to be assuming that an external reality substantiates the independent existence of your descriptions.

The Christians I know use the same strategy to assure themselves of God's existence and the efficacy of prayer.

Kant's position on geometry is one instance of misinformation in the folklore of mathematical foundations. But, that does not really affect many of the arguments used against him. Where in sensible experience, for example, can one find a line without breadth? Or, if mathematics is grounded in visualizations, what of optical illusions? These criticisms are not without merit.

Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined. Modern analytical philosophy recovers this sense of necessity by reducing mathematics to a priori stipulations presentable in formal languages with consequences obtained by rules for admissible syntactic transformations.

Any relationship with sensible intuition is eradicated. What is largely lost is the ability to account for the utility of mathematics in applications.

The issues are just not that simple. And they were alluded to by George Ellis in Dr. Hossenfelder's book.

@mls:

"Time is the form of inner sense." / "Space, by all appearances, is the form of outer sense."

Kant sounds utterly ridiculous, and these sound like trying to force a parallelism that does not exist. These definitions have no utility I can fathom.

mls:

"Where in sensible experience, for example, can one find a line without breadth?"

Points without size and lines without breadth are abstractions used to avoid the complications of points and lines with breadth, so our answers (say, about the sums of angles) are precise and provable. A line without breadth is the equivalent of a limit: if we reason using lines with breadth, we must give the breadth a value, say W. Then our answer will depend on W. The geometry of lines without breadth is what we get as W approaches 0, and this produces precise answers instead of ranges that depend on W.

mls:

"Or, if mathematics is grounded in visualizations, what of optical illusions?"

Mathematics began grounded in reality. Congenitally blind people can learn and understand mathematics without visualizations. Those are shortcuts to understanding for sighted people, not a necessity for mathematics, so optical illusions are meaningless here. Thus, contrary to your assertion, those criticisms are indeed without merit. Mathematics began by abstracting things in the physical world, but by logical inference it has grown beyond that in order to increase its utility.

mls:

"Any relationship with sensible intuition is eradicated."

Not any relationship. Mathematics can trump one's sensible intuition; that is a good thing. Our brains work by "rules of thumb"; they work with neural models that are probabilistic in nature and therefore not precise. Mathematics allows precise reasoning and precise predictions, some beyond the capabilities of "intuition".

Dr. Hossenfelder recently tweeted an article on superconductivity appearing in stacked graphene sheets, with one rotated by exactly 1.1 degrees with respect to the other. This effect was dismissed by many researchers out of hand; their intuition told them the maths predicting something would be different were wrong. But it turns out the maths were right; something (superconductivity) does emerge at this precise angle. Intuition is not precise, and correspondence with intuition is not the goal; correspondence with reality is the goal.

mls:

"What is largely lost is the ability to account for the utility of mathematics in applications."

No, it isn't; mathematics has been evolving since the beginning to have utility and applications. I do not find it surprising that when our goal is to use mathematics to model the real world, by trial and error we find or invent the mathematics to do that, and then have successes in that endeavor.

What is hard to understand about that? It is not fundamentally different than wanting to grow crops and by trial and error figuring out a set of rules to do that.

mls:

"The issues are just not that simple."

I think they are pretty simple. Neural models of physical behaviors are not precise; thus intuition can be grossly mistaken. We all get fooled by good stage magicians; even good stage magicians can be fooled by good stage magicians. But the rules of mathematics can be precise, and thus precisely predictive, because we designed them that way, and thus mathematics can predict things that test out to be true in cases where our "rule of thumb" intuition predicts otherwise; because intuition evolved in a domain in which logical precision was not a necessity of survival, and fast "most likely" or "safest" decisions were a survival advantage.

"Falsifiable" continues to be a poor term that I'm surprised so many people are happy using. Yeah, yeah, I know... Popper. It's still a poor term. Nothing in empirical scientific inquiry is ever truly proven false (or true), only shown to be more or less likely. "Testable" is a far better word to describe that criterion for a hypothesis or a prediction. It renders a lot of the issues raised in this thread much less sticky.

ReplyDelete"various statements in it are true of this world "

ReplyDeleteYou keep getting it the wrong way round. The world came first. Human maths started by people counting objects in the physical world. Physical arithmetic was already there, then people observed it.

OK, so what I strictly mean, but couldn't be bothered to write out, is this: if you take a huge number of what appear, to observation at a certain level of precision, to be quantum objects, they combine to produce, at the natural level of observation of the senses of humans and other animals, enough discrete-yness to embody arithmetic. This discrete-yness and this physical arithmetic exist (are available for observation) for anything coming along with senses at the classical level. In this arena of classical discrete-yness, 5 discrete-y objects can't be arranged into a rectangle, for example. I am aware of my observations, so I'll take a punt that you are similarly aware of your observations; that what I observe as my body and your body exist in the sense that they are available for observation to observers like ourselves; and now it makes no sense not to accept the existence of the 5 objects, in the sense that they are available for observation.

As I said, arithmetic exists in reality and human maths comes from an observation of that arithmetic. It is that simple.

"Where in sensible experience, for example, can one find a line without breadth?"

The reality of space at the human level is 3-D Euclideanesque. A room of length (roughly) 3 metres and breadth (roughly) 4 metres will have a diagonal of (roughly) 5 metres. For best results, count the atoms.

"The Christians I know use the same strategy to assure themselves of God's existence and the efficacy pf prayer."

"God" doesn't exist - it's a story. However, 5 objects really can't be arranged into a rectangle - try it.

"Of major importance is that the sense of necessity attributed to mathematical truth seems to be undermined."

I would stake my life on the validity of the proof that sqrt(2) is irrational. Undermined by whom? Dodgy philosophers who have had their papers read by a couple of other dodgy philosophers? Meanwhile, Andrew Wiles has proved Fermat's Last Theorem for infinitely many cases.

"Modern analytical philosophy recovers this sense of necessity by reducing mathematics to a priori stipulations presentable in formal languages with consequences obtained by rules for admissible syntactic transformations."

Also known as proving from axioms as Euclid did over 2,000 years ago. And all originally based on our observations of the world.

Well, with Dr. Hossenfelder's permission, perhaps I might respond with a post or two that actually reflect my views rather than what one finds in the literature.

At the link,

https://en.m.wikipedia.org/wiki/File:Free-boolean-algebra-hasse-diagram.svg

one may find the free Boolean lattice on two generators. Its elements are labeled with the symbols typically taught in courses on propositional logic. If one really wants to argue that the claims of philosophers and their logicians are of questionable merit, this is one of the places to start.

Let's see a show of hands. Who sees the tetrahedron?

In combinatorial topology, one decomposes a tetrahedron into vertices, edges, faces, and an interior. With the exception of the bottom element, the order-theoretic representation of this decomposition is order-isomorphic with the lattice above. And one need only hold that the bottom element denotes the exterior to complete the set of sixteen here.

Philosophers and their logicians hold that mathematics has been arithmetized. Even though the most basic representation of how their logical connectives relate to one another can be directly compared with a tetrahedron, they will insist that geometry has been eliminated from mathematics.

You can thank David Hilbert and the formalists for that. Say all that you want about Euclid, Hilbert's "Foundations of Geometry" reconstructs the Euclidean corpus without reference to motions or temporality.

Remember this the next time you want to recite some result from mathematical logic which is contrary to your beliefs about mathematics.

So, if logicians have simply put labels on a tetrahedron, one has just cause for questioning the relevance of their claims concerning the foundations of mathematics.
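The counting behind this comparison can be verified mechanically (a sketch; identifying the lattice's bottom element with the tetrahedron's exterior follows the comment's own convention):

```python
from itertools import product
from math import comb

# The free Boolean algebra on two generators has one element per
# truth function of two variables: 2^(2^2) = 16 elements.
truth_functions = set(product([0, 1], repeat=4))

# Tetrahedron (3-simplex on 4 vertices): its k-dimensional faces are
# the (k+1)-element vertex subsets, C(4, k+1) of each dimension.
face_counts = [comb(4, k + 1) for k in range(4)]   # [4, 6, 4, 1]

# 4 vertices + 6 edges + 4 faces + 1 interior = 15; adding the
# exterior as the bottom element gives 16, matching the lattice.
print(len(truth_functions), face_counts, sum(face_counts) + 1)
```

So the element count of the free Boolean lattice on two generators does line up with the parts of a decomposed tetrahedron plus an exterior, which is the numerical core of the comment's claim.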

But, that bottom element is still bothersome because it is not typically addressed in combinatorial topology.

In the link,

https://en.m.wikipedia.org/wiki/File:Tesseract_tetrahedron_shadow_matrices.svg

one can find the 3-dimensional projection of a tesseract, although Wikipedia does not show the edges connecting the vertices to a point at infinity. When this is added, all of the elements are 4-connected, as in the Boolean order. The bottom of the Boolean order would coincide with the point at infinity.

Amazing, is it not? Our logic words have a 4-dimensional character.

Let me repeat something I have maintained repeatedly in blog posts here. If the theory of evolution is among our best science, then we have no more facility for knowing the truth of reality than an earthworm. I do not need Euclid's axioms to make two paper tetrahedra with vertices colored so that they cannot be superimposed with all four colors matched. One can do a lot with that to criticize received views in the foundations community.

Ignoring their arguments because you believe differently just puts you in the queue of "he said, she said" that Steven Evans has used to discredit philosophers.

I read a preview of one of Smolin's books on Amazon in which he proclaims the importance of Leibniz' identity of indiscernibles. Since I have read Leibniz, I would tend to agree with him. However, Leibniz also motivated the search for a logical calculus. So, the principle is more often associated with logical contexts.

Leibniz attributes the principle to St. Thomas Aquinas to answer how God knows each soul individually. In keeping with Smolin's account, Leibniz does claim to be generalizing the principle to a geometric application. But in the debates over how Leibniz and Newton differed, the principle became associated with its logical application.

Steven Evans would like me to acknowledge the reality of arithmetic in some sense. Kant was probably the first critic of the logical principle. He asserted that numerical difference is known through spatial intuition. In modern contexts, the analogous portrayal can be found in Strawson's book "Individuals". He uses a diagram with different shapes to explain the distinction between qualitative identity and quantitative identity. In other words, numerical difference is grounded by spatial intuition.

Since mathematicians make it a habit to work from axioms, I wrote a set of axioms intended to augment set theory by interpreting the failure of equality as topological separation.

In other words, two points in space are distinct if one is in a part of space that the other is not.
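That reading of inequality is the T0 separation property of topology. A minimal sketch with a hypothetical three-point space (the open sets below are illustrative, not taken from the original comment): two points are distinct exactly when some open set contains one but not the other.

```python
# A small finite topological space on points {a, b, c}.
# The open sets form a chain, so unions and intersections stay inside.
opens = [set(), {'a'}, {'a', 'b'}, {'a', 'b', 'c'}]

def separated(x, y):
    # T0 separation: some open set contains one point but not the other,
    # i.e. one point lies in a "part of space" that the other does not.
    return any((x in u) != (y in u) for u in opens)

points = ['a', 'b', 'c']
for x in points:
    for y in points:
        # In this space, inequality coincides with topological separation.
        assert (x != y) == separated(x, y)
print("distinctness coincides with T0 separation in this space")
```

In a space that fails T0, two set-theoretically distinct points would be topologically indistinguishable, which is exactly the gap the proposed axioms are meant to close.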

When you run around using a membership relation while thinking in terms of geometric incidence, keep in mind that this is not what a membership predicate means. One may say that the notion of a set is not yet decided, but the received view is one where geometry is deprecated because mathematics has been arithmetized. And, since numbers can be defined in logic, any relation of the membership predicate with numerical identity associated with spatial intuition has been lost.

My views on mathematics are far closer to those who study the physical sciences than not. So do not hold me accountable for a summary of what is the case in the foundations of mathematics.

You have physicists running around pretending that the mathematics is telling them truths about the universe and others using mathematics to say that they should be believed.

My point is that they are further enabled by what is going on in the foundations of mathematics.

@Steven Evans

"...is a story"

You have probably never heard of deflationary nominalism. It is one way of speaking of mathematical objects without committing to their reality:

https://philosophy.stackexchange.com/questions/29104/can-nominalist-logicians-reject-universals-but-accept-universal-statements

https://plato.stanford.edu/entries/nominalism-mathematics/#DefNom

Motivated by the fact that core mathematicians actually define their terms, I needed a logic that supported descriptions. Free logics do that, although the general discussion of free logics does not apply to my personal work. The logic I had to write for my own purposes is better compared with how free logics can be used for fictional accounts,

https://plato.stanford.edu/entries/logic-free/#fiction

My logic is classical (rather than paraconsistent) and the method "works" because proofs are finite.

The standard account of formal systems relies on a completed infinity outside of the context of an axiom system. I doubt that Euclid ever had this in mind. David Hilbert turned his attention to arithmetical metamathematics with the objective of a finite consistency proof precisely because completed infinities are *NOT* sensibly demonstrable.

@Dr. Castaldo

I really have no reason to accept reductionist arguments in physics. If you can substantiate your claim, then do so. Words explaining words is how we get into these problems to begin with.

Having said that, a comment in another thread made some small reference to circularity. I forget the specifics right now, but I pointed out the result of a 2016 Science article about concept formation and hexagonal grid cells.

It is a beautiful circularity. Abstract concepts depend upon neural structures that exhibit hexagonal symmetry. A book on my shelf explicitly classifies hexagons and relates them to tetrahedra. String theorists ask people to believe in six rolled-up dimensions. And physical theories are needed to build the instruments and interpret the data so we can identify how hexagonal symmetries pertain to abstract concept formation.

You are a pragmatic gentleman. Thank you for your other replies as well. For what this is worth, I am certainly not looking for truth. When Frege retracted his logicism, he suggested that all mathematics is geometrical. That is mostly what I have uncovered from my own deliberations. It really does not make sense to speak of truth and falsity in geometry.

@mls: What is "amazing" about that? I can do the same thing better on paper with bits; given 2 binary states there are 2^2 = 4 possible states. In binary we can uniquely number them, [0,1,2,3]. That is not "four dimensional" any more than 10 states by 10 states is "100 dimensional".

On your "earthworm" comparison, obviously that is wrong. We have far more facility than an earthworm for knowing the truth of reality, or earthworms wouldn't let us use them as live bait. And fish wouldn't fall for that and bite into a hook, if they could discern reality as well as we do.

Humans understand the truth of reality well enough to manipulate chemistry on the atomic level, to build microscopic machines, to create chemical compounds and materials on a massive scale that simply do not exist in nature. Only humans can make and execute plans that require decades, or even multiple lifetimes, to complete. Where are the particle colliders built by any other non-human ape or animal?

I have no idea how you think the theory of evolution creates any equivalence between the intelligence of earthworms and that of humans. I suspect you don't understand evolution.

@mls

You don't address the point that maths exists in reality and came from reality. Obviously, the field of logic has something to say about maths, and it has a credible standard of truth and method, like science and maths.

I do not need to discredit the field of philosophy as it discredits itself - there are professional philosophers who are members of the American Philosophical Association who publish "proofs of God"(!!); in the comments in this very blog a panpsychist professional philosopher couldn't answer the blog author's point that the results of the Standard Model are not compatible with panpsychism being an explanation of consciousness in the brain.

Philosophers can churn out such nonsense because the "standard" of truth in philosophy is to write a vaguely plausible-sounding natural language "proof". This opens the field to all kinds of cranks and frauds. And these frauds want to have their say about natural science, too, but fortunately the falsifiability barrier keeps them at bay.

It is not a "he said, she said" argument. I have explained why I think maths exists in reality.