Monday, August 15, 2016

The Philosophy of Modern Cosmology (srsly)

[Image: model of inflation. Source: umich.edu]
I wrote my recent post on the “Unbearable Lightness of Philosophy” to introduce a paper summary, but it got somewhat out of hand. I don’t want to withhold the actual body of my summary though. The paper in question is the article on realism and under-determination in primordial cosmology by Jeremy Butterfield and collaborators.


Before we start I have to warn you that the paper speaks a lot about realism and underdetermination, and I couldn’t figure out what exactly the authors mean by these words. Sure, I looked them up, but that didn’t help, because there doesn’t seem to be any agreement on what the words mean. It’s philosophy after all.

Personally, I subscribe to a philosophy I’d like to call agnostic instrumentalism, which means I think science is useful and I don’t care what else you want to say about it – anything from realism to solipsism to Carroll’s “poetic naturalism” is fine by me. In newspeak, I’m a whateverist – now go away and let me science.

The authors of the paper, in contrast, position themselves as follows:
“We will first state our allegiance to scientific realism… We take scientific realism to be the doctrine that most of the statements of the mature scientific theories that we accept are true, or approximately true, whether the statement is about observable or unobservable states of affairs.”
But rather than explaining what this means, the authors next admit that this definition contains “vague words,” and apologize that they “will leave this general defense to more competent philosophers.” Interesting approach. A physics paper in this style would say: “This is a research article about General Relativity which has something to do with curvature of space and all that. This is just vague words, but we’ll leave a general defense to more competent physicists.”

In any case, it turns out that it doesn’t matter much for the rest of the paper exactly what realism means to the authors – it’s a great paper also for an instrumentalist because it’s long enough so that, rolled up, it’s good to slap flies. The focus on scientific realism seems somewhat superfluous, but I notice that the paper is to appear in “The Routledge Handbook of Scientific Realism” which might explain it.

It also didn’t become clear to me what the authors mean by underdetermination. Vaguely speaking, they seem to mean that a theory is underdetermined if it contains elements unnecessary to explain existing data (which is also what Wikipedia offers by way of definition). But the question of what’s necessary to explain the data isn’t a simple yes-or-no question – it’s a question that needs a quantitative analysis.

In theory development we always have a tension between simplicity (fewer assumptions) and precision (better fit) because more parameters normally allow for better fits. Hence we use statistical measures to find out in which case a better fit justifies a more complicated model. I don’t know how one can claim that a model is “underdetermined” without such quantitative analysis.
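To make this concrete, here is a minimal sketch of such a quantitative trade-off – a toy example of my own, not anything from the paper – using the Akaike Information Criterion, which rewards fit but penalizes parameters:

# Toy sketch (my own, not from the paper): fit fake straight-line data with
# a simple and a more flexible model, then compare AIC = 2k - 2 ln(L).
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.1, x.size)  # truth: a straight line

def aic_of_polyfit(x, y, degree):
    """Fit a polynomial of the given degree, return its AIC (Gaussian errors)."""
    residuals = y - np.polyval(np.polyfit(x, y, degree), x)
    n, k = y.size, degree + 1            # k counts the fit parameters
    sigma2 = np.mean(residuals**2)       # maximum-likelihood noise estimate
    log_like = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return 2.0 * k - 2.0 * log_like

for degree in (1, 5):
    print("degree", degree, "AIC", aic_of_polyfit(x, y, degree))
# The degree-5 fit has smaller residuals but pays a parameter penalty,
# so the straight line typically wins: its extra terms are "unnecessary".

Criteria like the BIC or full Bayesian evidence implement the same trade-off with different penalties; the point is only that “unnecessary to explain the data” can be given a number.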

The authors of the paper for the most part avoid the need to quantify underdetermination by using sociological markers, ie they treat models as underdetermined if cosmologists haven’t yet agreed on the model in question. I guess that’s the best they could have done, but it’s not a basis on which one can discuss what will remain underdetermined. The authors for example seem to implicitly believe that evidence for a theory at high energies can only come from processes at such high energies, but that isn’t so – one can also use high precision measurements at low energies (at least in principle). In the end it comes down, again, to quantifying which model is the best fit.

With this advance warning, let me tell you the three main philosophical issues which the authors discuss.

1. Underdetermination of topology.

Einstein’s field equations are local differential equations which describe how energy-densities curve space-time. This means these equations describe how space changes from one place to the next and from one moment to the next, but they do not fix the overall connectivity – the topology – of space-time*.

A sheet of paper is a simple example. It’s flat and it has no holes. If you roll it up and make a cylinder, the paper is still flat, but now it has a hole. You could find out about this without reference to the embedding space by drawing a circle onto the cylinder and around its perimeter, so that it can’t be contracted to zero length while staying on the cylinder’s surface. This could never happen on a flat sheet. And yet, if you look at any one point of the cylinder and its surrounding, it is indistinguishable from a flat sheet. The flat sheet and the cylinder are locally identical – but they are globally different.
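In equations – a standard textbook observation, not something taken from the paper – both surfaces carry the same flat metric,

ds^2 = dx^2 + dy^2,

with (x, y) ranging over the whole plane for the sheet, but with the identification (x, y) ~ (x, y + 2\pi R) for a cylinder of radius R. Every local curvature invariant vanishes in both cases; only the loop that runs once around the identified direction, and cannot be contracted, tells them apart.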

General Relativity thus can’t tell you the topology of space-time. But physicists don’t normally worry much about this, because you can parameterize the differences between topologies, compute observables, and then compare the results to data. Topology is, in that respect, no different from any other assumption of a cosmological model. Cosmologists can, and have, looked for evidence of non-trivial space-time connectivity in the CMB data, but they haven’t found anything that would indicate our universe wraps around itself. At least so far.

In the paper, the authors point out an argument raised by someone else (Manchak) which claims that different topologies can’t be distinguished almost everywhere. I haven’t read the paper in question, but this claim is almost certainly correct. The reason is that while topology is a global property, you can change it on arbitrarily small scales. All you have to do is punch a hole into that sheet of paper, and whoops, it’s got a new topology. Or if you want something without boundaries, then identify two points with each other. Indeed you could sprinkle space-time with arbitrarily many tiny wormholes and in that way create the most abstruse topological properties (and, most likely, lots of causal paradoxes).

The topology of the universe is hence, like the topology of the human body, a matter of resolution. On distances visible to the eye you can count the holes in the human body on the fingers of your hand. On shorter distances though you’re all pores and ion channels, and on subatomic distances you’re pretty much just holes. So, asking what’s the topology of a physical surface only makes sense when one specifies at which distance scale one is probing this (possibly higher-dimensional) surface.

I thus don’t think any physicist will be surprised by the philosophers’ finding that cosmology severely underdetermines global topology. What the paper fails to discuss though is the scale-dependence of that conclusion. Hence, I would like to know: Is it still true that the topology will remain underdetermined on cosmological scales? And to what extent, and under which circumstances, can the short-distance topology have long-distance consequences, as eg suggested by the ER=EPR idea? What effect would this have on the separation of scales in effective field theory?

2. Underdetermination of models of inflation.

The currently most widely accepted model for the universe assumes the existence of a scalar field – the “inflaton” – and a potential for this field – the “inflation potential” – in which the field moves towards a minimum. While the field is getting there, space is exponentially stretched. At the end of inflation, the field’s energy is dumped into the production of particles of the standard model and dark matter.
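For concreteness, here is a minimal sketch of this mechanism – my own toy example, assuming the simplest single-field case with a quadratic potential V = m^2 phi^2 / 2, a choice made purely for illustration (and one that current data in fact disfavors) – integrating the standard background equations in reduced Planck units:

# Toy single-field inflation (my own sketch; the quadratic potential and all
# numbers are illustrative assumptions, not taken from the paper).
import numpy as np
from scipy.integrate import solve_ivp

m = 1e-6  # toy inflaton mass in reduced Planck units

def rhs(t, y):
    phi, phidot, lna = y
    V, dV = 0.5 * m**2 * phi**2, m**2 * phi   # potential and its slope
    H = np.sqrt((0.5 * phidot**2 + V) / 3.0)  # Friedmann equation
    return [phidot,
            -3.0 * H * phidot - dV,           # inflaton field equation
            H]                                # d(ln a)/dt = H

# start the field far up the potential, at rest
sol = solve_ivp(rhs, [0.0, 2e8], [16.0, 0.0, 0.0], rtol=1e-8, atol=1e-12)
print("e-folds of expansion:", sol.y[2, -1])  # roughly phi0^2 / 4, so ~64

Swapping in a different potential changes only two lines here, which is part of why so many potentials can be made to work.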

This mechanism was invented to solve various finetuning problems that cosmology otherwise has, notably that the universe seems to be almost flat (the “flatness problem”), that the cosmic microwave background has the almost-same temperature in all directions except for tiny fluctuations (the “horizon problem”), and that we haven’t seen any funky things like magnetic monopoles or domain walls that tend to be plentiful at the energy scale of grand unification (the “monopole problem”).
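To put a number on the first of these – a standard back-of-the-envelope estimate, not taken from the paper – the Friedmann equation gives

|\Omega - 1| = |k| / (a^2 H^2),

and during radiation domination a^2 H^2 falls like 1/t, so any initial deviation from flatness grows with time. Extrapolating today’s bound |\Omega - 1| < 0.01 back to the Planck time then requires an initial deviation of roughly 10^-60. That tiny number is what “finetuned” means here.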

Trouble is, there are loads of inflation potentials that one can cook up, and most of them can’t be distinguished with current data. Moreover, one can invent more than one inflaton field, which adds to the variety of models. So, clearly, the inflation models are severely underdetermined.

I’m not really sure why this overabundance of potentials is interesting for philosophers. This isn’t so much philosophy as sociology – that the models are underdetermined is why physicists get them published, and if there were enough data to extract a potential, that would be the end of their fun. Whether there will ever be enough data to tell them apart, only time will tell. Some potentials have already been ruled out by incoming data, so I am hopeful.

The questions that I wish philosophers would take on are different ones. To begin with, I’d like to know which of the problems that inflation supposedly solves are actual problems. It only makes sense to complain about finetuning if one has a probability distribution. In this, the finetuning problem in cosmology is distinctly different from the finetuning problems in the standard model, because in cosmology one can plausibly argue there is a probability distribution – it’s that of fluctuations of the quantum fields which seed the initial conditions.

So, I believe that the horizon problem is a well-defined problem, assuming quantum theory remains valid close to the Planck scale. I’m not so sure, however, about the flatness problem and the monopole problem. I don’t see what’s wrong with just assuming the initial value for the curvature is tiny (finetuned), and I don’t know why I should care about monopoles given that we don’t know grand unification is more than a fantasy.

Then, of course, the current data indicates that the inflation potential too must be finetuned which, as Steinhardt has aptly complained, means that inflation doesn’t really solve the problem it was meant to solve. But to make that statement one would have to compare the severity of finetuning, and how does one do that? Can one even make sense of this question? Where are the philosophers if one needs them?

Finally, I have a more general conceptual problem that falls into the category of underdetermination, which is to what extent the achievements of inflation are actually independent of each other. Assume, for example, you have a theory that solves the horizon problem. Under which circumstances does it also solve the flatness problem and give the right tilt for the spectral index? I suspect that the assumptions for this do not require the full mechanism of inflation, with potential and all, and almost certainly not a very specific type of potential. Hence I would like to know what’s the minimal theory that explains the observations, and which assumptions are really necessary.

3. Underdetermination in the multiverse.

Many models for inflation create not only one universe, but infinitely many of them, a whole “multiverse”. In the other universes, fundamental constants – or maybe even the laws of nature themselves – can be different. How do you make predictions in a multiverse? You can’t, really. But you can make statements about probabilities, about how likely it is that we find ourselves in this universe with these particles and not any other.

To make statements about the probability of the occurrence of certain universes in the multiverse one needs a probability distribution or a measure (in the space of all universes or their parameters, respectively). Such a measure should also take into account anthropic considerations, since some universes are almost certainly inhospitable to life, for example because they don’t allow the formation of large structures.

In their paper, the authors point out that the combination of a universe ensemble and a measure is underdetermined by observations we can make in our universe. It’s underdetermined in the same way that, if I give you a bag of marbles and say the most likely pick is red, you can’t tell what’s in the bag.
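The analogy is easy to make concrete – a toy sketch of my own, not from the paper – since many different “bags” are compatible with the one statement an observer can check:

# Three different probability distributions ("bags") that all yield the same
# coarse statement "red is the most likely pick". Toy numbers of my own.
bags = [
    {"red": 0.40, "blue": 0.35, "green": 0.25},
    {"red": 0.90, "blue": 0.05, "green": 0.05},
    {"red": 0.34, "blue": 0.33, "green": 0.33},
]
for bag in bags:
    assert max(bag, key=bag.get) == "red"  # same observable conclusion each time
print("one statement, many incompatible measures")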

I think physicists are well aware of this ambiguity, but unfortunately the philosophers don’t address why physicists ignore it. Physicists ignore it because they believe that one day they can deduce the theory that gives rise to the multiverse and the measure on it. To make their point, the philosophers would have had to demonstrate that this deduction is impossible. I think it is, but I’d rather leave the case to philosophers.

For the agnostic instrumentalist like me a different question is more interesting, which is whether one stands to gain anything from taking a “shut-up-and-calculate” attitude to the multiverse, even if one distinctly dislikes it. Quantum mechanics too uses unobservable entities, and that formalism – however much you detest it – works very well. It really adds something new, regardless of whether or not you believe the wave-function is “real” in some sense. Where the multiverse is concerned, I am not sure about this. So why bother with it?

Consider the best-case multiverse outcome: physicists will eventually find a measure on some multiverse according to which the parameters we have measured are the most likely ones. Hurray. Now forget about the interpretation and think of this calculation as a black box: you put math in one side, and out comes a set of “best” parameters on the other side. You could always reformulate such a calculation as an optimization problem which allows one to calculate the correct parameters. So, independent of the thorny question of what’s real, what do I gain from thinking about measures on the multiverse rather than just looking for an optimization procedure straight away?
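Schematically – and with a made-up stand-in for the measure, since of course no actual multiverse measure is computed here – the black box reduces to this:

# If the end product is "the parameters that maximize some computable
# measure", the interpretation drops out and an optimizer does the job.
# toy_measure is a hypothetical placeholder, not any real measure.
import numpy as np
from scipy.optimize import minimize

def toy_measure(p):
    """Made-up weight over two 'fundamental parameters'."""
    a, b = p
    return np.exp(-10.0 * (a - 0.0073)**2 - (b - 0.5)**2)

best = minimize(lambda p: -toy_measure(p), x0=[0.0, 0.0])
print("'most likely' parameters:", best.x)  # converges near (0.0073, 0.5)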

Yes, there are cases – like bubble collisions in eternal inflation – that would serve as independent confirmation for the existence of another universe. But no evidence for that has been found. So for me the question remains: under which circumstances is doing calculations in the multiverse an advantage rather than unnecessary mathematical baggage?

I think this paper is a good example of the difference between philosophers’ and physicists’ interests, which I wrote about in my previous post. It was a good (if somewhat long) read and it gave me something to think about, though I will need some time to recover from all the -isms.

* Note added: The word connectivity in this sentence is a loose stand-in for those who do not know the technical term “topology.” It does not refer to the technical term “connectivity.”

80 comments:

  1. "So for me the question remains: under which circumstances is doing calculations in the multiverse an advantage rather than unnecessary mathematical baggage?"

    As I understand it, quantum computing requires a multiverse view to make sense of the "calculations" it does, but I don't know if quantum computing can just be done by "shut up and calculate".

    The multiverse is an agent of explanation at heart. It explains why we think we have the measurement problem, and why it's not really a problem. For it to be necessary to do the maths in a multiverse way, we need situations which can only be explained in multiverse terms, in an analogous way to relativity and the precession of Mercury.

  2. Perhaps some legitimate work there for philosophy to do in parallel with physics. Helpful to clarify how the issues are related and what would be essential vs superfluous. In the end, I think it would prove easier to train physicists to be good philosophers than philosophers to be competent physicists

  3. One sine wave cycle resembles an odd-power polynomial. That interval can be arbitrarily closely fit "almost true." Predictions are unlimitedly worse than drawing an also wrong horizontal straight line outside the fitted cycle.

    "existence of a scalar field – the 'inflaton'" A pseudoscalar component allows baryogenesis. "non-trivial space-time connectivity" Non-trivial space-time torsion is chiral anisotropic, like Lorentz force. Test space-time geometry with geometry that cannot succeed for being outside the theory-fit interval. Look.

    http://skullsinthestars.files.wordpress.com/2014/08/vcrossb.jpg
    Parameterization of elegant, natural, simple space-time mirror symmetry.

  4. What utter nonsense. The money fountain that is Templeton Foundation funding appears to be at the root of all this in various ways - they have a very strong influence on academics today.

    Rutgers Templeton Project in Philosophy of Cosmology, The Philosophy of Cosmology project, the Templeton Cosmology group conference & so on & so forth. I note that Juha Saatsi was funded by Templeton for his "Emergence & Laws of Nature" project from 2014-15 & Butterfield has appeared at events they've funded.

    I wonder if it's possible to 'do' philosophy of science these days without tripping over the Templeton crew encouraging idle speculations on god, dualism, purpose & the rest of it.

  5. Michael,

    Urm, what does anything I wrote, or what's written in the paper, have to do with god? Your comment is an entirely empty dismissal that merely demonstrates your inability or unwillingness to engage with what are very difficult questions.

  6. No doubt this will thrill many people!
    https://news.artnet.com/art-world/nicolas-berggruen-philosophy-prize-333452

    "Billionaire Art Collector Nicolas Berggruen Announces $1 Million Philosophy Prize"

    Robert: The sort of multiverse which is useful for thinking about quantum computing is the Everett many-worlds style multiverse, not the inflationary cosmology style multiverse. And even then, it was useful for helping Deutsch think about such things; it is not actually a necessity for any calculations or modeling. People can think about quantum computing using non-multiverse paradigms all they like.

    But this is not to dismiss it entirely; because this kind of multiverse is a matter of interpretation, the criteria for evaluating it are different from the criteria for evaluating models - if it is useful for Deutsch (and others), let them think this way! I don't particularly like it.

    But this sort of split between what is a model and what is an interpretation may be a little glib, and philosophers of science may have something to say here too. In particular, the cosmological multiverse seems very much like a model, such that we really do need things like cross sections for bubble collisions and CMB signatures. Bee has talked about some ways where it might be just an interpretation, good for what interpretations are good for, with other folks free to "shut up and calculate" if their heart moves them thusly. It's a meaningful but tricksy distinction, I think.

    Well, all this latest antipathy of yours against philosophy is totally unexplainable, at least to someone like me who has read your previous good work. Historically the first philosophers were also the first scientists. Science sprang from philosophy. The two pillars of modern physics, GR and QM, are the products of a generation of scientists steeped in the philosophical traditions of humanism. Schroedinger for example wrote a very influential philosophical book, "What is Life?", full of deep philosophical thinking. Maybe the "problem with physics" today is that modern scientists are lacking philosophical or critical training and thought in general. Your attitude against the philosophers ("shut up and don't bark. You don't understand anything") reminds me of your own criticism of the string theory community's attitude towards their critics. Please refrain from being somewhat rude in your writing and tone. It reminds me more and more of the tone of another physics blog, which I despise. Thank you, for your

  9. "in cosmology one can plausibly argue there is a probability distribution – it’s that of fluctuations of the quantum fields which seed the initial conditions." But what precisely is that distribution? On some background you can choose, eg, the vacuum Bunch-Davies state for the IC of the fluctuations. But why vacuum? And why that background? Both choices will affect your late-time predictions. I don't think it's fair to say that "there is a distribution". And quantum cosmology doesn't provide clear answers yet.

    "I don’t see what’s wrong with just assuming the initial value for the curvature is tiny". *If* we have a physical ensemble with a wide range of different initial spatial curvatures, then a simple anthropic (non-inflationary) explanation for the small curvature would work. But of course we would need to justify that ensemble. Without such an ensemble and without inflation, the IC's *could* have been just so. I don't think we could say that that was "wrong" in your words, but we have no ideas (yet) for why the ICs should be thus. Inflation is usually supposed to make spatial flatness a consequence of more "generic"-sounding ICs. Of course, without a precise ensemble and measure, "generic" is vague. But some such ensemble is usually implicit in such genericity arguments. So the choice is between having no idea why the IC's are just so, and adding the baggage of an ensemble.

  10. Sabine,

    "Vaguely speaking, they seem to mean that a theory is underdetermined if it contains elements unnecessary to explain existing data ..."

    I think that would be a theory which was overdetermined (i.e., containing unnecessary elements) not underdetermined (i.e., lacking necessary elements).

  11. Σταύρος Γκιργκένης,

    If you read me as being "against philosophy" you totally got me wrong. Maybe try again.

  12. Jim,

    Yes, you are right about the initial state problem. But what I mean is that at least in this case there is a theory in the context of which one can discuss the matter. If you speak about finetuning in theory space or the multiverse or something, there is no theory at all.

  13. Unknown:

    You are right; I am such a fool :-)

  14. Big Henry,

    I think you misread that. My sentence does not say that an underdetermined theory "lacks necessary elements" but that such a theory contains unnecessary elements.

  15. Arun,

    This looks like a very interesting initiative. I'll be curious to see what comes out of this.

  16. Underdetermination of a scientific theory refers to the conundrum of having more than one scientific theory fitting all observations so far. This was raised by philosopher of science Pierre Duhem at the beginning of the 20th century and elaborated on later by W.V.O. Quine.

    For an introduction, I recommend the article of the same name in the Stanford Encyclopedia of Philosophy. (Plato.stanford.edu)

  17. Totto,

    There always exists an infinite number of theories that fit all observations, so I don't understand what that definition explains.

    As a typical layman I can't weigh in on all the above multi-voice replies. I read the article of Butterfield and as a layman I understood only 4% of it (remarkably, the same fraction as matter against dark matter and dark energy). But I want to defend the position of the Greek friend about the importance of philosophy in drawing out the critical points underlying our cosmological knowledge as framed by the Standard Model. Without claiming that Sabine wants to defend the cosmologists' position or that of the philosophers, and without analyzing the aims of Butterfield's thesis about underdetermination of scientific theories (a huge and very debatable field of its own), I want to point out (as the article seems to suggest, at least on one point): how is it possible for a cosmologist to pretend to extrapolate from 4% of information about well-established matter-energy behaviour against the 96% of inexplicable, non-interacting stuff influencing the very shape, form, geometry, time flow and arising of the universe? And how is it possible for a quantum researcher to defend his "robotical" and "shut up and calculate" position, just manipulating symbols, when from his work pop up quantum computers, superconducting materials, Bose-Einstein quantum dot devices? (I subtly want to promote a counterfactual thought experiment: how is it possible, with only 4% of my knowledge about the article, to infer and subsequently "fight" the cosmologist's point of view?)

    I want to thank Sabine for maintaining and writing this fantastic blog, and for her work explaining and drawing precise demarcations between science and pseudoscience, because as a typical layman I want to know all about our stunning, amazing world.
    PS: Sorry for my English

  19. Sabine,

    If two theories describe the data so far, but you would like to know what the "true" description of the phenomenon is, you need to come up with a crucial experiment that will rule out one of the competing theories.

    Extensions to the SM are presently underdetermined (not enough evidence to decide which is valid) and the results of LHC experiments are ruling out various types of supersymmetric extensions of the SM. MSSM was the first to go, and other, more exotic ones follow. One would hope that this process converges on a theory which is a more complete description of observations than previous ones, or on one which simplifies present descriptions (for example with fewer free parameters).

  20. Sabine,

    I quoted what your sentence said, "Vaguely speaking, they seem to mean that a theory is underdetermined if it contains elements unnecessary to explain existing data ..."

    Then I asserted that what you said is a definition of overdetermined, not (as you said) underdetermined, because it would be an overdetermined theory that would contain elements unnecessary to explain existing data.

    Then I asserted that a truly underdetermined theory would be a theory that doesn't contain (i.e., lacks) elements that are necessary to explain existing data.

    In other words, what you said defines an overdetermined theory NOT an underdetermined theory. More elements than necessary means "over" NOT "under". This is really not a very complicated concept.

  21. >“This is a research article about General Relativity which has something to do with curvature of space and all that. This is just vague words, but we’ll leave a general defense to more competent physicists.”

    Sabine, be honest. Have you seen my latest pre-print? You have quoted me!

    I have no complicated analysis to offer. I believe that philosophy in science works best as a seed. It's not there to inform the large-scale technical mechanics of the discipline but rather to provide a general set of guiding principles to aid the thinking process on its path from underdeterminism to maybe the next baby step beyond underdeterminism, but nothing more. Beyond that ... shut up and calculate rules the day. E.g. -> Philosophy ... 'in my observations, the universe proceeds dynamically out of an imbalanced set of initial conditions established before inflation' ... That is a guiding principle, a base philosophy, that will assist in how I think about approaching the various inflation problems. I can't see much use for philosophy beyond this general application, but to this extent science benefits from a bit of philosophical grounding.

    Underdetermination is not when you have unnecessary content, but when you have empirically equivalent yet different theories or hypotheses. (In a sense you *always* have unnecessary content, since no theory looks like a big data table!) Some of the debates then turn on the status of the non-empirical criteria that make scientists opt for one theory or hypothesis rather than the other: simplicity for example, or theoretical unification, or some invoke explanatory power or fructuosity (don't ask me what they mean), or conservatism (not starting from scratch with a "new paradigm" when you can adapt a well-functioning theory). It's a problem for realists, since these criteria don't look like indicators of truth (why should nature be simple?), but of course there's no problem for an empiricist or an instrumentalist like you, who can view them as pragmatic criteria. Personally I think most of them are strategic: it's about which hypotheses to test first (try simple ones before more complex ones).

  24. Totto,

    I think you're not getting the point; you're just using examples in which you believe the situation is clear because scientists haven't decided on a particular extension (like the authors of the paper). I am telling you it's a quantitative question, not a yes-or-no question. Look, there are for example a lot of modifications of LambdaCDM that actually fit the data *better*. Are these models underdetermined? If so, how much, and how do they compare to LambdaCDM? I am very sure that you can cook up all kinds of extensions of the SM that fit perfectly every little bump in the data. Are these models underdetermined? Eg, were the diphoton models underdetermined while there was the fluke in the data? Which one was the least underdetermined one? And so on. Best,

    B.

  25. Big Henry,

    I think what you say is just wrong.

  26. Sabine,

    I think philosophers simply state that a theory is underdetermined, and don't ask by how much. Philosophy is not a quantitative science like physics. It is qualitative, as in the quality of the arguments in a scientific debate. I value the input from philosophy in debates which border the unscientific and purely speculative realm. (I follow Popper: science is when you can test it, at least hypothetically. This means that the Planck scale and inflation are scientific, but multiverses outside the light cone are not.) Philosophy of science is not there to improve any one discipline of science, but it describes and discusses the attributes of a scientific theory. Call it meta-science.

    Trying to determine the degree of underdetermination would imply that you know the "true" theory (plus you would have to define a measure on the space of theories, for example, as a sketch, rms difference of prediction to "true" value). Once you start with this, you rather look for a best fit, but no longer for the underlying, generalizable structures of science.

  27. Totto,

    Well, yes, philosophers simply state that a theory is underdetermined and don't ask by how much. That's what I wrote, no? I was saying that I don't understand how "simply stating" something is enlightening. How do they know what to "simply state"? For all I can tell, they just defer the matter to physicists. Which doesn't really make a lot of sense philosophically, if you see what I mean.

    @Sabine It doesn't make sense to ask "how much" a theory is underdetermined. It's not a matter of fitting the data more or less: a theory can be underdetermined while fitting the data perfectly. The point is not to enlighten anything but to put a word on something, for ease of discussion. If we had to say "that theory is such that another incompatible theory could make the same predictions in every experiment we could make" every time, that would not be practical, so a bit of jargon can be useful sometimes.

  29. Quentin,

    Of course a theory can be underdetermined while perfectly fitting the data. Indeed, as I wrote in my post, the more underdetermined the theory the more perfectly it will generically fit the data.

    "If we were saying "that theory is such that another incompatible theory could make the same predictions in every experience we could make""

    That is always the case for any theory an an entirely useless criterion. Try again. I am serious. Before you dismiss my problem, please try to come up with a definition for "underdetermined theory".

    That's the definition. I already gave it twice. I didn't make it up or "come up with it"; that's not how philosophy works. It's a term of the art, use it or not. Be sure you know what people are talking about before dismissing an entire conversation; it's your job, not ours, to know what you are talking about: http://plato.stanford.edu/entries/scientific-underdetermination/
    Yes, that's the case for all theories. So what? How is it useless to note a feature all theories have? The term is just a shortcut for discussing this precise feature of scientific theories without repeating complicated sentences every time, and for discussing its implications in the debate on realism, for example. Is it so complicated to understand?



    Please

  31. Quentin,

    As you have surely noted it's the same definition that I had looked up, because I mentioned it in my post. I then explained why it doesn't make sense, but apparently that's an argument that you are either unwilling or unable to engage with.

    Now that we have agreed that according to this definition it's a feature all theories have, why worry about it? How does one make sense of phrases that the authors use in the paper like "intractable cases of under-determination of theory by data" - if it's something that every theory has anyway, there aren't any "tractable cases" - or "this under-determination is robust" - if we have just agreed you can't make it go away, it's always "robust". Clearly they seem to believe that some theories are more, or worse, underdetermined than others, and the definition you offer doesn't explain what that means.

  32. Sabine,

    You talked about unnecessary content in your post, and that's not the right definition. Now I understand your concern, so let me recapitulate the debate on underdetermination as I view it.
    1/ naive view: hypotheses are simply confronted with experience, so we know whether they're true or not
    2/ wait, empirical data is never enough to choose between competing theories. There must be non-empirical criteria involved. This concerns *all* theories. Let us call this problem: the underdetermination of theory by data.
    3/ realists respond by arguing that all cases of underdetermination are artificially constructed and irrelevant, or that the non-empirical criteria involved, such as simplicity, are rational and indicators of truth (we make inferences to the best explanation and that is legitimate), or that they are indirectly empirical after all, or what have you (see Stanford's entry). They stress that no genuine cases are worrying scientists. Some antirealists (e.g. van Fraassen) still disagree but anyway...
    4/ All these debates are now 50 years old or so, but the term "underdetermination" remains a term of art, and can still be invoked to address specific cases that one thinks are particularly problematic. For example in the paper you read. All philosophers understand what the authors mean by underdetermination: that we have a problem choosing between competing theories or models. As the authors claim to be realists, one would suspect that these are cases that cannot be easily accommodated by the realist in the usual way. That's probably what they mean by "robust" (I didn't read the full article).

    Does it make sense to you?

  33. After exactly 2^5 comments, I see this last one here, verily fit!

    http://www.saybrook.edu/newexistentialists/posts/11-14-11/

  34. It looks like they added value to me. I think there is a place for this sort of philosophy, which is just about clean reasoning. The problem for physicists is they have been demoralised by decades of stagnation. When there is a large distributed group of peers sharing in common negatives much more than positives, the result is a kind of group-depression. A symptom is declining attention-span, and attention to detail.
    p.s. I made that up

  35. "Then I asserted that what you said is a definition of overdetermined..."

    I was wondering why the prefix meant the opposite of what it meant

  36. "Historically the first philosophers were also the first scientists."

    No, it was astrologers and alchemists. Philosophy is clearly a major ancestor of modern science, but that does not legitimize philosophy as it stands today. Nor does it amount to a case-to-answer for a resurgent philosophy in science.

    You have to make the case without misdirecting devices - for example, that scientists are already doing philosophy by forming arguments. The only reason that is true is because 'philosophy' is the generic designation in its stream. So basically it's applicable at any level. Which means you can say everyone is philosophizing every time they think. It's not a legitimate argument. And even if it were, it would immediately become a trivial argument.

    If philosophy wants in, then it will be decided not by these sorts of arguments but by a track record of significant contributions. I barely hear philosophers mention this as part of any argument. There's something relevant in that poignancy.

    Earn it. Make contributions. Science will not forever ignore any contribution that solves problems science needs solving.

  37. http://youtu.be/QdMmbaQHIs0

    Richard Feynman on scientific method. It's easy, it's clear, it's to the point. When philosophers begin to interject themselves into the dialog, there already are deep-seated nonsensical philosophical fantasies going on in the scientists' heads that they individually and collectively may not realize. They are a symptom, not a solution. Sort of like psychology, or theology: all believe they are solutions, but they are symptoms of confusion in their respective domains. Sophism - I tend to see it as a mental disorder where reality starts in the cranium and manifests reality. A lot of scientists and mathematical types are exhibiting that. Mother is running the show, not our ideas.

  38. Quentin,

    Both definitions are, for all I can tell, equivalent. If you have two different theories that describe the existing data equally well, it means they must have assumptions (or call them axioms) that are unnecessary to describe the data to that level of accuracy. The opposite is also true: if two theories differ by an assumption that isn't necessary to explain the data, then they are empirically equivalent. Hence both definitions mean the same.

    In any case, regarding your further remarks, I actually agree. The problem is then that I fail to see how some underdetermination is presumably worse than other underdetermination, as the authors suggest. Again, that means they are speaking about some order-relation, which I can only interpret as an implicit but unstated quantification. Or in which sense is underdetermination sometimes supposedly worse than in other cases?

    Btw, I don't think simplicity is necessarily a realist stance.

    The Weyl tensor is also not fixed by the Einstein equations. It is the second part of the Riemann tensor, alongside the Ricci tensor.

  40. Sabine,
    What you explain won't work in general because of confirmation holism. All theoretical axioms are used in conjunction when building models for empirical confrontation, none is unnecessary or can simply be removed (or the "unnecessariness" if it exists would be somehow diluted among the axioms, so to speak).
    Otherwise we would simply remove unnecessary assumptions and there would be no problem of underdetermination.

    Your idea also supposes that theories merely describe the data we observe, but then why not be content with descriptions of observational regularities? That doesn't seem to be what theories do. Later you say "explain the data", which corresponds better to what theories do, but explaining is not describing.
    At what point is an explanation "unnecessary"? This is a more complex issue. Explanations have to stop somewhere, but where? When do we enter the realm of metaphysics? If you have a clear answer I'd be happy to hear it. In any case it's better to define underdetermination as empirical equivalence, without entering all the debates that talk of necessity generates.

    As for different levels of underdetermination, I think what can vary is the observation basis that we take: is it all phenomena observed so far? Or all that we could observe in the future? Or all that could be observed in principle, even if it never will be?

    Simplicity is not a realist stance. Obviously we prefer simple explanations, but an antirealist will view that as a pragmatic or strategic criterion, not as a marker of likelihood.

    Best
    Quentin

  41. Quentin,

    "What you explain won't work in general because of confirmation holism. All theoretical axioms are used in conjunction when building models for empirical confrontation, none is unnecessary or can simply be removed (or the "unnecessariness" if it exists would be somehow diluted among the axioms, so to speak)."

    Huh? Have you even looked at all these inflation models or standard model extensions or dark energy fields, dark matter particles, or whatever? There's nothing "holistic" about them. You can perfectly well scrape out some of the assumptions and still have a theory. How about, for example, just not assuming that the world is supersymmetric? (And that this supersymmetry is broken, and that the masses of these particles are larger than something, and that the interaction obeys R-symmetry, and so on.) Or how about just dropping the strong nuclear force from the standard model - what forbids that (other than fitting the data)?

    I don't know why you would say something like that, but it clearly doesn't have anything to do with the practice of theory development in the foundations of physics.

    "Otherwise we would simply remove unnecessary assumptions and there would be no problem of underdetermination."

    I thought we just agreed that any theory is always underdetermined.

  42. The point about confirmation holism is this: one never tests an assumption in isolation, one tests a whole set of assumptions as a block, inside a theoretical framework. To take a common example: the presence of light interferences does not invalidate the assumption that light is corpuscular, it only invalidates the whole package constituted by newtonian mechanics + corpuscular light. But another theory (quantum mechanics) might well restore the assumption that light is corpuscular, which proves that this particular assumption was not really disconfirmed.

    Back to your examples: imagine a supersymmetric theory fails to account for observations. You can drop supersymmetry in order to get good predictions, or you can maintain supersymmetry but drop other assumptions instead (change the background framework) and, again, get good predictions. That's where the "holism" is: the two resulting theories would be underdetermined, but that does not mean that one assumption in particular (supersymmetry) rather than another (the background framework) was unnecessary. In general, you cannot tell where the "unnecessity" is located.



    “Scientific underdetermination addresses the freedom of theory choice within the limits of scientific thinking. The claim of scientific underdetermination in a certain field at a given time asserts that it would be possible to build several or many distinct theories which qualify as scientific and fit the empirical data available in that field at the given time. Since these alternative theories are merely required to coincide with respect to the presently available data, they may well offer different predictions of future empirical data which can be tested by future experiments. It is scientific underdetermination due to the existence of such empirically distinguishable theories which will be of primary interest in the following analysis. The assumption of scientific underdetermination constitutes a pivotal element of the modern conception of scientific progress. If science proceeds, as emphasised e.g. by [Kuhn 1962] or [Laudan 1981], via a succession of conceptually different theories, all future theories in that sequence must be alternative theories which fit the present data and therefore exemplify scientific underdetermination. Theoretical progress without scientific underdetermination, to the contrary, would have to be entirely accumulative. Based on scientific underdetermination, two types of theories can be roughly distinguished. Well-established scientific theories are those whose distinctive predictions have been experimentally well tested and confirmed in a certain regime. The general viability of the theory’s predictions in that regime is considered a matter of inductive inference. Speculative theories, on the other hand, are those whose distinctive predictions have not yet been experimentally confirmed. Even if a speculative theory fits the currently available experimental data, its distinctive predictions might well be false due to the scientific underdetermination principle.”

    The above is from your absolute favorite science philosopher, Richard Dawid:

    http://inspirehep.net/record/1346448/files/12.pdf

    So maybe what is meant by one theory being more underdetermined than another has something to do with the ability to test speculative predictions, i.e. a theory which results in “n” speculative predictions, “m” of which are testable by current technological capabilities is less underdetermined than a theory which results in “n+k” speculative predictions of which “m-l” are testable by current technologies. This is supported by the following passage from the Stanford Plato article which also, I feel anyway, isolates the key point: the reliability of inductive methods. I’m quoting more than necessary but the support for the above conjecture is the sentence: “In other words, he shows that there are more reasons to worry about underdetermination concerning inferences to hypotheses about unobservables than to, say, inferences about unobserved observables.”

  44. Anyway, the extended quote:

    John Earman (1993) has argued that this dismissive diagnosis does not do justice to the threat posed by underdetermination. He argues that worries about underdetermination are an aspect of the more general question of the reliability of our inductive methods for determining beliefs, and notes that we cannot decide how serious a problem underdetermination poses without specifying (as Laudan and Leplin do not) the inductive methods we are considering. Earman regards some version of Bayesianism as our most promising form of inductive methodology, and he proceeds to show that challenges to the long-run reliability of our Bayesian methods can be motivated by considerations of the empirical indistinguishability (in several different and precisely specified senses) of hypotheses stated in any language richer than that of the evidence itself that do not amount simply to general skepticism about those inductive methods. In other words, he shows that there are more reasons to worry about underdetermination concerning inferences to hypotheses about unobservables than to, say, inferences about unobserved observables. He also goes on to argue that at least two genuine cosmological theories have serious, nonskeptical, and nonparasitic empirical equivalents: the first essentially replaces the gravitational field in Newtonian mechanics with curvature in spacetime itself, [9] while the second recognizes that Einstein's General Theory of Relativity permits cosmological models exhibiting different global topological features which cannot be distinguished by any evidence inside the light cones of even idealized observers who live forever.[10] And he suggests that “the production of a few concrete examples is enough to generate the worry that only a lack of imagination on our part prevents us from seeing comparable examples of underdetermination all over the map” (1993, 31) even as he concedes that his case leaves open just how far the threat of underdetermination extends (1993, 36).



    And this makes me wonder if you have perhaps perused one of Kevin Knuth’s (et al.) latest papers, “Bayesian Evidence and Model Selection”:

    http://arxiv.org/pdf/1411.3013v2.pdf

    It has examples . . .

  45. Yeah, here's even further support for the conjecture in my previous comment which also delves into the philosophical "meaning" of underdetermination better than the Plato article, or so I think:

    http://www.unige.ch/lettres/baumgartner/docs/real/newton.pdf

    This is actually a pretty interesting subject and I can certainly understand the reason for your frustration. However, I probably would've constrained any overly harsh criticism to the philosophers responsible for the paper in question rather than going after the community at large. Not that I'm passing any kind of judgement or anything, that's just how I tend to operate; but as the saying goes, to each their own. For certain you have a valid point . . . philosophers are supposed to "obsess over the ambiguity of words" I think is how Maudlin put it!

    With regards,
    Wes Hansen

  46. Quentin,

    Of course we never test assumptions in isolation, but I never said anything to that effect. You test assumptions in the context of a theory. You can very well test whether a theory without axiom A still works (to precision e) or doesn't. If it still works, you can throw out axiom A and hence say the theory was underdetermined. A quantification, however, enters through the requirement "still works to precision e".

    Or use your own example for that matter: take the SM with low mass susy. It doesn't fit the data. Obvious fix: throw out susy and all works. What else do you think is in the SM that you can throw out while maintaining high precision? I would be very surprised if you can actually come up with an example. Seriously, do you really believe that this is possible? Don't you think some susy-supporter would have tried that? That it's technically impossible to prove there is no theory (a set of axioms) which together with the (low mass) susy axiom will fit the data isn't the point of discussion here. We're talking about underdetermination of theories, not of axioms. Or at least I thought we were.

    I hope you see then why I say the two definitions are equivalent. In any case, it still didn't become clear to me why, if all theories are underdetermined and there's no quantification of underdetermination, I should sometimes worry about it and sometimes presumably not.

  47. Wes,

    Yes, Richard Dawid writes a lot about this in his book. I first thought I understood what he meant, but the more I thought about it, the less sense it made. As Quentin points out above (and as I also said), scientists draw upon criteria other than empirical fit to select a theory, notably simplicity. That's necessary to limit underdetermination, but I fail to see how you can do that without quantification. Yes, I suppose one could do a kind of Bayesian assessment. There are a variety of statistical methods that are being used to quantify which is the "least underdetermined" model. (Dawid of course wants to quantify criteria other than simplicity, which I think most scientists would object to.) Best,

    B.

  48. Sabine,

    " That it's technically impossible to prove there is no theory (a set of axioms) which together with the (low mass) susy axiom will fit the data isn't the point of discussion here. "

    Yes it is! That's the whole point of underdetermination: that another set of axioms would fit the data... And the point of confirmation holism is that you can never isolate a single axiom to incriminate: underdetermination holds whatever axiom you wish to retain (or even if you retain none). You've got it now, I hope.

    " What else do you think is in the SM that you can throw out while maintaining high precision? I would be very surprised if you can actually come up with an example"

    Are you saying now that SM without susy would NOT be underdetermined? I thought you accepted that all theories are underdetermined. I am confused. Now, what I have in mind is a completely different framework from the SM, and no, I have no example to come up with, but any time a scientific framework is replaced (such as Newtonian physics), holistic underdetermination is exemplified, so I see no reason why it couldn't happen in the future.

    "I hope you see then why I say the two definitions are equivalent. "

    Let me show you that they're not.

    (1) Imagine that what you say is true: a theory is underdetermined just if it has (at least) one unnecessary axiom.

    (2) Assume all theories are underdetermined

    (3) Remove one unnecessary axiom: you get a new theory, and per (2), it is still underdetermined.

    (4) repeat step 3 until there are no more axioms.

    (C) all axioms are unnecessary. Which is absurd. Hence either (1) or (2) is false. Since you accept (2), (1) must be false: your definition is not equivalent to the standard one.

    In any case, if you want to do philosophy seriously, don't come up with your own definitions and just assume they're "equivalent" unless you have a strong case and good reasons to use a non-standard definition. I hope this will close the debate.

    " it still didn't become clear to me why, if all theories are underdetermined and there's no quantification of underdetermination, I should sometimes worry about it and sometimes presumably not."

    Let me repeat what I said in previous comments: one should worry (as a realist) when (1) underdetermination cannot be accommodated by the usual arguments (appeal to non-empirical criteria such as simplicity, elimination of artificially constructed cases) and (2) when underdetermination concerns not only data collected so far, but all data that we could in principle collect in the future.
    That's what these authors could mean by "robust". I should read the whole paper to tell you more, but my guess is that they mean (1): in particular, it's robust because even scientists say they cannot choose between two models, so it's not an artificial case only philosophers worry about: it's "robust".

    Note that you don't have to worry at all if you're not a realist. Also note that philosophers of science are generally not prescriptivists: you don't have to worry about underdetermination as a practicing scientist, unless you're into philosophical questions.

    If you're still interested in understanding underdetermination and confirmation holism better, I suggest you read Duhem (1906), "The Aim and Structure of Physical Theory", before tackling contemporary debates. This is where the discussion originates, and it's where the terms got their meaning. And it's a classic which covers lots of issues that later became central in philosophy of science! So a very good introduction for anyone interested in the field.


  49. @Wes Hansen

    Thank you for these good references. Now note that it's not the authors in the original paper who imply that underdetermination should be quantified, it's Sabine who insists. The authors merely say that some cases are "robust", and as I understand it, they mean something like "serious, nonskeptical, and nonparasitic", as explained in Stanford's entry (see my response to Sabine above).

  50. Quentin,

    Your reading of my argument is sloppy, hence you continue to fail to see what I say.

    "That's the whole point of underdetermination: that another set of axioms would fit the data..."

    Well, if you say that's the point for you, then I'll not disagree. But for the scientist it's irrelevant that there *could* be a set of axioms if he or she doesn't know that set of axioms, so why should I care? It is also not what the paper is about; it's about the question of whether we can decide which of the existing theories (of early universe cosmology) describes nature best. I am saying that "best" needs to be quantified. (A point we should have learned from Leibniz at the latest.)

    "Are you saying now that SM without susy would NOT be underdetermined?"

    No, what I said, if you read what I wrote, was that the SM without low-mass susy works to higher precision. I have explained several times now that precision always implies a trade-off between accuracy of fit and number of parameters, hence the SM without low-mass susy is less underdetermined in a well-defined way. It's not not underdetermined, it's less underdetermined. Just that no such quantifiable definition is apparently used by philosophers.

    And in any case, you're distracting from the reason I brought this up: there's no known theory with low mass susy that works to higher precision than the SM, hence your "example" doesn't work.
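
    Since I keep insisting on quantification, here is roughly what I mean, as a toy sketch (my own illustration with made-up numbers, nothing to do with any actual fit of the SM): score two models of the same data with a penalized statistic such as the Akaike information criterion, AIC = 2k - 2 ln(L), where k is the number of parameters. The model with the lower score reaches comparable precision with fewer assumptions, which is the well-defined sense of "less underdetermined" I mean.

    ```python
    # Toy sketch (illustrative only): penalized model comparison via AIC.
    # For Gaussian errors with known sigma, -2 ln(L) equals chi^2 up to a
    # constant, so each model is scored with AIC = 2k + chi^2 (lower wins).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 50)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)  # fake data from a line

    def aic(residuals, k, sigma=0.1):
        chi2 = np.sum(residuals**2) / sigma**2
        return 2 * k + chi2

    line = np.polyval(np.polyfit(x, y, 1), x)   # 2 parameters
    cubic = np.polyval(np.polyfit(x, y, 3), x)  # 4 parameters
    print("line: ", aic(y - line, 2))   # typically lower: the extra cubic
    print("cubic:", aic(y - cubic, 4))  # terms don't pay for themselves
    ```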

    "Let me show you that they're not."

    You didn't show that they're not equivalent, you showed that both definitions are absurd. Which is what I've tried to tell you all the time. Incidentally, the logical mistake you make is psychologically very interesting. Your argument is basically that if a terminology doesn't make sense ("is absurd") it can't be one that philosophers are using, consequently it must be me making a mistake. Some attitude.

    Anyway, thanks for proving my point then. At least I don't have to repeat it one more time.

    No, they would both be absurd if they were equivalent, but they're not! What you have in mind is not underdetermination, but something like "simplest data fit". That can be quantified (reminds me of Lewis's best system analysis), at least once you assume a background framework from which to build your models. And that might be an interesting concept. But it's not called "underdetermination", which applies at the theory level, not only the model level, and which is a more general concept. You get confused only because the paper you read discussed particular cases. I am talking about the general concept. That could be one reason we talk past each other.
    Yes, I assume concepts in use for more than one century make sense. I loved your "crackpot service industry" story published elsewhere. Now you're being a crackpot toward philosophy by assuming we've been just wrong for centuries, with no more than a bachelor's-degree level in philosophy (I'm saying this on the basis of your essay on relations between math & reality, sorry to say), just as those engineers assume Einstein is wrong without due knowledge of physics (our "crackpots" are not engineers, but often scientists at the end of their careers, but it's really the same; and I am not saying that no scientist makes a good philosopher: there are many).
    Thanks for the discussion

  52. Quentin,

    Ok, let me see. I point out a mistake in your argument. Rather than correcting your argument, you repeat what you tried (and failed) to show. Then you call me a crackpot.

    I haven't said philosophers have been "wrong for centuries." I've said they're useless. Can't say you changed my mind. Your level of argumentation has turned out to be depressingly low.

  53. To quote wikipedia (Metaphysics):

    "In the eighteenth century, David Hume took an extreme position, arguing that all genuine knowledge involves either mathematics or matters of fact and that metaphysics, which goes beyond these, is worthless."

    Close?

    Sorry, but my argument is valid. If you accept that underdetermination applies to all theories and that it's equivalent to having unnecessary axioms, you have to accept that all axioms are unnecessary. You're being incoherent, accepting that underdetermination applies to all theories, then saying its definition is absurd (how could an absurd concept apply to anything?). If philosophers have been discussing seriously an absurd concept for a century, then they were wrong for a century, so that's obviously something you implied. When it comes to such bad faith I have no choice but to lower the level of the discussion. I don't want to lose any more time explaining at length what you can find in textbooks if you don't make an effort and start from the assumption that philosophers don't know what they're talking about and that you know better.
    Good thing: you challenge the concepts offered to you. Bad thing: you do it without knowing the concept at all, and with contempt.

    I subscribe to the theory of 'personal field bias'. Basically it means I think what matters is what I do, and if people want to do something else, all the worse for them. As for the physicists? Most of them are going to end up without a Nobel, and if they're lucky a few high-citation papers, and then they'll die--basically having wasted their life arguing over how many angels are dancing on the head of a pin, sorry, I mean how many free parameters there are, when there are so many more important things going on in the world. Too bad they didn't do something important like go into medicine. Oh well, it happens.

  56. Quentin,

    Ok, then, it seems you really didn't understand what I said. So let me try this again.

    "Sorry but my argument is valid."

    Your argument is technically wrong. I hope we can agree on this. You want to show that A is not equal to B. Instead you show that (in your own words) B is "absurd" and since you don't believe A is absurd (since philosophers have used it) you conclude A cannot equal B. It's faulty logic.

    What's maybe more important though is that it misses the point.

    "If you accept that underdetermination applies to all theories and that it's equivalent to having unnecessary axioms, you have to accept that all axioms are unnecessary. You're being incoherent, accepting that underdetermination applies to all theory, then saying its definition is absurd (how an absurd concept could apply to anything?)"

    You have demonstrated that the definitions that I told you don't make sense are absurd. How does that mean that *I* am the one being incoherent? I have explained already in my post that this concept of underdetermination only makes sense if you quantify it.

    What's wrong with your "proof" is that, as I told you repeatedly, both "necessary" has to be quantified ("necessary to fit data to some precision") and "underdetermination" has to be quantified ("required to fit data to some precision"). You have omitted these additions.

    Using a proper quantification, your 4-step procedure a) starts from wrong assumptions ("necessary" and "underdetermined" aren't binary) and b) fails in step 3) because it's still underdetermined, yes, but not underdetermined at the same level.

    Maybe more disturbing is that this should be obvious if you look at any example, eg what happens to any actual theory if you do this. Take some SM extension. It has axioms that are "unnecessary" to maintain the precision of the fit, so you can remove them. Each time you remove an unnecessary axiom, the theory will be less underdetermined. But you finally get to a point where removing further axioms will dramatically worsen the fit to the data. Exactly where this happens depends on the statistical measures you use etc - there isn't any one specific procedure that everyone agrees on. (I can give you some examples for this but it's somewhat tangential.)

    The point is that the scientist would say at this level the theory has reached a minimum of underdetermination. It's still underdetermined because, as you say yourself, no fit is ever perfect and so on. But that's as good as you can do without sacrificing accuracy.

    Now look, if you do *not* pay attention to quantifying underdetermination, then you have to accept that all levels of underdetermination are equally good or bad. In that case indeed a theory without any axioms whatsoever is equally underdetermined as the standard model. But to establish that, your "proof" is unnecessary: by saying that underdetermination is a yes/no criterion, you have already accepted that all theories are equally underdetermined.
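
    To make this procedure concrete, here is a minimal sketch (a toy example of my own, not any real SM extension): start with more parameters than the data need, remove them one at a time, and score each stage with a penalized fit measure, here the Bayesian information criterion. The score bottoms out at what I called the minimum of underdetermination.

    ```python
    # Minimal sketch (toy example): find where removing further parameters
    # starts to dramatically worsen the fit. For Gaussian errors with known
    # sigma, BIC = k ln(N) + chi^2 (lower is better).
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0, 1, 100)
    y = 1.5 * x**2 + 0.3 + rng.normal(0, 0.05, x.size)  # fake quadratic data

    def bic(residuals, k, sigma=0.05):
        chi2 = np.sum(residuals**2) / sigma**2
        return k * np.log(residuals.size) + chi2

    scores = {}
    for degree in range(6, -1, -1):  # drop one "axiom" (parameter) per step
        fit = np.polyval(np.polyfit(x, y, degree), x)
        scores[degree + 1] = bic(y - fit, degree + 1)
        print(degree + 1, "parameters -> BIC =", round(scores[degree + 1], 1))

    # typically the 3-parameter quadratic, the model the fake data came from
    print("minimum of underdetermination at", min(scores, key=scores.get),
          "parameters")
    ```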

    "If philosophers have been discussing seriously an absurd concept for a century, then they were wrong for a century, so that's obviously something you implied."

    No, this isn't something I implied or meant to imply. You are putting words into my mouth which I didn't use and didn't mean to use. If you look at what I actually wrote, I said that philosophers in their arguments do implicitly use a quantified version of underdetermination, they just don't explicitly state it. Instead they implicitly delegate the quantification to scientists. Which is all well and fine (best thing they can do). But then I don't know how they can make statements about situations where even scientists haven't yet done that quantification, hence my problem with the paper.

    Hope that clarifies it. Thanks for your patience.

  57. Look at the argument again: I showed that A=B leads to an absurdity. So I showed that A=B is absurd, not that B is absurd. Obviously, B ("having unnecessary axioms") is not an absurd concept, I can make up an example if you want, nor is A ("having other theories with the same empirical consequences") absurd, nor is the view that all theories have alternatives with the same empirical consequences.
    What leads to an absurdity is A=B and nothing else.

    Take the example of underdetermination discussed by philosophers, for example Earman's Newtonian physics with curvature of space-time instead of gravitational forces. They have the same number of axioms, and they make the same predictions. They are underdetermined. Nothing to do with levels of data fit and simplicity here. Moreover, all their axioms are necessary: remove one axiom of Newtonian physics, and you can't make predictions anymore. How would you accommodate this case?

    Underdetermination *is* binary. Either there exist other theories with the same empirical consequences, or not. This is true whether the theory fits the data perfectly or loosely: that's not the point. The only way we could quantify underdetermination (as another commentator noted) is to tell how many other theories have the same empirical consequences, but that's not something we can do: there are infinitely many possible theories, we don't have a measure on a hypothetical space of theories, and the history of science shows that unconceived alternative theories can be discovered at some point. Newton didn't expect that relativity could account for the same data. He didn't envisage that gravitational forces could be replaced by curvature of space-time in his own theory. Whitehead proposed an alternative to relativity theory that was never developed in detail. How could we quantify the underdetermination of Newtonian physics or relativity if we don't know all the possible alternatives?

    A theory that fits data loosely is just a theory with an additional axiom "there are interferences that explain the discrepancies", so the theory that fits data perfectly and the one that doesn't are equally underdetermined with each other (they both explain the same set of data), even though the one with better fit has more "unnecessary axioms" than the other. Which again shows that the two concepts are not the same. Just stop working on your own definitions, read Stanford's entry again, and all will be clear.

    I accept your point that better data fit enters in competition with simplicity. It's perhaps interesting, but that concerns what you call "unnecessariness", not underdetermination.
    Again, a theory with perfect fit and another with loose fit are underdetermined *with each other*, so they're equally underdetermined. Take Newtonian physics and relativity as examples of underdetermined theories (considering only pre-relativistic experimental data). How would you quantify their respective simplicity?

    Underdetermination is a qualitative concept. You don't need to quantify it to accept that some cases are worse than others. No philosopher ever used a quantified version, implicitly or not. I already gave you the qualitative criteria that are used to tell that some cases are worse.

    Maybe we can come to an agreement if we say that your quantification proposal is not telling us "how much underdetermined" a theory is (which doesn't make sense), but how much we should worry about its underdetermination. Could you agree with that?

    Having said that, I think your proposal would work well for underdetermined models of the same theory (different cosmological models of SM for example), where statistical tools can be used straightforwardly on the same data set, but not for completely different theories that apply to various domains of experience. So it is a more restricted concept. The concept of underdetermination is much more general than that, as I hope I have explained.


  58. Quentin,

    "I showed that A=B leads to an absurdity."

    I used A to refer to [your definition of underdetermined] and B to refer to [my definition of underdetermined]. Your first assumption is not that A=B (which you claim you show to be absurd) but merely the statement B, which, as I have now told you several times, I agree is absurd. You hence haven't shown what you claim to show.

    In any case, look, I have merely tried to make sense of the word "underdetermined" as it is used in the paper which I commented on. If you say it's used as a binary variable in the rest of the philosophical literature, fine, I'll certainly not disagree with that. This isn't the impression I got from reading the paper, which left me thinking that some theories (multiverses, models of inflation and so on) are somehow more badly underdetermined than others (say, susy).

    Yes, I think I could agree that the quantification would not tell us whether a theory is underdetermined, if you don't like to use the word this way, but would tell us how much we should worry about it being underdetermined. It actually sounds good to me.

    I don't understand your example with comparing Newtonian gravity to general relativity using only pre-relativistic data because I don't know what you mean by "pre-relativistic". Suppose you make an expansion of GR, then that expansion will constrain several parameters that could a priori have been different. Hence GR is less underdetermined by the data, provided you are sensitive to these terms. Though of course you'll say that doesn't make sense. Or maybe use a simpler example: why is the power in the Newtonian gravitational law -2? You could put a parameter there and then you'll have to constrain it with data. You don't have to do that in GR; it will tell you the power is -2. Again GR is less underdetermined than Newtonian gravity. At least that's how I would put it.
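
    In case it helps, here is that exponent example as a toy fit (my illustration, with made-up numbers): generate circular-orbit data from an inverse-square force, then treat the exponent as a free parameter and constrain it with the data. Newtonian gravity has to fit n; GR fixes n = -2 in the weak-field limit, so it has one parameter less to determine from the data.

    ```python
    # Toy sketch: constrain the exponent n in the acceleration a = GM * r**n
    # from fake circular-orbit data. For a circular orbit v**2 / r = a,
    # hence v = sqrt(GM * r**(n + 1)); n = -2 gives the usual v = sqrt(GM/r).
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    r = np.linspace(1.0, 5.0, 30)
    v = np.sqrt(1.0 / r) + rng.normal(0, 0.005, r.size)  # GM = 1, plus noise

    def v_model(r, gm, n):
        return np.sqrt(gm * r ** (n + 1))

    (gm_fit, n_fit), _ = curve_fit(v_model, r, v, p0=[1.0, -2.0])
    print("fitted exponent n =", round(n_fit, 3))  # comes out close to -2
    ```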

    I don't know if there is a way to compare different theories in various domains of experience.

  59. Ok this wasn't very clear, but my first assumption was A=B.
    I said: "(1) assume a theory is underdetermined just if it has at least one unnecessary axiom."
    By "underdetermined", I meant having other empirically equivalent theories ("my" definition), since that is how underdetermination is defined everywhere as far as I know. (Otherwise (1) would not have been an assumption at all, but a new definition.) This is important for (2), because then (2) says that all theories have empirical equivalents, which seems reasonable.

    Anyway let's not quibble anymore on definitions if we can agree, in substance, that it's about being worried or not by theory choice.

    As for your impression, I looked up the article and found this:
    "There is, however, a very considerable under-determination of the mechanism of inflation by our data—both today’s data, and perhaps, all the data we will ever have".
    Which I take as meaning that underdetermination is pretty bad because no more data could help us fix it.
    Later: "more serious, problem of under-determination: a problem relating to distance scales that far outstrip those we have observed"
    It's bad because the required data to fix it will never be gathered.
    Later they talk about "a worrying plethora of models", so it could be pretty bad because many different models of the same theory fit the data.
    Personally, I would take all this as a qualitative evaluation of the "seriousness" of underdetermination on the basis of various criteria, not as a quantitative approach. I might be wrong...

    By pre-relativistic, I meant something like, say: different models of the solar system, with relativistic effects being irrelevant at the precision and period duration at which we observe the planets' positions.

    I suppose that if we parameterize the "^2", we consider a generalisation of Newtonian physics. This would make sense if the parameter could vary from place to place, but then GR would no longer fit the data? Or is it only the fact that there is a constant that makes the theory less simple? I remember discussions on whether simplicity is objective or not (about Lewis's best system analysis). For example, is a polynomial simpler than a sinusoid? I also found this http://philsci-archive.pitt.edu/10329/ perhaps you'd be interested.

    My point about different domains is that your quantification seems to apply to a single, particular situation to which we apply different models. Now perhaps one theory could fit one domain well, and the other would fit another one better? (For example, MOND would fit galaxy rotation better, and dark matter models would fit other areas better? You're more competent than me on these issues.)

    You see, this is why I linked to the article, "Underdetermination of Theory by Data," above: underdetermination, as defined by its originator, Quine, has absolutely nothing to do with axioms; in the concept as proposed by Quine, theories which are underdetermined are empirically undecidable as to which is preferable, past, present, and future, the future being the key point! I know that sounds kinda crazy, but here is the definition:

    That is, we have a case of underdetermination if for some subject matter we have two theories, T1 and T2, which are 1) incompatible and 2) compatible with all actual and possible observations.

    The key point is POSSIBLE OBSERVATIONS! And when you think about it, it isn't really crazy at all.

    So now assume you have two theories, T1 and T2, which both account for all empirical data in the same domain equally well. Let A be the set of all possible alternative theories which underdetermine T1 and let B be the set of all possible alternative theories which underdetermine T2. Then either A=B or A is disjoint from B by virtue of T2 making untested predictions which are incompatible with those made by T1. If A=B then the theories are not comparable using underdetermination as a benchmark simply because they are equally underdetermined. If A doesn't equal B then one would like to be able to say that one theory is more or less underdetermined than the other based on the cardinality of A and B, but there is no way, according to Quine's definition, to determine these cardinalities, since there is no way of knowing whether one has exhausted all possibilities. So the only other benchmark which is left, in my view, is the number of predictions made which are technologically realizable with current technologies.

    But, having speed-read the paper, it seems to me the authors are simply trying to argue that underdetermination of theory by data is no cause to give up scientific realism, which they define as:

    "We take scientific realism to be the doctrine that most of the statements of the mature scientific theories that we accept are true, or approximately true, whether the statement is about observable or unobservable states of affairs. Here, “true” is to be understood in a straightforward correspondence sense, as given by classical referential semantics. And, accordingly, scientific realism holds our acceptance of these theories to involve believing (most of) these statements—i.e. believing them to be true in a straightforward correspondence sense."

    But then, and this is what I find absurd (although they do include the words "approximately" and "believe"), they conclude with:

    " But as we said in Section 1, we do not see these cases of under-determination as threatening scientific realism. For it claims only that we can know about the unobservable, and indeed do know a lot about it—not that all the unobservable is knowable."

    We don't know a damn thing about the unobservable, we just assume we know. In fact, this is made explicit in the Copenhagen Interpretation of Quantum Theory, the S-Matrix paradigm, and, one would think, it is inherent to String theory. We make assumptions and it is quite possible that our assumptions are flat wrong; for certain they are prejudiced by our acquired "intuition," an intuition which is a function of our domain of perceptual experience! I mean, to me, underdetermination (and it permeates all of science), if it demonstrates anything, demonstrates that all scientific knowledge is probabilistic, i.e. contingent. Quantum Theory plainly shows that even with observation we don't really KNOW what the hell is going on. I just don't think one can extend scientific realism to the realm of unobservables, certainly not using the correspondence theory of truth! I mean, that's so absurd it's funny . . .

  61. Wes,

    That's all well and fine, but for most scientists two theories that make the same predictions in all possible observable cases aren't worth calling different theories. There is (trivially) an isomorphism between the two, so why worry? Just pick the one you prefer to work with. As I said earlier, I'm an instrumentalist, not a realist. It somewhat depends, of course, on what you mean by "possibly observable", which is honestly a discussion I'm not very interested in.

    Look, in physics every theory is basically a set of axioms. Or maybe I should say two such sets, one "pure math" set and one set that identifies the math with observables. Hence, if underdeterminism is supposed to be a meaningful (ie well-defined) concept, there must be a way to express it as a property of that set of axioms.

  62. @Wes

    Indeed, scientific realism is about our theories being true about the unobservable.

    Observable in philosophy means *directly* observable (without instruments), so DNA and cells are unobservable. If you think they exist you're a realist. The same goes for electrons.

    Realists think that this is the best explanation of our theories' success: that they're true. In particular when they make unexpected predictions. That seems miraculous from a purely instrumentalist view.

    @Sabine

    I don't think there's necessarily an isomorphism, except under a simplistic version of how theories relate to observations.

    What is this set that identifies the math with observables? By observable do you mean direct perception only, or also observation with instruments (is an electron having a direction of spin something we can observe, for example)? It wasn't clear in your essay either.

    If the former, I doubt you can show me any such mapping in any physics textbook.
    If the latter, you're in a better position, but note that the functioning of most measuring instruments is based on physical theories, so there's a threat of circularity.

  63. Quentin,

    As I said, I'm an instrumentalist. If you have an isomorphism from the predictions of theory one to the predictions of theory two, they're not different theories, because you've just shown that they're not. A duality relation, for example, actually shows that the theories you thought were different are not - they're the same. (It might be the case that sometimes using one expression is simpler and sometimes using the other, but that's a different discussion.)

    As to the two sets of axioms that I mentioned, when I say observable I mean anything that can be measured. I'm a physicist and don't care very much about 'direct perception,' to begin with because I'm not sure what 'direct' means (I wrote about this extensively elsewhere; I'm afraid I have to tell you that I am of the opinion the direct/indirect transition is another continuous divide that often gets mistaken for being binary), and 'perception' I'd hand over to a biologist.

    I just meant to point out that by writing down a set of mathematical axioms (say, for GR) you're not done building a physical theory. You also need to specify, for example, how the stress-energy tensor relates to matter (actually the name already says it, but just from the math that isn't obvious), or how components of the curvature tensor/connection give rise to tidal forces/redshift and so on. In a quantum theory you also have the measurement axiom. The math alone doesn't tell you how to identify the numbers which you measure with the math. If you're a Platonist you can think of this as yet another isomorphism.
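
    To give a standard example of such a dictionary entry (a textbook relation, nothing specific to this discussion): the geodesic deviation equation is what identifies components of the curvature tensor with measurable tidal accelerations,

    $$\frac{D^2 \xi^\mu}{d\tau^2} = -R^\mu{}_{\nu\alpha\beta}\, u^\nu \xi^\alpha u^\beta,$$

    where \(\xi^\mu\) is the separation of two nearby freely falling particles and \(u^\nu\) their four-velocity. The math alone doesn't tell you that the left-hand side is what a pair of accelerometers measures; that identification is an extra axiom.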

  64. Sabine,

    If you're not talking about direct observation, your notion of equivalence makes more sense and I'd agree.
    That direct/indirect is not binary was indeed emphasised by philosophers decades ago, as was the fact that measurement techniques are varied and can evolve independently of the theories, and that direct perception is as revisable as, if not more than, observation with instruments (in light of our biological knowledge, for example).

    From a philosophical perspective, that makes your position closer to realism than to radical instrumentalism (especially when you resort to biology to explain perception, thus implying that biology is somehow true), but to you, that's a mere question of labels I suppose.

    Your view could be challenged by incommensurability theories (the idea that what counts as observable is somehow shaped by the theory itself, so that different theories work on a different observational base and cannot be directly compared). This was a trending topic in the 70s (Kuhn, Feyerabend), but being personally sceptical about the whole thing, I won't dwell on this.

    I suppose your mapping between observation and math just amounts to assigning a theoretical vocabulary (physical properties...) to elements of the mathematical structure. That's how I'd put it; correct me if I'm wrong.

    Best,
    Quentin

  65. Quentin,

    Yes, I would agree with all of your last comment.

    I would be interested in a reference on the direct/indirect distinction in the philosophy literature that you mention. As someone who works on 'indirect' tests (of quantum gravity), it's a topic that keeps coming back to me. Best,

    B.

  66. Stanford's encyclopedia is always a good entry point to find references.

    You might find some in this article, in particular the paragraph I linked directly
    http://plato.stanford.edu/entries/science-theory-observation/#ObsExcPerPro

    See also this paragraph
    http://plato.stanford.edu/entries/theoretical-terms-science/#2.1

    And perhaps this article (depending on what you're looking for exactly)
    http://plato.stanford.edu/entries/physics-experiment/

    If you're interested in older discussions about operationalism (a radical form of instrumentalism that attempts to reduce the content of theories to operations in a laboratory) you can read this, but it's more historical
    http://plato.stanford.edu/entries/operationalism/

    Let me apologize if I misunderstood your positions from time to time during the discussion. I suppose it was mostly a problem of acquaintance with the vocabulary used in philosophy, which often has a quite precise meaning (less liberal than ordinary use: for example, "observation").

  67. Quentin,

    Thanks for the references & apology accepted. I am glad we managed to sort this out and I think I learned a lot thanks to your patience :o) Best,

    B.

  68. I'm going to excuse myself from this discussion because I'm not well enough informed, but before I go . . .

    Apparently, and you're probably aware of this, there are two types of underdetermination in the literature: the one by Quine which asserts that there exist theories which are LOGICALLY INCOMPATIBLE but empirically equivalent (the paper I linked to above gives an example using toy models involving closed time and cyclic time); the one by Hume which is much weaker and simply asserts that theories are underdetermined by current knowledge, i.e. empirical data.

    The Humean underdetermination seems quite obvious and tied to the axioms. The Quinean underdetermination is much more egregious and perhaps not as obvious, but it seems possible if not certain. You say, "If you have an isomorphism from the predictions of theory one to the predictions of theory two, they're not different theories, because you've just shown that they're not." But this is not necessarily true. The toy example above involving time is one example. The point is, the possibility exists, which should be enough to prevent scientists from putting all of their eggs in one basket.

    Quentin,

    Thank you for the explanation regarding realism; I guess I'm not a realist! I don't see how we can possibly know anything definitive about unobservables, all we know are the patterns of interaction, which is limited knowledge at best - and that's for unobservables we do have access to through instrumentation.

  69. @Wes Hansen

    I'm not a realist either, but it's quite mainstream in contemporary (analytic) philosophy... Now there's the intuition that our theories still tell us something about reality (its nomological structure perhaps? Structural realism is the most sensible version of scientific realism to me).

    If ever you're interested, I just started a blog recently to expose the content of my PhD dissertation in English, which is on these topics: http://modalempiricism.blogspot.fr/ Comments are very welcome!

  70. Quentin,

    Thanks for the reference to your blog. I read the first two posts and will continue with the others, time permitting. From what I've read, I believe Kevin Knuth's work with Information Physics, what he calls "Observer-Centric Foundations," could perhaps shed a bit of light on your philosophical question. I would direct your attention to an introductory paper, "Information-Based Physics: An Observer-Centric Foundation":

    http://arxiv.org/abs/1310.1667

    You certainly picked a fascinating subject for a dissertation! Hopefully I'll be able to produce a few constructive comments - once I get up to speed, of course.

    With regards,
    Wes Hansen

  71. "Hence, if underdeterminism is supposed to be a meaningful (ie well-defined) concept, there must be a way to express it as a property of that set of axioms."

    I'm sure you've probably moved on by now, but I find this topic rather interesting so I've been digging a bit deeper. You seem to reject Quine's underdetermination outright, so I'll just address that of Hume. It seems to me that Hume's underdetermination is just a naive expression of Gödel incompleteness or, more generally, Turing undecidability. Both of these are quite well-defined, and since any scientific theory you would seem interested in contains more than enough arithmetic to enable Gödel numbering and addition and multiplication on Gödel numbers, they both apply.

    From the down and dirty intro to these issues, "A Problem Course in Mathematical Logic," Gödel completeness is defined as: Definition 15.2. A set of sentences Σ of a first-order language L is said to be complete if for every sentence τ either Σ proves τ or Σ proves ¬τ. By "proves" is meant that there is a derivation concluding with τ or ¬τ from a finite subset of sentences in Σ. Gödel proved that any system which includes enough arithmetic to enable his form of proof is either incomplete or inconsistent. One would like to think that the axioms of any scientific theory must be both finite and consistent, hence, incomplete. Does it not seem likely that this is the source, if not an equivalent expression, of underdetermination? I think the key point is that theories generally involve, besides the raw data, abduction or assumed hypotheses, and it's these hypotheses that are underdetermined by the data - which seems rather obvious, especially given the history of science.

    The thing is, Incompleteness and Undecidability are a function of the set of axioms and the rules of inference used, hence, well-defined in the sense you seem interested in. And they are both used to some extent in theoretical considerations, certainly in computer science, metamathematics, and logic.

    I would recommend the above document and the book, "Gödel's Theorem: An Incomplete Guide to Its Use and Abuse"

    https://www.amazon.com/G%C3%B6dels-Theorem-Incomplete-Guide-Abuse/dp/1568812388

    http://euclid.trentu.ca/math/sb/pcml/pcml-16.pdf

    You know Sabine, you and I both find the paper you link to above ridiculous, the difference being, you find it ridiculous for invalid reasons while I find it ridiculous for valid reasons. You find it ridiculous because it doesn't meet your standards of scientific rigor. But it's not meant to be a scientific argument, it's a philosophical argument; the issue at hand is not a scientific issue, rather, it is a philosophical one: what is the nature of our knowledge, or, what do our scientific theories really tell us? I find it ridiculous because it's disingenuous; the authors' thesis is that underdetermination does not undermine the position of scientific realism. This is fine and dandy, but then they constrain their consideration to Hume's definition of underdetermination, which is disingenuous.

    Hume’s underdetermination affects all positions regarding the nature of knowledge equally and it doesn’t undermine any of them, it simply points out that our knowledge is perpetually incomplete; the “theory of everything” is an oxymoron. Quine’s underdetermination, on the other hand, seems to completely devastate scientific realism, at least to me, and I think Newton-Smith rather formidably demonstrates this in the paper, “The Underdetermination of Theory by Data,” which I linked to above. But Quine’s underdetermination seems to be rendered benign by Quentin’s Modal Empiricism, at least as far as I understand it from the sketch he provided on his new blog, and this is really the point of philosophical exercise.

    Now you say that Quine’s underdetermination is without substance since, from a scientific perspective, there exists an isomorphism between empirically equivalent theories. But what is an isomorphism? It is a one-to-one correspondence between abstract structures, key word being abstract. Oftentimes, in philosophical arguments, you’ll find the devil in the details! And this is what philosophers do, they look for the devil. They take a question of interest, “What is the nature of knowledge” in the current discussion, and they analyze it via constructive debate. In the process they distill out, or abstract out, the essential features of the question under study and eventually, through this process of back and forth, they converge on a valid position which satisfies the question.

    This is the significance of Quine’s underdetermination. I mean, you can’t just dismiss Quine out of hand; he is a legend in both fields, Philosophy and Logic, on par with Wheeler or Feynman in Science. Now Newton-Smith, the author of the paper I linked to above, gives as examples cyclic time versus closed time and continuous space versus simply dense space. It does not matter whether or not these examples are scientifically relevant because it’s not a scientific question, it’s a philosophical question, and these examples are very much philosophically relevant! Quine was simply pointing out an essential feature of the question, which is to say, any valid position on the question, “What is the nature of our knowledge,” must be immune to Quine’s underdetermination; scientific realism is not so immune whereas Quentin’s Modal Empiricism would seem to be. Do you see the value of Quine’s definition?

    You know, I’m not trying to be critical of you, rather, I’m simply trying to express to you why some people find philosophical exercises important, valuable, and stimulating. And if you think about it, most, if not all, scientific revolutions have come about due to the asking and answering of the correct and relevant counterfactual, Einstein’s relativity theories being prime examples, and I would argue that the asking half, anyway, is a philosophical exercise. I would also suggest that a devaluation or disregard for this type of philosophical exercise could very well be a prime cause of the current impasse in theoretical physics; the search for mathematical beauty, it would seem, is not, in and of itself, sufficient. I think Lee Smolin would probably be in agreement with this . . .

    With regards,
    Wes Hansen

  73. Wes,

    I don't find the paper ridiculous. I don't know why you think so. And I haven't said anything about expecting scientific rigor from philosophers. Please stop assigning opinions to me that I have never expressed anywhere and that I de facto don't hold.

  74. Okay, now you're being disingenuous, I mean, come on:

    "In any case, it turns out that it doesn’t matter much for the rest of the paper exactly what realism means to the authors – it’s a great paper also for an instrumentalist because it’s long enough so that, rolled up, it’s good to slap flies."

    Implicit in the previous post and this one is an expression of general disregard for philosophy, period, which seemed to be enhanced, the disregard I mean, by the above paper. I don't think I'm alone in this assessment, but whatever. It's your blog, I was just trying to be helpful . . . next time I'll just bang my head on the wall a few dozen times!

    With regards,
    Wes Hansen

  75. Wes,

    I have said very clearly in this post, in the previous post, and in many other posts that I think philosophy is more important to the foundations of physics than many theoretical physicists want it to be. You seem to believe that this means I am not allowed to criticize anything a philosopher produces, hence you misread my criticism of certain conclusions as a dismissal of philosophy in general, which greatly misstates my opinion. (Also, we clearly don't share the same sense of humor.) Best,

    B.

    Dr. H, I have a few times recommended the book by Unger & Smolin, "The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy". The first part (by Unger) is in particular a very good illustration of how philosophy can help unmuddle some long-buried assumptions in physics and cosmology as concerns the special relativity view of time.

    Well now, I have an all-inclusive sense of humor; I find everything funny, even those things physically discomforting to me personally! And I certainly don't believe a person is not allowed to criticize anything a philosopher produces, quite the contrary, I believe a person is responsible for criticizing anything and everything spewed forth by philosophy - or science for that matter. I certainly find the paper you linked to without philosophical merit, in fact, as I stated, it is clearly disingenuous, by which I mean, "not candid or sincere, typically by pretending that one knows less about something than one really does." That definition being from Google's online dictionary. Anyway, my statements regarding Quine's underdetermination remain valid, although I often wonder how he justified his mathematical realism in his own mind - given his rationality regarding the scientific realism question!

    Matthew,
    Yeah, I've read some of Unger's and Smolin's writings about their naturalist philosophy and I find it interesting. Time is certainly a great mystery, and I think their position regarding time is at least partly correct; I don't see how it couldn't be. I'm just not convinced that it's fundamental, not to Ultimate Reality anyway, whatever that may be. I think time emerges, but I'm not a mathematical realist like Connes or Tegmark; I don't believe in an archaic mathematical reality, rather, I believe in emptiness, the Buddhist concept I mean. But anyway, Unger's and Smolin's position on time in no way contradicts the atemporal perspective, and of course it's important to remember that Smolin has a great deal of skin in the game . . .

  78. I was referring specifically to Unger's claim that what has come to be called "block time" is a metaphysical assumption hidden in SR and not a fundamental theorem of it.

    ReplyDelete
  79. Matthew,

    Yeah, I kind of assumed that was what you were referring to, which is why I concluded my comment above in the manner I did. And I wasn't trying to denigrate Smolin either; I'm personally a fan of LQG. But you can't expect Smolin, or his metaphysics, to not be influenced by LQG; in fact, I would naturally expect it (his metaphysics) to be informed by LQG. But besides that, there do exist valid reasons for at least accepting the "block universe" as a reasonable conjecture, most of which I have mentioned and linked to on this blog at one time or another. But besides all of that, from a purely philosophical/metaphysical perspective, consider the following from the book by Reuben Hersh, "What Is Mathematics, Really?"

    "Niagara Falls is an example of the dialectic interplay of object and process. Niagara Falls is the outlet of Lake Ontario. It's been there for thousands of years. It's popular for honeymoons. To a travel agent, it's an object. But from the viewpoint of a droplet passing through, it's a process, an adventure:

    over the cliff!
    fall free!
    hit bottom!
    flow on!

    Seen in the large, an object; felt in the small, a process. (Prof. Robert Thomas informs me that the Falls move a few feet or so, roughly every thousand years.)

    Movies show vividly two opposite transformations:

    A. Speeding up time turns an object into a process.
    B. Slowing down time turns a process into an object.

    To accomplish (A), use time-lapse photography. Set your camera in a quiet meadow. Take a picture every half hour. Compose those stills into a movie.

    Plant stalks leap out of the ground, blossom, and fall away!
    Clouds fly past at hurricane speed!
    Seasons come and go in a quarter of an hour!
    Speeding up time transforms meadow-object into meadow-process.

    To accomplish (B), use high-speed photography to freeze the instant. A milk drop splashed on a table is transformed to a diadem - a circlet carrying spikes, topped by tiny spheres. By slowing down time, splash-process is transformed to splash-object.

    A human body is ordinarily a recognizable, well-defined object. But physiologists tell us our tissues are flowing rivers. The molecules pass in and out of flesh and bone. Your friend returns after a year's absence. You recognize her, yet no particle of her now was in her when she left. Large-scale, object - small-scale, process.

    [...]

    Any phenomenon is seen as an object or a process, depending on the scale of time, the scale of distance, and human purposes. Consider nine different time-scales - astronomical time, geologic time, evolutionary time, historic time, human lifetime, daily time, firing-squad time, switching time for a microchip, unstable particle lifetime."

    The point is, time considerations are not scale free, hence, the necessity of the Lorentz transforms! Now zoom the time-scale out way past astronomical time and explore the asymptotic limits; what happens to the region of space we consider astronomical? From the perspective of time, it becomes an object - eternity in a single moment! Ha, Ha, Ha . . . I love this stuff, I think it's a blast! Of course I try my hardest to avoid firing-squad time . . .

    With regards,
    Wes Hansen

    All phenomena are a mix of process and substance at all scales. I don't think this is a particular issue in either view of time.

