Wednesday, January 06, 2010

Is Physics Cognitively Biased?

Recently we discussed the question “What is natural?” Today, I want to expand on the key point I was making. What humans find interesting, natural, elegant, or beautiful originates in brains that developed through evolution and were shaped by the sensory input they received and processed. This evolutionary history also affects the sort of questions we are likely to ask, the kind of theories we search for, and how we search. I am wondering, then: might we be biased in ways that make us miss clues necessary for progress in physics?

It would be surprising if we were scientifically entirely unbiased. Cognitive biases caused by evolutionary traits inappropriate for the modern world have recently received a lot of attention. Many psychological effects in consumer behavior, opinion formation, and decision making are well known by now (and frequently used and abused). The neurological origins of religious thought and superstition have also been examined. One study particularly interesting in this context is Peter Brugger et al.’s on the role of dopamine in distinguishing signals from noise.

If you bear with me for a paragraph, there’s something else interesting about Brugger’s study. I came across it in Bild der Wissenschaft (a German popular science magazine, high quality, highly recommendable), but without a reference. So I checked Google Scholar but didn’t find the paper. I checked the author’s website, but nothing there either. Several web searches on related keywords, however, brought up first of all a note in NewScientist from July 2002. No journal reference. Then there are literally dozens of articles mentioning the study after this. Some refer to the NewScientist article, some don’t, but they all sound like they copied from each other. The study was mentioned in Psychology Today, was quoted in newspapers, etc. But no journal reference anywhere. Frustrated, I finally wrote to Peter Brugger asking for a reference. He replied almost immediately. Turns out the study was not published at all! Though it has meanwhile, after more than seven years, been written up and is apparently in the publication process, I find it astonishing how much attention a study could get without ever having been peer reviewed.

Anyway, Brugger was kind enough to send me a copy of the paper in press, so I now know what they actually did. To briefly summarize: they recruited two groups of 20 people each. One group consisted of self-declared believers in the paranormal, the other of self-declared skeptics. This self-description was later quantified with commonly used questionnaires like the Australian Sheep-Goat Scale (with a point scale rather than a binary one though). The participants performed two tasks. In one task they were briefly shown short strings that sometimes were sensible words, sometimes just random letters. In the other task they were briefly shown faces or just random combinations of facial features. (The two tasks apparently use different parts of the brain, but that’s not so relevant for our purposes. The stimuli were also shown to the right and left visual fields separately for the same reason, but that’s not so important for us either.)

The participants had to identify a “signal” (word/face) from the “noise” (random combination) in a short amount of time, too short to use the part of the brain necessary for rational thought. The researchers counted the hits and misses. They focused on two parameters from this measurement series. The first is the direction of the bias: whether the errors are randomly distributed, lean towards false positives, or lean towards false negatives (Type I or Type II errors). The second is how well the signal was identified overall. The experiment was repeated after a randomly selected half of the participants received a high dose of levodopa (a Parkinson’s medication that increases the dopamine level in the brain), the other half a placebo.
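
To make these two parameters concrete: in the language of signal detection theory, the first corresponds to the response criterion, the second to the sensitivity. Here is a minimal toy simulation in Python (my sketch, not Brugger’s actual protocol; all numbers are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def run_trials(criterion, n_trials=200, d_prime=1.0):
        # Each trial is signal or noise with equal probability. The internal
        # evidence is Gaussian, shifted by d_prime when a signal is present.
        # The subject answers "signal" whenever evidence exceeds criterion.
        is_signal = rng.random(n_trials) < 0.5
        evidence = rng.normal(0.0, 1.0, n_trials) + d_prime * is_signal
        said_signal = evidence > criterion
        false_pos = np.mean(said_signal & ~is_signal)  # Type I error rate
        false_neg = np.mean(~said_signal & is_signal)  # Type II error rate
        return false_pos, false_neg

    # A lax criterion produces a believer-like bias (more false positives),
    # a strict criterion a skeptic-like bias (more false negatives):
    for label, c in [("believer", 0.2), ("skeptic", 0.8)]:
        fp, fn = run_trials(c)
        print(f"{label}: Type I rate {fp:.2f}, Type II rate {fn:.2f}")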

The result was the following. First, without the medication, the skeptics had a bias for Type II errors (they more often discarded as noise what really was a signal), whereas the believers had a bias for Type I errors (they more often saw a signal where it was really just noise). The bias was equally strong in both groups, but in opposite directions. It is interesting, though not too surprising, that the expressed worldview correlates with unconscious cognitive characteristics. Overall, the skeptics were better at identifying the signal. Then, with the medication, the bias of both skeptics and believers tended towards the mean (randomly distributed misses), but the skeptics overall became as bad at identifying signals as the believers, who performed no differently than they had without the extra dopamine.

The researchers’ conclusion is that the (previously made) claim that dopamine generally increases the signal-to-noise ratio is wrong, and that certain psychological traits (roughly, the willingness to believe in the paranormal) correlate with a tendency to false positives. Moreover, other research results seem to have shown a correlation between high dopamine levels and various psychological disorders. One can roughly say that if you fiddle with the dose you’ll start seeing “signals” everywhere and eventually go bonkers (psychotic, paranoid, schizoid, you name it). Not my field, so I can’t really comment on the status of this research. Sounds plausible enough (I’m seeing a signal here).

In any case, these research studies show that our brain chemistry contributes to our finding of patterns and signals and, in the extreme, to assigning meaning to the meaningless (there really is no hidden message in the word-verification). Evolutionarily, Type I errors in signal detection are vastly preferable: it’s fine if a breeze moving leaves gives you an adrenaline rush, but you only mistake a tiger for a breeze once. Thus, today the world is full of believers (Al Gore is the antichrist) and paranoids who see a tiger in every bush/a feminist in every woman. Such overactive signal identification has also been argued to contribute to the wide spread of religions (a topic that currently seems to be fashionable). Seeing signals in noise is, however, also a source of creativity and inspiration. Genius and insanity, as they say, go hand in hand.
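
The evolutionary argument is easy to put in numbers. A back-of-the-envelope expected-cost comparison (with entirely made-up costs) shows why natural selection favors the jumpy detector:

    # Toy expected-cost comparison for the breeze/tiger trade-off.
    # All numbers are invented for illustration.
    p_tiger = 0.01    # prior probability that a rustle is a tiger
    cost_fp = 1.0     # cost of a needless adrenaline rush (Type I)
    cost_fn = 1000.0  # cost of mistaking a tiger for a breeze (Type II)

    # Expected cost per rustle of the two blanket policies:
    always_flee = (1 - p_tiger) * cost_fp  # pay the small price often
    never_flee = p_tiger * cost_fn         # pay the fatal price rarely

    print(always_flee, never_flee)  # 0.99 vs. 10.0: jumpiness wins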

It seems odd to me, however, to blame religion on a cognitive bias for Type I errors. Searching for hidden relations at the risk that there are none doesn’t per se characterize only believers in The Almighty Something, but also scientists. The difference lies in the procedure thereafter. The religious will see patterns and interpret them as signs of God. The scientist will see patterns and look for an explanation. (God can be aptly characterized as the ultimate non-explanation.) This means that Brugger’s (self-)classification of people by paranormal beliefs is somewhat beside the point (it likely depends on education). You don’t have to believe in ESP to see patterns where there are none. If you read physics blogs you know there’s an abundance of people who have “theories” for everything from the planetary orbits over the mass of the neutron to the value of the gravitational constant. One of my favorites is the guy who noticed that in SI units G times c is to good precision 2/100. (Before you build a theory on that noise, recall that I told you last time the values of dimensionful parameters are meaningless.)
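
In case you want to check how fragile this kind of “pattern” is, two lines suffice. The product is indeed close to 2/100 in SI units, but restate the very same physics with lengths in feet and the coincidence evaporates:

    G = 6.674e-11  # gravitational constant in m^3 kg^-1 s^-2
    c = 2.998e8    # speed of light in m/s
    print(G * c)   # ~2.001e-2, i.e. "2/100" to about 0.05%

    ft = 3.28084                   # feet per meter
    print((G * ft**3) * (c * ft))  # ~2.32: same physics, pattern gone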

The question then arises: how frequently do scientists see patterns where there are none? And what impact does this cognitive bias have on the research projects we pursue? Did you know that the Higgs VEV is the geometric mean of the Planck mass and the 4th root of the Cosmological Constant? Ever heard of Koide’s formula? Anomalous alignments in the CMB? The 1.5 sigma “detection”? It can’t be coincidence our universe is “just right” for life. Or can it?
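
At least one piece of this question has a quantitative answer: scan enough independent places and pure noise will hand you “detections” for free. A minimal sketch of this look-elsewhere effect:

    import numpy as np

    rng = np.random.default_rng(1)

    # 10,000 independent "measurements" of pure Gaussian noise:
    noise = rng.normal(0.0, 1.0, 10_000)

    # Count apparent "3 sigma detections" in data with no signal at all.
    # P(z > 3) ~ 0.00135, so we expect about 13 of them by chance alone.
    print(np.sum(noise > 3.0))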

This brings us back to my earlier post. (I warned you I would “expand” on the topic!) The question “What is natural?” is a particularly simple and timely example where physicists search for an explanation. It seems, though, that I left confused those readers who didn’t follow my advice: if you didn’t get what I said, just keep asking why. In the end the explanation is one of intuition, not of scientific derivation. It is possible that the Standard Model is fine-tuned. It’s just not satisfactory.

For example, Lubos Motl, a blogger in Pilsen, Czech Republic, believes that naturalness is not an assumption but “tautologically true.” As “proof” he offers us that a number is natural when it is likely. What is likely, however, depends on the probability distribution used. This argument is thus indeed tautological: it merely shifts the question of what is a natural number to what is a natural probability distribution. Unsurprisingly then, Motl has to assume the probability distribution is not based on an equation with “very awkward patterns,” and the argument collapses to “you won't get too far from 1 unless special, awkward, unlikely, unusual things appear.” Or in other words, things are natural unless they’re unnatural. (Calling it Bayesian inference doesn’t improve the argument. We’re not talking about the probability of a hypothesis; the hypothesis is the probability.) I am mentioning this sad case because it is exactly the kind of faulty argument my post was warning of. (Motl also seems to find the cosine function more natural than the exponential function. As far as I am concerned the exponential function is very natural. Think otherwise? Well, that’s why I’m saying it’s not a scientific argument.)
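
The distribution-dependence is easy to make explicit. Ask how likely a number of order one is, and the answer flips with the (arbitrary) choice of prior. A quick sketch with two common choices:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    def frac_order_one(x):
        # Fraction of samples within a factor of 10 of 1.
        return np.mean((x > 0.1) & (x < 10.0))

    uniform = rng.uniform(0.0, 1e4, n)           # flat between 0 and 10^4
    log_uniform = 10.0 ** rng.uniform(-8, 8, n)  # flat in log10, 16 decades

    print(frac_order_one(uniform))      # ~0.001: O(1) looks "unnatural"
    print(frac_order_one(log_uniform))  # ~0.125: O(1) is unremarkable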

The other point that some readers misunderstood is my opinion on whether or not asking questions of naturalness is useful. I do think naturalness is a useful guide. The effectiveness of the human brain at describing Nature might be unreasonable (or at least unexplained), but it’s definitely well documented. Dimensionless numbers that are much larger or smaller than one undeniably have an itch-factor. I’m not claiming one should ignore this itch. But be aware that this want for an explanation is an intuition, call it a brainchild. I am not saying thou shalt disregard your intuition. I say thou shalt be clear about what is intuition and what is derivation. Don’t misconstrue as a signal what is none. And don’t scratch too much.

But more importantly, it is worthwhile to ask what formed our intuitions. On the one hand they are useful. On the other hand we might have evolutionary blind spots when it comes to scientific theories. We might ask the wrong questions. We might be on the wrong path because we believe we have seen a face in random noise, and miss other paths that could lead us forward. When a field has been stuck for decades, one should consider the possibility that something is being done systematically wrong.

To some extent that possibility has been considered recently. Extreme examples of skeptics in science are proponents of the multiverse, Max Tegmark with his Mathematical Universe ahead of them all. The multiverse is possibly the mother of all Type II errors, a complete denial that there is any signal.

In Tegmark’s universe it’s all just math. Tegmark unfortunately fails to notice that it’s impossible for us to know that a theory is free of cognitive bias, which he calls “human baggage.” (Where is the control group?) Just because we cannot today think of anything better than math to describe Nature doesn't mean there is nothing. Genius and insanity...

As far as the multiversists are concerned, the “principle of mediocrity” has dawned upon them, and now they ask for a probability distribution in the multiverse according to which our own universe is “common.” (Otherwise they’d have nothing left to explain. Not the kind of research area you want to work in.) That, however, is but a modified probabilistic version of the original conundrum: trying to explain why our theories have the features they have. The question why our universe is special is replaced by the question why our universe is especially unspecial. Same emperor, different clothes. The logical consequence of the multiversial way is a theory like Lee Smolin’s Cosmological Natural Selection (see also). It might take string theorists some more decades to notice though. (And then what? It’s going to be highly entertaining. Unless of course the main proponents are dead by then.)

Now I’m wondering: what would happen if you gave Max Tegmark a dose of levodopa?

It would be interesting if a version of Brugger’s test were available online and we could test for a correlation between Type I/II errors and sympathy for the multiverse (rather than a belief in ESP). I would like to know how I score. While I am a clear non-believer when it comes to NewScientist articles, I do see patterns in the CMB ;-)




The title of this post is of course totally biased. I could have replaced “physics” with “science,” but I tend to think of physics first.

Conclusion: I asked whether we might be biased in ways that make us miss clues necessary for progress in physics. I conclude it is more likely we’re jumping on clues that are none.

Purpose: This post is supposed to make you think about what you think about.

Reminder: You're not supposed to comment without first having completely read this post.

216 comments:

  1. Phil:

    I’m sorry if my reply seemed evasive to you. It wasn’t intended to be. I take probability to be subjective, field quantization to be unnecessary, and zitterbewegung to be physical. From this point of view, many of the apparently paradoxical aspects of quantum theory simply cease to appear paradoxical.

    I wouldn’t call symmetries -- or the conservation laws that follow from them -- “biases” of nature. They are structurally necessary for the self-consistency of nature. One may as well say that 1+1 has a tendency to equal 2. In fact, 1+1 is always exactly equal to 2, notwithstanding the practical difficulties that make physical exemplars subject to uncertainty.

    Of course, “1”, “2”, “+”, and “=” are idealizations abstracted out from the actual observations we make. But without such abstractions, there is no way to reason about why, nor is there any way to even deduce recipes.

    I am in complete agreement with the view expressed by your Einstein quote.

  2. Andrew:

    The best answer I can give you begins by repeating what I’ve already said:

    I take probability to be subjective, field quantization to be unnecessary, and zitterbewegung to be physical. From this point of view, many of the apparently paradoxical aspects of quantum theory simply cease to appear paradoxical.

    Reality may look strange when you can’t take the speed of light to be infinite, or neglect the quantum of action, or treat your densities as delta functions, and even stranger in the face of all three.

    But it definitely makes sense.

    And we will understand it one day... unless our will to do so is killed by anti-rational thought viruses.

Ain Soph, I was teasing about "A is A" being a saying of Ayn Rand followers. Also, maybe you don't get the difference between something being false versus being true but not revealing. So if A = A, what is A equal to? It doesn't have to mean the universe is a certain way; that was clearly my point. And if the initial conditions can lead to a whole set of outcomes, that is clearly not classical determinism; it is not Laplace. As for me reveling in weirdness: even if I do, it doesn't weaken the argument, which many astute observers note, about the genuine intelligibility problem. Actually, many of us find a determined, clockwork-style universe to be a suffocating mausoleum and are glad to have some kind of real freedom, whatever it may really be. (I guess Rand must have thought it better too, FWIW.)

    I brought up some specific issues and haven't gotten good answers from anyone. Re zitterbewegung: it was recently observed, see http://www.azom.com/News.asp?NewsID=20262. Heh, Ca atoms - the same "culprits" as for entanglement.
    But I don't see salvation in any of this from a strange but freer universe.

  4. Hi Ain Soph,

“I take probability to be subjective, field quantization to be unnecessary, and zitterbewegung to be physical”

Well, I simply can’t imagine what such convictions amount to from an ontological perspective, and thus we end where we started in this discussion. If you agree with Einstein’s statement, it demands that QM be objectified, so as to clearly explain where the subjectivism is totally unnecessary, being a consequence of its incompleteness, by demonstrating what’s objectively required to have it be complete. So jitterbugging particles or schizophrenic waves haven’t been able to encompass, as to have explained, all that’s observed, nor should they be expected to. From my perspective the two important aspects to be addressed are nature’s bedrock qualities of holism and discreteness, with things like probability and collapse scenarios as being simply artefacts, born of our ignorance as to include them as true aspects of reality to begin with.

    “The Bohr-Heisenberg tranquilizing philosophy forms a gentle pillow from which the true believer cannot easily be roused."

    -Albert Einstein

    Best,

    Phil

  5. Phil, Ain Soph, anyone: There's a diagram up now at my blog, just click on Neil B and zip right on over!

Neil, your link to Orzel's website returned "Not Found" when I clicked on it.

  7. Steve, I'll check again but meanwhile pls. check my own blog, first post from my name link. (You can also find it, I'm proud to say, by Googling [actual quotes] "quantum measurement paradox" - the post or similar is in the top five.) Indeed, I think the argumentative squabbles about DI are just so last year given I have now proposed an actual experiment to recover the presumptively lost information. It's a game-changer, and its importance does not depend on interpretations per se. The phase mixing was supposed to destroy the info, aside from how you viewed the collapse/MWI/etc.

  8. Phil:

“two important aspects to be addressed are nature’s bedrock qualities of holism and discreteness, with things like probability and collapse scenarios as being simply artefacts, born of our ignorance as to include them as true aspects of reality to begin with.”

    Very well put!

    The universe is only one place, and there is only one thing in there.

    But to understand it, we must treat it as made of discrete entities which stand in certain relationships to each other and evolve in time. To describe the being and becoming of this arrangement, we must name the entities and the relationships, and to do that we must introduce coordinates, and taxonomies, and so on.

    All of this is fiction. But if it approximates the reality closely enough, it is a useful fiction.

    The point is that it is impossible to describe nature without introducing artefacts which arise out of the very act of describing. And it is then very difficult to tell the artefactual properties of the description from the real properties of nature. This is deeper than a mere issue of numerical resolution; it already manifests at the symbolic level.

    Quantum theory, properly understood, is the study of how to minimize these inherent distortions, and maximize the descriptive fidelity of the formalism in spite of them.

  9. Neil:

    “that is clearly not classical determinism, it is not Laplace”

    No, it certainly isn’t. When did I ever say anything about Laplace?

    Recognizing that there is no difference between randomness and unpredictability means you can have both determinism and free will.

    Why? Because there aren’t enough bits in the universe to represent the exact position of even one particle. So Laplace’s infinite intelligence with perfect knowledge of initial conditions cannot exist. And all the intelligences which can exist operate in a universe in which they can form rational expectations regarding the outcome of exercising their free will.

    The significance of A is A is that A must be consistent with itself. As I said, reality cannot be inconsistent with itself. Therefore, if our formalism is paradoxical, it is our formalism which is at fault, and not reality. See my reply to Phil for additional thoughts on this.

And thank you for pointing out the work of Gerritsma et al. That paper is awesome!

  10. Hi Ain Soph,

“The universe is only one place, and there is only one thing in there.”

I don’t find calling the universe one place with one thing in it to be an explanation of holism in respect to discreteness. That is, any structure that is found to be a reactant one is necessarily so as the result of a reaction. That’s to say that something holistic, as to be homogeneous, cannot react with itself; rather, it requires an antagonist for such to be observed. To think otherwise I find to be the main ingredient in the making of those quantum pillows of which Einstein complained. That is, if one is to accept as minimal that reality is causal, then it must at least be allowed to consist of two individually fundamental entities. To say otherwise would be to argue that entropy can manifest without a place to expand in, while also having no agents to mark its progress.

That’s also what I find so curious about many of the emergent considerations, as in my way of thinking, for something to emerge one first requires a place in which it is to emerge and be found. This is where, despite their differences, those like Penrose and Lisa Randall find common ground, with Penrose having energy and space as both entities of mutual potential in respect to each other, while Randall pictures this as the potential found between two distinct branes. This is simply an admission of the old adage that it takes at least two to tango. So I reiterate: the bias I find to be the most unjustified one in physics is not in understanding the role of its constituents, but rather how many must be considered real to even be able to begin such considerations.

I suggest that with this we end the whole discussion here, for if you do have two distinct entities, I haven’t been able to find them; perhaps that’s my failing, and if not, I find it to be yours. This, of course, is the very essence of bias.

    Best,

    Phil

Re Laplace: I don't accept that it matters whether any real computer could even in principle compute what's going to happen in the future. As a Platonist about conceptuality, but accepting the "real universe" as whatever it may be, I think the matter of principle is the truth of things. If you're a realist and think there is a true state A at a given time, and the world is deterministic, then it (as a mathematical model in the ideal) will evolve to a unique state A' in the future. That is what the "Platonic computer" could do; that is how math works, in the same sense that stating the rules of a game makes for "all possible chess games" etc. It doesn't matter what any real piece of stuff or bits in the universe etc. can actually represent or accomplish. If you think the latter, you aren't a realist in principle. I'm not saying I am (I think there is an ultimately unintelligible - sorry - structure behind appearances, and we just do the best we can, with no final revelation or comprehensible model being attainable). I'm just saying, watch for self-consistency.

I think it's funny that positivists (whether anyone here is or not - I make segue critiques per what reminds me of what, so previous commenters don't need to feel necessarily referenced in my general broadsides) worry so much about whether we could find out in principle. If being "meaningful" means being able to find out, then what we actually in practice can find out should matter most. IOW, instead of the middle ground of the best possible computer physics allows being the framing point, I say it's the other way around: the ideal Platonic results in principle matter, and the actual tools we have to use matter, but not so much that middle ground.

  12. Phil:

Why you count space and energy as two things, yet claim that one place with one thing in it counts as only one, is beyond me.

    But if this is where you want to let it rest, then this is where we will let it rest.

    Thanks for the discussion. It has been interesting.

  13. Neil:

    The solution to a differential equation exists in a very different way than a physical system which is governed by that differential equation. To say that one is more real than the other -- that the noumenon is more or less real than the phenomenon -- is completely unjustified. You may as well say that thought is more or less real than action. The truth is, you won’t get very far by neglecting either.

    It’s telling that you put the phrase, “real universe,” in quotes. Meaning, it isn’t that, but we’re going to call it that for lack of a better phrase. Well, don’t kid yourself: the real universe is the real universe. No quotes needed.

    It does indeed matter “what any real piece of stuff or bits in the universe etc. can actually represent or accomplish.” In the last analysis, that is all that really matters. The abstract logical necessity of a correctly structured syllogism does indeed exist in a Platonic noumenal way, but it is completely inconsequential unless its premises actually apply to some real phenomena in the real world.

    You can say that fundamental particles are just approximate realizations of certain truly fundamental symmetries. Or you can say that fundamental symmetries are just idealized abstractions of certain truly fundamental particles. The truth is, neither could exist without the other. Without phenomena, there would be nothing to abstract, and without noumena, there would be nothing to realize.

    There are, as I’ve pointed out, irreducible distortions inherent in the very act of comprehending. And to that extent, you could say that reality is ultimately unintelligible. But I don’t see that there are any other fundamental limitations on the ultimate intelligibility of reality.

    Incidentally, I have no patience for the pedantic literal-mindedness of positivism either. The resulting impoverishment of the meaning of meaning deprives people of the ability to think. It is part of the corrosive anti-rationality that has reduced philosophy and epistemology to pathetic shadows of what they once were.

  14. Hi Ain Soph,

If you consider the place and the thing as two distinctly individual, non-emergent (fundamental) elements of reality, then I have no problem with that. However, if they are to be considered one and the same, as simply different characteristics rather than qualities, I for myself can find no reason in your position as it relates to a universe that is evidently a structure born as the result of a reaction.

    Best,

    Phil

Well, getting back to decoherence: those of you who believe in PW etc. are at least implicitly acknowledging that the DI is inadequate. There's no point in trying to solve the collapse/localization problem (Penrose's "R") with guided particles if decoherence of unguided, original QM particles/waves would solve the problem. While I'm waiting to write up a good new put-down of DI in comments at my blog (and maybe a new post too), then Ain Soph, Phil, and I should agree that diddling with the phases between waves (which is what WFs are, pace Orzel in his "explanation" for the dogs) - and, especially bad, pretending to explain what happens in a given instance by appeal to what happens in other trials - is not going to plop a "particle" down at here versus there; unless it either was a particle all along, or unless some weird R comes along to intervene. (Or, unless we can't model the universe anyway.)

    I don't know how Steven takes PW etc. but he seems OK with decoherence. I want to better explain why it is literally a fallacious argument (but I already at least named the main fallacy here and/or in comments at my joint.) For convenience:
    http://tyrannogenius.blogspot.com/2009/12/decoherence-interpretation-falsified.html


