
Saturday, December 14, 2019

How Scientists Can Avoid Cognitive Bias

Today I want to talk about a topic that is much, much more important than anything I have previously talked about. And that’s how cognitive biases prevent science from working properly.


Cognitive biases have received some attention in recent years, thanks to books like “Thinking, Fast and Slow,” “You Are Not So Smart,” or “Blindspot.” Unfortunately, this knowledge has not been put into action in scientific research. Scientists do correct for biases in the statistical analysis of data and they do correct for biases in their measurement devices, but they still do not correct for biases in the most important apparatus they use: their own brain.

Before I tell you what problems this creates, a brief reminder of what a cognitive bias is. A cognitive bias is a thinking shortcut which the human brain uses to make faster decisions.

Cognitive biases work much like optical illusions. Take this example of an optical illusion. If your brain works normally, then the square labelled A looks much darker than the square labelled B.

[Example of optical illusion. Image: Wikipedia]
But if you compare the actual color of the pixels, you see that these squares have exactly the same color.
[The same illusion, demonstrating that the two squares have the same color. Image: Wikipedia]
The reason that we intuitively misjudge the color of these squares is that the image suggests it is really showing a three-dimensional scene where part of the floor is covered by a shadow. Your brain factors in the shadow and calculates back to the original color, correctly telling you that the actual color of square B must have been lighter than that of square A.

So, if someone asked you to judge the color in a natural scene, your answer would be correct. But if your task was to evaluate the color of pixels on the screen, you would give a wrong answer – unless you know of your bias and therefore do not rely on your intuition.
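If you don’t want to trust your eyes, you can check this yourself by reading out the pixel values. Here is a minimal sketch in Python, assuming you have a local copy of the illusion image; the file name and the sample coordinates are placeholders that depend on which version of the image you use:

    # Minimal sketch: compare the raw pixel colors of the two squares.
    # The file name and coordinates are placeholders; pick any point
    # inside square A and any point inside square B of your copy.
    from PIL import Image

    img = Image.open("checker_shadow_illusion.png").convert("RGB")

    color_a = img.getpixel((115, 95))   # a point inside square A
    color_b = img.getpixel((170, 200))  # a point inside square B

    print("Square A:", color_a)
    print("Square B:", color_b)
    print("Same color:", color_a == color_b)  # True if both points lie inside the squares

If the two points really lie inside the squares, the two RGB triples come out identical, no matter how different the squares look to you.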

Cognitive biases work the same way and can be prevented the same way: by not relying on intuition. Cognitive biases are corrections that your brain applies to input to make your life easier. We all have them, and in every-day life, they are usually beneficial.

Maybe the best-known cognitive bias is attentional bias. It means that the more often you hear about something, the more important you think it is. This normally makes a lot of sense. Say, if many people you meet are talking about the flu, chances are the flu’s making the rounds and you are well-advised to pay attention to what they’re saying and get a flu shot.

But attentional bias can draw your attention to false or irrelevant information, for example if the prevalence of a message is artificially amplified by social media, causing you to misjudge its relevance for your own life. A case where this frequently happens is terrorism. It receives a lot of media coverage and has people hugely worried, but if you look at the numbers, for most of us terrorism is very unlikely to directly affect our lives.

And this attentional bias also affects scientific judgement. If a research topic receives a lot of media coverage, or scientists hear a lot about it from their colleagues, those researchers who do not correct for attentional bias are likely to overrate the scientific relevance of the topic.

There are many other biases that affect scientific research. Take for example loss aversion. This is more commonly known as “throwing good money after bad”. It means that if we have invested time or money into something, we are reluctant to let go of it and continue to invest in it even if it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising.

But the most problematic cognitive bias in science is social reinforcement, also known as group think. This is what happens in almost-closed, like-minded communities, where people reassure each other that they are doing the right thing. They will develop a common narrative that is overly optimistic about their own research, and they will dismiss opinions from people outside their own community. Group think makes it basically impossible for researchers to identify their own mistakes and therefore stands in the way of the self-correction that is so essential for science.

A bias closely linked to social reinforcement is the shared information bias. This bias has the consequence that we are more likely to pay attention to information that is shared by many people we know than to information held by only a few people. You can see right away why this is problematic for science: how many people know of a certain fact tells you nothing about whether that fact is correct. Whether some information is widely shared should not be a factor in evaluating its correctness.

Now, there are lots of studies showing that we all have these cognitive biases, and that intelligence does not make them any less likely. It should be obvious, then, that we should organize scientific research so that scientists can avoid, or at least alleviate, their biases. Unfortunately, the way that research is currently organized has exactly the opposite effect: It makes cognitive biases worse.

For example, it is presently very difficult for a scientist to change their research topic, because getting a research grant requires that you document expertise. Likewise, no one will hire you to work on a topic you do not already have experience with.

Superficially this seems like a good strategy for investing money in science, because you reward people for bringing expertise. But if you think about the long-term consequences, it is a bad investment strategy. Because now, not only do researchers face a psychological hurdle to leaving behind a topic they have invested time in, leaving would also cause them financial trouble. As a consequence, researchers are basically forced to continue to claim that their research direction is promising and to continue working on topics that lead nowhere.

Another problem with the current organization of research is that it rewards scientists for exaggerating how exciting their research is and for working on popular topics, which makes social reinforcement worse and adds to the shared information bias.

I know this all sounds very negative, but there is good news too: Once you are aware that these cognitive biases exist and you know the problems that they can cause, it is easy to think of ways to work against them.

For example, researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing. Also, researchers should always list the shortcomings of their research topics, in lectures and papers, so that the shortcomings stay in the collective consciousness. Similarly, conferences should always have speakers from competing programs, and scientists should be encouraged to offer criticism of their community and not be shunned for it. These are all little improvements that every scientist can make individually, and once you start thinking about it, it’s not hard to come up with further ideas.

And always keep in mind: Cognitive biases, like optical illusions, are a sign of a normally functioning brain. We all have them; they are nothing to be ashamed of, but they do affect our objective evaluation of reality.

The reason this is so, so important to me, is that science drives innovation and if science does not work properly, progress in our societies will slow down. But cognitive bias in science is a problem we can solve, and that we should solve. Now you know how.

47 comments:

  1. Knowing some basic psychology is certainly useful, and interesting. But hard sciences like physics have an unbiased referee: nature. For example, theorists spent decades writing about SUSY and extra dimensions at the weak scale, until this stopped after enough negative results. In other fields biases are constructed and enforced by politics and media. Some soft "studies" seem to me now lost in their biases.

    Replies
    1. Alessandro,

      What you say is correct to some extent, but note that weeding out some wrong ideas doesn't necessarily lead us to a correct idea.

    2. There's also the problem of research - notably in physics - where the referee is mute because no direct evidence is available, e.g. in many worlds or other universes. In these cases, the physics can resemble competing art movements.

  2. We can never understand all the different perspectives people have. This is why democracy, multiple levels of government, and nation states are so important. No amount of math and statistics can improve on this arrangement.

  3. This comment has been removed by the author.

  4. Sabine,

    I have to take issue with this post, whose optimism is, I am afraid, not warranted by circumstance. Researchers have uncovered not only a horde of irrational behaviors (were we to believe that profit or resource maximization is rational) but also the extreme difficulty of even recognizing them in one's own or one's group's actions. The ones that are somewhat more recognizable are algorithmic behaviors usually rooted in memory functions, for example the availability bias with which you lead your note. Those which could be termed "social" biases are almost ineradicable, implicit, and have effects of which we are largely unaware.

    So much for glumness. Your suggestions are good; good luck with instituting them. In particular, as much "scientific" work now is group work, I would suggest that your next blog on this topic should be about "groupthink," and how to limit its influence.

  5. 'researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing.'

    The problem here might be that some topics require many years to sufficiently master, in order to make a meaningful contribution.

    Replies
    1. Martien,

      Yes, indeed, which is why we need organizational structures to support people if they want to leave a field that is no longer promising. Tenure should enable scientists to do that, but the majority of researchers today don't have tenure, and even those who have tenure need to bring in research grants continuously, so currently we are operating by a "more of the same" strategy.

  6. Applause! Well said! Reminds me of a recent article in Quanta which mentioned that a leading mathematician regularly took time from his current work to explore different problems.

  7. Sabine,

    I think your central point can be summed up in the famous quote from Feynman:

    >”The first principle is that you must not fool yourself -- and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists.”

    Or as the Bard of Avon put it more poetically:

    >“This above all: to thine own self be true, And it must follow, as the night the day, Thou canst not then be false to any man.”

    In the same talk, Feynman also made a point that, superficially, is directed at experimentalists but that I think also applies to us theorists:

    >”When I was at Cornell, I often talked to the people in the psychology department. One of the students told me she wanted to do an experiment that went something like this... I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person...”

    That basic principle -- you have to master earlier work first -- is key.

    I think way too often a grad student or post-doc, eager to get out a publication that contributes to the field, starts working on, e.g., string theory phenomenology without fully understanding the underlying, previous work in string theory. This is certainly what I saw at Stanford back around 1980 (it was then the Standard Model, not string theory, that too few physicists really grasped but did research on anyway), and I think it has only gotten worse since.

    As you have emphasized in the past, a physicist should be asking: what are the paradoxes in string theory, what are the blank spots, where are people blindly following the formalism without understanding what it means? E.g., by analogy with QFT, “first-quantized” string theory would seem to be simply a classical field theory of strings, while only “string field theory” is truly a quantum theory: does this analogy actually hold (the issue is clearer for bosonic string theory than for superstring theory)?

    The story is that when Feynman died, he left on his blackboard the epigram: “What I cannot create, I do not understand.”

    To give a concrete example, unless a theorist can work out the derivation that shows why, in 9+1 dimensions, world-sheet spinors are also spacetime spinors, he is not entitled to be working on superstring phenomenology.

    Of course, the problem with the points I am making is, as you say, the institutional and incentive structure physicists face today: Einstein, sitting in the Swiss Patent Office, did not need to worry about churning out publications to renew his grant or to pass the tenure review.

    A grad student or post-doc may not feel they have even three or four years to get up to speed on superstring theory, much less the ten years it took Einstein to work out General Relativity.

    Fundamentally, modern physics is really a bit similar to the centrally-managed economy of the old Soviet Union: Szilard warned against the dangers of making science depend on grants from centralized committees of established scientists in his famous satirical short story, “The Mark Gable Foundation.” I think before WW I, there was more decentralization in physics, a bit more like a free market: to be sure, funding, tenure, and all the rest may still have been arbitrary and autocratic at the local level, but at least different localities were arbitrary in different ways.

    Perestroika in the old Soviet Union was doomed to failure without a radical revision of the institutional structure of the Soviet economy: Gosplan needed to go.

    I am afraid that we need an equally radical change in the institutional structure of physics. And radical change is hard to bring about.

    All the best,

    Dave

    Replies
    1. Thanks for the insightful essay, Dave. I have noticed that students rarely go back to original source documents, and neither do their professors. One thing that got me interested in string theory was perusing the early journal literature (especially regarding the Nambu-Goto action and Born-Infeld). Following the trail of journal literature from its beginnings to today, it is easy to see why string theory continues to attract adherents. Also, beneath that quote from the first line on Feynman's last blackboard is another line: "Know how to solve every problem that has been solved." That is another thing that professors rarely transmit to their students (too often, professors merely grab the solutions manual to a textbook and literally copy the solution to the blackboard); they do not actually demonstrate to students how to arrive at a solution to a new problem, ab initio. Finally, I remark that part of what I perceive as 'cognitive bias' in science stems from over-reliance on computers (at both the educational and research levels). That is, an "if a computer simulation shows it, then it must exist" mentality.


  8. I discovered this blog only three articles ago and each article is great; luckily for me, I can still read the previous articles.

  9. If “cognitive biases prevent science from working properly,” then how did science progress to its current state? The standard model of particle physics, astronomy, cosmology, solid state, semiconductors, DNA, medicine. Humankind seems to have made huge scientific progress despite cognitive biases. We tend to forget all the research that failed, and maybe some of these failures were related to cognitive bias, but in the grand scheme of things, it seems that the scientific process works.

    There is certainly room for improvement, but I would not like to see any disruptive changes in a system with such an amazing track record.

    Replies
    1. Udi,

      The question you should ask is if scientists had not been hampered by cognitive bias, how much further along could we be. Saying that the car is still moving isn't the same as saying we'll make it to Rome.

    2. Udi,

      The problem is that we are living off theoretical ideas created before 1980 in fundamental physics.

      You wrote:
      >If “cognitive biases prevent science from working properly,” then how did science progress to its current state? The standard model of particle physics, astronomy, cosmology, solid state, semiconductors, DNA, medicine.

      Again: all of the physics you describe is based on theoretical ideas developed well over forty years ago.

      I lived through the "November Revolution" -- the discovery of the J/ψ particle that solidified the quark model. I learned about it from Feynman himself: I was taking QM from him at the time. My thesis work was on the τ lepton, discovered about the time I entered grad school. Non-abelian gauge theories were the new hot stuff: Weinberg-Salam had not been fully verified and QCD was just being worked out.

      Although I became a theorist, the summer before starting grad school I worked on a prototype drift chamber for the Mark II detector at SLAC: I created the algorithm used to calculate E fields in the chamber.

      It was an exciting time, and there was a close coupling between theory and experiment.

      It's worth reading the memoir by Marty Perl, the discoverer of the τ lepton, with whom I worked.

      I think Marty's memoir gives an excellent description of the close coupling between theory and experiment back then, which my own grad school experience illustrated.

      One of Marty's comments is apt (and, like Marty himself, truly understated):
      >"This brings me to the question I raised in the Introduction: is there more and broader speculation these days in particle physics theory than forty years ago? Judging by the various ideas of forty years ago about possible types of leptons, we were rather timid about speculations. There was a fear of being thought unsound. There was reluctance to stray too far from what was known. Today the only limit to theoretical speculations about particle physics is that the mathematics be correct and that there be no obvious conflict with measured properties of particles and reactions. One example is string theory with all its different forms and extensions."

      That is, of course, way too kind: string theory as it now exists may or may not be mathematically consistent, and it certainly does not agree with experiment!

      Can you see why it seems to a lot of us with real experience in the field -- ranging from giants no longer with us such as Marty Perl, Burt Richter, and Dick Feynman to old timers like me and Peter Woit to younger physicists such as Sabine -- that something is wrong?

      Do you see why we think that theory should somehow connect to experiment, as the theory built into the Standard Model did so spectacularly?

      I am not disdainful of string theory: I knew some of the giants of string theory back in the old days -- Lenny Susskind was on my thesis committee, I took an overview of particle physics class from John Schwarz as an undergrad, and Polchinski was a college friend.

      I find string theory intriguing, and, indeed, I aspire to find simpler, conceptually clearer explanations of some of the basic concepts of string theory: in any field of science, someone has to come along and simplify the ideas historically developed through a tortuous and confusing struggle.

      And, yet... naturalness, the landscape, the multiverse, the strong anthropic principle... how can any of this ever connect with experiment? Is this really physics? Can you see why so many of us wonder if physics has taken a wrong turn?

      Please: read Marty's memoir.

      All the best,

      Dave

    3. The Forrest Gump of physics.

    4. Greg Feild wrote (I think referring to me):
      >The Forrest Gump of physics.

      Well, I do sometimes feel like Forrest Gump: I once had a chance to chat with the now (in)famous billionaire and financier of right-wing causes, Charles Koch: I fear that I dismissed him as boring. I've performed on stage with the actor John Goodman (he went to my high school). I met Steve Hawking. I shared an office with the guy who invented TTL logic, and I worked in a department with the inventor of the Lange coupler. I heard one of Luis Alvarez's earliest talks on the evidence he and his son had on the impact that killed the dinosaurs (best scientific talk I've ever heard -- gripping and utterly convincing). I've discussed ethical philosophy with the sci-fi author Robert Heinlein. My daughter's dance teacher performed with Sinatra and Martin and knew Sammy Davis.

      On the other hand, I'm not sure this is really that unusual: there are nowadays so many celebrities and, given "six degrees of separation," so many links to people who know famous people that I suspect lots of people know moderately famous people or people who know famous people.

      I think really the only unusual thing is that I did know lots of famous physicists in the late twentieth century, but, if you got a bachelor's at Caltech and a Ph.D. at Stanford in physics in the late twentieth century, that was pretty much inevitable.

      So, alas, I am no doubt more boring than you humorously suggest.

      Dave

    5. @Sabine
      "The question you should ask is if scientists had not been hampered by cognitive bias, how much further along could we be".
      No, this is not the right question. Past scientific progress is a fact; how much better we could have done is a conjecture. They cannot be given the same importance. Example: I am in good health. Someone comes and says: you could have been in even better health if you had taken this food supplement, or if you had done more sport. The first hypothesis is likely to be wrong, the second likely to be true.
      We only gain some insight by looking at the actual effect of these cognitive biases. Some may have been important, others not. Generic statements about "biases" are not very helpful. Also, of course, we need some PROOF that certain biases indeed play a deleterious role in concrete situations. This is exactly what Kahneman et al. did in their work. Just repeating "cognitive bias!" is perhaps a good start, but not very compelling without documented examples in the scientific process.

    6. @PhysicistDave
      "all of the physics you describe is based on theoretical ideas developed well over forty years ago."
      This is not the point that Udi is trying to make, I think.
      The point is, cognitive biases have always existed, even when physics was advancing at an incredibly fast pace. So why didn't they hamper progress before the 1980s, but they did so afterwards?

    7. opamanfred,

      Yes, you are right, so let me say this more clearly albeit less politely. The commenter stated that if cognitive biases were a problem then science could not have progressed to the present stage, which is stupid because no one ever said that cognitive biases entirely prevent scientific progress. It is even more stupid as I address this "objection" in literally each of my public lectures at the very beginning; it is also explained in my book, and I have addressed it many times on this blog. I hope the message got across now.

    8. opamanfred wrote to me:

      >This is not the point that Udi is trying to make, I think.

      >The point is, cognitive biases have always existed, even when physics was advancing at an incredibly fast pace. So why didn't they hamper progress before the 1980s, but they did so afterwards?

      Sabine and I have separately addressed that question at some length. There are lots of reasons:

      The institutional structure of science changed very dramatically after WW II, especially in physics. Physics was the pioneer in "Big Science," partly because of the role of physicists in building the Bomb. As I have said, Szilard anticipated the resulting problems early on in his satire, "The Mark Gable Foundation." The bureaucratization of physics has tended to amplify the natural human tendency towards group-think.

      Sabine has famously argued that a major problem is that in the last several decades physicists got "lost in math." Again, there are a variety of reasons. The increasing cost of accelerators made it harder to rapidly advance energy frontiers experimentally. The very success of the Standard Model made it hard to progress by the usual method of looking for anomalies where existing theories fail: the Standard Model just does not fail very often. Some new theories, notably string theory, were both so appealing and so mathematically intractable that there was a willingness to overlook their failure to connect with experiment. When theories are judged by mathematical "beauty" rather than rigorous experimental tests, group-think becomes more powerful.

      And Sabine and I both think there are real problems in existing physics -- notably the foundations of quantum mechanics -- that physicists have unwisely ignored: I'm happy to say that physicists far more famous than she or I, most notably Steve Weinberg, have come to the same conclusion.

      Yes, physicists, like all human beings, have always been subject to cognitive biases. But for the reasons I have just listed and no doubt some I have not mentioned, this has become a bigger problem in the last forty years.

      But to see that this is a problem, it is necessary to look at what has happened in the last forty years in fundamental theoretical physics and see that things have not gone well.

      You cannot diagnose the causes of a disease if you do not acknowledge you are sick.

      Seeing the objective fact that something has really gone wrong in theoretical physics and understanding what has caused the wrong turn are two problems that are intrinsically interconnected.

      Dave

  10. There is something worse than cognitive bias, and that is that many researchers who get funding for very complex issues, such as cancer therapies, falsify results and expectations in order to keep their funding and keep appearing in the media. After a while, the pharmaceutical companies fail to reproduce the results in most cases.

  11. It is often said we humans evolved a brain to solve problems. Instead it might be that problem solving has been more an indirect consequence of hominid cerebral evolution. I would say it is more likely the brain evolved to develop language and to communicate information in narrative form. Since this evolution meant expanded mental capabilities with abstractions, say a word standing in for an object or person, it may have had the indirect consequence of permitting humans to think in more abstract ways.

    I would say the main evolutionary selection process was for the ability to communicate information about a local environment in stories or narratives that projected human thoughts, emotions and perceptions onto the world. In this way the workings of the natural world could be understood in anthropomorphic terms. Most subsistence-based tribal cultures have magical nature religions of spirits in the forest or world, and often have totems where a person assumes the character of a spirit. If you don't think we modern people do the same, consider the names of sports teams ---- pure totemic forms of projection. The various stories about these spirits communicate information about the local environment from generation to generation. Homo sapiens is almost a misfit animal in a way, for we are not fast runners, we are not good hunters without tools, and yet we are omnivorous. It is our ability to communicate things that gives us the edge.

    As a result, in more modern cultures these spirits have been merged into gods and then later into God, and many people have a difficult time imagining a world without humans, whether in the past or the future. While the mental art of projection is important -- after all, Einstein did much the same when he imagined himself on a frame moving with an EM field -- it also sets up the possibility of big biases. All evidence points to us humans being a rather small aspect of the universe, on a small planet around a fairly average star in an average galaxy with nearly a trillion stars, one galaxy out of a trillion galaxies. I dare say though that the majority of people, certainly in the US and probably much of the developing world, see humanity as the complete center of existence. The sorts of bias Sabine points out here in science are, I think, a form of this, where a research topic or area becomes all-important and the crowning pinnacle of one's life and career. Our brains are, I think, frankly evolved to permit this to happen.

    Replies
    1. Lawrence,
      Need is the mother of behavior for all living beings, including humans, who also have to satisfy their subjective and cognitive needs; and like the rest of living beings, the human being does so by extrapolating its own properties onto the rest of nature. Of course, the dog believes the ball is alive until it discovers it is a toy. The same happens to us: little by little we discover that what we thought had consciousness really does not. That is why throughout history "the lion and the mammoth" had consciousness; after we massacred them, we realized they did not. Then the rain and the rivers, then the sun and the stars, even social evolution; now we are at the general frames of all physical existence and of our own mathematical-physical creations. In short, we are a source of subjectivity that we must control.

    2. Animals have consciousness. My love of dogs and horses has shown me how they have emotional attributes similar to ours. This is especially the case with dogs. Horses are not as smart as dogs, but they have a bit more going on in their heads than most hoofstock animals. Dogs have social and emotional qualities remarkably similar to ours. One of my dogs is a sort of "dog diplomat" who manages to end conflicts between my dogs and neighboring dogs. It is fascinating to watch, actually. Dogs are at best D- students when it comes to enumerating skills or spatial reasoning; cats are better at these things.
      However, as for projecting themselves mentally, that is, I think, largely something we humans are able to do. If dogs had an idea of a god or gods, these would bark and chase after squirrels. Projecting oneself into another person (think of fictional narratives), or placing oneself in the future, or imagining being someone else, is a higher-order "theory of mind" property. Some highly intelligent animals might have some small element of this, but it is nowhere near as expansive as in humans. The reason we probably have such an enormous capability here is that we have language. Using words to represent things is a profound abstracting skill. Fouts et al found that chimpanzees have some limited ability here, which is probably far more than what dogs are able to do. Chimps are our closest evolutionary relatives still in existence.

      My dogs have taken a toll on me. My right hand was injured by them jerking the leashes or leads two years ago. Last month I had a horse step on my right foot and big toe, where recently the toenail came off. But I still love them.

    3. Lawrence,

      Re “this evolution meant expanded mental capabilities with abstractions, say a word standing in for an object or person, this may have then had an indirect consequence of permitting humans to think more in abstract ways.”:

      We human beings are always deeply immersed in our symbolic representations of the world (e.g. words, sentences, equations, binary digits). Whether speaking or writing, human beings communicate by using symbols and codes. This has become so automatic that people don’t even notice that this is what they are doing all the time.

      So how do human beings mentally disentangle something that is a representation of the world from something that is not a representation? The equations of physics (i.e. something that is a representation) seem to indicate that relationships exist in the micro-world (i.e. something that is not a representation); whether there is a one to one correspondence with the representation is not the point. But if human beings like yourself ever needed to use logical symbols (e.g. AND or OR) when analysing aspects of the micro-world, would this mean that there must exist a corresponding logical aspect in the micro-world, something that is not a representation (whether there is a one to one correspondence with the symbolic representation is not the point)?

      Does our need to use symbols to represent everything, including the micro-world, create a form of cognitive bias?

    4. I think our brain works with emotional equivalents of words, symbols, axioms, ideas and systems of ideas; it is the only way it can integrate them with, and use them to modify, the emotional representation of the outside world that comes from our senses. Some systems of ideas are integrated so deeply into that external representation that they seem part of it. For example: it is the Earth that moves, not the Sun; and once the theory of relativity is understood, it is almost impossible to think in the previous way. It is really a very sophisticated emotional process. If this is so, then no animal has consciousness, only us. The example of the dog only shows that it extrapolates its own properties onto everything that moves.

    5. The question is whether the map can become the territory. The Argentine writer Jorge Luis Borges wrote one of his quirky short stories about a map that covered the entire territory it depicted. It seems implausible there can ever be a one-to-one correlation. We also have Gödel's theorem, where a number code for a proposition is used as the free variable, and these are then enumerated in a Cantor diagonal list. The result is that the system can't "capture itself." Our use of words and symbols to represent the world is in a sense similar to a Gödel numbering.

      This is, though, very important for the sort of conscious experience we have. Babies and young toddlers clearly remember things; they clearly are able to know about the world. Then, from my experience, their babbling becomes more coherent and somewhat understandable to a parent. Then they start to pick up speaking, and the odd thing is they lose all memory of the time prior to speaking. It is as if there is a different level of conscious awareness the child enters into. It might be called consciousness of consciousness.

    6. Lawrence,

      Despite our possible bias about these matters, if we need to analyse and represent what is happening in the micro-world using AND or OR symbols, does this mean that the micro-world must somehow be employing logic?

    7. In the thread on whether physical laws are inevitable I discuss quantum logic. In some ways quantum mechanics is a sort of logic, and it might be the lowest common denominator of any physics we can know.

    8. It’s not just quantum mechanics. Judging by our symbolic representations of the micro-world, I would think that a logic (i.e. what is represented by algorithmic steps) necessarily somehow underlies relationships (i.e. what is represented by equations, variables and numbers). To me, one obvious example of an algorithmic step is the delta symbol: do you agree that the delta symbol represents an algorithmic step in the middle of an equation?

    9. So Lawrence, what I’d say is this:

      1. The delta symbol in an equation represents an algorithmic/logical step.
      2. Apart from the delta symbol, relationships (like law-of-nature relationships) that are represented by equations can’t be broken down into algorithmic/logical steps. Equations don’t logically imply that they need to be solved: it’s only if a human being decides that they want to “solve” a set of equations that the human being needs to employ logical steps.
      3. An algorithmic/logical step is required to introduce equations to, or remove equations from, a system. In that sense, algorithmic/logical steps come before the relationships that are represented by equations.

      So, I’d say that the symbols that we need to use to represent the world create a cognitive bias about the nature of the world because we are looking at symbols instead of the real thing. But on the other hand, the symbols we use to represent the world can demonstrate that the underlying structure of the world must comprise relationships and logical steps. Physics is blind to the logical steps.

  12. There is an interesting parallel in the world of back-country (not inside a resort) skiing, where avalanche safety is a key concern. There was a famous accident where a group of world-class avalanche experts attended a ski safety seminar and the next day went skiing together. They were caught in an avalanche and several died.
    The accident reconstruction analysis determined that cognitive bias (mainly group think) was a major contributor to the event.
    Cognitive bias is now part of the curriculum in snow safety courses.


  13. When things do not progress, you have to rethink them from the beginning and seek to eliminate the inconsistencies and contradictions. But if the theoretical physicists in charge of doing this work feel so much aversion towards any type of subjectivity that, to them, philosophical thinking is subjective garbage, then forget it; there will be 40 more years without advancing. And don't expect help from the philosophers; they are still traumatized by the double-slit experiment and engaged in the social sciences. It is you who know the "animal" in all its peculiarities and are the closest to having any intuition about all this.

  14. "researchers should be encouraged to change topics rather than basically being forced to continue what they’re already doing"

    I remember that, in an old episode of The Big Bang Theory, Sheldon (a theoretical physicist) had tried to switch away from String Theory (because he did not see it as a promising line of research anymore) but University management had refused!

    I really hope such a thing does NOT happen in the real world because that is definitely NOT how OBJECTIVE SCIENTIFIC RESEARCH would/should/must work!

    Replies
    1. FB36: I hate to break it to you, but The Big Bang Theory episode in which Sheldon was forced to stay with string theory was likely inspired by dozens if not hundreds of real incidents.

      There were a couple of decades when if you refused to write papers on string theory, the US NSF, which is one of the largest funders of purely theoretical research in the world, simply would not fund you. That in turn meant no one else would fund you, out of fear of losing their institutional funding.

      I worked only on the US DoD side of research funding at that time, and since we did not fund pure physics theory, we thankfully could just ignore that unfortunate subset of NSF. We did work with NSF a fair bit on topics such as AI, robotics, and materials, sometimes with hilarious mutual stereotypes in play. The oddball case of string theory was the result of PR-savvy string theorists gaining control of the NSF internal review process, and keeping it for decades. One curious impact of this unfortunate situation was the rise of the "quants", theoretical physicists who gave up and decided to make money in private industry by applying their mathematical skills to stock market prediction. Naive misuse of one quant economic model, which was actually quite good when used wisely (it was not), contributed powerfully to the near collapse of the world economy around 2007. I like to think of that collapse as the greatest impact string theory ever had or ever will have on the real world.

      Not that the SUSY hypothesis for which Sheldon supposedly got a Nobel Prize has done any better! Like string theory, SUSY is a hypothesis that prefers to ignore actual experimental outcomes in favor of continuing its faith-based elaboration of a set of axioms that "looks right" to a particular group.

      Such biases are biologically built-in, and oddly, they are there for good reasons. Shared beliefs enable superorganism-like coordinated behaviors, and such group behaviors can provide a powerful survival advantage even when the actual beliefs are far from optimal, or even negative. After all, it is the fish that swims outside the school that gets eaten first, even if the overall behavior of the school is far from optimal. Incidentally, fish schools are one example of a subset of shared beliefs in which the sharing is real-time and situation-dependent. There are many variants of such pack and swarming behaviors, e.g. for wolves, fish, birds, and bats [bats, wow!]. There are mysteries there that make this an absolutely fascinating and AI-relevant research topic, although I've not checked in on its status lately.

      As Sabine notes, you must recognize and understand such built-in biological biases if you want to make the most effective possible use of your analytical skills. For example, explicitly choosing between the collective (respect your group!) and individualistic (I'll do my own thing!) problem-solving strategies is always more effective than blindly taking the same strategy every time. Surprisingly, this need for flexibility (e.g. fast vs. slow thinking) also applies to individuals, since, for example, every level of human decision making involves a consensus of many neurons with very different recommendations on what to do, e.g. even on how to move your arm. Society of the mind indeed! One also needs to switch modes quickly based on the situation. When a twig snaps behind you at night in a dangerous forest, it's best to flip as quickly as possible from focusing on anomaly detection to focusing on how to escape a threat.

      (BTW, I never watched The Big Bang Theory, other than once noticing they were using real equations on some odd comedy on TV. I did make myself sit through the last episode, but found it stilted, a bit bizarre, and not at all funny, especially the Sheldon character. I gather this is how Sheldon supposedly would have reacted to watching his own show? Sigh. Maybe my sister-in-law is right…)

  15. At the most fundamental level, the reason we still see quantum theory as being fuzzy may be that its roots are bound in cognitive bias too.

    Replies
    1. Brains are classical systems. If there were such a thing as quantum consciousness, the eyes, as extensions of the brain, could form a type of interferometer. We could then have vision equal to a 5 cm diameter telescope. We would have known what the planets look like long before Galileo. As such, our classical brains have some difficulty wrapping themselves around the quantum world.
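      For scale, a rough diffraction-limit (Rayleigh) estimate, assuming a baseline of about 5 cm and visible light at roughly 550 nm:

      \theta \approx 1.22\,\lambda/D = 1.22 \times (550\times10^{-9}\,\mathrm{m})/(0.05\,\mathrm{m}) \approx 1.3\times10^{-5}\,\mathrm{rad} \approx 2.8''

      Jupiter's disk subtends roughly 30 to 50 arcseconds as seen from Earth, so a resolution of a few arcseconds would indeed have revealed the planets as disks long before Galileo.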

  16. I believe that Francis Crick, whose first love was physics, was advised it was too late for him to learn physics after his service in World War II, so he went into the less challenging field of molecular biology.
    Physics is even more challenging now than it was then. Perhaps it is impractical to expect a physicist in her 30s to switch to and master a new subfield, especially while trying to make a living?
    In other words, perhaps we're beginning to run up against the limits of what one human can do in a single lifetime. Progress may have become unavoidably slower after the vast edifice of the standard model was established.

  17. I have spent some time perusing the August 2018 Review of Particle Physics. We read there: "superstring theory stands out as the best-studied and technically most developed proposal, possessing in particular a high-level internal, mathematical consistency." (page 849). It is simply untrue that good physics is being ignored (both on the experimental and theoretical fronts). As an outsider who has kept an eye on supersymmetry and string theory, I remain delighted in the interplay between those topics and mathematics. That said, were it termed pure mathematics and not physics, perhaps there would not exist such a hostility to its research. In my experience, systemic problems in how undergraduate university physics is taught do more harm than any amount of string theory could ever do. Bias, when it is evident, begins far earlier than research level, perhaps as early as (or, earlier than) high-school science courses.

  18. Cognitive bias as it is described here is a general and important problem, no doubt about it. Everything written here by Sabine is justified. But it also exists in a greater dimension, which in some way feels like a religious faith.

    One example from the past: In the 1930s the great mathematician John von Neumann presented a proof which says that a deterministic view of the world is not compatible with QM. This was seen as a confirmation of the Copenhagen interpretation of QM. All its representatives -- Bohr, Heisenberg, and also Heisenberg's follower von Weizsäcker -- were very enthusiastic about this paper. And one should assume that all physicists involved in this topic carefully investigated the arguments. But then, about 30 years later, the Northern Irish physicist John Bell showed that this proof of von Neumann's rested on a very questionable assumption: von Neumann had presumed that expectation values are additive even for non-commuting observables, which a hidden-variable theory need not satisfy.
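    The standard spin-1/2 counterexample makes the gap clear. Individual measured values of \sigma_x and \sigma_y are each \pm 1, so any additive assignment of pre-existing values would give

    v(\sigma_x) + v(\sigma_y) \in \{-2, 0, +2\}, \quad \text{whereas} \quad \mathrm{eig}(\sigma_x + \sigma_y) = \pm\sqrt{2}.

    Additivity holds for quantum expectation values only on average, not for the dispersion-free states a hidden-variable theory requires -- which is exactly the loophole Bell pointed out.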

    How is it possible that such an error in a paper, which excited so many great physicists, was not detected for a long time? The reason was obviously that the result fitted perfectly into the expectation of the Copenhagen followers. And so any critical view was paralyzed.

    To my knowledge there are a lot of similar situations in today’s physics. In the example above, von Weizsäcker never engaged with Bell’s proof of the contrary, even though it was generally accepted.

    Another example is the open problems in relativity, where Lorentz once criticized Einstein and Einstein accepted Lorentz’s arguments. But neither Einstein himself nor anyone else ever presented corresponding arguments in favor of Einstein’s view. Physicists who try to clarify these questions now -- known professors among them -- simply do not find anyone open to a discussion.

    I think that this is a somewhat similar but from its dimension even greater problem in present physics than the point of cognitive bias.

  19. For US experimental physicists, avoiding cognitive biases as you describe them is hopeless. There, the funding agencies appoint a panel called the P5 -- I suppose with members selected with strong input from directors of major labs -- and that panel develops a long-range research plan and even identifies the specific projects to be done to implement it. The agencies follow the plan with religious fervor (see https://science.osti.gov/hep/hepap/Meetings/201911). Experimenters who want to do something that is not in the P5 plan are out; they have no chance of funding. The flip side is that if you follow the crowd and join an accelerator-based project (favored by lab directors) or a large DM search project (of the kind you point out is poorly motivated), etc., you are essentially guaranteed continuous funding.

  20. Is "group think" better known perhaps as "Intellectual Phase Locking" where those professionally invested in the certainty of specific 'knowledge' tend to ignore contradictory data as outliers because "everyone knows" that such things are not possible?

  21. @Terry Bollinger: Thanx for your response!

    IMHO, the world of theoretical physics is like a dynamic system: a constantly/slowly changing set of theories/ideas! & if the theoretical physicists of our world are really objective scientists, then how many theoretical physicists study which theory should/must constantly/slowly change accordingly!
    Otherwise, the system is absolutely/definitely broken/stuck!!!

    (BTW, IMHO, best episodes of The Big Bang Theory are the earlier ones! Start watching from the 1st episode! :-)

  22. lots and lots of (uncited) studies lol

  23. Astronomy is a little different, in terms of biases. Yes, of course astronomers have biases, but a pretty strong motivation for many astronomers is something like "what's in the sky?"

    This leads to fairly good support for surveys, which take an unbiased (ha!) look at all parts of the sky (within the survey footprint) with a particular camera, or similar. And the results are almost always made public, quite quickly.

  24. Sabine's following definition is not correct: "Take for example loss aversion. This is more commonly known as “throwing good money after bad”. It means that if we have invested time or money into something, we are reluctant to let go of it and continue to invest in it even if it no longer makes sense, because getting out would mean admitting to ourselves that we made a mistake. Loss aversion is one of the reasons scientists continue to work on research agendas that have long stopped being promising."

    Loss aversion actually refers to feeling penalties more strongly than rewards, to the extent that a clever framing of the same outcome in terms of avoiding a loss rather than receiving a gain can increase someone's valuation of that outcome. Loss aversion has been used to try to explain what Sabine is actually describing, which is the sunk cost fallacy (sometimes called escalation behavior by some psychologists).


COMMENTS ON THIS BLOG ARE PERMANENTLY CLOSED. You can join the discussion on Patreon.
