[Image: Everett Collection]
During World War II, the National Socialists murdered about six million Jews. The events did not have a single cause, but most historians agree that a major factor was social reinforcement, more widely known as “group-think.”
The Germans who went along with the Nazis’ organized mass murder were not psychopaths. By all accounts they were “normal people.” They actively or passively supported genocide because at the time it was a socially acceptable position; everyone else did it too. And they did not personally feel responsible for the evils of the system. It eased their minds that some scientists claimed it was only rational to prevent supposedly genetically inferior humans from reproducing. And they hesitated to voice disagreement because those who opposed the Nazis risked retaliation.
It’s comforting to think that was Then and There, not Now and Here. But group-think isn’t a story of the past; it still happens and it still has devastating consequences. Take, for example, the 2008 mortgage crisis.
Again, many factors played together in the buildup of the crisis. But oiling the machinery were bankers who approved loans to applicants who likely couldn’t pay the money back. It wasn’t that the bankers didn’t know the risk; they thought it was acceptable because everyone else was doing it too. And anyone who didn’t play along would have put themselves at a disadvantage, by missing out on profits or by getting fired.
A vivid account comes from an anonymous Wall Street banker quoted in a 2008 NPR broadcast:
“We are telling you to lie to us. We’re hoping you don't lie. Tell us what you make, tell us what you have in the bank, but we won’t verify. We’re setting you up to lie. Something about that feels very wrong. It felt wrong way back then and I wish we had never done it. Unfortunately, what happened ... we did it because everyone else was doing it.”
When the mortgage bubble burst, hundreds of banks failed. In the following years, millions of people would lose their jobs in what many economists consider the worst financial crisis since the Great Depression of the 1930s.
It’s not just “them,” it’s “us” too. Science isn’t immune to group-think. On the contrary: Scientific communities are an ideal breeding ground for social reinforcement.
Research is currently organized in a way that amplifies, rather than alleviates, peer pressure: Measuring scientific success by the number of citations encourages scientists to work on what their colleagues approve of. Since the same colleagues are the ones who judge what is and isn’t sound science, there is safety in numbers. And anyone who does not play along risks losing funding.
As a result, scientific communities have become echo chambers of like-minded people who, maybe not deliberately but effectively, punish dissidents. And scientists don’t feel responsible for the evils of the system. Why would they? They just do what everyone else is doing.
The reproducibility crisis in psychology and in biomedicine is one of the consequences. In both fields, an overreliance on data with low statistical power and improper methods of data analysis (“p-value hacking”) have become common. That these statistical methods are unreliable has been known for a long time. As Jessica Utts, President of the American Statistical Association, pointed out in 2016: “statisticians and other scientists have been writing on the topic for decades.”
So why, then, did researchers in psychology continue using flawed methods? Because everyone else did it. It was what they were taught; it was the generally accepted procedure. And psychologists who insisted on stricter methods of analysis would have put themselves at a disadvantage: They’d have gotten fewer results with more effort. Of course they didn’t go the extra mile.
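To see concretely why p-value hacking produces spurious findings, here is a minimal simulation sketch in Python. The numbers below (20 outcome measures per study, 30 participants per group) are illustrative assumptions, not taken from any particular study; the point is only that if a researcher tries many outcome measures on pure noise and reports just the smallest p-value, “significant” results show up in far more than the nominal 5 percent of studies.

```python
# Minimal illustration of "p-value hacking": test many noise-only hypotheses,
# report only the best p-value, and "significant" findings appear far more
# often than the 5% threshold suggests. Parameters are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 1000         # simulated studies
n_tests_per_study = 20   # outcome measures tried per study; only the best is reported
n_samples = 30           # participants per group

false_positives = 0
for _ in range(n_studies):
    p_values = []
    for _ in range(n_tests_per_study):
        # Both groups are drawn from the SAME distribution: there is no real effect.
        a = rng.normal(size=n_samples)
        b = rng.normal(size=n_samples)
        p_values.append(stats.ttest_ind(a, b).pvalue)
    if min(p_values) < 0.05:  # report only the "significant" outcome
        false_positives += 1

print(f"Studies reporting a 'significant' effect: {false_positives / n_studies:.0%}")
```

With 20 independent tries per study, roughly 1 − 0.95^20 ≈ 64 percent of these noise-only studies yield at least one p-value below 0.05, even though no real effect exists.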
The same problem underlies an entirely different popular-but-flawed scientific procedure: “mouse models,” i.e., using mice to test the efficacy of drugs and medical procedures.
How often have you read that Alzheimer’s or cancer has been cured in mice? Right – it’s been done hundreds of times. But humans aren’t mice, and it’s no secret that results in mice – while not uninteresting – often don’t transfer to humans. Scientists only partly understand why, but that mouse models are of limited use for developing treatments for humans isn’t controversial.
So why do researchers continue to use them anyway? Because it’s easy and cheap and everyone else does it too. As Richard Harris put it in his book Rigor Mortis: “One reason everybody uses mice: everybody else uses mice.”
It happens here in the foundations of physics too.
In my community, it has become common to justify the publication of new theories by claiming the theories are falsifiable. But falsifiability is a weak criterion for a scientific hypothesis. It’s necessary, but certainly not sufficient, for many hypotheses are falsifiable yet almost certainly wrong. Example: It will rain peas tomorrow. Totally falsifiable. Also totally nonsense.
Of course this isn’t news. Philosophers have gone on about this for at least half a century. So why do physicists do it? Because it’s easy and because all their colleagues do it. And since they all do it, theories produced by such methods will usually get published, which officially marks them as “good science”.
In the foundations of physics, the appeal to falsifiability isn’t the only flawed method that everyone uses because everyone else does.
There are also theories that are plainly unfalsifiable. And arguments from beauty are another example.
In hindsight it seems perplexing, to say the least, but physicists published tens of thousands of papers with predictions for new particles at the Large Hadron Collider because they believed that the underlying theory must be “natural.” None of those particles were found.
Similar arguments underlie the belief that the fundamental forces should be unified because that’s prettier (no evidence for unification has been found) or that we should be able to measure the particles that make up dark matter (we haven’t). Maybe most tellingly, physicists in these communities refuse to consider the possibility that their opinions are affected by the opinions of their peers.
One way to address the current crises in scientific communities is to impose tighter controls on scientific standards. That’s what is happening in psychology right now, and I hope it’ll also happen in the foundations of physics soon. But this is curing the symptoms, not the disease. The disease is a lack of awareness of how we are affected by the opinions of those around us.
The problem will reappear until everyone understands the circumstances that breed group-think and learns to recognize the warning signs: People excusing what they do by saying everyone else does it too. People refusing to take responsibility for what they consider “evils of the system.” People unwilling to even consider that they are influenced by the opinions of others. We have all of these warning signs in science, and have had them for decades.
Accusing scientists of group-think is standard practice among science deniers. The tragedy is, there’s truth in what they say. And it’s no secret: The problem is easy to see for anyone who has the guts to look. Sweeping it under the rug will only further erode trust in science.
Read all about the overproduction crisis in the foundations of physics and what you – yes you! – can do to help in my book “Lost in Math,” out June 12, 2018.