When Senator Rand Paul last year proposed that non-experts participate in the review panels that award competitive research grants, my first reaction was to laugh. I have reviewed my share of research proposals, and I can tell you that without experience in the relevant discipline you can’t even judge whether a proposal is feasible, let alone promising.
I nodded to myself when I read that Jeffrey Mervis, reporting for Science Magazine, referred to Sen. Paul’s bill as an “attack on peer review,” and that Sean Gallagher from the American Association for the Advancement of Science called it “as blatant a political interference into the scientific process as it gets.”
But while Sen. Paul’s cure is worse than the disease (and has, to date, luckily not passed the Senate), I am afraid his diagnosis is right. The current system is indeed “baking in bias,” as he put it, and he is correct that “part of the problem is the old adage publish or perish.” And, yes, “We do have silly research going on.” Let me tell you.
For the past 15 years, I have worked in the foundations of physics, a field that has not seen progress for decades. Some 40 years ago, theorists in my discipline became convinced that the laws of nature must be mathematically beautiful in specific ways. By these standards, which are still in use today, a good theory should be simple, have symmetries, and contain no numbers much larger or smaller than one, the last requirement being referred to as “naturalness.”
Based on such arguments from beauty, they predicted that protons should be able to decay. Experiments have looked for this since the 1980s, but so far not a single proton has been caught in the act. This has ruled out many symmetry-based theories. But it is easy to amend these theories so that they evade experimental constraints, hence papers continue to be written about them.
Theorists also predicted that we should be able to detect dark matter particles, such as axions or weakly interacting massive particles (WIMPs). These hypothetical particles have been searched for in dozens of experiments with increasing sensitivity – unsuccessfully. In response, theorists now write papers about hypothetical particles that are even harder to detect.
The same criteria of symmetry and naturalness led many particle physicists to believe that the Large Hadron Collider (LHC) should see new particles besides the Higgs boson, for example supersymmetric particles or dark matter candidates. But none were seen. The LHC data is not yet fully analyzed, but it is already clear that if something hides in the data, it’s not what particle physicists thought it would be.
You can read the full story in my book “Lost in Math: How Beauty Leads Physics Astray.”
Most of my colleagues blame the lack of progress on the maturity of the field. Our theories already work extremely well, so testing new ideas is difficult, not to mention expensive. The easy things have been done, they say; we must expect a slowdown.
True. But this doesn’t explain the stunning profusion of botched predictions. It’s not as if we predicted one particle that wasn’t there. We predicted hundreds of particles, and fields, and new symmetries, and tiny black holes, and extra dimensions (in various shapes, and sizes, and widths), none of which were there.
This production of fantastic ideas has been going on for so long that it has become accepted procedure. In the foundations of physics we now have a generation of researchers who make careers studying things that probably don’t exist. And instead of discarding methods that don’t work, they write ever more papers of decreasing relevance. Instead of developing theories that better describe observations, they develop theories that are harder to falsify. Instead of taking risks, they stick to ideas that are popular with their peers.
Of course I am not the first to figure out that beauty doesn’t equal truth. Indeed, most physicists would surely agree that using aesthetic criteria to select theories is not good scientific practice. They do it anyway. Because all their colleagues do it. And because they all do it, this research will get cited, will get published, and will then be approved by review panels that take citations and publications as a measure of quality. “Baking in bias” is a pretty good summary.
This acceptance of bad scientific practice for the sake of productivity is certainly not specific to my discipline. Look, for example, at the psychologists whose shaky statistical analyses now make headlines. The most prominent victim is Amy Cuddy’s “Power Posing” hypothesis, but the problem has been known for a long time. As Jessica Utts, president of the American Statistical Association, pointed out in 2016, “statisticians and other scientists have been writing on the topic for decades.”
Commenting on this “False Positive Psychology,” Joseph Simmons, Leif Nelson, and Uri Simonsohn wrote, “Everyone knew it was wrong.” But I don’t think so. I have myself spoken to psychologists who thought their methods were fine because that’s what they were taught to do. And the claim doesn’t make sense either: had psychologists known their results were likely statistical artifacts, they’d also have known that other groups could use the same methods to refute their results.
Or look at Brian Wansink, the Cornell professor with the bottomless soup bowl experiment. He recently drew unwanted attention to himself with a blog post in which he advised a student to try harder to get results out of a data set because it “cost us a lot of time and our own money to collect.” Had Wansink been aware that massaging data until it delivers is not sound statistical procedure, he’d probably not have blogged about it.
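To see why that kind of data-mining is a problem, here is a minimal simulation – my own illustration with made-up numbers, not anything from these studies – of what happens when a study with no real effect measures many outcomes and reports whichever one crosses the usual significance threshold:

```python
import random

random.seed(1)

# One simulated study with NO real effect: two groups of 30 people,
# 20 unrelated outcome variables, every measurement pure noise.
def null_study(n_outcomes=20, n_per_group=30):
    significant = 0
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        mean_diff = sum(a) / n_per_group - sum(b) / n_per_group
        # two-sample z-test with known unit variance
        se = (2 / n_per_group) ** 0.5
        if abs(mean_diff / se) > 1.96:  # the usual "p < 0.05" cutoff
            significant += 1
    return significant

# Run many such studies and count how often at least one outcome looks "significant".
n_studies = 1000
hits = sum(null_study() > 0 for _ in range(n_studies))
print(f"{hits / n_studies:.0%} of pure-noise studies find at least one 'significant' effect")
```

With 20 outcomes tested at p < 0.05, roughly two out of three such pure-noise studies turn up at least one “effect” – exactly the kind of statistical artifact Simmons, Nelson, and Simonsohn warned about.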
What is going on here? In two words: “communal reinforcement,” more commonly known as groupthink. The headlines may say “research shows,” but it doesn’t: researchers show. Scientists, like all of us, are affected by their peers’ opinions. If everyone does it, they think it’s probably okay. They also like to be liked, not to mention that they like having an income. This biases their judgement, but the current organization of the academic system does not offer protection. Instead, it makes the problem worse by rewarding those who work on popular topics.
This problem cannot be solved by appointing non-experts to review panels – that merely creates incentives for research that’s easy to comprehend. We can impose controls on statistical analyses, and enforce requirements for reproducibility, and propose better criteria for theory development, but this is curing the symptoms, not the disease. What we need is to finally recognize that scientists are human, and that we don’t do enough to protect scientists’ ability to make objective judgements.
We will never get rid of social biases entirely, but simple changes would help. For starters, every scientist should know how being part of a group can affect their opinion. Grants should not be awarded based on popularity. Researchers who leave fields of declining promise need encouragement, not punishment for the drop in productivity that comes with retraining. And we should generally require scientists to name both the advantages and the shortcomings of their hypotheses.
Most importantly, we should not sweep the problem under the rug. As science denialists become louder both in America and in Europe, many of my colleagues publicly cheer for their profession. I approve. On the flip side, they want no public discussion of our problems because they are afraid of funding cuts. I disagree. The problems with the current organization of research are obvious – so obvious even Sen. Paul sees them. It is pretending the problem doesn’t exist, not acknowledging it and looking for a solution, that breeds mistrust.
Tl;dr: Academic freedom risks becoming a farce if we continue to reward researchers for working on what is popular. Denying the problem doesn’t help.