[See below for travel update in English]
Starting today, the German translation of “Lost in Math” is available in stores under the title “Das hässliche Universum: Warum unsere Suche nach Schönheit die Physik in die Sackgasse führt.” Because of communication problems with the publisher, I did not see the German translation in advance; in fact, I only received the book myself on Friday. I haven’t read it yet either. Do let me know what’s in it.
Over the next months I will continue to give talks on the topic of “Mist in der Physik” (roughly, “junk in physics”), both in German and in English. In the first week of October I am in New Jersey (October 3rd) and in Richmond, Kentucky (October 4th). In the second week of October I am at the book fair. On November 7th I give a talk at the planetarium “Am Insulaner” in Berlin (not about the book, but about dark matter). On November 8th I speak at the Urania, then again about my book. On November 29th I am at Chapman University, Los Angeles, and on December 10th in Kaiserslautern.
Besides the German translation, there will also be translations into Chinese, Japanese, Spanish, French, Russian, Korean, Italian, and Romanian.
On October 3rd I’m in New Jersey at the Stevens Institute of Technology. I can’t recall sending either title or abstract, but evidently I’m speaking about “How Physics Went Wrong.” On October 4th I’m in Richmond, Kentucky, for a lecture and book signing.
The week after this I’m in Frankfurt at the International Book Fair. On November 7th I’m speaking at the Berlin observatory “Am Insulaner” about dark matter (not about the book!) and on November 8th I’m at the Urania in Berlin, back to speaking about the book. On November 29th I’m at Chapman University LA, on December 10th in Kaiserslautern, Germany.
Besides German, the book will also be translated into Chinese, Japanese, Spanish, Italian, French, Russian, Korean, and Romanian. The English audiobook is supposed to appear in December. The British, you guessed it, still haven’t bought the rights.
For updates, please follow me on twitter or facebook.
Monday, September 24, 2018
Hawking temperature of black holes measured in fluid analogue
Fluid art by Vera de Gernier.
Hawking, notably, was the first to derive that black holes are not entirely black, but must emit what is now called “Hawking radiation.” The temperature of this radiation is inversely proportional to the mass of the black hole, a relation that has so far not been experimentally confirmed.
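For reference, the temperature Hawking derived for a black hole of mass $M$ is

$$T_{\rm H} = \frac{\hbar\, c^3}{8\pi\, G\, M\, k_{\rm B}},$$

which for a black hole of one solar mass works out to roughly 60 nanokelvin, far below the 2.7 Kelvin of the cosmic microwave background.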
Since the known black holes out there in the universe are very massive, their temperature is too small to be measurable. For this reason, physicists have begun to test Hawking’s predictions by simulating black holes in the laboratory using superfluids – fluids cooled to a few degrees above absolute zero, at which point they have almost no viscosity. If a superfluid has regions where it flows faster than the speed of sound in the fluid, then sound waves cannot escape from the fast-flowing part of the fluid. This is similar to how light cannot escape from a black hole.
The resemblance between the two cases is more than just a verbal analogy, as Bill Unruh first showed in the 1980s: The mathematics of the two situations is identical. Therefore, physicists should be able to use the superfluid to measure the properties of the radiation predicted by Hawking, because his calculation applies to these fluids too.
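Schematically, the effective (“acoustic”) line element that sound waves see in a one-dimensional flow with velocity $v$ and local sound speed $c_s$ is, up to an overall factor,

$$ds^2 \propto -\left(c_s^2 - v^2\right)\,dt^2 - 2\,v\,dx\,dt + dx^2,$$

which has a horizon wherever $|v| = c_s$, in close analogy to how the Schwarzschild metric has a horizon where the escape velocity reaches the speed of light.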
Checking Hawking’s predictions is what Jeff Steinhauer and his group at the Technion in Israel are doing. They use a cloud of about 8000 rubidium atoms at a temperature so low that the atoms form a Bose-Einstein condensate and become superfluid. They then use lasers to confine the cloud and to change the number density in part of it. Changing the number density also changes the speed of sound, and hence creates a “sonic horizon.”
Number density (top) and velocity (bottom) of the superfluid. The drop in the middle simulates the sonic horizon. Figure 2 from arXiv:1809.00913.
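As a rough illustration of the geometry (this is not the group’s analysis code, and all numbers are made up), here is a minimal sketch that locates the sonic horizon from density and velocity profiles, using the Bose-Einstein condensate relation $c_s = \sqrt{g\,n/m}$ for the local speed of sound:

```python
import numpy as np

# Illustrative (made-up) profiles along the long axis of the cloud.
# In a Bose-Einstein condensate the local sound speed is
# c_s = sqrt(g * n / m), so it drops wherever the density drops.
x = np.linspace(-20e-6, 20e-6, 400)       # position in meters
n = 1e20 * np.where(x < 0, 1.0, 0.4)      # number density step at x = 0
v = 0.5e-3 * np.where(x < 0, 1.0, 4.0)    # flow speed in m/s, faster downstream

g_over_m = 5e-26                          # interaction constant / atom mass (illustrative)
c_s = np.sqrt(g_over_m * n)               # local speed of sound

# The sonic horizon sits where the flow first becomes supersonic:
horizon = x[np.argmax(v > c_s)]
print(f"sonic horizon near x = {horizon * 1e6:.1f} micrometers")
```

Downstream of the horizon, sound waves are dragged along faster than they can propagate back, which is the fluid analogue of being inside the black hole.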
Using this method, Steinhauer’s group already showed some years ago that, yes, the fluid black hole emits radiation, and that this radiation is entangled across the horizon, as Hawking predicted. They measured this by recording density fluctuations in the cloud and then demonstrating that the fluctuations on opposite sides of the horizon are correlated.
Three weeks ago, Steinhauer’s group reported results from a new experiment in which they have now measured the temperature of the fluid black hole:
- Observation of thermal Hawking radiation at the Hawking temperature in an analogue black hole
Juan Ramón Muñoz de Nova, Katrine Golubkov, Victor I. Kolobov, Jeff Steinhauer
arXiv:1809.00913 [gr-qc]
The authors also point out in the paper that they see no evidence of a black hole firewall. A black hole firewall would have been in conflict with Hawking’s prediction, according to which radiation from the black hole does not carry information.
In 2012, a group of researchers from UCSB argued that preserving information would necessitate a barrier of highly energetic particles – the “firewall” – at the black hole horizon. Their argument is wrong: the original proof contains a mistake, and, as I demonstrated, it is perfectly possible to preserve information without creating a firewall. Nevertheless, the firewall issue has attracted a lot of attention. The new experiment shows that the fluid black holes obey Hawking’s prediction, and no firewall appears.
Of course the fluid black hole does not reproduce the mathematics of a real black hole entirely. Most importantly, the emission of radiation does not reduce the mass of the black hole, as it should if the radiation carried away energy. This is the lack of “backreaction” after which this blog is named. Note, however, that Hawking’s calculation also neglects backreaction. So as far as the premises of Hawking’s calculation are concerned, fluid analogies should work fine.
The fluid analogues also differ from real black holes in that they have a different symmetry (the system is effectively linear, a line basically, rather than spherical) and a finite size. You may complain that’s a rather unrealistic case, and I would agree. But I think that makes them more, not less, interesting. That’s because these fluids really simulate lower-dimensional black holes in a box. And this is exactly the case for which string theorists claim they can calculate what happens, using what’s known as the AdS/CFT correspondence.
Now, if the string theory calculations were correct then the information should leak out of the black hole. If you want to avoid a black hole firewall – because that hasn’t been observed – you need to break the entanglement across the horizon. But this isn’t compatible with the earlier results of Steinhauer’s group.
So, this result documents that black holes in a box do not behave like string theorists think they should. Of course the current measurement results have large uncertainties and will have to be independently reproduced before the case can be considered settled. But I have little doubt the results of the Steinhauer group will hold up. And I’ll be curious to hear what string theorists say about this.
Wednesday, September 19, 2018
Will you come to outer space? [Music Video]
I’ve done it again. This time I layered up to nine copies of myself. I have also squeaked out a high D, drawn a space ship that looks like a crossover of shark and saucer, and bought a new lipstick. But really the biggest improvement comes from me finally replacing my crappy camcorder with a mid-tier camera, which is why you can now enjoy all my wrinkles and pimples in unprecedented clarity.
Wednesday, September 12, 2018
Book Review: “Making Sense of Science” by Cornelia Dean
Making Sense of Science: Separating Substance from Spin
By Cornelia Dean
Belknap Press (March 13, 2017)
It’s not easy, being a science journalist. On one hand, science journalists rely on good relations with scientists. On the other hand, their next article may be critical of those scientists’ work. On the one hand they want to get the details right. On the other hand they have tight deadlines and an editor who scraps that one paragraph which took a full day to write. That’s four hands already, and I wasn’t even counting the hands they need to write.
Like most scientists, I used to think if I see a bogus headline it’s the writers’ fault. But the more science writers I got to know, the better my opinion of them has become. Unlike scientists, journalists strongly adhere to professional guidelines. They want to get things right and they want the reader to know the truth. If they get something wrong, the misinformation almost always came from scientists themselves.
The amount of misinformation about research in my own discipline is so high that no one who doesn’t work in the field has a chance to figure out what’s going on. Naturally this makes me wonder how much I can trust the news I read about other research areas. Cornelia Dean’s book “Making Sense of Science” tells the reader what to look out for.
Cornelia Dean has been a science writer for the New York Times for 30 years and she knows her job. The book begins with a general introduction, explaining what science is, how it works, and why it matters. She then moves on to conflicts of interest, checking sources, difficulties in assessing uncertainty and risk, scientific evidence in court, pitfalls of statistical analysis and analytical modeling, overconfident scientists, and misconduct.
The book is full of examples, proceeds swiftly, and reads well. The chapters end with bullet-point lists of items to recall, which is helpful if you, like me, tend to switch books halfway through and then forget what you already read.
“Making Sense of Science” also offers quick summaries of topics that are frequently front-page news: climate change, genetically modified crops, organic food, and cancer risk. While I found those summaries well done, they seem somewhat randomly selected. I guess they are mostly there because the author is familiar with those topics.
The biggest shortcoming of the book is its lack of criticism of the scientific disciplines and of journalism itself. While the author acknowledges that she and her colleagues often operate under time pressure and shit happens, she doesn’t assess how big a problem this is or which outlets are more likely to suffer from it. She also doesn’t mention that even scientists who do not take money from industry have agendas to push, and that scientists as well as writers profit from big headlines.
In summary, I found the book very useful, especially as far as the discussion of risk assessment is concerned, but it presents a suspiciously clean and sanitized picture of journalism.
Sunday, September 09, 2018
I’m now older than my father has ever been
Old photo. |
I’ve had trouble with my blood pressure ever since I was a teenager. I also have fainting episodes. One time I infamously passed out on a plane as it was approaching the runway. The pilot had to cancel take-off and call an ambulance. Paramedics carried me off the plane, wheeled me away, and then kept me in the hospital for a week. While noteworthy for the trouble I had getting hold of a bag that traveled without me, this was neither the first nor the last time my blood pressure suddenly gave in for no particular reason. I’ve been on the receiving end of epinephrine shots more than once.
Besides being a constant reminder that life is short, having a close relative who died young from heart failure has also added a high-risk stamp to my medical documents. This blessed me with countless extra exams thanks to which I now know exactly that some of my heart valves don’t properly close and the right chambers are enlarged. I also have a heart arrhythmia.
My doctors say I’m healthy, which really means they don’t know what’s wrong with me. Maybe I just have a fickle vagus nerve that pulls the plug every once in a while. Whatever the cause of my indisposition, I’ve spent most of my life in the awareness that I may not wake up tomorrow.
Today I woke up to find I’ve reached the end of my subconscious life expectancy. In two weeks I’ll turn 42. I have checked off almost all boxes on my to-do list for life: plant a tree, have a child, write a book. The only unchecked item is visiting New Zealand. But besides this, folks, I feel like I’m done here.
And what the heck do I do now with the rest of my life?
I didn’t really think about this until a few people asked what I plan on doing now that my book has been published. My current contract will run out next year, and then what? Will I write another book? Apply for another grant? Do something entirely different? To which my answer was, I have no idea. Ask me anything about quantum gravity and I may have a smarter reply.
I worry about the future, of course, constantly. Oh yes, I am a great worrier. But the future I worry about is not mine, it’s that of mankind. I’m just a blip in the symphony, a wheel in the machinery, a node in a giant information-processing network. Science, to me, is our collective attempt to accurately understand the laws of nature. It’s not about me, it’s not about you, it’s about us; it’s about whether the human race will last or whether we’re just too dumb to figure out how the world works.
Some days I am optimistic, but today I fear we are too dumb. Interactions of humans in large groups have consequences that we do not intuitively grasp, a failure that underlies not only twitter witch-hunts and viral fake news, but is also the reason why science works so inefficiently. I’m not sure we can fix this. Scientists have known for decades that the pressure to work on topics that produce results quickly and that are well-cited supports the widespread use of bad methodologies. But they do nothing about it except for the occasional halfhearted complaint.
Unsurprisingly, taxpayers who are financing research-bubbles with zero return on investment have taken cue. Some of them conclude, not entirely incorrectly, that much of the scientific enterprise is corrupt and conclusions cannot be trusted. If we carry on like this, science skeptics are bound to become more numerous. And that’s how it will end, the great human civilization: Not with a bang and not with a whimper, but with everyone yelling at each other that someone else was responsible to do something about it.
And if not even scientists can learn that social feedback influences their decisions, how can we expect the same of people who have not been trained to objectively evaluate evidence? Most scientists still believe their enterprise is governed by an invisible hand that will miraculously set things right should they go astray. They believe science self-corrects. Hahaha. It does not, of course. Someone, somewhere, has to actually do the correcting. Someone has to stand up and say: “This isn’t good science. We shouldn’t do this. Stop it.” Hence my book.
I used to think old people must hate all younger people because who wouldn’t rather be young. Now that I’ve reached a certain age myself I find the opposite is true. Not only am I relieved that my hyperactive brain is slowing down, making it much easier for me to focus on one thing at a time. I also love young people. They give me hope, hope that I lost in my own generation. Kids, I know you inherit a mess. I am sorry. Now hand me the wine.
But getting older also has an awkward side, which is that younger people ask me for advice. Worse, I get invited to speak about my experience as a woman in science. I am supposed to be a role model now, you see, I am supposed to encourage young women to follow my footsteps. If only I had something encouraging to say; if only those footsteps would lead elsewhere than nowhere. I decline these invitations. My advice, ladies, is to find your own way. And keep in mind, life is short.
Today’s advice to myself is to come up with an idea how I’ll make a living next year. But after two weeks of travel, 4 lectures and 2 interviews, with a paper and an essay and two blogposts squeezed in between, I am only tired. I have also quite possibly had a glass of wine too much.
Maybe I’ll make a plan tomorrow, first thing when I wake up. If I wake up.
Wednesday, September 05, 2018
Superfluid dark matter passes another check: strong gravitational lensing
Image: kamere.com.
Astronomers have found that galaxies show regularities that are difficult to accommodate in theories of particle dark matter, for example the Tully-Fisher relation and the Radial Acceleration Relation. These observed patterns in the measurements don’t follow all that easily from the simple models of particle dark matter. To make the models work, theorists have to invoke additional effects that are assigned to various astrophysical processes, notably stellar feedback. While these processes arguably exist, it isn’t clear that they actually act in galaxies in the amounts necessary to explain the observations.
In the past 20 years or so, astrophysicists have improved computer simulations for galaxy formation until everything fit with the data, sometimes adapting the models to new observations. These computer simulations now contain about a dozen or so parameters (there are various simulations and not all of them list the parameters, so it’s hard to tell exactly) and the results agree well with observation.
But I find it somewhat hard to swallow that regularities which seem generic in galaxies follow from the theory only after much fiddling. Indeed, the very fact that it took astrophysicists so long to get galaxies right tells me that the patterns in our observations are not generic to particle dark matter. It signals that the theories are missing something important.
One of the proposals for the missing piece has long been that gravity must be modified. But I, like many theorists, have not been particularly convinced by this idea, the reason being that it’s hard to change anything about Einstein’s theory of general relativity without running into conflict with the many high-precision measurements that are in excellent agreement with the theory. On the other hand, modified gravity works dramatically well for galaxies and explains the observed regularities.
For a long time I’ve been rather agnostic about the whole issue. Then, three years ago, I read a paper in which Berezhiani and Khoury proposed that dark matter is a superfluid. The reason I even paid attention to this had nothing to do with dark matter; at the time I was working on superfluid condensates that can mimic gravitational effects and I was looking for inspiration. But I have since become a big fan of superfluid dark matter – because it makes so much sense!
You see, the superfluid that Berezhiani and Khoury proposed isn’t just any superfluid. It has an interaction with normal matter, and this interaction creates a force. This force looks like modified gravity. Indeed, I think it is justified to call it modified gravity, because the pull acting on galaxies is now no longer that of general relativity alone.
However, to get the stuff to condense, you need sufficient pressure, and the pressure comes from the gravitational attraction of the matter itself. Only if you have matter sufficiently clumped together will the fluid become a superfluid and generate the additional force. If the matter isn’t sufficiently clumped, or is just too warm, it’ll not condense.
This simple idea works remarkably well to explain why the observations that we assign to dark matter seem to fall into two categories: Those that fit better to particle dark matter and those that fit better to modified gravity. It’s because the dark matter is a fluid with two phases. In galaxies it’s condensed. In galaxy clusters, most of it isn’t condensed because the average potential isn’t deep enough. And in the early universe it’s too warm for condensation. On scales of the solar system, finally, it doesn’t make sense to even speak of the superfluid’s force, it would be like talking about van der Waals forces inside a proton. The theory just isn’t applicable there.
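In code-sketch form, the two-phase logic of the previous paragraph reads roughly like this (the thresholds are placeholders for illustration, not values from the Berezhiani-Khoury model):

```python
def dark_matter_phase(temperature, potential_depth):
    """Toy classifier for the superfluid dark matter picture.
    Thresholds are placeholders, not the model's actual numbers."""
    T_CRITICAL = 1.0    # condensation temperature (arbitrary units)
    PHI_CRITICAL = 1.0  # required depth of the gravitational potential

    if temperature > T_CRITICAL:
        return "too warm: no condensate, behaves like particle dark matter (early universe)"
    if potential_depth < PHI_CRITICAL:
        return "not clumped enough: mostly uncondensed (galaxy clusters)"
    return "condensed: superfluid phase, extra force mimics modified gravity (galaxies)"
```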
I was pretty excited about this until it occurred to me there’s a problem with this idea. The problem is that we know, at least since the gravitational wave event GW170817 with its optical counterpart, that gravitational waves travel at the same speed as light to good precision. This by itself is easy to explain with the superfluid idea: Light just doesn’t interact with the superfluid. There could be various reasons for this, but whatever the reason, it’s simple to accommodate in the model.
This has the consequence however that light which travels through the superfluid region of galaxies will not respond to the bulk of what we usually refer to as dark matter. The superfluid does have mass and therefore also has a gravitational pull. Light notices that and will bend around it. But most of the dark matter that we infer from the motion of normal matter is a “phantom matter” or an “impostor field”. It’s really due to the additional force from the superfluid. And light will not respond to this.
As a result, the amount of dark matter inferred from lensing on galaxies should not match the amount of dark matter inferred from the motion of stars. My student, Tobias Mistele, and I hence set out to have a look at strong gravitational lensing. We just completed our paper on this, and it’s now available on the arXiv:
- Strong lensing with superfluid dark matter
Sabine Hossenfelder, Tobias Mistele
arXiv:1809.00840 [astro-ph.GA]
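To see why this matters observationally, here is a toy comparison (with made-up numbers, not our paper’s actual fit): the mass one infers from stellar motion versus the lensing deflection caused by the actual mass, if, purely for illustration, half of the galaxy’s apparent dark matter were really the superfluid’s force rather than gravitating mass.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
kpc = 3.086e19     # kiloparsec in meters
M_sun = 1.989e30   # solar mass in kg

# Stars at radius r move as if there were this much enclosed mass:
v_circ = 200e3                 # measured circular velocity, m/s (illustrative)
r = 20 * kpc
M_dyn = v_circ**2 * r / G      # dynamical mass, from v^2 = G M / r

# In superfluid dark matter, part of that pull is the superfluid's extra
# force, not gravity. Light only responds to the actual mass, assumed
# here (purely for illustration) to be half the dynamical value:
M_actual = 0.5 * M_dyn

def deflection(M, b):
    """Point-mass lensing deflection angle, alpha = 4 G M / (c^2 b)."""
    return 4 * G * M / (c**2 * b)

b = 10 * kpc  # impact parameter of the light ray
print(f"dynamical mass: {M_dyn / M_sun:.2e} solar masses")
print(f"deflection if all of it gravitates: {deflection(M_dyn, b):.2e} rad")
print(f"deflection from the actual mass:    {deflection(M_actual, b):.2e} rad")
```

The mismatch between the two deflection angles is the kind of signature that strong-lensing data can constrain.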
One thing our analysis exemplifies is why criticisms of modified gravity that insist there is only one way to fit a galaxy are ill-founded. If you modify gravity by introducing additional fields – and that’s how almost all modifications of gravity work – the additional fields will have additional degrees of freedom and generally require additional initial conditions. There will hence usually be several solutions for galaxies. Indeed, some galaxies may, by some statistical fluke, not have attracted enough of the fluid for it to condense to begin with, though we have found no evidence of that.
We have been able to fit all lenses in our sample – 65 in total – except for one. The one outlier is a near-miss. It could be off for a variety of reasons, either because the measurement is imprecise, or because our model is overly simplistic. We assume, for example, that the distribution of the superfluid is spherically symmetric and time-independent, which almost certainly isn’t the case. Actually it’s remarkable it works at all.
Of course that doesn’t mean that the model is off the hook; it could still run into conflict with data that we haven’t checked so far. That observations based on the passage of light should show an apparent lack of dark matter might have other observable consequences, for example for gravitational redshift. Also, we have only looked at one particular sample of galaxies and those have no detailed data on the motion of stars. Galaxies for which there is more data will be more of a challenge to fit.
In summary: So far so good. Suggestions for what data to look at next are highly welcome.
Further reading: My Aeon essay “The Superfluid Universe”, and my recent SciAm article with Stacy McGaugh “Is dark matter real?”
Monday, September 03, 2018
Science has a problem, and we must talk about it
Bad stock photos of my job. A physicist is excited to have found a complicated way of writing the number 2.
I nodded to myself when I read that Jeffrey Mervis, reporting for Science Magazine, referred to Sen Paul’s bill as an “attack on peer review,” and Sean Gallagher from the American Association for the Advancement of Science called it “as blatant a political interference into the scientific process as it gets.”
But while Sen Paul’s cure is worse than the disease (and has, to date, luckily not passed the Senate), I am afraid his diagnosis is right. The current system is indeed “baking in bias,” as he put it, and it’s correct that “part of the problem is the old adage publish or perish.” And, yes, “We do have silly research going on.” Let me tell you.
For the past 15 years, I have worked in the foundations of physics, a field which has not seen progress for decades. What happened 40 years ago is that theorists in my discipline became convinced the laws of nature must be mathematically beautiful in specific ways. By these standards, which are still used today, a good theory should be simple, and have symmetries, and it should not have numbers that are much larger or smaller than one, the latter referred to as “naturalness.”
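A standard example is the squared ratio of the Higgs mass to the Planck mass,

$$\frac{m_H^2}{m_{\rm Pl}^2} \approx \left(\frac{125~\text{GeV}}{1.2\times 10^{19}~\text{GeV}}\right)^2 \approx 10^{-34},$$

which by these standards is unacceptably far from one; this is the “hierarchy problem” that helped motivate supersymmetry.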
Based on such arguments from beauty, they predicted that protons should be able to decay. Experiments have looked for this since the 1980s, but so far not a single proton has been caught in the act. This has ruled out many symmetry-based theories. But it is easy to amend these theories so that they evade experimental constraints, hence papers continue to be written about them.
Theorists also predicted that we should be able to detect dark matter particles, such as axions or weakly interacting massive particles (WIMPs). These hypothetical particles have been searched for in dozens of experiments with increasing sensitivity – unsuccessfully. In reaction, theorists now write papers about hypothetical particles that are even harder to detect.
The same criteria of symmetry and naturalness led many particle physicists to believe that the Large Hadron Collider (LHC) should see new particles besides the Higgs boson, for example supersymmetric particles or dark matter candidates. But none were seen. The LHC data is not yet fully analyzed, but it’s already clear that if something hides in the data, it’s not what particle physicists thought it would be.
You can read the full story in my book “Lost in Math: How Beauty Leads Physics Astray.”
Most of my colleagues blame the lack of progress on the maturity of the field. Our theories work extremely well already, so testing new ideas is difficult, not to mention expensive. The easy things have been done, they say, we must expect a slowdown.
True. But this doesn’t explain the stunning profusion of blundered predictions. It’s not like we predicted one particle that wasn’t there. We predicted hundreds of particles, and fields, and new symmetries, and tiny black holes, and extra dimensions (in various shapes, and sizes, and widths), none of which were there.
This production of fantastic ideas has been going on for so long it has become accepted procedure. In the foundations of physics we now have a generation of researchers who make a career of studying things that probably don’t exist. And instead of discarding methods that don’t work, they write ever more papers of decreasing relevance. Instead of developing theories that better describe observations, they develop theories that are harder to falsify. Instead of taking risks, they stick to ideas that are popular with their peers.
Of course I am not the first to figure beauty doesn’t equal truth. Indeed, most physicists would surely agree that using aesthetic criteria to select theories is not good scientific practice. They do it anyway. Because all their colleagues do it. And because they all do it, this research will get cited, will get published, and then it will be approved by review panels which take citations and publications as a measure of quality. “Baked in bias” is a pretty good summary.
This acceptance of bad scientific practice to the benefit of productivity is certainly not specific to my discipline. Look for example at psychologists whose shaky statistical analyses now make headlines. The most prominent victim is Amy Cuddy’s “Power Posing” hypothesis, but the problem has been known for a long time. As Jessica Utts, President of the American Statistical Association, pointed out in 2016 “statisticians and other scientists have been writing on the topic for decades.”
Commenting on this “False Positive Psychology,” Joseph Simmons, Leif Nelson, and Uri Simonsohn wrote “Everyone knew it was wrong.” But I don’t think so. Not only have I myself spoken with psychologists who thought their methods were fine because it’s what they were taught to do; it also doesn’t make sense. Had psychologists known their results were likely statistical artifacts, they’d also have known that other groups could use the same methods to refute their results.
Or look at Brian Wansink, the Cornell professor with the bottomless soup bowl experiment. He recently drew unwanted attention to himself with a blogpost in which he advised a student to try harder to get results out of data because it “cost us a lot of time and our own money to collect.” Had Wansink been aware that massaging data until it delivers is not sound statistical procedure, he’d probably not have blogged about it.
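For readers who wonder why this is a problem: if you test enough arbitrary slices of pure noise, something will cross the significance threshold. A toy simulation (hypothetical data, no real effect anywhere):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 1000 measurements of pure noise, assigned to 20 arbitrary subgroups.
data = rng.normal(size=1000)
labels = rng.integers(0, 20, size=1000)

# "Massage" the data: test every subgroup against the rest and keep
# whatever crosses p < 0.05. With 20 tests at the 5% level, the chance
# of at least one false positive is 1 - 0.95**20, about 64%.
for group in range(20):
    in_group = data[labels == group]
    rest = data[labels != group]
    t_stat, p_value = stats.ttest_ind(in_group, rest)
    if p_value < 0.05:
        print(f"subgroup {group}: p = {p_value:.3f}  <-- 'publishable'")
```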
What is going on here? In two words: “communal reinforcement,” more commonly known as group-think. The headlines may say “research shows” but it doesn’t: researchers show. Scientists, like all of us, are affected by their peers’ opinions. If everyone does it, they think it’s probably ok. They also like to be liked, not to mention that they like having an income. This biases their judgement, but the current organization of the academic system does not offer protection. Instead, it makes the problem worse by rewarding those who work on popular topics.
This problem cannot be solved by appointing non-experts to review panels – that merely creates incentives for research that’s easy to comprehend. We can impose controls on statistical analyses, and enforce requirements for reproducibility, and propose better criteria for theory development, but this is curing the symptoms, not the disease. What we need is to finally recognize that scientists are human, and that we don’t do enough to protect scientists’ ability to make objective judgements.
We will never get rid of social biases entirely, but simple changes would help. For starters, every scientist should know how being part of a group can affect their opinion. Grants should not be awarded based on popularity. Researchers who leave fields of declining promise need encouragement, not punishment because their productivity may dwindle while they retrain. And we should generally require scientists to name both advantages and shortcomings of their hypotheses.
Most importantly, we should not sweep the problem under the rug. As science denialists become louder both in America and in Europe, many of my colleagues publicly cheer for their profession. I approve. On the flipside, they want no public discussion about our problems because they are afraid of funding cuts. I disagree. The problems with the current organization of research are obvious – so obvious even Sen Paul sees them. It is pretending the problem doesn’t exist, not acknowledging it and looking for a solution, that breeds mistrust.
Tl;dr: Academic freedom risks becoming a farce if we continue to reward researchers for working on what is popular. Denying the problem doesn’t help.