Gary Gutting's recent post What Do Scientific Studies Show? at the NYT blogs is utterly unremarkable. Or so I thought, being clearly biased because the guy is a professor of philosophy and I'm at the other end of the circle. But then he puts forward a proposal I think is brilliant: a labeling system for scientific news "that made clear a given study’s place in the scientific process", ranging from speculative ideas and preliminary results all the way to established scientific theory.
I like the idea because it would be an easy way to resolve a tension in science news: what's new and exciting, and therefore likely to make headlines, is also often controversial and likely to be refuted later. The solution can't be to stop reporting what's new and exciting, but to find a good way of making clear that, while interesting and promising, a result isn't (yet) established scientific consensus.
23andMe has a star rating to indicate how reliable a correlation between a genetic sequence and certain traits/diseases is, based on what has been reported in the scientific literature. (See my earlier blogpost for screenshots showing what that looks like.) They have a white paper laying out the criteria for assessing the scientific status of these correlations. The 23andMe rating serves much the same purpose as the proposed rating for science news. It is handy as a quick orientation, and it is a guide for those who can't or don't want to dig into the scientific literature themselves. It doesn't tell you to disregard results with few stars, just to keep in mind that they might turn out to be a data glitch, and to enjoy or worry with caution.
I think that such a label indicating how established a scientific result or idea is would be easy to use. Writers could just assign it themselves with help from the researchers they were in contact with while working on a piece. That might not always be very accurate, but bloggers would undoubtedly add their voices. Most likely a service would pop up to aggregate all ratings on a given topic or press release (probably weighted by the source). I'd guess it would be pretty much self-organized, because we're all so used to such ratings for other purposes.
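To make the aggregation idea concrete, here is a minimal sketch of how such a service might combine star ratings weighted by source. Everything here is made up for illustration: the sources, their trust weights, and the star values are all hypothetical assumptions, not part of any existing system.

```python
# Hypothetical sketch: aggregate 0-5 star ratings for one press release,
# weighting each rating by how much we trust its source. All names,
# weights, and ratings below are invented for illustration.

def aggregate_rating(ratings):
    """Weighted average of (stars, source_weight) pairs, one decimal place."""
    total_weight = sum(weight for _, weight in ratings)
    if total_weight == 0:
        return None  # no usable ratings yet
    weighted_sum = sum(stars * weight for stars, weight in ratings)
    return round(weighted_sum / total_weight, 1)

# Example: three hypothetical raters of the same study.
ratings = [
    (2, 3.0),  # a researcher in the field, weighted highest
    (1, 1.5),  # a science blogger
    (4, 0.5),  # the press release itself, weighted lowest
]
print(aggregate_rating(ratings))  # prints 1.9
```

The point of the weighting is exactly the one made above: a rating from someone close to the research should count for more than the study's own press release, which has an obvious incentive to oversell.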
Do you think such a labeling system would be helpful? If so, what criteria would you require for zero to five stars?