I read a lot of news about science and science policies.
This probably doesn’t surprise you. But it may surprise you that most of the
time I find science news extremely annoying. It seems to be written for an
audience which doesn’t know the first thing about science. But I wonder, is it
just me who finds this annoying? So, in this video I’ll tell you the 10 things
that annoy me most about science news, and then I want to hear what you think
about this. Why does science news suck so much? That’s what we’ll talk about
today.
1. Show me your uncertainty estimate
I’ll start with my pet peeve: numbers without uncertainty estimates. Example: You have 3 months left to live. Plus or minus 100 years.
Uncertainty estimates make a difference for how to interpret numbers. But
science news quotes numbers all the time without mentioning the uncertainty
estimates, confidence levels, or error bars.
Here’s a bad example from NBC News, “The global death toll from Covid-19 topped 5 million on Monday”.
Exactly 5 million on exactly that day? Probably not. But if not exactly, then just how large is the uncertainty? Here’s an example of how to do it right, from The Economist, with a central estimate and an upper and a lower estimate.
The problem I have with this is that when I don’t see the
error bars, I don’t know whether I can take the numbers seriously at all. In
case you’ve wondered what this weird channel logo shows, that’s supposed to be a
data point with an error bar.
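To make this concrete, here’s a minimal sketch (in Python, with made-up numbers) of why the same central estimate reads very differently depending on the width of its error bar, using a standard 95% confidence interval under a normal approximation:

```python
def ci95(estimate, std_error):
    """95% confidence interval under a normal approximation: estimate ± 1.96·SE."""
    half_width = 1.96 * std_error
    return (estimate - half_width, estimate + half_width)

# Hypothetical death-toll estimate of 5 million, reported two ways:
lo, hi = ci95(5_000_000, 50_000)     # small uncertainty: the headline is meaningful
print(f"5.0 million, 95% CI [{lo:,.0f} to {hi:,.0f}]")

lo, hi = ci95(5_000_000, 1_500_000)  # large uncertainty: "5 million" says very little
print(f"5.0 million, 95% CI [{lo:,.0f} to {hi:,.0f}]")
```

The point isn’t this particular formula; it’s that without the second number, the first one is hard to interpret.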
2. Cite your sources
I constantly see websites that write about a study that was recently published in some journal by someone from some university, but that don’t link to the actual study. I then have to search for the researchers’ names, look up their publication lists, and find what the news article was referring to.
Here’s an example of how not to do it, from the Guardian: “This work is published in the journal Physical Review Letters.”
This isn’t helpful. Here’s the same paper covered by the BBC. This one has a
link. That’s how you do it.
Another problem with sources is that science news also frequently just repeats
press releases without actually saying where they got their information from. It’s
a problem because university press releases aren’t exactly unbiased.
Since you ask, the 95 percent confidence interval is 33 to
46 percent.
A similar study in 2018 found a somewhat lower percentage of about 23 percent, but still, that’s a lot. In short, press releases are not reliable sources, and neither are sources that don’t cite their sources.
3. Put a date on it
It happens a lot on social media that magazines share the same article repeatedly without mentioning it’s an old story. I’ve unfollowed a lot of pages because they’re wasting my time this way. In addition, some
pages don’t have a date at the top, so I might read several paragraphs before
figuring out that this is a story from two years ago.
A bad example of this is Aeon. It’s otherwise a really interesting magazine, but they hide the date in tiny font at the bottom of long essays. Please put the date at the top. Better still, if it’s an old story, make sure the reader can’t miss the date. Here’s an example of how to do it from the Guardian.
4. Tell me the history
Related to the previous one: selling an old story as new by forgetting to mention that it’s been done before. An example is this story from 2019 about a paper which proposed to use certain types of rocks as natural particle detectors to search for dark matter. The authors of the paper call these paleo-detectors. And in the paper they write clearly on the first page: “Our work on paleo-detectors builds on a long history of experiments.” But the Quanta Magazine article makes it sound like it’s a new idea.
This matters because knowing that it’s an old idea tells you
two things. First, it probably isn’t entirely crazy. And second, it’s probably
a gradual improvement rather than a sudden big breakthrough. That’s relevant context.
5. Don’t oversimplify it
For many questions of science policy there just isn’t a simple answer, there is no good solution, and sometimes the best answer we have is “we don’t know.” Sometimes all possible solutions to a problem suck, and trying to decide which one is the least bad option is difficult. But science news often presents simple answers and solutions, presumably thinking that’ll appeal to the reader.
What to do about climate change is a good
example. Have a look at this recent piece in the Guardian. “Climate change
can feel complex, but the IPCC has worked hard to make it simple for us.” Yeah
it only took them 3000 pages. Look, if the problem were indeed simple to solve, then why haven’t we solved it?
Maybe because it isn’t so simple? Because there are so many aspects to
consider, and each country has their own problems, and one size doesn’t fit
all. Pretending it’s simple when it isn’t doesn’t help us work out a solution.
6. It depends, but on what?
Related to the previous item, if you ask a scientist a
question, then frequently the answer is “it depends”. Will this new treatment
cure cancer? Well, that depends on the patient, what cancer they have, how long they’ve had it, whether you trust the results of this paper, whether that study will get funded, and so on and so forth. Is nuclear power a good way to curb carbon dioxide emissions? Well, that depends on how much wind blows in your corner of the earth, how high the earthquake risk is, how much space you have for solar panels, and so on. If science news doesn’t mention such qualifiers, I have to throw out the entire argument.
A particularly annoying special case of this is news pages which don’t tell you what country study participants were recruited from or where a poll was conducted. They just assume that everyone who comes to their website must know what country the site is located in.
7. Tell me the whole story
A lot of science news is guilty of lying by omission. I have talked about several cases of this in earlier videos.
For example, stories about how climate models have correctly
predicted the trend of the temperature anomaly that fail to mention that the
same models are miserable at predicting the total temperature. Or stories about
nuclear fusion that don’t tell you the total energy input. Yet another example
are stories about exciting new experiments looking for some new particle that
don’t tell you there’s no reason these particles should exist in the first
place. Or stories about how the increasing temperatures from climate change
kill people in heat waves, but fail to mention that the same increasing
temperatures also save lives because fewer people freeze to death. Yeah, I
don’t trust any of these sources.
8. Spare me the human interest stuff
A currently very common style of science writing is to weave
an archetypical hero story of someone facing a challenge they have to overcome.
You know, someone who encountered this big problem and set out to solve it, but they made enemies, and then they made a friend, and they made a discovery, but it didn’t work, and… by that time I’ve fallen asleep. Really, please
just get to the point already. What’s new and how does it matter? I don’t care
if the lead author is married.
9. Don’t forget that science is fallible
A lot of media coverage on science policy remembers that science is fallible only when it’s convenient. When they’ve proclaimed something as fact that later turns out to be wrong, then they blame science. Because science is fallible. Face masks?
Yeah, well, we lacked the data. Alright.
But that’d be more convincing if science news acknowledged
that their information might be wrong in the first place. The population bomb?
Peak oil? The new ice age? Yeah, maybe if they’d made it clearer at the time that those stories might not pan out the way they said, then we wouldn’t today have to cope with climate change deniers who think the media can’t tell fact from fiction.
10. Science doesn’t work by consensus
Science doesn’t work by voting on hypotheses. As Kuhn correctly pointed out, the scientific consensus can change quite suddenly. And if you’re writing science news, then most of your audience knows that. So referring to the scientific consensus is more likely to annoy them than to inform them. And in any case, interpreting poll results is a science in itself.
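For one thing, a poll percentage comes with its own sampling uncertainty. Here’s a rough sketch (Python, with a hypothetical sample size and the simple normal approximation to the binomial; a real analysis would also have to worry about selection effects and question wording):

```python
import math

def poll_margin_95(p, n):
    """Approximate 95% margin of error for a poll proportion p
    with n respondents (normal approximation to the binomial)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Hypothetical: 5% of 400 respondents pick "natural causes"
p, n = 0.05, 400
margin = poll_margin_95(p, n)
print(f"{p:.0%} ± {margin:.1%}")  # the reported 5% carries its own error bar
```

With a few hundred respondents in a subgroup, a figure like “more than 5%” has an error bar of a couple of percentage points before you even ask who was polled and how.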
Take the results of this recent
poll among geoscientists mostly in the United States and Canada, all associated
with some research facility. They only counted replies from those participants
who selected climate science and/or atmospheric science within their top three
areas of research expertise.
They found that among the people who have worked in the
field the longest, 20 years or more, more than 5% think climate change is due
to natural causes. So what does this mean? That there’s a 5% chance it’s just a statistical fluke?
Well, no, because science doesn’t work by consensus. It doesn’t matter how many
people agree on one thing or another, or, for that matter, how long they’ve
been in a field. It merely matters how good their evidence is.
To me, quoting the “scientific consensus” is an excuse that science journalists use to avoid actually explaining the science. Maybe every once in a while an article about climate change should actually explain how the greenhouse effect works. Because, see earlier, it’s not as simple as it seems. And I suspect the reason we still have a substantial fraction of climate change skeptics and deniers is not that they haven’t been told often enough what the consensus is, but that they don’t understand the science, and don’t understand that they don’t understand it. And that’s mostly because science news doesn’t explain it.
A good example of how to do it right is Lawrence Krauss’s book on the physics of climate change.
Okay, so those are my top ten misgivings about science news. Let me know what
you think about this in the comments.