
Evaluating scientific papers

I think I’ve gone through this before in commenting on news articles, but it’s worth considering in general: how do you evaluate a paper to decide whether it’s worth paying attention to?

The topic doesn’t matter much, nor how expert you are in it, because to some extent the same issues always apply, and just looking at the structural features of a paper gets you a long way. The Open University like to teach a method called PROMPT (Provenance, Relevance, Objectivity, Method, Presentation, Timeliness), which is mostly about how to choose papers to reference in your own research, but it’s still useful here because evaluating the paper itself is built into the process. There’s also a method called CRAAP (Currency, Relevance, Authority, Accuracy, Purpose), which covers much the same ground. Let’s go with PROMPT, just for the sake of argument, and because it’s what I’ve had drilled into me.

  • Provenance: There are several aspects of provenance that you’ll want to consider. One is the journal itself: is it peer reviewed? Does it have a good reputation? If you’re talking about Science or Nature, then the answer is yes and you can trust that the standards applied are high. If the journal has no peer review process, or unclear criteria for what it will publish, you’re on murkier ground. It’s true that new ideas sometimes have trouble finding a home in journals like Science or Nature, because people are people, and resistant to paradigm changes. So a lesser-known journal doesn’t necessarily mean anything bad. It’s just a point to consider. It could even be a point in favour if the journal caters to a very small area of expertise and has a good reputation among people working in that field!

    You also need to consider the scientists involved. Do they have a history of having to retract papers? Have they stated their conflicts of interest up front? A scientist who works for a tobacco company has a vested interest in saying tobacco is safe, to use an age-old example. It’s also worth checking whether the main authors are actually writing on a topic they know something about. If their primary research is all in single-celled organisms, I would heavily discount their ability to comment on human biology in any depth.

  • Relevance: This matters more when you’re producing a review of your own, but outside the world of my dissertation I also use it to remind me of the point above under provenance: is this coming from a journal and from scientists who are part of this area of research, or from outside it? Note: outsiders may bring new insights, especially if they’re a multi-disciplinary team. It’s worth bearing in mind, nonetheless.
  • Objectivity: This is partially about the provenance as well, to my mind, but the tone of the paper also has something to do with this. Is the paper decrying all others, claiming to be the sole sane voice on a topic? It’s probably not worth your time at all. Science proceeds by a process of review and consolidation: sometimes, rarely, it’s time to throw everything to the winds, return to the drawing board, and come up with something new. But it’s rare.
  • Method: This can be difficult to evaluate if it’s not your field or you’re just an interested outsider, but it’s an important one. Sometimes all you can do is read through the method and see if you can spot any obvious gaps. You want to think about things like whether the experiment is repeatable, whether the variables not being tested have been kept consistent, and even whether the method has been fully described at all. If there are any secret ingredients with no details, there’s something fishy going on. Methods include the analysis techniques: if they’re testing for significance using their own special-sauce formula rather than a known standard like a t-test (see the sketch after this list), again, there’s something fishy going on.
  • Presentation: Does it look professional? If not, has it just come off the printer of an activist trying to convince you of something? Does it include all the information you need to evaluate it — abstract, introduction, methods, raw results, discussion of the results, citations of other papers… If it doesn’t look like any other scientific paper you’ve ever seen, that’s not a good sign!
  • Timeliness: Sometimes an old paper is rolled out as proof of x or y assertion. Some papers are just foundational: think about Alvarez’s 1980 paper on the extinction of the dinosaurs, for example! People are still responding to that now because it was such an important paper. But when the papers cited are 30 years old, they may have been disproven or even retracted by now. Even good papers encapsulate an understanding that was current at the time, and which will have been built on and expanded since. And there are many that aren’t good papers.

    It’s even worth considering the timeliness of the papers that the paper you’re evaluating is referencing. If it’s pulling statistics from a really old paper, it’s worth checking on that paper’s credentials too.
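To make the “known standard” point from the Method bullet concrete, here’s a minimal sketch of an ordinary two-sample significance test using scipy.stats. This is my own illustration, not anything taken from a particular paper, and the numbers are invented; the point is simply that a standard test is documented, reproducible, and checkable by anyone, which an undisclosed in-house formula is not.

```python
# A minimal sketch of a standard significance test (Welch's two-sample t-test).
# The values below are invented purely for illustration.
from scipy import stats

control = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8]
treated = [4.6, 4.8, 4.5, 4.9, 4.4, 4.7]

# equal_var=False selects Welch's t-test, which doesn't assume equal variances.
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Anyone with the raw data can re-run a test like this and get the same answer, which is exactly the kind of repeatability the Method criterion is asking about.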

It all sounds so exhausting, right? But if you keep criteria like these in mind, after a while you start to have a pretty good nose for the warning signs. If you just want to read articles that report on science in a way that laypeople can understand, you might want to consider these criteria instead:

  • Clarity: Does the person writing this layperson-friendly article sound like they know what they’re talking about? If they’re handwaving or saying ‘the details are too complicated’, then find some other précis, because this person hasn’t actually fully read and comprehended the paper. You don’t want to get your cutting-edge science news from someone who is at least as confused about it as you are.
  • Link: Seriously, is there any reference to where you could find the original research if you wanted to read it? If there’s no link, no name of a journal, just a couple of names of scientists, I’m immediately suspicious.
  • Interests: What is the usual slant of the publication you’re reading? Do they present themselves as unbiased? Are they known for sensationalism? I can tell you now, for example, that the Daily Mail rarely reports on science in an unbiased and useful way.
  • Timeliness: Is this new research? If it’s been literally discovered this month, can there really have been time for scientific consensus to be reached? Conversely, is it so out of date that scientists have already discounted it?

And yes, I know what that acronym says. Consider it a way of making it memorable…
