Thursday, February 4, 2010

Critical reading vs. reading critically

"I can't believe that's a paper."

This was the statement of a colleague--we'll call him Ronald--upon seeing a manuscript on my desk. To my knowledge, he had not read the paper, or even the abstract, just the title. It was not groundbreaking work by any stretch, but it was a nice little methods paper published in a methods journal. It seems that part of Ronald's disdain derived from the fact that he had used the same reagent to label the same tissue for a slightly different technique. Since it was such an obvious and simple method, which Ronald and others in his previous lab had used under other conditions, clearly it was unworthy of publication.*

*When something pisses me off, I am prone to using hyperbole--extensively, as you may note here. Although the statements here are not as absolute as I make them seem, the sentiments therein ring true.

Normally I get along quite well with Ronald, but that simple statement pissed me off, perhaps because it is just one example of an attitude among many postdocs at BRI and, I suspect, science in general. Almost inevitably, discussion of a paper or seminar focuses on everything that's "wrong" with it: how they did the wrong experiments, used the wrong model, how limited the scope of the study is, that it's not novel or groundbreaking work... Sometimes they have perfectly valid points, but they seem to dismiss the value of the publication because it's not what they would do. They spend a lot of time thinking about how to invalidate the study and ignore its positive contribution to the field. There are a few exceptions: papers published in top journals of the field (unless published on a study in direct competition with their own work) and papers published by colleagues and non-competing collaborators.

In graduate school, we are supposed to learn to read the scientific literature critically. The issue is what that means. Many people--especially trainees--use the first definition of critical: "inclined to find fault or to judge with severity, often too readily". We should be using the third definition: "involving skillful judgment as to truth, merit, etc.". We're supposed to be looking for good and bad, right and wrong, founded and unfounded, and all degrees in between. When we finish reading the fucking paper, we should have a clear idea of what the results actually demonstrate, the caveats, and its utility--its contribution to the field.

Scientific publishing, in theory, is about the dissemination of knowledge. Some papers are going to change how we think about science or fill in large gaps of a given question or pathway. But not all papers will be "paradigm changing". Some papers reaffirm and expand upon what we already know; these are critical because, as we have seen time and again, there is usually something wrong with the process if no one can reproduce the results of another lab. Some papers offer alternative interpretations; even if in the end the alternative is wrong, these should help prevent development of tunnel vision--becoming so enamored of our own hypothesis that we are blinded to other possibilities. And some papers are reports of "simple" methods; these keep us from having to reinvent the wheel every time we do a new experiment and sometimes provide faster/cheaper/more sensitive methods than we had in our repertoire. The vast majority of manuscripts are not going to be published in the Cell/Nature/Science families of journals; this does not mean they are useless. It's time for postdocs (and maybe other scientists as well) to reevaluate how we read papers and determine the worth of a publication.

As an aside for those with access to EMBO Reports, I recommend checking out System Crash, a Science & Society piece about how emphasis on high impact publications and focus on short-term gains are affecting science.

Comments (8)

The EMBO piece was great, thanks for the link.
OMG that EMBO article is amazing. Every couple of paragraphs I was thinking, "Yes! Yes!! Someone's finally saying it!" I wanted to reach through my computer and high-five Laurent Segalat. You on Skype, man?

I think, unfortunately, the reflex to slam just about anything that crosses our desks is learned at an early point in training. Because we get it on the other end, we feel it's our duty to tear everything to shreds. It's a vicious cycle, and one that in a broad sense is likely detrimental to scientific progress.
Almost inevitably discussion of a paper or seminar focuses on everything that's "wrong" with it: how they did the wrong experiments, used the wrong model, how limited the scope of the study is, that it's not novel or groundbreaking work.

It's the same as when a toddler learns the word "no". Good scientists grow out of this natural stage of development of one's critical faculties.
i have a fantastic journal club group here that may tear apart a paper when it's weak, but also discusses ways the data could have been more convincing- if they did x experiment, if they presented the data in a different way, if they did this control. i always walk out of those feeling like i spent my time productively.
In a way, I've always felt that it was one of the strengths of science that papers at a journal club are reviewed so critically. The idea that no one can 'get away' with things, and that every result is treated as wrong until proven right beyond all doubt. In most of my journal clubs we always end with a 'what did we think of the paper' moment to discuss what we liked about it.
Journal clubs can be an excellent way to practice/learn to critically evaluate a paper. I would argue that a good journal club does what leigh and Lab Rat describe. In the end, you should understand the approach, its limitations, how things could be improved, caveats to the findings, but (provided it is a decent paper) you should be able to see it as a point to build from--what was the importance of the paper, what comes next.

I have no problem with journal clubs; I think they are good things generally. I do have a problem with people being snide and contentious because a paper is so "mundane" or it wasn't published in one of the major journals or it was published by a competitor. Dr Becca suggests this is partly because trainees get beat up on, and so we take it out on others. I think part of it, in very competitive areas, is a general dislike/disdain for a competing lab. People end up being pissed that a competitor published first or feel the competitor was favored by a journal b/c of his/her name and that it would never have been published there if anyone else had submitted it. If you go looking for flawless, novel, game-changing science in every (or any) paper you read, you're going to be disappointed.
