
Thursday, February 4, 2010

Critical reading vs. reading critically

"I can't believe that's a paper."

This was the statement of a colleague--we'll call him Ronald--upon seeing a manuscript on my desk. To my knowledge, he had not read the paper, or even the abstract, just the title. It was not groundbreaking work by any stretch, but it was a nice little methods paper published in a methods journal. It seems that part of Ronald's disdain derived from the fact that he had used the same reagent to label the same tissue for a slightly different technique. Since it was such an obvious and simple method, which Ronald and others in his previous lab had used under other conditions, clearly it was unworthy of publication*.

*When something pisses me off, I am prone to using hyperbole--extensively, as you may note here. Although the statements here are not as absolute as I make them seem, the sentiments therein ring true.

Normally I get along quite well with Ronald, but that simple statement pissed me off, perhaps because it is just one example of an attitude common among postdocs at BRI and, I suspect, in science in general. Almost inevitably, discussion of a paper or seminar focuses on everything that's "wrong" with it: how they did the wrong experiments, used the wrong model, how limited the scope of the study is, that it's not novel or groundbreaking work... Sometimes they have perfectly valid points, but they seem to dismiss the value of the publication because it's not what they would do. They spend a lot of time thinking about how to invalidate the study and ignore its positive contribution to the field. There are a few exceptions: papers published in the top journals of the field (unless the study is in direct competition with their own work) and papers published by colleagues and non-competing collaborators.

In graduate school, we are supposed to learn to read the scientific literature critically. The issue is what that means. Many people--especially trainees--use the first definition of critical: "inclined to find fault or to judge with severity, often too readily". We should be using the third definition: "involving skillful judgment as to truth, merit, etc.". We're supposed to be looking for good and bad, right and wrong, founded and unfounded, and all degrees in between. When we finish reading the fucking paper, we should have a clear idea of what the results actually demonstrate, what the caveats are, and what the paper's utility is--its contribution to the field.

Scientific publishing, in theory, is about the dissemination of knowledge. Some papers are going to change how we think about science or fill in large gaps in our understanding of a given question or pathway. But not all papers will be "paradigm changing". Some papers reaffirm and expand upon what we already know; these are critical because, as we have seen time and again, there is usually something wrong with the process if no one can reproduce the results of another lab. Some papers offer alternative interpretations; even if the alternative turns out to be wrong in the end, these help prevent tunnel vision--becoming so enamored of our own hypothesis that we are blinded to other possibilities. And some papers are reports of "simple" methods; these keep us from having to reinvent the wheel every time we do a new experiment and sometimes provide faster/cheaper/more sensitive methods than we had in our repertoire. The vast majority of manuscripts are not going to be published in the Cell/Nature/Science families of journals; this does not mean they are useless. It's time for postdocs (and maybe other scientists as well) to reevaluate how we read papers and how we determine the worth of a publication.

As an aside for those with access to EMBO Reports, I recommend checking out System Crash, a Science & Society piece about how the emphasis on high-impact publications and the focus on short-term gains are affecting science.