"I can't believe that's a paper."
This was the statement of a colleague--we'll call him Ronald--upon seeing a manuscript on my desk. To my knowledge, he had not read the paper, or even the abstract, just the title. It was not groundbreaking work by any stretch, but it was a nice little methods paper published in a methods journal. It seems that part of Ronald's disdain derived from the fact that he had used the same reagent to label the same tissue for a slightly different technique. Since it was such an obvious and simple method, which Ronald and others in his previous lab had used under other conditions, clearly it was unworthy of publication.*

*When something pisses me off, I am prone to using hyperbole--extensively, as you may note here. Although the statements here are not as absolute as I make them seem, the sentiments therein ring true.
Normally I get along quite well with Ronald, but that simple statement pissed me off, perhaps because it is just one example of an attitude common among postdocs at BRI and, I suspect, in science in general. Almost inevitably, discussion of a paper or seminar focuses on everything that's "wrong" with it: the authors did the wrong experiments, used the wrong model, the scope of the study is too limited, the work isn't novel or groundbreaking... Sometimes the critics have perfectly valid points, but they seem to dismiss the value of the publication because it's not what they would do. They spend a lot of time thinking about how to invalidate the study and ignore its positive contribution to the field. There are a few exceptions: papers published in top journals of the field (unless the paper reports a study in direct competition with their own work) and papers published by colleagues and non-competing collaborators.
In graduate school, we are supposed to learn to read the scientific literature critically. The issue is what that means. Many people--especially trainees--use the first definition of critical: "inclined to find fault or to judge with severity, often too readily". We should be using the third definition: "involving skillful judgment as to truth, merit, etc.". We're supposed to be looking for good and bad, right and wrong, founded and unfounded, and all degrees in between. When we finish reading the fucking paper, we should have a clear idea of what the results actually demonstrate, what the caveats are, and what utility and contribution the work offers the field.
Scientific publishing, in theory, is about the dissemination of knowledge. Some papers are going to change how we think about science or fill in large gaps in our understanding of a given question or pathway. But not all papers will be "paradigm changing". Some papers reaffirm and expand upon what we already know; these are critical because, as we have seen time and again, there is usually something wrong with the process if no one can reproduce the results of another lab. Some papers offer alternative interpretations; even if the alternative turns out to be wrong, these help prevent tunnel vision--becoming so enamored of our own hypothesis that we are blinded to other possibilities. And some papers are reports of "simple" methods; these keep us from having to reinvent the wheel every time we do a new experiment and sometimes provide faster, cheaper, or more sensitive methods than we had in our repertoire. The vast majority of manuscripts are not going to be published in the Cell/Nature/Science families of journals; this does not mean they are useless. It's time for postdocs (and maybe other scientists as well) to reevaluate how we read papers and determine the worth of a publication.
As an aside for those with access to EMBO Reports, I recommend checking out System Crash, a Science & Society piece about how an emphasis on high-impact publications and a focus on short-term gains are affecting science.

Dr Becca · 792 weeks ago
I think, unfortunately, the reflex to slam just about anything that crosses our desks is learned at an early point in training. Because we get it on the other end, we feel it's our duty to tear everything to shreds. It's a vicious cycle, and one that in a broad sense is likely detrimental to scientific progress.
Comrade PhysioProf · 792 weeks ago
It's the same as when a toddler learns the word "no". Good scientists grow out of this natural stage of development of one's critical faculties.
biochem belle 43p · 792 weeks ago
I have no problem with journal clubs; I think they are good things generally. I do have a problem with people being snide and contentious because a paper is so "mundane" or it wasn't published in one of the major journals or it was published by a competitor. Dr Becca suggests this is partly because trainees get beat up on, and so we take it out on others. I think part of it, in very competitive areas, is a general dislike/disdain for a competing lab. People end up being pissed that a competitor published first or feel the competitor was favored by a journal b/c of his/her name and that it would never have been published there if anyone else had submitted it. If you go looking for flawless, novel, game-changing science in every (or any) paper you read, you're going to be disappointed.