I love a well-written methods section in a research communication. There, I said it. And as a peer reviewer, I often go first to the methods of the manuscript under review, both to understand the experiments the authors have designed and performed, and to grasp the rationale behind the flow and organization of those experiments, each yielding a separate piece of the overall puzzle in the form of data. But I didn't start out this way; this is the story of my evolution, as well as the woeful tale of a long-held (and recently re-encountered, in a high-impact journal, no less) annoyance: poorly or inadequately written, incomplete methods.
Finding (more like eking out!) time within a back-breaking work schedule, I recently managed to review four manuscripts back-to-back for publication in diverse journals. The topics of these papers touched my work only marginally, in that they belonged to the broad areas of microbiology, antibodies, and immunodiagnostics. A chance remark by a professional friend ("Your reviews are impressively long and detailed…") got me thinking about my overall experience of reviewing scientific manuscripts. "Long and detailed" is probably why it takes me considerable time and effort to go through a paper, occasionally check the references, and note down my thoughts in the margin, either on paper (i.e., on a printout) or electronically (annotating the manuscript PDF, my preferred mode). As anyone familiar with scientific publishing and the world of biomedical journals knows, peer review is a mechanism that attracts a significant amount of controversy. So why do I keep investing the time and effort? More after the fold.
PLOS ONE seems to have done it again! I wrote a few days ago about how the peer review system at PLOS ONE seemed to give a free pass to acupuncture studies when it came to demanding rigorous experimental evidence in support of the claims presented in a paper. I had shared the post via Twitter, and in response, someone from PLOS ONE replied:
Serious question: has the peer review system at the PLOS journals been doing a less-than-stellar job of evaluating complementary and alternative medicine (CAM) research for publication? If the answer is 'yes', why? And if 'no', how does a paper like this get through PLOS ONE without some serious revisions? I refer to the systematic review and meta-analysis of the effectiveness of acupuncture for essential hypertension, done by a group of researchers from the Tianjin University of Traditional Chinese Medicine (TCM) in China, led by Xiao-Feng Zhao, and published in PLOS ONE on July 24, 2015. The authors conclude that there is acceptable evidence for the use of acupuncture as an adjunctive therapy, alongside medication, for treating hypertension. My perusal of the paper raised some major reservations about that conclusion and revealed instances of sloppy writing that should have been caught at the review stage but, strangely, weren't.
Apropos of nothing, an ethics question flitted through my mind as I was reviewing a rather interesting paper for a journal, which shall remain nameless. As with all questions of such deep significance and importance, I would love to turn to my most valuable resource: the scientist and/or blogger tweeps with whom I communicate and interact, and whom I follow, on Twitter. I do see the social medium of Twitter as a valuable tool for collaboration, and I hope there'd be someone there who can answer my question, either in 140 characters on Twitter, or at greater length here in the comments.
Two things I encountered today, good and bad in equal measure. First, the good.
In the recent past, I received an invitation from a noted journal (which shall remain nameless) to review a submitted manuscript. The topic of the study verged on pharmacognosy and ethnobotany, both areas of knowledge that I, as an erstwhile drug-discovery researcher in another lifetime, find fascinating. I accepted the invitation to review because the study piqued my interest.
Via Teh Grauniad, science correspondent Ian Sample reported today on a phenomenon that is at once hilarious and deeply concerning for the academic research community.
The science-associated blogosphere and Twitterverse were abuzz today with the news of a Gotcha! story published in today's Science, the premier science publication of the American Association for the Advancement of Science. Reporter John Bohannon, working for Science, fabricated a completely fictitious research paper detailing the purported "anti-cancer properties of a substance extracted from a lichen", and submitted it under an assumed name to no fewer than 304 Open Access journals all over the world, over the course of 10 months.
My fellow Scilogs blogger Lee Turnpenny recently described his dissatisfaction with a pro-homeopathy research paper published in the Open Access journal BMC Cancer.
A short post to express a bit of anguish and to vent; I apologize to my gentle readers in advance. The peer review system has been discussed elsewhere in greater detail (a notable example is an excellent post by Jonathan Eisen, blogger and professor at UC Davis). I believe in the system, because I am convinced that serious, conscientious peer review can promote the cause of science and scientific progress.
A while back, I received an invitation to review a manuscript for a fairly well-circulated journal and agreed to do it, as the subject matter was well within my area of expertise. Upon reading the manuscript, I found several lacunae in its methodology, data interpretation, and conclusions. However, I didn't recommend outright rejection, because there was scientific merit in the conception of the work, and I thought it important to have information of the kind embodied in the paper out in the scientific literature. That, and also because I am painfully aware of the urgencies associated with publishing one's work. I went through the manuscript assiduously, checking references and eventually pointing out, line by line, page by page, where the problems lay and suggesting how the authors could improve it. I sent it back with the recommendation to 'Modify'.
It appears that the editor handling the article agreed with me and returned a decision of 'Modify' to the authors; I received a notification to that effect. Initially, I had a quick case of the warm fuzzies, because I thought I had been able to help the authors: with some modification, their work could be published.
And then, scrolling down, I saw something that completely deflated me. The notification contained all the comments from all the reviewers. Following my detailed, eleven-paragraph response (as reviewer 1) to the authors were the responses from reviewers 2 and 3: they had written precisely ONE paragraph each, giving a very general and vague summary of the study, without making any recommendation as to the acceptability of the paper.
Several questions quickly coursed through my mind:
- Is this the true face of peer review: a single, hastily scribbled paragraph determining the value of painstaking work by researchers who have spent time, money, and effort?
- Was I being unfair to the other reviewers, who may be so much more experienced than I am that a single look at the manuscript was enough for them to separate the grain from the chaff?
- Was I, a mere postdoc who didn't know any better, an idiot, an irredeemable non compos mentis, to have spent time, effort, and care reviewing this paper constructively?
- Would I want my own papers to be evaluated in this way?
Of course, like many other puzzling mysteries of life, these questions, too, leave me clueless.