I love a well-written methods section in a research communication. There, I said it. As a peer reviewer, I often go to the methods of the manuscript under review to understand both the experiments the authors have designed and performed and the rationale behind the flow and organization of those experiments, each yielding a separate piece of the overall puzzle in the form of data. But I didn’t start out this way; this is the story of my evolution, as well as the woeful tale of a long-held (and recently re-encountered, in a high-impact journal, no less) annoyance: poorly or inadequately written, incomplete methods.
Review Unto Others, As You Would Have Others Review Unto You: my Golden Rule for Scientific Manuscripts
Finding (more like eking out!) time within a back-breaking work schedule, I recently managed to review, back to back, four manuscripts for publication in diverse journals. The topics of these papers touched my work only marginally, in that they belonged to the broad areas of microbiology, antibodies, and immunodiagnostics. A chance remark by a professional friend (“Your reviews are impressively long and detailed…”) got me thinking about my overall experience reviewing scientific manuscripts. “Long and detailed” is probably why it takes me considerable time and effort to go through a paper, occasionally check the references, and note down my thoughts in the margin, either on paper (i.e. on a print-out) or electronically (annotating the manuscript PDF, my preferred mode). As anyone familiar with the process of scientific publishing and the world of biomedical journals knows, peer review is a mechanism that attracts a significant amount of controversy. So why do I keep investing the time and effort in it? More after the fold.
PLOS ONE seems to have done it again! I wrote a few days ago about how the peer review system at PLOS ONE seemed to give a free pass to acupuncture studies when it came to seeking rigorous experimental evidence in support of the claims presented in the paper. I had shared the post via Twitter, and in response, someone from PLOS ONE had replied:
Serious question: has the peer review system at the PLOS journals been doing a less-than-stellar job when it comes to evaluating complementary and alternative medicine (CAM) research for publication? If the answer is ‘yes’, why? Or if ‘no’, how does a paper like this go through PLOS ONE without some serious revisions? I refer to the systematic review and meta-analysis on the effectiveness of acupuncture for essential hypertension, done by a group of researchers from the Tianjin University of Traditional Chinese Medicine (TCM) in China, led by Xiao-Feng Zhao, published on July 24, 2015, in PLOS ONE. The authors conclude that there is acceptable evidence for the use of acupuncture as adjunctive therapy along with medication for treating hypertension. My perusal of the paper led to some major reservations about that conclusion, and revealed instances of sloppy writing which should have been corrected at the review stage but, strangely, weren’t.
Early last month, I communicated in a blog post a few questions I had about a study in electroacupuncture published in PLOS ONE. It took the authors a while to get to them, but the senior and corresponding author of that study, Professor Kai-Liang Wu, of the Fudan University Shanghai Cancer Center, graciously wrote a detailed reply to my questions a week ago. I am going to put his response in this space in blocks. For better comprehension, I shall put my questions in italics, followed by his response; the boldface is mine, for emphasis. My comments are interspersed with the blocks.
Apropos of nothing, an ethics question flitted through my mind as I was reviewing a rather interesting paper for a journal, which shall remain nameless. As with all questions of such deep significance and importance, I turned to my most valuable resource: the scientist and/or blogger tweeps with whom I communicate and interact, and whom I follow on Twitter. I see the social medium of Twitter as a valuable tool for collaboration, and I hope there is someone there who can answer my question, either in 140 characters on Twitter, or at greater length here in the comments.
Two things I encountered today, good and bad in equal measure. First, the good.
In the recent past, I received an invitation to review a submitted manuscript for a noted journal (which shall remain nameless). The topic of the study lay in pharmacognosy and ethnobotany, both areas of knowledge that I, as an erstwhile drug discovery researcher in another lifetime, find fascinating. I accepted the invitation to review because the study piqued my interest.
Via Teh Grauniad, science correspondent Ian Sample reported today on a phenomenon that is at once hilarious and extremely concerning for the academic science research community.
Every so often, some paper happens to grab my attention for various reasons. As I read the paper, often I have questions. Not all of those questions, unfortunately, can easily be put to the authors. In recent times, one such paper was published earlier this month in PLOS ONE. A great benefit of the Open-Access model of PLOS is that it allows a reader to ask questions directly of the authors. This level of engagement is laudable, especially to someone like me who has an interest in the communication of scientific facts.
The science-associated blogosphere and Twitterverse were abuzz today with the news of a Gotcha! story published in today’s Science, the premier science publication from the American Association for the Advancement of Science. Reporter John Bohannon, working for Science, fabricated a completely fictitious research paper detailing the purported “anti-cancer properties of a substance extracted from a lichen”, and submitted it under an assumed name to no fewer than 304 Open Access journals all over the world, over the course of 10 months.