Finding—more like, eking out!—time from within a back-breaking work schedule, I recently managed to review, back to back, four manuscripts for publication in diverse journals. The topics in these papers touched my work only marginally, in that they belonged to the broad areas of microbiology, antibodies and immunodiagnostics. A chance remark by a professional friend—“Your reviews are impressively long and detailed…”—got me thinking about my overall experience reviewing scientific manuscripts. “Long and detailed” is probably why it takes me considerable time and effort to go through a paper, occasionally check the references, and note down my thoughts in the margin, either on paper (i.e. on a print-out) or electronically (annotating the manuscript PDF, my preferred mode). As anyone familiar with scientific publishing and the world of biomedical journals knows, peer review is a mechanism that attracts a significant amount of controversy. So why do I keep investing the time and effort? More after the fold.
I have been independently reviewing scientific manuscripts since 2011, when one of my former mentors recommended me to a journal dealing with fungal diseases as a potential reviewer. The review process is by invitation; the prospective reviewer receives an intimation (nowadays via email) from the office of the journal and/or the article editor, with a brief glimpse of the topic—the title or, sometimes, the abstract—and has to agree to engage, upon which the manuscript becomes accessible from the journal website in the form of a downloadable PDF.
In the early days, the PDF had to be printed out, annotated by hand using various arcane proofreading symbols (incidentally, the same symbols we would use to proofread the galley prints of our own papers prior to publication), and then mailed to the editorial office. Being lazy and a technophile (a potent combination), I figured out how to apply the same markings digitally in a PDF, which enabled me to simply email the reviewed document to the editor. With modern PDF-editing software, one can highlight sections, write comments and leave notes within the document—which has made the process a lot simpler.
In all these years, I have so far been invited to review nearly 40 submitted manuscripts for 12 journals whose names I know, and roughly the same number of manuscripts via a double-blind anonymous peer review process, in which I know neither the name of the journal nor any detail about the authors and their institutions. In this process, run by a research review management organization, the coordinators meticulously scrub every bit of identifying information off the PDF (sometimes including references to the authors' earlier work cited with identifiable statements such as “In our earlier work…” or “Prior results from our lab…”) before making it available to the reviewers, thereby ensuring anonymity in both directions. When I look back at all those reviews, I note that, indeed, “long and detailed” seems to have become a hallmark for me.
Tomes have been written about the problems besetting the peer review process. It was considered to have failed as a check on scientific fraud when the fabricated research (published in prestigious journals) and unethical practices of a South Korean stem-cell researcher came to light in 2005. The now-famous experiment by Fiona Godlee and others at the British Medical Journal (now called simply ‘BMJ’), aided by investigators from the London School of Hygiene and Tropical Medicine and published in 2008, pointed out gaping holes in the peer review process; in the study, over 600 peer reviewers were each sent three clinical study papers with deliberately introduced methodological errors, 9 major and 5 minor. On average, the reviewers managed to catch only a third of the major errors, and not all of them recommended rejection on the basis of those errors. The authors duly noted, though, that the findings of this study on the review of clinical trials may not be generalizable to other study designs, including basic science research. In 2011 (the year I started reviewing), Prof. Michael Eisen of UC Berkeley, a long-time proponent of open science and open-access publishing, laid out a litany of issues bedeviling peer review, especially its inability to achieve its purported goal (‘maintaining the integrity of the scientific literature by preventing the publication of flawed science’, as he wrote); but he also pointed out that peer review as a concept—scientists reading and critiquing their colleagues’ papers—has a great deal of value.
These well-known problems haven’t stopped scientists from contemplating ways to make the peer review process better, though. In February this year, science communicator Hilda Bastian of the Public Library of Science (PLoS) reported on a meeting of investigators of peer review, where various issues were discussed: the time to complete the process (which used to be inordinately long), ways to encourage researchers to present their data prior to review (something I am personally not comfortable with, for reasons that some, like BMJ’s current executive editor, Dr. Theodora Bloom, seem to share), and ways to incentivize the peer review process. Post-publication open-platform peer review systems were also considered for their strengths and weaknesses.
So… why do I volunteer my time and effort as a peer reviewer? At the February meeting, Prof. Erin O’Shea, President of the premier medical research organization Howard Hughes Medical Institute, articulated one central idea: “Peer review is best suited to advising the author on improving the work and improving the manuscript…”; when I read this, it immediately resonated with me. This is, and has always been, my motivation for peer review.
During the years of my postdoctoral training in New York, I was involved in several projects that generated a lot of data to be communicated via scientific journals. It was during that time that I encountered the dreaded Reviewer #2 (or, alternatively, #3). Endlessly (and much deservedly) memed but by no means mythical, the Reviewer #2 (or #3) of a paper is that reviewer who is cantankerous, needlessly aggressive, often vague, overly committed to a pet theory regardless of the validity of the hypothesis presented in the paper, and always unhelpful. (And not only for manuscripts—I have had anonymous grant reviewers who could well qualify as Reviewer #2 or #3.) Most infuriatingly, said reviewer would often not read (and/or not comprehend) the text completely, and would dismiss the methodology proposed or presented and the data generated without offering any reason.
Having observed firsthand the havoc Reviewer #2 or #3 can leave in their wake, I was determined never to be that reviewer. Call it the Golden Rule… for reviewers: review unto others as you would have others review unto you. I have a sense of science as a collective and collaborative endeavor; so yes, it appears that, without even knowing it, I have been taking Prof. O’Shea’s exhortation to heart—for each and every manuscript I have reviewed. I take the time to read the manuscript while making notes in the margins. Once through with the introduction, I look up the Methods first—even if, under some modern journal formats, the methods are placed at the end of the manuscript—because they often help me understand the experimental design, especially if there is an iterative component to it, and let me examine whether the study incorporates all the controls appropriate for the hypothesis. I also check whether there is a section on statistics and what kind of analysis was used. I read the Results next, referring frequently to any figure or table presented; I am a fan of the format in which each Results paragraph ends with a summary statement describing its central observation—but not all authors include such a statement. Finally, I pore over the Discussion to see how the authors have brought everything together. Throughout the text, I keep a lookout for claims or assertions made, as well as specialized techniques cited, without corresponding references—and when I find them, I make it a point to ask for the reference. Attribution and assignment of due credit are important to me as a researcher. When I write my own papers, I follow the same code: we stand on the shoulders of those who have come before us, and a collaborative effort means that we build on their observations and learn from their mistakes. Therefore, proper attribution is only fair and necessary.
In the past couple of years, via the double-blind anonymous review process, I have received manuscripts describing studies done in various parts of Africa, the Middle East and Asia. In addition to critiquing the science, I have sometimes had to suggest that the authors seek the assistance of an English-language editing service, because problems with the language (such as grammatical and punctuation errors, incomplete sentences, excessive verbiage with repeated sentences, incorrectly used words, and so forth) hampered the description of the study, its design and its execution—making it difficult to understand and follow, let alone critique. If the corrections needed are moderate, I include them in my marginal notes and my recommendations. I am justifiably stoked to be a part of the laudable effort to increase the visibility of good science done in non-US, non-European, non-affluent nations, with studies asking pertinent questions of local and regional importance—such as the epidemiology of antibiotic use and drug resistance in geographically isolated or localized areas, or an innovative diagnostic solution devised for a tropical disease significant in a given region but neglected in the rest of the world.
In the interest of full disclosure: while the invited reviews for known journals are voluntary and unpaid, the organization conducting the double-blind anonymous reviews offers a modest honorarium for the time and effort spent on its reviews, which are to be completed in an accelerated time-frame of 5 days. For me, however, the token honorarium is not, and cannot ever be, adequate compensation for the personal time (after work and on weekends) and effort I expend. Rather, my reviewer efforts transcend any consideration of remuneration. Reviewing scientific manuscripts in a fair and detailed manner is my way of giving back to science. As a biomedical researcher working at the bench, I am well aware of how much hard work, patience and perseverance go into putting out a manuscript for publication; that is why I want my critiques to be constructive and thoughtful—because the scientific world can honestly do with fewer Reviewer #2s and #3s, and more collegial contributors to the scientific endeavor.