Tag: publication

Review Unto Others, As You Would Have Others Review Unto You: My Golden Rule for Scientific Manuscripts

Finding—more like, eking out!—time from within a back-breaking work schedule, I recently managed to review four manuscripts back-to-back for publication in diverse journals. The topics of these papers touched my own work only marginally, in that they belonged to the broad areas of microbiology, antibodies and immunodiagnostics. A chance remark by a professional friend—“Your reviews are impressively long and detailed…”—got me thinking about my overall experience of reviewing scientific manuscripts. “Long and detailed” is probably why it takes me considerable time and effort to go through a paper, occasionally check the references, and note down my thoughts in the margin, either on paper (i.e. on a print-out) or electronically (annotating the manuscript PDF, my preferred mode). As anyone familiar with the process of scientific publishing and the world of biomedical journals knows, Peer Review is a mechanism that attracts a significant amount of controversy. So why do I keep investing the time and effort in it? More after the fold.

Continue reading

RightsLink: my distressing travails with Fair Use

Or, what one gets for trying to be good and law-abiding.

Navigating the labyrinth known as copyright law is never an easy task, either for the prospective blogger/author or for the organization that hosts/publishes the work of such a blogger/author. This problem is particularly acute for academic or personal bloggers, who are attached – rather loosely – to free platforms (such as Google Blogger or WordPress), or to platforms hosted by non-profit concerns (such as this one, Scilogs.com – NOTE: now hosted at my own expense on my own server, inscientioveritas.org). I, as an academic/personal blogger, am not paid by Scilogs or anyone else for my blogging endeavors; I like writing, I like explaining how things work, and I am passionate about science, science communication and science education. I do this by carefully juggling my time around my work as a bioscience researcher.

Continue reading

Mid-week gripes… re: prepublication formats

I’m sorry, but I HATE reading those pre-publication, improperly formatted PDF articles that some journals offer. Okay, I am weird in that I prefer on-screen reading to print-reading (see, I’m green; I like to save paper), particularly since I find no point in keeping paper copies of most articles I read.

But these pre-publication PDFs… Gaah! I can’t understand them without actually printing out the pages; to me, the formatless flow of text is difficult to follow on-screen, especially since the tables and figures are placed miles away from the main body text, and it is a pain to navigate a 20-30 page document to get to a figure or a specific reference at the end and then return to wherever I was reading.

Is it that difficult to convert an accepted article to a more manageable (more precisely, screen-readable) format even for prepublication? It really is not. A command or two in Microsoft Word will get rid of the awful double spacing and set the font to a smaller size. If one wishes, one can also – Gasp! – put the text in two columns! All of that would take just two minutes prior to converting it into a PDF for online access. How is it that NO ONE in the publishing world at the offending journals has come up with this easy solution? (Yes, I am looking at you, Blood, and some of the articles in other journals available through PubMed Central…)
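For the sake of concreteness, here is a minimal sketch of that two-minute clean-up, assuming the python-docx library; the file names are hypothetical, and a journal’s production pipeline would doubtless have its own tooling:

```python
# A minimal sketch of the two-minute clean-up described above,
# assuming python-docx; file names are hypothetical.
from docx import Document
from docx.shared import Pt

doc = Document("accepted_manuscript.docx")

for para in doc.paragraphs:
    para.paragraph_format.line_spacing = 1.0  # collapse the double spacing
    for run in para.runs:
        run.font.size = Pt(10)                # shrink the font for on-screen reading

doc.save("screen_readable.docx")              # then export to PDF as usual
```

(The two-column layout is the one step python-docx does not expose directly, as far as I know; Word itself handles that with a single menu command before the PDF export.)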

And who is it that dictates this particular order of arrangement of items for submission – text, tables, figures, figure legends, all placed separately? Why, exactly? Are we still in the times when there were no decent word processors, manuscripts were type-written, and hand-drawn graphs and photos had to be sent directly to the publisher for compositing? All documents are electronic now; journals even require all figures to be pre-composited and submitted at the final size. So what is the point of retaining the double-spaced typescript any more, or of insisting that tables and figure legends be typed double-spaced, and that figures and legends be placed after the text?

Indeed, why not put the figures inline with the text, so that the text flows around, and the reader is immediately able to understand what on earth it means when s/he reads something like, “…βCD20/80-HIV showed significantly impaired ability to induce IFN-α, IFN-β and IDO activity (Fig. 1A-C)…” by simultaneously looking at Figure 1 in all its glory – without having to flip through 30 pages to get to the figures? Revolutionary concept, innit?

Am I being a tad uncharitable? Perhaps, but the formatting I am referring to is by no means new. The grant application format for NIH and other agencies already enforces this: figures, along with legends, are placed alongside the body text, making it easier for the reader. It allows for proper and comprehensive evaluation of the scientific arguments and observations in the text. So, why can’t the journals make a simple change and adopt this format for their article submissions?

[Hyperventilates]

’m’okay now. Back to work.

Impact of Impact Factors… (post updated)

This post was originally published on September 20, 2010, and received some interesting comments. Three and a half odd years later, many of the larger issues are still valid, although there has been a drift towards improvement within the scientific community, thanks in no small part to the Open Access publishing movement. I decided to update this post today with an excellent video I found on YouTube (see at the end) ~ February 18, 2014.

Nature Immunology has an interesting editorial today, entitled: “Ball and Chain”; it asks the very pertinent question:

The classic impact factor is outmoded. Is there an alternative for assessing both a researcher’s productivity and a journal’s quality?

I have had the exact same impression for a while now, though I was afraid to say it aloud except amongst friends – for fear of committing scientific blasphemy: I do not think Impact Factors — originally intended as a metric of a peer-reviewed journal’s popularity, based on citations — are, or should be, what they are hyped up to represent: a proxy metric for a researcher’s productivity and potential, one that often influences employment, promotion and tenure, even funding.

The Editors at Nature Immunology consider this latter usage inappropriate, going so far as to state that:

The use of this outmoded metric to assess a scientist’s productivity and a journal’s rank has become a ball and chain for both researchers and editors alike.

To jog everyone’s memory: what is the Impact Factor? It is an artificial metric drummed up by the Institute for Scientific Information (ISI, now part of Thomson Reuters, the makers of referencing software such as EndNote and Reference Manager) that purports to assess a journal’s impact in the scientific community. The Impact Factor of a journal for a given year (published in the Fall) is the number of citations received that year by papers the journal published in the previous two years, divided by the total number of citable items the journal published during those two years. Wikipedia explains it well, thusly:

For example, if a journal has an impact factor of 3 in 2008, then its papers published in 2006 and 2007 received 3 citations each on average. The 2008 impact factor of a journal would be calculated as follows:
A = the number of times articles published in 2006 and 2007 were cited by indexed journals during 2008
B = the total number of “citable items” published by that journal in 2006 and 2007 (“Citable items” are usually articles, reviews, proceedings, or notes; not editorials or Letters-to-the-Editor.)
2008 impact factor = A/B
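
In code, the arithmetic is trivial; here is a toy calculation, with invented numbers (A = 450, B = 150), that reproduces the Wikipedia example above:

```python
# Toy numbers reproducing the example above: an Impact Factor of 3 for 2008.
A = 450  # citations in 2008 to articles the journal published in 2006-2007
B = 150  # "citable items" the journal published in 2006-2007

impact_factor_2008 = A / B
print(impact_factor_2008)  # 3.0
```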

The NI Editors point out several problems with this approach.

  • Impact Factors, having citations as the numerator (A), are discipline-dependent, with relatively ‘hot’ sub-disciplines receiving far more citations than others. Comparing all journals within a broad group, or even within a particular sub-discipline, on this single parameter is therefore flawed.
  • Thomson Scientific arbitrarily determines the denominator (B) of a journal’s Impact Factor, yet the process by which articles are deemed citable is not transparent. The NI Editors noted that their essays, which are written in a journalistic rather than scholarly style and lack an abstract or complete citations, are now counted among the journal’s total citable items.
  • The Impact Factor calculation, being based on the mean number of citations, is skewed by papers that receive huge numbers of citations. The example cited is that of Acta Crystallographica Section A, whose Impact Factor rose more than 20-fold in 2009, to 49.926, because of one paper that was cited more than 6,600 times. The calculation does not correct for this skew; indeed, in its report (available as a PDF), the International Mathematical Union has criticized the use of the arithmetic mean for evaluating citations, because citation counts do not follow a Normal distribution. (A toy numerical illustration follows this list.)
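
Here is that illustration: a toy calculation with invented citation counts, loosely modeled on the Acta Crystallographica scenario, showing how a single runaway paper drags the mean (the Impact-Factor-style average) far away from the median, i.e. from what a typical paper in the journal achieves:

```python
# Invented citation counts: 99 papers cited twice each, plus one cited 6,600 times.
from statistics import mean, median

citations = [2] * 99 + [6600]

print(mean(citations))    # 67.98 -- the average is dominated by the single outlier
print(median(citations))  # 2.0  -- the typical paper remains barely cited
```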

The NI Editors bring up another important caveat of the Impact Factor system:

A much greater bone of contention is that citations to retracted articles are not excluded from calculation of the impact factor. Because of the nature of research, such papers are often highly cited as many researchers publish articles that refute the findings in these papers.

By this metric, therefore, The Lancet’s Impact Factor is going to remain high for as long as people continue to cite Andrew Wakefield’s bogus, and long since retracted, 1998 paper linking the MMR vaccine to autism.

In addition, the real impact of a particular work or study may not figure in the Impact Factor calculation in the short term, since the calculation for a given year is restricted to the previous two years. Early ideas, published in journals with relatively low Impact Factors, may not become popular or be considered trendy until a few years have passed. In a similar vein, when an idea becomes “hot” (such as Th17, mentioned by the NI Editors, in recent times), review articles on the idea often outnumber the actual research articles; since reviews tend to attract more citations than primary research papers, this, too, inflates the Impact Factor — and journals often use this technique to bolster their Impact Factors.

Given the large number of caveats associated with the Impact Factor calculation, it is surprising that this metric is still used to judge the importance of individual publications, or to evaluate individual researchers based on whether they have published in particular journals. It stands to reason that only a small proportion of the articles published in a journal contributes to its Impact Factor. Continued use of this metric may therefore do a disservice to a body of work and to the researcher (by diminishing the importance of the work based on the journal’s Impact Factor), and to the scientific community as well (by under-emphasizing key research areas through the averaging inherent in the calculation). By the same token, the Impact Factor system may artificially inflate the importance of underwhelming research that makes it into a higher-Impact Factor journal. The NI Editorial makes the bold and welcome statement that the journal…

[…] would like to deemphasize the importance of this metric, especially when the validity of the impact factor, its possible manipulation and its misuse have been highlighted by many different quarters of the scientific community.

It is noteworthy that the European Association of Science Editors, in a 2008 statement on “Inappropriate Use of Impact Factors”, has recommended that journal impact factors be “used only – and cautiously – for measuring and comparing the influence of entire journals, but not for the assessment of single papers, and certainly not for the assessment of researchers or research programmes either directly or as a surrogate.” Sadly, although an overhaul of the system for evaluating a researcher’s professional output and a journal’s importance in the scientific community is desirable and clearly needed, no unified standard metric exists for achieving this; the Impact Factor system, therefore, continues to be in wide use. But if the scientific community, including the top-tier journals, comes together to develop a more evolved alternative, eschewing the Impact Factor metric, perhaps it would benefit the community as a whole in the long term.


UPDATE: As I mentioned above, a lot of water has flowed under the bridge since. I indicated my yearning for the Open Source, Open Access publishing model. Open Access has been lauded and derided in various outlets. A journalist from the premier journal Science raised a stink with a so-called sting operation on Open Access journals across the world, a study that many have called deeply flawed. The pièce de résistance came in the form of Nobel Laureate Randy Schekman, who publicly denounced the Big Three of the prestigious journals, Nature, Cell, and Science, vowing thenceforth to avoid what he termed ‘luxury journals’. Scientist and co-founder of PLOS Michael Eisen recounted his own experience, providing a lot of insight into the way forward.

The debate, however, rages on, especially since in a ‘Publish or Perish’ world, it is hard to wean oneself off the glamor and glitz of the luxury journals, which can make or break a young scientist’s career. But things are looking up in that respect. As an example – and to round off this update – I leave you with this excellent parody of Lady Gaga’s “Applause”, titled “In PLOS”, that I chanced upon on YouTube. Enjoy!

Pet peeve… and all that

We live in a confusing world. Okay, more accurately, I live in this world, confused. There are so many things I don’t get. I don’t get people who have a professed problem with contractions, such as isn’t (for ‘is not’), don’t (for ‘do not’), shan’t (for ‘shall not’), wouldn’t, can’t, haven’t, aren’t – not to mention the quirky ain’t (originally for ‘am not’). [Yes, Abbie, I am looking at you!] I also don’t get people (including a certain well-admired Professor who shall remain nameless) who confuse it’s (a contraction of ‘it is’) with its (the possessive form). I seethe with frustration (yes, I love Lynne Truss!) when people write ‘your’ when they mean ‘you’re’ (contraction of ‘you are’), or say/write the abominable ‘would of’ instead of ‘would’ve’ (contraction of ‘would have’).

Continue reading

Publication Bias in animal experiments?

A Nature News item caught my attention this morning. It is a report by Janelle Weaver, titled “Animal studies paint misleading picture” – a title which has rather unfortunate connotations, and which, in all probability, will become a rallying point for committed anti-animal-experimentation folks. The report is based on a paper published today in PLoS Biology, by Sena et al., titled “Publication Bias in Reports of Animal Stroke Studies Leads to Major Overstatement of Efficacy”. I draw your attention to the glaring discrepancy right there: this meta-analytical study focuses on acute ischemic stroke, a small subset of the entire spectrum of research that utilizes animals; yet Ms. Weaver saw fit to use a title that tars animal experimentation with an egregiously broad brush.

Continue reading