I love a well-written methods section in a research communication. There, I said it. And as a peer reviewer, I often go to the methods of the manuscript under review in order to understand both the experiments that the authors have designed and performed, and the rationale behind the flow and organization of the different experiments, each yielding a separate piece of the overall puzzle in the form of data. But I didn’t start out this way; this is the story of my evolution, as well as the woeful tale of a long-held (and recently re-encountered, in a high-impact journal, no less) annoyance: poorly or inadequately written, incomplete methods.

First, a bit of background. Ever since I started reading research papers and other scientific communications in the course of my professional career, I have been fond of the structured format: an abstract of restricted length, which summarizes the most salient points of the paper; an introduction, which provides the necessary background, with citations, to understand the rationale of the study undertaken by the authors; a methods section, which details the experimental procedures undertaken, as well as the reagents used, to test the study’s hypotheses and arrive at the outcomes; a results section, which lists the experimental outcomes, with short explanations of the data visually represented in figures and tables; and finally, a discussion, which unifies the study results into an examination of whether the hypotheses being probed could be accepted or rejected, along with the authors’ thoughts and speculations on plausible and reasonable next steps for further investigation in the same or divergent directions. A few journals allowed a variant format, in which experimental results could be discussed in a logical and/or chronological manner in a combined Results and Discussion section.

However, I always felt that, of all the sections, the methods somehow got the shortest shrift in published papers. I cannot pinpoint the genesis of that sentiment. Perhaps it is the fact that in many journals in my field, the Materials and Methods section of the printed article would be set in a smaller font than the rest of the text. Nowadays, when many research articles are freely accessible for online reading (from archival services such as NCBI PubMed Central, or from the websites of Open Access publishers), the HTML format uses uniform fonts for all sections. Nevertheless, many journals place the methods section after the discussion, likely following the rationale that MOST readers would be interested in seeing the results and reading the discussion and conclusions to extract the juiciest bits from each paper, without bothering to dive into the experimental details described in the methods.

Or perhaps it is the fact that, to a young researcher at the bench as I was, writing up the Materials and Methods section was the easiest part: just write down the steps of each experiment from the lab notebook and/or protocol, including the names of reagents, instruments, and any special consumables used, along with the names and locations of their manufacturers; a task in which the most exciting maneuver was to visit the ‘About’ or ‘Contacts’ section of a manufacturer’s website to figure out which city and state to put down for the location, an exercise whose point I have never fully understood. I recall seeing some newer journals that have started allowing website URLs for manufacturers (and catalog numbers for specific reagents, which should make them easier to find), but I suspect this practice is not yet widespread.

I don’t know, honestly. But I do know now that that attitude is, and I was, WRONG. I have learnt the tremendous importance of the methods section when trying to replicate (or re-purpose for a different system) an experiment described in a paper. The methods section should describe, in a clear and precise manner, the experimental design, indicating each step undertaken and each model or assay used to answer the research question(s), including how the observed and recorded data were analyzed. More times than I care to remember, I have lamented that the methods section was improperly or incompletely written, omitting crucial information about common items (such as buffers), volumes, and/or steps, sacrificing clarity at the altar of brevity; that information would have been vital in successfully recreating the conditions of, and reproducing, the original experiment. I have come to believe that writing a good methods section is good scientific citizenship, a cornerstone of sharing empirically gained knowledge. If the hypothesis investigated and presented by a study is robust, that robustness is often reflected in the reproducibility of the study’s outcomes. A well-written methods section helps minimize the external variables that can negatively influence the performance of the experiment, and thereby helps identify the crucial variables if a study turns out not to be reproducible. Negative results are often not reported, but different observations made while following a given method/protocol have often led to serendipitous discoveries. Anecdotally, the most common example of this is probably the use of various mouse strains, which are valuable models in immunological investigations. If the mouse strain used for a study is not clearly and completely identified along with its source, a subsequent investigator can end up using a similar but genetically different substrain, which may alter the model’s behavior, as explained in this informative blog post by Peter Kelmenson of The Jackson Laboratory.

For scientific papers, especially in disciplines dealing with inherently variable systems, such as biology, statistical evaluations of experimental data are of paramount importance and must be incorporated into the methods section. A common practice is to place this information in the last paragraph of Materials and Methods, naming the statistical methods/tests used, the justification for choosing them, and the statistical software package used. In a seminal review written in 2003 in the journal Infection and Immunity, and in an update a decade later, biostatistician Prof. Cara Olsen pointed out how many articles contained erroneous statistical analyses and/or erroneous reporting of the analyzed results.
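As a purely hypothetical sketch of the level of detail that makes such a paragraph reproducible (the data, group names, and the choice of test below are my own inventions, not an analysis from any paper discussed here), consider a simple two-group comparison in Python using scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical readouts from two small, independent groups (made-up numbers)
treated = rng.normal(2.1, 0.6, 12)
control = rng.normal(1.5, 0.5, 12)

# Two-sided Mann-Whitney U test: a nonparametric choice when normality of
# the readouts cannot be assumed for groups this small.
result = stats.mannwhitneyu(treated, control, alternative="two-sided")
print(f"Mann-Whitney U = {result.statistic:.1f}, p = {result.pvalue:.4f}")

# The corresponding methods paragraph should name the test, its sidedness,
# the significance threshold, and the software (with version), so that a
# reader can rerun exactly this comparison.
```

The point is not the particular test but the explicitness: the test, its sidedness, the threshold, and the software are all stated, so the reader can rerun exactly the same comparison.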

So, suffice it to say, the methods section is crucially important for a scientific research paper. Which brings me to my recently encountered annoyance: a research paper published earlier this year in mBio, the first online-only, Open Access journal of broad scientific scope from the American Society for Microbiology (ASM). [In the interest of transparency, I have long been a member of ASM.]

Wait… what? A problem in an ASM Journal?!

I know… right? Most unexpected. The paper itself was interesting on its own merits, describing two new assays capable of sensitive and specific diagnosis of exposure to the Zika virus; the diagnostics would be potentially valuable in the clinical management of at-risk pregnant women. So, what’s my beef?

This journal subscribes to the format of relegating the methods to the end of the paper. However, that section was sparse, and bereft of any description of the actual statistical treatment of the data. A close reading of the paper revealed an interesting formatting choice: the analytical methodologies are described in terms of ‘assay designs’ before the observed results. This is good and bad from two different angles. Good, because the analysis relied on highly sophisticated bioinformatics, with newly developed statistical calculations implemented in specialized packages built on the statistical software R and tailored to the type of data described (as indicated in their references 16 through 21, which I highly recommend reading); describing those methods up front could therefore help the reader understand the results as they read on.

So what’s bad exactly?

  • The methods are so complex that their jargon-filled descriptions (‘spatial corrections’, ‘quantile normalization’, ‘Empirical Bayes smoothing’, ‘multidimensional scaling’, and so forth) didn’t actually advance the understanding of the results at all; this is especially galling because each of those terms refers to a statistical technique designed to address specific variables extraneous to the main hypothesis. (For instance, data represented by spots on an array are captured electronically via image sensors, much like a camera, prior to analysis. In this imaging step, the intensity of the signal from a spot may vary not because of the experimental conditions but simply because of the spot’s position on the array, which influences how the light/signal from the spot reaches the sensor. ‘Spatial correction’ is an algorithmic treatment of such data, which checks and corrects for non-specific signal variation due to position. Cool, right? A toy sketch of this idea, and of quantile normalization, appears after this list.)
  • The terms don’t actually explain how the data in this study were analyzed: how the controls were determined and their data handled, or how and on what basis the calculations were actually performed; for instance, normalization, a crucial analytical procedure for maintaining the fidelity and unbiased nature of data derived from high-throughput arrays, or the repeatedly used regression technique of two-dimensional locally estimated scatterplot smoothing (LOESS).
  • The analytical procedures would have been far better presented in the methods section at the end, where they would have helped any other researcher attempting to apply the same or similar procedures to similar or different datasets; instead, that section was oddly Spartan.
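To make the first two complaints concrete, here are two toy sketches in Python (with numpy, on simulated data). They are emphatically not the authors’ pipeline: the first substitutes a simple quadratic trend surface for the LOESS-style smoothing such packages typically use, and the second is my own bare-bones version of the classic quantile normalization recipe; every variable name and number in them is invented for illustration. The first sketch estimates the part of a spot’s intensity attributable purely to its position on the array and removes it:

```python
import numpy as np

def spatial_correction(intensities: np.ndarray) -> np.ndarray:
    """Remove a smooth positional trend from a 2-D grid of spot intensities.

    A toy stand-in for 'spatial correction': fit a low-order polynomial
    surface over the (row, column) coordinates and subtract it, so that
    differences in signal reflect the probes rather than where a spot
    happens to sit on the array.
    """
    rows, cols = intensities.shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    r, c, y = r.ravel(), c.ravel(), intensities.ravel().astype(float)

    # Design matrix for a quadratic trend surface: 1, r, c, r*c, r^2, c^2
    X = np.column_stack([np.ones_like(y), r, c, r * c, r**2, c**2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    trend = (X @ coeffs).reshape(rows, cols)

    # Remove the positional trend but keep the overall mean intensity
    return intensities - trend + intensities.mean()


# Simulated 16 x 24 array with a left-to-right intensity gradient on top of noise
rng = np.random.default_rng(0)
raw = rng.normal(10.0, 1.0, size=(16, 24)) + np.linspace(0, 3, 24)
corrected = spatial_correction(raw)
print(raw.std(), corrected.std())  # the gradient's contribution shrinks
```

And the second shows what quantile normalization does: rank the values within each sample, average the intensities across samples at each rank, and hand that shared reference distribution back to every sample in its own rank order:

```python
import numpy as np

def quantile_normalize(data: np.ndarray) -> np.ndarray:
    """Quantile-normalize a (probes x samples) matrix.

    Classic recipe: sort each sample, average across samples at each rank,
    then give every sample the shared reference distribution back in its
    own rank order. (Simplification: ties are broken arbitrarily.)
    """
    order = np.argsort(data, axis=0)                  # per-sample ranking
    ranked = np.take_along_axis(data, order, axis=0)  # each column sorted
    reference = ranked.mean(axis=1, keepdims=True)    # mean intensity at each rank
    normalized = np.empty_like(data, dtype=float)
    np.put_along_axis(normalized, order, reference, axis=0)
    return normalized


# Two simulated samples measuring the same probes on different overall scales
rng = np.random.default_rng(1)
probes = rng.lognormal(mean=2.0, sigma=0.5, size=(1000, 1))
samples = np.hstack([probes * 1.0, probes * 1.8]) + rng.normal(0, 0.1, (1000, 2))
qn = quantile_normalize(samples)
print(samples.mean(axis=0))  # clearly different before
print(qn.mean(axis=0))       # identical after normalization
```

Even short descriptions at this level of concreteness, or their prose equivalents, in the methods section would have told a reader exactly what was done to the raw intensities before any result was declared.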

Additionally, for any scientific paper, figures are important because they represent the data visually and thereby shape the reader’s understanding of the content. Strangely, figure 4 (panels A and B) and figure 6B, which present immunoreactivity data, both show what I presume to be box-and-whisker plots with zero explanation in the legend as to what central value was being represented (usually the group median), whether the box edges represented quartiles, whether the whiskers ran from the minimum to the maximum values or from some low percentile to some high percentile, and whether the dots beyond the whiskers represented outliers. Nothing. There was also no indication as to whether any statistical test was applied to the groups plotted in the figures. Again, this information could easily have been included in the methods section to facilitate understanding.
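By way of contrast, here is a minimal, hypothetical sketch, in Python with matplotlib and entirely made-up numbers (not the paper’s data), of a box-and-whisker plot whose conventions are spelled out for the reader; the group names, values, and caption text are all my own placeholders:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
# Hypothetical immunoreactivity readings for two groups (entirely made up)
groups = [rng.normal(1.0, 0.3, 40), rng.normal(1.8, 0.5, 40)]

fig, ax = plt.subplots(figsize=(4, 4))
# With matplotlib defaults: the line inside each box marks the median, the box
# spans the first to third quartiles, the whiskers reach the most extreme
# points within 1.5 x the interquartile range, and anything beyond the
# whiskers is drawn as an individual outlier point.
ax.boxplot(groups)
ax.set_xticks([1, 2])
ax.set_xticklabels(["Group A", "Group B"])
ax.set_ylabel("Immunoreactivity (arbitrary units)")

# A figure legend should spell those conventions out instead of leaving
# the reader to guess what the boxes, whiskers, and dots stand for.
caption = ("Boxes: median and interquartile range; whiskers: most extreme "
           "values within 1.5 x IQR; dots: outliers beyond the whiskers.")
fig.text(0.02, 0.01, caption, fontsize=8, wrap=True)
plt.show()
```

Two or three sentences like that caption, in the figure legend or in the methods, would have removed all the guesswork.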

The mBio paper (https://doi.org/10.1128/mBio.00095-18): interesting and promising, but nevertheless a disappointment for its poor methods section.

A let-down, sadly

Again, this research is vitally important, and the authors, all experts in their fields, hail from national and international laboratories of great public health significance. Therefore, should I be blamed if my expectations of this paper were rather high? The work done is excellent, but the paper, sadly, did not quite make for a satisfying read, because, for me at least, the importance of the description of the novel diagnostics was somewhat overshadowed by the derisive message that the paper seemed to scream out at its readers: “Oh, you don’t understand highly sophisticated bioinformatics analytical methods? Poor you. Ha-ha! Nyuk nyuk.” Communication of scientific data ought to do better than this.