Ada Ao, a cancer and stem cell biologist and aspiring science communicator writing for Nature Education’s SciTable blog, has an interesting post up today. She cautions that it is a tirade (according to her, of course; pffft!) against a recently published PLoS Medicine article by Amélie Yavchitz and associates, titled “Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study” (Yavchitz et al., PLoS Med., 9(9):e1001308, 2012).
In explaining the motivation behind the study, the PLoS Medicine Editor’s Summary indicates:
Findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”…
… which the authors have defined in the PLoS Medicine paper as “specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment”. The Editor’s Summary continues,
For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment.
“Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”.
To this end, the researchers, led by Yavchitz, used two database indexes, EurekAlert and LexisNexis, to evaluate the presence of spin in 70 press releases and 41 corresponding news reports associated with two-arm, parallel-group RCTs published over a four-month period. They sought to determine whether the media coverage contained misinterpretations and/or misrepresentations of RCT results.
The article concluded that about 47–51% of press releases and news reports on RCTs contained spin; the authors also found that these occurrences of spin correlated positively with spin in the corresponding article abstracts.
Bummer? Does this finding cast a shadow over half the clinical trials the authors looked at, indicating that these trials are inherently unreliable? Not really, as Ada explains, expressing her indignation at the implications:
Managing the “spin” factor in scientific publishing requires a certain type of finesse. On the one hand, scientists are expected to present their data dispassionately and objectively; at the same time, they are also expected to make their research sound “sexy,” or at least relevant and orderly. Scientists must appear to know what they’re doing, even though research is a messy, disorganized affair as researchers grope around in the dark in uncharted territory. The unexpected always happens and Murphy’s law holds sway. Yet, scientists must appear to be in control and to have an agenda – to understand disease X, or explain phenomenon Y – all to justify public funding, get a paper published, or prop up an image of competence.
So, is there a lot of spin in the published literature? You bet. Does the spin cross from self-promotion to outright fraud? That’s a grey area, and like pornography, you know that line’s been crossed only when you see it.
Rather eloquently put, I thought, that last paragraph; Ada seems to have captured the spirit of the matter very well in her ‘tirade’. As a professional scientist myself, however, I wanted to add a little clarification to it, on two specific points.
First, Yavchitz’s study focused on spin in “press releases and news coverage”, not on spin in the actual scientific papers. This is an important distinction. Yavchitz and her associates performed bivariate and multivariate analyses to trace the source of the spin in media coverage, and implicated the article abstract (an author-written summary that publishers require for distribution to indexing services such as PubMed). I submit that the severely abridged nature of the article abstract (often constrained to 250 words or fewer) precludes most mention of the complexities of the research findings. The abstract therefore provides an essentially incomplete picture, and Yavchitz’s observations, if anything, highlight the inherent danger of trying to assess the merit of a scientific paper from its abstract alone.
In addition, press releases and news coverage don’t necessarily have to serve the truth (though ideally, they should); they serve different masters, such as commerce, popularity, or the attention of funding agencies. In contrast, the only allegiance a scientific paper has (or should have) is to the empirical evidence. In that format, there is not much room left for spin.
Every paper tries to tell a coherent story; the introduction and discussion sections lay out the available evidence and explain the observations. While it is true that conscious or unconscious bias on the part of the authors may creep into the interpretation of the observed data, the beauty of a scientific paper is that it still contains a results section with raw and/or derived data; with Open Access publishing, more and more publishers are also enabling authors to make supplementary data available to others. This allows independent scrutiny and evaluation of the observations.
Therefore, when we assume the role of scientists and read a paper, we must delve into the actual results and judge the authors’ interpretations for ourselves. If we find a contradiction, or some unsatisfactory point, we must question the author(s), another process made easier by Open Access publishing.
All this is to say that the chances of spin influencing scientific papers are minimal, given the intense scrutiny they receive before and after publication. Ada brings out this fundamental point about peer-reviewed, published scientific papers when she writes:
There’s… an unspoken expectation for the readers to look at the data presented and draw their own conclusions. It’s like every paper comes with a presumption of guilt, and the reader’s job is to prosecute the hell out of it…
That means applying the same level of common sense and skepticism that we may apply to other aspects of our lives. A science paper isn’t meant to tell you what to think, it’s meant to be prosecuted vigorously based on the evidence presented.
It’s not just the sundry readers, either. As I have written elsewhere, the scientific process demands independent verification and/or replication of the results by other groups. This iterative process distils out scientifically tenable propositions, which, eventually, no amount of spin can influence.
The readers of scientific papers (among which I don’t, of course, include press releases and news coverage) may be of two types: (a) scientists and others with some expertise in science and the scientific process (such as veteran science journalists), and (b) the general public. There is an important distinction between the two. Ada understands this; she writes:
… The public was never told, point blank, to read between the lines and seriously critique a paper…
However, her explanation for this…
… because that would contradict the dispassionate persona science has maintained in the public consciousness — science is supposed to be the distiller of truth.
… is probably not the right way to put it. To me, the reason the general public isn’t meant to seriously critique a scientific paper is essentially one of expertise and specialization, in the same way the general public isn’t expected to argue the finer points of law, the intricacies of economics, or the procedures of medicine. This is why the general public likely relies more on press releases and news coverage to become aware of scientific undertakings and facts.
And this, right there, adds to the responsibility of scientists, who must engage enthusiastically in science communication and education beyond their normal work of investigating natural processes. Scientists must be in a position to speak to the general public: explaining processes, interpreting scientific data, correcting misrepresentations, and generally counteracting spin. Public engagement and science education have never been more crucial. Scientists must take the lead.
Hi Kausik, thanks for your thoughtful analysis of my post. Your point about specialist vs. public communication gets to the heart of the matter, and I should have been clearer in my writing, but I was ranting, after all….

Anyway, my thinking is that the public is skeptical when it comes to other subject areas, say politics, sports, or the economy. And if people are willing to invest considerable time and energy discussing which sports team or political figure is more awesome (all backed by facts or spin), then I don’t see why the same thought processes cannot be applied to science. But when it comes to science and technology, I don’t see that level of engagement (except for highly controversial topics). When a science news story breaks, people just nod their heads, mutter something about how wondrous science is, and then forget about it. The people who do discuss those stories in any depth are mostly scientists or science insiders (geeks). It seems to me the public assumes science is only about knowing or finding an absolute truth, which are common misconceptions.

I thought the PLoS Medicine article didn’t help matters by saying scientists spin their research to get attention, even though the authors based their conclusions only on abstracts and press releases (as you’ve pointed out) and are essentially biasing and spinning their own research. I also thought the article obfuscates the purpose of a science paper. I just worry that if people read the PLoS paper, public engagement with science may get even lower, because the article was coyly implying that scientists publish papers as PR stunts. So, I thought it necessary to point out what a science paper is and isn’t.

I sincerely hope and strongly encourage everyone to take any science reports (technical or otherwise) with a grain of salt, simply because science is no different from any other human activity and is subject to the same foibles. I hope I’ve clarified my views on this matter. Maybe I should just take a breath so I can write more clearly.
Ada, thank you. We need more committed and passionate scientists and science educators/communicators (such as yourself) for this very purpose: to drive home the points you made about science and scientific research. I cannot emphasize enough how important this issue is.
WOW, Kausik, thank you for this post. We may need to talk more via e-mail! I recently read the study you refer to, as I am interested in pursuing some research into press releases and how they cover and frame science news. I’d love to get more of your thoughts on the research, and on future research in the area of analyzing science press releases… e-mail me your thoughts! (pbrow11@tigers.lsu.edu).
Thanks for a great analysis!