The National Institute of General Medical Sciences (NIGMS), a part of the US National Institutes of Health (NIH), was established in 1962 via an Act of Congress for the “conduct and support of research and research training in the general or basic medical sciences and related natural or behavioral sciences”, especially in areas that cut across the purviews of the other institutes under the Act, or that fall under no institute’s purview at all. In these 52 years, the NIGMS has acquitted itself laudably as one of the premier funding agencies supporting basic research into biological processes, as well as disease diagnosis, treatment and prevention. At any given time, NIGMS supports close to 5000 research grants, accounting for more than 1 in every 10 grants funded by the NIH as a whole, and has the distinction of having funded the Nobel Prize-winning research of 75 scientists.
On the heels of my previous post on the severe impact of the shutdown on the US biomedical research community and the general populace comes this statement from the NIH. I present it here in its entirety.
In 2011, a study by Donna Ginther, a University of Kansas economist, and colleagues, published in no less a journal than Science, presented evidence for the existence of racial bias in the grant-funding process at the National Institutes of Health (NIH). The study prompted major changes, instituted by the NIH Director, to encourage greater minority participation in the sciences. A recent Nature News blog post (in my Inbox today) reported a new study, this time in the Journal of Informetrics, by Yang et al., that appears to challenge the conclusion of Ginther’s paper. Indeed, the abstract of the Yang article categorically states:
Our results provide new insight and suggest that there is no significant racial bias in the NIH review process (Note: emphasis mine), in contrast to the conclusion from the study by D. K. Ginther et al.
Intrigued, I delved into the Yang paper, and immediately had some issues with the way the News Report was written.
For example, this report states:
Wang and his colleagues applied a mathematical analysis to a random sample of 40 black faculty members in both clinical and basic sciences at the top 92 US medical schools.
I am afraid this sentence does not correctly represent the design of the study. Consider Yang et al.’s statement in the cited paper:
This study targeted the top 92 American medical schools ranked in the 2011 US News and World Report, from which 31 odd-number-ranked schools were selected for paired analysis (schools were excluded if they did not provide online faculty photos or did not allow 1:2 pairing of black versus white faculty members).
Surely, it is not too difficult to see that 31 is a far cry from 92; this inaccurate representation of the sampling frame changes the complexion of the report. The study’s rationale for this shrinkage of the final pool may well be reasonable, but the fact remains that the pool from which the sample was drawn is much smaller than the total, which may bias the data and/or limit the generalizability of the conclusions – as any biostatistician would point out. The same objection applies to the way 40 Black American faculty members were sampled from a pool of 130. The paper, strangely, does not comment on this possibility.
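To make the small-sample point concrete, here is a minimal sketch (using a purely hypothetical true proportion of 0.3 – not a number from the study) of how the uncertainty of an estimated proportion depends on sample size:

```python
import math

# Why n = 40 is a small sample: the standard error of an estimated
# proportion shrinks only with the square root of the sample size.
def proportion_se(p: float, n: int) -> float:
    """Standard error of a sample proportion under simple random sampling."""
    return math.sqrt(p * (1 - p) / n)

# Assumed (illustrative) true proportion of 0.3, at several sample sizes:
for n in (40, 130, 1000):
    se = proportion_se(0.3, n)
    print(f"n={n:4d}  SE={se:.3f}  95% CI half-width ~ {1.96 * se:.3f}")
```

At n = 40, a 95% confidence interval spans roughly ±14 percentage points – wide enough to blur fairly large group differences, which is precisely why sample size matters for the generalizability of the conclusions.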
The News Report reflects the authors’ results and analyses in saying:
The authors found that the black scientists were less productive.
Really? This simple – or perhaps I should say ‘simplistic’ – statement, in the absence of further elaboration, does not adequately convey the full import of the analysis. To my mind, when the analyzed data from a study – any study – reveal shocking observations such as:
…the analysis shows the male investigators were statistically more productive than the female colleagues, and the black faculty members statistically less productive than the white colleagues.
…it is without a doubt an important indication that the phenomenon of ‘stereotype threat’ must be considered. Stereotype threat is a situation in which members of a marginalized group perform poorly on standardized tests when they are made aware of their marginalized identity. Thus, Steele and Aronson (1995), two Stanford psychologists, showed in several experiments that:
Black college freshmen and sophomores performed more poorly on standardized tests than White students when their race was emphasized. When race was not emphasized, however, Black students performed better and equivalently with White students. The results showed that performance in academic contexts can be harmed by the awareness that one’s behavior might be viewed through the lens of racial stereotypes.
Although, over the years, Steele and Aronson’s hypotheses have been challenged and defended in equal measure, even the critics agree that stereotype threat is at the very least a potential contributing factor to long-standing racial and gender gaps in academic performance.
It seems almost unconscionable that Yang et al. have not discussed the implications of their data in greater detail, beyond simply stating that they contradict Ginther et al.’s conclusions. However, the very existence of a situation in which only about 10% of Black American faculty members are funded by the NIH [I quote]:
Among the 130 black samples in the initial list, 14 faculty members were funded by NIH during the period from 2008 to 2011.
… should give policymakers and regulators pause, and make them reconsider whether some inherent imbalance and group (race/gender) disparities persist under current policies, as well as larger questions about the participation of disadvantaged groups in STEM education and research in this country.
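For concreteness, the funding rate implied by the quoted figures is 14/130 ≈ 10.8%. A quick simulation (a hypothetical sketch, not the study’s actual analysis) shows how widely the observed rate can swing when only 40 of those 130 are sampled:

```python
import random

random.seed(1)  # reproducible illustration

# The paper's initial list: 130 black faculty members, of whom 14 were
# funded by NIH during 2008-2011.
pool = [1] * 14 + [0] * 116
print(f"overall funding rate: {sum(pool) / len(pool):.3f}")  # 0.108

# Repeatedly draw samples of 40 (the study's sample size) and record
# the funding rate observed in each draw.
rates = [sum(random.sample(pool, 40)) / 40 for _ in range(10_000)]
print(f"observed rates range from {min(rates):.3f} to {max(rates):.3f}")
```

Even with the pool held fixed, the rate seen in a single sample of 40 can easily land well below or well above the true 10.8% – a reminder of how little a sample of this size can settle on its own.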
And man, oh man!
Commenting on a Nature News blog post… Good grief! First I hand-coded the HTML tags, taking care to check the Preview to ensure that they showed up properly, and then, when I submitted – boom! – all the tags were converted to plain text, showing up all ugly. What kind of a preview was that?
And the time! Despite being logged in with my Nature.com account AND entering the CAPTCHA, my comment went into moderation – which I do kind of understand, having witnessed the terrible amount of spam that seems to creep into Nature Blogs. But consider this: my comment went into moderation at 05 Feb 2013 20:15 GMT. As I write, it is 06 Feb 2013 05:00 GMT – you know, close to 9 hours later. Guess what? Still in moderation. How immoderate!