NextGen Voices is a feature of the premier science magazine, Science. It is designed as a series of surveys targeted towards young scientists, asking them questions on different aspects of life as a scientist that matter to them. (For some reason, it is not very well publicized, which is a pity – because I do think that NextGen Voices offers young scientists an important platform to voice their opinions. I got to know about it only because my colleague in the lab, a subscriber to Science, showed it to me. This is partly the reason why I wanted to blog on this today – to raise awareness.)
Another post after a brief hiatus because of work-related pressure. I’m sure nobody missed me, though. [Sniff!] Well, the pressure’s still on, but let’s say I was inspired to write this post by a chance occurrence, a question asked by a physician friend of mine. An accomplished and established surgeon in India, he is considering various possibilities and options, having recently learnt that his young son is desirous of coming to the US to pursue a career in biological research.
He asked me: how is life as a scientist in biological sciences or genetics etc? Very tough, boring life that leaves you no time? Or fulfilling and all that?
You could hear from a mile the sound of my mental machinery creaking and groaning and whirring. Naturally, I’d be delighted to welcome a budding scientist to the fold, but I also wanted to provide my friend with as true and complete a picture as I possibly could.
Shying away from the usual spiel on the quality of scientific research done at noted US universities and institutions of renown (my friend is aware of all that), I focused on the core of his question – the life as a scientist. What exactly is life as a scientist? Is it, like, life in all its glories as presented with a sonorous narration in a Discovery Channel documentary, or is it more of life, as in “Dude! Get a life!“? Does life of the latter kind come to the scientists in the manner of the proverbial Cheshire Cat of Alice in Wonderland, appearing suddenly with a mischievous grin and then vanishing slowly and unattainably until nothing but the grin is left, and then –Poof!– that is gone, too?
Pushing aside these philosophical (and ultimately useless) cogitations, I set to writing him a reply. Here’s a part of what I wrote:
There are several angles to this question, all of which – in the final synthesis – boil down to the matter of temperament.
First, as with every other profession, the rewards of a career in science are not consistent – and indeed, may even be considered insignificant under certain lights. There will be work-related irritation, frustration, aggravation and denial, some of which may even spill into one’s personal life if one cannot carefully separate the personal from the professional.
Secondly, even if one is passionate about the work to begin with, it would be difficult to sustain that same level of passion through the years. However, professional scientists can usually keep their interest aflame by diversifying into multiple research questions and/or refocusing their priorities.
Thirdly, life as a young researcher may be impecunious. One simply doesn’t become a scientist if one’s goal in life is to become a millionaire outright. I admit that in rare moments of self-doubt, I have thought about young adult basketball players and other athletes (especially the talented Jeremy Lin in recent times), who seem to command an exorbitant amount of money in exchange for their prowess and agility, whereas we, the science researchers in the same country, despite contributing day in and day out towards the betterment and progress of humanity, are doomed to live in relative penury.
To the discerning mind, however, the rewards are manifold, even though they may not readily translate to wads of greenbacks or pots of gold. Fulfillment is often a matter of perception, after all.
To many scientists, there is an element of thrill-seeking in what they do. Understanding a problem, analyzing it, putting forth a rational hypothesis and then performing rigorous experiments to test its validity; the anticipation of the results; the joy that one feels when the observed data vindicate one’s hypothesis, or the sobering effect when they don’t and push the scientist back to the drawing board – there is a lot of drama, excitement and emotional upheaval therein that can be quite enjoyable overall.
There are many scientists who find the challenge of an intractable problem very attractive and engrossing. To them, the systematic attempts at puzzle-solving, especially if the problem happens to be multi-layered, are themselves fulfilling; if they do manage to unravel the mystery, it can be very rewarding, sometimes even lucrative.
Many scientists, especially those working on problems of immediate consequence to, say, the health and well-being of living beings, including their fellow humans, are often fortunate enough to observe the benefits of their work in relatively real time. It can be incredibly fulfilling as well as humbling. On the other hand, even those scientists whose professional endeavors are distally related to health, and more proximally to basic and/or applied problems in biology, have the satisfaction of knowing that their work connects them to a larger continuum, because modern living organisms, having evolved from the same or similar ancestors at different levels, often share a surprising degree of relatedness.
In addition, the sharing and communication of one’s research outcomes, within the scientific community and without, is no less gratifying. Having one’s work accepted for publication in a scientific journal of repute can be quite life-affirming. Recognition and renown for one’s work, when they eventually arrive, ain’t too shabby either. Accomplished scientists often wish to spread or share their experience and life’s journey, thereby hoping to influence younger minds and instill the spirit of enquiry.
Many of the intellectual rewards and sources of fulfillment I mentioned above may initially seem too esoteric and far-fetched, but they exist – they require a fair bit of hard work, but they are not unattainable goals. This is important to understand, especially for a new graduate student.
The entire period of graduate studies (leading to a PhD degree) is – as I see it – essentially a period of training. One learns not only technical skills of various sorts but also how to integrate one’s knowledge into one’s work. One grasps the value of perspectives – how to view one’s own work in the context of a larger picture. One picks up valuable people skills – interaction, communication, presentation and the art of networking – as well as how to work cohesively in a group setting and independently at the same time. One assimilates the ways and means of effective time management, and the benefits thereof. Most importantly, under competent mentorship, one gets a thorough grounding in the scientific method.
To me, this is the most crucial aspect of training as, and being, a scientist. Being a scientist is much more than what one does; it is what one is. It is possible to integrate into one’s life, or one’s attitude towards life, the basic tenets of the scientific method – objectivity, reliance on empirical evidence, a rational and skeptical outlook, and an ability to question, observe and analyze – to varying degrees. People who successfully do that are also able to transition effortlessly from their workbench to life outside and back.
As far as having ‘time’ to do other things is concerned, I have found that it largely depends on the individual. It is indeed possible to manage one’s time effectively, so as to be able to pursue other interests. Examples abound. Just to randomly name a few instances: Paul Z Myers is an accomplished and popular biology professor, with a tremendously celebrated blog. Stephen Curry is a noted structural biologist who still finds time to blog and write for the Guardian. Russian composer Alexander Borodin was a life-long and distinguished researcher in organic chemistry. Jennifer Rohn is a working cell biologist who champions the genre of “Lab-Lit”, is an author, and still finds time for political advocacy for science funding in the UK. Canadian physicist Diane Nalini de Kerckhove combines a career as a successful scientist with her job as a professional jazz singer. As I said right at the beginning, it is a matter of temperament. If one loves what one does, one does it well – no matter what – and garners fulfillment from it.
What do you think, gentle readers? Please throw in your comments, suggestions, bouquets and brickbats in the comment section.
Holy pseudoscience, Batman!
Homeopathy websites (too many to list; I found the material for this post here) are all gleefully abuzz today** with the following factoid – New Research From Aerospace Institute of the University of Stuttgart Scientifically Proves Water Memory and Homeopathy.
Continued from Part 1… As I was saying, a study by Goldman et al. in the July 2010 issue of Nature Neuroscience, postulates that “Adenosine A1 receptors mediate local anti-nociceptive (i.e. pain reducing) effects of acupuncture.”
I stumbled a little right at the title. Anti-nociceptive effects of acupuncture? Where is the evidence that such an effect exists?
In the first part of this post, I mentioned the important maxim in science, Correlation does not imply causation, providing a glimpse of its logical framework, and discussing how the scientific method is utilized to establish causality in observed relationships between/amongst variables.
And what happens when scientists, study authors, investigators ignore this prime maxim?
One quick disclaimer before I proceed. When I have quoted one or more Wikipedia articles in the text, it is because I have found them well-written, informative, and adequately illustrative; however, I shall make no claim as to their veracity and/or authenticity because I have not been able to access and verify all the background references therein. If you find an error, please feel free to chide me in the comments.
An important maxim used in science, or more precisely, in the scientific study of relationships between/amongst variables, is that ‘Correlation does not imply Causation’. Indeed, until and unless such causality has been verifiably established through independent means, any attempt to indicate that it does falls under the logical fallacy of questionable cause, cum hoc ergo propter hoc (Latin for “with this, therefore because of this”).
It is important for all to understand this concept – those who are engaged in scientific studies, as well as those who read about and interpret such studies.
Correlation is a statistical relationship between two or more random variables; for simplicity’s sake, let’s consider two, say, A and B, such that if changes in the values of variable A statistically correspond to changes in the values of variable B, a correlation is said to exist between A and B. This reflects a statistical dependence of A on B, and vice versa, and therefore, statistically-computed correlations can be used in a predictive manner. To pick a completely random example, the epidermal growth factor receptor (EGFR) is expressed on neoplastic cells in colorectal carcinoma. The number of cells expressing EGFR was found to be correlated with the size of the tumor (adenoma), i.e., cells from a larger tumor express more EGFR. Therefore, EGFR expression may be useful as a prognostic biomarker for adenoma progression.
Those who have already identified the problem in this assertion, congratulations! As the paper cautions, although the EGFR pathway is important to colorectal carcinogenesis, it is unknown at this point whether the observed increase in EGFR expression is because neoplastic cells make more EGFR per se for some reason, or because a larger tumor would house numerically more of the cells that are capable of making EGFR. This, as you can understand, is an important distinction, and therefore, the authors conclude correctly that “Further larger studies are needed to explore EGFR expression as a biomarker for adenoma progression.”
Such examples abound, all illustrating how correlations can be useful in suggesting possible causal or mechanistic relationships between variables, but more importantly, such statistical interdependence between the said variables is not sufficient for logical implication of a causal relationship. In other words, while empirically A may be observed to vary in conjunction with B, that observation is not enough to assume A causes B.
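For the statistically curious, the kind of correlation discussed above can be computed directly. Here is a minimal Python sketch of the Pearson correlation coefficient, the most common measure of linear correlation between two variables; the numbers below are entirely invented for illustration and are not real EGFR data:

```python
from math import sqrt

def pearson_r(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / sqrt(var_a * var_b)

# Hypothetical measurements: tumor size vs. count of EGFR-expressing cells
tumor_size = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
egfr_cells = [12, 18, 21, 30, 33, 41]

r = pearson_r(tumor_size, egfr_cells)
print(round(r, 3))  # r ≈ 0.99 for these made-up numbers
```

A coefficient near +1 or −1 indicates a strong linear correlation; a value near 0 indicates little or none. Note that even a coefficient of exactly 1 would say nothing, by itself, about which variable – if either – causes the other.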
But what happens when one makes such an erroneous assumption (that A causes B)? For starters, one is then disregarding four other possibilities, any of which may be true and account for the correlation.
- A may cause B.
- B may cause A.
- An unknown or uncharacterized third variable C may cause both A and B.
- A and B may influence each other, in the presence or absence of C, in a feedback loop or self-reinforcing type of system.
- A and B may simply be changing at the same time, in the absence of any direct logical or actual relationship to each other – a situation also known as coincidence. A coincidence may allude to multiple, complex or indirect factors that are unknown or too nebulous to ascribe causality to, or may reflect pure, random chance.
Each of these five hypotheses is testable, and there are statistical methods available to assess how likely it is that an observed correlation arose by coincidence alone. Therefore, the mere observation that A and B are statistically correlated doesn’t lend itself to any definitive conclusion as to the existence and/or directionality of a causal relationship between them.
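The third-variable scenario in particular is easy to demonstrate with a small simulation. In this Python sketch (entirely made-up data), a hidden confounder C drives both A and B; A and B end up strongly correlated even though neither has any causal effect on the other:

```python
import random
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / sqrt(sum((xi - mx) ** 2 for xi in x) *
                      sum((yi - my) ** 2 for yi in y))

random.seed(42)  # fixed seed so the run is reproducible

n = 1000
c = [random.gauss(0, 1) for _ in range(n)]       # hidden confounder C
a = [ci + random.gauss(0, 0.5) for ci in c]      # A depends only on C
b = [2 * ci + random.gauss(0, 0.5) for ci in c]  # B also depends only on C

# A and B never interact, yet they show a strong positive correlation
print(round(pearson_r(a, b), 2))
```

Anyone who observed only A and B, without knowing about C, might be tempted to conclude that one causes the other – which is precisely the trap the maxim warns against.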
Determination of causality is an entirely different ball of wax, and that discussion is beyond the scope of this post. Suffice it to say that in the sciences, causality is not assumed or given. The scientific method requires that the scientists set up empirical experiments to determine causality in a relationship under investigation.
The scientific method works in logical progression.
- Initial observations (of a putative relationship between variables) are made.
- An explanation is proposed in the form of one or several hypotheses about possible causal relationships, including one of no relationship (the null hypothesis).
- Certain predictions or models may be generated on the basis of each of the hypotheses, which in turn guide the experimental design.
- Experiments are designed to demonstrate the falsifiability of the hypotheses, i.e., to test the logical possibility that the hypotheses could be proven false by a particular empirical observation. Indeed, testing for falsifiability or refutability is a key part of the scientific process.
- Once designed, the experiments are used to test the hypotheses rigorously, and the data, analyzed critically to reach a conclusion, accepting or rejecting the hypotheses.
- But the method doesn’t cease there. All empirical observations remain potentially under continued scrutiny, which involves reconsideration of the derived results as well as re-examination of the methodology, especially in the light of newer techniques that are capable of taking deeper and more accurate measurements. Such is the dynamic nature of the scientific method.
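To make the hypothesis-testing steps above concrete, here is a small Python sketch of a permutation test – one simple way of testing a null hypothesis of “no relationship”. All measurements are invented for illustration: we ask how often random shuffling of the group labels produces a difference in means at least as large as the one actually observed.

```python
import random

# Invented data: some measurement from a "treated" and a "control" group
treated = [5.1, 5.8, 6.2, 5.9, 6.5, 6.1, 5.7, 6.3]
control = [4.8, 5.0, 5.2, 4.9, 5.4, 5.1, 4.7, 5.3]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(control)

# Under the null hypothesis, group labels are interchangeable, so we
# shuffle them many times and count how often chance alone gives a
# difference at least as extreme as the observed one.
random.seed(0)
pooled = treated + control
n_perm = 10_000
n_extreme = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[:len(treated)]) - mean(pooled[len(treated):])
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / n_perm
print(observed, p_value)
```

If the resulting p-value is small, the observed difference is unlikely under the null hypothesis, and we reject it; if not, we fail to reject it – which, in keeping with falsifiability, is not the same thing as proving the null hypothesis true.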
Establishment of causality, therefore, has to pass through the same rigorous filters before it can be accepted. But if it does, the conclusions may be considered unimpeachably valid, within the given set of circumstances.
So… Correlation doesn’t inherently imply causation.
Some modern examples are in Part Deux. Please don’t hesitate to comment.