Failure in Science, Quantified

21 June 2011 5:48 pm

A recent chat on ScienceLive featured a scientist and an author/economist discussing whether policymakers should be more forgiving of failure among scientists funded with public dollars, and if so, how. A new study by three economists, led by Pierre Azoulay of the Massachusetts Institute of Technology in Cambridge, suggests that encouraging risk-taking by researchers can lead to more influential science being published, but also to more comparatively uninspiring findings. The Boston Globe reports:

Biologists who were given more time and latitude in their research—as well as the freedom to fail—before they were evaluated produced more hit papers and more duds, according to the new study, to be published in the RAND Journal of Economics.

The paper, which has been accepted but not given a publication date, looked at 73 recipients of funding from the Howard Hughes Medical Institute (HHMI) in 1993, 1994, and 1995. HHMI funds these so-called "investigators" based on their track record and potential for new discoveries rather than any specific research proposals.

The economists used a control group of biomedical scientists who received early-career funding from the National Institutes of Health and from foundations, a group "well-matched with HHMI investigators in terms of [scientific] fields, age, gender, and host institutions; their accomplishments should also be comparable at baseline." But the control group was supported by programs that lacked the long-term commitment and tolerance for failure inherent in the HHMI approach.

Compared with those scientists, the HHMI-funded researchers wrote papers that were more than twice as likely to be ranked in the top 1% of all cited papers in the year they were published.

But after being chosen as HHMI investigators, the HHMI scientists also were more likely to publish work that was cited even less than their least-cited earlier work. From the paper:

Symmetrically, we also uncover robust evidence that HHMI-supported scientists "flop" more often than [the control]: they publish 35% more articles that fail to clear the…citation bar of their least well cited pre-appointment work. This provides suggestive evidence that HHMI investigators are not simply rising stars anointed by the program. Rather, they appear to place more risky scientific bets after their appointment, as theory would suggest.
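
To make that "citation bar" concrete, here is a minimal sketch, in Python, of how such a flop count could be computed, assuming each paper has already been assigned a year-specific citation percentile. The function and the numbers are hypothetical illustrations, not the authors' actual code or data.

```python
# Minimal sketch of the "citation bar" flop metric described above.
# Assumes each paper carries a year-specific citation percentile (0-100);
# the function and data are hypothetical, not the study's actual code.

def flop_count(pre_percentiles, post_percentiles):
    """Count post-appointment papers whose citation percentile falls below
    that of the researcher's least-cited pre-appointment paper."""
    bar = min(pre_percentiles)  # the "citation bar": the weakest earlier paper
    return sum(1 for p in post_percentiles if p < bar)

# Toy example: the weakest pre-appointment paper sat at the 52nd percentile;
# two of four post-appointment papers land below that bar and count as flops.
print(flop_count([99, 87, 52], [95, 40, 60, 30]))  # -> 2
```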

One case study is biomedical scientist Iva Greenwald of Columbia University, who received an HHMI award in 1994.

Prior to 1994 … her publication with the highest citation quantile is an article which appeared in the journal Cell in 1993 (341 citations as of the end of 2008, which places it in the top percentile of the article-level distribution). Conversely, her publication with the lowest citation quantile is an article which appeared in the journal Molecular and Cellular Biology, also in 1993. It garnered only 11 citations, which places it at the 52nd percentile. … Between 1995 and 2006, Greenwald published three more publications in the top [1% by year].
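
The percentile figures in that passage come from ranking each article against all articles published in the same year. Below is a rough sketch of that placement, using an invented toy cohort rather than the bibliometric databases the study actually drew on.

```python
# Rough sketch of placing a raw citation count in the article-level
# percentile distribution for its publication year. The cohort below is
# invented for illustration only.
import bisect

def citation_percentile(citations, cohort_citations):
    """Percentile rank of an article among same-year articles, ranked by
    citations accumulated through the end of the observation window."""
    ranked = sorted(cohort_citations)
    below = bisect.bisect_left(ranked, citations)
    return 100.0 * below / len(ranked)

# Toy cohort of same-year citation counts:
cohort = [0, 1, 2, 3, 5, 8, 11, 15, 25, 60, 120, 341, 500]
print(citation_percentile(341, cohort))  # lands near the top of this toy cohort
print(citation_percentile(11, cohort))   # lands mid-distribution
```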