Failure in Science, Quantified
21 June 2011 5:48 pm
A recent chat on ScienceLive featured a scientist and an author/economist discussing whether policymakers should be more forgiving of failure among scientists funded with public dollars, and if so, how. Three economists led by Pierre Azoulay from the Massachusetts Institute of Technology in Cambridge suggest that encouraging risk taking by researchers can lead to more influential published science, but also to more relatively uninspiring findings. The Boston Globe reports:
Biologists who were given more time and latitude in their research—as well as the freedom to fail—before they were evaluated produced more hit papers and more duds, according to the new study, to be published in the RAND Journal of Economics.
The paper, which has been accepted but not given a publication date, looked at 73 recipients of funding from the Howard Hughes Medical Institute (HHMI) in 1993, 1994, and 1995. HHMI funds these so-called "investigators" based on their track record and potential for new discoveries rather than any specific research proposals.
The economists used a control group of biomedical scientists who received early-career funding from the National Institutes of Health and foundations "well-matched with HHMI investigators in terms of [scientific] fields, age, gender, and host institutions; their accomplishments should also be comparable at baseline." But the control group was supported by programs that lacked the long-term commitment and tolerance for failure inherent in the HHMI approach.
Compared with those scientists, the HHMI-funded researchers wrote papers that were more than twice as likely to be ranked in the top 1% of all cited papers in the year they were published.
But after being chosen as HHMI investigators, the HHMI scientists were also more likely to publish work that was cited less than even their least cited pre-appointment work. From the paper:
Symmetrically, we also uncover robust evidence that HHMI-supported scientists "flop" more often than [the control]: they publish 35% more articles that fail to clear the…citation bar of their least well cited pre-appointment work. This provides suggestive evidence that HHMI investigators are not simply rising stars anointed by the program. Rather, they appear to place more risky scientific bets after their appointment, as theory would suggest.
One case study is biomedical scientist Iva Greenwald of Columbia University, who received an HHMI award in 1994.
Prior to 1994 … her publication with the highest citation quantile is an article which appeared in the journal Cell in 1993 (341 citations as of the end of 2008, which places it in the top percentile of the article-level distribution). Conversely, her publication with the lowest citation quantile is an article which appeared in the journal Molecular and Cellular Biology, also in 1993. It garnered only 11 citations, which places it at the 52nd percentile. … Between 1995 and 2006, Greenwald published three more publications in the top [1% by year].
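The percentile ranking behind the hit/flop classification can be sketched in a few lines. This is a minimal illustration with made-up citation counts, not the paper's actual data or code: a paper is ranked against all papers published the same year, counts as a "hit" if it lands in the top 1%, and counts as a "flop" if it falls below the author's least-cited pre-appointment paper (the 52nd percentile in Greenwald's case).

```python
from bisect import bisect_left

def citation_percentile(citations, year_counts):
    """Percentile rank (0-100) of a paper's citation count among all
    papers published in the same year; higher means more cited."""
    ranked = sorted(year_counts)
    below = bisect_left(ranked, citations)
    return 100.0 * below / len(ranked)

# Hypothetical field-wide citation counts for one publication year.
field_counts = [0, 1, 2, 3, 5, 8, 11, 20, 35, 90]

# Flop threshold: the author's least-cited pre-appointment percentile
# (52.0 here, echoing the Greenwald example; purely illustrative).
flop_threshold = 52.0

p = citation_percentile(11, field_counts)   # rank of an 11-citation paper
is_hit = p >= 99.0                          # top 1% by year
is_flop = p < flop_threshold                # below pre-appointment floor
```

With these toy numbers the 11-citation paper ranks at the 60th percentile, so it is neither a hit nor a flop; the real analysis applies the same comparison across each investigator's full post-appointment record.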