To Promote Scientific Creativity, Cut the Strings, Economists Say
9 December 2009 5:56 pm
Biomedical research leaders often complain that the U.S. system of funding research on specific projects stifles risk-taking and creativity. A better model, they say, would be to give researchers long-term awards with no strings attached. Now some Massachusetts Institute of Technology economists say they have rigorously tested this idea for the first time and found that scientists with open-ended funding are indeed more productive and creative.
There are two main models for funding biomedical scientists in the United States. The National Institutes of Health gives out most of its grant money as 3- or 4-year grants, called R01s, for research projects on specific topics with detailed goals. Then there's the Howard Hughes Medical Institute, the non-profit behemoth that supports more than 300 investigators across the country for 5 years or more based on their personal qualifications—not what they're studying. "People, not projects" is HHMI's mantra, and proponents claim it funds the most creative science.
But until now, there has been no "serious effort" to test that idea, says MIT economist Pierre Azoulay. So he and two colleagues compared the careers of two groups: 73 HHMI investigators appointed in the early 1990s, and about 400 scientists of roughly the same age who received prestigious "early career" awards from sources such as the Pew Charitable Trusts and the Packard Foundation. (About 70% of HHMI investigators started out with one of these awards, Azoulay says.) They also compared the HHMI investigators with a group of scientists who received a prestigious, long-term, project-specific NIH award called the MERIT award.
The HHMI investigators clearly came out ahead, Azoulay's team concludes in a working paper: they produced, for example, twice as many papers in the top 5% by citation as the early career awardees. They were closer to the MERIT grantees in overall output but still wrote 50% more papers in the top 1% by citation. The HHMI investigators were also more likely to change the keywords in their abstracts over time, suggesting they were moving in new, creative directions.
Makes sense, but did the study fully control for the fact that HHMI scientists might be far more talented to start with? "We can't. We're very open about it," says Azoulay. To really find out whether the type of funding makes a difference, you would have to randomly assign equally qualified researchers to either get an HHMI-type grant or R01 funding—which he admits isn't likely to happen.
He adds that the study is not meant as "an NIH-bashing exercise." Although NIH has some newer award types that follow the HHMI model, such as the Pioneer award, Azoulay doesn't think the agency should ditch R01s altogether. Science requires a mix of "incremental" and "breakthrough" work, particularly to translate findings, he says. And the paper concludes:
Only scientists showing exceptional promise are eligible for HHMI appointment, and our results may not generalize to the overall population of scientists eligible for grant funding, which includes gifted individuals as well as those with more modest talent. Moreover, HHMI provides detailed evaluation and feedback to its investigators. The richness of this feedback consumes a great deal of resources, particularly the time of the elite scientists who serve on review panels, and its quality might degrade if the program were expanded drastically.
But Azoulay thinks NIH is making one mistake right now: it's not collecting the data it should to find out whether scientists funded with the new models are indeed more productive. "We will never learn how effective it was," he says.