Government 'Exams' for Italian Scientists Trigger Outcry
13 December 2012 5:40 pm
ROME—An unprecedented government effort to shore up the quality of Italian science by reviewing the work of individual scientists and institutions has triggered a firestorm of protest. Critics say the government review, coordinated by the National Agency for the Evaluation of Universities and Research Institutes (ANVUR) at the Ministry of Education, University and Research, is using flawed criteria and will do little to reward the best Italian scientists.
The issue has been furiously debated on an online forum called Return On Academic Research (ROARS) in recent months, and has led to official protests by several scientific and legal associations, including the Mathematics Union and the Association of Psychologists. But the government is going ahead with the scheme anyway.
In 2011, ANVUR began an effort to push back against nepotism, still rife in Italian academia, and to reward excellence. The agency evaluates individual researchers, who, if they meet certain criteria, receive a government stamp of approval that allows them to apply for higher academic positions; it also rates universities and public research institutions, which can expect more funding if they rank among the best. But the criteria are too crude, scientists say. "It's like judging the bottles in a wine contest by the labels only, without tasting their content," says Alberto Baccini, a professor of political economics at the University of Siena.
By late November, more than 65,000 researchers had applied for the individual evaluations; they will know the official outcome in a few months. But many have already calculated their expected score themselves.
For researchers in fields where good bibliometric data are available, ANVUR uses three criteria: the number of papers published in the past 10 years, the number of citations, and the so-called h-index, a measure that takes into account both output and impact. The law governing ANVUR prescribes that only applicants who score above the national median in their field on two of the three criteria can be admitted to the next stage of evaluation, a more quality-based scrutiny by committees. Many scientific groups have protested against what they see as a mindless and unjust application of numbers, and some suggest it may violate Italy's constitution.
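The first-stage screen described above can be sketched in a few lines of code. This is only an illustration of the two-of-three median rule as the article describes it, not ANVUR's actual software; the function names, the sample citation counts, and the median values are invented for the example.

```python
def h_index(citations):
    """h-index: the largest h such that h of the researcher's papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def passes_first_stage(papers, total_citations, h, medians):
    """Pass if the applicant beats the field's national median on at
    least two of the three indicators (papers in the past 10 years,
    total citations, h-index). Medians here are hypothetical."""
    wins = sum([
        papers > medians["papers"],
        total_citations > medians["citations"],
        h > medians["h_index"],
    ])
    return wins >= 2

# Hypothetical applicant: 25 papers, 400 citations in total.
per_paper_citations = [60, 50, 40, 30, 25, 20, 18, 15, 14, 13, 12, 12, 5, 3, 1]
h = h_index(per_paper_citations)            # -> 12
medians = {"papers": 20, "citations": 350, "h_index": 14}
print(passes_first_stage(25, 400, h, medians))  # True: above median on 2 of 3
```

In this invented example the applicant falls below the median h-index but clears the other two thresholds, so under the rule as described they would still reach the second, committee-based stage.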
ANVUR President Stefano Fantoni tells ScienceInsider that the criteria will be used only as indicators, and that the committees can still pass researchers who fail to meet the quantitative criteria, although they will have to justify their decision. But commentators on ROARS say the law does not provide that escape, and it's not clear what criteria the committees would use.
ANVUR's Web site does not describe the process clearly, critics say; on the homepage, Fantoni acknowledges that the site is still under construction, and that ANVUR is not completely transparent as a result. In order to get the full picture, scientists have to download a series of documents and check for recent changes. The National University Council recently asked research minister Francesco Profumo in an open letter to clarify the median-based criteria. (Profumo declined to talk to ScienceInsider.)
The way that ANVUR gathers bibliometric data is under fire as well. The agency has relied on the Web site of Cineca, a center providing software and computing services to universities. ANVUR has asked researchers to upload the references to their papers to Cineca's Web site; Cineca has also computed the medians and researchers' individual scores. But according to Francesco Sylos Labini, a physicist at the National Research Council, Cineca's database is "untrustworthy." Nobody checks the data entered by researchers, he says, and they can put in nonexistent papers.
Even if the scores are calculated correctly, critics say that they aren't always a measure of quality. It's an open secret in Italy that some lab leaders co-author dozens of papers every year to which they contributed very little. ANVUR's methodology does not ferret them out but rewards them with very high scores, which some have now listed on their Web sites. Meanwhile, some younger but more creative researchers have trouble making the median, says Piergiorgio Strata, president of the National Institute of Neuroscience. Even prominent historical figures like physicist Ettore Majorana would have lost out, according to ScienzaInRete, a researchers' group.
For human and social sciences, which aren't adequately covered by bibliometric databases, ANVUR compiled lists of 16,000 journals whose papers are included in the evaluation. Those lists have been heavily criticized because they include around 200 titles whose scientific credentials are questionable—including glamorous publications like Yacht Capital, religious journals, magazines about food and drink, a trade journal for pig breeders, and supplements of broadsheet newspapers like Il Sole 24 Ore.
ANVUR's other arm, the evaluation of universities and research institutions, is under fire for similar reasons. Next year, that evaluation will lead to a ranking that will partly determine the allocation of Italy's public research funding.
Not everyone thinks ANVUR's methods are so bad. "I agree with the idea of ANVUR itself and with the idea of using tight numerical criteria for the first screening, especially for the evaluation of Universities," says Giancarlo Ruocco, the director of the physics faculty at the University of Rome La Sapienza. "For individuals, it's more problematic, but using minimum thresholds, which medians are, is a correct approach."
But others say ANVUR should look abroad for its evaluations, for instance at the United Kingdom's Research Excellence Framework, or use expert evaluators rather than metrics. "We need to look at what people are doing in those countries that have a long tradition of evaluation, such as the U.K. and the U.S., if we want to set up clear and effective rules the majority of scientists will be prepared to accept and share," says Francesca Pasinelli, the director-general of Telethon, a nonprofit foundation that screens about 450 research proposals in medicine and biology every year.