So who's number one? The long-delayed National Academies' assessment of U.S. graduate research programs is finally out today. And it's chock-full of information about 5100 doctoral programs in 62
fields at 212 universities that should prove immensely useful to students, faculty members, university administrators, elected officials, and the
public. But don't expect it to settle any campus smackdowns about who's got the best program in chemical engineering or neuroscience or economics.
The problem, put simply, is that the National Research Council (NRC) committee that carried out the 6-year study has been so careful about not imposing
its own views on the community that its findings are hard to interpret. Instead of assigning a single score to each program in a particular field, the
assessment ranks programs on five different scales.
Each score is given as a range of rankings using the 5th and 95th percentiles as endpoints. The panel also went to great lengths to avoid the criticism
lodged against the previous NRC assessment, published in 1995, which relied heavily on reputational rankings from faculty members in each field. This
time around, the committee chose 20 characteristics—including research activity, student support and outcomes, and diversity—to measure the quality
of any graduate program, and then conducted two separate faculty surveys to figure out what weight to give each characteristic.
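To make the idea of a "range of rankings" concrete, here is a minimal sketch (not the NRC's actual code; the program names, scores, and weight ranges are invented for illustration). Because the survey-derived weights vary across faculty respondents, the weights are resampled many times and each program's 5th and 95th percentile rank across those trials becomes its reported range.

```python
import random

# Hypothetical data: each program's scores on three illustrative
# characteristics (e.g., research activity, student outcomes, diversity).
programs = {
    "A": [0.9, 0.6, 0.8],
    "B": [0.7, 0.9, 0.5],
    "C": [0.6, 0.7, 0.9],
}

def rank_range(programs, trials=2000, seed=1):
    """Report each program's 5th-95th percentile rank under resampled weights."""
    random.seed(seed)
    ranks = {name: [] for name in programs}
    for _ in range(trials):
        # Perturb the weights to mimic variation across surveyed faculty.
        weights = [random.uniform(0.5, 1.5) for _ in range(3)]
        ordered = sorted(
            programs,
            key=lambda p: -sum(w * s for w, s in zip(weights, programs[p])),
        )
        for position, name in enumerate(ordered, start=1):
            ranks[name].append(position)
    # The endpoints of each program's range are its 5th and 95th
    # percentile ranks across all trials.
    out = {}
    for name, rs in ranks.items():
        rs.sort()
        lo = rs[int(0.05 * len(rs))]
        hi = rs[int(0.95 * len(rs)) - 1]
        out[name] = (lo, hi)
    return out

print(rank_range(programs))
```

A program whose range is wide, like Stanford anthropology's 1st-to-43rd spread on one measure, is one whose rank is highly sensitive to which weights a given respondent would choose.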
That approach protects the panel against charges that the assessment merely perpetuates the status quo. But even elite, world-class institutions are having a hard time figuring out what it all means. Take the anthropology department at Stanford University, for example.
The department is ranked between 13th and 47th on one of the two overall scales, and between 3rd and 9th on the other. In addition, it falls between
3rd and 14th using measures relating to research activity, between 1st and 43rd on student support and outcomes, and between 12th and 33rd on diversity.
So how good is the program? "It's difficult to draw meaningful conclusions about the relative quality of programs from these ranges of rankings," says
Patricia Gumport, dean of the graduate school, with impressive understatement. Instead, she and her deans plan to mine the free, publicly available database (a student-oriented version is also available) to see, for example, what
it would take to raise the quality of a particular program, or to compare the performance of the university's 47 programs on one or more
characteristics, or to compare one program with its peers around the country.
"While faculty have certain values, students may be worried about other things," says NRC's Charlotte Kuh. "So we wanted to give people the chance to
create rankings based on variables that they thought were important."