The National Research Council’s (NRC) Assessment of Research Doctorate Programs, released today, gave “exceptionally strong evaluations” to Harvard’s offerings, according to a statement released by Graduate School of Arts and Sciences (GSAS) dean Allan Brandt. The NRC report rated and ranked about 5,000 programs in 62 fields at 212 institutions, resulting in an enormous databank—and not a little controversy about the quality of the information contained and the complexity of the evaluative measures.
According to Brandt, “Ninety percent of our programs are in the highest tier of the NRC rankings,” which reflect two measures: a quantitative assessment and a faculty assessment based on scholars’ evaluation of programs’ strength and reputation. Further, Brandt said, “More than half of our programs are the very highest ranked in the country.” According to GSAS figures, 27 Harvard programs ranked as high as first in at least one ranking scheme; the closest peer institutions, Princeton, Berkeley, Stanford, and MIT, had 19, 18, 18, and 12 such programs, respectively. And 46 Harvard programs ranked from first to fifth in at least one of the two ranking schemes, with Berkeley (41), Stanford (39), and Yale (31) the nearest competitors. An enthusiastic Brandt noted:
This report from the National Research Council confirms that Harvard is home to the largest collection of exceptional graduate programs anywhere in the United States.
This collective excellence sustains a teaching and research environment that strongly supports collaboration and innovation across programs and in a wide variety of disciplines. No matter their particular field of interest, graduate students at Harvard benefit from the remarkable scholarly strengths present across all our programs, pursuing their research beyond traditional disciplinary and professional boundaries.
These ratings reflect the quality of our faculty and the talent and motivation of our students. We are extremely proud and appreciative of our overall performance in this comprehensive evaluation, which is likely to be considered useful by prospective students, faculty, government agencies, foundations, and donors—anyone who is investigating the character and nature of graduate education in America today.
At the same time, he was careful to observe that although the NRC findings are “the result of a considered and serious process,”
[A]ll rankings of higher education programs have limitations, and all should be met with a critical eye. Those eager to base decisions on the evaluations in this report must realize how quickly graduate education changes. Because of delays in releasing the report, the data it contains, which comes from the period 2001 through 2006, may in some instances be out of date. In the years since this data was collected, we have worked aggressively across the University to continue to enhance our programs.
The rankings have been controversial because, among other issues, they are based on student and faculty data from early in the decade—and so are subject both to dating and to the sharp changes brought about by the economic crisis and resulting budget constraints imposed on many institutions since 2008. Useful guidelines to the construction of the NRC study, and to its evaluation methods, have been published online by both Inside Higher Ed and the Chronicle of Higher Education (the latter accompanied by a helpful FAQ). Each comes with an interactive table of the underlying data.
As an example of the critiques of the NRC data and methodologies, Inside Higher Ed cited a department whose standing could be influenced by the movement of just a few faculty members (consult the FAQ link above for an attempt at defining the R and S rankings, the most complex feature of the NRC’s process):
Brian Leiter, the John P. Wilson Professor of Law and director of the Center for Law, Philosophy & Human Values at the University of Chicago, who writes frequently about ratings of academic departments (especially in philosophy and law)…noted the impact of “the huge time lag” on the S rankings—and said that given that various measures of “research activity” that are fundamental to the S rankings are based on faculty productivity, this is too long a passage of time for the movement of faculty members not to have a major impact. In Leiter’s blog, he frequently notes the movement of faculty members because—especially in a field like philosophy, which does not have mammoth departments—one or two departures matter a great deal.
He cited some examples. The high end of Yale University’s ranges for philosophy would be 25th (R ranking) and 39th (S ranking, which would be influenced by faculty research accomplishments). In 2005-6, it didn’t yet have two of the most “highly decorated and recognized senior philosophers” around, who are now there—Stephen Darwall (who moved from Michigan) and Thomas Pogge (who moved from Columbia). In programs with small departments, “it’s hard to see how just these two, even by the NRC’s criteria, would not have changed the results significantly.”
Similarly, he said that in 2005-6, Chicago’s faculty roster would have included John Haugeland (a Guggenheim winner who has since died), Charles Larmore (a fellow of the American Academy of Arts and Sciences, who left for Brown University) and William Wimsatt (an influential figure in philosophy who has since retired).
Leiter said these examples could be “multiplied in both directions,” making it foolish to count on measures of faculty accomplishment and reputation that are so out of date.
Inside Higher Ed also emphasized what the rankings do not measure—for example, the quality of undergraduate education:
John V. Lombardi, president of the Louisiana State University System [and formerly head of the flagship institutions of the University of Florida and the University of Massachusetts], has himself written extensively on how to compare research universities and has helped devise systems for comparing them. (Lombardi has blogged for Inside Higher Ed.) He said that the NRC methodology illustrates “the quixotic nature of any effort to measure research university performance in this way.”…
Asked if he would use the rankings in the kind of work facing LSU and many other university systems these days of deciding which programs to build up and which to eliminate, Lombardi said “probably not,” because “the data are old” and the focus ignores the way a doctoral program’s faculty may also contribute to undergraduate education. He noted that the research performance of a graduate department “may have nothing at all” [to do] with a department’s undergraduate role or state research priorities.
Those critiques aside, the rankings themselves were understandably celebrated in a University statement echoing Brandt’s. It noted, “Of the 52 Harvard programs in the survey, 27 placed as high as first in at least one of the overall rankings. Ninety percent of the University’s programs placed as high as fifth in at least one of the overall rankings.” President Drew Faust said in the statement, “The fact that we have so many top-rated programs means that every student and faculty member, regardless of School or department, is able to benefit from the collective strength of the University, making the whole genuinely greater than the sum of the parts. These results are a tribute to the quality of the students, faculty, and staff at work across every part of this University.” There were also comments from several deans.