Beki Grinter

National Research Council Rankings: The Politics of Metrics, Part II

In academia, academic management, computer science, discipline on October 3, 2010 at 3:27 pm

Update: The Chronicle ran a nice article about the Computer Science response, and the role of the Computing Research Association in that response.

I have a long-standing interest in metrics. I have blogged about it, and one of my favourite papers reports a study that tackled the politics and problems of corporate metrics (honestly, I’ve always wondered why it is not cited more often). So needless to say I found myself curious about the outcome of the National Research Council (NRC)’s academic ranking activity. The UK has a similar ranking exercise, the RAE (Research Assessment Exercise).

The NRC finally released the results last week, although it told Department Chairs and Deans prior to the public release. But even before the data was released, the NRC ranking exercise had been widely criticized. I am most familiar with the concerns that the Computer Science research community had with the data collected, concerns that were enough to prompt the Computing Research Association to release an announcement about the data. Some institutions, troubled by the results, released announcements of their own. For example, the University of Washington’s Computer Science and Engineering department released an announcement expressing its concerns.

One question I had prior to the release was how the NRC would cope with the widely held sense that the rankings were in trouble as an exercise even before they appeared. One frequently articulated concern was that the data for the metrics was collected in 2006 or so, and it had taken four years to process. Academic departments change in four years. So, I wondered how they would cope with this. I smiled when I saw that they had released two sets of rankings, with a range associated with each ranking method. To take my department, Computing at Georgia Tech: we are 7-28 by the regression-based metrics, and 14-57 by the survey-based metrics. My first thought was: huh? My second was that this was one pretty clever way of trying to defuse the situation: the two sets suggest different outcomes, and the range within each suggests uncertainty. That’s almost perfect for a metric.
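
To make the range idea concrete, here is a toy sketch of how recomputing a weighted score many times under randomly perturbed weights naturally produces a range of ranks rather than a single number. To be clear, this is not the NRC’s actual procedure (their methodology is documented separately); the programs, criteria, and scores below are invented purely for illustration.

```python
import random

# Toy data: per-program scores on three invented criteria.
# These numbers are made up purely for illustration.
programs = {
    "Program A": [0.9, 0.4, 0.7],
    "Program B": [0.6, 0.8, 0.5],
    "Program C": [0.7, 0.6, 0.9],
    "Program D": [0.5, 0.9, 0.4],
}

def rank_once(rng):
    """Draw a random weight vector, score every program, and return the ranks."""
    weights = [rng.random() for _ in range(3)]
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize so the weights sum to 1
    scores = {
        name: sum(w * c for w, c in zip(weights, criteria))
        for name, criteria in programs.items()
    }
    ordered = sorted(scores, key=scores.get, reverse=True)  # best score first
    return {name: position + 1 for position, name in enumerate(ordered)}

rng = random.Random(0)
trials = [rank_once(rng) for _ in range(500)]

# For each program, report the middle 90% of its ranks across the 500 trials.
for name in programs:
    ranks = sorted(trial[name] for trial in trials)
    low, high = ranks[len(ranks) // 20], ranks[-(len(ranks) // 20) - 1]
    print(f"{name}: rank range roughly {low}-{high}")
```

The point of the sketch is simply that once the weighting is acknowledged to be somewhat arbitrary, a range is a more honest output than a single rank.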

It was also no surprise to me that some took it upon themselves to generate a single metric out of the ranges (my department comes out at 22 using that method). And academic statisticians commented on the appropriateness of the methods used to generate the data: an additional problem of ranking the very people who are the experts in the methodologies used to generate rankings.
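
For what it’s worth, one obvious way to collapse each (low, high) range into a single number is to take its midpoint and average across the two methods. The exact recipe behind the 22 was not specified; the naive version below, using my department’s published ranges, gives 26.5, which only underlines how sensitive the collapsed figure is to the choice of recipe.

```python
# Published rank ranges for Computing at Georgia Tech (from the NRC release).
regression_range = (7, 28)   # regression-based (R) rankings
survey_range = (14, 57)      # survey-based (S) rankings

def midpoint(rank_range):
    """Collapse a (low, high) rank range to its midpoint."""
    low, high = rank_range
    return (low + high) / 2

# A naive single-number summary: average the midpoints of the two ranges.
single_rank = (midpoint(regression_range) + midpoint(survey_range)) / 2
print(single_rank)  # 26.5 -- not 22, so whoever got 22 used a different recipe
```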

So, the question then is: why rank? There are actually a myriad of rankings for universities: US News and World Report, Times Higher Education, Top Universities, etc. One reason is to help students decide where to attend college; that’s clearly the intention of the US News and World Report rankings. You can tell by looking at what they take into account as part of their ranking process (their full methodology is not public, but they include factors such as the cost of college that are suggestive of their values). The NRC offers the following:

This large dataset will enable university faculty, administrators, and funders to compare, evaluate, and improve programs, while prospective students can use the data to help identify programs best suited to their needs. Universities will be able to update important data on a regular basis, so that programs can continue to be evaluated and improved.

We’ll see whether that’s what happens. Meanwhile I can’t help thinking about that paper. Here’s the abstract; substitute the academic equivalents for the corporate words as you read.

This paper presents a case study of the implementation of one corporate-wide program, focusing particularly on the unexpected difficulties of collecting a small number of straightforward metrics. Several mechanisms causing these difficulties are identified, including attenuated communication across organizational boundaries, inertia created by existing data collection systems, and the perceptions, expectations, and fears about how the data will be used. We describe how these factors influence the interpretation of the definitions of the measurements and influence the degree of conformance that is actually achieved. We conclude with lessons learned about both content and mechanisms to help in navigating the tricky waters of organizational dynamics in implementing a company-wide program.

Advice that the NRC might consider in their next round. And if you want a visual tool for exploring the metrics, the Chronicle has a nice one.

Postscript: for a humorous take on the rankings, try this, and another analysis of the problems here; and of course for some it was good news, so here’s a sample of positive reporting.

  1. This ranking is a joke. Why? The data they are using is erroneous. For example, they said our department had 80 faculty (when we had 40) and our EE department had 170! Their formulas divide by these numbers.
