Beki Grinter

Posts Tagged ‘academic metrics’

A Future for Academia Driven by Metrics

In academia, academic management, computer science, discipline on January 2, 2014 at 5:08 pm

By now, anyone who knows me knows that I am a *huge* fan of metrics. Particularly when they are used uncritically. So perhaps it was inevitable that I would end up in an environment where metrics play an increasingly ubiquitous role: academia.

I want to introduce three metrics.

Student credit hours: a number that measures, by class and by faculty member, how many students a person has taught. You will have a larger number if you teach larger classes. It’s also the number at the beginning of a formula that computes the portion of the Institute’s state budget (and presumably how that is divided, although that part of the budgeting process is a complete mystery to me). Higher is better, and in fairness I can imagine that larger classes create their own organizational structures that need managing, and more potential problem cases.

What’s missing in this metric are some other fundamentals about class.

  1. Smaller might be better for the student experience, including but not limited to mentoring, one-on-one time with individuals, and managing different learning styles… and this might be exactly what distinguishes a University education at a brick-and-mortar institution from an online experience.
  2. Class preparation time: do classes with more students involve more course preparation time? I taught a class recently that required about 1,000 pages of reading for 12 people, but it would still have been 1,000 pages for 120 people.
  3. The lack of institutional support, for say grading, that larger classes receive.

Research expenditure. This metric measures the amount of money that the Institute receives when a faculty member spends their grant. Again, bigger is better. But this metric assumes that all research costs the same. It doesn’t: not all research costs the same amount to achieve, and funding agencies know that. The metric does not account for how much it costs to do the research in the first place.

H-index. I’ve already written about this.

Imagine my joy when someone suggested that we plot all three against each other for an individual. What would that mean? Someone with a larger class, in an area of research that is more expensive to do, and with a high index does well. So, should we optimize (which is the purpose of metrics, to drive behaviour) for large classes at the expense of the opportunities that students get from small ones? Should we optimize for expensive and popular research, and ignore the intellectual, social and political good that might come from less expensive research areas? Should we give even more legitimacy to the papers that form an h-index, and not ask about the papers that were unpopular but changed a person’s thinking, deepened their intellect…?

Needless to say, this epitomizes all that worries me about metrics. The desire to rank and compare, and to use numbers to support that, is to think uncritically. Sadly, it’s all too common in academia.

Why I Wish to Keep my Teaching Comments Out of My Evaluation

In academia, academic management, empirical on September 21, 2011 at 8:30 am

I’ve written a lot about metrics in the past; today my focus is on how qualitative data is generated and the implications for evaluation. I am aware that my management (and I use that term deliberately, since this is an evaluation situation) want to see the comments that students write. They currently only see the numeric scores. Their argument is that the comments would enrich their ability to evaluate my teaching.

But, I find myself very resistant to the idea.

First, how do comments shed light on teaching? How do the comments, often typed out hastily in the throes of week 15 of a 16-week semester, explain the ebb and flow of the class, the work I did to bring the class together, to draw the timid into discussion, to manage the differences in perspectives among class participants, to listen to and counsel the students who brought me problems related not to the class but to their lives, their struggles and joys? These are subtleties of the experience that I’ve never seen in students’ comments. Not surprisingly: they’re not teachers! Teaching is an intimate and deep experience, one that can only be truly understood through experiencing the classroom. I understand the desire to measure it, but teaching evaluations are only partial instruments, hence the ability to improve the scores without improving the actual teaching. Adding comments won’t change that.

Second, I have a particular concern as a woman. I am sure I am not alone in having comments about my body as part of the feedback. It’s tough enough knowing that as a woman my body and its “problems” are a part of the students’ discourse. But I accept that to be young is not always to be thoughtful or kind, and I teach despite that, knowing that I get to keep those indiscretions out of the professional discourse about me. While I respect my all-male management, I find the idea that they could read remarks about my body embarrassing. It transforms an annoying inequity confronted by female scientists into a public humiliation.

And that’s why I don’t want my teaching comments made part of my evaluation.

Metrics: Just Because You Can Doesn’t Mean You Should

In academia, academic management, discipline on June 6, 2011 at 8:28 am

The Chronicle of Higher Education recently reported that a journal-ranking system in Australia has been cancelled. It had caused a lot of controversy. Explaining why:

Sen. Kim Carr, Australia’s minister for innovation, industry, science, and research, announced on Monday that the rankings would be jettisoned. “There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings,” Mr. Carr said in a written statement. Instead of rankings, he said, the Australian system will incorporate “journal quality profiles.” Mr. Carr added that “the removal of the ranks and the provision of the publication profile will ensure they will be used descriptively rather than prescriptively.”

This was also a problem in the metrics effort we studied. But what was also a problem, subsequently, is that once any metrics program had been used “inappropriately” (in this case to conduct layoffs), every initiative that followed was greeted with healthy suspicion. And why not? Once you use a metrics initiative like that, it’s pretty easy to see why people would be skeptical about anything that followed it. Of course organizations can continue to “enforce” metrics initiatives, and we learnt that when they did, people learnt how to creatively report and count.

H-Index versus Your Index

In academia, computer science, discipline, HCI on June 2, 2011 at 10:49 am

The h-index is a metric for assessing the impact of scholarly contributions using citation counts: rank your papers from most to least cited, and your h-index is the largest number h such that your top h papers each have at least h citations (that is, you count down the list 1, 2, 3, … until a paper’s citation count falls below its position).
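The counting rule above is simple enough to sketch in a few lines of code. This is an illustrative sketch, not any official implementation; the function name and the example citation counts are made up for the purpose of the demonstration.

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts.

    Sort counts from highest to lowest, then walk down the list:
    the h-index is the last position i where the i-th paper still
    has at least i citations.
    """
    counts = sorted(citations, reverse=True)
    h = 0
    for position, cites in enumerate(counts, start=1):
        if cites >= position:
            h = position
        else:
            break
    return h

# A hypothetical record: five papers with these citation counts.
print(h_index([10, 8, 5, 4, 3]))  # 4: the top 4 papers each have >= 4 citations
```

Note that the h-index is insensitive to how citations are distributed among the top papers: one paper with 1,000 citations moves the index no more than one with 10, which is part of why the post below asks whether your most-cited papers are really your most important ones.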

My question: if you had to pick the papers that form your h-index — or, to make it easier, your top three most-cited papers — would you pick the same ones?

No offense to my collaborators on those highly cited papers, but I am disappointed that a couple of papers that have had more influence on me have missed the list. There’s a paper I wrote with Jim Herbsleb called “Conceptual Simplicity Meets Organizational Complexity.” It was a write-up of our research focused on a corporate-wide metrics program.

I think it’s the paper I’ve written about most in this blog. Why? Because metrics are pervasive, and many of the problems we found in the paper appear in other settings. For example, I wrote about the apparent difficulty of computing University ranking metrics, and it echoes so much of what we saw in our research. Frequently there’s a gulf between those who want and decide the metrics and those who are the object of those metrics; that gulf is responsible for poor metrics. And just like the technically oriented corporation we studied, I’ve seen it in the engineering-oriented University I am in. We are seduced by numbers because they are readily computable, but, like the professor who asked the question about quality, I hear far less about whether they are the right things to know. Just because you can know them doesn’t mean that they are the right things to know.

And so I return to the h-index. The reason that this paper is not on the list is that citations are a measure of something, but they are not the most effective measure of personal-professional development. The paper on metrics has been very influential in my thinking, about my research and about how I navigate academia. So, what would be in your top-ranked papers, and why?