How Might Research Performance Metrics Be Used?

At this week’s ARMS 2013 conference I saw Katie Jones from New Zealand’s Unitec Institute of Technology present on “Research Performance Metrics: The Stick Followed by the Carrot.” Jones gave an informative and well-received presentation on how Unitec has developed a traffic light system that tracks program-level research and informs research development strategies.


In the Q&A period a conference delegate pleaded (paraphrased from memory): “I’m an academic. I also work at weekends. Administrators need to work with academics . . . we need to work together.” Several conference delegates tweeted the exchange using the #ARMS2013 hashtag.


The exchange highlights an important question: How might research performance metrics be used?


My personal view is that research performance metrics are the start of a strategic conversation about program-level research, not the end. The inferences drawn from the data, and how and why they are drawn, can be just as important as the data itself. Richard Heuer’s influential work on the psychology of intelligence analysis illustrates this point in another domain.


You often need to go beyond initial or surface interpretations. Red scores can reflect milestone slippage, longer-than-expected theory-building cycles, or unforeseen complexities. Green scores can be ‘gamed’ via tactics such as salami publication (splitting research results into several micro-publications) or adding your name as a co-author to publications that research students have largely written.
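A minimal sketch can make this concrete. The function, metric, and thresholds below are my own illustrative assumptions, not Unitec’s actual model; the point is that two very different situations can produce the same colour, which is why the label alone is only the start of the conversation:

```python
# Hypothetical traffic-light score for a research program.
# The metric (outputs vs. annual target) and thresholds are
# illustrative assumptions, not Unitec's actual model.

def traffic_light(outputs: int, target: int) -> str:
    """Classify a program's research output against its target."""
    ratio = outputs / target
    if ratio >= 1.0:
        return "green"
    if ratio >= 0.7:
        return "amber"
    return "red"

# A program mid-way through a long theory-building cycle:
print(traffic_light(outputs=3, target=10))   # red
# A program that hit its target via salami publications:
print(traffic_light(outputs=12, target=10))  # green
```

Both labels are “correct” as far as the metric goes, yet neither tells the evaluator whether the underlying research activity is healthy.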


An effective research program evaluator will be aware of such issues.


Jones is passionate about Unitec’s approach to research performance metrics and its achievements to date. Her background as an anthropologist with experience in the health and investment sectors enables her to transfer insights from those domains into a university administration context.


Perhaps one day Jones’ traffic light system will evolve into the kind of factor models that Standard & Poor’s and other financial ratings agencies use to uncover alpha: investment returns that outperform a passive benchmark. For those interested, Richard Tortoriello’s book Quantitative Strategies for Achieving Alpha summarises the S&P factor model approach.


I also feel empathy for Jones’ Q&A questioner: writers often spend weekends on their publication drafts. This creative, developmental drafting process is not always captured in research performance metrics, nor always adequately understood. Having more university administrators who publish their own research may be one strategy to bridge the perceived workplace divide between academics and administrators.