31st May 2011: Dropped

For the past several years, in a developmental editing role, I have worked with academics on their grant applications and publication track records. The Australian Research Council’s Excellence in Research for Australia (ERA) initiative has been one external driver of this work. Minister Kim Carr’s announcement on 30th May that he is ending ERA’s journal ranking system has renewed debate, with incisive criticism from Anna Poletti and Josh Gans.

The ARC originally conceived ERA’s 2010 journal rankings to bring evidence-based metrics and greater transparency to the higher education sector. Its Excel spreadsheet of 19,000 ranked journals was a controversial but useful tool to discuss with academics their ‘target’ journals and in-progress work. The team that built the Excel spreadsheet benchmarked the project against similar exercises in the United Kingdom, Europe and New Zealand. Whilst there was confusion about the final rankings of some journals, ERA 2010 was a move in the direction of Google’s analytics and ‘chaordic’ projects.
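
In practice, the spreadsheet lent itself to simple filtering. Here is a minimal sketch, in Python with pandas, of the kind of shortlist I would prepare before a ‘target journal’ discussion; the file name and the column headings are assumptions for illustration, not the ARC’s actual layout.

```python
# A rough sketch, assuming a spreadsheet with "Title", "FoR" and "Rank" columns:
# filter the ERA 2010 journal list by Field of Research (FoR) code and rank
# to build a shortlist of 'target' journals for a discipline.
import pandas as pd

def target_journals(path, for_code, ranks=("A*", "A")):
    """Return journals whose FoR code starts with for_code and whose rank is in ranks."""
    journals = pd.read_excel(path)
    matches = journals[
        journals["FoR"].astype(str).str.startswith(for_code)
        & journals["Rank"].isin(ranks)
    ]
    return matches.sort_values(["Rank", "Title"])

# Example: A* and A journals under FoR 16 (Studies in Human Society).
shortlist = target_journals("era_2010_journal_list.xls", "16")
print(shortlist[["Title", "FoR", "Rank"]].to_string(index=False))
```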

Minister Carr gave the following reason for ending the journal rankings:

“There is clear and consistent evidence that the rankings were being deployed inappropriately within some quarters of the sector, in ways that could produce harmful outcomes, and based on a poor understanding of the actual role of the rankings.

“One common example was the setting of targets for publication in A and A* journals by institutional research managers.”

Consider a better-known ranking alternative to ERA: Hollywood’s Academy Awards. Studios invest hundreds of thousands of dollars in lavish marketing campaigns for their films. The nominees gain visibility and bargaining power in the film industry and for ancillary marketing deals. The winners gain substantive, long-term career and financial benefits, and not just a guest appearance on the television series Entourage. Success goes to the resilient. A similar dynamic to ERA 2010 plays out in the quarterly rankings of mutual fund managers, and in subcultures like the 1978-84 post-punk or ‘new wave’ music movement which ushered in MTV’s dominance.

ERA’s developers appear to have made three mistakes. First, there were inconsistencies between the draft and final rankings which remain unexplained and which galvanised public criticism from academics. Second, its developers may not have considered the ‘unintended’ yet ‘real-world’ decisions that institutional research managers would make using ERA data: poaching high-performing researchers from competitors, closing low-ranked journals, reengineering departments, and evaluating the research components of promotions applications. If this sounds scary, you probably haven’t worked on post-merger integration or consortia bids. Third, the choice of letter codes – A*, A, B, C and unranked – rather than a different descriptive measure introduced subtle anchoring, framing and representativeness biases into the ERA 2010 journal rankings.

Academics often knew what ERA sought to explicitly codify, yet this tacit knowledge could be fragile. For instance, Richard Slaughter spent significant time during a Swinburne Masters in strategic foresight distinguishing between the field’s flagship journal (Elsevier’s Futures), the savvy new entrant (Emerald’s Foresight), and the critical vanguard (Tamkang University’s Journal of Futures Studies). Each journal had its own history, editorial preferences, preferred methodologies, and delimitations. You ‘targeted’ each journal accordingly, and sometimes several at once if an article was controversial. ERA’s draft rankings reflected this disciplinary understanding but the 2010 final rankings did not. Likewise, to get into the A*-ranked journal International Security, or to get a stellar publisher for international politics – Cambridge, Princeton, Yale, MIT – can take several years of drafting, re-drafting, editing, seminars and consulting with colleagues and professional networks. An influential book from one of these imprints can take five to seven years, from ideation to first journal reviews. The ‘quality is free’ only in the final manuscript.

This presented a challenge to institutional research managers and to university workload models. This developmental time can inform teaching, seminars, conference panels with exemplars, and peer networking. But it doesn’t necessarily show up quickly as a line-item that can be monitored by managers or evaluated by promotions committees. Instead, it can look like ‘dead time’ or high-reward gambits which have not paid off. Thus, the delays can be detrimental and could affect institutional perceptions of academic performance. Institutional research managers also may not have the scope to develop this tacit knowledge outside their own disciplinary training and professional expertise.

So, like Hollywood producers, the institutional research managers possibly resorted to the A* and A journal codes as visible, high-impact, high-reward rankings. It was a valuable, time-saving short-cut through complex, messy territory. An academic with 15 A* and A level publications looked more convincing on paper than an academic with 30 B and C level papers over the same period. A research team with A* and A level publications would be well positioned for ARC Discovery and Linkage grants. Australian Government funds from the annual research data collection had halo effects and financial benefits for institutions, as Academy Award nominations have for film studios. It can be easier to buy in expertise like professors and ambitious young researchers than to try and develop would-be writers. Rather than a “poor understanding”, I suggest the institutional research managers had different, perhaps less altruistic goals.
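
To make the short-cut concrete, here is a hypothetical scoring sketch; the letter-code weights are invented for illustration and are not drawn from any university’s actual workload or performance model.

```python
# Hypothetical illustration of the A*/A short-cut: weight each publication by its
# ERA letter code and tally a record. The weights below are invented assumptions.
RANK_WEIGHTS = {"A*": 4, "A": 3, "B": 2, "C": 1, "unranked": 0}

def record_score(ranks):
    """Sum the (hypothetical) weights for a list of ERA journal ranks."""
    return sum(RANK_WEIGHTS.get(rank, 0) for rank in ranks)

# 15 A*/A publications versus 30 B/C publications over the same period.
selective_record = ["A*"] * 7 + ["A"] * 8
volume_record = ["B"] * 18 + ["C"] * 12

print(record_score(selective_record))  # 52
print(record_score(volume_record))     # 48
```

Under weights like these, the smaller, selective record edges ahead; a manager skimming a CV for A* and A codes does not even need the arithmetic.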

This was clearly a different role from what Carr and the ERA developers had intended, and had conveyed to me at a university roadshow meeting. It was a spirited and valuable discussion: I pointed out to the ARC that a focus largely on A* and A level articles meant that 80% of research outputs were de-prioritised, including many B-ranked sub-field journals. However, there were alternatives to scrapping the system outright (or shifting to Field of Research codes and strengthened peer review): Carr might have made the inclusion and selection criteria for journals more public; could have addressed open publishing, and new and online journals; changed the ranking system from letter codes to another structure; and accepted some of the “harmful outcomes” as Machiavellian, power-based realpolitik which occurs in universities: what the sociologist Diane Vaughan calls “institutional deviance”. This may still happen whatever solution Carr and the ERA developers end up devising.

Perhaps if Carr had read two management books he would have foreseen the game that institutional research managers played with the ERA 2010 journal rankings. Jim Collins’ Good To Great (HarperCollins, New York, 2001) counselled managers to “get the right people on the bus”: A* and A level publishing academic stars. Michael Lewis’ Moneyball (W.W. Norton & Co, New York, 2003) examined how Oakland A’s general manager Billy Beane used sabermetrics – performance-based sports statistics – to build a competitive team, improve his negotiation stance with other teams, and maximise his training budget. Beane had to innovate methodologically: he didn’t have the multi-million dollar budgets of other teams. Likewise, institutional research managers appear to have used ERA 2010 like sabermetrics to devise optimal outcomes based on university research performance and other criteria. In their eyes, not all academics have an equal performance or scholarly contribution, although each can have creative potential.

To me, the ERA 2010 journal rankings are still useful, depending on the appropriate context. They can inform discussions about ‘target’ journals and the most effective avenues for publications. They can be eye-opening in providing a filter to evaluate the quantity versus high-impact quality trade-offs in some publication track records. They have introduced me to journals in other disciplines that I wasn’t aware of, thus broadening the ‘journal universe’ being considered. They can be a well-delivered Platonic shock to an academic to expand their horizons and time-frames. The debate unleashed by Carr’s decision will be a distraction for some; others will instead focus on the daily goals and flywheel tasks which best leverage their expertise and build their plausible, preferred, and personal futures.

9th March 2010: ERA Strategies for ‘Disappeared’ Academic Publication Records

Two separate meetings on career directions: Where do you want to be in 3-to-5 years? What actions can you take to move toward these goals?


Collaborator Ben Eltham has written a piece on how the 2010 final rankings for Excellence in Research for Australia (ERA) have affected his academic publishing record: ‘When Your Publication Record Disappears’. A title reminiscent of Nine Inch Nails’ song ‘The Day the World Went Away’.


For the past year I have been dealing, professionally, with issues that Ben raises. Whilst journal publications are often viewed as irrelevant outside academia, they are crucial to the academic promotions game, and to getting external competitive grants. A personal view:

ERA is the Rudd Government’s evaluation framework for research excellence, developed by the Australian Research Council, and includes a ranked list of academic journals and discipline-specific conferences. The ARC released the final ranked list in February 2010. It may be revised and updated in the future, but not this year.


The ARC’s goal for this ranked list was to ensure it was comprehensive, peer-reviewed, would stand up to international scrutiny, and would provide guidance to administrators, managers and researchers on quality research outputs.


In the near term, ERA’s 2010 final rankings will require adjustments to our academic publication records. Some of the journals we have published in, such as M/C, were revised down or excluded, probably because of perceived issues with their peer review process. More starkly, ERA’s guidelines for academic publications filter out most of my writings over the past 15 years: magazines and journals that no longer exist (21C, Artbyte), websites (Disinformation), magazine articles with original research (Desktop, Marketing, Internet.au), unrefereed conference papers, technical reports, and contract research. The guidelines also do not usually include textbooks, research monographs, and working papers. The ‘disappearance’ effect that Ben describes also happens elsewhere: when Disinformation upgraded its site to new servers, we sometimes lost articles in the transition that writers had no back-ups of.

Others are in a tougher position: mid-career academics who have taught and not published or applied for external competitive grants, or who understandably focussed on the quantity of articles for DEST points rather than ERA’s focus on quality-ranked journals and ‘field of research’ codes. ERA has caused a dramatic re-evaluation for some mid-career and senior academics of their publication records, impact factors, and other esteem measures.


In response to Ben’s piece, I mentioned the following possible strategies:


1. Know your University’s policy and procedure on ‘research active’ status and how it is calculated. There may be variations of this at Faculty and School level. Once you finish your PhD and have Early Career Researcher status for the next 5 years, focus on building your publication record, using internal grants as a rehearsal for external grants, forming a collaborative team, and establishing networks with industry and government partners. The ARC does not want ERA to be used for academic performance reviews, but this is likely to happen.


2. Send in all relevant research outputs to your University’s annual HERDC data collection. Although there is usually at least a 12-month delay in this, HERDC outputs mean you contribute to the block grant funding that your University can receive for research. Some of this is usually passed on to individual researchers through School and Faculty level research accounts. Where you can, include citation data using ISI Web of Knowledge or Scopus.


3. Develop a ‘program of research’ with a 3-to-5 year time-frame. The ‘program’ should encompass multiple projects, collaborations, and creative work or research outputs. This helps the post-PhD transition to ECR status, and ensures you don’t try to put everything into one or two journal articles. One challenge is to first conceptualise what this ‘program of research’ might be, and then to translate it into the ‘field of research’ codes that are used as institutional metadata. A second is to be able to articulate to others how your approach differs from others in the field; what your distinctive, significant and original contributions may be; and how you will achieve your goals, on-budget, and within the specified time-frame.


4. Scholarly published books, i.e. by academic publishers, are counted for both the ERA and HERDC data collections. The problem Ben notes for history academics is also a problem for political scientists, who may publish in top journals, but whose life’s work usually goes into a major book for Cambridge, Princeton, Routledge, Georgetown, Harvard, Yale, or a similar academic publisher. The ARC does not have a list of academic publishers.


A second problem:


The ‘research active’ policies and procedures at many universities give a book the same points as two or three articles published in an A* or A-level journal. This points system seriously underestimates the work involved in conceptualising and writing the book, and then getting it through the publisher’s developmental editing process. So, as an incentives scheme it may have subtle and unanticipated effects on knowledge creation.
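
As a back-of-the-envelope illustration, with figures invented purely for the sake of argument:

```python
# Invented figures only: if a scholarly book earns the points of three A*/A
# articles but takes five years of work, articles win on points-per-year.
BOOK_POINTS, BOOK_YEARS = 3, 5        # assumption: book = three A*/A articles, five years
ARTICLE_POINTS, ARTICLE_YEARS = 1, 1  # assumption: one A*/A article per year of work

print(BOOK_POINTS / BOOK_YEARS)        # 0.6 points per year for the book
print(ARTICLE_POINTS / ARTICLE_YEARS)  # 1.0 points per year for journal articles
```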


5. Get your research outputs into your University’s institutional repository. This may be run by IT Services or Library staff. The repository may have different policies and scope of what it will accept: I have publications at both Victoria and Swinburne universities, and each institution is slightly different. Take the time to include the relevant metadata for each submission, especially the 4-to-6 digit ‘field of research’ codes. Keep the last version of the article you submitted to an academic journal, because, due to publisher copyright and intellectual property contracts, this is often the only version that an institutional repository can publish.

6. Archive your ‘primary’ research and develop a stream of publications. Ben probably approached his excellent Meanjin work with the mindset of a journalist and long-form essayist. He did 20 interviews for one piece. This is more work than goes into many articles for B- and C-level journals, and even for some A-level ones. He could easily reuse and revisit this ‘primary’ research over the next three or four years in academia. For example, a paper that reviews current frameworks to identify a knowledge gap or research problem could lead to a methodology paper, then to comparative case studies, and then to an evaluation or meta-analysis study.

5th March 2010: ARC Bootstrap Process

House cleaning, gardening, and article writing.

Working through the assessment exercises from Timothy Baldwin, William Bommer and Robert Rubin’s textbook Developing Management Skills: What Great Managers Know and Do (New York: McGraw-Hill, 2008), book site here.

Watched Stanford entrepreneurship lecture on Adding Value to Companies.

Martin Van Creveld in a 1998 television interview: soft-spoken, dismisses claims that the ‘future of war’ will be dominated by ‘cyberterrorism’ and other Revolution in Military Affairs trends.

A colleague told me this week how a professor used the Australian Research Council’s national competitive grants program as a bootstrap process for promotion to dean. First, they established their expertise and publication track record, and created a cross-institutional, collaborative research team. Second, they split the ARC grant proposal into different components, delegated each to different team members, and then reassembled them into a completed proposal. Third, they ramped up the number of applications to 15-to-20 per year, with a 50% success rate. The grant revenues made a significant contribution to the department’s funding. The professor was soon promoted to dean.