11th April 2012: How Academia Kills Writing

I recently had some productive exchanges with Roy Christopher and Axel Bruns on academic writing strategies. Christopher wrote up his insights:

I am sympathetic to all of these conditions, but I have found it important to cultivate the ability to write at any time, in any circumstance — even if it’s just collecting thoughts about something. I keep a pen and paper in my pocket at all times, pen and pad by my bed, notebook(s) in my backpack and all over the house. I do find that I need large chunks of uninterrupted time to surmount larger writing tasks, but the ubiquity of computers, portable or otherwise, makes writing anywhere a much more viable option. [emphasis added]

Christopher’s insight led to an email exchange on the barriers that academia poses for writers. I think about this a lot in my current university gig as a developmental editor, where I also work with a talented copy-editor. Here are six ways that academia kills writing:

1. Perverse incentive structures. Christopher and I are both intrinsically motivated writers who approach writing as a craft. We blog, write journal articles and in-progress PhD dissertations, and Christopher has several book projects. In contrast, some academics I know write only for performance-based incentives. They play games such as writing fake conference papers, sending book manuscripts to vanity publishers, and publishing in obscure international journals. This leads university research administrators to keep changing the incentive structures. It also introduces scoping problems into competitive grants: the journal article(s) get written only if the money is awarded. It’s very rare that I find an intrinsically motivated writer: maybe an Early Career Researcher who has just finished their PhD, or a senior academic intent on making a contribution to their field or discipline. I wish academics had a more hip-hop or punk sensibility and just did the work, regardless of the institutional incentives.

2. Misuse of university research metrics. The Australian Research Council’s Excellence in Research for Australia (ERA) initiative shifted the research conversation to performance and quality-based outputs. This also led to games such as poaching academics who had ERA publishing track records. However, it sometimes led to a narrow focus on A* and A-level journals without changes to workload models or investment in training for academic skills and robust research designs. Not everyone is Group of 8, Harvard or Stanford material, or at least not at their career stage. The use of metrics must be counter-balanced with an understanding of intellectual capital and development strategies. To date, the use of ERA and Field of Research metrics is relatively unsophisticated, and it can actually de-value academic work and publishing track records.

3. A failure to understand and create the conditions for the creative process. The current academic debate about knowledge creation swings between two extremes. On the one hand, there is budget-driven cost-cutting similar to GE’s Work-Out under Jack Welch or private equity turnarounds. On the other, there is a desire to return to a mythical Golden Age in which academics are left alone with little accountability. Both views are value-destructive. The middle ground is to learn from Hollywood studios, music producers, and academic superstars about the creative process, and to create the conditions for it. This means allowing time for insights to emerge and for academics to become familiar with new areas. It means being pro-active in forming collaborative networks rather than relying on conferences. It means treating each academic publication as an event and leveraging it for maximum public impact and visibility. Counterintuitively, it can also mean setting limits, stage gates, and ‘no go’ or ‘abandon’ criteria (real options theory can be a useful tool here; a minimal sketch follows below). This is one reason why Christopher and I sometimes exchange stories about the strategies that artists use: to learn from them. This is a different mentality from that of some university administrators, who expect research publications to emerge out of nowhere (a view often related to the two barriers above).
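Since stage gates and ‘abandon’ criteria can sound abstract, here is a minimal sketch of how such a decision rule might look in code, in the spirit of real options theory. Everything in it is a hypothetical illustration: the single-stage model, the function names, and the numbers are my assumptions, not a model anyone in the exchange proposed.

# A minimal sketch of an 'abandon' stage gate, framed as a real option.
# All names and numbers here are hypothetical illustrations.

def continuation_value(p_success, payoff_success, payoff_failure, ongoing_cost):
    """Expected value of pushing a research project through one more stage."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure - ongoing_cost

def should_abandon(p_success, payoff_success, payoff_failure, ongoing_cost, salvage_value):
    """Abandon when the salvage value (the time freed for other projects)
    exceeds the expected value of pressing on."""
    return salvage_value > continuation_value(
        p_success, payoff_success, payoff_failure, ongoing_cost)

# Example: a stalled article with a 25% chance of an A-level acceptance.
# Units are arbitrary 'career value' points; the comparison is the point.
if __name__ == "__main__":
    cont = continuation_value(0.25, 100, 10, 30)
    print(f"Continuation value: {cont:.1f}")
    print("Abandon?", should_abandon(0.25, 100, 10, 30, salvage_value=20))

The design point is simply that ‘abandon’ becomes a calculable comparison rather than a taboo, which is what the stage gate is for.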

4. Mystifying the blind peer review process. What differentiates academic research from other writing? Apart from the research design, many academics hold up the blind peer review process as a central difference. Usually, a competitive grant or a journal article goes to between two and five reviewers, who are often subject matter experts. The identities of the author(s) and the reviewers are kept secret from each other. Supposedly, this enhances the quality of the review process and the candour of the feedback provided. Having studied the feedback on 80 journal articles and 50 competitive grants, I disagree. The feedback quality is highly reviewer-dependent. The lack of transparency in blind peer review allows reviewers to engage in uber-critical reviews (without constructive or developmental feedback), disciplinary in-fighting, or screeds on what the reviewer wished had been written. Many academic journals have no rejoinder process through which authors can respond. These are problems of secrecy, and they can be avoided through more open systems (a lesson from post-mortems on intelligence ‘failures’).

5. Being set up to fail through the competitive grants process. A greater emphasis on research output metrics has prioritised success in competitive grants. Promotions committees now look for a track record in external grants for Associate Professor and Professor roles. Australian universities do not often have endowed chairs or institutional investment portfolios, so they are more reliant on grant income. Collectively, these trends translate into more pressure on academics to apply for competitive grants. However, success is often a matter of paying close attention to the funding rules, carefully scoping the specific research project and budget, developing a collaborative team that can execute on the project, and having the necessary track record in place. These criteria are very similar to those that venture capitalists use to evaluate start-ups: opportunity evaluation, timing, and preparatory work are essential. Failing to meet these criteria means the application will probably fail and the grant-writing time may be wasted: most competitive grants have a 10-20% success rate (see the back-of-envelope sketch below). Some universities have internal grant schemes that let new academics learn these dynamics before applying to an external agency. In all cases, the competitive grant operates as a career screening mechanism. For institutions, these grants are ‘rain-making’ activities: the money flows to the institution rather than to the individual academic.
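To make the screening dynamic concrete, here is a back-of-envelope sketch of the expected return on grant-writing time. Only the 10-20% success rate comes from the discussion above; the award size, the preparation hours, and the function name are hypothetical assumptions.

# A back-of-envelope sketch of the grant-writing gamble described above.
# Only the 10-20% success rates come from the text; the rest is assumed.

def expected_return_per_hour(success_rate, award_value, writing_hours):
    """Expected grant income per hour of application-writing time."""
    return (success_rate * award_value) / writing_hours

if __name__ == "__main__":
    # Assume a $300,000 award and 200 hours of preparation.
    for rate in (0.10, 0.20):
        per_hour = expected_return_per_hour(rate, 300_000, 200)
        print(f"Success rate {rate:.0%}: expected ${per_hour:,.0f} per hour")

Remember, too, that the income in this sketch flows to the institution, not the academic: the individual’s return is the career screening signal, not the money.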

6. A narrow focus on A* and A-level journals at the expense of all other forms of academic writing. The ARC’s ERA and similar schemes prioritise peer-reviewed journals over other forms of writing. (This de-valued large parts of my 18-year publishing history.) The 2009 and 2010 versions of ERA had a journal ranking list, which led many university administrators I know to focus on A* and A-level journals. I liked the journal ranking list, but I also saw that it had some perverse effects over its 18 months of use. It led to on-the-fly decisions made because of cumulative metrics in a publishing track record. It destroyed some of the ‘tacit’ knowledge that academics had about how and why to publish in particular journals. It de-valued B-ranked journals that are often sub-discipline leaders. It helped to create two groups of academics: those with the skills and training to publish in A* and A-level journals, and those without. It led to unrealistic expectations of what was needed to get into an A* journal like MIT Press’s International Security: a failure to understand creative and publishing processes. The narrow emphasis on journals ignored academic book publishers, CRC reports, academic blogs, media coverage, and other research outputs. Good writers, editors and publishers know differently: a high-impact publication can emerge from the unlikeliest of places. As of April 2012, my most internationally cited research output is a 2009 conference paper, rejected from the peer review stream due to controversy, that I co-wrote with Ben Eltham on Twitter and Iran’s 2009 election crisis. It would be excluded under the above criteria, although Eltham and I have since written several articles for the A-level journal Media International Australia.

Awareness of these six barriers is essential to academic success, and to avoiding co-dependence on your institution.