For two decades the ARC has been one of Australia’s most important funding agencies for competitive research grants. ARC funding success provides career prestige and national visibility for an academic researcher or collaborative research team. The ARC’s Discovery and Linkage rounds have a success rate of 17-19%, whilst the DECRA awards for Early Career Researchers (ECRs, within five years of PhD completion) have had a success rate of about 12%. An ARC grant is a criterion that many promotions committees use for Associate Professor and Professor positions. This competitiveness means that a successful ARC grant application can take a year to write. The publications track record needed for successful applicants can take five to seven years to develop.
The ARC’s freeze decision is symptomatic of a deeper sea change in Australian research management: the rise of ‘high finance’ decision-making more akin to private equity and asset management firms.
Richard Hil’s recent book Whackademia evoked the traditional, scholarly Ivory Tower that I remember falling apart during my undergraduate years at Melbourne’s La Trobe University. Hil’s experience fits a Keynesian economic model of universities. Academics held tenure for life and rarely switched universities. There was no ‘publish or perish’ pressure. There was a more collegial atmosphere with smaller class sizes. Performance metrics, national journal rankings, and research incentive schemes did not exist. The “life of the mind” was enough. Universities invested in academics for the long-term because they had a 20-30 year time horizon for research careers to mature. There was no intellectual property strategy to protect academics’ research or to create different revenue streams.
‘High finance’ decision-making creates a different university environment to Hil’s Keynesian Ivory Tower. Senior managers believe they face a hyper-competitive, volatile environment of disruptive, low-cost challengers. This strategic thinking convinced University of Virginia’s board chair Helen Dragas to lead a failed, internal coup d’état against president Teresa Sullivan with the support of hedge fund maven Paul Tudor Jones III. The same thinking shapes the cost reduction initiatives underway at many Australian universities. It creates a lucrative consulting market in higher education for management consulting firms. It influences journalists, who often take public statements at face value instead of doing more skeptical, investigative work.
The ARC has played a pivotal role in the sectoral change unfolding in higher education. Its journal rankings scheme, Excellence in Research for Australia (ERA), provided the impetus for initial organisational reforms and for the dominance of superstar economics in academic research careers. ERA empowered research administrators to learn from GE’s Jack Welch and to do forced rankings of academics based on their past research performance. ARC competitive grants and other Category 1 funding became vital for research budgets. Hil’s professoriate is now expected to mentor younger, ECR academics and to be ‘rain-makers’ who bring in grant funding and other research income sources. Academics’ reaction to the ARC’s freeze decision highlights that the Keynesian Ivory Tower has shaky foundations.
The make-or-buy decision in ‘high finance’ changes everything. Hil’s Ivory Tower was like a Classical Hollywood film studio or a traditional record company: invest upfront in talent for a long-term payoff. Combining ERA’s forced rankings of academic staff with capital budgeting and valuation techniques creates a world that is closer to private equity or venture capital ‘screening’ of firms. Why have a 20-to-30-year time-frame for an academic research career when you can buy in the expertise from other research teams? Or handle current staff using short-term contracts and ‘up or out’ attrition? Or change your strategy in several years to deal with a volatile market environment? Entire academic careers can now be modeled using Microsoft Excel and Business Analyst workflow models as a stream of timed cash-flows from publications, competitive grants, and other sources. Resource allocation decisions can then be made. ARC competitive grants and research quality are still important — but ‘high finance’ decision-making has changed research management in universities, forever.
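To make the valuation logic concrete, here is a minimal sketch of how a research career might be reduced to a discounted cash-flow calculation. The grant amounts, timings, and discount rate below are purely hypothetical assumptions for illustration, not figures from any university’s actual model:

```python
# Illustrative only: all cash-flow figures and the discount rate are
# hypothetical assumptions, not any institution's real budgeting data.

def npv(cash_flows, discount_rate):
    """Net present value of a stream of (year, amount) cash flows."""
    return sum(amount / (1 + discount_rate) ** year
               for year, amount in cash_flows)

# A hypothetical early career researcher's grant income over five years:
# a small internal seed grant, then three years of an external grant.
career_cash_flows = [
    (1, 20_000),   # internal seed grant
    (3, 120_000),  # external grant, year one
    (4, 120_000),  # external grant, year two
    (5, 120_000),  # external grant, year three
]

present_value = npv(career_cash_flows, 0.07)  # 7% discount rate (assumed)
print(f"NPV of hypothetical career income: ${present_value:,.0f}")
```

From a manager’s spreadsheet, two candidates (or two teams) become directly comparable numbers, which is precisely the ‘screening’ shift from the Keynesian Ivory Tower that this section describes.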
Today’s young academics face a future that may be more like auteur film directors or indie musicians: lean, independent, and self-financing.
Administrators have very little regard for academics. I’d say that 90 per cent of them can’t stand academics. They think we have it easy, going off to conferences and the like. They think that all we do is teach. They don’t understand the rest of the things we do like research and writing.
Australia’s universities are in a financial crisis. Federal and State government funding has been cut. International student numbers have fallen. Staff surveys are in flux. The University of Sydney, Australian National University, University of Tasmania, and La Trobe University have all announced cuts to academic staff and the closure of academic programs. Other higher education institutions are considering similar options. Into this fray comes Whackademia, a polemical ‘insider exposé’ written by Richard Hil, a senior lecturer at Southern Cross University and a visiting scholar in peace studies at University of Sydney.
Hil’s target is Whackademia: “the repressive and constricting work culture currently operating in our universities” (p. 22) which Hil believes has corrupted academic scholarship and led to the rise of manager-administrator control. Academics must now contend with “massification” and “student-shoppers”, including “full-fee-paying overseas students” (p. 18). Hil traces Whackademia’s birth and growth to the decision of Australian Federal Education Minister John Dawkins in the 1980s to change Australia’s higher education landscape and to charge student fees (p. 73). This “imposition of free-market ideology” (p. 55) has transformed education into a highly marketed commodity in which institutions rely on an army of casual academics who teach courses (p. 39). In particular, Hil criticises “a new generation of demand-led courses” that offer pseudo-knowledge to fee-paying students (p. 188).
Hil shares his student experiences in the 1970s at the University of Essex and Bristol University. He then taught at the University of the Sunshine Coast, Queensland University of Technology, and Southern Cross University. I had very different experiences and I wonder if there are some memory recall factors that may have influenced his self-narrative. I was an undergraduate student in cinema studies and politics at La Trobe University in the early-mid 1990s including a 1994 stint as a student journalist on the now-notorious Rabelais newspaper. I left my degree to pursue a freelance career in publishing before completing my degree in 2001. I pursued a Masters at Swinburne (in strategic foresight); interned in a research institute and saw it axed; worked for a Cooperative Research Centre and on a successful rebid; and did a second Masters (in counterterrorism). I have higher education student debt (and got my latest tax notice today). I worked on Swinburne’s 2008 audit by the former Australian Universities Quality Agency, and for the past three years as a research facilitator in Victoria University’s Faculty of Business and Law. I currently work on research programs, competitive grants, and commercial research contracts (pp. 145, 180), and worked previously for 18 months in quality assurance (p. 96). I am also a PhD candidate in Monash University’s School of Political and Social Inquiry. My career path has been like Billy Beane in the book and film Moneyball, or like the ‘fixer’ in Michael Clayton. This background and experiences informed how I interpreted Whackademia.
For Whackademia, Hil interviewed 60 academics from Australian universities (p. 24). In contrast to investigative journalists like Lawrence Wright, Steve Coll and William D. Cohan, none of Hil’s interviewees are named ‘on the record’, because Hil believes outspoken academics would be targeted by managers if they were (p. 69). Hil also wrote a column for several media publications under the pseudonym ‘Joseph Gora’ (p. 23). (In contrast, I have an on-going public blog thread on academia.) Hil is thus not as thorough in his interviews and research as Cohan and Wright are, nor as fair-minded as Coll can be. Instead, Hil believes that “the importance of their observations cannot be overstated” (p. 21) and that “complaint is rife throughout Whackademia” (p. 194). For instance, Hil notes the existence of “ghost work” (p. 167) that university workload models do not cover. Whackademia raises important issues that academics, managers, and administrators have discussed, and which the public should know about. Yet it does so primarily at the superficial level of complaints rather than through a sophisticated, multi-stakeholder approach to why these problems exist, and how they might be solved. Having managed a university complaints process I know that complaints can be significant or they can be noise due to personal factors. For every genuine complaint Hil raises I can provide either a similar and supportive anecdote or a counter-complaint about how administrators have to put up with academics. This is why Whackademia is best read as polemic or a collection of ‘water cooler’ anecdotes rather than as rigorous research: an observation that Hil’s school head pointed out to him during Whackademia’s drafting process (p. 175).
Perhaps the problem is that I am part of the group that Hil criticises: ‘para-academic’ administrators (p. 73) who are “obsequious devotees of micro-management” (p. 88) and who “do little or no research, and devote themselves with feverish intensity to form-filling, co-ordination duties and committee attendance” (p. 183). “Para-academics love this sense of impending doom”, Hil explains about the discussion of university budget processes, “it’s why they get up in the morning” (p. 184). This is pure mind-reading: Hil didn’t ask his ‘para-academics’ what they really felt, or do an ethnographic study. Are there ‘para-researchers’ as well? Administrators are described in negative stereotypes: as “performance-obsessed” (p. 132), as “university mandarins” (p. 16), as a “new organisational supremo” (pp. 171-172) and as an “administrative supremo” (pp. 181, 189). This is scapegoating and demonising a social group on the basis of their university HR contract status. Another problem is the proliferation of forms and “deadlines that suit administrators rather than academics” (pp. 93, 172). The worst administrators are those who calculate the workload model (p. 169). As a peace studies scholar Hil understands the power of language, framing, and ‘othering’. Why then does he ‘other’ administrators and ‘para-academics’, none of whom are interviewed or given an opportunity to respond to the many academic complaints?
In fact, university enterprise bargaining agreements (EBAs) differentiate between management and academics on the one hand, and administrators on the other. The EBA defines different incentive structures that also shape cultural perceptions between each group. Management and academics are paid on an academic salary scale. They receive performance-based incentives such as conference travel and institutional research funding. Administrators do not receive these privileges — even if they produce research or advise on the relevant policies — and their HEW salary scale is lower. They are often employed on short-term rather than continuing contracts. Hil omits several important things about administrators and ‘para-academics’. They may also be degree-qualified. They may see hundreds of academic CVs and competitive grants, so they can see patterns of success and failure. They can counsel academics not to make career-limiting decisions — which they may in fact have done. Hil raises a number of issues that are important to administrators: perceptions of academic flexible time (p. 14); full-time staff benefits compared with casuals (p. 20); and the potential misuse of leave applications (p. 187). But he then immediately dismisses these concerns as irrelevant rather than as signals of status envy. The psychological gambit Hil uses throughout Whackademia is called ‘shifting the blame’ and it weakens the book’s critique.
At the root of administrator and ‘para-academic’ concerns is a sense that academics get a preferential set of career, financial and research opportunities that administrators (and casuals) do not. Academics can then develop a sense of entitlement about these opportunities: a belief that they are superior or more gifted than the people around them. Compounding this, some academics do not live up to their role expectations and scholarship, and may attempt to ‘game’ the system. Administrators can tell this from institutional research data. Why should this behaviour be accepted and tolerated amongst scholars and professionals? Ted Gurr’s relative deprivation thesis; Leon Festinger’s cognitive dissonance; Barry Oshry’s organisational analysis; Daniel Kahneman and Amos Tversky’s biases and framing; and many others have explained why these dynamics exist, and why institutions and managers are very unlikely to change them anytime soon. The deeper dynamic that remains unexplored is the circulation of elites in universities, and meritocratic access to it.
Hil’s attack on administrators and ‘para-academics’ totally misses this debate and instead potentially contributes to the marginalisation of these valuable university staff. Many administrators don’t produce research because their universities don’t value and incentivise them as researchers in the same way as academics. Senior managers and the professoriate rarely act when this status difference is raised: status protection. I understand how to write and research: I have had many dialogues with university managers and academics on this, including how to interpret and use ERA journal rankings to develop a research program. These dialogues are a start to a much deeper conversation that is closer to what Hil wants to occur (p. 215). The other administrators I know often have deep institutional capital. Some academics mistreat administrators and ignore this expertise. Where Hil ‘essentialises’ identity I see a distinction that can be traced to the EBA, to HR contracts, and ultimately to past decisions by university management when they scope and create administrator roles. It’s a (university HR contract) decision that administrators have to live with but it doesn’t define them as people.
The problems that Hil and his interviewees highlight exist for important reasons – reasons not explained in Whackademia. Universities are what Canadian management scholar Henry Mintzberg describes as machine bureaucracies (and sometimes professional bureaucracies) that rely on workload models, policies, and procedures to manage staff. This form leads inevitably to elites, power struggles, patronage networks, information asymmetries, and career ambition. Hil presents a romanticised image of overworked academics and their Golden Age past, but I have seen and been caught in Machiavellian power struggles that felt like a Game of Thrones episode. I find lessons (not necessarily endorsed) in Henry Kissinger’s Harvard International Seminar and also in the troubled track record of defence intellectuals who sought to speak truth to power. Administrators do not blindly collaborate with managers and school heads as Hil suggests, or acquiesce to their whims. Instead, administrators are often negotiators in a multi-stakeholder network of different and competing interests. They can see the unintended consequences of change initiatives and often make process redesign requests that many academics are unaware of. They also often take the academic’s side on research issues.
Hil’s advice on career and research management is also problematic. Research administrators give “minimal attention” to “the intellectual content or social purpose of the research” (p. 133) but this is false: competitive grants do not succeed without a compelling, well-formed research program or project proposal. How can Hil know this unless he has attended the grant and project development meetings of many research teams? (I have.) The Australian Research Council (ARC) team that designed the Excellence in Research for Australia exercise had a bibliometrics background and benchmarked similar exercises in the United States, the United Kingdom, and New Zealand. The team were shocked at how managers used the ERA journal rankings – but this happens with any ranking system. The promotions criteria that Hil ascribes to ERA in fact existed prior to it: Associate Professor and Professor level academics are often promoted on the basis of their competitive grant and publications track record (p. 156). ARC grants are not a lottery: Hil might have talked with the ARC, ARC assessors or successful teams, or looked at the ARC’s funding guidance to applicants. Academics who follow Hil’s advice will damage their probability of ARC grant success and possibly their research careers. A quality assurance team would log Hil’s process for “on-line marking” and forms as candidates for a Lean/Kaizen exercise of process redesign and improvement to remove ‘muda’ or waste (pp. 178-179). I am on two academic research committees and we do not run our meetings like Hil describes (pp. 184-187), and nor would a commercial environment. I do not respect academics who attend meetings and don’t contribute on discussion items where they have expertise or roles, or who just attend to get workload points. These academics waste my and others’ attention and time.
I find that in contrast to Hil many academics lack basic time and project management skills, and would benefit from a methodology like David Allen’s Getting Things Done, the Pomodoro Technique, Personal Kanban, Lean Startup or Scrum (p. 141). These techniques resolve the ‘busyness’ dilemma that Hil and many of his interviewees raise, as do practices in agile project management and software development. In some cases these problems exist because of failures in strategic investment and infrastructure, and the continued existence of manual work processes when more humane alternatives are available. School heads make private judgments about resource allocation not on the basis of a “differential exercise of power”, “favor” or “today’s regulatory rationalities” (p. 91) but rather on a sense of how the academic has performed against the Minimum Standards for Academic Levels (MSALs) that the EBA defines for each academic level. My experience in talking with school heads is that the “more seasoned academics who are perhaps most resistant to the new order” (p. 91) get the most attention rather than the academics who actually perform well at their MSALs. In such cases, the problem isn’t school heads or administrators: it’s potentially the academic’s failure to uphold the professional standards of their discipline or the long-term effects of institutionalisation. There may also be personal mitigating factors that have to be handled fairly and sensitively. But I also know many hard-working and research-productive casuals who deserve full-time status.
Alternatively, academics might work for a “paradigm-buster” (pp. 208-215) like the Oases Graduate School in Hawthorn, Victoria; Newcastle’s annual This Is Not Art festival; the think tank Centre for Policy Development; or the media outlet New Matilda. Several of these were founded by Generation X university graduates. How does Hil know that today’s graduates do not have the civic awareness he values? Who did he interview? What student experience surveys did he look at? We don’t know how Hil arrived at his opinion or what evidence he considered.
“There remains a widespread belief that academics have it good when compared to workers elsewhere,” Hil notes (p. 13). “In some cases this is probably true.” It’s an initial observation that could have been explored further, or that could be the basis for a very interesting comparative research project. Hil doesn’t explore it further, nor does he examine the varied causes of the problems and complaints that he documents. He appears to take many of his interviewees at face value: we don’t know if there was selection bias in his interview sample, what the inclusion criteria were, who was interviewed but not included, and who was nominated but not interviewed. There could be confirmation bias and possible sampling effects from specific academic disciplines, sub-disciplines (Hil interviews several peace studies colleagues), NTEU union members, and universities. The media outlets that Hil samples have each crafted their own crisis narratives about universities, and so their reportage can have subtle information biases. This is why I find Cohan, Coll and Wright’s investigative journalism a more viable model: they interview many people and show several sides to a situation and organisation.
Regrettably, Whackademia contributes to the very “negative public mythology” (p. 13) about universities and academics that Hil diagnoses and seeks to counter. In part the problem is when a term like ‘audit culture’ or ‘free-market ideology’ becomes the accepted frame and can thus be a barrier to further differential diagnosis and emergent, reflective insights. If Hil considers writing a follow-up book then he might look to the scholar Rakesh Khurana (From Higher Aims To Hired Hands) as one possible critical model to use.
I am sympathetic to all of these conditions, but I have found it important to cultivate the ability to write at any time, in any circumstance — even if it’s just collecting thoughts about something. I keep a pen and paper in my pocket at all times, pen and pad by my bed, notebook(s) in my backpack and all over the house. I do find that I need large chunks of uninterrupted time to surmount larger writing tasks, but the ubiquity of computers, portable or otherwise, makes writing anywhere a much more viable option. [emphasis added]
Christopher’s insight led to an email exchange on the barriers that academia poses for writers. I think about this a lot in my current university gig as a developmental editor. I also work with a talented copy-editor. Here are six ways that academia kills writing:
1. Perverse incentive structures. Christopher and I are both intrinsically motivated writers who approach it as a craft. We blog, write journal articles and in-progress PhD dissertations, and Christopher has several book projects. In contrast, some academics I know write only for performance-based incentives. They play games such as writing fake conference papers, sending book manuscripts to vanity publishers, and publishing in obscure international journals. This leads university research administrators to change the incentive structures. It also introduces scoping problems into competitive grants: the journal article(s) only get written if the money is awarded. It’s very rare that I find an intrinsically motivated writer: maybe an Early Career Researcher who has just finished their PhD, or a senior academic intent on making a contribution to their field or discipline. I wish academics had a more hip-hop or punk sensibility and just did the work, regardless of the institutional incentives.
2. Misuse of university research metrics. The Australian Research Council’s Excellence in Research for Australia shifted the research conversation to performance and quality-based outputs. This also led to games such as poaching academics who had ERA publishing track records. However, it also sometimes led to a narrow focus on A* and A-level journals without changes to the workload models or training investment for academic skills and robust research designs. Not everyone is Group of 8, Harvard or Stanford material, or at least not at their career stage. Metrics use must be counter-balanced with an understanding of intellectual capital and development strategies. To date, the use of ERA and Field of Research metrics is relatively unsophisticated, and it can often actually de-value academic work and publishing track records.
3. A failure to understand and create the conditions for the creative process. The current academic debate about knowledge creation swings between two extremes. On the one hand, budget-driven cost-cutting similar to GE’s Work-Out under Jack Welch or private equity turnarounds. On the other, a desire to return to a mythical Golden Age where academics are left alone with little accountability. Both views are value destructive. The middle ground is to learn from Hollywood studios, music producers, and academic superstars about the creative process, and to create the conditions for it. This means allowing time for insights to emerge or for academics to become familiar with new areas. It means not relying on conferences and being pro-active in forming collaborative networks. It means treating academic publications as an event and leveraging them for maximum public impact and visibility. Counterintuitively, it can also mean setting limits, stage gates, and ‘no go’ or ‘abandon’ criteria (real options theory can be a useful tool). This is one reason why Christopher and I sometimes exchange stories of the strategies that artists use: to learn from them. This is a different mentality to some university administrators who expect research publications to emerge from out of nowhere (a view often related to the two barriers above).
4. Mystifying the blind peer review process. What differentiates academic research from other writing? Apart from the research design, many academics hold up the blind peer review process as a central difference. Usually, a competitive grant or a journal article goes to between two and five reviewers, who are often subject matter experts. The identities of both the author(s) and the reviewers are kept secret from each other. Supposedly, this enhances the quality of the review process and the candour of the feedback provided. Having studied the feedback on 80 journal articles and 50 competitive grants, I disagree. The feedback quality is highly reviewer-dependent. Blind peer review provides a lack of transparency that allows reviewers to engage in uber-critical reviews (without constructive or developmental feedback), disciplinary in-fighting, or screeds on what the reviewer wished had been written. Many academic journals have no rejoinder process for authors to respond. These are problems of secrecy and can be avoided through more open systems (a lesson from post-mortems on intelligence ‘failures’).
5. Being set up to fail through the competitive grants process. A greater emphasis on research output metrics has prioritised success in competitive grants. Promotions committees now look for a track record in external grants for Associate Professor and Professor roles. Australian universities do not often have endowed chairs or institutional investment portfolios — so they are more reliant on grant income. Collectively, these trends translate into more pressure on academics to apply for competitive grants. However, success is often a matter of paying close attention to the funding rules, carefully scoping the specific research project and budget, developing a collaborative team that can execute on the project, and having the necessary track record in place. These criteria are very similar to those which venture capitalists use to evaluate start-ups. Opportunity evaluation, timing, and preparatory work are essential. Not meeting these criteria means the application will probably fail and the grant-writing time may be wasted: most competitive grants have a 10-20% success rate. Some universities have internal grant schemes that enable new academics to interact with these dynamics before applying to an external agency. In all cases, the competitive grant operates as a career screening mechanism. For institutions, these grants are ‘rain-making’ activities: they bring money into the institution rather than to the individual academic.
6. A narrow focus on A* and A-level journals at the expense of all other forms of academic writing. The ARC’s ERA and similar schemes prioritise peer reviewed journals over other forms of writing. (This de-valued large parts of my 18-year publishing history.) The 2009 and 2010 versions of ERA had a journal ranking list which led many university administrators I know to focus on A* and A-level journals. I liked the journal ranking list but I also saw it had some perverse effects over its 18 months of use. It led to on-the-fly decisions made because of cumulative metrics in a publishing track record. It destroyed some of the ‘tacit’ knowledge that academics had about how and why to publish in particular journals. It de-valued B-ranked journals that are often sub-discipline leaders. It helped to create two groups of academics: those with the skills and training to publish in A* and A-level journals, and those without them. It led to unrealistic expectations of what was needed to get into an A* journal like MIT’s International Security: a failure to understand creative and publishing processes. The narrow emphasis on journals ignored academic book publishers, CRC reports, academic internet blogs, media coverage, and other research outputs. Good writers, editors and publishers know differently: a high-impact publication can emerge from the unlikeliest of places. As of April 2012, my most internationally cited research output is a 2009 conference paper, rejected from the peer review stream due to controversy, that I co-wrote with Ben Eltham on Twitter and Iran’s 2009 election crisis. It would be excluded from the above criteria, although Eltham and I have since written several articles for the A-level journal Media International Australia.
Awareness of these six barriers is essential to academic success and to not becoming co-dependent on your institution.
I think the academic/policy divide has been wildly overblown, but here’s my modest suggestion on how to bridge it even further. First, wonks should flip through recent issues of APSR and ISQ — and hey, peruse International Organization, International Security, and World Politics while you’re at it. You’d find a lot of good, trenchant, policy-adjacent stuff. Second, might I suggest that authors at these journals be allowed to write a second abstract — an abstract for policymakers, if you will? Even the most jargonesed academic should be able to pull off one paragraph of clean prose. Finally, wonks should not be frightened by statistics. That is by far the dominant “technical” barrier separating these articles from general interest readers.
The Lowy Institute’s Sam Roggeveen contends that Australian academics would benefit from blogging their research (in response to The Australian‘s Stephen Matchett on public policy academics).
I see this debate from several perspectives. In a former life I edited the US-based alternative news site Disinformation (see the 1998-2002 archives). I also work at Victoria University as a research administrator. I’ve blogged in various forums since 2003 (such as an old LiveJournal blog). In contrast, my PhD committee in Monash’s School of Political and Social Inquiry are more likely to talk about book projects, journal articles, and media interviews.
As Roggeveen notes, a major uptake barrier is the structure of institutional research incentives. The Australian Research Council’s Excellence in Research for Australia (ERA) initiative emphasises blind peer reviewed journal articles over other forms. Online blogging is not included as an assessable category of research outputs, although it might fit under ‘original creative works’. Nor is blogging included in a university’s annual Higher Education Research Data Collection (HERDC) outputs. University incentives for research closely follow ERA and HERDC guidelines. The ARC’s approach is conservative (in my view) and focuses on bibliometrics.
I know very few academics who blog. Many academics are not ‘intrinsic’ writers and are unused to dealing with developmental editors and journals. University websites often do not have blog publishing systems, and I’ve seen several failed attempts to introduce them. Younger academics who might blog or who do use social media are often on casual or short-term contracts. The ones who do blog, like Ben Eltham, have a journalism background, are policy-focused, and are self-branded academic entrepreneurs.
Roggeveen is correct that blogging can potentially benefit academics — if approached in a mindful way. I met people like Richard Metzger and Howard Bloom during my publishing stint. I am regularly confused with QUT social media maven Axel Bruns — and we can now easily clarify potential queries. Blogging has helped me to keep abreast of sub-field developments; to build networks; to draft ideas for potential journal articles and my PhD on strategic culture; and has influenced the academic citations of my work and downloads from institutional repositories.
The problem is that HERDC and ERA have no scope for soft measures or ‘tacit’ knowledge creation — so blogging won’t count at many universities.
That Roggeveen needs to make this point at all highlights how much the internet has shifted from its original purpose to become an online marketing environment. Tim Berners-Lee’s proposal HyperText and CERN (1989) envisioned the nascent internet as a space for collaborative academic research. The internet I first encountered in 1993-94 had Gopher and alt.* newsgroups, and later, web pages by individual academics. One example I regularly visited for PhD research: University of Notre Dame political scientist Michael C. Desch and his collection of easily accessible publications. It’s a long way from that free environment to today’s “unlocking academic expertise” with The Conversation.
I recently got negative reviews for two articles submitted to the Journal of Futures Studies (JFS). Many academics I know find article rejection to be highly stressful. Below are some comments and strategies addressed to three different audiences: academic authors; reviewers; and university administrators. Attention to them may improve the probability that your article is accepted for publication in an academic journal.
For academic authors:
1. Be very familiar with your ‘target’ journal: its editors and review panel, its preferred research design and methodologies, and how it handles controversies and debates in your field. Look for an editorial or scoping statement that explains what kinds of articles the journal will not accept.
2. Before submission, do a final edit of your article. Define all key terms, or cite past definitions if you have referred to the scholarly literature. Check paragraph structure, connecting sentences, and section headings, and confirm that the conclusions answer the key questions you raised at the beginning. Cite some articles from the target journal if possible. Consider who is likely to review your article and factor this into your discussion of key debates. Use redrafting to hone the article and to self-diagnose your mental models.
3. Ask if the journal has a rejoinder process for authors to reply to the blind peer review comments. A rejoinder is not an invitation to personal attacks or to engage in flame-wars. Rejoinders do enable authors to address situations in which one or more reviewers misunderstand the article, frame their comments in terms of an article they wish the author had written (rather than the actual article), or where there are concerns about the methodologies used, the research design, or data interpretation. An effective rejoinder process respects all parties, maintains the confidentiality of the blind peer review process, and provides an organisational learning loop. A rejoinder response does not necessarily reverse an editorial decision not to publish.
4. If the journal does have a rejoinder process then carefully examine the feedback pattern from reviewers. Highlight where one reviewer answers the concerns that another reviewer raised: this should neutralise the negative comments or at least show that varied opinions exist. It is more difficult when several reviewers raise the same concerns about an article.
5. Set a threshold limit on the amount of editing and rewrites you will do: you have other opportunities. A rejected article might fit better with another journal; with a substantial rewrite; with a different research design; or could be the stepping stone to a more substantive article. Individual reviews also reflect the particular reviewer and their mental models: this can sometimes be like an anthropological encounter between different groups who misunderstand each other. Sometimes reviewers, like critics, just get it wrong: one of my most highly cited publications with international impact was dropped from the blind peer review stream.
For reviewers:
1. Use the ‘track changes’ and ‘comment’ functions of your word processor to provide comments. It can be difficult for authors to read comments that are embedded in the body text in the same font. Be time-responsive: authors hate waiting months for feedback.
2. Do a first read of the article without preconceptions: focus on the author’s stated intent, their narrative arc, the data or evidence, and their conclusions. Be open to the article you have been asked to review, rather than the article that you wish the author had written. Be open to innovation in data collection, methodologies, and interpretation. Do a self-review of your own comments before you send your feedback to the journal editors.
3. Know your own mental models. That is, how you see the field or discipline that you are reviewing in; your preference for specific methodologies and research designs; your stance on specific controversies and debates; and what kind of material you expect the journal to publish. Be aware of situations in which you are asked to review articles because you have a particular stance: the tendency is to write lukewarm reviews which focus on perceived deficiencies or ‘overlooked’ material. Be careful of wanting to ‘police’ the field’s boundaries.
4. Use your feedback as a developmental opportunity for the author. Don’t just give negative feedback on faulty sentence construction or grammar. If you don’t like something, explain why, so that the author can understand your frame of reference. Focus also on issues of research design, methodologies, and data interpretation. If there are other external standards or alternative perspectives (such as on a controversy or debate), then mention them. Articles often combine several potential articles or can have scope problems, so note them. Highlight sections where the author makes an original, scholarly contribution, including new insights or where you learned something. It’s important to provide developmental feedback even when you reject an article for publication. A developmental review may evoke in authors the ‘moment of insight’ that occurs in effective therapy. The mystique of the blind peer review process ultimately comes down to the reviewer’s attention to the craft of providing constructive yet critical feedback that sets up future opportunities for the academic to advance their career.
5. Poison pen reviews have consequences. This is clearer in creative industries like film and music where bad reviews can kill a project or career. Pauline Kael and Lester Bangs are honoured in film and music circles respectively because they brought sensitivity and style to their reviews, even when they hated an artist. In academia, the blind peer review process can lead to internecine wars over different methodologies or research designs: problems that don’t usually arise in open publishing (because all parties know who is making the comments) or that can be handled through editorial review standards and a rejoinder process. Nevertheless, a negative review will have consequences. The author may not revise the article for publication. They may publish in a different journal. They may drop the project. In some cases, they may leave the field altogether. Consider how to frame the review so that you address the developmental need in a constructive manner.
For university administrators:
1. Know the norms, research designs and methodologies, leading research teams, and the most influential and international journals in at least one discipline. This gives you a framework to make constructive inferences from. You will develop awareness of these factors in other disciplines through your interviews with different academics.
2. Understand the arc or life-span of academic careers: the needs of an early career researcher and the professor will differ, and this will influence which journals they seek to publish in. Every successful publication navigates a series of decisions. Know some relevant books and other resources that you can refer interested academics to.
3. Have some awareness of international publishing trends which affect journals and their editorial decisions. These include the debate about open publishing, the consolidation of publishing firms, and the different editorial roles in a journal. Be aware of the connection between some journals and either professional associations or specific university programs.
4. Know what to look for in publication track records. These include patterns in targeting specific journals; attending conferences; building networks in the academic’s discipline; and shifts in research programs. An academic may have a small number of accepted articles when compared with the number that have been written and rejected by specific journals. Use the publication track record as the basis for a constructive discussion with the individual academic, honouring their experience and resources, and using solution-oriented therapeutic strategies.
5. Understand that quality publications require time, which equates to university investment in the academic’s career. The journal letter rankings in the Australian Research Council’s Excellence in Research for Australia led some university administrators to advise academics to publish only in A* and A-level journals. But not everyone will realistically achieve this. There can be variability of effort required: one A-level article I co-wrote required a substantive second draft; another took months to discuss, a day to do the first draft, and it was then accepted with minor changes. On the other hand, articles accepted in the A* journal International Security (MIT) have usually gone through multiple rounds of blind peer review, the authors are deeply familiar with the field’s literature, and have work-shopped the article extensively with colleagues, in graduate school seminars, and at international conferences. This typically takes two to five years. The late Terry Deibel took almost 20 years to conceptualise and refine the national security frameworks he taught at the United States National War College for Foreign Affairs Strategy: Logic for American Statecraft (Cambridge: Cambridge University Press, 2007), and Deibel also spent two years of sabbatical — in 1993 and 2005-06 — to write it. John Lewis Gaddis spent 30 years of research on George F. Kennan: An American Life (New York: The Penguin Press, 2011) and five years to write it. Both books make substantive scholarly contributions to their fields; both books also required the National War College and Yale University to make significant financial investments in the authors’ careers. Are you making decisions based on short-term, volume-driven models, or helping to create the enabling conditions that will help academics to have a similar impact in their respective fields?
For the past five years I’ve been working on ‘draft zero’ of a PhD project on counterterrorism, intelligence, and the ‘strategic culture’ debate within international relations theory and strategic studies.
The project ‘flew past me’ during a trip to New York City, shortly after the September 11 attacks, and whilst talking with author Howard Bloom, culture maven Richard Metzger, Disinformation publisher Gary Baddeley, and others. An important moment was standing on the roof of Bloom’s apartment building in Park Slope, Brooklyn, and seeing the dust cloud over Ground Zero.
The ‘draft zero’ is about 240,000 words of exploratory notes, sections, and working notes; about 146,000 of these words are computer text, whilst 80,000 are handwritten (and thus different, and more fragmentary).
In the next couple of weeks, I’ll write about the PhD application process, and the project when it gets formally under way, to share insights and ‘lessons learned’.
For now, here’s a public version of my CV and academic publications track record (PDF).
Insight during a morning meeting: to follow the money and find the ‘edge’ in an industry, listen to the ‘water cooler’ discussions at investment conferences. The topics may turn up in academic journals about two years later.
PhD ‘draft zero’ progress: several journal articles I had missed on ‘strategic culture’ — one argues that it is a research program instead of a variable. Six pages into John Hutnyk‘s article ‘Jungle Studies: The State of Anthropology’, Futures 34 (2002): 15-31; this exemplifies the fusion of critical realism, cultural studies and post-Marxist critique of universities that I saw in the mid-to-late 1990s. I wondered: is this the kind of research design that probably led ARC assessors to rank Futures as a B-level journal for the 2010 ERA rankings? A page on how post-September 11 ‘conflict anthropology’ has ‘borrowed’ ideas and insights from anthropological research.
Found in notes pile: two detailed outlines for unfinished, never-submitted journal articles.
The Norwegian band Ulver as a model for the unfolding creative process: a shift from three influential black metal and folk metal albums, to prog rock, ambient glitch, and then to film soundtracks, and jazz-influenced symphonic rock. It helps that Ulver’s Kristoffer Rygg owns his label Jester Records. Occulture bonus points: Ulver’s second album, rumoured to have been recorded on an 8-track in a forest after the band spent their advance money, turns up in HBO’s The Sopranos.
Morning meeting: get people face-to-face on sensitive issues, avoid escalation by email, and remove roadblocks. Some interesting anecdotes on what really happens on an overseas consultancy.
Late afternoon meeting over tea and donuts with collaborator Ben Eltham in Melbourne’s Nicholas Building. Discussion: EMI’s troubles; how ERA will affect two articles we are working on; Australian academic and zine maven Anna Poletti; why journal workshops have bad percolator coffee; sick buildings; and the psychological impact of glass desks in offices.
Evening: PhD ‘background research’ viewing the first episode of Gwynne Dyer‘s mid-1980s series ‘War’: archival footage of World War I nationalist mania, the Western Front trenches, machine guns, German zeppelin raids, and World War II aerial bombings, ending in the Trinity nuclear test and Hiroshima. The nationalist mania, and the generals’ decisions that led to almost 60,000 British casualties in a single day to German machine guns and no-man’s land, are examples of George Gurdjieff‘s ‘terror of the situation’.
An insight whilst viewing Dyer’s series: the Napoleonic innovation of national conscripts and total war, and German air-raids, broke the taboo on targeting civilians. Prior to this, 19th century Russian anarchists usually targeted police and political leaders. After this, many groups acted on the taboo, for different reasons: anti-colonialist and nationalist revolutions, radicalisation in the shadow of the Vietnam War and other conflicts, bargaining tactics such as hijack negotiations, and religiously motivated violence. This hypothesis appears to be a close fit to David Rapoport‘s waves thesis and to Mark Juergensmeyer‘s research program. Is this testable using the Correlates of War data-sets?