25th February 2012: Mailroom Jobs & Superstar Economics

The Operator: David Geffen Builds, Buys and Sells the New Hollywood (2000)

 

For the past week I’ve been writing about academic entrepreneurs and superstar economics. Now, NPR’s Adam Davidson has a great New York Times article on why many careers are becoming lotteries: a small group enjoys a ‘winner-takes-all’ or ‘success to the successful’ dynamic while everyone else misses out. Davidson’s key insight:

 

Hollywood is, in some ways, the model lottery industry. For most companies in the business, it doesn’t make economic sense to, as Google does, put promising young applicants through a series of tests and then hire only the small number who pass. Instead, it’s cheaper for talent agencies and studios to hire a lot of young workers and run them through a few years of low-paying drudgery. (Actors are another story altogether. Many never get steady jobs in the first place.) This occupational centrifuge allows workers to effectively sort themselves out based on skill and drive. Over time, some will lose their commitment; others will realize that they don’t have the right talent set; others will find that they’re better at something else. [emphasis added]

 

Davidson’s thesis is that this “economic lottery system” pushes talent to the top. He cites Hollywood actors and directors, and Big Four accountants who survive the ‘up or out’ system to make partner (William D. Cohan has interviewed the Wall Street losers). Davidson connects tournament theory, the study of pay structures in which rewards depend on a worker’s relative rank rather than absolute output, to disruptive innovation (PDF), globalisation, technology and other mega-trends that are creating a ‘race to the bottom’ dynamic. How can individuals cope with these changes? “In a lottery-based economy, you need some luck, too; now, perhaps, more than ever,” Davidson advises. “People should be prepared to enter a few different lotteries, because the new Plan B is just going to be another long shot in a different field.”

 

For Davidson the “economic lottery system” model is the New Hollywood. The reality is a little more complex. Classical Hollywood’s studio production system flourished from the 1930s until the ‘go go’ Sixties when the modern conglomerates collapsed. For a brief period from 1968 to 1973, independent producers thrived before the studios fought back with the blockbuster film, new marketing, distribution, and control of ancillary revenue streams. A similar pattern occurred in the 1995-2000 dotcom period (PDF) in Los Angeles, New York, Austin, and London. Ben Eltham and I found in a 2010 academic paper that Australia’s film industry fluctuated depending on a mixture of Australian Government intervention, available labour, and international tax arbitrage. Eltham and I both read Nikki Finke’s influential blog Deadline Hollywood.

 

History also differs on the New Hollywood exemplars that Davidson selects. “Barry Diller and David Geffen each started his career in the William Morris mailroom,” Davidson observes. Tom King’s biography The Operator: David Geffen Builds, Buys, and Sells the New Hollywood (New York: Random House, 2000) details what actually happened over the six-month period in late 1964 and early 1965 before Geffen became secretary to television agent Ben Griefer (pp. 46-52). Geffen lied to WM’s Howard Portnoy that he was Phil Spector’s cousin. Geffen lied about having a college education and persuaded his brother Mitchell to write a letter to cover this up. When they met, Diller “thought Geffen was a rather odd duck for using his vacation time to work in the company’s other office” (p. 50). Geffen networked with agent Herb Gart, “stalked” New York office head Nat Lefkowitz, and got his break from Scott Shukat. Geffen relied on chutzpah, hard work, networking, and having a career goal: “signing actors.” No wonder Geffen hated King’s biography.

 

These qualities are essential to Davidson’s “occupational centrifuge.” When academics ask me about their Dean’s budget and resource allocative controls, and why universities are now like Davidson’s “economic lottery system”, I suggest they invest time in watching the film Moneyball (a film in part about tournament theory), and understanding the performance and value creation goals of private equity firms (the mental model of consultants who possibly advise the Dean).

 

I haven’t finished the academic journal articles on those ideas yet . . .

18th February 2012: Human Capital & Superstar Economics

We Are All Witnesses (Nike)

 

Crikey‘s Ben Eltham has caused a debate with his insightful analysis on Michael Brand, the new director of the Art Gallery of New South Wales:

 

The sheer amount of money washing around global art markets helps us to understand how a gallery director such as Brand can be worth nearly half a million dollars a year. There is in fact an international market for top curators, many of which can all expect to earn comfortably more than the rates Australian galleries pay.

 

Eltham and I did a similar analysis of Australia’s film industry in 2010. Successful fund managers benefit from a similar dynamic due to the ‘2 and 20’ norm: an annual management fee of 2% of total asset value, plus a performance fee of 20% of any profits.
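The ‘2 and 20’ arithmetic is simple to sketch. A minimal illustration in Python, with an entirely hypothetical $500 million fund (the function name and figures are my own, not from any source):

```python
def two_and_twenty(assets, profit, mgmt_rate=0.02, perf_rate=0.20):
    """Annual fee under the '2 and 20' norm: a management fee on
    total asset value plus a performance fee on any profits."""
    management_fee = mgmt_rate * assets
    performance_fee = perf_rate * max(profit, 0.0)  # no performance fee on losses
    return management_fee + performance_fee

# A hypothetical $500m fund returning 10% in a year:
assets = 500_000_000
profit = 0.10 * assets
print(two_and_twenty(assets, profit))  # $10m management + $10m performance = $20m
```

Even in a flat or losing year the management fee alone is $10 million, which is why the structure rewards asset gathering as much as performance.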

 

I read Eltham’s analysis the same day as sections of the late Fischer Black‘s book Exploring General Equilibrium (Cambridge, MA: MIT Press, 2010). Two passages on human capital stood out immediately:

 

What is special about human capital is that people mostly own their own human capital, with all of its specific risks. They could diversify or hedge out some of these risks by trading in shares of physical capital, but as Baxter and Jermann (1993) note, they generally don’t. (p. 69).

 

The normal career path involves many job changes — some within a single firm, and some between firms . . . Careers advance faster in good times than in bad, as investments in human capital, particularly through learning by doing, pay off. (p. 102).

 

Black’s macroeconomic analysis provides some context for Eltham’s critique of Brand’s salary. In two paragraphs, Eltham summarises Brand’s “first-class academic credentials” and “stellar career path.” Brand’s career advanced quickly because he made a series of excellent choices about selecting and delivering on projects, changing galleries, and building a significant body of exhibition work. In doing so, Brand diversified his human capital in a similar fashion to the professors I know who have changed universities in order to get promoted.

 

For Eltham, global art markets provide the context for “a top international director like Brand” to command a premium. The reason, Black suggested, was that “Uncertainty in both tastes and technology makes investments risky, and gives us a frontier of choices among different combinations of expected payoff and risk” (p. 126). The Art Gallery of New South Wales is willing to pay Brand a premium to lock-in his expertise and make the optimal choices for future art exhibitions.

 

Brand’s situation contrasts with university academics who lack the benefits of superstar economics. Academic contracts are defined by a university’s minimum standards for academic levels (MSALs) and by promotion committees. Academics rarely have control of their intellectual property or a share in future revenues from their work: they are forced to assign these rights to global publishing conglomerates. The market for competitive grants is a government-controlled oligopoly that requires a substantive publication track record. Academics who don’t build this cannot hedge their own human capital risk (or exposure to disruptive innovations). Collectively, these conditions place a cap on academic contracts in contrast to Brand and fund managers. The exception is professors who gain in a ‘winner-takes-all’ environment whilst their colleagues are on short-term contracts.

 

Things may change if International Creative Management, Creative Artists Agency or WME work out how to extract greater value in human capital from academic superstars.

16th February 2012: Academic Blogging

fred and academic blogging

 

The Lowy Institute’s Sam Roggeveen contends that Australian academics would benefit from blogging their research (in response to The Australian‘s Stephen Matchett on public policy academics).

 

I see this debate from several perspectives. In a former life I edited the US-based alternative news site Disinformation (see the 1998-2002 archives). I also work at Victoria University as a research administrator. I’ve blogged in various forums since 2003 (such as an old LiveJournal blog). In contrast, my PhD committee in Monash’s School of Political and Social Inquiry are more likely to talk about book projects, journal articles, and media interviews.

 

As Roggeveen notes, a major uptake barrier is the structure of institutional research incentives. The Australian Research Council’s Excellence in Research for Australia (ERA) initiative emphasises blind peer-reviewed journal articles over other forms. Online blogging is not included as an assessable category of research outputs, although it might fit under ‘original creative works’. Nor is blogging included in a university’s annual Higher Education Research Data Collection (HERDC) outputs. University incentives for research closely follow ERA and HERDC guidelines. The ARC’s approach is conservative (in my view) and focuses on bibliometrics.

 

I know very few academics who blog. Many academics are not ‘intrinsic’ writers and are unused to dealing with developmental editors and journals. University websites often do not have blog publishing systems and I’ve seen several failed attempts to build them. Younger academics who might blog, or who already use social media, are often on casual or short-term contracts. The ones who do blog, like Ben Eltham, have a journalism background, are policy-focused, and are self-branded academic entrepreneurs.

 

Roggeveen is correct that blogging can potentially benefit academics — if approached in a mindful way. I met people like Richard Metzger and Howard Bloom during my publishing stint. I am regularly confused with QUT social media maven Axel Bruns — and we can now easily clarify potential queries. Blogging has helped me to keep abreast of sub-field developments; to build networks; to draft ideas for potential journal articles and my PhD on strategic culture; and has influenced the academic citations of my work and downloads from institutional repositories.

 

The problem is that neither HERDC nor ERA has scope for soft measures or ‘tacit’ knowledge creation, so blogging won’t count at many universities.

 

That Roggeveen needs to make this point at all highlights how much the internet has shifted from its original purpose to become an online marketing environment. Tim Berners-Lee’s proposal HyperText and CERN (1989) envisioned the nascent internet as a space for collaborative academic research. The internet I first encountered in 1993-94 had Gopher and .alt newsgroups, and later, web pages by individual academics. A regularly visited example during my PhD research was University of Notre Dame political scientist Michael C. Desch’s collection of easily accessible publications. It’s a long way from that free environment to today’s “unlocking academic expertise” with The Conversation.

 

Photo: davidsilver/Flickr.

2nd February 2012: Gina Rinehart’s FXJ Move

On 31st January 2012, mining magnate Gina Rinehart bought nearly 8% of Fairfax (FXJ) through Morgan Stanley.

 

New Matilda‘s Ben Eltham observed:

 

Precisely why Gina Rinehart is buying a stake in Fairfax remains a mystery. Neither Rinehart nor her company, Hancock Prospecting, have issued any comment on the move. Rinehart simply issued instructions to her broker, and bought up stock worth about $180 million. . . . It’s simply not necessary to buy a stake in a media company to get your message across.

 

Eltham and Jason Wilson each suggest Rinehart’s bid is to gain control of Fairfax and to promote her political views.

 

Examining the pattern of FXJ trading suggests other possibilities. FXJ jumped from a $0.74 close on 2nd February to open at $0.82 on 3rd February. There were major sell-offs that day: during a 14-minute rally period from the market open at 10:04am (4.27 million shares), 10:16am (1.98 million shares), to 10:18am (1.52 million shares); during the trader lunch period at 1:22pm (2.78 million shares); at 2:24pm (2.19 million shares); and an end-of-day sell-off (5.25 million shares) which ensured FXJ shares would open lower the following day.
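Sell-offs like these can be flagged mechanically from intraday volume bars. A sketch in Python, assuming only a list of (time, volume) pairs; the bars and the threshold are invented for illustration, not the actual SIRCA tick data:

```python
def volume_spikes(bars, threshold=1.5):
    """bars: list of (time, volume) tuples in time order.
    Returns the bars whose volume exceeds threshold times the
    mean volume across all bars: candidate sell-off/buy-up events."""
    mean_vol = sum(v for _, v in bars) / len(bars)
    return [(t, v) for t, v in bars if v > threshold * mean_vol]

# Invented one-minute volume bars (shares traded per bar):
bars = [("10:04", 4_270_000), ("10:05", 310_000), ("10:16", 1_977_690),
        ("10:17", 280_000), ("13:22", 2_779_190), ("15:59", 5_250_000)]
print(volume_spikes(bars))  # flags the open and end-of-day bursts
```

A real screen would use a rolling rather than a whole-day average, but the idea is the same: unusual volume relative to a baseline marks the strategic trades.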

 

The sell-offs fit a well-known strategy used amongst institutional trading desks: the ‘market squeeze’ trade. In late 2011, I watched J.P. Morgan and Japan’s Mitsubishi UFJ bank use this strategy with several other Australian shares. I found out the details from two sources: ASX regulatory filings made on behalf of offshore hedge funds, and Thomson Reuters’ SIRCA database, which has tick data of individual trades.

 

Here’s one way the ‘market squeeze’ trade works:

 

1. The trading desk buys up a huge amount of a target share: enough to move the share price. This creates a volume spike that initially pushes the share upwards: a stochastic dynamic captured by the jump diffusion models of mathematical finance and option pricing.

 

2. The volume spike acts as a signal that attracts other traders. Day traders who use technical analysis, charting, or momentum/rally signals now focus on the share. Exchange-traded funds and institutional money managers who must rebalance their portfolios are now also interested. This creates a market for the trading desk to sell into. The share also shows up on the daily volume indicators of the major share trading platforms. The financial media becomes interested.

 

3. The trading desk then dumps a large volume of the share at strategic times during the day. This locks in a short-term or daily profit for the trading desk. It also influences the upper and lower bounds of the share price. Markov Chain Monte Carlo simulation can predict likely share price pathways. The trading desk can then adjust its order book and its market execution costs.

 

4. Meanwhile, the trading desk sells off smaller blocks of shares over a 3-4 week period. This tactic influences high-frequency trading systems. It usually means that the share price trends downward as the trading desk has market-maker control — or several trading desks at different firms create a market equilibrium.

 

5. The trading desk can make profits in several ways. It can dump a large amount of shares at the market open, which usually means the share price will fall during the day. This tactic creates cyclical and volume-based effects. It can force other traders to sell once their stop-loss levels are breached. Finally, it can sell shares at the peak of a volatility spike and then buy them back at a much cheaper price when the market trends lower. The trading desk’s volume, its order size, and its lower execution and transaction costs mean that it can make a profit from spreads of several cents.
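The ‘volume spike then dump’ dynamic in steps 1-3 is the kind of price behaviour jump diffusion models describe: ordinary diffusion punctuated by discrete jumps. A minimal Merton-style Monte Carlo sketch in Python (the starting price and all parameters are invented for illustration):

```python
import math
import random

def jump_diffusion_path(s0, mu, sigma, jump_prob, jump_mean, jump_std,
                        steps, dt=1 / 252, rng=None):
    """Simulate one jump diffusion price path: geometric Brownian
    motion plus, with probability jump_prob per step, a discrete
    Gaussian log-jump (e.g. a block dump at the open)."""
    rng = rng or random.Random(42)  # fixed seed for a reproducible sketch
    path = [s0]
    for _ in range(steps):
        # diffusion component: standard GBM log-return
        z = rng.gauss(0.0, 1.0)
        log_ret = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
        # jump component: occasional discrete shock
        if rng.random() < jump_prob:
            log_ret += rng.gauss(jump_mean, jump_std)
        path.append(path[-1] * math.exp(log_ret))
    return path

# One hypothetical 20-day path for a $0.74 share with downward-biased jumps:
path = jump_diffusion_path(0.74, mu=0.0, sigma=0.4,
                           jump_prob=0.05, jump_mean=-0.03, jump_std=0.05,
                           steps=20)
print(round(path[-1], 4))
```

Running many such paths gives the distribution of likely price pathways that a desk can use to set its upper and lower bounds and its execution costs.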

 

FXJ’s trading after Rinehart fits the ‘market squeeze’ pattern. It’s possible that Rinehart will pursue the ownership agenda that Eltham and Wilson emphasise: Michael Milken financed Sir James Goldsmith and others to do so in the 1980s era of leveraged buyout deals. But it’s also possible that Rinehart is using value investment criteria to make a quick profit from market volatility. Or, that Rinehart’s announcement enabled Morgan Stanley and/or other trading desks to use a combination of long/short, paired, and event arbitrage strategies. Someone made a killing on trading FXJ on 1st February 2012.

 

Eltham is right: you don’t need an ownership stake in a media company . . . if you pursue other agendas.

16th January 2012: Australia’s Car Industry & Lost Lean Opportunities

New Matilda’s Ben Eltham writes about Australia’s car industry:

 

All this sounds like a hymn to the efficiency of the open market, and to some extent it is. There is an unavoidably difficult truth to face when we discuss local manufacturing, which is that the high Australian dollar and the small size of our local market makes many aspects of Australian manufacturing uncompetitive. Fairfax’s Ian Verrender outlined the uncomfortable verities last week when he pointed out the obvious: making cars in Australia was never particularly sustainable, and has only been so in the long-term with massive government subsidies. “While we’re at it,” Verrender continued, “let’s be brutally honest. There is no such thing as an Australian car industry. It is an American and Japanese car industry with a couple of plants here.”

 

In the early 1990s, the International Motor Vehicle Program (IMVP) at the Massachusetts Institute of Technology reached a similar conclusion about Australia’s car industry and the trade-offs of the ‘make or buy’ decision. In their book The Machine That Changed The World: The Story of Lean Production (New York: HarperPerennial, 1991), authors James Womack, Daniel Jones and Daniel Roos examined Australia’s car industry (pp. 270-272): the role of foreign producers, the $US/$A currency cross-rates, attempts to follow South Korea’s manufacturing model, and an export focus on North America and Europe.

 

Womack, Jones and Roos suggested that Australia’s car industry follow a different strategy:

 

The logical path for Australia would be to reorient its industry toward the Oceanic regional market including Indonesia, Singapore, and the Philippines. Each country within this region might balance its motor-vehicle trade, but, collectively, by permitting cross-shipment of finished units and parts, they could gain the scale needed to reduce costs and let lean production flourish. Australia, as the most advanced country in the region, presumably could concentrate its own production on complex luxury vehicles, while Indonesia at the other extreme, would make cheap, entry-level products. (p. 271).

 

Womack, Jones and Roos observed that this realignment was unlikely for Australia due to two reasons: (1) its focus on northern hemisphere export markets; and (2) cultural and foreign policy barriers to greater involvement in the Association of Southeast Asian Nations (ASEAN).

 

Eltham notes that Australia’s tariffs policy has played a detrimental role in preventing the transition to a lean industry:

 

You need not be a rabid libertarian to note the negative economic impacts associated with car industry assistance. Tariffs are a device to transfer wealth from consumers, who pay more, to producers, who receive direct and indirect subsidies. Those subsidies support local jobs in the manufacturing industry, but at a price. The Productivity Commission estimates the total subsidy is something like $23,500 per worker. Yes, you can take issue with modelling and the econometrics and quibble with the numbers and so on. But there’s no doubt that, in the end, we all pay for the pleasure of sustaining a local car industry.

 

University of Wollongong’s Henry Ergas observes in The Conversation:

 

This is an industry that was born from very high levels of protection and has depended throughout its existence on the continuation of high levels of assistance. None of that makes me hopeful for the long-term prospects of the industry.

 

What lean manufacturing opportunities have Australian policymakers missed? For several decades the GM/Toyota joint venture New United Motor Manufacturing, Inc (NUMMI) was highlighted as a success story of United States-led lean manufacturing. But NUMMI closed in 2010. Tesla Motors reopened the former NUMMI factory in 2011 as the Tesla Factory to manufacture the Model S sedan. Meanwhile, Honda plans to increase its United States production. Once again, Australia’s policies on car industry assistance appear to leave it behind global innovation and lean manufacturing.

5th December 2011: Buy Ben Eltham Lunch

Ben Eltham (New Matilda; Crikey)

Ben Eltham is a prolific Australian writer and commentator on national affairs, arts and politics for New Matilda, Crikey and other online publications.

You can support Eltham’s writing here — I urge you to do so.

Eltham and I have co-written a number of academic journal articles and conference papers, on Twitter and Iran’s 2009 election (PDF and presentation PDF); Australia’s Film Finance Corporation and international tax arbitrage (PDF); and on the 2009 Victorian bushfires and journalism (PDF). Our joint paper on Twitter and Iran remains our most academically cited article. Along with his New Matilda and Crikey work, this should convey Eltham’s talent.

Twitterati can follow Eltham here whilst his Google Scholar profile is here.

23rd November 2011: Google Scholar Personal Profiles

Google Scholar has announced open citations and personal profiles.

The service is popular with academics for citation analysis and publication track records. Google Scholar’s data collection is messy: it trawls the internet and gathers citations from a range of websites and sources. It does not yet have the rigour of Elsevier’s Scopus database, for example. However, it is likely to outrank such proprietary services, due to Google’s accessibility and popularity.

My Google Scholar profile is here. For now, it is a highly selective collection — academic journal articles and conference papers, some postgraduate and undergraduate essays, and old Disinformation dossiers (see archives). I was surprised that some long-forgotten articles had been internationally cited. I have a more complete publications profile which gets updated as new academic research is published (PDF).

Several past collaborators — Axel Bruns, Ben Eltham & Jose Ramos — have their own profiles, and you should check out their personal research programs.

25th January 2011: Rare Earths ‘Day Trading’

In mid-2010, Ben Eltham and I discussed various ‘emerging’ threats to traditional military strategy. One of them was rare earths: 17 elements used in defence, automobile applications, consumer electronics, and next-generation turbines. We foresaw but didn’t act on the speculative bubble that occurred in rare earths between October and December 2010. There’s a lot driving market sentiment: ‘China’, ‘commodities’, ‘political risk’, ‘first mover advantage’, ‘iPods’, ‘greentech’, ‘next generation automobiles’, and ‘defence’.

Jason Miklian, a researcher at Oslo’s Peace Research Institute did act. Miklian invested in ‘day trade’ stocks of rare earth companies using $US9000 in personal savings. His account is revealing for several reasons. Miklian also foresaw the speculative bubble and continued to do fundamental research on the sector and markets. He timed his market entry. Then, Miklian lost what he had gained through attempting to ‘short’ the market in December 2010.

Miklian blames the market but perhaps the error lies in his ‘day trading’ strategy. Miklian traded a small account. He used options, which increased his potential profits yet could quickly engulf his trading account if he was wrong. He bet on firms like the US-based Molycorp (MCP) which, although its stock price doubled, is still years away from resolving the production problems at its Mountain Pass facility. Many other firms had questionable earnings and their stocks rose mainly on speculative activity. Others are relying on bullish activity when new production facilities come online in the next 18 months and major deals are signed. What Miklian perhaps needed was a valuation model and an assessment of future earnings, as well as his sector research. Finally, Miklian mistimed his exit. The volume of trade activity means that despite some market skepticism, trading in major stocks will continue. Technical analysis suggests that stocks of rare earths companies will trade within a range, rather than suddenly collapse.
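The point about options engulfing a small account can be made concrete with a payoff sketch. All figures here are hypothetical, not Miklian’s actual positions:

```python
def call_payoff(spot, strike, premium, contracts, multiplier=100):
    """Net P&L at expiry of long call options bought for `premium`
    per share, with the standard 100-share contract multiplier."""
    intrinsic = max(spot - strike, 0.0)
    return (intrinsic - premium) * contracts * multiplier

# Hypothetical: a $9,000 account spent entirely on $2 calls, $50 strike.
premium, strike, contracts = 2.0, 50.0, 45  # 45 * $2 * 100 = $9,000
print(call_payoff(60.0, strike, premium, contracts))  # stock at $60: large gain
print(call_payoff(48.0, strike, premium, contracts))  # expires worthless: full loss
```

The asymmetry is the trap: a move to $60 quadruples the stake, but any expiry below the strike wipes out the entire $9,000, which is how a mistimed ‘short’ can erase earlier gains.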

15th December 2010: Iran-Twitter Q&A

Over a year ago Ben Eltham and I did a conference paper on Twitter’s role in Iran’s 2009 election crisis. The paper proved too controversial for the conference’s refereed stream yet it has gone on to become our most widely read and cited paper.

Today, Paul Raymond posed some questions about Twitter and Wikileaks for a forthcoming article in Saudi Arabia’s magazine The Diplomat. Below are my email answers:

You express doubts that Twitter and other social network tools will “enable ordinary people to seize power from repressive regimes.” But what other political potentials do these networks have, in terms of broadening the public sphere for debate, mobilizing political networks, and helping to globalise civil society? What will be the results of these potentials for governments?

Twitter, Facebook, and other social networking tools certainly have the potential to broaden debate, mobilise political networks, and to globalise civil society. Perhaps they are today’s equivalent of the Cold War’s Radio Free Europe or Voice of America broadcasts. They are able to mobilise autonomous, self-regulative networks of people on a salient issue, and allow government agencies like the US State Department to reach a wider audience. However, these same qualities also mean that particularly for Twitter, social networks can be used to spread rumour and propaganda. US neoconservatives recognised these qualities in 2000 during their discussions on what a ‘next generation’ capability might resemble. Twitter’s interest in Iran gradually faded after the weeks of political uncertainty, as it became clear that Ahmadinejad’s regime would remain in power. Our conclusions echo the late sociologist and political scientist Charles Tilly’s work on political violence and repressive regimes.

The US State Department implicitly recognised Twitter’s importance when it asked Twitter to delay server upgrades – or at least, officials wanted to know what would happen next in the political cyber-laboratory of Iran. What would be a proper response by western governments to the results, including the “unintended uses” different actors gained from the network?

Firstly, to understand that Twitter, Facebook and other social networks will have their own dynamics similar to the CNN Effect of the 1990-91 Gulf War. Second, to counterbalance the ability to reach different audiences with the reality that people may only sustain their attention during a crisis. Third, that different actors will use ‘open network’ tools for their own ends and ethics, such as the Iranian Basij paramilitary using Twitter to arrest and kill protestors.

A ‘proper response’ may depend on the specific government agency. Whilst the US State Department was interested in public diplomacy, other agencies may have different agendas or uses for the same data. The US Department of Defense may be interested in the danger of social networking sites to be used for adversary propaganda and disinformation to international public audiences. A US intelligence agency may be interested in ‘contextual intelligence’ that may arise from diaspora networks, or alternatively, how many ‘tweets’ or messages can lead to ‘noise’. We tried to explore how the same data could be used in different ways depending on the aims and objectives of the specific end-user.

The State Department reacted very differently to the recent phenomenon of Wikileaks. What would be a proper governmental response to that kind of use of the internet?

The likely response of the US Government will probably be to charge Wikileaks publisher Julian Assange under the relevant espionage and national security legislation for releasing diplomatic information. In the short-term this will also mean increased security and restricted access in the US Government on a ‘need to know’ basis to diplomatic cables. In the long-term, the US Government could work with specific media and scholarly groups — the American Political Science Association, the Society for Historians of American Foreign Relations, The New York Times, The Washington Post, or George Washington University’s National Security Archive — to release declassified versions of the diplomatic cables in a more controlled and possibly ‘redacted’ manner. However, this might also require changes to US freedom of information laws and declassification schedules. Marc Trachtenberg at the University of California, Los Angeles, is an expert in these declassification issues for historians and political scientists.

What general rules would you suggest governments apply to make best use of the public diplomacy potential of social networking?

Use social networking tools to openly inform the public, such as Saudi Arabia’s initiatives on combatting terrorist financing and successful rehabilitation programs for ex-jihadists. Understand the limitations of social networking tools, such as their varied use by different groups, and how they can become disconnected in crisis situations from ‘on-the-ground’ events. Have mechanisms in place to identify, monitor and to counter disinformation and propaganda that may propagate on such social networks. Integrate social networking tools into a ‘hearts and minds’ strategy that uses a variety of media.

Can government’s efforts to crack down on freedom of information (such as the Chinese attacks on Google and the worldwide campaign against Wikileaks) work in the long run, or has the playing field been permanently leveled, giving civil society and opposition groups the ability to challenge governments’ influence over media agendas and foreign publics’ perceptions?

Constructivist scholars like Alexander Wendt, Peter Katzenstein and Martha Finnemore note the growing power of civil society groups to shape public perceptions and influence media agendas. Sophisticated governments may even work closely with aid organisations like the International Federation of Red Cross and Red Crescent Societies during a crisis. It depends on the context and the nature of the information being publicly released. Google’s problems with China were foreseeable since at least early 2006 because of how Google’s management handled earlier crises about the identities of Chinese human rights activists. The campaign against Wikileaks and its publisher Julian Assange appears in part because the information was released in an ‘unredacted’ form and not through an establishment source like Thomas Friedman or Bob Woodward. The realist scholar Stephen M. Walt and others have pointed out the hypocrisy of this: Assange and Wikileaks are being attacked whilst mainstream media institutions like The New York Times are not. Perhaps the challenge also is that the information Wikileaks has published is about recent and current events, and not the usual 20-30 year gap of normal declassification procedures. The public’s demand for ‘real-time’ information and more transparency is an opportunity for governments and public diplomats, should they decide to seize it.

8th December 2010: Reflections on Editing 1

I spent several hours today editing the third draft of a forthcoming article co-written with Ben Eltham. Some reflections on the process today:

1. Find a co-author who you have synergies with, and whose strengths are a foil for your weaknesses. For the past 5 years I’ve had problems in structuring articles, whereas this is Ben’s specialty.

2. Leave some time to look at a manuscript. Ben finished the third draft in September from conversations that we originally had in November 2009 and earlier. Since then, I’ve read more on Waltzian neo-realism and strategic culture for PhD research. With the extra time, I could immediately see a narrative arc, new argumentation and possible references.

3. Read the journal’s background material: the editorial policy, reference style, and other information. At a strategic level this helps ‘frame’ a developmental editing approach. It also heightens the probability that your academic paper will be accepted into a top scholarly journal.

4. Signpost your key arguments and insights. Often the really interesting material is buried in a paragraph or at the end of a section. Use redrafting to draw it out more clearly for readers. Sometimes it takes a draft or two of getting material down for these ideas to emerge.

5. Kill your darlings. In the first draft I wrote a section on different security threats. It’s irrelevant and misplaced in the current draft. The easiest thing to do was to just cut the entire section. We may have to cut 3,000 words to fit the journal’s preferred word length, so some longer quotations may have to go.

6. Craft your sections and the transitions between each of them. For the article’s narrative arc, I felt several sections from the third draft could be resequenced into a stronger opening. We’ll see how this works – if it doesn’t we can always ‘revert changes’ to the third draft.

7. To hone your material know the field you are writing about. In the subject matter of this particular paper, scholars are expected to immerse themselves in the canonical literature and to understand policymaking processes. Being abreast of this material means we can shape our arguments through careful selection of quotes and references. This is an entirely different approach to some other areas we both write about, which are more fragmented and fluid.