15th June 2013: HFT, Disruptive Innovation & Theta Arbitrage

23rd July 2009 was perhaps the day that retail investors became aware of high-frequency trading (HFT).

That was the day that New York Times journalist Charles Duhigg published an article on HFT and market microstructure changes. Duhigg’s article sparked a public controversy about HFT and changes to United States financial markets.

Then on 6th May 2010 came the Flash Crash. HFT was again the villain.

For the past few years HFT has inspired both pro and con books from publishers. It has changed how some retail investors and portfolio managers at mutual and pension funds view financial markets. Now, Matthew Philips of Bloomberg Businessweek reports that 2009-10 may have been the high point of HFT’s profitability.

Philips’ findings illustrate several often-overlooked aspects of Clayton Christensen’s Disruptive Innovation Theory. Scott Patterson notes in his book Dark Pools (New York: Crown Business, 2012) that HFT arose from a combination of entrepreneurial innovation, technological advances in computer processing power, and changes to US Securities and Exchange Commission regulations. Combined, these advances enabled HFT firms to trade differently from other dotcom-era and post-dotcom firms that still used human traders or mechanical trading systems. This trading arbitrage fits Christensen’s Disruptive Innovation Theory as a deductive, explanatory framework.

The usually overlooked aspect of Disruptive Innovation Theory is that this entrepreneurial investment and experimentation gave HFT firms a time advantage: theta arbitrage. For about a decade, HFT firms were able to engage in predatory trading against mutual and pension funds. HFT also disrupted momentum traders, trend-followers, scalping day traders, statistical arbitrage, and some volatility trading strategies. This disruption of trading strategies led Brian R. Brown to focus on algorithmic and quantitative black boxes in his book Chasing The Same Signals (Hoboken, NJ: John Wiley & Sons, 2010).

Paradoxically, by the time Duhigg wrote his New York Times article, HFT had begun to lose its profitability as a trading strategy. Sociologist of finance Donald MacKenzie noted that HFT required significant capex and opex investment in low-latency infrastructure, and that this entry barrier, combined with intensifying competition, fueled ‘winner-takes-all’ and ‘race to the bottom’ dynamics. HFT’s ‘early adopters’ got the theta arbitrage that late-comers did not have in a more visible and now hypercompetitive market. Duhigg’s New York Times article and the May 2010 Flash Crash also sparked an SEC regulatory debate:

  • On the pro side were The Wall Street Journal’s Scott Patterson; author Rishi K. Narang (Inside The Black Box); and industry exponent Edgar Perez (The Speed Traders).
  • On the con side were Haim Bodek of Decimus Capital Markets (The Problem With HFT), and Sal L. Arnuk and Joseph C. Saluzzi of Themis Trading (Broken Markets) which specialises in equities investment for mutual and pension fund clients.
  • The winner of the 2009-12 debate about HFT regulation appears to be Tradeworx’s Manoj Narang, who was pro-HFT yet also licensed his firm’s systems to the SEC for market surveillance, as a regulatory arbitrage move. The SEC now uses Tradeworx’s systems as part of the Market Information Data Analytics System (MIDAS), Philips reports.

Philips reports that HFT firms now have new targets: CTAs, momentum traders, swing traders, and news sentiment analytics. That might explain some recent changes I have seen whilst trading the Australian equities market. Christensen’s Disruptive Innovation Theory and theta arbitrage both imply that a trading strategy is profitable only for a window of time, until changes in market microstructure, technology platforms, and transaction and execution costs erode its edge.
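
Neither Philips nor Christensen puts numbers on that window, but the dynamic is easy to caricature in code. The Python sketch below is purely illustrative and every parameter is invented: a first mover’s per-trade edge is assumed to decay exponentially as competitors replicate the strategy, so the profitable window is finite and late entrants capture little or none of it.

```python
import math

# Illustrative only: a first mover's per-trade edge (in basis points)
# is assumed to decay exponentially as competitors replicate the strategy.
INITIAL_EDGE_BPS = 5.0   # hypothetical edge at strategy inception
DECAY_PER_YEAR = 0.45    # hypothetical erosion rate from new entrants
COST_BPS = 0.8           # hypothetical transaction and execution costs

def net_edge(years_after_inception: float) -> float:
    """Per-trade edge net of costs, t years after the strategy appears."""
    return INITIAL_EDGE_BPS * math.exp(-DECAY_PER_YEAR * years_after_inception) - COST_BPS

def profitable_window_years() -> float:
    """Years until the net edge hits zero: the strategy's 'theta'."""
    return math.log(INITIAL_EDGE_BPS / COST_BPS) / DECAY_PER_YEAR

print(f"Profitable window: {profitable_window_years():.1f} years")
for entry_year in (0, 2, 4, 6):
    print(f"Entrant at year {entry_year}: net edge {net_edge(entry_year):+.2f} bps")
```

On these invented numbers the first mover enjoys roughly four profitable years, while an entrant at year four arrives just as the net edge crosses zero: theta arbitrage in miniature.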

28th June 2012: Deakin’s Lecture Value Migration

The Australian’s Andrew Trounson reports that Deakin University is replacing lectures with online, open source, ‘cloud’ content:

Traditional lectures look set to go by the wayside at Deakin University. As part of a new strategy students will increasingly access online, open source content from around the world, freeing up academics to focus on smaller tutorial groups delivered not just face-to-face but increasingly through social media like Facebook.

Deakin’s decision is an ‘early mover’ response to MIT and Harvard’s online courses. It fits both Clayton Christensen’s disruptive innovation thesis (a low-cost entrant into a new market) and Adrian Slywotzky’s value migration thesis (value migrates from the individual lecturer’s intellectual capital to the open source ‘cloud’). [For more details read a draft research monograph and Masters essay I did on Christensen and Slywotzky.]

We can make several inferences from Deakin’s decision. GE and private equity-like models are influencing managerial decisions to cut high costs (including possible offshoring). The espoused rationale is to cut content development costs and prioritise customer-facing activities (with an eye to student experience survey results). ‘Lagging’ universities are responding in a game-theoretic way to what ‘leading’ institutions are doing (in what may be a form of Stackelberg competition). These strategies will place Darwinian selection pressures on academic lecturers who will become either ‘world class’ subject matter experts/researchers or content facilitators.
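
The Stackelberg reference can be made concrete with a toy model; this is my construction for illustration, not anything in Trounson’s report. A ‘leading’ university commits first to a scale of online offerings, the ‘lagging’ university observes that commitment and best-responds, and under assumed linear demand and a common constant marginal cost the first-mover advantage falls out directly.

```python
# Toy Stackelberg duopoly, purely illustrative: inverse demand
# P = A - B*(q1 + q2), identical constant marginal cost C.
A, B, C = 100.0, 1.0, 20.0   # hypothetical demand and cost parameters

def follower_best_response(q_leader: float) -> float:
    """The follower maximises (P - C)*q2 given the leader's observed q1."""
    return max(0.0, (A - C - B * q_leader) / (2 * B))

def stackelberg_outcome():
    # The leader anticipates the follower's reaction and maximises
    # (A - B*(q1 + BR(q1)) - C)*q1, which yields q1 = (A - C)/(2B).
    q1 = (A - C) / (2 * B)
    q2 = follower_best_response(q1)
    price = A - B * (q1 + q2)
    return q1, q2, (price - C) * q1, (price - C) * q2

q1, q2, leader_profit, follower_profit = stackelberg_outcome()
print(f"Leader: quantity {q1:.0f}, profit {leader_profit:.0f}")
print(f"Follower: quantity {q2:.0f}, profit {follower_profit:.0f}")
```

In this parameterisation the leader earns twice the follower’s profit, which is the sense in which institutions that merely best-respond to early movers concede most of the surplus.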

Core Economics’ Stephen King has a more optimistic view: “If done properly, the type of inverted classroom approach that Deakin has announced can work well and improve learning outcomes . . . This is a smart, brave move. But make sure, in the short term, that the University invests in the platforms, training and re-engineering that are needed. If it is just a strategy to save money, it will fail.”

CPRF08 Paper: Disruptive Innovation, Radiohead & Nine Inch Nails

I recently blogged about a presentation at the 2008 Communications Policy Research Forum in Sydney on disruptive innovation in the music industry.

You can now download an Adobe PDF version of the PowerPoint slides here.

The refereed paper has been published in the Proceedings of the Communications Policy Research Forum 2008 (pp. 155-175 or PDF file pp. 179-199).  You can also download a local copy of the paper here.

The paper’s case study examines why Radiohead and Nine Inch Nails released their new albums as digital downloads.  I suggest that a major reason why, and one that was overlooked by Web 2.0 pundits, is that each artist was in the ‘label shopping’ phase of a new contract and defected after negotiation problems with their major labels.  This fits a pattern in mergers and acquisitions: the major labels lost artists due to integration problems in a merger or acquisition.  Terra Firma Capital Partners has since partially confirmed this hypothesis: the private equity firm continues to endure post-acquisition integration problems with EMI and is fighting against government regulation of Great Britain’s financial services sector.

The paper’s data appendices contrast the artists’ strategies with significant events and innovations in music industry contracts, conglomerate mergers and deal structures.  Somehow I missed U2’s March 2008 deal with Live Nation: I found out about it in an October 2008 announcement.  Guns n’ Roses also finally released Chinese Democracy (MySpace audio stream): a new album that has taken 15 years, a rumoured US$14 million budget and 14 recording studios in New York, Los Angeles, Las Vegas and London.  I may write a paper on it . . .

CPRF08 Presentation: Disruptive Innovation, Radiohead & Nine Inch Nails

I recently spoke at the 2008 Communications Policy Research Forum in Sydney on disruptive innovation in the music industry.  My presentation looked at why Radiohead and Nine Inch Nails pursued online release strategies for their respective albums In Rainbows (2007) and The Slip (2008), and evolved from some initial thoughts here.  The reasons suggested in media coverage – Web 2.0 experiments, disruptive innovation and freeconomics – were ‘true yet partial’ explanations.  They overlooked two significant facts: (1) both artists were in the ‘label shopping’ phase near the end of their contracts; and (2) both artists were frustrated with their respective labels EMI and UMG, which each triggered artist defections through post-merger integration problems.  The presentation also discusses the role of Disruptive Innovation Markets, the Disruptive Information Revelation principle, and lessons for journalists, new media theorists, policymakers and valuation analysts.  Thanks to the Network Insight Institute team (Mark Armstrong and Cristina Abad) and the two anonymous reviewers for their help.

Ebook Textbooks & The Market for Lemons

The software consultant Ed Yourdon once warned US programmers in his book Decline and Fall of the American Programmer (1992) that they faced global hypercompetition.  This was a fashionable message in the turbulent early 1990s of industry deregulation, export tariffs, mega-mergers, downsizing and reengineering.  Spenglerian pessimism made Decline and Fall an IT bestseller as Eastern European and Russian computer programmers emerged as low-cost competitors to their US counterparts.  Now, in Thomas Friedman’s vision of a flatter world, the Eastern European and Russian computer programmers have help from an unlikely source: electronic copies of IT textbooks.

Several barriers mean that US textbook publishers are cautious about embracing ebook versions.  Publishers fear the Napsterisation of ebooks on peer-to-peer networks.  There’s no standard ebook device, although Amazon’s Kindle is the latest candidate.  There’s no standard ebook format: most use Adobe PDF, but when Acrobat 8 was released Adobe shifted its ebook functionality to a new Digital Reader that did not necessarily read a user’s existing ebook collection.  Nor do potential customers have a utility function that necessarily favours ebooks over printed copies: publishers charge high prices for ebook versions that may earn a higher contribution margin but give the customer little price differential compared with print counterparts.
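
A stylised calculation shows why a token ebook discount can still suit publishers; the figures below are invented for illustration rather than taken from any publisher’s accounts.  Printing, warehousing and freight costs vanish for the ebook, so even a small price cut leaves a fatter contribution margin per unit while the customer sees little differential.

```python
# Stylised textbook unit economics; every figure is hypothetical.
def contribution_margin(price: float, variable_cost: float) -> float:
    """Contribution per unit toward fixed costs (rights, royalties, overheads)."""
    return price - variable_cost

print_price, print_unit_cost = 180.0, 55.0   # printing, warehousing, freight
ebook_price, ebook_unit_cost = 160.0, 5.0    # hosting and DRM licensing

print(f"Print margin per unit: ${contribution_margin(print_price, print_unit_cost):.0f}")
print(f"Ebook margin per unit: ${contribution_margin(ebook_price, ebook_unit_cost):.0f}")
print(f"Customer saving on the ebook: ${print_price - ebook_price:.0f}")
```

On these made-up numbers the ebook earns $30 more per unit despite a $20 discount, which is one reason publishers hold ebook prices close to print.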

The implementation of digital rights management (DRM) also leaves much to be desired: McGraw-Hill’s Primis uses a digital fingerprint on a hard drive that voids an ebook even if it is reinstalled on a drive reformatted after a virus, whilst Thomson’s Cengage Learning uses a time-sensitive model that gives the user one semester’s access to an ebook at the full price of its exact print version.  Publishers are also slow to adjust cross-currency rates: Australian textbooks still cost $A120-$200 despite near parity between the Australian and US dollars.

Thus, it’s no surprise that ebook divisions remain small in multinational publishing conglomerates.  One exception is Harvard Business School Press, which appears to have ditched Sealed Media’s DRM plugin for Adobe Acrobat after Oracle acquired Sealed Media in August 2006 and then had integration problems with information rights management.

These barriers suggest a failure in market design, with analogies to George Akerlof’s study of the used car market in his influential paper The Market for Lemons (1970).  Publishers counter that, although there is a lack of ebook standards akin to the quality uncertainty in Akerlof’s paper, the economics of publishing provide a disincentive to lower prices.  They claim high fixed costs in printing, photography rights and licensing fees for the case studies taken from Businessweek, Fortune and The Wall Street Journal.  Author fees and promotional budgets to professional associations add variable costs.  However, Australian academics have a disincentive to publish textbooks compared with their US colleagues, as Australia’s Department of Education, Employment & Workplace Relations does not provide recognition points.
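
Akerlof’s unraveling logic takes only a few lines to reproduce; the sketch below is a minimal rendering of the 1970 model itself, not anything specific to ebooks.  Quality is assumed uniform on [0, 1], a seller only parts with a good worth less than the offered price, and buyers, unable to observe quality, will pay 1.5 times the average quality on offer.

```python
# Minimal sketch of Akerlof's 'Market for Lemons' (1970).
# Quality q ~ Uniform(0, 1); only goods with q <= price are offered,
# so the average quality on offer at a given price is price / 2.

def buyers_willingness_to_pay(price: float) -> float:
    average_quality_on_offer = price / 2   # mean of Uniform(0, price)
    return 1.5 * average_quality_on_offer  # buyers value quality at 1.5*q

price = 1.0  # start at the top of the market
for round_number in range(1, 11):
    price = buyers_willingness_to_pay(price)
    print(f"Round {round_number}: price falls to {price:.3f}")
```

Each round the price falls by a quarter, so the market unravels even though high-quality trades would benefit both sides; in the ebook analogy, incompatible formats and DRM play the role of unobservable quality.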

To survive, US textbook publishers have turned to global market models with regional editions of popular texts (such as Asia-Pacific editions with local coauthors), and have adopted the music industry’s business model of electronic and online content (similar to how record labels have released DualDisc, DVD and collectors’ editions of albums).  However, as Yourdon warned US programmers, this may not be a business model with long-term sustainability.  MIT’s OpenCourseWare, Apple’s iTunes U and Scribd all provide free content that mirrors the generic content in most textbooks, although some differentiate via a problem-based approach.

Yourdon’s ‘challenger’ computer programmers now also have illegal BitTorrent sites such as The Pirate Bay, file-hosting networks such as RapidShare, and ebook sites including Avaxsphere.com and PDFCHM to choose from.  The last two provide solutions to Akerlof’s challenge in market design: they have an easier user interface, a broader (illegal) catalogue of ebook titles, and DRM-free files compared with Cengage Learning or McGraw-Hill.  Even business strategists are getting in on the act, as Clayton Christensen, Curtis Johnson & Michael Horn explore in Disrupting Class: How Disruptive Innovation Will Change the Way the World Learns (New York: McGraw-Hill, 2008).

There’s one textbook coauthor who came up with a unique solution to Akerlof’s dilemma in market design.  His Macroeconomics book coauthors Andrew Abel and Dean Croushore opted for the mod-cons from publisher Addison-Wesley: an online site and a one-semester ebook version as a bundle deal.  The textbook coauthor?

Federal Reserve Chairman Ben Bernanke.