Disrupt Yourself Using Cognitive Task Analysis

I spent part of 2006 writing about Clayton M. Christensen's Disruptive Innovation Theory for the Smart Internet Technology CRC. The research process for an unpublished monograph (PDF) led me to discover all sorts of related areas, from the Toyota Production System to agile software development and design patterns in object-oriented programming.


Recently, this body of work influenced me to look at cognitive task analysis (PDF) as a way to disrupt your own work processes and routines. Beth Crandall, Gary Klein and Robert R. Hoffman's Working Minds: A Practitioner's Guide to Cognitive Task Analysis (Cambridge, MA: MIT Press, 2006) is a useful introduction. John D. Lee and Alex Kirlik's Oxford Handbook of Cognitive Engineering (New York: Oxford University Press, 2013) is an advanced reference on the broader field of cognitive engineering as a systems approach to human mental processes and decision-making.

The Long Gamma Working

7th September 2013


Background


On 11th October 2011, I “resolved to develop a private, low-key, personal vehicle for long-term self-sufficiency” in the Nihonbashi Working, held at the Tokyo Stock Exchange in Japan.


I had started to trade in Australian financial markets on 8th October 2011. Standard & Poor's had downgraded US debt from its AAA rating on 5th August 2011. Over the next several months the European Union's bond markets also faced a debt crisis, which led to austerity policies and intermarket volatility. Simultaneously, my university employer went through a year-long organisational restructure with several rounds of redundancies. I spent this time gathering resources; doing due diligence on early, failed trades; and researching financial markets using the Thomson Reuters Datastream information system.


On 30th July 2013, as part of the Sirius XI Working in the Esoteric Order of Beelzebub, I isolated two specific strategies that provided illustrative LBM insights. I was familiar with Charles Mackay's influential reportage on the behavioural and sociological dynamics of speculative bubbles (Extraordinary Popular Delusions and the Madness of Crowds). In Arnold van Gennep's framework (The Rites of Passage), the first strategy involves creating the conditions for rational herding (indirect, contagious, and negative taboo) that leads to crowded trades (identified by Richard D. Wyckoff and others). Alternatively, the second involves active management (animistic, direct, and sympathetic). I found evidence of the first strategy in the Drexel Burnham Lambert, Galleon, and SAC insider trading cases, and of the second in Bill Ackman and David Einhorn's hedge fund activism.


Long Gamma is a long-term illustrative GBM Working to create a preferred financial future, which builds on these insights, past Workings, and ongoing practice-based research. The Working title refers to the key strategy that Nassim Nicholas Taleb articulates in his book Antifragile: Things That Gain From Disorder (London: Penguin, 2012) (TS-5), from his experience with the Greeks (key risk sensitivities) as an options trader.
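To make the Greeks concrete, here is a minimal Python sketch (illustrative only, with invented parameter values; not part of the Working itself) that computes gamma, vega, and theta for a European call under standard Black-Scholes assumptions:

    import math

    def norm_pdf(x):
        # Standard normal probability density.
        return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

    def call_greeks(spot, strike, rate, vol, t):
        # Gamma, vega, and theta for a European call under Black-Scholes.
        d1 = (math.log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * math.sqrt(t))
        d2 = d1 - vol * math.sqrt(t)
        gamma = norm_pdf(d1) / (spot * vol * math.sqrt(t))   # curvature of delta
        vega = spot * norm_pdf(d1) * math.sqrt(t)            # sensitivity to volatility
        n_d2 = 0.5 * (1.0 + math.erf(d2 / math.sqrt(2.0)))   # cumulative normal of d2
        theta = (-spot * norm_pdf(d1) * vol / (2.0 * math.sqrt(t))
                 - rate * strike * math.exp(-rate * t) * n_d2)  # time decay per year
        return gamma, vega, theta

    # A long gamma position holds positive gamma and pays for it via negative theta.
    gamma, vega, theta = call_greeks(spot=100.0, strike=100.0, rate=0.03, vol=0.25, t=0.5)
    print("gamma=%.4f vega=%.4f theta=%.4f" % (gamma, vega, theta))

The output makes the trade-off visible: positive gamma is financed by negative theta, paying small, known costs for exposure to large, favourable moves.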


Goals


1. Utilisation of current research on expertise, skills building, and performance psychology.


William Chase, Anders Ericsson, Michael Howe and others have popularised strategies for expertise and skills building using cognitive engineering, deliberate practice, and fluid intelligence (from the Cattell-Horn-Carroll model of human intelligence). Chase, Ericsson and Howe's research provides a more rigorous framework for cognitive modelling than the Neuro-Linguistic Programming approach of Richard Bandler, John Grinder, and Robert Dilts. Brett N. Steenbarger, Ari Kiev, Mark Douglas and Doug Hirschorn have adapted these insights to traders' use of performance psychology. My illustrative GBM immersion in this material builds on my Fluid Intelligence Working (30th June and 1st July 2012).


2. Development of a Long Gamma personal trading capability.


Long Gamma involves managing an option position's exposure to time decay (theta), volatility (vega), and the nonlinear rate of change of delta (gamma). Hedge fund managers use the two strategies mentioned above as an operative LBM strategy to create Long Gamma dynamics that they can profit from at others' expense. This part of the Working builds on the Oligarchy Working of 30th January 2013. The development of a personal trading capability involves several stages:


Stage 1: 2013 – 2015


This period coincides with ongoing PhD research, which may involve a possible chapter on the Department of Justice and Securities & Exchange Commission's criminal and civil cases against Steven A. Cohen and SAC. I will use this period to codify the two strategies mentioned above, and to develop case-based reasoning from Wall Street biographies, history, and contemporary journalistic reportage, from which to make causal, probabilistic inferences. During this period I will also start to develop a trading playbook of insights that can be used for the later development of trading strategies.
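As a hypothetical sketch of what a playbook entry and its retrieval could look like in Python (the case structure, features, and outcomes are invented for illustration):

    from dataclasses import dataclass

    @dataclass
    class Case:
        # One playbook entry distilled from a biography or news report.
        name: str
        features: dict   # observable market and behavioural conditions
        outcome: str     # what happened, recorded for later inference

    def similarity(query: dict, case: Case) -> float:
        # Fraction of query features that the stored case shares.
        matches = sum(1 for k, v in query.items() if case.features.get(k) == v)
        return matches / len(query)

    def retrieve(query: dict, casebase: list, k: int = 3) -> list:
        # Return the k past cases most similar to the current situation.
        return sorted(casebase, key=lambda c: similarity(query, c), reverse=True)[:k]

    casebase = [
        Case("crowded momentum trade", {"herding": True, "leverage": "high"}, "sharp reversal"),
        Case("activist campaign", {"herding": False, "public_letter": True}, "re-rating"),
    ]
    for case in retrieve({"herding": True, "leverage": "high"}, casebase, k=1):
        print(case.name, "->", case.outcome)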


Stage 2: 2015 – 2017


Using the case-based reasoning as a starting point, I will undertake more in-depth analysis of specific filter and trading rules, using a Bayesian belief network to model financial markets as a complex adaptive system. Possible areas of analysis might include: (1) the impact of high-frequency trading; (2) the contributions of the behavioural finance and market microstructure literature; and (3) predatory trading approaches that target retail and institutional traders' use of technical analysis. The result will be Bayesian decision rules that can be tested in live trading.
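A full belief network is beyond a blog post, but a single filter rule reduces to Bayes' theorem. The Python sketch below is hypothetical: the priors, likelihoods, and signals are invented, and it assumes the signals are conditionally independent:

    # Toy Bayesian filter rule: update the probability of a "crowded trade"
    # regime as each signal arrives. All numbers are invented for illustration.

    def posterior(prior, likelihood_if_true, likelihood_if_false):
        # Single-step Bayes update for a binary hypothesis.
        numerator = likelihood_if_true * prior
        return numerator / (numerator + likelihood_if_false * (1.0 - prior))

    p_crowded = 0.10                              # prior: most trades are not crowded
    p_crowded = posterior(p_crowded, 0.80, 0.30)  # signal 1: rising short interest
    p_crowded = posterior(p_crowded, 0.70, 0.40)  # signal 2: one-sided positioning

    STAND_ASIDE_THRESHOLD = 0.5                   # decision rule boundary
    decision = "stand aside" if p_crowded > STAND_ASIDE_THRESHOLD else "rule allows entry"
    print("P(crowded) = %.2f -> %s" % (p_crowded, decision))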


Stage 3: 2018 – 2020


Codify the Bayesian decision rules into cognitive task protocols for discretionary trade and portfolio management processes. Two more long-term options that will be explored for possible quantitative trading include: (1) the use of Markov Chain Monte Carlo simulation to test market data; and (2) the possible development of proprietary trading strategy algorithms. This may be done using a commercial platform like MetaTrader or TradeStation for retail traders; the possible emergence, or the reverse engineering, of a retail-trader-accessible option for alpha discovery comparable to Deltix's QuantOffice for institutional traders; or the MATLAB programming language as an interface to Interactive Brokers or a similar trade execution platform. This will require an understanding of agile software development, machine learning, quantitative strategies, and test-driven development.
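For the first option, here is a minimal Metropolis-Hastings sketch in Python (illustrative only): it assumes daily returns are i.i.d. normal with zero mean and a flat prior on volatility, and it samples simulated data rather than real market prices:

    import math
    import random

    random.seed(42)
    true_sigma = 0.02
    returns = [random.gauss(0.0, true_sigma) for _ in range(250)]  # one simulated year

    def log_likelihood(sigma):
        # Log-likelihood of zero-mean normal returns, constants dropped.
        if sigma <= 0:
            return float("-inf")
        sum_sq = sum(r * r for r in returns)
        return -len(returns) * math.log(sigma) - sum_sq / (2.0 * sigma * sigma)

    samples = []
    sigma, current = 0.05, log_likelihood(0.05)   # arbitrary starting point
    for step in range(20000):
        proposal = sigma + random.gauss(0.0, 0.002)          # symmetric random walk
        candidate = log_likelihood(proposal)
        if math.log(random.random()) < candidate - current:  # Metropolis acceptance
            sigma, current = proposal, candidate
        if step >= 5000:                                     # discard burn-in
            samples.append(sigma)

    print("posterior mean sigma ~ %.4f (true %.4f)" % (sum(samples) / len(samples), true_sigma))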

How Might Research Performance Metrics Be Used?

At this week's ARMS 2013 conference I saw Katie Jones from New Zealand's Unitec Institute of Technology present on "Research Performance Metrics: The Stick Followed by the Carrot." Jones gave an informative and well-received presentation on how Unitec has developed a traffic light system that tracks program-level research and informs research development strategies.
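Jones did not detail Unitec's scoring criteria, so the following Python sketch is hypothetical, with invented metric names and thresholds, showing only the general mechanism of a traffic light system:

    # Hypothetical traffic-light scoring for program-level research metrics.
    THRESHOLDS = {
        "publications_per_staff": (1.0, 2.0),  # (amber floor, green floor)
        "external_income_share": (0.10, 0.25),
        "completions_per_year": (2.0, 4.0),
    }

    def traffic_light(metric, value):
        # Map a metric value to red, amber, or green against its thresholds.
        amber_floor, green_floor = THRESHOLDS[metric]
        if value >= green_floor:
            return "green"
        return "amber" if value >= amber_floor else "red"

    program = {
        "publications_per_staff": 1.4,
        "external_income_share": 0.08,
        "completions_per_year": 5.0,
    }
    for metric, value in program.items():
        print("%s: %s -> %s" % (metric, value, traffic_light(metric, value)))

As the rest of this post argues, the colours are only the start of the conversation: the thresholds and the inferences drawn from them carry the real judgment.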


In the Q&A period a conference delegate pleaded (paraphrased from memory): “I’m an academic. I also work at weekends. Administrators need to work with academics . . . we need to work together.” Several conference delegates tweeted the exchange using the #ARMS2013 hashtag.


The exchange highlights an important question: How might research performance metrics be used?


My personal view is that research performance metrics are the start of a strategic conversation about program-level research and not the end. What inferences are drawn, how, and why can be just as important as the original data. One example of this is Richard Heuer's influential work on the psychology of intelligence analysis.


You often need to go beyond initial or surface interpretations. Red scores can be due to milestone slippage, longer-than-expected theory-building cycles, or unforeseen complexities. Green scores can be 'gamed' via tactics such as salami publications (breaking up research results into several micro-publications) or adding your name as a co-author to publications that research students have largely written.


An effective research program evaluator will be aware of such issues.


Jones is passionate about Unitec's approach to research performance metrics and its achievements to date. Her background as an anthropologist with experience in health and investment enables her to transfer insights from these domains into a university administration context.


Perhaps one day Jones' traffic light system will evolve into the kind of factor models that Standard & Poor's and other financial ratings agencies use to uncover alpha: investment returns that outperform a passive benchmark. For those interested, Richard Tortoriello's book Quantitative Strategies for Achieving Alpha summarises the S&P factor model approach.
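Tortoriello's method tests individual factors and blends the strongest into composite rankings. A hypothetical Python sketch of the blending step, with invented factor names and figures:

    # Hypothetical composite factor ranking: rank each stock on each factor
    # (1 = best), then average the ranks into a single score.
    stocks = {
        "AAA": {"earnings_yield": 0.08, "fcf_yield": 0.06, "roe": 0.22},
        "BBB": {"earnings_yield": 0.05, "fcf_yield": 0.09, "roe": 0.15},
        "CCC": {"earnings_yield": 0.11, "fcf_yield": 0.03, "roe": 0.09},
    }

    def composite_ranks(universe):
        # Average each stock's per-factor rank; higher factor values rank better.
        factors = list(next(iter(universe.values())).keys())
        scores = dict.fromkeys(universe, 0.0)
        for factor in factors:
            ordered = sorted(universe, key=lambda s: universe[s][factor], reverse=True)
            for rank, name in enumerate(ordered, start=1):
                scores[name] += rank
        return {name: total / len(factors) for name, total in scores.items()}

    for name, score in sorted(composite_ranks(stocks).items(), key=lambda kv: kv[1]):
        print("%s: average rank %.2f" % (name, score))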


I also feel empathy for Jones' Q&A questioner: writers often spend weekends on their publication drafts. This creative, developmental drafting process is not always captured in research performance metrics, nor is it always adequately understood. Having more university administrators who publish their own research may be one strategy to bridge the perceived workplace divide between academics and administrators.

On Informal Research Collaboration


This week on the plane trip to ARMS 2013 — the Australasian Research Management Society's annual conference — I came across this passage on the value of informal research collaboration in John Coates' award-winning book The Hour Between Dog and Wolf: Risk-Taking, Gut Feelings, and the Biology of Boom and Bust (London: Fourth Estate, 2012):


I came across a likely suspect purely by chance. During the later years of the dot.com era I was fortunate enough to observe some fascinating research being conducted in a neuroscience lab at Rockefeller University, a research institution hidden on the Upper East Side of Manhattan, where a friend, Linda Wilbrecht, was doing a Ph.D. I was not at Rockefeller in any formal capacity, but when the markets were slow I would jump in a taxi and run up to the lab to observe the experiments taking place, or to listen to afternoon lectures in Caspary Auditorium, a geodesic dome set in the middle of that vine-clad campus. Scientists in Linda’s lab were working on what is called ‘neurogenesis’, the growth of new neurons. Understanding neurogenesis is in some ways the Holy Grail of the brain sciences, for if neurologists could figure out how to regenerate neurons they could perhaps cure or reverse the damage of neuro-degenerative diseases such as Alzheimer’s and Parkinson’s. Many of the breakthroughs in the study of neurogenesis have taken place at Rockefeller. (pp. 20-21).


Wilbrecht now runs the Wilbrecht Laboratory at the University of California, Berkeley. Coates' exposure to Rockefeller University's research environment deepened his perspective on the financial trading he oversaw at Goldman Sachs and Deutsche Bank. This experience likely also influenced his research program at Cambridge University on the interaction of human physiology and financial markets. The Wellcome Trust, the UK Financial Times, and others shortlisted Coates' book for 2012 book awards. The Economist and Financial Times both praised Coates' research. Informal research collaborations can lead to 'ignition' or 'shaping' experiences that develop researchers and combine insights from several domains: in Coates' case, Wall Street trading and neuroscience research.