On Jim Simons, String Theory, and Quantitative Hedge Funds

Renaissance Technologies founder and mathematics professor Jim Simons is an enigma in the world of quantitative hedge funds.


Simons rarely gives interviews. One of the best is an Institutional Investor interview he gave in 2000 (PDF). One insight is that Renaissance makes trades in specific time periods using pattern recognition to model volatility.


Simons has done important work in differential geometry and in string theory, a subdiscipline of theoretical physics. I recently looked at some academic journal articles by Lars Brink (Sweden’s Chalmers University of Technology) and Leonard Susskind (Stanford University) to try to understand how Simons views financial markets.


String theory proposes that elementary particles are one-dimensional objects called strings, whose quantum states determine particle properties. String theory and cosmology have progressed over the past 35 years to describe these phenomena but still lack some key insights.


How might Simons use string theory to understand financial markets? Two possibilities:


(1) The mathematical language of couplings, phase transitions, perturbations, rotational states, and supersymmetries provides a scientific way to describe financial market data and price time-series. It does so differently from fundamental analysis, technical analysis, and behavioural finance: Simons uses string theory to understand the structure of information in financial markets. (Ed Thorp pursued a similar insight with Claude Shannon using probability theory.) String theory-oriented trading may also be falsifiable in the sense of Karl Popper’s philosophy of science.


(2) String theory provides a topological model that can be applied to money flows between mutual funds, hedge funds, and bank trading desks over short periods. This might enable Simons’ traders to forecast the likely catalysts for short-term changes in stock prices and to trade accordingly: for instance, forecasting how a price trajectory might change if portfolio managers at other funds alter their portfolio weights for a stock. In doing so, Simons trades in a similar way to SAC’s Steve Cohen (who uses game theory) and D.E. Shaw’s David Shaw, but with different methods of pattern recognition.


I have made a list of popular science books and Springer academic monographs to keep an eye on developments in string theory. Simons’ success also illustrates how insights from one knowledge domain (string theory, astrophysics, computational linguistics, and voice recognition) can be transferred to another domain (financial markets trading).

17th April 2012: Is Wall Street Inefficient?



New York University’s Thomas Philippon has a new paper that reaches counterintuitive conclusions about Wall Street:


Historically, the unit cost of intermediation has been somewhere between 1.3% and 2.3% of assets. However, this unit cost has been trending upward since 1970 and is now significantly higher than in the past. In other words, the finance industry of 1900 was just as able as the finance industry of 2010 to produce loans, bonds and stocks, and it was certainly doing it more cheaply. This is counter-intuitive, to say the least. How is it possible for today’s finance industry not to be significantly more efficient than the finance industry of John Pierpont Morgan? [emphasis added]
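Philippon’s unit cost of intermediation is, at heart, a simple ratio: the finance industry’s income divided by the quantity of assets it intermediates. A minimal sketch, using made-up placeholder figures rather than data from the paper:

```python
def unit_cost(finance_income: float, intermediated_assets: float) -> float:
    """Unit cost of intermediation: industry income as a share of the
    assets (loans, bonds, stocks) the industry services."""
    return finance_income / intermediated_assets


# Hypothetical economy: $2.3tn of finance-industry income servicing
# $115tn of intermediated assets gives a 2.0% unit cost, at the high
# end of the 1.3%-2.3% historical band quoted above.
cost = unit_cost(finance_income=2.3, intermediated_assets=115.0)
print(f"{cost:.1%}")  # → 2.0%
```

The interesting question in the paper is not the ratio itself but why it has trended upward since 1970 despite cheaper computing and trading.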


Philippon’s study of financial intermediation costs emphasises debt growth, hidden systemic risks, and the growth of trading as a secondary market activity. The New Republic‘s Timothy Noah also emphasised trading in his write-up of Philippon’s results. The paper’s first problem is its reliance on the “neo-classical growth model”, which argues that information technology, trading, and risk management should lead to lower costs and superior information about future prices. A second problem is that Philippon treats growth in ‘informativeness’ as a key criterion variable but does not adequately define it. A third problem is that the paper examines two key periods – 1900-1910 and 1980-2011 – without considering how the innovation pathways in financial intermediation have changed (both within and across those periods). For instance, Value at Risk looked like a great innovation in 1992 but was re-evaluated during the 2007-08 global financial crisis. A fourth problem is that prices in trading do not always reflect future prices, or even the fair market value of firms: they can reflect the market-moving tactics of hedge funds and other firms. A fifth problem is that the inefficiencies Philippon identifies lie partly in the fees and incentives that the mutual fund industry charges investors as revenue generation (and in Wall Street’s incentivisation through end-of-year bonuses). Any evaluation of financial intermediation efficiencies should take these current market practices into account.


If Philippon had used a market microstructure framework, he might have reached different conclusions about the paper’s aggregate data. Specific firms are able to leverage information technology, trading, and risk management to gain an edge on other market participants. They extract alpha: returns from active management above and beyond the market index return, or beta. This creates a ‘winner takes all’ dynamic in which a small group of efficient firms do exceedingly well. However, the Schumpeterian dynamics of inter-firm competition mean that factors like information technology do not simply lead over time to greater efficiencies and lower costs, as they did for Wal-Mart. Quantitative finance firms like Jim Simons’ Renaissance Technologies, Clifford Asness’s AQR Capital, and David Shaw’s D.E. Shaw & Company spend millions on infrastructure and proprietary research to outpace their competitors. This creates ‘informativeness’ in the form of private knowledge that Philippon’s models probably could not measure. Is this really a misallocation of capital?
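The alpha/beta decomposition above can be sketched as a regression of a fund’s returns on the market’s: the slope is the fund’s market exposure (beta), the intercept is the active-management premium (alpha). The return series here are synthetic and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily excess returns: a market index, and a fund built to
# have a market exposure (beta) of 1.2 plus a small daily alpha of 0.02%.
market = rng.normal(0.0005, 0.01, size=1000)
fund = 0.0002 + 1.2 * market + rng.normal(0.0, 0.002, size=1000)

# A degree-1 least-squares fit recovers the slope (beta) and the
# intercept (alpha) from the two return series.
beta, alpha = np.polyfit(market, fund, deg=1)
print(f"beta ~ {beta:.2f}, alpha ~ {alpha:.5f} per day")
```

With real data the hard part is exactly what the paragraph suggests: a persistent positive intercept is private knowledge, and it is invisible in the aggregate cost figures Philippon works with.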


Photo: apertu/Flickr.