23rd October 2012: On Dark Social

Internet history


“Monetisable analytics” is a phrase on netizens’ lips these days. Google Analytics, Facebook and Twitter dominate the “social web” and are among the internet’s most-used platforms. But as The Atlantic Monthly’s Alexis Madrigal notes, the pre-2004 internet had its own rich, overlooked social history: BBSs, IRC, Gopher, Usenet, ICQ, and other early instant messaging platforms. The idea that the pre-2004 internet was just a bunch of disconnected links and that the post-2004 internet is an immersive, user-driven environment is a shibboleth. Social network and search engine optimisation (SEO) consultants promote the “social web” interpretation of internet history as self-justification.


Madrigal counters the “social web” interpretation. “Dark social” is a “vast trove of social traffic” that “is essentially invisible to most analytics programs.” Madrigal describes TheAtlantic.com’s experiences with real-time web analytics firm ChartBeat and reaches several conclusions. First, “The only real way to optimize for social spread is in the nature of the content itself” [emphasis original]. Second, “the social sites that arrived in the 2000s did not create the social web, but they did structure it.” Third, “The history of the web, as we generally conceive it, needs to consider technologies that were outside the technical envelope of ‘webness.’” Dark social is thus about user-driven referrals, instant chat logs, and other data that existing web analytics have failed to capture.
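Madrigal’s argument rests on a simple measurement heuristic: a visit that arrives with no referrer yet lands on a deep link was almost certainly pasted from an email, instant message or chat, not typed in by hand. A minimal sketch of that classification logic (the domain list and function name are illustrative assumptions, not ChartBeat’s actual code):

```python
from urllib.parse import urlparse

# Illustrative list of referrers counted as visible "social" traffic.
SOCIAL_DOMAINS = {"facebook.com", "twitter.com", "t.co"}

def classify_visit(referrer: str, url: str) -> str:
    """Classify one page view as social, referred, direct or dark social."""
    path = urlparse(url).path.strip("/")
    if referrer:
        domain = urlparse(referrer).netloc.removeprefix("www.")
        return "social" if domain in SOCIAL_DOMAINS else "referred"
    # No referrer on a homepage hit: plausibly typed in or bookmarked.
    if not path:
        return "direct"
    # No referrer on a deep link: likely shared via email, IM or chat.
    return "dark social"
```

On this sketch, a referrer-less hit on a long article URL is filed under dark social, while the same hit on the site’s homepage stays in the ordinary “direct” bucket — which is exactly the traffic most analytics dashboards lump together and under-count.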


The rise of “monetisable analytics” creates a market for new analytics like Madrigal’s Dark Social. But it might create challenges for journalists who cover technology issues or who mix historical critique with ‘proof of concept’ pitches. (Will Madrigal develop a venture capital pitch for Dark Social as an Atlantic content partnership or spin-out company?) The pre-2004 internet was already about publishing: home-built, personal websites. Friendster, Facebook and MySpace’s ‘share’ option simply made it easier to diffuse or transmit viral content, but it didn’t create internet publishing ex nihilo. To date, the only people to really monetise content are international publishing conglomerates. What Madrigal has rediscovered in this piece is the philosophy of dotcom-era content developers and creators, which social media and SEO consultants have since eclipsed or obscured. Tools like ChartBeat might empower analytics-driven decision-making, but the underlying value of content creation has always been there.


Why is the “social web” now “the dominant history of the web”? One reason is that sites like Wikipedia have editorial policies that limit or prevent the use of primary sources, memoir, in vivo interviews, archival analysis, and other types of historical research. There may also be country ‘home biases’ in the material that editors approve, based on their own knowledge and judgment of historical significance. This might make Wikipedia a useful free source for information but not necessarily for historical analysis of user experiences of the early internet. A second reason is that the circa 1993-2004 period remains largely undocumented, apart from documentaries like Startup.com or We Live In Public, about Josh Harris. A third reason is the unconscious projection of today’s standards (Google, Facebook, Twitter) back onto the early-to-mid 1990s web. Netscape Navigator, Altavista, Compuserve, Geocities, Pseudo.com and other early internet sites might seem primitive or static to today’s audiences, but they were a big deal “back in the day.” Hell, even Usenet, Gopher, Mosaic, and email were a big deal in 1993. As Madrigal notes: “How was I supposed to believe that somehow Friendster and Facebook created a social web out of what was previously a lonely journey in cyberspace when I knew that this has not been my experience?”


Madrigal could be Dark Social’s visionary entrepreneur. Alternatively, he might write a short book of revisionist internet history.


Photo: deadheaduk/Flickr.


Parker Conrad and Michael Sha launched Wikinvest in 2006 to gather user-generated security analysis. The project collates wiki profiles on investment concepts, fundamental analysis of companies and technical analysis of market price movements. It also appeals to MBA students with sections on personal investing, investment concepts and funds management. Conrad and Sha have graduated from Harvard dorm-room day traders to Web 2.0 knowledge entrepreneurs.

Claire Cain Miller’s New York Times profile makes the obligatory link with Wikipedia, the online encyclopedia. Conrad and Sha go into some detail about their verification process for data and public sources. The wiki has some specific applications for the pooling or crowdsourcing of investor insights. Sell-side analysts in the research departments of investment banks can have dual allegiances if the underwriting departments incentivise their research products to drive sales revenues. The best will gravitate to portfolio managers, dynamic asset allocation and hedge funds that use event/risk arbitrage and short-sell strategies. An investor wiki could counterbalance these influences through a broader snapshot of investor sentiment and strategies to limit analyst biases and groupthink. A side-effect, however, is that investor views are more likely to converge on a mean, and the resulting market efficiency may thwart value investing strategies that require information asymmetries.

In fact, the Wikipedia analogy has its limitations because analysts, traders and portfolio managers all structure and use market information differently from online encyclopedias. This was one of wiki creator Ward Cunningham’s insights when he devised the Portland Pattern Repository in 1995: a repository can capture domain knowledge and processes, and codify them from tacit to explicit form using a methodology such as design patterns or object-oriented programming structures. If it stays within Wikimedia’s online encyclopedia model, then Wikinvest will be suited to fundamental analysis and introductory investing topics. However, it could evolve into a different form if it adopts insights from behavioural finance and tactical asset allocation into the wiki process. These areas augment Cunningham’s original schema with strategies that deal explicitly with how information quality and source selection can affect investor decisions, judgment and verification. Even these vary depending on the end-user, their self-awareness, the intended contexts of use, and what potential outcomes may occur (a normative stance on the superiority of user-generated content over ‘traditional’ media is not sufficient on its own to address the concerns these processes are meant to anticipate and solve). The pressure to change and evolve may come from sell-side brokerages, which now use Wikinvest as a cost-efficient data source for market commentaries. Alternatively, it may come from Wikinvest’s end-users as the wiki gains more public prominence and attracts a range of investor styles with knowledge of asset classes, inter-market volatilities and global dynamics. If this occurs, Wikinvest and other wikis could have a pivotal role in the democratisation of finance beyond London, New York and Chicago.

Just don’t be surprised if Icahn Reports maven Carl Icahn launches a wiki raid.