In the beginning there was curation, and it was good. People found interesting things, created directories of those things, and then you found what you were looking for inside those curated lists.
That was the origin of the first lists and directories, from Yahoo onward.
But then that got too hard. The web got bigger faster than anyone could keep track. Curation steadily gave way to algorithmic search, which at first was just spidering of the web, and then more intelligent spidering with keywords.
And then it became Google, with ranking algorithms that placed websites into hierarchies of keyword-related relevance based on things like authoritativeness, as defined, in part, by links from other sites — by those original hand-curated lists, ironically enough.
That model has now begun to give way too. Any algorithm can be gamed; it’s only a matter of time. The Google algorithm is now well and thoroughly gamed, as I first wrote about late last year. Saying so has since become an entire genre of web writing, one that has grown to include my friend Vivek Wadhwa’s smart piece on TechCrunch not long ago. Google has, such pieces argue, lost its mojo — which is true, but it’s more interesting and complicated than that.
What has happened is that Google’s ranking algorithm, like any trading algorithm, has lost its alpha. It no longer has lists to draw on and, on its own, it no longer generates the same outperformance — in part because it is, for practical purposes, reverse-engineered, well understood, and operating in an adaptive content landscape. Search results in many categories are now honey pots embedded in ruined landscapes — traps for the unwary. It has turned search back into something like it was in the dying days of first-generation algorithmic search, like Excite and Altavista: results so polluted by spam that you often started looking only on the second or third page of results — the first page was a smoking hulk of algo-optimised awfulness.
There are two things that can happen now. (OK, three. We could stop search, which won’t happen.) We could get better algorithms, which is happening to some degree, with search engines like Blekko and others. Or we could head back to curation, which is what I see happening, and watch new algos emerge on top of that next-gen curation again. Think of Twitter as a new stab at curation, but there are plenty of other examples.
Yes, that sounds mad. If we couldn’t index 100,000 websites by hand in 1996, how do we propose to do 234 million by hand today?
The answer, of course, is that we won’t — do them all by hand, that is. Instead, the re-rise of curation is partly about crowd curation — not one person, but lots of people, whether consciously (lists, etc.) or unconsciously (tweets, etc.) — and partly about hand curation (JetSetter, etc.). We are going to increasingly see nichey services that sell curation as a primary feature, with the primary advantage of being mostly unsullied by content farms, SEO spam, and nonsensical Q&A sites intended to create low-rent versions of Borges’ Library of Babel. The result will be a subset of curated sites that will re-seed a new generation of algorithmic search sites, and the cycle will continue, over and over.
In short, curation is the new search. It’s also the old search. And it’s happening again, and again.
[Update] A friend points out in IM that all of this makes Yahoo mothballing Delicious, a directory of curated lists, more than a little mistimed. And it’s made more pointed and ironic still when you look at the #2 most-bookmarked link on Delicious right now: Google’s weakening search results.