Warning: religious debate here. Should a news web site be open or closed, free or paid-for? There is no simple answer, of course, as hybrid models are a likely part of our future. But, first, let's review the paid-for model I addressed in previous issues of the Monday Note as well as in the French version of Slate.
In a nutshell, paid is likely to become fashionable again under the following conditions:
-    Big brands are more likely to erect tollbooths, balancing what must remain free in order to retain large audiences against what they must monetize: the value-added part of their content.
-    Transactional systems morph into aggregated micro-payments for quick, seamless few-cent purchases or subscription renewals. The “mental cost” of the transaction must match its low monetary value (see the sketch below, after this list).
-    Tech platforms improve their performance: in two or three product generations, gizmos such as the Kindle, PlasticLogic tablets, or future iPhones finally do the job thanks to wireless connections, long battery life and resistance to everyday abuse. To get an idea of what's looming, watch last week’s presentation of the new iPhone OS 3.0 features (go here and begin the show at the 10:00 time code); everything is there: an eBookstore, an eNewsstand system dedicated to publishers, iTunes-powered subscriptions, etc.
News organizations will find a sustainable model here -- and with Apple’s competitors.
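To make the aggregation idea concrete, here is a purely hypothetical sketch: micro-charges accumulate per reader and are settled in a single batch once a threshold is reached, so neither the reader nor the payment processor handles dozens of tiny transactions. None of the names, amounts or thresholds below reflect any actual payment provider.

```python
# Purely illustrative sketch of aggregated micro-payments: few-cent charges
# are pooled per reader and settled in one batch once a threshold is reached.
# All names and amounts are assumptions.
from collections import defaultdict

SETTLEMENT_THRESHOLD_CENTS = 500  # settle once a reader owes 5 euros (assumed)

class MicroPaymentAggregator:
    def __init__(self):
        self.pending = defaultdict(int)  # reader_id -> cents owed

    def record_purchase(self, reader_id, cents):
        """Log a few-cent purchase (an article, a day pass) without charging yet."""
        self.pending[reader_id] += cents
        if self.pending[reader_id] >= SETTLEMENT_THRESHOLD_CENTS:
            self.settle(reader_id)

    def settle(self, reader_id):
        """One real card transaction covers many small purchases."""
        amount = self.pending.pop(reader_id)
        print(f"charging reader {reader_id}: {amount / 100:.2f} EUR in one batch")

aggregator = MicroPaymentAggregator()
for _ in range(25):
    aggregator.record_purchase("reader-42", 20)  # twenty-cent articles
```

The design point is the one made above: the friction of each individual purchase stays near zero, and the single consolidated charge carries the transaction overhead.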

Let's turn to point #2: closed versus open. News organizations have deployed sophisticated programs to allow third-party websites to dynamically access their content. Such arrangements rely on an Application Programming Interface (API), a chunk of code embedded in the pages of the partner site; instead of a simple link sending the user to a static page, the API allows live access to data maintained by the main site.
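To make the distinction concrete, compare a plain link with an API call. The sketch below is schematic: the endpoint, parameters and field names are hypothetical.

```python
# Schematic contrast between a static link and an API call.
# The endpoint and field names are hypothetical.
import json
import urllib.request

# A plain link: the partner page hard-codes a URL and sends the reader away
# to a frozen snapshot of the content.
static_link = '<a href="https://news.example.com/2009/03/crisis.html">Read more</a>'

# An API call: the partner page pulls live, structured data from the
# publisher and renders it itself, so the content stays current.
with urllib.request.urlopen(
    "https://api.news.example.com/v1/items?topic=crisis&api-key=DEMO"
) as resp:
    for item in json.load(resp)["items"]:
        print(item["headline"], item["url"])
```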

On March 10, The Guardian launched its Open Platform program. It is based on a Content API, described by the company as "a free service selecting and collecting Guardian content for re-use", and a Data Store, "a directory of useful data created by Guardian editors". Altogether, more than one million items, properly structured and tagged, covering every subject the Guardian addresses: sports facts, demographic figures, best-selling books or movies, alcohol consumption stats, the latest data on the financial crisis (here is a glimpse of the Data Store). By the way, most of this raw material is stored on Google Docs for easier editing (an example here, with the world unemployment data).
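As an indication of how such a query looks, here is a minimal sketch against the Content API. The endpoint and field names follow the Open Platform's published shape, but treat them as indicative and check the Guardian's developer documentation; the key and the search terms are placeholders.

```python
# Minimal sketch of a Guardian Content API query. Endpoint, parameter and
# field names should be checked against the Open Platform documentation;
# the API key is a placeholder.
import json
import urllib.parse
import urllib.request

def guardian_search(query, tag=None, api_key="YOUR-KEY"):
    """Return structured Guardian items matching a query, optionally filtered by tag."""
    params = {"q": query, "api-key": api_key}
    if tag:
        params["tag"] = tag  # tags narrow the search to a slice of the archive
    url = "https://content.guardianapis.com/search?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["response"]["results"]

for item in guardian_search("financial crisis"):
    print(item["webPublicationDate"], item["webTitle"], item["webUrl"])
```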

The Guardian is not the only newspaper looking for a way to exploit its huge archive. Last year, the New York Times began a progressive release of its own API system, called "Open Times". At first, it provided dynamic access to financial data for each presidential campaign. Since then, the API has been constantly updated. Last month, the Times made a significant improvement with its Article Search API, which gives access to 2.8 million articles going back to 1981, all searchable with fine granularity thanks to 35 descriptive fields. As expected, the New York Times forcefully encourages third-party sites to embrace "Open Times" by hosting developer conferences at its New York headquarters.
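A query against the Times archive follows the same pattern. The sketch below is indicative only: the endpoint, parameters and field names should be checked against the Times developer documentation, and the key is a placeholder.

```python
# Sketch of an Article Search API query against the Times archive.
# Endpoint, parameter and field names are indicative; verify them against
# the Times developer documentation. The API key is a placeholder.
import json
import urllib.parse
import urllib.request

def nyt_article_search(query, begin_date="19810101", api_key="YOUR-KEY"):
    """Search the Times archive, which reaches back to 1981."""
    params = urllib.parse.urlencode({
        "q": query,
        "begin_date": begin_date,  # YYYYMMDD
        "api-key": api_key,
    })
    url = "https://api.nytimes.com/svc/search/v2/articlesearch.json?" + params
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["response"]["docs"]

for doc in nyt_article_search("presidential campaign finance"):
    print(doc["pub_date"], doc["headline"]["main"], doc["web_url"])
```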

How do these innovative features contribute to the bottom line? First, they result in a significant traffic boost. Then, imagine the business section of a partner site. There, the reader enjoys access to a vast dataset, produced by a major media organization, documenting the financial crisis. Compare the click-rate this arrangement produces to what a dumb ad banner generates: what could the multiple be? Ten times, a hundred times better? Consider the targeted-advertising value attached to the result, and the long-term deal you can make with partners using the API...

Second, for such a program, the deployment cost is marginal. In most news organizations, structured data have been in place for years. Of course, the value of such data varies with the quality of its structure (for more on the idea, read this instructive explanation of how "metadata" works).
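To see why the quality of the structure matters, here is a minimal, purely illustrative sketch of a tag index: each tag points to many content items, and each item carries many tags. The classes and field names are assumptions, not any publisher's actual schema.

```python
# Purely illustrative tag index: tags map to many content items and items
# carry many tags, which is what makes fine-grained retrieval (and API
# filtering) possible. All names are assumptions.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    item_id: str
    headline: str
    tags: set = field(default_factory=set)

class TagIndex:
    def __init__(self):
        self.by_tag = defaultdict(set)  # tag -> item ids
        self.items = {}                 # item id -> ContentItem

    def add(self, item):
        self.items[item.item_id] = item
        for tag in item.tags:
            self.by_tag[tag].add(item.item_id)

    def find(self, *tags):
        """Items carrying all the given tags, e.g. 'economy' AND 'uk'."""
        ids = set.intersection(*(self.by_tag[t] for t in tags))
        return [self.items[i] for i in ids]

index = TagIndex()
index.add(ContentItem("a1", "Unemployment climbs", {"economy", "uk", "data"}))
index.add(ContentItem("a2", "Best-selling books of 2008", {"books", "data"}))
print([i.headline for i in index.find("economy", "uk")])
```

The point is simply that once every item carries consistent tags, both internal tools and external APIs can slice the archive at will.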
Or see how the Guardian organizes its data by connecting tags to multiple content items:

[Diagram: Guardian tags connected to multiple content items]
This leads us to our main point: such use of APIs can only be the privilege of major news media enjoying high-value datasets. Not exactly within the reach of a four-year-old blog manned by a dozen smart young bucks. For the older bulls, this means -- again -- that resources have to be allocated to extracting value from the "long tail" of archives, either through recommendation engines (see Monday Note #73) or through well-disseminated APIs.

This also assumes a culture of true open-mindedness, one that hasn’t spread everywhere on the Internet. Take basic practices such as deep-linking (the ability to send the reader deep inside a site, even if the target content sits behind a paywall). In the English-speaking world, this is now standard practice: many sites, such as The Economist, allow deep and free access -- even in the paid zone. Such an open attitude is no stranger to the success enjoyed by The Guardian: 30 million unique visitors per month, an exceptional performance. In France, by contrast, Le Monde does not allow deep-linking, and the TV network TF1 used to threaten legal action against sites deep-linking to its (free) content. Call it "resilient conservatism".

Content dissemination is by no means a panacea for big news organizations. It won't prevent the revenue downfall many are facing today. But, much better than throwing money into keyword acquisition (which only benefits Google), content structuring and dissemination is a real investment in the intellectual capital of media companies. And it is an act of faith in the power of the brand.  —FF
