online publishing

A Toolkit for the Cognitive Container

We now live in an apps world. “The web is dead,” shouts Chris Anderson, Wired’s editor-in-chief. To make his point, he teamed up with Michael Wolff, a Vanity Fair writer. According to his latest theory, the internet is being taken over by mobile applications, and the web as we know it will soon be dead. Wired produces a Cisco-originated graph (below) showing the decrease in “web” traffic, now down to a quarter of total internet traffic. The other 75%, says Anderson, includes video, peer-to-peer, gaming, voice-over-IP telephony, a large part of it encapsulated in apps, blah-blah-blah.

Well. Two things. To begin with, Chris Anderson isn’t the first to notice the rise of applications used to access the internet. Every news outlet’s digital division is witnessing a sharp increase in its apps-related traffic. Here in France, Le Monde just said its iPhone apps now contribute about 20% of its entire traffic; its iPad application (a somewhat crude but efficient reader) has been downloaded 150,000 times. This is just the beginning, as publishers are working on new apps for the iPhone and the iPad, but also for Android, Windows Phone 7 and even Bada, Samsung’s proprietary OS. Many publishers forecast that 30% of their traffic will originate from mobile devices. This is consistent with Morgan Stanley’s prediction of smartphone shipments overtaking PCs two years from now (see below).

Such trends, when repackaged by Chris Anderson’s craft, ascend close to papal encyclical status (that’s Anderson’s particular skill; in a recent lecture, the British journalism professor George Brock called him “a professional exaggerator”). Never mind that the data he presents are not of the utmost rigor. As we can see here, he magnifies the demise of the web.

But byte-flow analysis is misleading. A more accurate measure would be time spent on the traditional web versus apps. For instance, neither Anderson nor the graph says in which category Facebook traffic falls. Is it an app? A web-based service? All we know is that American users spend a quarter of their time on it. I wouldn’t dare wreck such an attractive intellectual scaffolding with mere facts, but we can’t compare video and text-based pages on the basis of their byte-streams. I did the test: three minutes of YouTube video weigh 16 megabytes; the same time spent on text only requires a 20-kilobyte page, 800 times lighter. (The 8,000-word Anderson/Wolff story, devoured in 15 minutes at a normal reading speed, weighs only 117 kilobytes.) When measuring things, the metric does alter the perspective…
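That 800x figure is simple arithmetic; here is the back-of-the-envelope check, using only the numbers quoted above (the column’s own estimates, not fresh measurements):

```python
# Sanity check of the bytes-versus-reading-time comparison.
# All figures come from the text above, not from new measurements.
video_bytes = 16_000_000   # ~3 minutes of YouTube video, 16 MB
text_bytes = 20_000        # a text page read in the same 3 minutes, 20 KB

ratio = video_bytes // text_bytes
print(f"video is {ratio}x heavier than text per minute of attention")  # 800x

# The 8,000-word Anderson/Wolff story: 117 KB for ~15 minutes of reading.
kb_per_minute = 117 / 15
print(f"story: ~{kb_per_minute:.1f} KB per minute read")
```

Measured in bytes, video swamps everything; measured in minutes of attention, text holds its own, which is the column’s whole point.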

Nevertheless, Anderson’s fatwa is gaining traction, as did, in its time, his Long Tail theory. Later, Anderson amended that postulate with the concept of a “strong head” (mandatory if you expect to make money with the tail). His “Free!” edict was also updated with the Freemium notion, a free basic service designed as an incentive to upgrade to a paid-for one. But no more sarcasm; such silicon snake oil is a charming ingredient of our e-times.

Caveats about Anderson’s theory aside, there is no doubt the app phenomenon will significantly impact the way we consume news: apps might become its main cognitive container. More

The Facebook Gravitational Effect

Over the next twelve months, the media industry is likely to be split between those who master the Facebook system and those who don’t. A decade or so ago, for a print publication, going on the internet was seen as the best way to rejuvenate its audience; today, as web news audiences reach a plateau, Facebook is viewed as the most potent traffic booster.

If you are looking for the ultimate cyber black hole, point your browser toward Facebook. Beyond the 500 million users milestone, an even more significant gravitational pull awaits the media industry. Here are some facts to keep in mind.

— While the average online newspaper is viewed for about 30 minutes per month (see data from the NAA), users spend 12 times longer on Facebook: a worldwide average of 5hrs 52 minutes, 6hrs 02 minutes in the United States and 4hrs 12 minutes in France. Globally, social networks account for about 10% of total internet time, and two thirds of the internet population visit such a network at least once a month. Growth runs at about 30% per year; compounded over three years, that’s 220% of today’s figure, a multiplication by 2.2!

— Facebook dwarfs other social networks: worldwide, measured in time spent per month, it weighs 6 times MySpace, 12 times Twitter and 30 times LinkedIn.

— Of the half billion users, 250 million log in every day, for about 34 minutes.

— Just as important, if not more so, 150 million users access Facebook through their mobile phones.

— In June alone, on the US market, users spent more time on Facebook than on sites owned by Google, Microsoft and Yahoo combined (source: Nielsen).

Update, Aug. 2: Nielsen just released this study showing that Americans spend 23% of their internet time on social media, vs. 16% a year ago.
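As an aside, the “30% per year becomes a multiplication by 2.2” claim in the first item above is just compound growth, and easy to verify:

```python
# 30% yearly growth compounded over three years.
annual_growth = 0.30
factor = (1 + annual_growth) ** 3
print(f"x{factor:.2f}")  # prints x2.20
```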

The time-spent numbers are always spectacular… but some view them as misleading considering how users interact with Facebook: uploading videos or photographs inherently takes more time than glancing over Google News. Granted. Let’s then consider more media-related metrics. More

Zero tolerance for latency

The big battle of the coming years will be a battle for time. For media-related software, as for web design, the fight will be for customers’ or readers’ attention; the challenge will be to prevent them from fleeing elsewhere and to give them more in less time.

More than ever, we are in a business where speed is key.

Look at how critical the speed factor has been in recent tech successes. To be sure, Google has been able to dominate search thanks to the quality of its algorithm. But Google’s win also came from its ability to deliver results faster than anyone else. This speed comes from the combined performance of three pieces of software: MapReduce, which slices a request into multiple chunks and assigns the work to multiple CPUs; the Google File System, at the core of Google’s distributed architecture; and the main database, BigTable; all supported by an unprecedented hardware deployment of several million servers. (A Cornell University paper by a Google Fellow gives a comprehensive description of the company’s architecture, PDF here.) All three components are dedicated to speed of service, one of Google’s edges and a must to conquer cloud computing, where waiting for a file update or data transfer is not an option.
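For readers unfamiliar with the pattern, here is a toy sketch of the map-then-reduce idea (a word count fanned out to worker processes). It only illustrates the principle of slicing work across CPUs; it has nothing to do with Google’s actual implementation:

```python
from collections import defaultdict
from multiprocessing import Pool

def map_chunk(chunk):
    """Map step: turn one chunk of text into (word, 1) pairs."""
    return [(word.lower(), 1) for word in chunk.split()]

def reduce_counts(mapped):
    """Reduce step: merge the partial results into final counts."""
    counts = defaultdict(int)
    for pairs in mapped:
        for word, n in pairs:
            counts[word] += n
    return dict(counts)

if __name__ == "__main__":
    chunks = ["the web is dead", "long live the web"]
    # The map work is sliced across two worker processes,
    # mimicking how a query is split over many CPUs.
    with Pool(2) as pool:
        mapped = pool.map(map_chunk, chunks)
    print(reduce_counts(mapped))  # {'the': 2, 'web': 2, ...}
```

The speed gain comes from running the map step in parallel; the reduce step then merges the partial results, exactly the division of labor the paragraph above describes.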

Speed is also a key success factor for Amazon and iTunes (the two kings of friction-free transactions), but also for the video streaming service Hulu, and for business applications such as Salesforce. As for hardware, Apple taught us that speed and fluidity weigh more than a long feature list. Across the four iterations of the iPhone (leaving aside the antenna issue for once), most of the hardware improvements have been aimed at increasing speed and fluidity of use (OK, battery life as well).

Two of the reasons why the iPad is about to take over the netbook market are its near-zero boot time and its instant application launch. Both are a blow to the PC, which remains stuck in its passé architecture despite huge increases in processing power (or perhaps because abundant power facilitates wasteful programming practices). This contributed to the pace of iPad adoption: it took 28 days for the iPad to reach a million users, vs. 180 days for the netbook. Consumers love speedy devices.

And, as a final example of speed-related services, we can mention the classifieds websites derived from the highly successful Blocket in Sweden (Blocket is owned by my former employer Schibsted). The site is so huge that the equivalent of about 5.5% of the entire Swedish GNP goes through it! As for the French version, called Le Bon Coin (see the Monday Note story), it delivers 2,000 pages per second while relying on fairly modest hardware. The site is entirely coded in the C language, which provides lightning-fast data delivery (users see 40 pages per visit on average, eight to ten times more than on any news site).

Let’s face it: consumers’ tolerance for latency is closing in on zero. They want ultra-fast boot times, quick network access, fast page display and downloads.

Sadly, this issue has yet to percolate to the surface of the news media industry’s consciousness. Too many web publishers remain convinced that the quality of their editorial is far more important than the underlying technology that supports it. This is made even more obvious by the advent of the iPad and its applications. The vast majority of news media publishers have not focused enough on speed and seamlessness. Let me repeat a point I’ve already made here several times: the digital news sector needs more investment in technology and techies. More

Profitable Long Form Journalism

Over the last month, I’ve been stuffing my iPad with books purchased online, long PDF files and other documents for later reading sessions. I’m still waiting for the mind-blowing media applications; they’re in the making. Several prototypes of French newspapers I have seen are quite promising. We have to be patient. This is just the start of the runway.

I realize I’m using the device differently from my computer. No mail (too clumsy), no writing, no twittering. Just reading stuff, the longer the better.

And I wonder: Can tablet computing be the missing link, the one that could rehabilitate (or rather introduce) long form reading in digital format — in a profitable way?

Let’s project ourselves two years from now. And let’s put the iPad aside for a while. It’s 2012. Tablets have become a cell-phone-like commodity and competition is strong. Aside from Apple, devices from Samsung or HTC running Android or god knows what other operating systems are thriving. Standards for digital formats have emerged, and e-books are heading toward a 25% market share in Europe and the United States. The digital publishing chain is running smoothly and efficiently, with the following characteristics.

  • The old production and distribution system, which ate 65% to 70% of the retail price, has been replaced by a 30% fee taken by publishing platforms. They get this 30% for putting publications on their virtual shelves and for collecting the money.
  • These inventories are served by clever search and recommendation engines (not the Trabant-like system of the iTunes/iBooks store).
  • To reflect decreasing distribution costs when compared to physical books, e-book retail prices are down by at least 30%.
  • Authors also take advantage of the technological shift; they get higher royalties.
  • New formats have emerged; the old dichotomy between the hardcover, priced at $25, and the paperback, at $10, is gone, replaced by a more diversified pricing structure.
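The economics sketched in this list can be put into numbers. A minimal sketch, using only the 30% platform fee and the 30% retail discount from the list above, plus the 65-70% distribution cost quoted earlier (the midpoint of that range is my own simplification):

```python
# Illustrative arithmetic for the hypothetical 2012 e-book scenario.
hardcover_price = 25.00

# Old world: production and distribution ate 65% to 70% of retail.
old_net = hardcover_price * (1 - 0.675)      # midpoint of that range

# New world: retail price down 30%, platform takes a 30% fee.
ebook_price = hardcover_price * 0.70
new_net = ebook_price * (1 - 0.30)

print(f"old net per copy: ${old_net:.2f}")   # $8.12
print(f"new net per copy: ${new_net:.2f}")   # $12.25
```

Even at a 30% lower retail price, more money per copy is left for publishers and authors, which is what makes the scenario’s higher royalties plausible.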

Hence the question: What will the impact be on journalism and on the bottom line of media companies?

Before attempting an answer, let’s reframe this in the dual context of the current business situation and of the news cycle. Managing a newsroom within today’s constraints is a difficult exercise. In daily newspapers, physical editorial space (i.e. column inches) is scarce, making long pieces a hard sell to the editor-in-chief. The web is more welcoming, although we all know that beyond a 600-word story, reader attention tends to fade, especially among younger audiences.

As for the news cycle, it accelerates and becomes increasingly complex, requiring more expertise and, in theory, more editorial resources, should editors decide to go below the surface. Take the debt crisis in Europe, for instance. The general framework is pretty simple: thirty-year-old traders in shark-frenzy mode going up against sixty-year-old politicians. The sharks prey on the politicians, who have failed to build decent economic leadership since the introduction of the euro system (coins and banknotes entered circulation on January 1st, 2002). More

Balkanizing the Web

Creeping Balkanization is the internet’s worst enemy. As worldwide literacy grows rapidly, the web’s expansion results in increasing pressure from corporate interests and regulatory nationalism. Risen from its arcane beginnings as a DARPA research project, the net has become a symbol of borderless communication between individuals and of unlimited access to knowledge. Unfortunately, the net is about to become a heavily controlled environment, serving two classes of citizens: a dominant class that sets the rules (technological, legal and commercial) and an underclass of citizens and consumers.

Consider these two macro trends:

The first one stems from the world’s linguistic evolution. As of today, there are about one billion English-speaking people worldwide, half of whom are native speakers. The number of speakers keeps growing as education improves, and this growth reinforces the prevalence of English as the internet’s main lingua franca. With 500m people, English accounts for 27.5% of the connected population. Chinese makes up the second language group with 400m people, 22.6% of the net population (and the mother of all government-mandated restrictions).

Beyond that, only the Spanish-speaking group (7.8% of the internet population) and the Japanese (5.3%) are above the 5% threshold. French accounts for 3.2% of internet users, with a global total of 57m.

Asia’s expanding literacy involves not only national languages (such as Mandarin or Hindi), but also the learning of English. In fact, the “use” of English could be much larger than official statistics show. Back in 2006, according to linguistics professor Braj Kachru, the “use” of English, as he called it, involved more than 500m people in India and China combined (see the story in the Asia Times).

In many European countries, teenagers’ exposure to the English-speaking internet (for example through illegal downloads of movies and series) is a powerful learning vector.

The second trend involves the telecommunications infrastructure. The physical world is increasingly connected. Take optical fiber: its 2009 global market (outside China) represented 171 million fiber-kilometers, 22% growth versus 2008; and that doesn’t reflect each fiber’s capacity to carry more data as carriers deploy more sophisticated modulation/demodulation circuitry. More

iPad: Publishers look for the winning formula

Among Australian media executives, like everywhere else, the talk of the town is the iPad. I was in Sydney this week, giving a talk at the Media 2010 conference. This gave rise to vibrant discussions of the ways in which the Apple device could transform our industry.

Among the speakers, the most enthusiastic about the opportunity was Marc Frons, the chief technology officer of New York Times Digital. (Marc oversees a huge team of 150 tech people in New York.) Three weeks before Steve Jobs’ January 27th iPad keynote, Marc dispatched a team of developers to Cupertino to crash-code an iPad application. According to Marc, the quality of the interface, the speed of the iPad and its software will make it a game changer for the media industry. From a commercial perspective, The New York Times still hasn’t decided how to handle its upcoming iPad business model: whether to charge, and how much. The context is the Times’ recent announcement of a paywall based on a metered system: a few pages a month for free, then you pay. (See our recent story, The Numbers behind the Paywall.)

For all publishers, many obstacles remain. The first one is dealing with Apple. The media executives I talked to in Sydney are unanimous: Steve Jobs’ company is difficult to work with. It is utterly secretive, willing to give only minimal information to content providers. “When we met with the Apple people here,” said an executive of Fairfax Digital, “they didn’t bring an iPad with them; they were telling us things like ‘it has a ten-hour battery life’… C’mon, guys…” Fairfax Digital operates 284 websites in Australia, including the sites of the two big dailies, The Sydney Morning Herald and The Age, viewed by 24m unique visitors each month, which is not bad in a country of 21m people.

Apple needs the publishing industry; it should treat it better. When it launched the iPhone in January 2007, the device was a revolutionary product in itself, one that could wait more than a year before opening its platform to third-party applications. This isn’t true of the iPad. Unlike the iPhone, the iPad will live or die by the content it delivers, especially for a device priced between $500 and $800. As of today, everyone is excited by the platform’s technological promise. But being able to offer a reading experience such as the Bonnier Mag+ concept or the recently released Wired demo is one thing. Finding the right economic model is another. More

Cashing in on stolen content

For publishers: How much money is lost because of stolen content? Of that, how much can realistically be reclaimed? Before getting into numbers, an overview.

In recent weeks, I’ve gained a first-hand media perspective on anti-piracy technology. The technology is Attributor’s, and the media is Agence France-Presse, one of the big three global newswires along with AP and Reuters. (Disclosure: I recently produced a 15,000-word report for AFP covering its strategy and its future. I’m no longer working for AFP, but I keep close links with this news company.)

Every day, AFP sends about 400 news items to Attributor, a fraction of its daily production. These items are then matched against a set of websites, both subscribers and non-subscribers to the newswire’s services. Using a simple interface, Attributor ranks the sites by their propensity to reuse content. For regular clients, the system shows how stories are used, what percentage is utilized and whether they are properly credited or even linked. For non-clients, it offers a great way to track down stolen content and to distinguish between minor abuses, honest mistakes and systematic infringement, since the data are viewed from statistical and time-related angles.
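Attributor’s technology is proprietary, but the core idea of measuring what percentage of a story is reused can be sketched naively with word shingles. This is a generic text-fingerprinting trick, not Attributor’s actual method:

```python
def shingles(text, k=5):
    """All k-word shingles (overlapping word windows) of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def reuse_ratio(source, candidate, k=5):
    """Fraction of the source's shingles that reappear in the candidate."""
    src = shingles(source, k)
    if not src:
        return 0.0
    return len(src & shingles(candidate, k)) / len(src)

wire = "the prime minister announced a new budget plan on tuesday in paris"
blog = "breaking the prime minister announced a new budget plan on tuesday in paris more soon"
print(f"{reuse_ratio(wire, blog):.0%} of the wire story reused")  # 100%
```

A production system would add hashing, near-duplicate tolerance and large-scale crawling, but the percentage-reused output is of the same nature as the reports described here.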

For obvious reasons, I can’t disclose which media outlets I’ve been reviewing in detail. Let’s simply say that the results are stunning. AFP material is everywhere. In short, there are three types of abuse of copyrighted material.

- The first one is insufficient attribution by a client. Typically, a journalist puts his or her byline on a story largely taken from a newswire. In most cases, the byline will be reduced to initials along with an “avec agence” (with newswire) mention, like this one, for instance, where the text borrowed from AFP is automatically highlighted…

For this piece, we can safely say that “M.D.’s” input was minimal, and it would have been nicer to simply put the mention “AFP” at the bottom of this cut-and-paste performance. (In that particular case, the newswire story is itself an explicit recycling of a scoop by Le Figaro, a typical illustration of the internet’s endless content loop.) From a legal perspective, there is no particular issue. It’s only an ethical matter.

Another case involves misuse of content by bloggers. In most cases, bloggers have no clue about the meaning and use of copyright. And the big media sites that host them don’t really help. Typically, a young and passionate blogger “covering” his beat will simply take, in good faith, an entire AFP (or AP or Reuters) story and paste it on his blog, this time with proper attribution. Except that he has no right whatsoever to do so. I’ve seen one big French site, whose boss loathes the AFP, end up with 60% of its content illegally “borrowed” from the AFP (confronted with the facts, the site has made serious efforts to correct the situation). Hypocritically, many sites shield themselves behind the fine print of the terms of service buried deep in their site, reminding bloggers not to steal copyrighted content. The fact is, most of them, including big media companies, do not properly educate their legally challenged blogging contributors. More

The iParanoid Scenario

I’m not through with the iPad. Actually, I’m just warming up. For today’s column, let’s focus on the perils of a closed system.

I live in a country (France) where censorship is a big deal. It comes mostly from greedy celebrities (sorry for the truism); they use a legal system that largely favors them. Often, they find a sympathetic judge when it comes to extracting money as compensation for a supposed privacy violation or some other unauthorized disclosure. Convictions are frequent and expensive; they can lead to the seizure of a magazine or even of a book. France has a long history of such practices. In the early sixties, the country was waging a colonial war in Algeria. Back then, for the most avid news readers, the game was to get the weekly magazine L’Express at the kiosk as early as possible, before the French authorities seized it. (No such risk with today’s Gallic newsmagazines.)

Let me reframe this in the context of the upcoming iPad era. An iPad newsmagazine publishes an investigative piece that triggers a legal injunction: remove it from the publication or face a $10,000 penalty per day. No, says the publisher, who has guts and money (proof this is a fiction); we want to fight in court. The plaintiff then turns to Apple. Same talk: face a huge fine, or remove the offending content. Furthermore, say the plaintiff’s attorneys, thanks to your permanent and unique electronic link to your proprietary devices, and the fact that the electronic kiosk now resides on the device (yes, we can argue that point, they say), you must extend the deletion to each user’s tablet. C’mon, you keep pushing updates and various bits of content to these gizmos; you can push a delete instruction.

What would Apple do? This is a question of balance of power. If the legal action involves some neuron-challenged celebrity, chances are Apple won’t balk. But what if Nicolas Sarkozy or his whispering-singer wife were the plaintiffs? Truth is, given the pattern of legal actions against the press in France, it is more than likely that a French judge would be tempted to order the immediate remote deletion of presumably infringing content. Then we would see a replay of what happened last summer in the 1984 case, when Amazon remotely deleted copies of George Orwell’s novel from buyers’ Kindles over a copyright issue. Amazon’s founder Jeff Bezos apologized profusely for the mishap (plus, it involved 1984, not Alice in Wonderland; tough luck). More

The Death of Joe Average

Forget Joe Average; he’s dead. Ten or twenty years ago, analyzing audiences was much easier. Media enjoyed well-defined and relatively unchanging target groups. In television, networks had a precise idea of who was watching what, and specialized cable outlets knew their viewers pretty well. Newspapers had their content structure sliced to fit various audiences by centers of interest, age groups and opinions. At the time, content was bundled together and delivered on a single platform for a flat fee, on a per-copy or subscription basis: the popular sports section or the classifieds subsidized the expensive but more elitist foreign coverage, all for a dollar or its equivalent in euros.

In today’s marketplace, every single piece of information lies in the open, naked, stripped of a set value. People don’t buy content in bulk; they peck at it, leaving to a third party (the unstable advertising market) the burden of financing it. As content scatters across the internet, so does the audience. The money has shifted as well, with US households now spending $260 a year on digital services (cell phones, cable, broadband, satellite) that didn’t exist a generation ago. (Even the poorest families still spend $180 a year.) This, in itself, makes it hard to hope for an extra $20 a month for news content that is widely available for free.

But the real competition is now for time and attention. Last December, in the United States, people spent 64 hours online but stayed only 57 seconds on each web page, according to Nielsen. OK, it’s an average, and I’m about to kill that very notion in a minute. But still, it points to the time-allocation challenge we face. Again, last December, American web users spent 6:24hrs on Facebook, 2:56hrs on Yahoo properties, 2:21hrs on various Google sites and 2:03hrs on Microsoft sites. As for the time spent on newspapers, it remains stable: around 20 minutes a month, whether you look at the US or the European markets.

The shift towards Facebook seems to be accelerating, and the site is becoming the absolute internet attractor: the amount of time spent per person on Facebook has tripled in just one year; in the meantime, Google gained only 10%, and Yahoo and Microsoft slipped slightly. Interestingly, the Facebook time explosion even occurred at the expense of online video: at 3:13hrs per user last December, it remains quite high, but it is down slightly, by 3.4%. And the Facebook effect probably explains why people are visiting a smaller number of websites: 83 domains visited per month, a surprising 23% drop in just a year.

Advertising spending is shifting as well. In the US market, between August 2008 and August 2009, the amount brands spent online decreased by 2%, while the money spent on the top social networks and top blogging sites increased by 119%. Unfortunately, this is done on the cheap: based on 2009 revenue estimates, Facebook grosses about $1.50 per user per year in ad revenue. Just to put things in an unpleasant perspective, this compares to the $647 a newspaper such as the Washington Post gets from its print advertising for each of its buyers or subscribers. Make that $215 for each of its readers, assuming a rate of three readers per copy. (This is based on full-year 2008.)
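The per-reader figure is simple division; spelled out, with all inputs taken from the comparison above:

```python
# Print-versus-Facebook ad revenue per person, from the figures above.
facebook_arpu = 1.50        # Facebook ad revenue per user per year (2009 est.)
wapo_per_buyer = 647        # WaPo print ad revenue per buyer (full-year 2008)
readers_per_copy = 3        # assumed pass-along rate

per_reader = wapo_per_buyer / readers_per_copy
print(f"~${per_reader:.0f} per print reader vs ${facebook_arpu:.2f} per Facebook user")
print(f"a gap of roughly {per_reader / facebook_arpu:.0f} to 1")
```

Even after spreading the print figure across three readers per copy, the gap with social-network monetization remains two orders of magnitude.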

Coming back to the title of this column: analyzing trends has become more complicated. Audiences are no longer monolithic, and their breakdowns are hard to ascertain. This uncertainty makes the average a less and less relevant notion. More

A New Gallic Idea: Taxing Google

The French cultural elite has come up with a bunch of ideas to stimulate the legal consumption of digital goods. The basic principles are stunningly original: subsidize and tax. These creations are detailed in a report commissioned by the Président de la République from the Ministry of Culture. This is the way it works here: when a problem plagues the private sector, the executive branch tasks clever, carefully picked fellows with writing a report. It involves hearings (about a hundred in this case) held behind closed doors, off the record; no one can figure out who stands for what.

This time, the selected authors of the report, titled “Création et Internet” (available here), are: Patrick Zelnik, a music producer; Jacques Toubon, a 69-year-old former all-purpose minister (including Culture in 1993-1995); and Guillaume Cerutti, the CEO of Sotheby’s France. Not exactly digital front-runners. As a music producer, Zelnik has brilliantly missed the digital train; Toubon has seen more mice in government offices than on his desk; and Cerutti runs an auction house where sales are concluded with a hammer blow, not a touchpad click.

One of the most spectacular strokes of inspiration involves the creation of a taxpayer-subsidized “Online Music Card”. It could work like this: a young internet user, a compulsive music downloader, buys a card for €20-€25. But the card carries a face value of €50. Then, after a while (expect a few years for roughly a million young people above 24), the magic happens: this crowd mutates into legal-download addicts and forgets the appeal of illegal net music (which, in France, is 20 times bigger than the legal variety). That’s terrific news for Apple: its iTunes cards could be bought in bulk using French taxpayers’ money. Bear with me: that’s the Egg of Columbus. How come we didn’t think of it earlier? Flood the young addicted-to-free generation with subsidies to reverse the anything-goes, culture-copyright-looting tsunami! You know what? Sometimes, I’m proud of my country.
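For what it’s worth, the card’s arithmetic, using the report’s prices and face value and the rough one-million-user horizon mentioned above (the midpoint of the €20-€25 range is my own simplification):

```python
# Taxpayer cost of the "Online Music Card" scheme, per the figures above.
face_value = 50.0
user_price = 22.5            # midpoint of the €20-€25 range
subsidy_per_card = face_value - user_price

users = 1_000_000
total = subsidy_per_card * users
print(f"€{subsidy_per_card:.2f} subsidy per card")   # €27.50
print(f"€{total / 1e6:.1f}m for a million cards")    # €27.5m
```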

Second idea, my favorite: taxing Google. The concept, so to speak, is the following. More