
Frédéric Filloux

Google vs. the press: avoiding the lose-lose scenario

online publishing | January 20, 2013


Google and the French press have been negotiating for almost three months now. If there is no agreement within ten days, the government is determined to intervene and pass a law instead. This would mean serious damage for both parties. 

Update on the new corporate tax system: read this story in Forbes by the author of the report quoted below.

Since last November, about twice a week and for several hours at a time, representatives from Google and the French press have been meeting behind closed doors. To ease tensions, the government appointed an experienced mediator. But mistrust and incomprehension still plague the discussions, and the clock is ticking.

In the currently stalled process, the whole negotiation revolves around cash changing hands. Early on, representatives of media companies were asking Google to pay €70m ($93m) per year for five years. This would be “compensation” for “abusively” indexing and linking their contents and for collecting 20-word snippets (see a previous Monday Note: The press, Google, its algorithm, their scale). For perspective, this €70m is roughly equivalent to the 2012 digital revenue of the newspapers and newsmagazines that constitute the IPG association (General and Political Information).

When the discussion turned to structuring and labeling such a cash transfer, IPG representatives dismissively left the question to Google: “Dress it up!”, they said. Unsurprisingly, Google wasn’t ecstatic about this rather blunt approach. Still, the search engine feels this might be the right time to hammer out a deal with the press, instead of perpetuating a latent hostility that could later explode and cost much more. At least, this is how Google’s European team seems to feel. (In the company’s hyper-centralized power structure, management in Mountain View seems slow to warm up to the idea.)

In Europe, bashing Google is more popular than ever. Not just Google, but all the US-based internet giants, widely accused of killing old businesses (such as Virgin Megastore, a retail chain that also made every possible mistake). But the actual core issue is tax avoidance. Most of these companies hired the best tax lawyers money can buy and devised complex schemes to avoid paying corporate taxes in EU countries, especially the UK, Germany, France, Spain and Italy. The French Digital Advisory Board (set up by Nicolas Sarkozy and generally business-friendly) estimated last year that Google, Amazon, Apple’s iTunes and Facebook had a combined revenue of €2.5bn-€3bn but each paid on average only €4m in corporate taxes, instead of the roughly €500m a 20% to 25% tax rate would yield. At a time of fiscal austerity, most governments see this (entirely legal) tax avoidance as politically unacceptable. In such a context, Google is the target of choice. In the UK for instance, Google made £2.5bn (€3bn or $4bn) in 2011, but paid only £6m (€7.1m or $9.5m) in corporate taxes. To add insult to injury, in an interview with The Independent, Google’s chairman Eric Schmidt defended his company’s tax strategy in the worst possible manner:

“I am very proud of the structure that we set up. We did it based on the incentives that the governments offered us to operate. It’s called capitalism. We are proudly capitalistic. I’m not confused about this.”

Ok. Got it. Very helpful.

Coming back to the current negotiation about the value of a click: the question was quickly handed over to Google’s spreadsheet jockeys, who came up with the required “dressing up”. If the media accepted the use of the full range of Google products, additional value would be created for the company; a certain amount could then be derived from said value. That’s the basis for a deal reached last year with the Belgian press (the agreement is shrouded in a stringent confidentiality clause).

Unfortunately, the French press began to eliminate most of the eggs in the basket, one after the other, leaving almost nothing to “vectorize” the transfer of cash. Almost three months into the discussion, we are stuck with antagonistic positions. The IPG representatives are basically saying: we don’t want to subordinate ourselves further to Google by adopting opaque tools that we can find elsewhere. Google retorts: we don’t want to be treated as another deep-pocketed “fund” that the French press will tap into forever without any return for our business; plus, we strongly dispute any notion of “damages” to be paid for linking to media sites. Hence the gap between the amount of cash asked by one side and what is (reluctantly) acceptable to the other.

However, I think both parties vastly underestimate what they’ll lose if they don’t settle quickly.

The government tax howitzer is loaded with two shells. The first one is a bill (drafted by none other than IPG’s counsel, see PDF here) which introduces the disingenuous notion of “ancillary copyright”. Applied to the snippets Google harvests by the thousands every day, it creates some kind of legal ground for taxing the company the hard way. The construct is adapted from the music industry, in which the ancillary copyright levy ranges from 4% to 7% of the revenue generated by a sector or a company. A 7% rate applied to the revenue officially declared by Google in France (€138m) would translate into less than €10m, pocket change for a company that in fact generates about €1.5 billion from its French operations.
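The levy arithmetic above can be checked with a quick back-of-the-envelope sketch (all figures come from the text; the 7% rate is the top of the music-industry range quoted):

```python
# Ancillary-copyright levy sketch, using the figures quoted in the text.
declared_revenue_fr = 138e6  # Google's officially declared French revenue (EUR)
actual_revenue_fr = 1.5e9    # estimated actual French revenue (EUR)
levy_rate = 0.07             # top of the 4%-7% music-industry range

levy_on_declared = levy_rate * declared_revenue_fr
levy_on_actual = levy_rate * actual_revenue_fr

print(f"Levy on declared revenue: EUR {levy_on_declared/1e6:.1f}m")  # ~9.7m, "less than EUR 10m"
print(f"Levy on actual revenue:   EUR {levy_on_actual/1e6:.1f}m")    # an order of magnitude more
```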

That’s where the second shell could land. Last Friday, the Ministry of Finances released a report on the tax policy applied to the digital economy, titled “Mission d’expertise sur la fiscalité de l’économie numérique” (PDF here). It’s a 200-page opus, supported by no fewer than 600 footnotes. Its authors, Pierre Collin and Nicolas Colin, are members of the French public elite (one from the highest administrative jurisdiction, the Conseil d’Etat, the other from the equivalent of the General Accounting Office; Nicolas Colin is also a former tech entrepreneur and a writer). The Collin & Colin Report, as it’s now dubbed, is based on a set of doctrines that are also coming to the surface in the United States (as demonstrated by the multiple references in the report).

To sum up:
— The core of the digital economy is now the huge amount of data created by users. The report categorizes different types of data: “Collected Data” are gathered through cookies, whether the user allows it or not; such datasets include consumer behaviors, affiliations, personal information, recommendations, search patterns, purchase history, etc. “Submitted Data” are entered knowingly, through search boxes, forms, timelines or feeds in the case of Facebook or Twitter. And finally, “Inferred Data” are byproducts of various processing, analytics, etc.
— These troves of monetized data are created by the free “work” of users.
— The location of such data collection is independent from the place where the underlying computer code is executed: I create tangible value for Amazon or Google with clicks performed in Paris, while those clicks are processed in a server farm located in the Netherlands or the United States, and most of the profits land in a tax shelter.
— The location of the value thus created by the “free work” of users is currently dissociated from the location of the tax collection. In fact, it escapes taxation altogether.

Again, I’m quickly summing up a lengthy analysis, but the conclusion of the Collin & Colin report is obvious: sooner or later, the value created and the various taxes associated with it will have to be reconciled. For Google, the consequences would be severe: instead of the €138m of official revenue admitted in France, the tax base would grow to €1.5bn in revenue and about €500m in profit; that could translate into €150m in corporate tax alone, instead of the mere €5.5m currently paid by Google. (And I’m not counting the 20% VAT that would also apply.)
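The scale of that reconciliation can be expressed numerically (figures from the text; the roughly 30% corporate rate is implied by €150m on €500m of profit):

```python
# Reconciling the tax base: current situation vs. the report's scenario (EUR).
current_corporate_tax = 5.5e6    # what Google currently pays in France

reconciled_profit = 500e6        # estimated French profit per the text
corporate_rate = 0.30            # implied by the EUR 150m figure in the text

reconciled_tax = corporate_rate * reconciled_profit
print(f"Corporate tax: EUR {current_corporate_tax/1e6:.1f}m -> EUR {reconciled_tax/1e6:.0f}m")
print(f"Multiplier: x{reconciled_tax / current_corporate_tax:.0f}")  # roughly 27x
```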

Of course, this intellectual construction will be extremely difficult to translate into enforceable legislation. But the French authorities intend to rally other countries and furiously lobby the EU Commission to come around to their view. It might take years, but it could dramatically impact Google’s economics in many countries.

More immediately for Google, a parliamentary debate over the Ancillary Copyright bill would open a Pandora’s box. From the Right to the Left, encouraged by François Hollande’s administration, lawmakers will outbid each other in trashing the search engine and, beyond it, every large internet company.

As for members of the press, “They will lose too”, a senior official tells me. First, because of the complications in setting up the machinery the Ancillary Copyright Act would require, they would have to wait about two years before getting any dividends. Second, governments, the present one as well as the past Sarkozy administration, have always been displeased with what they see as the French press’s “addiction to subsidies”; they intend to drastically reduce the €1.5bn in public aid. If the press gets its way through a law, according to several administration officials, the Ministry of Finances will feel relieved of its obligations towards media companies that don’t innovate much despite large influxes of public money. Conversely, if the parties are able to strike a decent business deal on their own, the French press will quickly get some “compensation” from Google and might still keep most of its taxpayer subsidies.

As for the search giant, it will indeed have to take a small stab but, for a while, will be spared the chronic pain of a long and costly legislative fight, and the contagion that goes with it: the French bill would be dissected by neighboring governments only too glad to adapt and improve it.

Next week: When dealing with Google, better use a long spoon; Why European media should rethink their approach to the search giant.


Linking: Scraping vs. Copyright

Uncategorized | January 6, 2013


Irish newspapers created quite a stir when they demanded a fee for incoming links to their content. Actually, this is a mere prelude to a much more crucial debate on copyright, robotic scraping and the synthetic re-creation of content from scraps.

The controversy erupted on December 30th, when an attorney from the Irish law firm McGarr Solicitors exposed the case of one of its clients, the Women’s Aid organization, being asked to pay a fee to Irish newspapers for each link sent to them. The main quote from McGarr’s post:

They wrote to Women’s Aid, (amongst others) who became our clients when they received letters, emails and phone calls asserting that they needed to buy a licence because they had linked to articles in newspapers carrying positive stories about their fundraising efforts.
These are the prices for linking they were supplied with:

1–5 links: €300.00
6–10 links: €500.00
11–15 links: €700.00
16–25 links: €950.00
26–50 links: €1,350.00
50+ links: negotiable

They were quite clear in their demands. They told Women’s Aid “a licence is required to link directly to an online article even without uploading any of the content directly onto your own website.”

Recap: The Newspapers’ agent demanded an annual payment from a women’s domestic violence charity because they said they owned copyright in a link to the newspapers’ public website.

Needless to say, the twittersphere, the blogosphere and, by and large, every self-proclaimed cyber moral authority reacted in anger to the Irish newspapers’ demands, which go against common sense as well as the most basic business judgement.

But on closer examination, the Irish dead-tree media (soon to be dead for good if they stay on that path) are just the tip of the iceberg for an industry facing issues that go well beyond its resistance to the culture of web links.

Try googling the following French legalese: “A défaut d’autorisation, un tel lien pourra être considéré comme constitutif du délit de contrefaçon”. (It means any unauthorized incoming link to a site may be treated as copyright infringement.) This search gets dozens of responses. OK, most come from large consumer brands (carmakers, the food industry, cosmetics) that don’t want a link attached to an unflattering term sending the reader to their product description. Imagine the word “lemon” linked to a car brand.

Until recently, you couldn’t find many media companies invoking such a no-link policy. Only large TV networks such as TF1 or M6 warn that any incoming link is subject to written approval.

In reality, except for obvious libel, no-link policies are rarely enforced. M6 Television even lost a court case against a third-party website that was deep-linking to its catch-up programs. As for the Irish newspapers, despite their dumb rate card for links, they claimed to be open to “arrangements” (in the ill-chosen case of a non-profit organization fighting violence against women, flexibility sounds like a good idea).

Having said that, such posturing reflects a key fact: traditional media, whether newspapers or broadcasters, send contradictory messages when it comes to links, which are simply not part of their original culture.

The position paper of the National Newspapers of Ireland association deserves a closer look (PDF here). It actually contains a set of concepts that resonate with the position defended by the European press in its current dispute with Google (see a background story in the NYTimes); here are a few:

— It is the view of NNI that a link to copyright material does constitute infringement of copyright, and would be so found by the Courts.
— [NNI then refers to a decision of the UK Court of Appeal in a case involving Meltwater Holding BV, a company specializing in media monitoring], which upheld the findings of the High Court, including:
– that headlines are capable of being independent literary works and so copying just a headline can infringe copyright
– that text extracts (headline plus opening sentence plus “hit” sentence) can be substantial enough to benefit from copyright protection
– that an end user client who receives a paid-for monitoring report of search results (incorporating a headline, text extract and/or link) is very likely to infringe copyright unless they have a licence from the Newspaper Licensing Agency or directly from a publisher.
— NNI proposes that, in fact, any amendment to the existing copyright legislation with regard to deep-linking should specifically provide that deep-linking to content protected by copyright, without respect for the linked website’s terms and conditions of use and without regard for the publisher’s legitimate commercial interest in protecting its own copyright, is unlawful.

Let’s face it: most publishers I know would not disagree with the basis of such statements. In the many jurisdictions where a journalist’s most mundane work is protected by copyright laws, what can be seen as an acceptable linking policy?

The answer seems to revolve around matters of purpose and volume.

To put it another way, if a link serves as a kind of helper or reference, publishers will likely tolerate it. (In fairness, NNI explicitly “accepts that linking for personal use is a part of how individuals communicate online and has no issue with that”, even if the notion of “personal use” is pretty vague.) But if the purpose is commercial and the linking is aimed at generating traffic, NNI raises the red flag (even though the legal grounds are rather brittle). Hence the particular Google case, which also carries a notion of volume, as the search engine harvests thousands of sources for its Google News service.

There is a catch. The case raised by NNI and its putative followers is weakened by a major contradiction: everywhere, Ireland included, news websites invest a great deal of resources to achieve the highest possible rank in Google News. Unless specific laws are passed (German lawmakers are working on such a bill), attorneys will have a hard time invoking copyright infringements that in fact stem from the very Search Engine Optimization tactics publishers encourage.

But there might be more at stake. For news organizations, the future carries obvious threats that require urgent consideration: in coming years, we’ll see great progress, so to speak, in automated content production systems. With or without link permissions, algorithmic content generators will be able (in fact, already are) to scrape sites’ original articles, then aggregate and reprocess them into seemingly original content, without any mention, quotation, link, or reference of any kind. What awaits the news industry is much more complex than dealing with links from an aggregator.

It boils down to this: the legal debate on linking as copyright infringement will soon be obsolete. The real question will emerge as a much more complex one: should a news site protect itself from being “read” by a robot? The consequences of doing so are stark: except for a small cohort of loyal readers, the site would purely and simply vanish from cyberspace. Conversely, by staying open to searches, the site exposes itself to forms of automated and stealthy depletion that will be virtually impossible to combat. Is the situation binary (allowing “bots” or not) or is there middle ground? That’s a fascinating playground for lawyers and techies, for parsers of words and bits.


Quartz: Interesting… and uncertain

mobile internet, online publishing | September 30, 2012


The Atlantic’s new digital venture, Quartz, is aimed at global business people. It innovates in many radical ways, but its business model remains dicey.

Two years ago, Atlantic Media’s president Justin Smith was interviewed by the New York Times. The piece focused on the digital strategy he had successfully executed:

“We imagined ourselves as a Silicon Valley venture-backed startup whose mission was to attack and disrupt The Atlantic. In essence, we brainstormed the question: What would we do if the goal was to aggressively cannibalize ourselves?”

In most media companies, that kind of statement would have drawn a volley of rotten tomatoes. The Atlantic’s disruptive strategy gave birth to a new offspring: Quartz, launched a couple of weeks ago.

Quartz is a fairly light operation based in New York and headed by Kevin Delaney, a former managing editor at Its staff of 25 was pulled together from great brands in business journalism: Bloomberg, The Wall Street Journal, The Economist and the New York Times. According to the site’s official introduction, this is a team with a record of reporting in 119 countries and speaking 19 languages. Not exactly your regular gang of digital serfs or unpaid contributors that most digital pure players are built on.

This professional maturity, along with the backing of the Atlantic Media Company, a 155-year-old organization, might explain the set of rather radical options that make Quartz so interesting.

Here are a few:

Priority to mobile use. Quartz is the first of its kind to deliberately reverse the old hierarchy of traditional web (for PC) first and mobile interfaces second. This is becoming a big digital publishing debate, as many of us strongly believe we should go mobile-first and design our services accordingly (I fall in that category).

Quartz’s founders mentioned market research showing their main target (people on the road interested in the global economy) uses 4.21 mobile devices on average (I love those decimals): one laptop, one iPad, and two (!) BlackBerrys. (Based on multiple observations, I’d rather say one BlackBerry and one iPhone.)

No native mobile app. Similarly, Quartz went for an open HTML5 design instead of native apps. We went through this before in the Monday Note: apps are mandatory for CPU-intensive features such as heavy graphics, 3D rendering and games; for news, HTML5, as messy as it is, does the job just fine. In addition, Quartz relies on “responsive design”, which allows a web site to dynamically morph in response to the specific connected device (captures are not to scale):

(Screenshots showed the same Quartz page rendered on a desktop screen, an iPad in landscape and portrait modes, a small tablet, an iPhone, and a small phone.)

(I used Matt Kersley’s Responsive Design Test Site to capture the Quartz renderings; it’s an excellent tool to see how your site will look on various devices.)

A river-like visual structure. Quartz is an endless flow of stories that automatically load one below the other as you scroll down. The layout is therefore pretty straightforward: no page-jumps, no complicated navigational tools, just a lateral column with the latest headlines and a main window where articles concatenate. Again, the priority given to mobile use dictates design purity.

A lightweight technical setup. Quartz does not rely on a complex Content Management System for its production, but on WordPress. In doing so, it shows the level of sophistication reached by what started as a simple blog platform. Undoubtedly, the Quartz design team invested significant resources in finding the best WordPress developers, and the result speaks for itself (despite a few bugs, sure to be short-lived).

Editorial choices. Instead of the traditional news “beats” (national, foreign, economy, science…), Quartz went boldly for what it calls “obsessions”. This triggered a heated debate among media pundits: among others, read C.W. Anderson’s piece What happens when news organizations move from “beats” to “obsessions”? on the Nieman Journalism Lab. Admittedly, the notion of “beats” sounds a bit old-fashioned. Those who have managed newsrooms know beats encourage fiefdoms, fence-building and bureaucracy. Editors love them because they’re much simpler to manage on a day-to-day basis: editorial meetings can be conducted on the basis of a rigid organizational chart, and it’s much easier to deal with a beat reporter or his/her desk chief than with some fuzzy “obsession” leader. At Quartz, the current “obsessions” appear in a discreet toolbar. They include China Slowdown, The Next Crisis, Modern States, Digital, Money, Consumer Class, Startups, etc.

To me, this “obsessive” way of approaching news is far more modern than the traditional “beat” mode. First, it conveys the notion of adjustability to news cycles, as “obsessions” can, and should, vary. Second, it breeds creativity and transversal treatment among writers (most business publications are quite boring precisely because of their “silo culture”). Third, digital journalism is intrinsically prone to “obsession”, i.e. strong choices, angles, decisions. For sure, facts are sacred, but they are everywhere: when reporting on the latest alarming report from the World Bank, there is no need to repeat what lies just one click away; just sum up the main facts and link back to the original source. Still, this shouldn’t preclude balanced treatment, fairness and everything in the basic ethics formulary. (Having said that, let’s be realistic: managing a news flow through “obsessions” is fine for an editorial staff of 20, certainly not for hundreds of writers.)

Quartz’s business side. Quartz is a free publication. No paywall, no subscription, nothing. Very few ads either. Again, it opted for a decisive model by getting rid of the dumb banner. And that’s a good thing: traditional display advertising kills design, crappy targeting practices irritate readers, and both bring less and less money. (Most news sites are now down to single-digit CPMs [Cost Per Thousand page views], and it will get worse as ad exchanges keep gaining power, buying remnant inventory in bulk and reselling it for next to nothing.) Instead, Quartz started with four sponsors: Chevron, Boeing, Credit Suisse and Cadillac, all showing quality brand content. It’s obviously too early to assess this strategy. But Quartz’s business people opted to be extremely selective in their choice of sponsors (one carmaker, one bank, etc.), with rates negotiated accordingly.

Also, brands are displayed prominently, with embedded content instead of the usual formats. Quartz is obviously shooting for very high CPMs. At the very least, they are right to try. I recently met a European newspaper that extracts €60 to €100 CPMs by tailoring ads and making special ad placements for a small list of advertisers.
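The revenue gap between remnant and tailored CPMs is dramatic. A sketch (the pageview figure is a hypothetical placeholder; the CPM values come from the ranges quoted in the text):

```python
# Monthly ad revenue at different CPMs for a hypothetical 10m monthly pageviews.
monthly_pageviews = 10_000_000  # hypothetical

def ad_revenue(pageviews: int, cpm_eur: float) -> float:
    """CPM is the price per thousand page views."""
    return pageviews / 1000 * cpm_eur

print(f"Remnant, EUR 1 CPM:   EUR {ad_revenue(monthly_pageviews, 1):,.0f}")   # 10,000
print(f"Tailored, EUR 80 CPM: EUR {ad_revenue(monthly_pageviews, 80):,.0f}")  # 800,000
```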

Again, such a strategy is fine for a relatively small operation: as it is now, Quartz should not burn more than $3m-4m a year. Betting on high CPMs is far more difficult for large websites, but niches can be extremely profitable. (For more on Quartz’s economics, read Ken Doctor’s piece, also on Nieman.)

To sum up, several elements will be key to Quartz’s success.

1. Quickly build a large audience. The selected advertisers are not philanthropists; they want eyeballs too. Because of its editorial choices, Quartz will never attract HuffPo-like audiences. To put things in perspective, the Economist gets about 7m unique browsers a month (far fewer unique visitors) and has 632,000 readers on its app.

2. Quartz bets on foreign audiences (already 60% of the total). Fine. But doing so is extremely challenging. Take The Guardian: 60 million unique visitors per month (one third in the UK, another third in the US, and the rest abroad), formidable journalistic firepower, and a mere £40m in revenue (versus $160m in advertising alone for a site with half of the Guardian’s audience; that’s a 5-to-1 ratio per reader).
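The 5-to-1 per-reader ratio works out as follows (audience and revenue figures from the text; the GBP/USD rate of about 1.6 is a 2012-era assumption of mine):

```python
# Per-unique-visitor ad revenue: The Guardian vs. the comparison site in the text.
gbp_to_usd = 1.6                          # assumed 2012-era exchange rate

guardian_revenue_usd = 40e6 * gbp_to_usd  # GBP 40m total revenue
guardian_uniques = 60e6                   # monthly unique visitors

other_revenue_usd = 160e6                 # advertising alone
other_uniques = 30e6                      # "half of the Guardian's audience"

guardian_per_reader = guardian_revenue_usd / guardian_uniques  # ~$1.07
other_per_reader = other_revenue_usd / other_uniques           # ~$5.33
print(f"Ratio per reader: {other_per_reader / guardian_per_reader:.0f} to 1")  # 5 to 1
```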

3. Practically, this means Quartz will have to deploy the most advanced techniques to qualify its audience: it will be doomed if it is unable to tell its advertisers (more than four, we hope) that it can identify a cluster of readers traveling to Dubai more than twice a year, or a high-income group living in London primarily interested in luxury goods and services (see a previous Monday Note on extracting readers’ value through Big Data).

4. In the end, Quartz is likely to face a growth question: staying in a niche, or broadening its reach (and its content, and its staff) to satisfy the ad market. Once its audience levels off, it might have no choice but to find a way to make its readers pay. That should not be a problem, as it focuses on a rather solvent segment.


Google’s Amazing “Surveywall”

advertising | September 9, 2012


How Google could reshape online market research and also reinvent micro-payments. 

Eighteen months ago, under non-disclosure, Google showed publishers a new transaction system for inexpensive products such as newspaper articles. It worked like this: to gain access to a web site, the user is asked to participate in a short consumer research session: a single question, with a set of images leading to a quick choice. Here are examples Google recently made public when launching its Google Consumer Surveys:

Fast, simple and efficient. As long as the question is concise and sharp, it can be about anything: pure market research for packaging or a product feature, surveying a specific behavior, evaluating a service, an intention, an expectation, you name it.

This caused me to wonder how such a research system could impact digital publishing and how it could benefit web sites.

We’ll start with the big winner: Google, obviously. The giant wins on every side. First, Google’s size and capillarity put it in a unique position to probe millions of people in a short period of time. Indeed, the more marketers rely on its system, the more Google gains in reliability, accuracy and granularity (i.e. the ability to probe a segment of blue-collar pet owners in Michigan or urbanite coffee-drinkers in London). The bigger it gets, the better it performs. In the process, Google disrupts the market research sector with its customary deflationary hammer. By playing on volume, automation (no more phone banks) and algorithms (as opposed to panels), the search engine is able to drastically cut prices: by 90% compared to traditional surveys, says Google. Expect $150 for 1,500 responses drawn from the general US internet population; targeting a specific group can cost five times as much.
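Google’s quoted pricing translates into a simple per-response cost (all figures from the text):

```python
# Google Consumer Surveys pricing sketch, using the numbers quoted above.
general_price = 150      # USD for a run on the general US internet population
general_responses = 1500

cost_per_response = general_price / general_responses  # $0.10
targeted_cost_per_response = cost_per_response * 5     # "five times as much"

print(f"General population: ${cost_per_response:.2f} per response")
print(f"Targeted segment:   ${targeted_cost_per_response:.2f} per response")
```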

The second upside for Google: it gets a bird’s-eye view of all possible subjects of consumer research. Aggregated, anonymized, recompiled, sliced every possible way, these multiple datasets further deepen Google’s knowledge of consumers, which is nice for a company that sells advertising. By the way, Google gets paid for research it then aggregates into its own data vault. Each answer collected contributes a smallish amount of revenue; it will be a long while, if ever, before such activity shows up in Google’s quarterly results. But the value is not there: it resides in the data the company gets to accumulate.

The marketers’ food chain should be happy. With the notable exception of those who make a living selling surveys, every company, business unit or department in charge of a product line or a set of services will be able to run a poll quickly, efficiently and cheaply. Of course, legacy pollsters will argue that Google Consumer Surveys are crude and inaccurate. They will be right. For now. Over time the system will refine itself, and Google will have put a big lock on another market.

What’s in Google Consumer Surveys for publishers whose sites would host a surveywall? In theory, the mechanism finally solves the old quest for tiny, friction-free transactions: replace the paid-for zone with a survey zone to which access is granted after answering a quick question. Needless to say, it can’t be recommended for all sites. We can’t reasonably expect a general news site, not to mention a business news one, to adopt such a scheme. It would immediately irritate the users and somehow taint the content.

But a young audience should be more inclined to accept such a surveywall. Younger surfers will always resist any form of payment for digital information, regardless of quality, usefulness or relevance. Free is the norm. Or its illusion. Young people have already demonstrated their willingness to give up their privacy in exchange for free services such as Facebook; they have yet to realize they paid the hard price, but that’s another subject.
By contrast, a surveywall would at least be more straightforward, more honest: users give a split second of their time by clicking on an image or checking a box to access the service (whether it is an article, a video or a specific zone). The system could even be experienced as fun, as long as the question is cleverly put.
Economically, having one survey pop up from time to time, for instance when the user reconnects to a site, makes sense. Viewed from a spreadsheet (I ran simulations with specific sites and varying parameters), it could yield more money than the cheap ads currently in use. This, of course, assumes broad deployment by Google, with thousands of market research sessions running at the same time.
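The kind of spreadsheet simulation mentioned above can be sketched in a few lines. Every parameter below is a hypothetical placeholder of mine (session count, pageviews per session, ad CPM, the publisher’s share per answered survey, the answer rate), not a figure from the article:

```python
# Hypothetical surveywall-vs-cheap-ads comparison (all parameters are assumptions).
monthly_sessions = 2_000_000
pageviews_per_session = 4

# Cheap display ads: single-digit CPM, say $2 per thousand pageviews.
ad_cpm = 2.0
ad_revenue = monthly_sessions * pageviews_per_session / 1000 * ad_cpm

# Surveywall: one question per session, assumed $0.03 publisher share, 40% answer rate.
survey_share = 0.03
answer_rate = 0.40
survey_revenue = monthly_sessions * answer_rate * survey_share

print(f"Display ads: ${ad_revenue:,.0f}/month")     # $16,000
print(f"Surveywall:  ${survey_revenue:,.0f}/month")  # $24,000
```

Under these (entirely assumed) parameters, the surveywall outearns cheap display ads; the comparison flips quickly if the answer rate or the per-answer share drops.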

A question crosses my mind: how come Facebook didn’t invent the surveywall?





Why newspapers must raise their price

newspapers | September 2, 2012

For quite a while, I’ve been advocating a newspaper price hike. My take: the news market is undergoing an irreversible split. On one side, digital distribution (on the web, mobile and tablets) will thrive through higher volumes and deeper penetration; revenue is not easy to squeeze out of digital subscribers and advertisers but, as some consolation, serving one million or ten million customers costs about the same.

On the other side, print is built on a different equation: gaining audience is costly, as every additional reader comes with tangible industrial costs (printing, shipping, home delivery). Having said that, each print reader carries a much better ARPU than his or her online counterpart (bet on a 5 to 15 times higher yield, depending on the product). And, for a long time, there will be a significant share of the audience that favors the print version, (almost) regardless of price. Those are super-loyal and super-solvent readers.

Last week, my little solo tune about price hikes received independent support from people much better equipped to define prices and value. Global marketing consultants Simon-Kucher & Partners released conclusions from an in-depth study of newspaper price evolution and its impact on circulation (PDF summary here). The headline: “Calling all newspapers: A premium model is your best hope”, which the authors, Andre Weber and Kyle Poyar, sum up thusly:

Newspapers are in an unenviable, but not uncommon position: raising print prices may shrink their already anemic readership base, but may also be their best hope for staying afloat.

Headquartered in Germany, with 23 branches across the world, Simon-Kucher specializes in marketing, sales and pricing strategies. They rely on thorough analysis and models to help their clients value a wide range of products and services. For this study, they surveyed the 25 largest US newspapers (ABC’s listing here). Before that, they’d worked on quality newspapers in the UK. Their findings:

— When technological disruption causes an irrevocable market decline, “it’s almost prudent to raise prices”. To support their claim, SKP mentions AOL which, at a critical point in its existence, raised its rates and generated large amounts of cash. This helped the online service finance major shifts in its business. By contrast, Kodak continuously lowered the price of its film products, found itself unable to invest in a digital turnaround and finally went bankrupt.

— Newspaper prices show little elasticity. In other words, a significant price hike won’t necessarily translate into a material drop in circulation. But the extra money raised in the process will provide welcome help for investments in digital technologies.

— Raising prices discourages price wars. Many sectors are engaged in a downward spiral that doesn’t always translate into higher volume, but guarantees weaker revenues.

They conclude:

The print business is not your legacy, it’s your bank.

For publishing companies with struggling print divisions, SKP’s mantra might appear a bit overstated, but it still contains valuable truths.

Let’s come back to the price elasticity issue. It’s an endless debate within publishing houses. The fact is, there is very little. For the US market, here are the effects of specific price hikes on circulation revenues:


In an earlier UK market study, SKP looked at the consequences of price increases between 2007 and 2010 for these quality papers:

                Price        Variation in     Variation in 
                Increase     Circ. volume     Circ. revenue
 The Times        +54%         -24%             +16.7%
 The Guardian     +43%         -19%             +15.8%
 The Independent  +43%         -21%             +13%
 The Telegraph    +43%         -25%             +7%
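The revenue column follows directly from compounding the price increase with the circulation drop. A quick check (small discrepancies with the table come from rounding of the published percentages):

```python
# Revenue effect of a price hike partially offset by a circulation drop:
# revenue_change = (1 + price_change) * (1 + volume_change) - 1

def revenue_change(price_change, volume_change):
    """Both inputs are fractions, e.g. +43% -> 0.43, -19% -> -0.19."""
    return (1 + price_change) * (1 + volume_change) - 1

uk_papers = {
    "The Times":       (0.54, -0.24),
    "The Guardian":    (0.43, -0.19),
    "The Independent": (0.43, -0.21),
    "The Telegraph":   (0.43, -0.25),
}
for name, (p, v) in uk_papers.items():
    print(f"{name:16s} {revenue_change(p, v):+.1%}")
```

The Guardian line, for instance, works out to 1.43 × 0.81 − 1 ≈ +15.8%, matching the table.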

When I spoke with Andre Weber and Kyle Poyar, the authors of the study, they were reluctant to evaluate which part of the circulation drop was attributable to the natural erosion of print, and which part was linked to the price hike. Also, they were careful not to venture into the consequences of the drop in circulation on advertising (as ad rates are tied to circulation).

However, they didn’t dispute that the bulk of the drop in circulation was linked to the erosion of print caused by the shift to digital. If there is any remaining doubt, watch this chart compiled by the Pew Research Center:

With the left scale showing the percentage drop (!), the plunge is obvious, even though a change in the counting system by the Audit Bureau of Circulation embellishes the situation a bit.

The price equation for print newspapers can be summed up as follows:

#1 Price hikes — both for street price and subscriptions — only marginally impact a circulation already devastated by the conversion to digital.

#2 Additional revenue coming from price hikes far outpaces the loss in circulation (which will occur anyway). Ten or twenty years ago, US newspapers drew most of their revenue (70%, sometimes 80%) from advertising. Now the revenue structure is more balanced. The NY Times, for instance, is evolving toward an evenly split revenue structure, as shown in its Q2 2012 financial statement:

#3 There is room for further price increases. When asked about the threshold that could trigger a serious loss in readership, Andre Weber and Kyle Poyar opine that the least loyal customers are already gone, and that we have not yet reached the critical threshold that will discourage the remaining base of loyal readers.

#4 Advertising is indeed an issue but, again, its decline will occur regardless of circulation strategies. The main reason (other than difficult economic conditions): advertising expenditure on print will inevitably adjust downward to match the time readers actually spend with print.

(source: Mary Meeker’s State of the Internet, KPCB)

#5 High prices on print versions will help maintain decent prices for digital paid-for content, through subscriptions, paywalls, etc. As Weber and Poyar point out, for a publisher, the quality of print and digital products must remain connected; the two must work together (even though digital subscriptions will always be priced substantially lower than print).

#6 When it comes to pricing strategies, quality rules the game. Simon-Kucher’s conclusions apply to high-end products. The New York Times, The Guardian, or The Sydney Morning Herald won’t have problems raising their prices by substantial amounts. But for tabloids or low-end regional papers filled with cheap content and newswire fodder, it’ll be another story.

#7 Pricing issues can’t be insulated from distribution. In many countries, publishers of national dailies should consider refocusing their distribution map down to major cities only. The move would save shipping costs without too much of an impact on the advertising side, as the solvent readership — the one dearly loved by advertisers — is mostly urban.



Why Murdoch’s The Daily Doesn’t Fly

online publishing By July 16, 2012 Tags: 22 Comments

Is there a future for The Daily? According to last week’s reports by The New York Observer and The New York Times, News Corp’s “tablet newspaper” is on probation: Rupert Murdoch might pull the plug on The Daily, which loses $30 million a year. But, in an open email to the publication’s staff, Jesse Angelo, its editor-in-chief, was quick to deny such rumors.

Eighteen months ago, The Daily was held up as embodying the news media’s future. It was the first publication designed for the iPad, it bore the blessing of Steve Jobs himself (quite notable for someone who usually loathed the news sector), and it had the backing of the deep-pocketed News Corporation conglomerate. The project’s success would be measured over time (five years), supported by a considerable amount of funding. It had all it needed to be a success.

Fact is, The Daily never took off. Six months after its high-wattage launch, it claimed only 80,000 paying subscribers. Today, Jesse Angelo mentions a mere 100,000 subs. That is both far from the 500,000 necessary to break even and totally out of step with the growth of the iPad (and iPhone, and Android) installed base.

Something’s wrong with The Daily’s concept.

I subscribed. Twice, actually. At 99 cents a week ($39.99 a year), it was supposed to be a painless addition to my vast set of digital subscriptions. Strangely, it never succeeded in becoming part of my reading habits.

For The Daily, this might be its first problem: it is everything and nothing special at the same time. It’s not a tabloid, but it doesn’t carry in-depth, enterprise journalism either. It’s a sophisticated container for commodity news — i.e. the news you can get everywhere, in real time and for free. If I crave celebrity fodder, I go to TMZ or the Huffington Post. If I want business news, I’ll find everything on CNN Money or Business Insider, all very efficiently and appealingly edited. No need to go through the tedious download of a 100-plus-page issue.

The Daily’s inherent competition with the web (and mobile) was completely underestimated. Real time is now mandatory; so is the ability to generate conversations. For The Daily, a comparison of last weekend’s news cycle is cruel: its coverage of the Mitt Romney tax return controversy triggered 179 comments on The Daily vs. 28,464 on the Huffington Post. (Note that the HuffPo built its piece on a 150-word Associated Press story and a one-minute video segment from CNN — that’s the digital version of the multiplication of the loaves…)

The Daily is an old concept in a high-tech package. Some draw a parallel with USA Today, launched in 1982. Two things made that paper a success: its positioning as the first truly national newspaper in the United States, and its innovative format and layout; USA Today was designed for quick reads and explanatory journalism enhanced by graphics. That uniqueness was key to installing the paper in America’s news landscape.

By contrast, The Daily does not enjoy the specificity of a “visually attractive convenience”. Its sophistication and its cognitive profusion lead to an excess of complexity that ends up leveling its content off. The Daily has no rough edges, nothing to hold the reader’s attention in a recurring manner. A medium is driven by its affinity with its audience — an intellectual, cultural or political affinity, or a combination of all three. The Daily lacks such engines. Even its Murdoch-induced anti-Obama stance fails to keep readers coming back.

Another key question for The Daily’s future pertains to its business model. On average, according to the Newspaper Association of America, 70% of the revenue of US dailies comes from advertising, and 14% of that ad revenue comes from digital. By that standard, The Daily should have been loaded with ads. There are almost none, including in the new “WKND” edition. This is worrisome: many papers draw a third or half of their revenue from their weekend editions. The per-copy price won’t help either. At 99 cents a week, it’s pocket change. After Apple’s 30% cut, this leaves less than 10 cents per issue. Even with a streamlined newsroom of 150, it can’t fly. (For more about The Daily’s economics, read Ken Doctor’s January 2011 Newsonomics article, or last week’s piece by Staci Kramer in PaidContent; both explain the issue pretty well.)
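The per-issue arithmetic behind that "less than 10 cents" figure is straightforward (assuming seven issues per week):

```python
# The Daily's net revenue per issue after the App Store commission.
weekly_price = 0.99      # dollars per weekly subscription
apple_cut = 0.30         # Apple's 30% commission
issues_per_week = 7

net_per_issue = weekly_price * (1 - apple_cut) / issues_per_week
print(f"net per issue: ${net_per_issue:.3f}")
```

That works out to roughly 9.9 cents per issue, before a single dollar of editorial cost is covered.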

The Daily also illustrates the difficulty of building a digital media brand. Many tried, few succeeded. Slate and Salon, excellent as they are journalistically, never took off audience-wise. The Huffington Post made it through a unique combination of unscrupulous “aggrelooting” of content from a variety of willing and unwilling sources, legions of unpaid bloggers, Arianna’s media footprint, and unparalleled audience-building techniques (see the previous Monday Note: Transfer of Value). A combination that proved very complicated to reproduce — even for someone as resourceful as Rupert Murdoch.

The Australian-born media mogul thought he could launch a new breed of news product from scratch. But in his quest for bold digital efficiency, he failed to see that a news product with no history, no breadth, no soul, no character could only face an uncertain future.


Transfer of Value

journalism, online publishing By July 8, 2012 Tags: 88 Comments

This is a story of pride vs. geekiness: Traditional newspapers that move online are about to lose the war against pure players and aggregators. Armed with the conviction that their intellectual superiority makes them immune to digital modernity, newspapers neglected today’s internet driving forces: relying on technology to build audiences, and the ability to coalesce a community around any range of subjects — even the most mundane ones.

When I discuss this with seasoned newsroom people on both sides of the Atlantic, most still firmly believe the quality of their work guarantees their survival against a techno-centric approach to digital contents.

I’m afraid they are wrong. Lethally so.

We are facing a culture shock. On one side, legacy media: great franchises that grew on strong values, such as “pristine” journalism, independence, storytelling, fact-checking, solid editing, respect for copyright… Along the way, they made their share of mistakes but, overall, the result is great. After all, at the height of the Fourth Estate’s power, the population was better informed than today’s Facebook cherry-pickers. Now, this (aging) fraternity faces a new generation of media people who build their fiefdoms on a completely different set of values. For instance, the notion of copyright has become exceedingly elastic. A few months ago, Flipboard began to aggregate content from French news organizations, taking large excerpts — roughly capturing the essence of a story — along with a token link back to the original content. Publishers sent polite letters saying, in substance: ‘Guys, although we are fond of your iOS applications, you can’t simply pick up our stuff without permission; we need to talk first…’

Publishers’ attitude toward aggregators has always been ambiguous. Google is the perfect example: on one hand, publishers complain about the search giant’s power; on the other, they spend huge sums of money optimizing their sites and purchasing relevant keywords, all to make the best use of the very power they criticize. In Belgium, publishers challenged Google in court over the Google News product before realizing how much they depended on it, and begged for reintegration into the Google traffic cauldron.

Another example of the culture shock: reliance on technology. It’s a religion for the newcomers but merely a support function for traditional editors. Unfortunately, evidence shows how wrong it is to snub the traffic-building arsenal. Here are a few examples.

On July 5th, The Wall Street Journal ran an editorial piece about Mitt Romney’s position on Obamacare.

The rather dull and generic “Romney’s Tax Confusion” title for this 1,000-word article attracted a remarkable 938 comments.

But look at what the Huffington Post did: a 500-word treatment including a 300-word article, plus a 200-word excerpt of the WSJ opinion and a (completely useless) link back. But, unlike the Journal, the HuffPo ran a much sexier headline:

A choice of words that takes into account all Search Engine Optimization (SEO) prerequisites, using high-yield words such as “Squandering” and “Snafu” in conjunction with much-sought-after topics such as “Romney” and “Health Care”. Altogether, this guarantees a nice blip on Google’s radar — and a considerable audience: 7,000+ comments (7x more than the original), 600 Facebook shares, etc.

HuffPo’s editors took no chances: the headline they picked is algorithm-designed to yield the best results in Google. The aggregator invested a lot in SEO tools: I was told that every headline is matched in real time against Google’s most-searched terms right before being posted. If the editor’s choice scores low in SEO, the system suggests better terms. In some instances the HuffPo will A/B test headlines: it will serve different versions of a page to a couple of random groups and, after five minutes, the best headline will be selected. Here are explanations by Whitney Snyder, HuffPost’s senior news editor, found on Quora:

The A/B testing was custom built. We do not, however, A/B test every headline. We often use it to see if our readers are familiar with a person’s name (i.e. John Barrasso vs GOP Senator), or to play up two different aspects of a story and see which one interests readers more. We also A/B test different images.
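As a toy illustration of the mechanism Snyder describes — serve each headline variant to a random group for a short window, then keep the one with the higher click-through rate — here is a simulated version. The headline variants echo Snyder’s own example, but the click-through rates and impression counts are invented, and a real system would weigh many more signals:

```python
import random

def pick_headline(variants, true_ctr, impressions_per_variant=1000):
    """Simulate an A/B test window and return the winning headline.
    `true_ctr` maps each variant to a (normally unknown) click rate;
    simulated clicks stand in for real visitor traffic."""
    clicks = {}
    for v in variants:
        # Count simulated clicks among the variant's visitor group.
        clicks[v] = sum(random.random() < true_ctr[v]
                        for _ in range(impressions_per_variant))
    return max(variants, key=lambda v: clicks[v] / impressions_per_variant)

variants = ["John Barrasso Criticizes Bill", "GOP Senator Criticizes Bill"]
true_ctr = {variants[0]: 0.01, variants[1]: 0.08}   # hypothetical rates

winner = pick_headline(variants, true_ctr)
print("winner:", winner)
```

With a large enough gap in click rates and a thousand impressions per group, the test reliably surfaces the stronger headline after a single window — which is why five minutes of HuffPo-scale traffic is enough to decide.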

Other examples prove the effectiveness of HuffPo’s approach. Here is a media story about a TV host whose position is in jeopardy; the Daily News version is a 500-word article that looks like this:

The Huffington Post summed it up in 175 words, but introduced it with a much more potent headline built on strong, Google-friendly locutions:

Results speak for themselves:

Daily News original version: 2 comments, 1 tweet, 1 Facebook share
HuffingtonPost version: 4,601 comments, 79 tweets, 155 shares.

Like no one else, the HuffPo masters eye-grabbing headlines such as these:
Watch Out Swimmers: Testicle-Eating Fish Species Caught in US Lake (4,000 Facebook recommendations), or: Akron Restaurant Owner Dies After Serving Breakfast To Obama (3,300 comments), or yesterday’s home page: LEPAGE LOSES IT: IRS ‘THE NEW GESTAPO’, displayed in an 80-point font; this adaptation of a story from Maine’s daily Press Herald generated about 6,000 comments on the aggregator.

The point is not to criticize the Huffington Post for being extremely efficient at optimizing its work. They invested a lot; they trained their people well. Of course, the bulk of HuffPo’s content comes from: a) unpaid bloggers — 9,884 new ones last year alone, according to Arianna’s count; b) content borrowed from other media and re-engineered by 170 journalists, a term that encompasses various kinds of news producers and a bunch of true writers and editors; c) a small percentage of original reporting. Each day, all this adds up to “over 1,000 stories published” that translate into 1.4 million Facebook referrals and 250,000 comments. Staggering numbers indeed. With some downsides, too: 16,000 comments (!) on a 200-word article about Barack Obama asking to turn off Fox News during a campaign tour is not likely to attract enviable demographics advertising-wise. The HuffPo might make a billion page views per month, but most of them only yield dimes.

The essence of what we’re seeing here is a transfer of value. Original stories get very little traffic due to the poor marketing tactics of old-fashioned publishers. But once it is swallowed by the HuffPo’s clever traffic-generation machine, the same journalistic item will do tens or hundreds of times better traffic-wise. Who is right? Who can look forward to a better future in the digital world? The virtuous author carving language-smart headlines, or the aggregator generating eye-gobbling phrases thanks to high-tech tools? Your guess. Maybe it’s time to wake up.


Lessons from ProPublica

journalism By July 1, 2012 Tags: 3 Comments

Paul Steiger is one of the men I admire the most in my profession. Five years ago, at the age of 65, and after a 16-year tenure as the Wall Street Journal’s managing editor, he seized the opportunity to create a new form of investigative journalism. Steiger created ProPublica, a non-profit newsroom dedicated to the public interest and to deep-dive reporting. He hired a bunch of young staffers (coached by seasoned editors and reporters) who could help him lift data journalism and computer-assisted reporting to the highest level. Thanks to wisely managed partnerships, he gave ProPublica a wide audience, and the quality and breadth of his newsroom’s reporting landed it scores of awards, including two Pulitzer Prizes. ProPublica was the first online news organization to receive such a seal of approval.

All this in five years, with a newsroom that now counts 33 journalists. Kudos.

Last Wednesday, at the end of a quick hop to New York, I paid Paul Steiger a visit. His corner office sits on the 23rd floor of a Broadway building, overlooking Wall Street’s canyons and Manhattan’s southern tip. At 70, Steiger has a twinkle in his eye that you don’t often find in reporters half his age, especially when he speaks about ProPublica’s most shining journalistic moments.

In late 2006, the Sandler Foundation approached Steiger with a wish to allocate a fraction of its immense wealth to the funding of investigative reporting. The newsman made four recommendations:

— The first one was to rely on a permanent staff as opposed to hired guns. “To do the kind of journalism we wanted to do, you must have people comfortable enough to stay on the story as long as needed. You also must accept dry holes. Freelancers will starve in such conditions!”

— Two, for the biggest stories, he wanted to partner with one or two large news organizations that could be granted some exclusivity over a short period of time in exchange for good visibility.

— Three, in order to guarantee the widest reach, Paul Steiger wanted to distribute the material freely on the web.

— Four, he would be solely responsible for content; funders or other contributors would not be involved in selecting stories. (Actually, at ProPublica’s first board meeting, none of the financial backers knew what the newsroom was working on.)

The partnership proved to be a great idea and expanded much farther than anticipated. It relied quite a lot on Paul Steiger’s and Stephen Engelberg’s personal connections (Engelberg is ProPublica’s editor-in-chief). Quite often, Steiger explained, once a story neared completion, he’d place a call directly to a key editor at a major publication. “Sometimes, I could feel the excitement over the phone”, he laughs. He had to be very careful not to say too much before hammering out the deal. I asked him how he handles confidential sources: “Well, we do not mind giving sources’ names to editors and lawyers, but less so to beat reporters… You know, reporters are human, and they might be tempted to call the sources themselves…”

Cooperation with other media turned out to breed an unexpected advantage: transforming good national stories into local ones. The best example is the Dollars for Docs project. In a nutshell: a sizable portion of pharmaceutical firms operating in the United States are now required to reveal all direct contributions to doctors. (It’ll be 100% next year.) Needless to say, they complied reluctantly, providing a sloppy, almost useless database. As a result, the two reporters assigned to the subject were at a loss when it came to retrieving relevant data. Then a young ProPublica in-house data specialist joined the team and solved the puzzle in a few weeks. The story was published by ProPublica’s five partners: The Chicago Tribune, The Boston Globe, PBS, NPR and Consumer Reports. Why Consumer Reports? “Because they had polling capabilities”, Steiger said. “Pharmaceutical companies were saying patients didn’t mind if doctors got fees from them; we proved patients actually care…” After the key partners’ few days of exclusivity, the database was released on the web on October 19, 2010. In an easily searchable way, it showed the status of 17,000 doctors receiving a total of $750 million. A small stimulus to keep the flow of prescriptions smooth and steady — and to contribute to the indecent cost of healthcare in America.

Then the local mechanics kicked in. In the months that followed, no fewer than 125 local outlets picked up the story, extracting relevant local information from the database and adding context. That’s one of the most interesting aspects of ProPublica’s work: its ability to make a national interest story percolate down to local news organizations which, in turn, give the story more depth by connecting it to their communities. (ProPublica now has 78 partners.)

I asked Paul Steiger if he believes this model could be translated into a classic business. After all, why not gather half a dozen non-competing news outlets, happy to split the price of large journalistic projects — each costing from $100,000 to $200,000 to produce — in addition to a small fee from local news outlets? Paul Steiger says it cannot be made to work. To sum it up, by asking a newspaper or a TV network to pay, ProPublica would directly compete with its clients’ internal economics. Inevitably, someone would say: hey, last year we paid x thousand dollars in fees for ProPublica’s stories; that’s the equivalent of y staffers. Not to mention the state of the news industry with, in fact, very few companies willing (and able) to pay extra editorial costs. The consequence would be a downward spiral: deprived of the vast audience it now enjoys, the organization would have a hard time attracting clients for its content, nor would it be able to attract donations. Fact is, such syndication doesn’t work: California Watch, which operates on the same beat as ProPublica, burns more than $2 million a year but collects less than… $30,000 in syndication fees.

That’s why ProPublica plans to stick to its original structure. Next year, Paul Steiger will step down as ProPublica’s editor-in-chief and chief executive; he’ll become executive chairman, a position in which he will spend most of his time raising money for the organization. As it stands today, ProPublica is on a sound path. The first two years of operation were solely funded by the Sandler family, at about $10 million a year. This year, their contribution will be down to $4 million, with $6 million coming from other sources. In 2013, the breakdown will be $3 million and $7 million. Not only did ProPublica put itself at the forefront of public interest, high quality, digitally boosted, modern journalism, but it also created a sustainable way to support it.


The Insidious Power of Brand Content

advertising By June 25, 2012 Tags: 61 Comments

Dassault Systemes is one of the French industry’s greatest successes. Every day, unbeknownst to most of us, we use products designed with DS software: cars, gadgets, buildings and even clothes. This €2bn company provides all the necessary tools for what has become known as Product Lifecycle Management: starting from the initial design, moving to the software that runs the manufacturing process, then to distribution logistics and, at the end of a product’s life, to its disposal.

Hence a simple question: What could be the axis of communication for such a company? The performance of its latest release of CAD software? Its simulation capabilities?

No. Dassault Systemes opted to communicate on a science-fiction, iceberg-related project. The pitch: a French engineer — the old-fashioned type, a dreamer who barely speaks English — envisions capturing an iceberg from a Greenland glacier and towing it down to the thirsty Canary Islands. The DS mission (should it choose to really accept it): devise all the relevant techniques for the job, minimize melting, maximize fuel efficiency. The result is a remarkable and quite entertaining documentary, a 56-minute high-tech festival of solutions to this daunting task’s numerous challenges. I watched it in HD on my iPad, in exchange for my email address (the one I dedicate to marketers). It’s a huge, multimillion-dollar video production, with scores of helicopter shots, superb views of Greenland and, of course, spectacular 3D imaging, the core DS business. The budget is so high and the project so ambitious that the documentary was co-produced by several large European TV channels such as France Televisions and Germany’s ZDF. Quite frankly, it fits the standards of public TV — for such a genre.

But this is neither journalism nor National Geographic film-making. It’s a Brand Content operation.

In advertising, Brand Content is the new black. You can’t bump into an ad exec without hearing about it. It’s the new grail, the replacement for the other formats that failed and the latest hope for an ailing industry. But there are side effects.

Let’s have a closer look.

1/ What defines Brand Content as opposed to traditional advertising?

In a good BC product, the brand can be almost absent. It’s the content that’s front and center. In France, advertisers often quote a series made by the French bank BNP-Paribas titled “Mes Colocs” (My Roommates). The title says it all. Launched two years ago, it featured 20 short episodes, later supplemented by… 30 bonus ones, all broadcast on YouTube and DailyMotion. Mes Colocs became such a success that two cable TV channels picked it up. The brand name does not appear — except in the opening credits. But, far from being a philanthropic operation, its performance was carefully monitored. BNP-Paribas’ goal was obvious: raising its awareness among young people. And it seems to have worked: the operation translated into a 1.6% increase in account openings and a 6.5% rise in the number of loans granted to young adults (details in this promotional parody produced by the agency).

This dissociation between brand and content is essential. A historic French brand has rightly been celebrated for doing brand content decades before the term was coined: Michelin, whose eponymous guides provided a genuine service without promoting its tires (read Jean-Louis’ Monday Note Why Apple Should Follow Michelin).

The following opposition can be drawn between traditional advertising and content-based messages:

2/ Why the hype?

First, media are increasingly fragmented. Advertisers and marketers have a hard time targeting the right audience. BC is a good way to let the audience build itself — for instance through virality. It is much more subtle than relying on the heavily (and easily) corrupted blogosphere.

Second, most digital formats are faltering. Display advertising is spiraling down due to well-known factors: unlimited inventories, poor creativity, excessive discounts, bulk purchasing, cannibalization by value-killing ad networks, etc. Behavioral targeting is technically spectacular, but people get irritated by invasive tracking techniques (see my previous take: Pro (Advertising) Choice.)

Third, marketers have matured. The caricatural advertorial grossly extolling a product is long gone. Today’s content is much smarter: it provides information (real or a respectable imitation) and good entertainment. Everything is increasingly well-crafted. Why? Because — and that is reason #4 for growth in BC — there is a lot of available talent out there. As news media shrink, advertising agencies find an abundance of writers, producers and film-makers, all eager to work for much more money than they could hope to get in their former jobs. Coming in with a fresh mindset, not (yet) brainwashed by marketing, they will do their job professionally, accepting “minor” constraints in exchange for great working conditions — no penny-pinching when you do a web series for a global brand.

Fifth, compared to traditional advertising messages, Brand Content is cheap. As an example, see the making of a recent and highly conceptual Air France commercial shot in Morocco; the cost ran into seven figures. Now imagine how many brand content products could be made with the same investment. Brand content allows an advertiser to place multiple bets at the same time.

3/ The risks. (Here comes the newsman’s point of view)

Brand content is the advertiser’s dream come true. The downfall of the print press has opened the floodgates: publishers have become less and less scrupulous about blurring the line between editorial and promotion — which is precisely what ad agencies always shoot for. Most women’s magazines, the luxury press, and now mainstream glossies allocate between 30% and 70% of their pages to such “tainted” editorial: nice “journalistic” treatment in exchange for favors on the advertising side. I’m not blaming publishers who do their best to save their business; I’m just stating the facts.

The consequence is obvious: readers are not informed as they should be about products. Less and less so. (Although islands of integrity like Consumer Reports remain.) That is not good for the print media, as it feeds the public’s distrust. While many publications lose what’s left of their credibility by being too cosy with their advertisers, brands are becoming increasingly savvy at producing quality content that mimics traditional editorial. As brands become full-blown media, the public will get confused. Sooner or later, it will be difficult to distinguish between a genuine, editorially-driven prime-time TV show and one sponsored by an advertiser. Call it ever-shrinking journalism.


Off The eBook Shelf

online publishing By June 18, 2012 Tags: 32 Comments

Readers are voting with their wallets: the eBook is winning. In the US, eBook sales are topping hardcovers for the first time (story in TechCrunch). Not everywhere, of course. According to the Bowker Global eBook Research, the global market for eBooks is driven — in that order — by India, Australia, the UK and the United States. The laggards are Japan and (no surprise) France. The chart below shows the percentage of the internet population reporting the purchase of a digital book in the six months prior to the survey.

Interestingly, for most population samples, the level of purchases is not correlated with awareness. France enjoys the highest level of awareness, but its internet population buys one fifth as many eBooks as India’s. Once an Indian internet user finds an attractive digital book offer, he or she will most likely jump on it. This could lead to the following: in emerging countries, the cellular phone became the main communication tool by leapfrogging the deployment of land lines; similarly, eBooks could bypass print in countries like India, where a large segment of the population is becoming both literate and connected at a fast pace. (Indeed, Bowker also reports that over 50% of respondents in India and Brazil are likely to buy an eBook in the next six months, ten times more than in France.)

If the rise of the eBook happily broadens access to knowledge in emerging countries, the picture is more nuanced in countries with a long history and high penetration of printed books.

For instance, let’s have a look at ISBN registration data for the United States. The chart below, drawn again from Bowker (full PDF table here), shows a steady progression:

Between 2002 and 2011, in the US market, ISBN registrations grew by 61% to reach 347,178 new titles. (A technical note: I’m only taking into account books that fall into an identified category, such as Arts, Biography, Business, etc. I’m excluding the huge segment labeled as nontraditional, which includes reprints, public domain titles, and titles printed on demand; that segment grew by over 3,500% to 1.2 million registrations, which would distort the picture.)

We clearly see the impact of mainstream e-readers such as the Kindle and the iPad. Without any doubt, they contributed to the growth in registrations. (Unfortunately, ISBN counts do not provide a breakdown between print and digital.) Over the last nine years, some book publishing segments fared better than others. See the chart below:

Fiction is doing twice as well as all other categories combined. The digital book is the medium of choice for fiction: a) eBooks are priced cheaper than print, and price elasticity is now a proven fact: the cheaper a book is, the more likely a reader is to try it; b) e-commerce breeds impulse buying (cf. the famous “One-Click®” feature); c) readers can sample the product more efficiently than in the printed world, as Amazon and the iBooks Store make large excerpts available for free. No surprise, then, to see the Fiction category holding up well.

No surprise either in seeing the three worst performers as prime victims of the digital era. History books have to compete with the vast trove of material available on the web; call it the Encyclopaedia Britannica syndrome: out of print after 244 years of duty, dethroned by the 11-year-old Wikipedia. Like it or not, most history book publishers will suffer the same fate.

Similarly, Travel and Computer books are in direct competition with mostly free online services. Who will buy a “how-to” computer book today? There are plenty of video tutorials explaining how to replace a hard drive or how to wrestle with Photoshop. And let’s not even mention the Travel segment, with its tons of guides, reviews, price comparison tools and transaction services. As for the language section of the bookstore, a simple Google query can help with spelling, translation and grammar… Even the venerable Roget’s Thesaurus is online, and rather efficiently so. I’ll venture that French Canadians did Roget one better: a company called Druide publishes Antidote, a suite of applications for PCs, tablets and smartphones. It’s an unusually clever combination of dictionary, thesaurus, quotations, etymology and more. I wondered for a while about the name Antidote, until I realized the Québécois saw the product as an antidote to… English. An old struggle.

The main eBook casualty is likely to be bookstores. In New York in the Fifties, about 330 bookstores were in business; they are now down to 30 or fewer, laments André Schiffrin, former head of Pantheon Books, in his recent book Words & Money. Countries like France and Germany have laws that protect independent bookstores: from Bordeaux to Berlin, citizens are thankful for warmer and more relevant recommendations than the algorithm-based suggestions served by Amazon. But how long will it last?