
Why Google Will Crush Nielsen

advertising, online publishing · May 19, 2013


Internet measurement techniques need a complete overhaul. New ways have emerged, potentially displacing older panel-based technologies. This will make it hard for incumbent players to stay in the game.

The web user is the most watched consumer ever. For tracking purposes, every large site drops literally dozens of cookies in the visitor's browser. In the most comprehensive investigation on the matter, The Wall Street Journal found that each of the 50 largest web sites in the United States, together accounting for about 40% of US page views, installed an average of 64 files on a user's device. (See the WSJ's What They Know series and a Monday Note about tracking issues.) As for server logs, they record every page sent to the user, and they tell with great accuracy which parts of a page collect most of the reader's attention.

But when it comes to measuring a digital viewer's commercial value, sites rely on old-fashioned panels, that is, limited samples of the user population. Why?

Panels are inherited. They go back to the old days of broadcast radio when, in order to better sell advertising, dominant networks wanted to know which station listeners tuned in to during the day. In the late thirties, the Nielsen Company made a clever decision: it installed a monitoring box in 1,000 American homes. Twenty years later, Nielsen did the same, on a much larger scale, with broadcast television. The advertising world was happy to be fed plenty of data — mostly unchallenged, as Nielsen dominated the field. (For a detailed history, you can read Rating the Audience, written by two Australian media academics.) As Nielsen expanded to other media (music, film, books and all sorts of polls), moving to internet measurement sounded like a logical step. As of today, Nielsen only faces smaller competitors such as ComScore.

I have yet to meet a publisher who is happy with this situation. Fearing retribution, very few people talk openly about it (twisting the dials is so easy, you know…), but they all complain about inaccurate, unreliable data. In addition, the panel system is vulnerable to cheating on a massive scale. Smarty-pants outfits sell a vast array of measurement boosters, from fake users that come in just once a month to be counted as "unique" (they are, indeed), to more sophisticated tactics such as undetectable "pop under" sites that rely on encrypted URLs to deceive the vigilance of panel operators. In France, for instance, 20% to 30% of some audiences can be bogus — or largely inflated. To its credit, Mediametrie — the French Nielsen affiliate that produces the most watched measurements — is expending vast resources to counter the cheating and to make the whole model more reliable. It works, but progress is slow. In August 2012, Mediametrie Net Ratings (MNR) launched a Hybrid Measure taking site-centric analytics (server logs) into account to rectify panel numbers, but those corrections are still erratic. And it takes more than a month to get the data, which is not acceptable for the real-time-obsessed internet.

Publishers monitor the pulse of their digital properties on a permanent basis. In most newsrooms, Chartbeat (also imperfect, sometimes) displays the performance of every piece of content, and home pages get adjusted accordingly. More broadly, site-centric measures detail all possible metrics: page views, time spent, hourly peaks, engagement levels. This is based on server logs tracking dedicated tags inserted in each served page. But the site-centric measure is also flawed: If you use, say, four different devices — a smartphone, a PC at home, another at work, and a tablet — you will be incorrectly counted as four different users. And if you use several browsers you could be counted even more times. This inherent site-centric flaw is the best argument for panel vendors.
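To make that flaw concrete, here is a minimal sketch with invented log entries; the person column is exactly what a real server log does not contain, which is the whole problem:

```python
# Site-centric "unique users" are really unique cookies: one person
# browsing from four devices shows up as four visitors. The "person"
# field below is for illustration only; real logs never have it.
server_log = [
    {"cookie_id": "a1", "device": "smartphone", "person": "alice"},
    {"cookie_id": "b2", "device": "home PC",    "person": "alice"},
    {"cookie_id": "c3", "device": "work PC",    "person": "alice"},
    {"cookie_id": "d4", "device": "tablet",     "person": "alice"},
]

unique_cookies = {hit["cookie_id"] for hit in server_log}
unique_people = {hit["person"] for hit in server_log}
print(len(unique_cookies), "visitors reported")  # 4
print(len(unique_people), "actual reader")       # 1
```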

But, in the era of Big Data and user profiling, panels no longer have the upper hand.

The developing field of statistical pairing technology shows great promise. It is now possible to pinpoint a single user browsing the web with different devices in a very reliable manner. Say you use the four devices mentioned earlier: a tablet in the morning and the evening; a smartphone for occasional updates on the move; and two PCs (a desktop at the office and a laptop elsewhere). Each time you visit a new site, an audience analytics company drops a cookie that records every move on every site, from each of your devices. Chances are your browsing patterns will be stable (basically your favorite media diet, plus or minus some services better suited to a mobile device). Not only is your browsing profile determined from your navigation on a given site, it is also quite easy to know which sites you visited before the one currently being monitored, adding further precision to the measurement.

Over time, your digital fingerprint becomes more and more precise. Until then, the four cookies remain independent of each other. But the analytics firm compiles all the patterns in a single place. By data-mining them, analysts can determine the probability that a cookie dropped in a mobile application, a desktop browser or a mobile web site belongs to the same individual. That's how multiple pairing works. (For more details on the technical and mathematical side of it, you can read this paper by the founder of Drawbridge Inc.) I recently discussed these techniques with several engineers, both in France and in the United States. All were quite confident that such fingerprinting is doable and that it could be the best way to accurately measure internet usage across different platforms.
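For the technically curious, here is a toy sketch of the pairing idea (not Drawbridge's actual method): summarize each cookie's "media diet" as site-visit counts, then pair cookies whose diets look alike. All numbers are invented:

```python
from math import sqrt

# Hypothetical per-cookie browsing profiles: site -> visit count.
cookie_profiles = {
    "tablet_cookie": {"": 30, "": 12, "": 25},
    "office_cookie": {"": 28, "": 10, "": 2},
    "random_cookie": {"": 40, "": 15},
}

def cosine(p, q):
    """Similarity between two media diets (1.0 = identical pattern)."""
    sites = set(p) | set(q)
    dot = sum(p.get(s, 0) * q.get(s, 0) for s in sites)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

PAIRING_THRESHOLD = 0.8  # illustrative cut-off, not a published figure
same_person = cosine(cookie_profiles["tablet_cookie"],
                     cookie_profiles["office_cookie"]) > PAIRING_THRESHOLD
print(same_person)  # True: these two cookies probably belong to one reader
```

Real systems add time-of-day patterns, location signals and much more, but the principle is the same: similar diets, same probable individual.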

Obviously, Google is best positioned to perform this task on a large scale. First, its Google Analytics tool is deployed on over 100 million web sites. And Google Ad Planner, even in its public version, already offers a precise view of the performance of many sites around the world. In addition, as one of the engineers pointed out, Google already performs such pairing simply to avoid showing the same ad twice to someone using several devices. Google is also most likely doing such pairing in order to feed the obscure "quality index" it algorithmically assigns to each site. It even does such pairing on a nominative basis, using its half billion Gmail accounts (425 million in June 2012) and its connected Chrome users. As for giving up another piece of internet knowledge to Google, it doesn't sound like a big deal to me. The search giant already knows much more about sites than most publishers do about their own properties. The only thing that could prevent Google from entering the market of public web rankings would be the prospect of another privacy outcry. But I don't see why it wouldn't jump on it — eventually. When that happens, Nielsen will be in big trouble.


Google News: The Secret Sauce

online publishing · February 24, 2013


A closer look at Google's patent for its news retrieval algorithm reveals a greater-than-expected emphasis on quality over quantity. Can this bias remain reliable over time?

Ten years after its launch, Google News' raw numbers are staggering: 50,000 sources scanned, 72 editions in 30 languages. Google's crippled communication machine, plagued by bureaucracy and paranoia, has never been able to come up with tangible facts about the benefits for the news media it feeds on. Its official blog merely mentions "6 billion visits per month" sent to news sites, and Google News claims to connect "1 billion unique users a week to news content" (to put things in perspective, the Huffington Post cruises at about 40 million UVs per month). Assuming the clicks are sent to a relatively fresh news page bearing higher-value advertising, the six billion monthly visits can translate into about $400 million per year in ad revenue. (This is based on $5 to $6 of revenue per 1,000 pages, i.e. a few dollars in CPM per single ad, depending on format, type of selling, etc.) That's a very rough estimate. Again: Google should settle the matter and come up with accurate figures for its largest markets. (On the same subject, see a previous Monday Note: The press, Google, its algorithm, their scale.)
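For what it's worth, the arithmetic behind that rough estimate checks out, assuming one page per visit:

```python
# Back-of-the-envelope check of the ~$400m/year figure quoted above.
visits_per_month = 6_000_000_000   # "6 billion visits per month"
revenue_per_1000 = 5.5             # midpoint of the $5-$6 per 1,000 pages assumption

yearly = visits_per_month / 1000 * revenue_per_1000 * 12
print(f"${yearly:,.0f} per year")  # $396,000,000, i.e. about $400m
```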

But how exactly does Google News work? What kind of media does its algorithm favor most? Last week, the search giant updated its patent filing with a new document detailing the thirteen metrics it uses to retrieve and rank articles and sources for its news service. (Computerworld unearthed the filing; it's here.)

What follows is a summary of those metrics, listed in the order shown in the patent filing, along with a subjective appreciation of their reliability, vulnerability to cheating, relevancy, etc.

#1. Volume of production from a news source:

A first metric in determining the quality of a news source may include the number of articles produced by the news source during a given time period [week or month]. [This metric] may be determined by counting the number of non-duplicate articles produced by the news source over the time period [or] counting the number of original sentences produced by the news source.

This metric clearly favors production capacity. It benefits big media companies deploying large staffs. But the system can also be cheated by content farms (Google has already addressed these questions); new automated content-creation systems are gaining traction, and many of them could now easily pass the Turing Test.
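As a toy sketch of how such a count might work (exact hashing stands in for whatever near-duplicate detection Google actually uses):

```python
import hashlib

# Toy illustration of metric #1: count non-duplicate articles by hashing
# normalized text. Real duplicate detection is fuzzier (near-duplicates,
# original-sentence counting), but the principle is the same.
articles = [
    "Stocks rallied on Monday after the jobs report.",
    "Stocks rallied on Monday after the jobs report.",   # reused wire copy
    "Local team wins the championship in overtime.",
]

def fingerprint(text):
    normalized = " ".join(text.lower().split())
    return hashlib.sha1(normalized.encode()).hexdigest()

print(len({fingerprint(a) for a in articles}))  # 2 articles counted for the period
```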

#2. Length of articles. Plain and simple: the longer the story (on average), the higher the source ranks. This is bad news for aggregators whose digital serfs cut, paste, compile and mangle abstracts of news stories that real media outlets produce at great expense.

#3. "The importance of coverage by the news source". To put it another way, this matches the volume of coverage by the news source against the general volume of text generated by a topic. Again, it rewards large resource allocations to a given event. (In New York Times parlance, such an effort is called "flooding the zone".)

#4. The “Breaking News Score”:   

This metric may measure the ability of the news source to publish a story soon after an important event has occurred. This metric may average the “breaking score” of each non-duplicate article from the news source, where, for example, the breaking score is a number that is a high value if the article was published soon after the news event happened and a low value if the article was published after much time had elapsed since the news story broke.

Beware, slow-moving newsrooms: On this metric, you'll be competing against more agile, maybe less scrupulous staffs that "publish first, verify later". This requires smart arbitrage by news producers. Once the first headline has been pushed, they'll have to decide what's best: immediately filing a follow-up, or waiting a bit and moving a longer, more value-added story that will rank better on metrics #2 and #3? It depends on elements such as the size of the "cluster" (the number of stories pertaining to a given event).
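The patent describes the behavior (high score if fast, low if late) but not a formula; a simple exponential decay captures the idea. A sketch with an invented two-hour half-life:

```python
import math

# One plausible shape for the "breaking score": exponential decay in the
# hours elapsed between the event breaking and the article's publication.
def breaking_score(hours_after_event, half_life_hours=2.0):
    return math.exp(-math.log(2) * hours_after_event / half_life_hours)

print(round(breaking_score(0.5), 2))   # 0.84: filed within the half hour
print(round(breaking_score(12.0), 2))  # 0.02: half a day late
# A source's overall metric would average this over its non-duplicate articles.
```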

#5. Usage Patterns:

Links going from the news search engine’s web page to individual articles may be monitored for usage (e.g., clicks). News sources that are selected often are detected and a value proportional to observed usage is assigned. Well known sites, such as CNN, tend to be preferred to less popular sites (…). The traffic measured may be normalized by the number of opportunities readers had of visiting the link to avoid biasing the measure due to the ranking preferences of the news search engine.

This metric is at the core of Google’s business: assessing the popularity of a website thanks to the various PageRank components, including the number of links that point to it.
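A toy sketch of the normalization the patent mentions, with invented numbers:

```python
# Metric #5 sketch: clicks normalized by display opportunities, so that
# heavily ranked (hence heavily shown) sources don't win by default.
sources = {
    "cnn":        {"clicks": 9_000, "impressions": 100_000},
    "small_site": {"clicks": 450,   "impressions": 5_000},
}

for name, s in sources.items():
    print(name, s["clicks"] / s["impressions"])
# Both print 0.09: equally attractive once exposure is equalized,
# even though raw clicks differ by a factor of 20.
```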

#6. The “Human opinion of the news source”:

Users in general may be polled to identify the newspapers (or magazines) that the users enjoy reading (or have visited). Alternatively or in addition, users of the news search engine may be polled to determine the news web sites that the users enjoy visiting. 

Here, things get interesting. Google clearly states it will use third-party surveys to detect the public's preferences among various media — not only their websites, but also their "historic" media assets. According to the patent filing, the evaluation could also include the number of Pulitzer Prizes the organization has collected and the age of the publication. That's the known part. What lies behind the notion of "human opinion" is a true "quality index" for news sources, one not necessarily correlated to their digital presence. Such factors clearly favor legacy media.

#7. Audience and traffic. Not surprisingly, Google relies on stats coming from Nielsen NetRatings and the like.

#8. Staff size. The bigger a newsroom is (as detected in bylines), the higher the value. This metric has the merit of rewarding large investments in news gathering. But it might become more imprecise as "large" digital newsrooms tend to be staffed with news repackagers bearing little added value.

#9. Number of news bureaus. It's another way to favor large organizations — even though their footprints tend to shrink both nationally and abroad.

#10. Number of "original named entities". That's one of the most interesting metrics. A "named entity" is "the name of a person, place or organization". It's the primary tool for semantic analysis.

If a news source generates a news story that contains a named entity that other articles within the same cluster (hence on the same topic) do not contain, this may be an indication that the news source is capable of original reporting.

Of course, some cheaters insert misspelled entities to create "false" original entities and fool the system (Google has taken care of it). But this metric is a good way to reward original source-finding.
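Here is a toy version of the idea, with set operations standing in for real named-entity recognition:

```python
# Metric #10 sketch: entities mentioned by one source that no other
# article in the same story cluster contains. Real systems would run
# named-entity extraction first; sets of strings stand in for that here.
cluster = {
    "source_a": {"Angela Merkel", "Berlin", "Bundestag"},
    "source_b": {"Angela Merkel", "Berlin"},
    "source_c": {"Angela Merkel", "Berlin", "Wolfgang Schäuble"},
}

for source, entities in cluster.items():
    others = set().union(*(e for s, e in cluster.items() if s != source))
    original = entities - others
    print(source, original or "nothing original")
# source_a gets credit for "Bundestag", source_c for "Wolfgang Schäuble",
# source_b brought nothing new to the cluster.
```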

#11. The "breadth" of the news source. It pertains to the ability of a news organization to cover a wide range of topics.

#12. The global reach of the news source. Again, it favors large media that are viewed, linked, quoted, "liked" and tweeted from abroad.

This metric may measure the number of countries from which the news site receives network traffic. In one implementation consistent with the principles of the invention, this metric may be measured by considering the countries from which known visitors to the news web site are coming (e.g., based at least in part on the Internet Protocol (IP) addresses of those users that click on the links from the search site to articles by the news source being measured). The corresponding IP addresses may be mapped to the originating countries based on a table of known IP block to country mappings.
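In code, the patent's description reduces to something like this toy sketch (invented IPs and a toy GeoIP table):

```python
# Metric #12 sketch: how many countries does click traffic come from?
# GeoIP lookup is reduced to a hypothetical table; real systems map
# whole IP blocks to countries, as the patent describes.
ip_to_country = {"81.2.69.1": "GB", "92.103.0.4": "FR", "8.8.8.8": "US"}

clicks = ["81.2.69.1", "92.103.0.4", "92.103.0.4", "8.8.8.8"]
countries = {ip_to_country[ip] for ip in clicks if ip in ip_to_country}
print(len(countries))  # traffic observed from 3 countries
```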

#13. Writing style. In the Google world, this means statistical analysis of content against a huge language model to assess "spelling correctness, grammar and reading levels".
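Since we don't see Google's model, here is merely one crude, hypothetical proxy for "reading level":

```python
# A stand-in for metric #13: average sentence length in words. Real
# systems score text against large statistical language models, not
# a heuristic like this; the point is only to make the metric tangible.
def avg_sentence_length(text):
    marks = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in marks.split(".") if s.strip()]
    return sum(len(s.split()) for s in sentences) / len(sentences)

print(avg_sentence_length("The cat sat. It slept."))  # 2.5
print(avg_sentence_length("Notwithstanding the foregoing, the tribunal held otherwise."))  # 7.0
```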

What conclusions can we draw? This enumeration clearly shows Google intends to favor legacy media (print or broadcast news) over pure players, aggregators or digital-native organizations. All the features recently added, such as Editor's Pick, reinforce this bias. The reason might be that legacy media are less prone to tricking the algorithm. For once, a known technological weakness becomes an advantage.


The Google Fund for the French Press

newspapers, online publishing · February 3, 2013


At the last minute, ending three months of tense negotiations, Google and the French press hammered out a deal. More than yet another form of subsidy, this could mark the beginning of a genuine cooperation.

Thursday night, at 11:00pm Paris time, Marc Schwartz, the mediator appointed by the French government, got a call from the Elysée Palace: Google's chairman Eric Schmidt was en route to meet President François Hollande the next day in Paris. They both intended to sign the agreement between Google and the French press on Friday at 6:15pm. Schwartz, along with Nathalie Collin, the chief representative of the French press, was just out of a series of conference calls between Paris and Mountain View: Eric Schmidt and Google's CEO Larry Page had green-lighted the deal. At 3am on Friday, the final draft of the memorandum was sent to Mountain View. But at 11:00am everything had to be redone: Google had made unacceptable changes, causing Schwartz and Collin to consider calling off the signing ceremony at the Elysée. Another set of conference calls ensued. The final-final draft, unanimously approved by the members of the IPG association (General and Political Information), was printed at 5:30pm, just in time for the gathering at the Elysée half an hour later.

The French President was in a hurry, too: That very evening, he was bound to fly to Mali, where French troops are waging a small but uncertain war to contain Al-Qaeda's expansion in Africa. Never shy of political calculation, François Hollande seized the occasion to be seen as the one who forced Google to back down. As for Google's chairman, co-signing the agreement along with the French President was great PR. As a result, negotiators from the press were kept in the dark until Eric Schmidt's plane landed in Paris on Friday afternoon, shortly before heading to the Elysée. Both men underlined what they called "a world premiere", a "historical deal"…

This agreement ends — temporarily — three months of difficult negotiations. Now comes the hard part.

According to Google's Eric Schmidt, the deal is built in two parts:

“First, Google has agreed to create a €60 million Digital Publishing Innovation Fund to help support transformative digital publishing initiatives for French readers. Second, Google will deepen our partnership with French publishers to help increase their online revenues using our advertising technology.”

As always, the devil lurks in the details, most of which will have to be ironed out over the next two months.

The €60m ($82m) fund will be provided by Google over a three-year period; it will be dedicated to new-media projects. About 150 websites belonging to members of the IPG association will be eligible for submissions. The fund will be managed by a board of directors including representatives from the press and from Google, as well as independent experts. Specific rules are designed to prevent conflicts of interest. The fund will most likely be chaired by Marc Schwartz, the mediator, also a partner at the global audit firm Mazars (all parties praised his mediation and want him to take the job).

Turning to the commercial part of the pact: it is less publicized but at least as important as the fund itself. In a nutshell, using a wide array of tools ranging from advertising platforms to content-distribution systems, Google wants to increase its business with the press in France and elsewhere in Europe. Until now, publishers have been reluctant to use such tools because they don't want to increase their reliance on a company they see as cold-blooded and ruthless.

Moving forward, the biggest challenge will be overcoming an extraordinarily high level of distrust on both sides. Google views the press (especially the French one) as only too eager to "milk" it, and unwilling to genuinely cooperate in order to build and share value from the internet. The engineering-dominated, data-driven culture of the search engine is light-years away from the convoluted "political" approach of legacy media, which don't understand, or look down on, the peculiar culture of tech companies.

Dealing with Google requires a mastery of two critical elements: technology (with the associated economics), and the legal aspect. Contractually speaking, it means transparency and enforceability. Let me explain.

Google is a black box. For good and bad reasons, it fiercely protects the algorithms that are key to squeezing money from the internet, sometimes one cent at a time — literally. If Google consents to a cut of, say, advertising revenue derived from a set of contents, the partner can’t really ascertain whether the cut truly reflects the underlying value of the asset jointly created – or not. Understandably, it bothers most of Google’s business partners: they are simply asked to be happy with the monthly payment they get from Google, no questions asked. Specialized lawyers I spoke with told me there are ways to prevent such opacity. While it’s futile to hope Google will lift the veil on its algorithms, inserting an audit clause in every contract can be effective; in practical terms, it means an independent auditor can be appointed to verify specific financial records pertaining to a business deal.

Another key element: From a European perspective, a contract with Google is virtually impossible to enforce. The main reason: Google won't give up on a governing-law clause stipulating that disputes are to be "litigated exclusively in the Federal or State Courts of Santa Clara County, California". In other words: Forget about suing Google if things go sour. Your expensive law firm based in Paris, Madrid, or Milan will try to find a correspondent in Silicon Valley, only to be confronted with polite rebuttals: For years now, Google has been parceling out multiple pieces of litigation among local law firms simply to render them unable to litigate against it. Your brave European lawyer will end up finding someone who will ask for several hundred thousand dollars only to prepare, but not litigate, the case. The only way to prevent this is to put an arbitration clause in every contract. Instead of going before a court of law, the parties agree to settle the matter through a private tribunal. Attorneys say it offers multiple advantages: It's faster, much cheaper, the terms of the settlement are confidential, and it carries the same enforceability as a court order.

Google (and all the internet giants, for that matter) usually refuses arbitration clauses as well as the audit provision mentioned earlier. Which brings us to a critical element: In order to develop commercial relations with the press, Google will have to find ways to accept collective bargaining instead of segmenting negotiations one company at a time. Ideally, the next round of discussions should come up with a general framework for all commercial dealings. That would be key to restoring some trust between the parties. For Google, it means giving up some amount of tactical as well as strategic advantage… that is part of its long-term vision. As stated by Eric Schmidt in his upcoming book "The New Digital Age" (the Wall Street Journal had access to the galleys):

“[Tech companies] will also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest.”

European media are warned: they must seriously raise their legal game if they want to partner with Google — and the agreement signed last Friday in Paris could help.

Having said that, I personally believe it could be immensely beneficial for digital media to partner with Google as much as possible. This company spends roughly two billion dollars a year refining its algorithms and improving its infrastructure. Thousands of engineers work on it. Contrast this with digital media: Small audiences, insufficient stickiness and low monetization plague both web sites and mobile apps; the advertising model for digital information is mostly a failure — and that's not Google's fault. The press should find a way to capture some of Google's technical firepower and concentrate on what it does best: producing original, high-quality content, a business that Google is unwilling (and probably culturally unable) to engage in. Unlike Apple or Amazon, Google is relatively easy to work with (once the legal hurdles are cleared).

Overall, this deal is a good one. First of all, both sides are relieved to avoid a law (see last Monday Note Google vs. the press: avoiding the lose-lose scenario). A law declaring that snippets and links are to be paid-for would have been a serious step backward.

Second, it's a departure from the notion of "blind subsidies" that has been plaguing the French press for decades. Three months ago, the discussion started with irreconcilable positions: publishers were seeking absurd amounts of money (€70m per year, the equivalent of IPG members' total ad revenue), and Google was focused on a conversion into business solutions. Now, all the people I talked to this weekend seem genuinely supportive of building projects, boosting innovation and taking advantage of Google's extraordinary engineering capabilities. The level of cynicism often displayed by the press is receding.

Third, Google is changing. The fact that Eric Schmidt and Larry Page jumped in at the last minute to untangle the deal shows a shift of perception towards media. This agreement could be seen as a template for future negotiations between two worlds that still barely understand each other.


Google vs. the press: avoiding the lose-lose scenario

online publishing · January 20, 2013


Google and the French press have been negotiating for almost three months now. If there is no agreement within ten days, the government is determined to intervene and pass a law instead. This would mean serious damage for both parties. 

An update on the new corporate tax system: read this story in Forbes by the author of the report quoted below.

Since last November, about twice a week and for several hours, representatives from Google and the French press have been meeting behind closed doors. To ease tensions, an experienced mediator has been appointed by the government. But mistrust and incomprehension still plague the discussions, and the clock is ticking.

In the currently stalled process, the whole negotiation revolves around cash changing hands. Early on, representatives of media companies were asking Google to pay €70m ($93m) per year for five years. This would be "compensation" for "abusively" indexing and linking their contents and for collecting 20-word snippets (see a previous Monday Note: The press, Google, its algorithm, their scale). For perspective, this €70m is roughly equivalent to the 2012 digital revenue of the newspapers and newsmagazines that constitute the IPG association (General and Political Information).

When the discussion came to structuring and labeling such a cash transfer, IPG representatives dismissively left the question to Google: "Dress it up!", they said. Unsurprisingly, Google wasn't ecstatic about this rather blunt approach. Still, the search engine feels this might be the right time to hammer out a deal with the press, instead of perpetuating a latent hostility that could later explode and cost much more. At least, this is how Google's European team seems to feel. (In its hyper-centralized power structure, management in Mountain View seems slow to warm up to the idea.)

In Europe, bashing Google is more popular than ever. Not just Google, but all the US-based internet giants, widely accused of killing old businesses (such as Virgin Megastore — a retail chain that also made every possible mistake). But the actual core issue is tax avoidance. Most of these companies hired the best tax lawyers money can buy and devised complex schemes to avoid paying corporate taxes in EU countries, especially the UK, Germany, France, Spain and Italy. The French Digital Advisory Board — set up by Nicolas Sarkozy and generally business-friendly — estimated last year that Google, Amazon, Apple's iTunes and Facebook had a combined revenue of €2.5bn-€3bn but on average each paid only €4m in corporate taxes instead of €500m (a rough 20% to 25% tax rate estimate). At a time of fiscal austerity, most governments see this (entirely legal) tax avoidance as politically unacceptable. In such a context, Google is the target of choice. In the UK, for instance, Google made £2.5bn (€3bn or $4bn) in 2011 but paid only £6m (€7.1m or $9.5m) in corporate taxes. To add insult to injury, in an interview with The Independent, Google's chairman Eric Schmidt defended his company's tax strategy in the worst possible manner:

“I am very proud of the structure that we set up. We did it based on the incentives that the governments offered us to operate. It’s called capitalism. We are proudly capitalistic. I’m not confused about this.”

Ok. Got it. Very helpful.

Coming back to the current negotiation about the value of the click: the question was quickly handed over to Google's spreadsheet jockeys, who came up with the required "dressing up". If the media accepted the use of the full range of Google products, additional value would be created for the company; a certain amount could then be derived from said value. That's the basis for a deal reached last year with the Belgian press (the agreement is shrouded in a stringent confidentiality clause).

Unfortunately, the French press began to eliminate most of the eggs in the basket, one after the other, leaving almost nothing to "vectorize" the transfer of cash. Almost three months into the discussion, we are stuck with antagonistic positions. The IPG representatives are basically saying: We don't want to subordinate ourselves further to Google by adopting opaque tools that we can find elsewhere. Google retorts: We don't want to be treated as another deep-pocketed "fund" that the French press will tap into forever without any return for our business; plus, we strongly dispute any notion of "damages" to be paid for linking to media sites. Hence the gap between the amount of cash asked by one side and what is (reluctantly) acceptable to the other.

However, I think both parties vastly underestimate what they’ll lose if they don’t settle quickly.

The government tax howitzer is loaded with two shells. The first one is a bill (drafted by none other than IPG's counsel, see PDF here), which introduces the disingenuous notion of "ancillary copyright". Applied to the snippets Google harvests by the thousands every day, it creates some kind of legal ground for taxing them the hard way. This construct is adapted from the music industry, in which the ancillary copyright levy ranges from 4% to 7% of the revenue generated by a sector or a company. A 7% rate on the revenue Google officially declares in France (€138m) would translate into less than €10m, pocket change for a company that in fact generates about €1.5 billion from its French operations.
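In numbers, using the article's own figures:

```python
# The ancillary-copyright arithmetic sketched above.
declared_revenue = 138e6          # what Google officially declares in France
estimated_revenue = 1.5e9         # the estimate of actual French activity

levy = 0.07 * declared_revenue    # top of the 4%-7% range used in music
print(f"{levy:,.0f} euros")       # 9,660,000: "less than €10m"
print(f"{levy / estimated_revenue:.2%} of estimated real revenue")  # ~0.64%
```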

That's where the second shell could land. Last Friday, the Ministry of Finance released a report on the tax policy applied to the digital economy, titled "Mission d'expertise sur la fiscalité de l'économie numérique" (PDF here). It's a 200-page opus, supported by no fewer than 600 footnotes. Its authors, Pierre Collin and Nicolas Colin, are members of the French public elite (one from the highest jurisdiction, le Conseil d'Etat, the other from the equivalent of the General Accounting Office — Nicolas Colin is also a former tech entrepreneur and a writer). The Collin & Colin Report, as it's now dubbed, is based on a set of doctrines that are also surfacing in the United States (as demonstrated by the multiple references in the report).

To sum up:
— The core of the digital economy is now the huge amount of data created by users. The report categorizes different types of data: "Collected Data" are gathered through cookies, whether the user allows it or not; such datasets include consumer behaviors, affiliations, personal information, recommendations, search patterns, purchase history, etc. "Submitted Data" are entered knowingly, through search boxes, forms, timelines or feeds in the case of Facebook or Twitter. And finally, "Inferred Data" are byproducts of various processing, analytics, etc.
— These troves of monetized data are created by the free "work" of users.
— The location of such data collection is independent of the place where the underlying computer code is executed: I create tangible value for Amazon or Google with clicks performed in Paris, while the clicks are processed in a server farm located in the Netherlands or in the United States — and most of the profits land in a tax shelter.
— The location of the value thus created by the "free work" of users is currently dissociated from the location of the tax collection. In fact, it escapes taxation altogether.

Again, I'm quickly summing up a lengthy analysis, but the conclusion of the Collin & Colin report is obvious: Sooner or later, the value created and the various taxes associated with it will have to be reconciled. For Google, the consequences would be severe: Instead of the €138m of official revenue admitted in France, the tax base would grow to €1.5bn in revenue and about €500m in profit; that could translate into €150m in corporate tax alone, instead of the mere €5.5m currently paid by Google. (And I'm not counting the 20% VAT that would also apply.)
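Again in numbers, with the ~30% rate implied by the report's €150m figure:

```python
# The Collin & Colin scenario restated as arithmetic.
current_tax = 5.5e6          # corporate tax Google currently pays in France
reassessed_profit = 500e6    # profit estimate on a €1.5bn revenue base

corporate_tax = 0.30 * reassessed_profit   # rate implied by the €150m figure
print(f"{corporate_tax:,.0f} euros")       # 150,000,000
print(round(corporate_tax / current_tax))  # roughly 27 times today's bill
```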

Of course, this intellectual construction will be extremely difficult to translate into enforceable legislation. But the French authorities intend to rally other countries and furiously lobby the EU Commission to come around to their view. It might take years, but it could dramatically impact Google's economics in many countries.

More immediately, for Google, a parliamentary debate over ancillary copyright would open a Pandora's box. From the Right to the Left, encouraged by François Hollande's administration, lawmakers will outbid each other in trashing the search engine and, beyond it, every large internet company.

As for members of the press, "they will lose too", a senior official tells me. First, because of the complications in setting up the machinery the Ancillary Copyright Act would require, they would have to wait about two years before getting any dividends. Second, governments — the present one as well as the past Sarkozy administration — have always been displeased with what they see as the French press's "addiction to subsidies"; they intend to drastically reduce the €1.5bn in public aid. If the press gets its way through a law, according to several administration officials, the Ministry of Finance will feel relieved of its obligations toward media companies that don't innovate much despite large influxes of public money. Conversely, if the parties are able to strike a decent business deal on their own, the French press will quickly get some "compensation" from Google and might still keep most of its taxpayer subsidies.

As for the search giant, it will indeed have to suffer a small stab but, for a while, will be spared the chronic pain of a long and costly legislative fight — and the contagion that goes with it: The French bill would be dissected by neighboring governments only too glad to adapt and improve it.

Next week: When dealing with Google, better use a long spoon; Why European media should rethink their approach to the search giant.


Quartz: Interesting… and uncertain

mobile internet, online publishing · September 30, 2012


Atlantic's new digital venture, named Quartz, is aimed at global business people. It innovates in many radical ways, but its business model remains dicey.

Two years ago, Atlantic Media’s president Justin Smith was interviewed by the New York Times. The piece focused on the digital strategy he successfully executed:

“We imagined ourselves as a Silicon Valley venture-backed startup whose mission was to attack and disrupt The Atlantic. In essence, we brainstormed the question: What would we do if the goal was to aggressively cannibalize ourselves?”

In most media companies, that kind of statement would have launched a volley of rotten tomatoes. Atlantic's disruptive strategy gave birth to a new offspring: Quartz, launched a couple of weeks ago.

Quartz is a fairly light operation based in New York and headed by Kevin Delaney, a former managing editor at Its staff of 25 was pulled together from great brands in business journalism: Bloomberg, The Wall Street Journal, The Economist and the New York Times. According to the site's official introduction, this is a team with a record of reporting in 119 countries and speaking 19 languages. Not exactly your regular gang of digital serfs or unpaid contributors that most digital pure players are built on.

This professional maturity, along with the backing of the Atlantic Media Company, a 155-year-old organization, might explain the set of rather radical options that makes Quartz so interesting.

Here are a few:

Priority on mobile use. Quartz is the first of its kind to deliberately reverse the old hierarchy: traditional web (for PCs) first, mobile interfaces second. This is becoming a big digital publishing debate, as many of us strongly believe we should go mobile-first and design our services accordingly (I fall into that category).

Quartz's founders mention market research showing their main target — people on the road interested in the global economy — uses 4.21 mobile devices on average (I love those decimals…): one laptop, one iPad, and two (!) BlackBerrys. (Based on multiple observations, I'd rather say one BlackBerry and one iPhone.)

No native mobile app. Similarly, Quartz went for an open HTML5 design instead of apps. We went through this before in the Monday Note: apps are mandatory for CPU-intensive features such as heavy graphics, 3D rendering and games; for news, HTML5 — as messy as it is — does the job just fine. In addition, Quartz relies on "responsive design", which allows a web site to dynamically morph in response to the specific connected device (captures are not to scale):

[Screenshots: the same Quartz page rendered on a desktop screen, an iPad in landscape and portrait modes, a small tablet, an iPhone, and a small phone.]

(I used Matt Kersley's responsive design test site to capture the Quartz renderings; it's an excellent tool to see how your site will look on various devices.)

A river-like visual structure. Quartz is an endless flow of stories that automatically load one below the other as you scroll down. The layout is therefore pretty straightforward: no page-jumps, no complicated navigational tools, just a lateral column with the latest headlines and a main window where articles concatenate. Again, the priority given to mobile use dictates design purity.

A lightweight technical setup. Quartz does not rely on a complex content management system for its production, but on WordPress. In doing so, it shows the level of sophistication reached by what started as a simple blog platform. Undoubtedly, the Quartz design team invested significant resources in finding the best WordPress developers, and the result speaks for itself (despite a few bugs, sure to be short-lived…).

Editorial choices. Instead of the traditional news "beats" (national, foreign, economy, science…), Quartz went boldly for what it calls "obsessions". This triggered a heated debate among media pundits: among others, read C.W. Anderson's piece What happens when news organizations move from "beats" to "obsessions"? on the Nieman Journalism Lab. Admittedly, the notion of "beats" sounds a bit old-fashioned. Those who have managed newsrooms know beats encourage fiefdoms, fence-building and bureaucracy… Editors love them because they're much simpler to manage on a day-to-day basis; editorial meetings can be conducted on the basis of a rigid organizational chart, and it's much easier to deal with a beat reporter or his/her desk chief than with some fuzzy "obsession" leader. At Quartz, the current "obsessions" appear in a discreet toolbar. They include China Slowdown, The Next Crisis, Modern States, Digital, Money, Consumer Class, Startups, etc.

To me, this "obsessive" way of approaching news is far more modern than the traditional "beat" mode. First, it conveys the notion of adjustability to news cycles, as "obsessions" can — and should — vary. Second, it breeds creativity and transversal treatments among writers (most business publications are quite boring precisely because of their "silo culture"). Third, digital journalism is intrinsically prone to "obsession", i.e. strong choices, angles, decisions. For sure, facts are sacred, but they are everywhere: when reporting on the latest alarming report from the World Bank, there is no need to repeat what lies just one click away — just sum up the main facts and link back to the original source! Still, this shouldn't preclude balanced treatment, fairness and everything else in the basic ethics formulary. (Having said that, let's be realistic: managing a news flow through "obsessions" is fine for an editorial staff of 20, certainly not for hundreds of writers.)

Quartz's business side. Quartz is a free publication. No paywall, no subscription, nothing. Very few ads either. Again, it opted for a decisive model by getting rid of the dumb banner. And it's a good thing: traditional display advertising kills designs, and crappy targeting practices irritate readers and bring in less and less money. (Most news sites are now down to single-digit CPMs [cost per thousand page views], and it will get worse as ad exchanges keep gaining power, buying remnant inventories in bulk and reselling them for nothing.) Instead, Quartz started with four sponsors: Chevron, Boeing, Credit Suisse and Cadillac, all showing quality brand content. It's obviously too early to assess this strategy. But Quartz's business people opted for being extremely selective in their choice of sponsors (one car-maker, one bank, etc.), with rates negotiated accordingly.

Also, brands are displayed prominently, with embedded content instead of the usual formats. Quartz is obviously shooting for very high CPMs. At the very least, they are right to try. I recently met a European newspaper that extracts €60 to €100 CPMs by tailoring ads and making special placements for a small list of advertisers.

Again: such a strategy is fine for a relatively small operation: as it is now, Quartz should not burn more than $3-4m a year. Betting on high CPMs is far more difficult for large websites — but niches can be extremely profitable. (For more on Quartz's economics, read Ken Doctor's piece, also on Nieman.)

To sum up, four elements will be key to Quartz's success.

1. Quickly build a large audience. Selected advertisers are not philanthropists; they want eyeballs, too. Because of its editorial choices, Quartz will never attract HuffPo-like audiences. To put things in perspective, the Economist gets about 7m unique browsers a month (far fewer unique visitors) and has 632,000 readers on its app.

2. Quartz bets on foreign audiences (already 60% of the total). Fine. But doing so is extremely challenging. Take The Guardian: 60 million unique visitors per month — one third in the UK, another third in the US, the rest abroad — a formidable journalistic firepower, and a mere £40m in revenue (versus $160m in advertising alone for a US competitor with half of the Guardian's audience: a 5-to-1 ratio per reader; see the back-of-the-envelope sketch after this list).

3. Practically, this means Quartz will have to deploy the most advanced techniques to qualify its audience: it will be doomed if it cannot tell its advertisers (more than four, we hope) that it can identify a cluster of readers traveling to Dubai more than twice a year, or another high-income group living in London and primarily interested in luxury goods and services (see a previous Monday Note on extracting reader value through Big Data).

4. In the end, Quartz is likely to face a growth question: stay in a niche, or broaden its reach (and its content, and its staff) to satisfy the ad market. Once its audience levels off, it might have no choice but to find a way to make its readers pay. That should not be a problem, as it focuses on a rather solvent segment.
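The back-of-the-envelope sketch promised in point 2, assuming roughly $1.60 to the pound in 2012:

```python
# Revenue per unique visitor: the Guardian vs. the unnamed US competitor.
GBP_TO_USD = 1.60                         # rough 2012 rate, an assumption

guardian = (40e6 * GBP_TO_USD) / 60e6     # ~$1.07 per monthly unique
competitor = 160e6 / 30e6                 # $5.33 per unique, on half the audience

print(round(competitor / guardian, 1))    # 5.0 -> the "5 to 1 ratio per reader"
```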


Why Murdoch’s The Daily Doesn’t Fly

online publishing · July 16, 2012

Is there a future for The Daily? According to last week's reports by The New York Observer and The New York Times, News Corp's "tablet newspaper" is on probation: Rupert Murdoch might pull the plug on The Daily, which loses $30 million a year. But, in an open email to the publication's staff, Jesse Angelo, its editor-in-chief, was quick to deny such rumors.

Eighteen months ago, The Daily was held up as embodying the news media's future. It was the first publication designed for the iPad, it bore the blessing of Steve Jobs himself (quite notable for someone who usually loathed the news sector), and it had the backing of the deep-pocketed News Corporation conglomerate. The project's success would be measured over time (five years), supported by a considerable amount of funding. It had everything it needed to succeed.

Fact is, The Daily never took off. Six months after its high-wattage launch, it claimed only 80,000 paying subscribers. Today, Jesse Angelo mentions a mere 100,000 subs. That is both far from the 500,000 necessary to break even and totally out of step with the growth of the iPad (and iPhone, and Android) installed base.

Something’s wrong with The Daily’s concept.

I subscribed. Twice, actually. At 99 cents a week ($39.99 a year), it was supposed to be a painless addition to my vast set of digital subscriptions. Strangely, it never became part of my reading habits.

This might be The Daily's first problem: It is everything and nothing special at the same time. It's not a tabloid, but it doesn't carry in-depth, enterprise journalism either. It's a sophisticated container for commodity news — i.e. the news you can get everywhere, in real time and for free. If I crave celebrity fodder, I go to TMZ or the Huffington Post. If I want business news, I'll find everything on CNN Money or Business Insider, all very efficiently and appealingly edited. No need to go through the tedious download of a 100-plus-page issue.

The Daily's inherent competition with the web (and mobile) was completely underestimated. Real time is now mandatory; so is the ability to generate conversations. For The Daily, a comparison over last weekend's news cycle is cruel: its coverage of the Mitt Romney tax-return controversy triggered 179 comments, versus 28,464 on the Huffington Post. (Note the HuffPo built its coverage on a 150-word Associated Press story and a one-minute video segment from CNN — that's the digital version of the multiplication of the loaves…)

The Daily is an old concept in a high-tech package. Some draw a parallel with USA Today, launched in 1982. Two things made that paper a success: its positioning as the first truly national newspaper in the United States, and its innovative format and layout; USA Today was designed for quick reads and explanatory journalism enhanced by graphics. That uniqueness was key to installing the paper in America's news landscape.

By contrast, The Daily does not enjoy the specificity of a "visually attractive convenience". Its sophistication and its cognitive profusion lead to an excess of complexity that ends up leveling its content. The Daily has no asperities, nothing to retain the reader's attention in a recurring manner. A medium is driven by its affinity with its audience: an intellectual, cultural or political affinity — or a combination of all three. The Daily lacks such engines. Even its Murdoch-induced anti-Obama stance fails to keep readers coming back.

Another key question for The Daily's future pertains to its business model. On average, according to the Newspaper Association of America, 70% of the revenue of US dailies comes from advertising, and 14% of that ad revenue comes from digital. By this measure, The Daily should have been loaded with ads. There are almost none, including in the new "WKND" edition. This is worrisome: many papers draw a third or half of their revenue from their weekend editions. The per-copy price won't help either. At 99 cents a week, it's pocket change. After Apple's 30% cut, this leaves less than 10 cents per issue. Even with a streamlined newsroom of 150, it can't fly. (For more about The Daily's economics, read Ken Doctor's January 2011 Newsonomics article, or last week's piece by Staci Kramer in PaidContent; both explain the issue pretty well.)
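The per-issue math is brutal. A quick sketch; the annualized line assumes the 100,000 subscribers mentioned earlier, all paying full price:

```python
# The Daily's per-issue take, after Apple's cut.
weekly_price = 0.99
net_weekly = weekly_price * (1 - 0.30)   # Apple keeps 30% of the subscription
per_issue = net_weekly / 7               # one issue per day
print(round(per_issue, 3))               # 0.099 -> under 10 cents per issue

subscribers = 100_000                    # the figure cited by Jesse Angelo
print(net_weekly * 52 * subscribers)     # ~$3.6m a year, against a 150-person newsroom
```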

The Daily also illustrates the difficulty of building a digital media brand. Many tried, few succeeded. Slate and Salon, excellent as they are journalistically speaking, never took off audience-wise. The Huffington Post made it through a unique combination of unscrupulous "aggrelooting" of content from a variety of willing and unwilling sources, legions of unpaid bloggers, Arianna's media footprint, and unparalleled audience-building techniques (see the previous Monday Note: Transfer of Value). A combination that proved very complicated to reproduce — even for someone as resourceful as Rupert Murdoch.

The Australian-born media mogul thought he could launch a new breed of news product from scratch. But in his quest for bold digital efficiency, he failed to see that a news product with no history, no breadth, no soul, no character could only face an uncertain future.


Transfer of Value

journalism, online publishing · July 8, 2012

This is a story of pride vs. geekiness: Traditional newspapers that moved online are about to lose the war against pure players and aggregators. Armed with the conviction that their intellectual superiority makes them immune to digital modernity, newspapers have neglected today's internet driving forces: relying on technology to build audiences, and the ability to coalesce a community around any range of subjects — even the most mundane ones.

When I discuss this with seasoned newsroom people on both sides of the Atlantic, most still firmly believe the quality of their work guarantees their survival against a techno-centric approach to digital content.

I’m afraid they are wrong. Lethally so.

We are facing a culture shock. On one side, legacy media: great franchises that grew on strong values such as "pristine" journalism, independence, storytelling, fact-checking, solid editing, respect for copyright… Along the way, they made their share of mistakes but, overall, the result is great. After all, at the height of the Fourth Estate's power, the population was better informed than today's Facebook cherry-pickers. Now, this (aging) fraternity faces a new generation of media people who build their fiefdoms on a completely different set of values. For instance, the notion of copyright has become exceedingly elastic. A few months ago, Flipboard began to aggregate content from French news organizations, taking large excerpts — roughly capturing the essence of a story — along with a token link back to the original content. Publishers sent polite letters saying, in substance: 'Guys, although we are fond of your iOS applications, you can't simply pick up our stuff without permission; we need to talk first…'

Publishers' attitude toward aggregators has always been ambiguous. Google is the perfect example: on one hand, publishers complain about the search giant's power; at the same time, they spend huge sums of money optimizing their sites and purchasing relevant keywords, all to make the best use of the very power they criticize. In Belgium, publishers challenged Google in court over the Google News product, before realizing they really depended a lot on it and begging for reintegration into the Google traffic cauldron.

Another example of the culture shock: reliance on technology. It's a religion for the newcomers but merely a support function for traditional editors. Unfortunately, evidence shows how wrong it is to snub the traffic-building arsenal. Here are a few examples.

On July 5th, The Wall Street Journal ran an editorial piece about Mitt Romney's position on Obamacare.

The rather dull and generic "Romney's Tax Confusion" title for this 1,000-word article attracted a remarkable 938 comments.

But look at what the Huffington Post did: a 500-word treatment including a 300-word article, plus a 200-word excerpt of the WSJ opinion piece and a link back (completely useless). But, unlike the Journal, the HuffPo ran a much sexier headline.

Its choice of words takes into account all the Search Engine Optimization (SEO) prerequisites, using high-yield words such as "Squandering" and "Snafu" in conjunction with much-sought-after topics such as "Romney" and "Health Care". Altogether, this guarantees a nice blip on Google's radar — and a considerable audience: 7,000+ comments (7x more than the original), 600 Facebook shares, etc.

HuffPo's editors took no chances: the headline they picked is algorithm-designed to yield the best results on Google. The aggregator has invested a lot in SEO tools: I was told that every headline is matched in real time against Google's most-searched items right before being posted. If the editor's choice scores low in SEO, the system suggests better terms. In some instances the HuffPo will A/B test headlines: it serves different versions of a page to a couple of random groups and, after five minutes, the best-performing headline is selected. Found on Quora, here are explanations from Whitney Snyder, HuffPost's senior news editor:

The A/B testing was custom built. We do not, however, A/B test every headline. We often use it to see if our readers are familiar with a person’s name (i.e. John Barrasso vs GOP Senator), or to play up two different aspects of a story and see which one interests readers more. We also A/B test different images.
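For illustration, here is a toy simulation of such an A/B headline test; the click-through rates are invented, and the real custom-built HuffPo system is certainly more sophisticated:

```python
import random

# Toy A/B headline test: serve two variants to random visitors for a
# trial window, then keep the winner. CTRs below are pure invention.
TRUE_CTR = {"dull headline": 0.030, "seo-tuned headline": 0.055}

def ab_test(trials=10_000):
    shown = {h: 0 for h in TRUE_CTR}
    clicks = {h: 0 for h in TRUE_CTR}
    for _ in range(trials):
        h = random.choice(list(TRUE_CTR))   # random bucket per visitor
        shown[h] += 1
        if random.random() < TRUE_CTR[h]:   # simulated reader click
            clicks[h] += 1
    return max(TRUE_CTR, key=lambda h: clicks[h] / shown[h])

print(ab_test())  # almost always "seo-tuned headline"
```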

Other examples confirm the effectiveness of HuffPo's approach. Here is a media story about a TV host whose position is in jeopardy; the Daily News covered it in a 500-word article.

The Huffington Post summed it up in 175 words, but introduced it with a much more potent headline built on strong, Google-friendly locutions.

Results speak for themselves:

Daily News original version: 2 comments, 1 tweet, 1 Facebook share.
Huffington Post version: 4,601 comments, 79 tweets, 155 shares.

Like no one else, the HuffPo masters eye-grabbing headlines such as these:
Watch Out Swimmers: Testicle-Eating Fish Species Caught in US Lake (4,000 Facebook recommendations); or Akron Restaurant Owner Dies After Serving Breakfast To Obama (3,300 comments); or yesterday's home page: LEPAGE LOSES IT: IRS 'THE NEW GESTAPO', displayed in an 80-point font. This adaptation from Maine's daily Press Herald generated about 6,000 comments on the aggregator.

The point is not to criticize the Huffington Post for being extremely efficient at optimizing its work. They invested a lot, and they trained their people well. Of course, the bulk of HuffPo's content comes from: a) unpaid bloggers — 9,884 new ones last year alone, according to Arianna's count; b) content borrowed from other media and re-engineered by 170 "journalists", a term that encompasses various kinds of news producers and a bunch of true writers and editors; c) a small percentage of original reporting. Each day, all this adds up to "over 1,000 stories published" that translate into 1.4 million Facebook referrals and 250,000 comments. Staggering numbers indeed. With some downsides, too: 16,000 comments (!) on a 200-word article about Barack Obama asking to turn off Fox News during a campaign tour are not likely to attract enviable demographics, advertising-wise. The HuffPo might make a billion page views per month, but most of them yield only dimes.

The essence of what we're seeing here is a transfer of value. Original stories get very little traffic due to the poor marketing tactics of old-fashioned publishers. But once they are swallowed by the HuffPo's clever traffic-generation machine, the same journalistic item will do tens or hundreds of times better, traffic-wise. Who is right? Who can look forward to the better future in the digital world? The virtuous author carving language-smart headlines, or the aggregator generating eye-gobbling phrases thanks to high-tech tools? Your guess. Maybe it's time to wake up.


Off The eBook Shelf

online publishing · June 18, 2012

Readers are voting with their wallets: The eBook is winning. In the US, eBook sales are now topping hardcovers for the first time (story in TechCrunch). Not everywhere, of course. According to the Bowker Global eBook Research, the global market for eBooks is driven — in that order — by India, Australia, the UK and the United States. The laggards are Japan and (no surprise) France. The chart below shows the percentage of the internet population reporting the purchase of a digital book in the six months prior to the survey.

Interestingly, for most population samples, the level of purchases is not correlated with awareness. France enjoys the highest level of awareness, but its internet population buys five times fewer eBooks than India’s. Once an Indian internet user finds an attractive digital book offer, he or she will most likely jump on it. This could lead to the following: in emerging countries, the cellular phone has become the main communication tool, leapfrogging the deployment of land lines; similarly, we could see eBooks bypassing print in countries like India, where a large segment of the population is becoming both literate and connected at a fast pace. (Actually, Bowker also reports that over 50% of respondents in India and Brazil are likely to buy an eBook in the next six months, ten times more than in France.)

If the rise of the eBook happily provides access to knowledge in emerging countries, the picture is more mixed in countries with a long history and high penetration of printed books.

For instance, let’s have a look at the ISBN registration data for the United States. The chart below, drawn again from Bowker (full PDF table here), shows a steady progression:

Between 2002 and 2011, in the US market, ISBN registrations grew by 61% and reached 347,178 new titles. (A technical note: I’m only taking into account books that fall into an identified category, such as Arts, Biography, Business, etc. I’m excluding the huge segment labeled as non-traditional, which includes reprints, public domain, and titles printed on demand; this segment grew by over 3,500% to 1.2 million registrations, which would distort the picture.)
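A quick back-of-the-envelope check on those Bowker figures (my arithmetic, not Bowker’s):

```python
# Back-of-the-envelope check on the quoted Bowker figures.
titles_2011 = 347_178
growth = 0.61                      # 61% growth between 2002 and 2011

titles_2002 = titles_2011 / (1 + growth)
years = 2011 - 2002
cagr = (titles_2011 / titles_2002) ** (1 / years) - 1

print(f"Implied 2002 base: ~{titles_2002:,.0f} titles")   # ~215,638
print(f"Compound annual growth: ~{cagr:.1%}")             # ~5.4% per year
```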

We clearly see the impact of mainstream e-readers such as the Kindle and the iPad. Without any doubt, they contributed to the growth in registrations. (Unfortunately, ISBN counts do not provide a breakdown between print and digital.) Over the last nine years, some book publishing segments fared better than others. See the chart below:

Fiction is doing twice as well as all other categories combined. The digital book is the medium of choice for fiction: a) eBooks tend to be cheaper than print, and price elasticity is now a proven fact: the cheaper a book, the more likely a reader is to try it; b) e-commerce breeds impulse buying (cf. the famous One-Click® feature); c) readers can sample the product more efficiently than in the printed world, as Amazon and the iBooks Store make large samples available for free. No surprise, then, to see the Fiction category holding up well.
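For point (a), price elasticity simply measures how strongly demand reacts to a price change; a toy illustration, with made-up numbers:

```python
# Illustrative (made-up) numbers: price elasticity of demand for an eBook.
old_price, new_price = 9.99, 7.99          # a price cut
old_units, new_units = 1_000, 1_400        # the resulting sales lift

pct_price_change = (new_price - old_price) / old_price    # about -20%
pct_units_change = (new_units - old_units) / old_units    # +40%

elasticity = pct_units_change / pct_price_change
print(f"Price elasticity: {elasticity:.1f}")
# -2.0: demand is elastic, i.e. the price cut more than pays for
# itself in extra volume.
```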

No surprise either in seeing the three worst performers rank among the prime victims of the digital era. History books have to compete with the vast trove of material available on the web; that’s the Encyclopaedia Britannica syndrome: going out of print after 244 years of duty, demoted by the 11-year-old Wikipedia. Like it or not, most history book publishers will meet the same fate.

Similarly, Travel and Computer books are in direct competition with mostly free online services. Who will buy a “how-to” computer book today? There are plenty of video tutorials explaining how to replace a hard drive or how to wrestle with Photoshop. And let’s not even mention the Travel segment, with its tons of guides, reviews, price comparators and transaction services. As for the language section of the bookstore, again, a simple query in Google can help with spelling, translation and grammar… Even the precious Roget’s Thesaurus is online, and rather efficiently so. I’ll just venture that French Canadians did Roget one better: a company called Druide publishes a suite of applications for PCs, tablets and smartphones called Antidote. It’s an unusually clever combination of dictionary, thesaurus, quotations, etymology and more. I wondered for a while about the name Antidote — until I realized the Québécois saw the product as an antidote to… English. An old struggle.

The main eBook casualty is likely to be bookstores. In New York City in the Fifties, about 330 bookstores were in business. Now they are down to 30 or even fewer, laments André Schiffrin, former head of Pantheon Books, in his recent book Words & Money. Countries like France and Germany have laws that protect independent bookstores: from Bordeaux to Berlin, citizens are thankful to find warmer and more relevant recommendations than the algorithm-based suggestions served by Amazon. But how long will it last?


Advertising: The Trust Factor

advertising, online publishing By April 29, 2012 517 Comments

The digital advertising equation is outlined in the Nielsen graph below. The Global Trust in Advertising survey released this month (summary on Nielsen’s site and PDF here) underlines one key finding: for the vast majority of digital users, trust lies first and foremost in recommendations and opinions from their peers. As for the bulk of the formats found on websites or on mobile (the various flavors of display advertising), they fall to the bottom of the chart. Nielsen’s study, based on 26,000 respondents in 56 countries, was conducted in Q3 2011.

Here are the expanded results:

By themselves, these figures provide the perfect explanation for the current state of the advertising industry and, more specifically, for the digital ads segment.

Superimpose the ad revenue structure of most news media companies and you get an alarming symmetry: these businesses derive most of their revenue from, and allocate most of their effort to, the least trusted ad vectors: display banners of various forms (on the web, mobile or social), online video ads, etc.

The survey also provides a grim view of what people trust: they put more faith in a branded website (58% positive), a brand sponsorship ad (47%), or even a product placement in a TV series (40%) than in a display ad on a website or on mobile (33% each)!

Even worse is the general distrust of advertising: of the 19 ad vectors on this list, only 5 are trusted by at least 50% of the respondents.

Let’s focus on a few items:

Recommendations from people I know: Trusted: 92%, Not trusted: 8%
Consumer opinions posted online: Trusted: 70%, Not trusted: 30%
Problem is, traditional media don’t own these two segments; social networks and consumer websites do. It’s a key strength of Facebook to have people engage in conversations around brands and products. (IMO: a pathetic waste of time.) Interestingly enough, the social network environment doesn’t boost the despised banner that much: when served on a social network, banners gain a mere 3 percentage points (at 36%) over a plain website or mobile context. This must be a matter of concern for Facebook’s revenue stream: its unparalleled ability to pinpoint a target doesn’t raise the level of trust.

Editorial content such as newspaper articles: Trusted: 58%, Not trusted: 42%
Not surprising, but worth a bit more thought. It pertains to the level of trust readers put in the medium of their choice, carbon or bits. As expected, a fair and balanced product review written by a non-corrupted journalist (every word in that sentence counts) will be trusted. That’s what I call the Consumer Reports syndrome. The organization deploys 100+ professional testers, and carries no ads beyond the ones for its own paid-for services and extra publications. Among its enviable base of 7 million subscribers, half pay $6.95 a month (or, a much better deal, $30 a year) for online access. This is good ARPU compared to other digital media, which only make a few bucks per viewer per year in advertising revenue.
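To put that ARPU claim in figures, here is a rough bracket based on the numbers above (the split between monthly and annual payers is not public, so this is only an estimate):

```python
# Rough ARPU bracket using the Consumer Reports figures quoted above.
paying_subscribers = 3_500_000        # half of the 7 million base (estimate)

annualized_monthly = 6.95 * 12        # $83.40/year if paying month-to-month
annual_rate = 30.00                   # $30/year on the annual deal

# The monthly/annual split is unknown, so bracket the range instead:
print(f"ARPU range: ${annual_rate:.2f} to ${annualized_monthly:.2f} per year")
# Either end dwarfs the few dollars per viewer per year that
# ad-supported sites typically earn.
```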

What does this mean for online outlets? They should consider beefing up the volume of product reviews while preserving the reliability of their coverage. This also raises the question of the separation between journalism, advertorial and plain advertising. By no means should a publisher accept blurring the lines: beneficial in the short term, damaging in the long run. Having said this, when I see a growing number of Anglo-Saxon magazines making big money from high-quality advertorials, I tend to believe online media should consider devoting sections of their websites or applications to such content. But two requirements need to be met: (again) no confusion whatsoever; and editorial standards for what will indeed carry commercial content, but in a well-designed, informative, visually attractive package. One important point to keep in mind: this type of service is typically out of reach for a Facebook, a Google or a Microsoft. But moving in such a direction requires unified thinking between publishers, the sales house (and the ad agencies it deals with) and the editorial team. A long way to go.

Ads served in search engine results: Trusted: 40%, Not trusted: 60%
Speaking of Google, here’s another interesting finding in the Nielsen survey: by and large, readers don’t trust search ads. To many viewers, text ads popping up on pages, on YouTube videos or in emails are intrusive and irrelevant (to say the least: look at this hilarious site featuring inappropriate ad placements). Still, search ads account for about 60% of online ad revenue. Why? Essentially because they provide a cheap, convenient, and totally disintermediated way of promoting a product. On this count, Google makes no secret of its intention to vaporize the advertising middleman thanks to its superior technology.

The digital advertising party is just warming up. The business will continue its ongoing transformation. Currently, digital accounts for 16% of global ad spending, and it is likely to gain 10 more percentage points over the next five years. Not all markets or products carry the same potential: according to the Financial Times, Unilever currently spends 35% of its US budget on digital, compared with 25% in Europe and only 4% in India. For news media, the opportunity is that brands and agencies are still searching for the right formula. Brands face an incredibly complex challenge, as they have to play with many dials at the same time: traditional ads, digital, web, mobile, apps, social, behavioral. And all are tightly intertwined, creating flurries of new metrics: ROI naturally, but also engagement, sentiment, feelings.

Like elsewhere in the digital world, the most successful players will be the genuine tinkerers. Software giant Adobe is said to spend 20% of its digital budget on experimental campaigns. They test, measure, adjust and iterate.

It is up to digital media to go from passive to active in the quest for the right model. Their economics depend on it.


NYT Digital Lessons

newspapers, online publishing By April 22, 2012 Tags: 9 Comments

The New York Times Company’s latest quarterly numbers contain a rich trove of data regarding the health of the digital news industry. Today, we’ll focus on the transition from traditional advertising to the paywall strategies being implemented across the world. Paywalls appear to be a credible way to offset — alas, only partially — the declining revenue from print operations.

First, the highlights.

(See NYTCO’s press release here and stock here. Unless otherwise stated, all figures are for Q1 2012 and comparisons are Q1 2012 vs. Q1 2011.)

  • Total Revenue is stable at $499.4 million.
  • Operating profit is down by 23% at $19.6 million. When excluding depreciation, amortization and (generous) severance packages, OP is up 9.4% at $57 million.
  • Print advertising for all properties and from all sources is down 8.1% at $238 million.
  • Circulation revenue is up 9.7% at $227 million.
  • Digital subscriptions, launched just a year ago, reach 454,000. That’s a 16% growth vs. Q4 2011.
  • Digital advertising for the entire NYTCO (this includes NYTimes.com, BostonGlobe.com, About.com, etc.) is down 10.3% to $71 million.
  • The decrease is primarily due to About.com, which lost 24% of its ad revenue (down to $22.6 million) and 50% of its operating profit (down to $7 million). The online guide is entirely dependent on advertising.
  • But the real bad news is the decline in digital advertising for the NYT News Media Group, consisting mostly of the NYT and the Boston Globe: revenue dropped by 2.3% to $48.5 million for the quarter.
  • Digital advertising accounts for 22.5% of the entire NYTCO’s ad revenue, and for 30% of the NYT News Media Group’s ad revenue.

We can discern four trends:

#1: Digital advertising is struggling, even for a major brand such as the New York Times.
Again, the trend:
FY 2010: +18%
FY 2011: +10%
Q1 2012 (Y/Y): -2%

This confirms a much-feared trend. By and large, in a news context, the performance of digital advertising is on the decline. All indicators are now flashing red: CPM (cost per thousand impressions), cost per click, volumes, yields, etc. The cause is well known, and far more acute in digital than in print: ads and news content compete for the same eyeballs. The more attractive and eye-catching the content, the lower the ad yields. Behavioral advertising won’t change that much — at least for hard-core, high value-added news environments.
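For readers new to the jargon, a quick illustration of how CPM translates into revenue (hypothetical figures):

```python
# Hypothetical illustration of why falling CPMs hurt.
impressions = 1_000_000        # ad impressions served
cpm = 2.50                     # dollars per thousand impressions

revenue = impressions / 1000 * cpm
print(f"Revenue: ${revenue:,.0f}")   # $2,500 for a million impressions
# Halve the CPM and a site must double its pageviews just to stand still.
```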

This decline also announces a major shift in the way ads are sold. The advertising flow is likely to split: premium ads, such as well-placed special packages, will still be sold at high prices by in-house teams. But the bulk of the inventory will shift downward to bazaars in which gazillions of pageviews are dumped into real-time exchanges that are supposed to optimize prices. The bad news: such schemes are likely to fuel deflationary trends for remnant (i.e. sub-premium) inventory. The good news: media organizations such as online news outlets or pure players are likely to join such marketplaces and perhaps gain an operating role of sorts — assuming they are smart enough to cooperate (I’ll address this in an upcoming column).
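For the curious, real-time exchanges typically run some variant of a sealed-bid, second-price auction for each impression; here is a deliberately simplified sketch, not the logic of any actual exchange:

```python
# Minimal sketch of a second-price auction, the mechanism most
# real-time ad exchanges use: the highest bidder wins but pays the
# second-highest bid, subject to a publisher-set floor price.

def run_auction(bids, floor_price):
    """bids: dict mapping bidder name to bid, in dollars CPM."""
    eligible = {b: v for b, v in bids.items() if v >= floor_price}
    if not eligible:
        return None, None                    # impression goes unsold
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    # The winner pays the runner-up's bid, or the floor if there is none.
    clearing_price = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, clearing_price

winner, price = run_auction({"dsp_a": 1.80, "dsp_b": 2.40, "dsp_c": 0.90},
                            floor_price=1.00)
print(winner, price)   # dsp_b pays 1.80; remnant inventory clears low
```

The deflationary risk the paragraph above describes is visible even in this toy: the winner never pays its own bid, only a hair above the competition.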

#2 Paywalls work. With roughly half a million paying subscribers, the NYTimes.com has captured the equivalent of 39% of its weekday print circulation of 1.3 million. In its financial statements, the Times doesn’t break down its revenue structure, but a significant part of the 13% increase in circulation revenue (print + digital) is attributable to digital subscriptions (the rest comes from the recent print price hike).
Estimates are difficult, but here are some clues: of these 500,000 digital subs, an estimated 60% pay the basic $15/month rate while 40% opt for the full $35 digital package. This would translate into digital subscribers contributing $34.5 million (18%) of the $190 million in NYT Media Group circulation revenue that appears in the quarterly statement. 18% is not bad for a paywall that is barely one year old (even though this estimated revenue doesn’t reflect the cost of the NYTimes’ massive promotion of its paywall program). And compared to the $48 million in digital advertising, it is significant.
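The arithmetic behind that estimate, spelled out (all inputs are the estimates quoted above, not NYT disclosures):

```python
# Reconstructing the quarterly revenue estimate above.
subscribers = 500_000
basic_share, basic_rate = 0.60, 15.0    # $15/month tier
full_share, full_rate = 0.40, 35.0      # $35/month full digital package

monthly = subscribers * (basic_share * basic_rate + full_share * full_rate)
quarterly = monthly * 3
print(f"Quarterly digital sub revenue: ${quarterly / 1e6:.1f}M")  # $34.5M
print(f"Share of circulation revenue: {quarterly / 190e6:.0%}")   # ~18%
```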

#3 A warning to paywall dreamers: some restrictions apply. To be successful, a digital subscription offer must check the following boxes:
— Own a sizable share of a given (and preferably solvent) segment of the population. In other words: start from a large built-in audience. Globally, the New York Times has about 34 million unique visitors per month, a large pool for conversions to the paywall.
— Don’t expect a paywall to work for a small site or a niche product, unless it is a reference for its community. Even then, in spite of its reference status in New England, the Boston Globe shows a mere 18,000 paid-for digital subscribers.
— Allow time to grow the subscriber base. A paywall strategy must spread over several years. The free audience first has to be converted into registered users who can be thoroughly data-mined; then the paywall is tightened, with fewer and fewer articles available for free (the NYT recently lowered its threshold from 20 to 10 free articles); the entire process takes two to four years, depending on where you start.
— Carefully manage porosity. That’s why some people refer to a “semi-permeable membrane” (see the interesting conversation between Clay Shirky and the NYT’s digital manager Denise Warren on NPR last January). While it is tightening its paywall, the NYT willingly leaves plenty of free access to its content: if you land on its site from a search engine, from Facebook, Twitter, or from a blog, no limit applies. Such a tactic has two virtues: it doesn’t hurt organic search rankings or incoming traffic from search engines (which can weigh as much as 30-40% of the audience), and the brand remains exposed to many non-subscribers, such as social network users. (A minimal sketch of this kind of referrer-based metering follows the list below.)
— Quality is non-negotiable. A successful paywall requires exclusive, unique, authoritative, high-quality content. A paywall isn’t the right solution for streams of “commodity news” or user-generated content. It won’t work for the Huffington Post: despite its enormous audience, the HuffPo’s embryonic original content won’t do much to alter its “left-wing Fox News” positioning (even though the HuffPo managed to score a Pulitzer Prize for National Reporting for its remarkable Beyond The Battlefield series).
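As promised above, here is a minimal sketch of referrer-based metering; the 10-article threshold mirrors the NYT figure quoted above, while the domain list and every name are invented for illustration:

```python
# Minimal sketch of a metered paywall with referrer-based porosity.
# Threshold matches the NYT meter described above; everything else
# (domain list, function names) is illustrative. Requires Python 3.9+.
from urllib.parse import urlparse

FREE_ARTICLES_PER_MONTH = 10          # the NYT lowered its meter from 20 to 10
OPEN_REFERRERS = {"google.com", "facebook.com", "twitter.com"}  # plus blogs, etc.

def can_read(user, referrer_url):
    host = urlparse(referrer_url).netloc.removeprefix("www.")
    if host in OPEN_REFERRERS:
        return True                    # side-door traffic is never metered
    if user.get("is_subscriber"):
        return True
    user["articles_read"] = user.get("articles_read", 0) + 1
    return user["articles_read"] <= FREE_ARTICLES_PER_MONTH

reader = {"is_subscriber": False}
print(can_read(reader, "https://www.google.com/search?q=nyt"))  # True, unmetered
print(can_read(reader, "https://www.nytimes.com/"))             # True, meter ticks
```

The design choice is the point: the meter only ticks for direct, habitual readers, which is exactly the porosity the membrane metaphor describes.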

#4 Print is still alive. While print advertising is drying up, the share of circulation revenue keeps rising (in relative terms). The good news: price hikes don’t seem to matter; the recent increase to $2.50 had no effect on sales. Actually, the Times uses its weekend edition (priced at $5.00) to channel digital subscriptions, by making it the best deal on its complex rate card. This leads to two conclusions: a sizable reservoir of readers is ready to pay for quality-on-paper at almost any price (see a previous Monday Note, Cracking the Paywall); and commercially strong weekend editions can be a potent vector for digital subscriptions.

Print and digital strategies are more intertwined than ever.