About Frédéric Filloux

Posts by Frédéric Filloux:

Google vs. the press: avoiding the lose-lose scenario

 

Google and the French press have been negotiating for almost three months now. If there is no agreement within ten days, the government is determined to intervene and pass a law instead. This would mean serious damage for both parties. 

An update on the new corporate tax system: read this Forbes story by the author of the report quoted below.

Since last November, about twice a week and for several hours at a time, representatives from Google and the French press have been meeting behind closed doors. To ease tensions, the government appointed an experienced mediator. But mistrust and incomprehension still plague the discussions, and the clock is ticking.

In the currently stalled process, the whole negotiation revolves around cash changing hands. Early on, representatives of media companies were asking Google to pay €70m ($93m) per year for five years. This would be “compensation” for “abusively” indexing and linking their contents and for collecting 20-word snippets (see a previous Monday Note: The press, Google, its algorithm, their scale.) For perspective, this €70m is roughly equivalent to the 2012 digital revenue of the newspapers and newsmagazines that constitute the IPG association (General and Political Information).

When the discussion came to structuring and labeling such a cash transfer, IPG representatives dismissively left the question to Google: “Dress it up!”, they said. Unsurprisingly, Google wasn’t ecstatic about this rather blunt approach. Still, the search engine feels this might be the right time to hammer out a deal with the press, instead of perpetuating a latent hostility that could later explode and cost much more. At least, this is how Google’s European team seems to feel. (In its hyper-centralized power structure, management in Mountain View seems slow to warm up to the idea.)

In Europe, bashing Google is more popular than ever. Not just Google, but all the US-based internet giants, widely accused of killing old businesses (such as Virgin Megastore — a retail chain that also made every possible mistake). But the actual core issue is tax avoidance. Most of these companies hired the best tax lawyers money can buy and devised complex schemes to avoid paying corporate taxes in EU countries, especially in the UK, Germany, France, Spain and Italy. The French Digital Advisory Board — set up by Nicolas Sarkozy and generally business-friendly — estimated last year that Google, Amazon, Apple’s iTunes and Facebook had a combined revenue of €2.5bn–€3bn but paid on average only €4m each in corporate taxes, instead of a combined €500m (a rough 20% to 25% tax rate estimate). At a time of fiscal austerity, most governments see this (entirely legal) tax avoidance as politically unacceptable. In such a context, Google is the target of choice. In the UK, for instance, Google made £2.5bn (€3bn or $4bn) in 2011 but paid only £6m (€7.1m or $9.5m) in corporate taxes. To add insult to injury, in an interview with The Independent, Google’s chairman Eric Schmidt defended his company’s tax strategy in the worst possible manner:

“I am very proud of the structure that we set up. We did it based on the incentives that the governments offered us to operate. It’s called capitalism. We are proudly capitalistic. I’m not confused about this.”

Ok. Got it. Very helpful.

Coming back to the current negotiation over the value of the click, the question was quickly handed over to Google’s spreadsheet jockeys, who came up with the required “dressing up”. If the media accepted the use of the full range of Google products, additional value would be created for the company; then, a certain amount could be derived from said value. That’s the basis for a deal reached last year with the Belgian press (the agreement is shrouded in a stringent confidentiality clause.)

Unfortunately, the French press began to eliminate most of the eggs in the basket, one after the other, leaving almost nothing to “vectorize” the transfer of cash. Almost three months into the discussion, we are stuck with antagonistic positions. The IPG representatives are basically saying: We don’t want to subordinate ourselves further to Google by adopting opaque tools that we can find elsewhere. Google retorts: We don’t want to be considered as another deep-pocketed “fund” that the French press will tap into forever without any return for our business; plus, we strongly dispute any notion of “damages” to be paid for linking to media sites. Hence the gap between the amount of cash asked by one side and what is (reluctantly) acceptable to the other.

However, I think both parties vastly underestimate what they’ll lose if they don’t settle quickly.

The government tax howitzer is loaded with two shells. The first one is a bill (drafted by none other than IPG’s counsel, see PDF here), which introduces the disingenuous notion of “ancillary copyright”. Applied to the snippets Google harvests by the thousands every day, it creates a legal basis for taxing them the hard way. The construct is adapted from the music industry, where the ancillary copyright levy ranges from 4% to 7% of the revenue generated by a sector or a company. A 7% rate applied to the revenue Google officially declares in France (€138m) would translate into less than €10m, pocket change for a company that in fact generates about €1.5 billion from its French operations.

That’s where the second shell could land. Last Friday, the Ministry of Finance released a report on tax policy for the digital economy, titled “Mission d’expertise sur la fiscalité de l’économie numérique” (PDF here). It’s a 200-page opus supported by no fewer than 600 footnotes. Its authors, Pierre Collin and Nicolas Colin, are members of the French public elite (one from the highest administrative jurisdiction, le Conseil d’Etat, the other from the equivalent of the General Accounting Office — Nicolas Colin is also a former tech entrepreneur and a writer). The Collin & Colin Report, as it’s now dubbed, is based on a set of doctrines that are also surfacing in the United States (as the report’s multiple references demonstrate).

To sum up:
– The core of the digital economy is now the huge amount of data created by users. The report distinguishes several types of data: “Collected Data” are gathered through cookies, whether the user allows it or not; such datasets include consumer behaviors, affiliations, personal information, recommendations, search patterns, purchase history, etc. “Submitted Data” are entered knowingly through search boxes, forms, timelines or feeds, as with Facebook or Twitter. Finally, “Inferred Data” are byproducts of various processing, analytics, etc.
– These troves of monetized data are created by the free “work” of users.
– The location of such data collection is independent of the place where the underlying computer code is executed: I create tangible value for Amazon or Google with clicks performed in Paris, while those clicks are processed in a server farm located in the Netherlands or in the United States — and most of the profits land in a tax shelter.
– The location of the value thus created by the “free work” of users is currently dissociated from the location of tax collection. In fact, it escapes taxation altogether.

Again, I’m quickly summing up a lengthy analysis, but the conclusion of the Collin & Colin report is obvious: sooner or later, the value created and the various taxes associated with it will have to be reconciled. For Google, the consequences would be severe: instead of the €138m of revenue officially admitted in France, the tax base would grow to €1.5bn in revenue and about €500m in profit; that could translate into €150m in corporate tax alone, instead of the mere €5.5m Google currently pays. (And I’m not counting the 20% VAT that would also apply.)
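Both the ancillary-copyright levy and the corporate-tax scenario can be checked with quick back-of-the-envelope arithmetic. A minimal sketch in Python, using only the amounts quoted in this note (the ~30% corporate rate is inferred from €150m of tax on €500m of profit):

```python
# Back-of-the-envelope check of the two tax scenarios discussed above.
# All figures in millions of euros, as quoted in this note.

declared_revenue = 138      # revenue Google officially declares in France
estimated_profit = 500      # estimated profit on ~€1.5bn of actual revenue

# Scenario 1: ancillary-copyright levy (music-industry style, 4% to 7%)
levy_low = 0.04 * declared_revenue
levy_high = 0.07 * declared_revenue
print(f"Ancillary levy on declared revenue: €{levy_low:.1f}m to €{levy_high:.1f}m")

# Scenario 2: corporate tax on the re-localized tax base
# (€150m on €500m of profit implies a ~30% corporate rate)
corporate_rate = 0.30
tax_on_profit = corporate_rate * estimated_profit
print(f"Corporate tax on restated profit: €{tax_on_profit:.0f}m")
```

Even the top-of-range levy stays under €10m, which is why the report's reconciliation logic is so much more threatening to Google than the ancillary-copyright bill.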

Of course, this intellectual construction will be extremely difficult to translate into enforceable legislation. But the French authorities intend to rally other countries and lobby the EU Commission furiously to come around to their view. It might take years, but it could dramatically impact Google’s economics in many countries.

More immediately, for Google, a parliamentary debate over the Ancillary Copyright bill will open a Pandora’s box. From Right to Left, encouraged by François Hollande’s administration, lawmakers will outbid each other in trashing the search engine and, beyond it, every large internet company.

As for members of the press, “they will lose too”, a senior official tells me. First, because of the complications of setting up the machinery the Ancillary Copyright Act would require, they would have to wait about two years before collecting any dividends. Second, governments — the present one as well as the past Sarkozy administration — have always been displeased with what they see as the French press’s “addiction to subsidies”; they intend to drastically reduce the €1.5bn in public aid. If the press gets its way through a law, according to several administration officials, the Ministry of Finance will feel relieved of its obligations towards media companies that don’t innovate much despite large influxes of public money. Conversely, if the parties are able to strike a decent business deal on their own, the French press will quickly get some “compensation” from Google and might still keep most of its taxpayer subsidies.

As for the search giant, it will indeed have to withstand a small stab but, for a while, will be spared the chronic pain of a long and costly legislative fight — and the contagion that goes with it: the French bill would be dissected by neighboring governments only too glad to adapt and improve upon it.

frederic.filloux@mondaynote.com   

Next week: When dealing with Google, better use a long spoon; Why European media should rethink their approach to the search giant.

Linking: Scraping vs. Copyright

 

Irish newspapers created quite a stir when they demanded a fee for incoming links to their content. Actually, this is a mere prelude to a much more crucial debate on copyright, robotic scraping, and the synthetic re-creation of content from scraps.

The controversy erupted on December 30th, when an attorney from the Irish law firm McGarr Solicitors exposed the case of one of its clients, the Women’s Aid organization, which was asked to pay a fee to Irish newspapers for each link it sends to them. The main quote from McGarr’s post:

They wrote to Women’s Aid, (amongst others) who became our clients when they received letters, emails and phone calls asserting that they needed to buy a licence because they had linked to articles in newspapers carrying positive stories about their fundraising efforts.
These are the prices for linking they were supplied with:

1 – 5 ......... €300.00
6 – 10 ........ €500.00
11 – 15 ....... €700.00
16 – 25 ....... €950.00
26 – 50 ....... €1,350.00
50 + .......... Negotiable

They were quite clear in their demands. They told Women’s Aid “a licence is required to link directly to an online article even without uploading any of the content directly onto your own website.”

Recap: The Newspapers’ agent demanded an annual payment from a women’s domestic violence charity because they said they owned copyright in a link to the newspapers’ public website.

Needless to say, the twittersphere, the blogosphere and, by and large, every self-proclaimed cyber moral authority reacted in anger to the Irish newspapers’ demands, which go against common sense as well as the most basic business judgment.

But on closer examination, the Irish dead-tree media (soon to be dead for good if they stay on that path) are just the tip of the iceberg for an industry facing issues that go well beyond its resistance to the culture of web links.

Try googling the following French legalese: “A défaut d’autorisation, un tel lien pourra être considéré comme constitutif du délit de contrefaçon”. (It means any unauthorized incoming link to a site can be treated as copyright infringement.) This search gets dozens of responses. OK, most come from large consumer brands (carmakers, the food industry, cosmetics) that don’t want a link attached to an unflattering term sending the reader to their product description… Imagine “lemon” linked to a car brand.

Until recently, you couldn’t find many media companies invoking such a no-link policy. Only large TV networks such as TF1 or M6 warn that any incoming link is subject to written approval.

In reality, except for obvious libel, no-link policies are rarely enforced. M6 even lost a court case against a third-party website that was deep-linking to its catch-up programs. As for the Irish newspapers, despite their dumb rate card for links, they claim to be open to “arrangements” (in the ill-chosen case of a non-profit fighting violence against women, flexibility sounds like a good idea.)

Having said that, such posturing reflects a key fact: traditional media, whether newspapers or broadcasters, send contradictory messages when it comes to links, which are simply not part of their original culture.

The position paper of the National Newspapers of Ireland association deserves a closer look (PDF here). It contains a set of concepts that resonate with the position defended by the European press in its current dispute with Google (see background story in the NYTimes). Here are a few:

– It is the view of NNI that a link to copyright material does constitute infringement of copyright, and would be so found by the Courts.
– [NNI then refers to a decision of the UK court of Appeal in a case involving Meltwater Holding BV, a company specialized in media monitoring], that upheld the findings of the High Court which findings included:
- that headlines are capable of being independent literary works and so copying just a headline can infringe copyright
- that text extracts (headline plus opening sentence plus “hit” sentence) can be substantial enough to benefit from copyright protection
- that an end user client who receives a paid-for monitoring report of search results (incorporating a headline, text extract and/or link) is very likely to infringe copyright unless they have a licence from the Newspaper Licencing Agency or directly from a publisher.
– NNI proposes that, in fact, any amendment to the existing copyright legislation with regard to deep-linking should specifically provide that deep-linking to content protected by copyright without respect for the linked website’s terms and conditions of use and without regard for the publisher’s legitimate commercial interest in protecting its own copyright is unlawful.

Let’s face it, most publishers I know would not disagree with the basis of such statements. In the many jurisdictions where a journalist’s most mundane work is protected by copyright laws, what can be seen as acceptable in terms of linking policy?

The answer seems to revolve around matters of purpose and volume.

To put it another way: if a link serves as a kind of helper or reference, publishers will likely tolerate it. (In all fairness, NNI explicitly “accepts that linking for personal use is a part of how individuals communicate online and has no issue with that” — even if the notion of “personal use” is pretty vague.) But if the purpose is commercial and the linking is aimed at generating traffic, NNI raises the red flag (even though the legal grounds are rather brittle.) Hence the particular Google case, which also carries a notion of volume, as the search engine harvests thousands of sources for its Google News service.

There is a catch. The case raised by NNI and its putative followers is weakened by a major contradiction: everywhere, Ireland included, news websites invest a great deal of resources to achieve the highest possible rank in Google News. Unless specific laws are passed (German lawmakers are working on such a bill), attorneys will have a hard time invoking copyright infringements that in fact stem from the very Search Engine Optimization tactics publishers encourage.

But there might be more at stake. For news organizations, the future carries obvious threats that require urgent consideration: in coming years, we’ll see great progress — so to speak — in automated content production systems. With or without link permissions, algorithmic content generators will be able (in fact, already are able) to scrape sites’ original articles, then aggregate and reprocess them into seemingly original content, without any mention, quotation, link, or reference of any kind. What awaits the news industry is much more complex than dealing with links from an aggregator.

It boils down to this: the legal debate on linking as copyright infringement will soon be obsolete. The real question is much more complex: should a news site protect itself from being “read” by a robot? The consequences of doing so are stark: except for a small cohort of loyal readers, the site would purely and simply vanish from cyberspace… Conversely, by staying open to search, the site exposes itself to forms of automated and stealthy depletion that will be virtually impossible to combat. Is the situation binary — allow “bots” or not — or is there middle ground? That’s a fascinating playground for lawyers and techies, for parsers of words and bits.

frederic.filloux@mondaynote.com

Mobile’s Rude Awakening

 

Mobile audiences are large and growing. Great. But their monetization is mostly a disaster. The situation will be slow to improve, but the potential is still there — if the right conditions are met.    

This year, a major European newspaper expects to make around €16m in digital advertising revenue. The business is even slightly profitable. But there is a catch: while mobile devices now provide more than 50% of its traffic, advertising revenue from smartphones and tablets will only reach €1m. For this company, like many others, mobile advertising doesn’t work. It brings in about 5% or 6% of what desktop web ads do — which themselves already suffer a roughly 15-fold revenue cut compared to print.

Call it a double whammy: publishers took a severe hit by going digital in a way that compounded the commoditization of content with an endless supply of pages. The result is economically absurd: in a “normal” world, when audiences rise, advertising reaches more people and, as a result, rates rise. At least, that was the rule in the comfy world of print. No such thing in digital media. As many news sites have experienced, despite double-digit audience growth, CPMs (cost per thousand page impressions) have actually declined in recent years. The fact is, this sector is much more sensitive to general economic conditions than to its extraordinarily large adoption. And as if that weren’t enough, publishers are now taking another blow as a growing share of their audience moves to mobile, where the money hasn’t followed… yet.

Granted, there are exceptions. Nordic media, for instance, benefit from earlier and stronger mobile adoption (think Nokia and Ericsson, even before smartphones). Supported by many paid-for services, Scandinavian media houses extract a significant amount of profit from mobile. Similarly, Facebook’s mobile operations are faring quite well. According to the latest TBG Digital report, the Click-Through Rate (CTR) on ads placed in mobile News Feeds is 23 times higher than on the desktop version (a CTR of 1.290% vs. 0.049%, respectively).

The digital mediasphere is struggling with mobile ads. In June, we went through most of the causes (see Jean-Louis’ note Mobile Advertising: The $20bn Opportunity Mirage). The problem is that there are still few signs of improvement. Inventories are growing and ad creativity remains at a low point (just look at the pixelated ads that plague the bottom of your mobile screens). As you can see below, programmatic buying is on the rise as this low-yield market remains vastly intermediated (click to enlarge):

– Too many middlemen? –

This results in the following eCPMs (effective CPM: the price advertisers actually pay per thousand impressions for a given audience), as surveyed for different mobile platforms:

iOS iPad........... $0.90-$1.10
iOS iPhone......... $0.70-$0.80
Android Tablet..... $0.60-$0.70
Android Phones..... $0.40-$0.60

Advertising-wise, mobile is mostly a dry hole.
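To see why these eCPMs add up to so little, here is a minimal sketch; the 100 million monthly impressions are a hypothetical volume chosen for illustration, not a figure from any publisher mentioned here:

```python
# What the surveyed eCPMs yield at scale.
# eCPM = revenue per 1,000 ad impressions; the volume below is hypothetical.

ecpm = {                      # midpoints of the surveyed ranges, in dollars
    "iOS iPad": 1.00,
    "iOS iPhone": 0.75,
    "Android tablet": 0.65,
    "Android phone": 0.50,
}

monthly_impressions = 100_000_000   # hypothetical: 100m ad impressions/month

for platform, price in ecpm.items():
    revenue = monthly_impressions / 1000 * price
    print(f"{platform}: ${revenue:,.0f} per month")
```

At a dollar or less per thousand, even a hundred million impressions a month only buys a publisher a low six-figure revenue line — the arithmetic behind the “dry hole”.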

OK. Enough whining. Where do we go from here? What to expect in the next 18 months? How to build upon the inherent (and many) advantages offered by the mobile space?

For rate cards, we have some good news: prices on Android and iOS are converging upward as Android demographics improve; soon, the two dominant mobile platforms will be in the higher price range. The value of ads is also likely to climb a little as screens get better and larger, and as bandwidth increases: such improvements will (or should) allow more visually attractive, more engaging ads. The ecosystem should also benefit from the trend toward more customized advertising. Ideally, promotional campaigns should be completely integrated, providing a carefully designed continuum across the three digital vectors: the desktop web, viewed at home or at the office; mobile formats for quick reading on the go; and tablet-friendly versions for slower, more engaged, lean-back consumption (reading time is five to ten times higher on an iPad than on a PC). But, again, as long as creative agencies or the media themselves do not commit adequate resources to such a virtuous chain, the value created will stay dangerously close to zero. (Those players had better hurry up, as a myriad of agile startups are getting ready to take control of this neglected potential.)

A few more reasons for being bullish on mobile. For instance, the level of personalization has nothing to do with what we see on the PC: a smartphone is not shared; it’s personal; and it’s the best vector for an intimate environment in which to build one’s own social interaction system, transactional tools, entertainment selections (games, movies, books, TV series), etc. Mobile devices come with other high-potential features such as geolocation and the ability to scan a bar code — all favoring impulse buying. (This has happened to me more than once: in a Paris bookstore, if the only copy left of a book I want is worn out, or if the salesperson seems annoyed by my mere presence, I quickly scan the bar code and order it from Amazon on the spot, right from the store. Apparently, I’m not the only one: about 20% of mobile users admit to having scanned a bar code or taken a picture of a product in a store.) And soon, these features will be supplemented by electronic wallet functions. Think about it: which marketer wouldn’t dream of having access to such capabilities?

frederic.filloux@mondaynote.com

Google’s looming hegemony

 

If we factor in Google’s geospatial applications, its unique data-processing infrastructure, Android tracking, and more, we are looking at the potential for absolute power over the economy.

Large utility companies worry about Google. Why? Unlike those who mock Google as a “one-trick pony” with 99% of its revenue coming from AdWords, they connect the dots. Right before our eyes, the search giant is weaving a web of services and applications aimed at collecting more and more data about everyone and every activity. This accumulation of exabytes (and the ability to process such almost inconceivable volumes) is bound to impact sectors ranging from power generation to transportation and telecommunications.

Consider the following trends. At every level, Western countries are crumbling under their debt load. Nations, states, counties, and municipalities are becoming unable to support the investment necessary to modernize — sometimes even to maintain — critical infrastructure. Globally, tax-raising capabilities are diminishing.

In a report about infrastructure through 2030 (500-page PDF here), the OECD makes the following predictions (emphasis mine):

Through to 2030, annual infrastructure investment requirements for electricity, road and rail transport, telecommunications and water are likely to average around 3.5% of world gross domestic product (GDP).

For OECD countries as a whole, investment requirements in electricity transmission and distribution are expected to more than double through to 2025/30, in road construction almost to double, and to increase by almost 50% in the water supply and treatment sector. (…)

At present, governments are not well placed to meet these growing, increasingly complex challenges. The traditional sources of finance, i.e. government budgets, will come under significant pressure over the coming decades in most OECD countries – due to aging populations, growing demands for social expenditures, security, etc. – and so too will their financing through general and local taxation, as electorates become increasingly reluctant to pay higher taxes.

What’s the solution? The private sector will play a growing role through Public-Private Partnerships (PPPs). In these arrangements, a private company (or, more likely, a consortium) builds a bridge, a motorway, or a railroad for a city, region or state, at no expense to the taxpayer, then reimburses itself from the project’s cash flow. Examples abound. In France, the elegant €320m ($413m) Millau viaduct was built — and financed — by Eiffage, a construction group with €14 billion in revenue. In exchange for financing the viaduct, Eiffage was granted a 78-year toll concession with an expected internal rate of return ranging from 9.2% to 17.3%. Across the world, a growing number of projects are built using this type of mechanism.

How can a company commit hundreds of millions of euros, dollars, or pounds with an acceptable level of risk over several decades? The answer lies in data analysis and predictive models. Companies engineer credible cash-flow projections using reams of data on operations, usage patterns, and component life cycles.
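As a toy illustration of the kind of cash-flow projection such a consortium would run, here is a sketch that discounts a toll concession's yearly revenue to a net present value. The €320m outlay and 78-year term echo the Millau case above, but the €40m yearly toll and the 9% discount rate are hypothetical assumptions, not actual Eiffage figures:

```python
# Toy net-present-value model of a toll concession (hypothetical figures).

def npv(rate, cashflows):
    """Discount a list of yearly cash flows (year 0 first) at a given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

upfront_cost = -320        # construction outlay in €m, paid in year 0
annual_toll = 40           # hypothetical yearly toll revenue, in €m
years = 78                 # concession length, as in the Millau case

cashflows = [upfront_cost] + [annual_toll] * years
print(f"NPV at a 9% discount rate: €{npv(0.09, cashflows):.0f}m")
```

The project only makes sense if that NPV is positive at the investor's hurdle rate, which is exactly why credible, data-backed traffic and revenue projections are the heart of any PPP bid.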

What does all this have to do with Google?

Take a transportation company building and managing networks of buses, subways or commuter trains in large metropolitan areas. Over the years, analysis of tickets and passes will yield tons of data on customer flows, timings, train loads, etc. This is of the essence when assessing a market’s potential for a new project.

Now consider how Google aggregates the data it collects today — and what it will collect in the future. It’s a known fact that cellphones send geolocation data back to Mountain View (or Cupertino). Bouncing from one cell tower to another, or catching the signal of a geolocated wifi transmitter, Android phone users can be tracked in real time even if the GPS function is turned off. Overlay this (compounded and anonymized) dataset on information-rich maps, including indoor ones, and you get high-definition profiles of who goes or stays where, at any time.

Let’s push it a bit further. Imagine a big city such as London, operating 500,000 security cameras — the bulk of the 1.85 million CCTVs deployed in the UK, one for every 32 citizens. Some 20,000 of them are in the subway system. The London Tube is the perfect candidate for partial or total privatization, as it bleeds money and screams for renovation. In fact, as several people working at the intersection of geo applications and big data projects told me, Google would be well placed to provide the most helpful datasets. In addition to the circulation data coming from cellphones, Google could use facial recognition technology. As these algorithms are already able to tell a woman from a man, they will soon be able to identify (anonymously) ethnicities, ages, etc. Am I exaggerating? Probably not. Mercedes-Benz already has a database of 1.5 million visual representations of pedestrians to feed into the software of its future self-driving cars. This is a type of application in which, by the way, Google possesses a strong lead, with its fleets of driverless Priuses crisscrossing Northern California and Nevada.

Coming back to the London Tube and its unhappy travelers: we have traffic data, to some degree broken down into demographic clusters. Why not then add shopping data (also geo-tagged) derived from search and ad patterns, or Street View-related information? Why not also supplement all of the above with smart electrical grid analysis that could refine predictive models even further (every fraction of a percentage point counts…)?

The value of such models is much greater than the sum of their parts. While public transportation operators and utility companies are already good at collecting and analyzing their own data, Google will soon be in the best position to provide powerful predictive models that aggregate and connect many layers of information. In addition, its unparalleled infrastructure and proprietary algorithms give it a unique ability to process these ever-growing datasets. That’s why many large companies around the world are concerned about Google’s ability to insert itself into their business.

frederic.filloux@mondaynote.com

 

Schibsted’s extraordinary click machines

 

The Nordic media giant wants to be the worldwide #1 in online classifieds by replicating its high-margin business one market after another, with great discipline.

It all starts in 2005 with a PowerPoint presentation in Paris. At the time, Schibsted ASA, the Norwegian media group, is busy deploying its free newspapers in Switzerland, France and Spain. Schibsted wants its French partner Ouest-France — the largest regional newspaper group — to co-invest in a weird concept: free online classifieds. As always with the Scandinavians, the deck is built around a small number of key points. To them, three symptoms attest to the maturity of a market’s online classifieds business: (a) the number one player in the field systematically ranks among the top 10 websites, regardless of category; (b) it is always much bigger than the number two; (c) it reaps most of the profits in the sector. “Look at the situation here in France,” the Norwegians say. “The first classifieds site ranks far down in the Nielsen rankings. The market is up for grabs, and we intend to get it.” The Oslo and Stockholm executives already had an impressive track record: in 2000, they launched Finn.no in Norway and, in 2003, they acquired Blocket.se in Sweden. Both became incredible cash machines for the group, with margins above 50% and unabated growth. Ouest-France eventually agreed to take a 50% stake in the new venture. In November 2010, it sold that stake back to Schibsted at a €400m valuation. (As we’ll see in a moment, the classifieds site Le Bon Coin is now worth more than twice that number.)

November 2012. I’m sitting in the office of Olivier Aizac, CEO of Le Bon Coin, the French iteration of Schibsted’s free classifieds concept. The office space is dense and scattered over several floors of a building near the Paris Bourse. Since my last visit in 2009 (see a previous Monday Note, Learning from free classifieds), the startup has grown from a staff of 15 to 150 people, and Aizac tells me he plans to hire 70 more in 2013. Crisis or not, the business is booming.

A few metrics: according to Nielsen, LeBonCoin.fr (French for “the right spot”) ranks #9 in France with 17m monthly unique users. With more than 6 billion page views per month, it even ranks #3, behind Facebook and Google. Revenue-wise, Le Bon Coin might hit the €100m mark this year, with a profit margin slightly above… 70%. For the third quarter of this year, the business grew by 50% versus a year ago.

In terms of competition, it dominates every segment: cars, real estate (twice the size of Axel Springer’s SeLoger.com) and jobs, with about 60,000 classifieds, roughly five times the inventory of a good paid-for job board. (LeBonCoin is not positioned in the upper segment, though; it mostly targets regional small and medium businesses.)

Le Bon Coin’s revenue stream is made up of three parts: premium services (you pay to add a picture, get a better ranking, or track your ad); fees from the growing number of professionals who flock to LBC (many car dealerships put their entire inventory there); and advertising, for which the primary sectors are banking and insurance, services such as mobile phone carriers or pay-TV, and automobile. Although details are scarce, LBC seems to have given up the usual banner sales, focusing instead on segmented yearly deals: a brand will target a specific demographic and LBC will deliver, for half a million or a million euros per annum.

One preconceived idea depicts Le Bon Coin as sitting at the cheaper end of the consumer market. Wrong. In the car segment, its most active advertiser is Audi, for which LBC provides tailor-made promotions. (Strangely enough, Renault is much slower to catch the wave.) “We are able to serve any type of market”, says Olivier Aizac, who shows me an ad peddling a €1.4m Bugatti, and another for the brand new low-cost Peugeot 301, not yet available in dealerships but offered on LBC for €15,000. Similarly, LBC is the place to go to rent a villa on the Côte d’Azur or a chalet for the ski season. With more than 21 million ads live at any given moment, you can find pretty much anything there.

Now, let’s zoom out and look at a broader picture. How far can Le Bon Coin go? And how will its cluster of free classifieds impact Schibsted’s future?

Today, free online classifieds account for about 25% of Schibsted’s revenue (the group expects about 15bn Norwegian kroner, or €2bn, this year), but they generate 47% of the group’s Ebitda (2.15bn NOK, €300m). All online activities now represent 39% of the revenue and 62% of the Ebitda.
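A quick back-of-the-envelope check — a sketch using the rounded figures above, nothing more — shows the Ebitda margin those percentages imply for the classifieds unit, consistent with the “margins above 50%” of the Nordic pioneers:

```python
# Implied Ebitda margin of Schibsted's classifieds unit, derived from
# the rounded figures quoted above (all amounts in billions of NOK).
group_revenue = 15.0        # group revenue, ~NOK 15bn (~€2bn)
classifieds_share = 0.25    # classifieds ≈ 25% of group revenue
classifieds_ebitda = 2.15   # ≈ 47% of group Ebitda

classifieds_revenue = group_revenue * classifieds_share  # ≈ NOK 3.75bn
margin = classifieds_ebitda / classifieds_revenue

print(f"Implied classifieds Ebitda margin: {margin:.0%}")  # ≈ 57%
```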

The whole strategy can be summed up in two charts. The first shows the global deployment of the free classifieds business (click to enlarge):

Through acquisitions, joint ventures or ex nihilo creations, Schibsted now operates more than 20 franchises. Their development process is highly standardized. Growth phases have been codified in great detail, managers often gather to compare notes, and the Oslo mothership watches everything, providing KPIs, guidelines, etc. The result is the second chart, showing the spread of deployment phases. More than half of the portfolio is still in its infancy, but most properties are likely to follow the path to success:

Source: Schibsted Financial Statements

This global vision, combined with what is seen as near-perfect execution, explains why the financial community is betting so much on Schibsted’s classifieds business.

When assessing the potential of each local brand, analysts project the performance of the best and most mature properties (the Nordic ones) onto the new ones. As an example, see below the number of visits per capita and per month, from web and mobile, since product launch:

Source: Danske Markets Equities

For Le Bon Coin’s future, this draws a glowing picture: according to Danske Markets Equities, the Norwegian Finn.no today generates ten times more revenue per page view than LBC, and twenty times more when measured by average revenue per user (ARPU). The investment firm believes that Le Bon Coin’s revenue can reach €500m in 2015, while retaining a 65% margin. (As noted by its CEO, Le Bon Coin has yet to tap into the trove of data accumulated over the last six years, which could yield highly valuable consumer profiling information.)
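The logic behind such projections can be sketched as follows. The multiples are those quoted from Danske; the model itself is only an illustration of the reasoning, not the firm’s actual methodology:

```python
# Illustration of the analysts' upside logic for Le Bon Coin, using the
# multiples quoted above; a sketch, not Danske's actual model.
lbc_revenue = 100e6                  # € — LBC may hit €100m this year
finn_multiple_per_pageview = 10      # Finn.no earns ~10x more per page view

# Theoretical ceiling if LBC monetized each page view like Finn.no,
# at constant traffic:
ceiling = lbc_revenue * finn_multiple_per_pageview   # €1bn

# Danske's 2015 forecast of €500m thus captures only half of that ceiling:
forecast_2015 = 500e6
print(f"Forecast / ceiling: {forecast_2015 / ceiling:.0%}")  # 50%
```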

When translated into valuation projections, the performance of Schibsted’s classifieds businesses far exceeds the weight of its traditional media properties (print and online newspapers). The sum-of-the-parts valuations drawn up by several private equity firms put the classifieds business at more than 80% of the total value of this 173-year-old group.

frederic.filloux@mondaynote.com
Disclosure: I worked for Schibsted for nine years altogether between 2001 and 2010; six years indirectly as the editor of 20 minutes and three years afterwards, in a business development unit attached to the international division.
——- 

The Release Windows Archaism

 

The television and media industries are stuck in a wasteful rear-guard fight to preserve an analog-era relic: the Release Windows system. Designed to avoid destructive competition among media, it ends up boosting piracy while frustrating honest viewers willing to pay.

A couple of months ago, I purchased the first season of the TV series Homeland from the iTunes Store. I paid $32 for 12 episodes that all landed seamlessly on my iPad. I gulped them down in a few days and was left in a state of withdrawal. Then, on September 30th, when season 2 started, I would have had no alternative but to download free but illegal torrent files. Hundreds of thousands of people anxious to find out the whereabouts of the Marine turncoat pursued by the bipolar CIA operative were in the same quandary (go to the dedicated Guardian blog for more on the series).

In the process, the three losers are:
– The Fox 21 production company that carries the risk of putting the show together (which costs about $36m per season, $3m per episode)
– Apple, which takes its usual cut. (The net loss for both will actually be $64 since the show has been signed up for a third season by the paid-for Showtime channel, and I wonder if I’ll have the patience to wait months for its availability on iTunes.)
– And me, as I would have to go through the painstaking task of finding the right torrent file, hoping it is not bogus, corrupted or, worse, infected by a virus.

Here, we put our finger on the stupidity of the Release Windows system, a relic of the VHS era. To make a long story short, the idea goes back to the ’80s, when the industry devised a system to prevent different media — at the time, movie theaters, TV networks, cable TV and VHS — from cannibalizing each other. In the case of a motion picture, the Release Windows mechanism called for a four-month delay before release on DVD, additional months before release on pay-TV and video-on-demand, and a couple of years before showing up on mainstream broadcast networks (where the film is heavily edited, laced with commercials, dubbed, etc.)

The Western world was not the only one to adopt the Release Window system. At the last Forum d’Avignon cultural event a couple of weeks ago, Ernst & Young presented a survey titled  Mastering tempo: creating long-term value amidst accelerating demand (PDF in English here and in French here).

The graph below shows the state of the windows mechanism in various countries:

Europe should be happy when comparing its situation to India’s. There, it takes half a year for a movie to reach DVD, while the box office contributes 75% of a film’s revenue. Ernst & Young expects this number to drop only slightly, to 69%, by 2015 (by comparison, the rate is only 28% in the UK). Even though things are changing fast in India, internet penetration is a mere 11.4% of the population and moviegoing is still a great popular entertainment occasion.

In the United States, by comparison, despite broad adoption of cable TV, Blu-ray and VOD, and a 78% internet penetration rate (84% in the UK and higher in Northern Europe), the Release Windows system shows little change: according to the same E&Y survey, the theatrical-to-DVD window went from 166 days in 2000 to 125 days in 2011:

Does it make sense to preserve a system roughly comparable to India’s in the US or Europe, where the connected digital equipment rate is seven times higher?

Motion pictures should probably be granted a short head start in the release process. But it should coincide with the theatrical lifetime of a production, which is about 3-4 weeks. Even better, it should be adjusted to a film’s box-office life — if a movie performs so well that people keep flocking to theaters, DVDs should wait. Conversely, if the movie bombs, it should be given a chance to resurrect online, quickly, sustained by a cheaper but better targeted marketing campaign mostly powered by social networks.

Similarly, movie releases should be simultaneous and global. I see no reason why Apple or Microsoft are able to make their products available worldwide almost simultaneously while a moviegoer has to wait three weeks here or two months there. As for the DVD release window, it should coincide with the complete availability of a movie for all possible audiences, worldwide and on every medium. Why? Because the DVD release systematically opens the piracy floodgates — while the floodgates of legitimate purchase on Netflix, Amazon Prime or iTunes stay shut.

As for TV shows such as Homeland and other hits, there is no justification whatsoever for preserving this calendar archaism. They should be made universally available from the day they air on TV, period. Otherwise customers will vote with their mouse and find the right file-sharing sites anyway.

The “Industry” fails to assess three shifts here.

–The first one is the globalization of audiences. Worldwide, about 360m people are native English speakers; for an additional 375m, it is a second language; and 750m more picked up English as a foreign language at school. That’s about 1.5 billion people likely to be interested in English-speaking culture. As a result, a growing proportion of teenagers watch their pirated series without subtitles — or scruples.

–Then, the “spread factor”: once a show becomes a hit in the United States, it is widely commented on in Europe and elsewhere, not only because a large number of people speak serviceable English, but also because many national websites propagate the US buzz. Hollywood execs would be surprised to see how well young (potential) audiences abroad know their productions months before seeing them.

–And finally, technology is definitely on the side of the foreign consumer: better connectivity (expect 5 minutes to download an episode), high-definition image, great sound… and mobility (just take a high-speed train in Europe and see how many people are watching videos on their tablets).

To conclude, let’s have a quick look at the numbers. Say a full season of Homeland costs $40m to produce. Let’s assume the first release is supposed to cover 40% of the costs, that is $16m. Homeland is said to gather 2 million viewers. Each viewer will therefore contribute $8 to the program’s economics. Compare that to what I paid through iTunes: of my $32, probably about half goes to the producers; or compare it to the DVD, initially sold at $60 for the season, now discounted to $20. You get my point. Even if the producer nets on average $15 per online viewer, it would need only 1.6 million paid-for viewers worldwide to break even (much less when counting foreign syndication). Even taking into account unavoidable piracy (which also acts as a powerful promotional channel), with two billion people connected to the internet outside the US, the math heavily favors the end of the counter-productive and honest-viewer-hostile Release Windows archaism.
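Laid out explicitly, that arithmetic looks like this — the $15 net per viewer and the 1.6 million paying viewers being the column’s own assumptions, not reported figures:

```python
# The break-even arithmetic above, step by step (figures as quoted).
season_cost = 40e6          # $ — assumed cost of a full Homeland season
first_release_share = 0.40  # the first release is assumed to cover 40%
to_recoup = season_cost * first_release_share   # $16m
per_viewer = to_recoup / 2e6                    # $8 per US broadcast viewer

# Worldwide online scenario: $15 net per viewer, 1.6m paying viewers.
online_take = 15 * 1.6e6                        # $24m

print(f"Contribution per broadcast viewer: ${per_viewer:.0f}")
print(f"Online take of ${online_take/1e6:.0f}m covers the "
      f"${to_recoup/1e6:.0f}m target: {online_take >= to_recoup}")
```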

–frederic.filloux@mondaynote.com

It’s the Competitive Spirit, Stupid

 

Legacy media suffer from a deadly DNA mutation: they’ve lost their appetite for competition; they no longer have the will to fight the hordes of new, hungry mutants emerging from the digital world.

For this week’s column, my initial idea was to write about Obama’s high tech campaign. As in 2008, his digital team once again raised the bar on the use of data mining, micro-targeting, behavioral analysis, etc. As Barack Obama’s strategist David Axelrod suggested just a year ago in Bloomberg BusinessWeek, compared to what they were working on, the 2008 campaign technology looked prehistoric. Without a doubt, mastering the most sophisticated practices played a crucial role in Obama’s November 6th victory.

As I researched the subject, I decided against writing about it. This early after the election, it would have been difficult to produce more than a mere update to my August 2008 story, Learning from the Obama Internet Machine. But, OK. For those of you interested in the matter, here are a couple of resources I found this week: an interesting book by Sasha Issenberg, The Victory Lab: The Secret Science of Winning Campaigns, definitely worth a read; or previously unknown tidbits in this Stanford lecture by Dan Siroker, an engineer who left Google to join the Obama campaign in 2008. (You can also feast on a Google search with terms like “obama campaign + data mining + microtargeting”.)

I switched subjects because something jumped out at me: the contrast between a modern election campaign and the way traditional media cover it. If it could be summed up in a simplistic (and, sorry, too obvious) graph, it would look like this:

The 2012 election campaign carries all the ingredients of the fiercest of competitions: concentrated in a short time span; fueled by incredible amounts of cash (and thus able to get the best talent and technology money can buy); a workforce that is, by construction, the most motivated any manager can dream of — a dedicated staff led by charismatic stars of the trade; and a binary outcome with a precise date and time (the first Tuesday of November, every four years). As if this were not enough, the two camps actually compete for a relatively small part of the electorate, the single-digit percentage that will swing one way or the other.

At the other end of the spectrum, you have traditional media. Without falling into caricature, we can settle for the following descriptors: a significant pool of (aging) talent; a great sense of entitlement; a remote connection to the underlying economics of the business; a remarkable tolerance for mediocrity (unlike, say, pilots or neurosurgeons); and, stemming from said tolerance, a symmetrical no-reward policy — perpetuated by unions and guilds that hammered nails into the media’s coffin.

My point: This low level of competitive metabolism has had a direct and negative impact on the economic performance of legacy media.

In countries, regions or segments where newsrooms compete the most on a daily basis (in digital or print), business is doing just fine.

That is the case in Scandinavia, which enjoys good, assertive journalism, with every outlet trying to beat the others in every possible way: investigation, access to sources, creative treatment, real-time coverage, innovations in digital platforms… The UK press is also intensely competitive — sometimes for the worse, as shown in the News Corp phone hacking scandal. To some extent, German, Italian and Spanish media are also fighting for the news.

At the other end of the spectrum, the French press has mostly given up competing. The market is more or less divided up along readers’ inclinations. The biggest difference manifests itself when a source decides to favor one outlet over the others. Reminding someone of the importance of competing, of sometimes taking a piece of news from someone else’s plate, tends to be seen as ill-mannered, simply not done. The result is an accelerating drop in newspaper sales. Strangely enough, Nordic media will cooperate without hesitation when it comes to sharing industrial resources such as printing plants and distribution channels, while being at each other’s throats when it comes to news gathering. By contrast, the French will fight over printing resources, but will cooperate when it’s time to get subsidies from the government or to fight Google.

Digital players do not suffer from such a cumbersome legacy. Building their organizations from scratch, they hired younger staff and set up highly motivated newsrooms. Pure players such as Politico, Business Insider, TechCrunch and plenty of others are fighting on their beats, sometimes against smaller but sharper blogs. Their journalistic performance (although uneven) translates into measurable audience bursts that turn into advertising revenue.

Financial news also falls into that category. Bloomberg, Dow Jones and Reuters are fighting for their market-mover status as well as for the quality — and usefulness — of their reporting; subscriptions to their services depend on such performance. Hence the emergence of a “quantifiable motivation” for the staff. At Bloomberg — one of the most aggressive news machines in the world — reporters are given financial incentives for their general performance and rewarded for exclusive information. Salaries and bonuses are high, and so is the workload. But CVs keep pouring in — a meaningful indicator.

Digital newsrooms are much more inclined toward performance measurement than old ones. This should be seen as an advantage. As gross as it might sound to many journalists, media should seize the opportunity that comes with modernizing their publishing tools to revise their compensation policies. The main questions should be: “Are we doing better than the competition? Does X or Y contribute to our competitive edge?” Aside from the editor’s judgment, new metrics will help: ranking in search engines and aggregators; tweets and Facebook Likes; appearances on TV or radio shows; syndication (i.e. paid-for republication elsewhere)… All are credible indicators. No one should be afraid to use them to reward talent and commitment.

It’s high time to reshuffle the nucleotides and splice in competitive DNA strands; they do contribute to economic performance.

frederic.filloux@mondaynote.com

 

The press, Google, its algorithm, their scale

 

In their fight against Google, traditional media firmly believe the search engine needs them to refine (and monetize) its algorithm. Let’s explore the facts.

The European press has gotten itself into a bitter battle against Google. In a nutshell, legacy media want money from the search engine: first, for the snippets of news it grabs and feeds into its Google News service; second, on a broader basis, for all the referencing Google builds with news media material. In Germany, the Bundestag is working on a bill to force all news aggregators to pay a toll; in France, the executive is pushing for a negotiated solution before year-end. Italy is more or less following the same path. (For a detailed and balanced background, see this Eric Pfanner story in the International Herald Tribune.)

In the controversy, one argument keeps rearing its head. According to the proponents of a “Google Tax”, media content greatly improves the contextualization of advertising. Therefore, the search giant ought to pay for such value. Financially speaking, the claim goes, without media articles Google would not perform as well as it does — hence the European media’s hunt for a piece of the pie.

Last week, rooting for facts, I spoke with several people possessing deep knowledge of Google’s inner mechanics, ranging from search engine marketing specialists to a Stanford computer science professor who taught Larry Page and Sergey Brin back in the mid-’90s.

First of all, pretending to know Google is indeed… pretentious. In order to outwit both competitors and manipulators (a.k.a. Search Engine Optimization gurus), the search engine keeps tweaking its secret sauce. For the August-September period alone, Google made no fewer than 65 alterations to its algorithm (list here). And that’s only the known part of the changes; in fact, Google allocates large resources to counter people who try to game its algorithm with an endless stream of tricks.

Maintaining such a moving target also preserves Google’s lead: along with its distributed computing framework (called MapReduce), its proprietary data storage system BigTable and its immense infrastructure, Google’s PageRank algorithm is at the core of the search engine’s competitive edge. Allowing anyone to catch up, even a little, is strategically inconceivable.

Coming back to the press’s issues, let’s consider both quantitative and qualitative approaches. In the Google universe — currently about 40 billion indexed pages — content coming from media amounts to a small fraction, said to be a low single-digit percentage. To put things in perspective, on average, an online newspaper adds between 20,000 and 100,000 new URLs per year. Collectively, the scale roughly looks like millions of news articles versus a web growing by billions of pages each year.

Now, let’s consider the nature of searches. Using Google Trends for the last three months, the charts below rank the most searched terms in the United States, France and Germany (click to enlarge):


Do the test yourself by going to the actual page: you’ll notice that, except for large, dominant American news topics (“Hurricane Sandy” or “presidential debate”), very few search results bring back content from mainstream media. As Google rewards freshness of content — as well as sharp SEO tactics — “web native” media and specialized web sites perform much better than their elder “migrants”, that is, the web versions of traditional media.

What about monetization? How does media content contribute to Google’s bottom line? Again, let’s look at independent rankings of the most expensive keywords, those that can bring $50 per click to Google through its opaque pay-per-click bidding system. For instance, here is a recent Wordstream ranking (example keywords in parentheses):

Insurance (“buy car insurance online” and “auto insurance price quotes”)
Loans (“consolidate graduate student loans” and “cheapest homeowner loans”)
Mortgage (“refinanced second mortgages” and “remortgage with bad credit”)
Attorney (“personal injury attorney” and “dui defense attorney”)
Credit (“home equity line of credit” and “bad credit home buyer”)
Lawyer (“personal injury lawyer”, “criminal defense lawyer”)
Donate (“car donation centers”, “donating a used car”)
Degree (“criminal justice degrees online”, “psychology bachelors degree online”)
Hosting (“hosting ms exchange”, “managed web hosting solution”)
Claim (“personal injury claim”, “accident claims no win no fee”)
Conference Call (“best conference call service”, “conference calls toll free”)
Trading (“cheap online trading”, “stock trades online”)
Software (“crm software programs”, “help desk software cheap”)
Recovery (“raid server data recovery”, “hard drive recovery laptop”)
Transfer (“zero apr balance transfer”, “credit card balance transfer zero interest”)
Gas/Electricity (“business electricity price comparison”, “switch gas and electricity suppliers”)
Classes (“criminal justice online classes”, “online classes business administration”)
Rehab (“alcohol rehab centers”, “crack rehab centers”)
Treatment (“mesothelioma treatment options”, “drug treatment centers”)
Cord Blood (“cordblood bank”, “store umbilical cord blood”)

(In my research, several Search Engine Marketing specialists came up with similar results.)

You see where I’m heading. By construction, traditional media do not bring money to the categories above. In addition, as an insider told me this week, no one is putting ads against keywords such as “war in Syria” or against the 3.2 billion results of a “Hurricane Sandy” query. Indeed, on the curve of ad-word value, news slides down into the long tail.

Then why is Google so interested in news content? Why has it been maintaining Google News for the past ten years, in so many languages, without making a dime from it (there are no ads on the service)?

The answer pertains to Google’s general internet “footprint”. Being number one in search is fine, but not sufficient. In its goal to own the semantic universe, taking over “territories” is critical. In that context, a “territory” can be a semantic environment seen as critical to everyone’s daily life, or one with high monetization potential.

Here are two recent examples of monetization potential as viewed by Google: flights and insurance. Having (easily) determined that flight schedules were among the most sought-after pieces of information on the web, Google dipped into its deep cash reserves and, for $700m, acquired ITA Software in July 2010. ITA was the world’s largest airline search company, powering sites such as Expedia and TripAdvisor. Unsurprisingly, the search giant launched Google Flight Search in September 2011.

In passing, Google showed its ability to kill any price comparator of its choosing. As for insurance, the most expensive keyword, Google recently launched its own insurance comparison service in the United Kingdom… just after launching a similar system for credit cards and bank services.

Over the last ten years, Google has become the search tool of choice for patents and, with Google Scholar, for scientific papers. This came after shopping, books, Hotel Finder, etc.

Aside from this strategy of making Google the main — if not the only — entry point to the web, the search engine is working hard on its next transition: going from a search engine to a knowledge engine.

Early this year, Google created the Knowledge Graph, a system that connects search terms to what are known as entities (names, places, events, things) — millions of them. This is Google’s next quantum leap. Again, you might think news-related corpuses would constitute the most abundant trove of information to be fed into the Knowledge Graph. Unfortunately, this is not the case. At the core of the Knowledge Graph resides Metaweb, acquired by Google in July 2010. One of its key assets was Freebase, a database of 12 million entities (now 23m). This database is fed by sources (listed here) ranging from the International Federation of Association Football (FIFA) to the Library of Congress, Eurostat and the India Times. (The only French source on the list is the movie database AlloCiné.)

Out of about 230 sources, fewer than 10 are media outlets. Why? Again, volume and, perhaps more important, the ability to properly structure data. While the New York Times maintains about 14,000 topics, most newspapers have only hundreds of those, and a similar number of named entities in their databases. (By comparison, web-native media are much more skilled at indexation: the Huffington Post assigns between 12 and 20 keywords to each story.) By building upon acquisitions such as Metaweb’s Freebase, Google now has about half a billion entries of all kinds.

Legacy media must deal with a harsh reality: despite their role in promoting and defending democracy, in lifting the veil on things that matter to society, or in propagating new ideas, when it comes to data, news media compete in the junior leagues. And for Google, the most data-driven company in the world, having newspaper articles in its search system is no more than a small, cool extra.

frederic.filloux@mondaynote.com

The New York Times’ shifting model

 

At the NYT Company, in ten years, the share of quarterly revenue attributed to circulation grew from less than 30% to more than half today… 

The stock market brutally punished the New York Times for its worse-than-expected quarterly earnings. Are financial analysts completely blind? How come they didn’t foresee the decline in advertising revenue that affects the NYTimes — and every publication in the world outside the BRIC zone? This is incomprehensible. A simple look at the overall ad sector (see the previous column featuring the Internet Split) makes one realize how much worse the New York Times numbers could have been.

In any event, the demise of the ad market will accelerate the transformation of the Times. Here are the highlights for the third quarter of 2012 that particularly disappointed Wall Street (comparisons are for Q3 2012 vs. Q3 2011, full earnings release here):
– Total revenue decreases by 0.6%
– Advertising revenue drops by 9% across the board. Print advertising takes an 11% dive and digital ad revenue is off by 2.2% (for the second quarter in a row).
– Costs are not contained enough (again, according to analysts) and rise by 2.3%, mostly because of benefits, performance-based and stock-based compensation and… printing costs.

On Thursday, Wall Street dumped the stock, causing its biggest drop since 1980: it plunged by 22% to $8.31. Since the beginning of the year, NYT shares are up by about 6% vs. 12% for the S&P index.

On the bright side: Circulation revenue grew by 7% vs. last year. This is mostly due to the rise in digital subscribers. (Print gains reflect a recent price hike). Paid subs for the NYTimes and the International Herald Tribune totaled 566,000 for the 3rd quarter, a remarkable growth of 11% vs. the 2nd quarter of 2012 (+57,000 subs.)

In hard dollars, though, circulation gains no longer offset the loss in advertising. For the first nine months of 2012, revenue from circulation grew by $55m to $695m, against a $47m loss in ads. But over the last three months, the NYT lost more in ads (-$18m) than it added in circulation (+$17m). In the earnings call with analysts, CFO Jim Follo pointed to a difficulty in his company’s business model: when advertising revenue goes down, 90% of the decrease translates into a margin loss, whereas circulation revenue gains generate additional costs.
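That asymmetry can be illustrated numerically. The 90% flow-through comes from Follo’s remarks; the 30% cost ratio on circulation gains is a purely hypothetical figure, used only to make the mechanism visible:

```python
# Illustration of the margin asymmetry the CFO describes.
ad_loss = 18e6    # $ — advertising lost over the last three months
circ_gain = 17e6  # $ — circulation gained over the same period

ad_flow_through = 0.90   # ~90% of a lost ad dollar is lost margin (per Follo)
circ_cost_ratio = 0.30   # hypothetical: costs absorb 30% of new circ revenue

net_margin_change = (1 - circ_cost_ratio) * circ_gain - ad_flow_through * ad_loss
print(f"Net margin change: ${net_margin_change/1e6:+.1f}m")
```

Even with revenue roughly flat ($17m gained vs. $18m lost), the margin impact is clearly negative — which is exactly Follo’s point.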

The last 10 years show an interesting evolution in the advertising vs. circulation ratio. Between 2001 and 2011, revenue for the New York Times Media Group (primarily the newspapers and digital operations) fell by 30% in inflation-adjusted dollars. Advertising revenue decreased by 45% while circulation revenue grew by 9% (and the “Other” category was slashed by 51%).

As shown in the table below, the New York Times’ revenue stream now relies mostly on circulation: 55% today vs. 29% in 2001. As digital subscriptions gain traction and advertising plummets, the trend accelerates when comparing the full 2011 year with the 3rd quarter of 2012:

              2001   2011  Q3-2012 
Advertising    62%    49%   39%
Circulation    29%    45%   55% 
Others          9%     6%    6%
Source: NYT Co. Financial statements

This evolution shows the strategic importance of the digital subscription system set up by the NY Times 15 months ago. So far, it works fine (see also a previous column, NYT Digital Lessons). Thanks to its paywall, the NYT collects an average of 4,750 new subscribers each week. Even the Boston Globe grew: +13% digital subscribers (currently 3,000) for this quarter when compared to the previous one.

The system has yet to unleash its full potential. For now, the NYTimes maintains a great deal of paywall porosity. Unlike FT.com, there is no mandatory registration, and it is actually pretty easy to circumvent the limit of 10 free articles per month: simply use different computers and devices. But the New York Times execs in charge of the system are in no rush to tighten the reins. They know mandatory registration would boost the conversion of registered users into fully-paid-for ones, but it would be costly in terms of traffic.

Audience-wise, the paywall’s impact is uncertain. Times insiders say it has had no effect. But according to GigaOM’s Mathew Ingram (who quotes ComScore data), unique visitors have fallen by 20% since March 2011 (from 34m to 27m) and page views by 15%. Ingram suggests this trend could contribute to the erosion in ad revenue (although there is plenty of evidence that CPMs — cost per thousand page views — are actually higher behind a paywall).

One sure thing: before adding further paywall restrictions, The New York Times wants to find the perfect formula. On the Q3 earnings call, Denise Warren, who oversees the revenue side, explicitly referred to the topic: “We are exploring entry level opportunities as well as higher-ends as well”. In other words, her team is testing all possible prices and packages; current offers are likely to be sliced into multiple segments.

Overall, NYT’s management remains bearish on advertising for at least the next quarter. Jim Follo and Denise Warren invoked business leaders’ evaporating trust in the economy and also mentioned the oversupply of digital inventory (too many page views for sale, everywhere). They also pointed a finger at the shift in buying practices, with what they call “programmatic buying channels” (ad exchanges, real-time bidding) taking over the market and pushing prices further down. One exception to this deflationary spiral is the luxury segment, stronger than ever and well tapped thanks to The New York Times’ ability to provide customized campaigns.

Future Times revenue streams also lie in its ability to expand abroad. Last summer, NYT.com launched its Chinese version (under Beijing’s strong vigilance). Next year, says chairman Arthur Sulzberger, the Times will launch a Portuguese version aimed at the vast Brazilian market (and there are rumors of a Spanish-language version.)

Denise Warren also referred to what she called an “untapped demand in the corporate education segment”. Strangely, her statement echoes Harvard professor Clayton Christensen’s interview with the Nieman Journalism Lab, in which he discusses his favorite topic, the disruption of legacy businesses:

For the Harvard Business School — we’ve been saying for about 13 years now that management education is going to be disrupted by in-house corporate universities. And nobody just ever imagined that it would happen. In fact, every metric of goodness at the Harvard Business School has been improving and still continues to improve — even as last year the number of people applying to Harvard MBA programs dropped off 22 percent. In the prior year, it went down 11 percent. I really believe that the business schools are just going off the cliff.

I’m concerned: if business schools are going off the cliff, who will produce the next generation of media analysts?…
frederic.filloux@mondaynote.com

The Internet Split

 

Web sites will soon fall into two categories: high audience with low yield, and low audience with higher yield. This divide will impact digital advertising.

This autumn, whoever you talk to will tell you the same thing: internet advertising yields are taking a dive. CPMs (cost per thousand page views), now down to single digits, keep falling. You hear this on both sides of the Atlantic. Economic conditions, concerns about debt levels (both private and public), business-hostile tax policies (see Jean-Louis’ Monday Note on the French Entrepreneurs Revolt), the upcoming fiscal cliff in the United States, and the fragility of the Eurozone all point in the same direction. Even Google’s cash machine is showing weakness.
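To see why falling CPMs hurt so directly: display revenue scales linearly with both inventory and rate, so a CPM dropping from double to single digits cuts revenue proportionally. A minimal sketch (all figures are hypothetical, purely for illustration):

```python
def ad_revenue(page_views: int, cpm: float) -> float:
    """Display ad revenue: CPM is the price charged per 1,000 page views."""
    return page_views / 1000 * cpm

# Hypothetical site with 30 million monthly page views:
print(ad_revenue(30_000_000, 12.0))  # at a $12 CPM -> 360000.0
print(ad_revenue(30_000_000, 4.0))   # at a single-digit $4 CPM -> 120000.0
```

Same audience, two-thirds of the revenue gone — which is why publishers chase either much more volume or much higher-yield formats.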

In such times, advertising budgets are usually the first ones to get the ax. Slowing down an industrial production line can get complicated, but slashing an ad campaign can be done with a mouse click. In the fall, when everyone struggles with next year’s projections, a marketing director is inevitably tempted to hit the delete key to embellish his or her spreadsheet (remember: we are all short-termists.)

According to ZenithOptimedia’s recent forecasts, Eurozone ad expenditures will end this year in the red: -3.1% overall. But not all countries are equal. To illustrate how sensitive advertising expenditures are to economic conditions, consider three regional economies badly hit by Europe’s downturn:

Italy:       -6.5% in ad spending vs. 2011
Spain:       -12.2%
Portugal:    -12.2%

And, just for the sake of it, let’s award a special prize to the Greek economy: its advertising expenditure will be down 33.2% this year, and off its 2007 peak by… 63%! This shows how the ad market reacts to and amplifies an economic crisis. (For 2013, Zenith predicts +0.9% growth in the Eurozone, but it has since downgraded its entire Western Europe 2012 projection from +0.4% in June to -0.7% in September. As a result, no one seriously believes Zenith’s projection for the upcoming year.)
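The gap between a 33.2% annual drop and a 63% peak-to-trough drop is simply the arithmetic of compounding: each year’s decline applies to an already shrunken base. A minimal sketch, using hypothetical annual figures (not Zenith’s actual year-by-year data):

```python
def cumulative_decline(annual_declines):
    """Peak-to-trough drop (in %) implied by successive annual % declines."""
    remaining = 1.0
    for drop in annual_declines:
        remaining *= (1.0 - drop / 100.0)  # each drop compounds on the shrunken base
    return (1.0 - remaining) * 100.0

# Hypothetical: five successive yearly drops of 18% each
print(round(cumulative_decline([18, 18, 18, 18, 18]), 1))  # -> 62.9
```

Five moderate-looking annual declines are enough to erase nearly two-thirds of a market — roughly the Greek trajectory since 2007.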

For digital media, such a trend will be the underlying cause of three evolutions:
- A rise in paid-for-performance advertising
- A rise in alternate formats beyond traditional display — for better or worse
- And a split between volume-driven and quality-driven digital properties.
To an extent, the third trend is a consequence of the other two. Let’s give it a closer look.

First, this graph:

A quick disclaimer: to avoid offending anyone, note that there is no scale, nor any proportionality between media outlets. The point is to map out clusters among various brands, to see who sits where relative to the others.

On the top left part of the chart, high audience but low yield: The Guardian (£40-50m in revenue for a stunning 60+ million unique visitors), Business Insider, The Huffington Post and TV networks’ web sites. They have different ways of gathering huge audiences, though: The Guardian does it thanks to its fantastic journalistic machine and its unabated investment in digital (read this interesting story in last summer’s GQ); The Huffington Post, for its part, has elevated click-harvesting techniques to an art.

Business Insider has become a class of its own. In the last two or three years, it has drifted from a good tech/business blog to a compilation of eye-grabbing headlines (a rather profuse one: I counted more than 90 items on BI’s home page this weekend.) That said, it remains interesting reading, and its crew sometimes gets scoops. But the entire editing process is built on grabbing attention. And it works beautifully, so to speak. Here are some examples of stories and how they score:

12 Long-Dead Brands That Are Ripe For Resurrection:
55,000 views
Stunning Images From The Best Wildlife Photo Competition Of The Year:
78,000 views
19 Chinese White Collar Criminals Who Were Handed The Death Sentence:
104,000 views
There Is Simply No Other Plane In The World Like Air Force One:
650,000+ views
These Pictures May Give You Nightmares About Canada’s Oil Sands:
1.17 million views

Buried deep inside this accumulation of SEO-dreams-come-true items, there are some serious stories, but their scores are paltry:

Here’s The Big Mystery With Google’s $8 Billion Mobile Business:
7,000 views
Jeff Bezos: People Who Are Right Are People That Change Their Mind A Lot:
6,400 views

Well, you get my point. Again, I’m not judging here. With incredibly hard work, Henry Blodget and his team have built a great franchise, and I wish more business sites would learn — just a little bit, say 5% — how to stage business news and catch readers. And to be fair to Business Insider, let’s underline that its must-read 139 slides about The State of the Internet attracted nearly 5 million viewers in three weeks.

Coming back to the chart above: on the bottom left, web sites like Slate or Salon (there are many others) enjoy a strong reputation and a loyal readership… but — as unfair as it sounds — a tiny one. In the upper right corner, we have the exception(s), led by The New York Times: high audience, high ARPU (about $160m-200m in advertising revenue, recently supplemented by $60-100m in subscription revenue that didn’t exist 15 months earlier.)

Let’s wrap up with advertising formats. Bottom line: ad agencies and their clients will always seek to blur the distinction between editorial and commercial content. In that respect, the future lies, again, in the Business Insider model, which pushes the envelope pretty far (in my own, probably conservative, opinion.) On this weekend’s home page, you can see how BI morphs its editorial team into sales reps with this story: 15 Tips For Getting The Perfect Tailored Suit, an advertorial for a wannabe-hip Manhattan tailor. The package looks entirely like news coverage and comes in two stages. Linked to the We-Give-You-Great-Tips treatment, you get the full monty: The secret suit shop underneath Mazza’s swank Chelsea sports bar, complete with a 20-picture slideshow and a glowing profile of the owners. The two stories gathered more than… 100,000 views (including mine — note that I linked to the stories because I’m pretty sure you’ll click on them…) But that’s pocket change compared to the rather straightforward Here’s Why Peter Luger Is The Best Steakhouse In New York City, which collected an amazing 244,000 views. (Business Insider wins on both ends: on the advertorial — Henry, please, don’t tell me you do that for free — and on the ads surrounding it.)

With very few exceptions, the editorial independence of lifestyle and consumer sections is now long gone (this includes “respectable” legacy media.) But this obvious violation of the separation between Church and State is bound to percolate, via more pernicious “brand content” (see this earlier Monday Note), into more serious subjects than food or clothing. That’s where the credibility issue will set in.

frederic.filloux@mondaynote.com