About Frédéric Filloux

Posts by Frédéric Filloux:

Growing Forces in Mobile

 

As seen last week in Barcelona, the mobile industry is red hot. The media sector will have to work harder to capture its share of that growth.

The 2013 edition of the Mobile World Congress held last week in Barcelona was as large as the biggest auto-show in the world: 1500 exhibitors and a crowd of 72,000 attendees from 200 countries. The mobile industry is roaring like never before. But the news media industry lags and will have to fight hard to stay in the game. Astonishingly, only two media companies deigned to show up: Pearson with its huge education business accounting for 75% of its 2012 revenue (vs. 7% for its Financial Times unit); and Agence France-Presse which is entering the customized application market. No other big media brand in sight, no trade organizations either. Apparently, the information sector is about to miss the mobile train.

Let’s begin with data that piqued my interest, drawn from A.T. Kearney surveys for the GSM Association.

Individual mobile subscribers: In 2012, the worldwide number of mobile subscribers reached 3.2 billion. A billion subscribers were added in the last four years alone. While the world population is expected to grow by just 1.1% per year between 2008 and 2017, the mobile sector enjoyed an 8.3% CAGR (Compound Annual Growth Rate) over the 2008-2012 period. For the 2012-2017 interval, the expected CAGR is 4.2%. The 4 billion subscriber mark will be passed in 2018. By that time, 80% of the global population will be connected via a mobile device.

The rise of the machines. When machine-to-machine (M2M) connections are taken into account, growth becomes even more spectacular: In 2012, there were 6.8 billion active SIM cards, 3% of them being M2M connections. In 2017, there will be 9.7 billion active SIM cards and the share of M2M connections will account for 13% with almost 1.3 billion devices talking to each other.
The Asia-Pacific region will account for half of the connection growth, both for individual subscriptions and M2M.

We’ll now turn to stats that could benefit the media industry.

Mobile growth will be mostly driven by data usage. In 2012, the volume of data exchanged through mobile devices amounted to 0.9 exabytes per month (1 exabyte = 1bn gigabytes), more than all the preceding years combined! By 2017, it is expected to reach 11.2 exabytes per month, a 66% CAGR!
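That 66% figure is straightforward to check from the two endpoints quoted above. Here is a minimal back-of-envelope sketch in Python; the only inputs are the 0.9 and 11.2 exabyte figures and the five-year interval:

# Compound Annual Growth Rate from two endpoints and a number of years
def cagr(start, end, years):
    return (end / start) ** (1.0 / years) - 1.0

traffic_2012 = 0.9    # exabytes per month, 2012 figure quoted above
traffic_2017 = 11.2   # exabytes per month, 2017 projection quoted above

print(f"{cagr(traffic_2012, traffic_2017, 5):.0%}")  # prints 66%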

A large part of this volume will come from the deployment of 4G (LTE) networks. Between now and 2017, deploying LTE technology will result in a 4X increase in connection speeds.

For the 2012 – 2017 period, bandwidth distribution is expected to grow as follows:

M2M:......... +89% 
Video:....... +75% 
Gaming:...... +62% 
Other data:...+55% 
File sharing: +34% 
VoIP:........ +34%

Obviously, the huge growth of video streaming (+75%) points to a great opportunity for the media industry, as users will tend to watch news capsules on the go the same way they look at mobile web sites or apps today (these two will be part of the 55% annual growth).

The growth of social networking on mobile will also be an issue for news media. Here are today’s key figures for active mobile users:

Facebook:...680m 
Twitter:....120m 
LinkedIn:....46m 
Foursquare:..30m

Still, as important as it is, social usage only accounts for 17 minutes per day, vs. 25 minutes for internet browsing and a mere 12 minutes for voice calls. Most likely, the growth of video will impact the use of social networks as Facebook collects more and more videos directly uploaded from smartphones.

A large part of this growth will be driven by the rise of inexpensive smartphones. Last week in Barcelona, the largest stand was obviously Samsung’s. But huge crowds also gathered around Huawei and ZTE, which showed sophisticated Android-powered smartphones — at much lower prices. This came as a surprise to many westerners like me who don’t have access to these Chinese devices. And for emerging markets, Firefox is coming with an HTML5-based operating system that looked surprisingly good.

In years to come, the growing number of operating systems, screen sizes and features will be a challenge. (At the MWC, the trend was definitely in favor of large screens, read this story in Engadget.) An entire hall was devoted to applications — and software aimed at producing apps in a more standardized, economical fashion. As a result, we might see three approaches to delivering contents on mobile:
- The simplest way will be mobile sites based on HTML5 and responsive design; more features will be embedded in web applications.
- The second stage will consist of semi-native apps, quickly produced using standardized tools, allowing fast updates and adaptations to a broad range of devices.
- The third way will involve expensive deep-coded native apps aimed at supporting sophisticated graphics; they will mainly be deployed by the gaming industry.

In upcoming Monday Notes, we will address two major mobile industry trends not tied to the media industry: Connected Living (home-car-city), a sector likely to account for most machine-to-machine use; and digital education, which takes advantage of a happy combination of more affordable handsets and better bandwidth.

frederic.filloux@mondaynote.com

Google News: The Secret Sauce

 

A closer look at Google’s patent for its news retrieval algorithm reveals a greater than expected emphasis on quality over quantity. Can this bias stay reliable over time?

Ten years after its launch, Google News’ raw numbers are staggering: 50,000 sources scanned, 72 editions in 30 languages. Google’s crippled communication machine, plagued by bureaucracy and paranoia, has never been able to come up with tangible facts about its benefits for the news media it feeds on. Its official blog merely mentions “6 billion visits per month” sent to news sites and Google News claims to connect “1 billion unique users a week to news content” (to put things in perspective, the NYT.com or the Huffington Post are cruising at about 40 million UVs per month). Assuming the clicks are sent to a relatively fresh news page bearing higher-value advertising, the six billion visits can translate into about $400 million per year in ad revenue. (This is based on a $5 to $6 revenue per 1,000 pages, i.e. a few dollars in CPM per single ad, depending on format, type of selling, etc.) That’s a very rough estimate. Again: Google should settle the matter and come up with accurate figures for its largest markets. (On the same subject, see a previous Monday Note: The press, Google, its algorithm, their scale.)
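For what it’s worth, the rough estimate above can be reconstructed as follows (a back-of-envelope sketch; the one-page-per-visit assumption and the $5.5 midpoint are mine, not Google’s):

# Back-of-envelope: yearly ad value of the traffic Google News sends to publishers
visits_per_month = 6e9          # "6 billion visits per month", Google's own figure
pages_per_visit = 1.0           # assumption: roughly one article page per visit
revenue_per_1000_pages = 5.5    # midpoint of the $5-$6 per 1,000 pages range quoted above

yearly_pages = visits_per_month * 12 * pages_per_visit
yearly_revenue = yearly_pages / 1000 * revenue_per_1000_pages
print(f"${yearly_revenue / 1e6:.0f}m per year")  # roughly $400m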

But how exactly does Google News work? What kind of media does its algorithm favor most? Last week, the search giant updated its patent filing with a new document detailing the thirteen metrics it uses to retrieve and rank articles and sources for its news service. (Computerworld unearthed the filing, it’s here).

What follows is a summary of those metrics, listed in the order shown in the patent filing, along with a subjective appreciation of their reliability, vulnerability to cheating, relevancy, etc.

#1. Volume of production from a news source:

A first metric in determining the quality of a news source may include the number of articles produced by the news source during a given time period [week or month]. [This metric] may be determined by counting the number of non-duplicate articles produced by the news source over the time period [or] counting the number of original sentences produced by the news source.

This metric clearly favors production capacity. It benefits big media companies deploying large staffs. But the system can also be cheated by content farms (Google already addressed these questions); new automated content creation systems are gaining traction, and many of them could now easily pass the Turing Test.

#2. Length of articles. Plain and simple: the longer the story (on average), the higher the source ranks. This is bad news for aggregators whose digital serfs cut, paste, compile and mangle abstracts of news stories that real media outlets produce at great expense.

#3. “The importance of coverage by the news source”. To put it another way, this matches the volume of coverage by the news source against the general volume of text generated by a topic. Again, it rewards large resource allocation to a given event. (In New York Times parlance, such effort is called “flooding the zone”.)

#4. The “Breaking News Score”:   

This metric may measure the ability of the news source to publish a story soon after an important event has occurred. This metric may average the “breaking score” of each non-duplicate article from the news source, where, for example, the breaking score is a number that is a high value if the article was published soon after the news event happened and a low value if the article was published after much time had elapsed since the news story broke.

Beware slow moving newsrooms: On this metric, you’ll be competing against more agile, maybe less scrupulous staffs that “publish first, verify later”. This requires a smart arbitrage by the news producers. Once the first headline has been pushed, they’ll have to decide what’s best: Immediately filing a follow-up or waiting a bit and moving a longer, more value-added story that will rank better in metrics #2 and #3? It depends on elements such as the size of the “cluster” (the number of stories pertaining to a given event).

#5. Usage Patterns:

Links going from the news search engine’s web page to individual articles may be monitored for usage (e.g., clicks). News sources that are selected often are detected and a value proportional to observed usage is assigned. Well known sites, such as CNN, tend to be preferred to less popular sites (…). The traffic measured may be normalized by the number of opportunities readers had of visiting the link to avoid biasing the measure due to the ranking preferences of the news search engine.

This metric is at the core of Google’s business: assessing the popularity of a website thanks to the various PageRank components, including the number of links that point to it.

#6. The “Human opinion of the news source”:

Users in general may be polled to identify the newspapers (or magazines) that the users enjoy reading (or have visited). Alternatively or in addition, users of the news search engine may be polled to determine the news web sites that the users enjoy visiting. 

Here, things get interesting. Google clearly states it will use third-party surveys to detect the public’s preference among various media — not only their websites, but also their “historic” media assets. According to the patent filing, the evaluation could also include the number of Pulitzer Prizes the organization collected and the age of the publication. That’s for the known part. What lies behind the notion of “Human opinion” is a true “quality index” for news sources that is not necessarily correlated to their digital presence. Such factors clearly favor legacy media.

#7. Audience and traffic. Not surprisingly, Google relies on stats coming from Nielsen NetRatings and the like.

#8. Staff size. The bigger a newsroom is (as detected in bylines) the higher the value will be. This metric has the merit of rewarding large investments in news gathering. But it might become more imprecise as “large” digital newsrooms tend now to be staffed with news repackagers bearing little added value.

#9. Numbers of news bureaus. It’s another way to favor large organizations — even though their footprint tends to shrink both nationally and abroad.

#10. Number of “original named entities”. That’s one of the most interesting metrics. A “named entity” is the name of a person, place or organization. It’s the primary tool for semantic analysis.

If a news source generates a news story that contains a named entity that other articles within the same cluster (hence on the same topic) do not contain, this may be an indication that the news source is capable of original reporting.

Of course, some cheaters insert misspelled entities to create “false” original entities and fool the system (Google took care of it). But this metric is a good way to reward original source-finding.

#11. The “breadth” of the news source. It pertains to the ability of a news organization to cover a wide range of topics.

#12. The global reach of the news sources. Again, it favors large media who are viewed, linked, quoted, “liked”, tweeted from abroad.

This metric may measure the number of countries from which the news site receives network traffic. In one implementation consistent with the principles of the invention, this metric may be measured by considering the countries from which known visitors to the news web site are coming (e.g., based at least in part on the Internet Protocol (IP) addresses of those users that click on the links from the search site to articles by the news source being measured). The corresponding IP addresses may be mapped to the originating countries based on a table of known IP block to country mappings.

#13. Writing style. In the Google world, this means statistical analysis of contents against a huge language model to assess “spelling correctness, grammar and reading levels”.
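To make the mechanics more concrete, here is a deliberately naive sketch of how such signals could be folded into a single source-quality score. The weights, the 0-to-1 normalization and the linear combination are all assumptions made for illustration; the patent lists the signals but does not disclose how Google actually weights or combines them.

# Hypothetical combination of the thirteen signals into one quality score.
# Every metric is assumed to be pre-normalized to a 0..1 scale; the weights
# below are invented for illustration and sum to 1.0.
METRIC_WEIGHTS = {
    "article_volume": 0.10,       # #1  production volume
    "article_length": 0.05,       # #2  average story length
    "coverage_importance": 0.10,  # #3  share of coverage per topic
    "breaking_news": 0.10,        # #4  speed after an event
    "usage_patterns": 0.15,       # #5  observed clicks
    "human_opinion": 0.10,        # #6  polled preference
    "audience": 0.10,             # #7  traffic stats
    "staff_size": 0.05,           # #8  bylines
    "bureaus": 0.05,              # #9  news bureaus
    "named_entities": 0.08,       # #10 original named entities
    "breadth": 0.05,              # #11 range of topics
    "global_reach": 0.04,         # #12 countries visitors come from
    "writing_style": 0.03,        # #13 spelling, grammar, reading level
}

def source_quality(metrics):
    """Weighted sum of normalized metrics; a missing metric counts as zero."""
    return sum(METRIC_WEIGHTS[name] * metrics.get(name, 0.0) for name in METRIC_WEIGHTS)

# Made-up profiles: a large legacy outlet vs. a fast-moving aggregator
legacy = {"article_volume": 0.9, "article_length": 0.8, "usage_patterns": 0.7,
          "human_opinion": 0.9, "staff_size": 0.9, "named_entities": 0.8}
aggregator = {"article_volume": 0.9, "breaking_news": 0.9, "usage_patterns": 0.8}
print(source_quality(legacy), source_quality(aggregator))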

What conclusions can we draw? This enumeration clearly shows Google intends to favor legacy media (print or broadcast news) over pure players, aggregators or digital-native organizations. All the features recently added, such as Editor’s pick, reinforce this bias. The reason might be that legacy media are less prone to tricking the algorithm. For once, a known technological weakness becomes an advantage.

frederic.filloux@mondaynote.com

The Need for a Digital “New Journalism”

 

The survival of quality news calls for a new approach to writing and reporting. Inspiration could come from blogging and magazine storytelling, and also bring back memories of the ’70s New Journalism movement.

News reporting is aging badly. Legacy newsrooms’ style books look stuck in a last-century formalism (I was tempted to write “formalin”). Take a newspaper, print or online. When it comes to news reporting, you see the same old structure dating back to the Fifties or even earlier. For the reporter, there is the same (affected) posture of effacing his/her personality behind facts, and a stiff structure based on a string of carefully arranged paragraphs, color elements, quotes, etc.

I hate useless quotes. Most often, for journalists, such quotes are the equivalent of the time-card hourly workers have to punch. To their editor, the message is ‘Hey, I did my job; I called x, y, z’; and to the reader, ‘Look, I’m humbly putting my personality, my point of view behind facts as stated by these people’ — people picked by him/herself, which is the primary (and unavoidable) way to twist a story. The result becomes borderline ridiculous when, after a lengthy exposé in the reporter’s voice to compress the sources’ convoluted thoughts, the line of reasoning concludes with a critical validation such as:

“Only time will tell”, said John Smith, director of social studies at the University of Kalamazoo, consultant for the Rand Corporation, and author of “The Cognitive Deficit of Hyperactive Chimpanzees”.

I’m barely making this up. Each time I open a carbon-based newspaper (or read its online version), I’m struck by how old-fashioned news writing remains. Unbeknownst to the masthead (i.e. the top editorial decision-makers) of legacy media, things have changed. Readers no longer demand validating quotes that weigh the narrative down. They want to be taken from A to B, with the best possible arguments, and no distraction or wasted time.

Several factors dictate an urgent evolution in the way newspapers are written.

1/ Readers’ Time Budget. People are deluged with things to read. It begins at 7:00 in the morning and ends up late into the night. The combination of professional contents (mail, reports, PowerPoint presentations) and social networking feeds has put traditional and value-added contents (news, books) under great pressure. Multiple devices and the variable level of attention that each of them entails create more complications: a publishing house can’t provide the same content for a smartphone screen to be read in a cramped subway as for a tablet used in lean-back mode at home. More than ever, the publisher is expected to clearly arbitrate between the content that is to be provided in a concise form and the one that justifies a long, elaborate narrative. The same applies to linking and multi-layer constructs: reading a story that opens several browser tabs on a 22-inch screen is pleasant — and completely irrelevant for quick lunchtime mobile reading.

2/ Trust factor / The contract with the Brand. When I pick a version of The New York Times, The Guardian, or a major French newspaper, this act materializes my trust (and hope) in the professionalism associated with the brand. In a more granular way, it works the same for the writer. Some are notoriously sloppy, biased, or agenda-driven; others are so good that they become brands in themselves. My point: When I read a byline I trust, I assume the reporter has performed the required legwork — that is, collecting five or ten times the amount of information s/he will use in the end product. I don’t need the reporting to be proven or validated by an editing construct that harks back to the previous century. Quotes will be used only for the relevant opinion of a source, or to make a salient point, not as a feeble attempt to prove professionalism or fairness.

3/ Competition from the inside. Strangely enough, newspapers have created their own gauge to measure their obsolescence. By encouraging their writing staff to blog, they unleashed new, more personal, more… modern writing practices. Fact is, many journalists became more interesting on their own blogs than in their dedicated newspaper or magazine sections. Again, this trend evaded many editors and publishers who consider blogging to be a secondary genre, one that can be put outside a paywall, for instance. (This results in a double whammy: not only does the paper fail to cash in on blogs, it also frustrates paid-for subscribers.)

4/ The influence of magazine writing. Much better than newspapers, magazines have always done a good job of capturing readers’ preferences. They’ve always been ahead in market research, graphic design, concept and writing evolution. (This observation also applies to the weekend magazines operated by large dailies.) As an example, magazine writers have been quick to adopt first-person accounts that rejuvenated journalism and allowed powerful narratives. In many newspapers, authors and their editors still resist this.

Digital media needs to invent its own journalistic genres. (Note the plural, dictated by the multiplicity of usages and vectors.) The web and its mobile offspring are calling for their own New Journalism, comparable to the one that blossomed in the Seventies. While the blogosphere has yet to find its Tom Wolfe, the newspaper industry still has a critical role to play: It could be at the forefront of this essential evolution in journalism. Failure to do so will only accelerate its decline.

frederic.filloux@mondaynote.com

The Google Fund for the French Press

 

At the last minute, ending three months of tense negotiations, Google and the French Press hammered out a deal. More than yet another form of subsidy, this could mark the beginning of a genuine cooperation.

Thursday night, at 11:00pm Paris time, Marc Schwartz, the mediator appointed by the French government, got a call from the Elysée Palace: Google’s chairman Eric Schmidt was en route to meet President François Hollande the next day in Paris. They both intended to sign the agreement between Google and the French press that Friday at 6:15pm. Schwartz, along with Nathalie Collin, the chief representative for the French Press, was just out of a series of conference calls between Paris and Mountain View: Eric Schmidt and Google’s CEO Larry Page had green-lighted the deal. At 3am on Friday, the final draft of the memorandum was sent to Mountain View. But at 11:00am everything had to be redone: Google had made unacceptable changes, causing Schwartz and Collin to consider calling off the signing ceremony at the Elysée. Another set of conference calls ensued. The final-final draft, unanimously approved by the members of the IPG association (General and Political Information), was printed at 5:30pm, just in time for the gathering at the Elysée half an hour later.

The French President François Hollande was in a hurry, too: That very evening, he was bound to fly to Mali, where French troops are waging a small but uncertain war to contain Al-Qaeda’s expansion in Africa. Never shy of political calculations, François Hollande seized the occasion to be seen as the one who forced Google to back down. As for Google’s chairman, co-signing the agreement along with the French President was great PR. As a result, negotiators from the Press were kept in the dark until Eric Schmidt’s plane landed in Paris on Friday afternoon, shortly before he headed to the Elysée. Both men underlined what they called “a world premiere”, a “historical deal”…

This agreement ends — temporarily — three months of difficult negotiations. Now comes the hard part.

According to Google’s Eric Schmidt, the deal is built on two stages:

“First, Google has agreed to create a €60 million Digital Publishing Innovation Fund to help support transformative digital publishing initiatives for French readers. Second, Google will deepen our partnership with French publishers to help increase their online revenues using our advertising technology.”

As always, the devil lurks in the details, most of which will have to be ironed out over the next two months.

The €60m ($82m) fund will be provided by Google over a three-year period; it will be dedicated to new-media projects. About 150 websites, all members of the IPG association, will be eligible for submissions. The fund will be managed by a board of directors that will include representatives from the Press and from Google, as well as independent experts. Specific rules are designed to prevent conflicts of interest. The fund will most likely be chaired by Marc Schwartz, the mediator, who is also a partner at the global audit firm Mazars (all parties praised his mediation and want him to take the job).

The commercial part of the pact is less publicized but at least as important as the fund itself. In a nutshell, using a wide array of tools ranging from advertising platforms to content distribution systems, Google wants to increase its business with the Press in France and elsewhere in Europe. Until now, publishers have been reluctant to use such tools because they don’t want to increase their reliance on a company they see as cold-blooded and ruthless.

Moving forward, the biggest challenge will be overcoming an extraordinarily high level of distrust on both sides. Google views the Press (especially the French one) as only too eager to “milk” it, and unwilling to genuinely cooperate in order to build and share value from the internet. The engineering-dominated, data-driven culture of the search engine is light-years away from the convoluted “political” approach of legacy media, which either don’t understand or look down on the peculiar culture of tech companies.

Dealing with Google requires a mastery of two critical elements: technology (with the associated economics), and the legal aspect. Contractually speaking, it means transparency and enforceability. Let me explain.

Google is a black box. For good and bad reasons, it fiercely protects the algorithms that are key to squeezing money from the internet, sometimes one cent at a time — literally. If Google consents to a cut of, say, advertising revenue derived from a set of contents, the partner can’t really ascertain whether the cut truly reflects the underlying value of the asset jointly created – or not. Understandably, it bothers most of Google’s business partners: they are simply asked to be happy with the monthly payment they get from Google, no questions asked. Specialized lawyers I spoke with told me there are ways to prevent such opacity. While it’s futile to hope Google will lift the veil on its algorithms, inserting an audit clause in every contract can be effective; in practical terms, it means an independent auditor can be appointed to verify specific financial records pertaining to a business deal.

Another key element: From a European perspective, a contract with Google is virtually impossible to enforce. The main reason: Google won’t give up on the Governing Law of a contract that is to be “Litigated exclusively in the Federal or States Courts of Santa Clara County, California”. In other words: Forget about suing Google if things go sour. Your expensive law firm based in Paris, Madrid, or Milan will try to find a correspondent in Silicon Valley, only to be confronted with polite rebuttals: For years now, Google has been parceling out multiple pieces of litigation among local law firms simply to make them unable to litigate against it. Your brave European lawyer will end up finding someone who will ask for several hundred thousand dollars only to prepare but not litigate the case. The only way to prevent this is to put an arbitration clause in every contract. Instead of going before a court of law, the parties agree to mediate the matter through a private tribunal. Attorneys say it offers multiple advantages: It’s faster, much cheaper, the terms of the settlement are confidential, and it carries the same enforceability as a Court order.

Google (and all the internet giants for that matter) usually refuses an arbitration clause as well as the audit provision mentioned earlier. Which brings us to a critical element: In order to develop commercial relations with the Press, Google will have to find ways to accept collective bargaining instead of segmenting negotiations one company at a time. Ideally, the next round of discussions should come up with a general framework for all commercial dealings. That would be key to restoring some trust between the parties. For Google, it means giving up some amount of tactical as well as strategic advantage… that is part of its long-term vision. As stated by Eric Schmidt in his upcoming book “The New Digital Age” (the Wall Street Journal had access to the galleys):

“[Tech companies] will also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest.”

European media are warned: they must seriously raise their legal game if they want to partner with Google — and the agreement signed last Friday in Paris could help.

Having said that, I personally believe it could be immensely beneficial for digital media to partner with Google as much as possible. This company spends roughly two billion dollars a year refining its algorithms and improving its infrastructure. Thousands of engineers work on it. Contrast this with digital media: Small audiences, insufficient stickiness, low monetization plague both web sites and mobile apps; the advertising model for digital information is mostly a failure — and that’s not Google’s fault. The Press should find a way to capture some of Google’s technical firepower and concentrate on what it does best: producing original, high quality contents, a business that Google is unwilling (and probably culturally unable) to engage in. Unlike Apple or Amazon, Google is relatively easy to work with (once the legal hurdles are cleared).

Overall, this deal is a good one. First of all, both sides are relieved to avoid a law (see last Monday Note Google vs. the press: avoiding the lose-lose scenario). A law declaring that snippets and links are to be paid-for would have been a serious step backward.

Second, it’s a departure from the notion of “blind subsidies” that have been plaguing the French Press for decades. Three months ago, the discussion started with irreconcilable positions: publishers were seeking absurd amounts of money (€70m per year, the equivalent of IPG members’ total ad revenue) and Google was focused on converting the deal into business solutions. Now, all the people I talked to this weekend seem genuinely supportive of building projects, boosting innovation and also taking advantage of Google’s extraordinary engineering capabilities. The level of cynicism often displayed by the Press is receding.

Third, Google is changing. The fact that Eric Schmidt and Larry Page jumped in at the last minute to untangle the deal shows a shift of perception towards media. This agreement could be seen as a template for future negotiations between two worlds that still barely understand each other.

frederic.filloux@mondaynote.com

Google vs. the press: avoiding the lose-lose scenario

 

Google and the French press have been negotiating for almost three months now. If there is no agreement within ten days, the government is determined to intervene and pass a law instead. This would mean serious damage for both parties. 

An update about the new corporate tax system. Read this story in Forbes by the author of the report quoted below 

Since last November, about twice a week and for several hours, representatives from Google and the French press have been meeting behind closed doors. To ease up tensions, an experienced mediator has been appointed by the government. But mistrust and incomprehension still plague the discussions, and the clock is ticking.

In the currently stalled process, the whole negotiation revolves around cash changing hands. Early on, representatives of media companies were asking Google to pay €70m ($93m) per year for five years. This would be “compensation” for “abusively” indexing and linking their contents and for collecting 20-word snippets (see a previous Monday Note: The press, Google, its algorithm, their scale). For perspective, this €70m is roughly equivalent to the 2012 digital revenue of the newspapers and newsmagazines that constitute the IPG association (General and Political Information).

When the discussion came to structuring and labeling such a cash transfer, IPG representatives dismissively left the question to Google: “Dress it up!”, they said. Unsurprisingly, Google wasn’t ecstatic about this rather blunt approach. Still, the search engine feels this might be the right time to hammer out a deal with the press, instead of perpetuating a latent hostility that could later explode and cost much more. At least, this is how Google’s European team seems to feel. (In its hyper-centralized power structure, management in Mountain View seems slow to warm up to the idea.)

In Europe, bashing Google is more popular than ever. Not just Google, but all the US-based internet giants, widely accused of killing old businesses (such as Virgin Megastore — a retail chain that also made every possible mistake). But the actual core issue is tax avoidance. Most of these companies hired the best tax lawyers money can buy and devised complex schemes to avoid paying corporate taxes in EU countries, especially the UK, Germany, France, Spain, Italy… The French Digital Advisory Board — set up by Nicolas Sarkozy and generally business-friendly — estimated last year that Google, Amazon, Apple’s iTunes and Facebook had a combined revenue of €2.5bn – €3bn but on average each paid only €4m in corporate taxes instead of €500m (a rough 20% to 25% tax rate estimate). At a time of fiscal austerity, most governments see this (entirely legal) tax avoidance as politically unacceptable. In such a context, Google is the target of choice. In the UK for instance, Google made £2.5bn (€3bn or $4bn) in 2011, but paid only £6m (€7.1m or $9.5m) in corporate taxes. To add insult to injury, in an interview with The Independent, Google’s chairman Eric Schmidt defended his company’s tax strategy in the worst possible manner:

“I am very proud of the structure that we set up. We did it based on the incentives that the governments offered us to operate. It’s called capitalism. We are proudly capitalistic. I’m not confused about this.”

Ok. Got it. Very helpful.

Coming back to the current negotiation about the value of the click, the question was quickly handed over to Google’s spreadsheet jockeys who came up with the required “dressing up”. If the media accepted the use of the full range of Google products, additional value would be created for the company. Then, a certain amount could be derived from said value. That’s the basis for a deal reached last year with the Belgian press (the agreement is shrouded in a stringent confidentiality clause).

Unfortunately, the French press began to eliminate most of the eggs in the basket, one after the other, leaving almost nothing to “vectorize” the transfer of cash. Almost three months into the discussion, we are stuck with antagonistic positions. The IPG representatives are basically saying: We don’t want to subordinate ourselves further to Google by adopting opaque tools that we can find elsewhere. Google retorts: We don’t want to be considered as another deep-pocketed “fund” that the French press will tap forever into without any return for our businesses; plus, we strongly dispute any notion of “damages” to be paid for linking to media sites. Hence the gap between the amount of cash asked by one side and what is (reluctantly) acceptable on the other.

However, I think both parties vastly underestimate what they’ll lose if they don’t settle quickly.

The government tax howitzer is loaded with two shells. The first one is a bill (drafted by none other than IPG’s counsel, see PDF here), which introduces the disingenuous notion of “ancillary copyright”. Applied to the snippets Google harvests by the thousands every day, it creates some kind of legal ground to tax Google the hard way. This mechanism is adapted from the music industry, in which the ancillary copyright levy ranges from 4% to 7% of the revenue generated by a sector or a company. A rate of 7% applied to the revenue officially declared by Google in France (€138m) would translate into less than €10m, which is pocket change for a company that in fact generates about €1.5 billion from its French operations.

That’s where the second shell could land. Last Friday, the Ministry of Finances released a report on the tax policy applied to the digital economy, titled “Mission d’expertise sur la fiscalité de l’économie numérique” (PDF here). It’s a 200-page opus, supported by no fewer than 600 footnotes. Its authors, Pierre Collin and Nicolas Colin, are members of the French public elite (one from the highest jurisdiction, le Conseil d’Etat, the other from the equivalent of the General Accounting Office — Nicolas Colin also being a former tech entrepreneur and a writer). The Collin & Colin Report, as it’s now dubbed, is based on a set of doctrines that are also coming to the surface in the United States (as demonstrated by the multiple references in the report).

To sum up:
– The core of the digital economy is now the huge amount of data created by users. The report categorizes different types of data: “Collected Data” are gathered through cookies, whether the user allows it or not. Such datasets include consumer behaviors, affiliations, personal information, recommendations, search patterns, purchase history, etc. “Submitted Data” are entered knowingly through search boxes, forms, timelines or feeds in the case of Facebook or Twitter. And finally, “Inferred Data” are byproducts of various processing, analytics, etc.
– These troves of monetized data are created by the free “work” of users.
– The location of such data collection is independent of the place where the underlying computer code is executed: I create tangible value for Amazon or Google with my clicks performed in Paris, while the clicks are processed in a server farm located in the Netherlands or in the United States — and most of the profits land in a tax shelter.
– The location of the value thus created by the “free work” of users is currently dissociated from the location of the tax collection. In fact, it escapes any taxation.

Again, I’m quickly summing up a lengthy analysis, but the conclusion of the Collin & Colin report is obvious: Sooner or later, the value created and the various taxes associated with it will have to be reconciled. For Google, the consequences would be severe: Instead of the €138m of official revenue admitted in France, the tax base would grow to €1.5bn in revenue and about €500m in profit; that could translate into €150m in corporate tax alone, instead of the mere €5.5m currently paid by Google. (And I’m not counting the 20% VAT that would also apply.)
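To put the two “shells” side by side, here is the arithmetic, using only the figures quoted in this column (the 30% effective corporate tax rate is the one implied by the €150m-on-€500m-profit estimate, an approximation rather than an official rate):

# Shell 1: ancillary-copyright levy on Google's officially declared French revenue
declared_revenue = 138e6        # euros, revenue officially declared in France
levy = 0.07 * declared_revenue  # top of the 4%-7% range borrowed from the music industry
print(f"Ancillary copyright levy: about {levy / 1e6:.1f}m euros")  # under 10m, "pocket change"

# Shell 2: corporate tax on the restated base advocated by the Collin & Colin report
restated_revenue = 1.5e9   # estimated actual French revenue
restated_profit = 500e6    # estimated French profit
implied_tax_rate = 150e6 / restated_profit  # ~30%, implied by the report's figures
corporate_tax = restated_profit * implied_tax_rate
print(f"Corporate tax on restated base: about {corporate_tax / 1e6:.0f}m euros "
      "vs 5.5m currently paid")  # the 20% VAT would come on top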

Of course, this intellectual construction will be extremely difficult to translate into enforceable legislation. But the French authorities intend to rally other countries and furiously lobby the EU Commission to come around to their view. It might take years, but it could dramatically impact Google’s economics in many countries.

More immediately, for Google, a parliamentary debate over the Ancillary Copyright will open a Pandora’s box. From the Right to the Left, encouraged by François Hollande‘s administration, lawmakers will outbid each other in trashing the search engine and beyond that, every large internet company.

As for members of the press, “They will lose too”, a senior official tells me. First, because of the complications in setting up the machinery the Ancillary Copyright Act would require, they will have to wait about two years before getting any dividends. Two, the governments — the present one as well as the past Sarkozy administration — have always been displeased with what they see as the French press’s “addiction to subsidies”; they intend to drastically reduce the €1.5bn in public aid. If the press gets its way through a law, according to several administration officials, the Ministry of Finances will feel relieved of its obligations towards media companies that don’t innovate much despite large influxes of public money. Conversely, if the parties are able to strike a decent business deal on their own, the French Press will quickly get some “compensation” from Google and might still keep most of its taxpayer subsidies.

As for the search giant, it will indeed have to stand a small stab but, for a while, will be spared the chronic pain of a long and costly legislative fight — and the contagion that goes with it: The French bill would be dissected by neighboring governments who will be only too glad to adapt and improve it.

frederic.filloux@mondaynote.com   

Next week: When dealing with Google, better use a long spoon; Why European media should rethink their approach to the search giant.

Linking: Scraping vs. Copyright

 

Irish newspapers created quite a stir when they demanded a fee for incoming links to their content. Actually, this is a mere prelude to a much more crucial debate on copyrights,  robotic scraping and subsequent synthetic content re-creation from scraps. 

The controversy erupted on December 30th, when an attorney from the Irish law firm McGarr Solicitors exposed the case of one of its clients, the Women’s Aid organization, which was asked to pay a fee to Irish newspapers for each link it sends to them. The main quote from McGarr’s post:

They wrote to Women’s Aid, (amongst others) who became our clients when they received letters, emails and phone calls asserting that they needed to buy a licence because they had linked to articles in newspapers carrying positive stories about their fundraising efforts.
These are the prices for linking they were supplied with:

1 – 5 €300.00
6 – 10 €500.00
11 – 15 €700.00
16 – 25 €950.00
26 – 50 €1,350.00
50 + Negotiable

They were quite clear in their demands. They told Women’s Aid “a licence is required to link directly to an online article even without uploading any of the content directly onto your own website.”

Recap: The Newspapers’ agent demanded an annual payment from a women’s domestic violence charity because they said they owned copyright in a link to the newspapers’ public website.

Needless to say, the twittersphere, the blogosphere and, by and large, every self-proclaimed cyber moral authority, reacted in anger to Irish newspapers’ demands that go against common sense as well as against the most basic business judgement.

But on closer examination, the Irish dead tree media (soon to be dead for good if they stay on that path) is just the tip of the iceberg for an industry facing issues that go well beyond its reluctance to the culture of web links.

Try googling the following French legalese: “A défaut d’autorisation, un tel lien pourra être considéré comme constitutif du délit de contrefaçon”. (It means any unauthorized incoming link to a site will be seen as a copyright infringement.) This search gets dozens of results. OK, most come from large consumer brands (carmakers, food industry, cosmetics) that don’t want a link attached to an unflattering term sending the reader to their product description… Imagine “lemon” linked to a car brand.

Until recently, you couldn’t find many media companies invoking such a no-link policy. Only large TV networks such as TF1 or M6 warn that any incoming link is subject to a written approval.

In reality, except for obvious libel, no-links policies are rarely enforced. M6 Television even lost a court case against a third party website that was deep-linking to its catch-up programs. As for the Irish newspapers, despite their dumb rate card for links, they claimed to be open to “arrangements” (in the ill-chosen case of a non-profit organization fighting violence against women, flexibility sounds like a good idea.)

Having said that, such posture reflects a key fact: Traditional media, newspapers or broadcast media, send contradictory messages when it comes to links that are simply not part of their original culture.

The National Newspapers of Ireland association’s position paper deserves a closer look (PDF here). It actually contains a set of concepts that resonate with the position defended by the European press in its current dispute with Google (see background story in the NYTimes); here are a few:

– It is the view of NNI that a link to copyright material does constitute infringement of copyright, and would be so found by the Courts.
– [NNI then refers to a decision of the UK Court of Appeal in a case involving Meltwater Holding BV, a company specialized in media monitoring] that upheld the findings of the High Court, which included:
- that headlines are capable of being independent literary works and so copying just a headline can infringe copyright
- that text extracts (headline plus opening sentence plus “hit” sentence) can be substantial enough to benefit from copyright protection
- that an end user client who receives a paid-for monitoring report of search results (incorporating a headline, text extract and/or link) is very likely to infringe copyright unless they have a licence from the Newspaper Licensing Agency or directly from a publisher.
– NNI proposes that, in fact, any amendment to the existing copyright legislation with regard to deep-linking should specifically provide that deep-linking to content protected by copyright without respect for  the linked website’s terms and conditions of use and without regard for the publisher’s legitimate commercial interest in protecting its own copyright is unlawful.

Let’s face it, most publishers I know would not disagree with the basis of such statements. In the many jurisdictions where a journalist’s most mundane work is protected by copyright laws, what can be seen as acceptable in terms of linking policy?

The answer seems to revolve around matters of purpose and volume.

To put it another way, if a link serves as a kind of helper or reference, publishers will likely tolerate it. (In fairness, NNI explicitly “accepts that linking for personal use is a part of how individuals communicate online and has no issue with that” — even if the notion of “personal use” is pretty vague.) Now, if the purpose is commercial and if linking is aimed at generating traffic, NNI raises the red flag (even though the legal grounds are rather brittle). Hence the particular Google case, which also carries a notion of volume, as the search engine claims to harvest thousands of sources for its Google News service.

There is a catch. The case raised by NNI and its putative followers is weakened by a major contradiction: everywhere, Ireland included, news websites invest a great deal of resources in order to achieve the highest possible rank in Google News. Unless specific laws are passed (German lawmakers are working on such a bill), attorneys will have a hard time invoking copyright infringements that in fact stem from the very Search Engine Optimization tactics publishers encourage.

But there might be more at stake. For news organizations, the future carries obvious threats that require urgent consideration: In coming years, we’ll see great progress — so to speak — in automated content production systems. With or without link permissions, algorithmic content generators will be able (in fact, already are able) to scrape sites’ original articles, then aggregate and reprocess them into seemingly original content, without any mention, quotation, link, or reference of any kind. What awaits the news industry is much more complex than dealing with links from an aggregator.

It boils down to this: The legal debate on linking as copyright infringement will soon be obsolete. The real question will emerge as a much more complex one: Should a news site protect itself from being “read”  by a robot? The consequences for doing so are stark: except for a small cohort of loyal readers, the site would purely and simply vanish from cyberspace… Conversely, by staying open to searches, the site exposes itself to forms of automated and stealthy depletion that will be virtually impossible to combat. Is the situation binary — allowing “bots” or not — or is there middle ground? That’s a fascinating playground for lawyers and techies, for parsers of words and bits.
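On the “allowing bots or not” question, the only widely deployed mechanism today is the robots.txt opt-out, which well-behaved crawlers consult before fetching a page; nothing, however, forces a stealthy scraper to honor it. Here is a minimal sketch of how that check works on the crawler side (the bot name, site and paths are hypothetical):

# Checking a site's robots.txt rules (Python standard library)
from urllib import robotparser

# A hypothetical news site's robots.txt: open to Google's news crawler,
# closed to an (equally hypothetical) content-scraping bot.
robots_txt = """\
User-agent: NewsScraperBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot-News", "/2013/01/some-article"))   # True
print(rp.can_fetch("NewsScraperBot", "/2013/01/some-article"))   # False
# A compliant crawler respects this answer; a rogue scraper can simply skip
# the check, which is the "stealthy depletion" problem described above.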

frederic.filloux@mondaynote.com

Mobile’s Rude Awakening

 

Mobile audiences are large and growing. Great. But their monetization is mostly a disaster. The situation will be slow to improve, but the potential is still there — if the right conditions are met.    

This year, a major European newspaper expects to make around €16m in digital advertising revenue. The business is even slightly profitable. But there is a catch: while mobile devices now provide more than 50% of its traffic, advertising revenue from smartphones and tablets will only reach €1m. For this particular company, like many others, mobile advertising doesn’t work. It brings in about 5% or 6% of what desktop web ads do — which already suffer a 15-fold cut in revenue compared to print.
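The “5% or 6%” figure can be reconstructed from the numbers above. A quick back-of-envelope sketch (the 55% mobile traffic share is my own assumption, since the column only says “more than 50%”):

# Revenue per unit of traffic: mobile vs. desktop, from the figures quoted above
total_digital_revenue = 16e6   # euros
mobile_revenue = 1e6           # euros
mobile_traffic_share = 0.55    # assumption: "more than 50%" of traffic

desktop_revenue = total_digital_revenue - mobile_revenue
mobile_yield = mobile_revenue / mobile_traffic_share
desktop_yield = desktop_revenue / (1 - mobile_traffic_share)
print(f"Mobile monetizes at roughly {mobile_yield / desktop_yield:.0%} of desktop")  # about 5%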

Call it a double whammy: Publishers took a severe hit by going digital in a way that compounded commoditization of contents with an endless supply of pages. The result is economically absurd: in a “normal” world, when audiences rise, advertising reaches more people and, as a result, rates rise. At least, that was the rule in the comfy world of print. No such thing in digital media. As many news sites experienced, despite double digit audience growth, CPMs (Cost per Thousand page impressions) actually declined over recent years. Fact is, this sector is much more sensitive to general economic conditions than to its extraordinary large adoption. And as if that wasn’t enough, publishers now take another blow as a growing share of their audience moves to mobile where money hasn’t followed… yet.

Granted, there are exceptions. Nordic media, for instance, benefit from an earlier and stronger mobile adoption (think Nokia and Ericsson, even before smartphones). Supported by many paid-for services, Scandinavian media houses extract a significant amount of profit from mobile. Similarly, Facebook’s mobile operations are faring quite well. According to the latest TBG Digital report, the Click Through Rate (CTR) on ads placed in mobile News Feeds is 23 times higher than on those displayed in the desktop version (a CTR of 1.290% vs. 0.049%, respectively).

The digital mediasphere is struggling with mobile ads. In June, we went through most of the causes (see Jean-Louis’ note Mobile Advertising: The $20bn Opportunity Mirage). Problem is: there are still few signs of improvement. Inventories are growing, ad creativity remains at a low point (just look at the pixelated ads that plague the bottom of your mobile screens). As you can see below, programmatic buying is on the rise as this low-yield market remains vastly intermediated:

– Too many middlemen? –

This results in the following eCPMs (effective CPM is the price advertisers are willing to pay for a given audience) as surveyed for different mobile platforms:

iOS iPad........... $0.90-$1.10
iOS iPhone......... $0.70-$0.80
Android Tablet..... $0.60-$0.70
Android Phones..... $0.40-$0.60

Advertising-wise, mobile is mostly a dry hole.

OK. Enough whining. Where do we go from here? What to expect in the next 18 months? How to build upon the inherent (and many) advantages offered by the mobile space?

For rate cards, we have some good news: prices on Android and iOS are converging upward as Android demographics are rising; soon, the two dominant mobile platforms will be in the higher price range. The value of ads is also likely to climb a little as screens get better and larger, and as bandwidth increases: such improvements will (should) allow more visually attractive, more engaging ads. The ecosystem should also benefit from the trend toward more customized advertising. Ideally, promotional campaigns should be completely integrated and provide a carefully designed continuum within the three digital vectors: desktop web to be viewed at home or at the office; mobile formats for quick reading on the go; and tablet-friendly formats for a slower, more engaged, lean-back consumption (reading time is five or ten times higher on an iPad than on a PC). But, again, as long as creative agencies or media themselves do not commit adequate resources to such a virtuous chain, the value created will stay dangerously close to zero. (Those players better hurry up as a myriad of agile startups are getting ready to take control of this neglected potential.)

A few more reasons for being bullish on mobile. For instance, the level of personalization has nothing to do with what we see on the PC; a smartphone is not shared; it’s personal; and it’s the best vector to carry an intimate environment in which to create one’s dedicated social interaction system, transactional tools, entertainment selections (games, movies, books, TV series), etc. Mobile devices come with other high-potential features such as geolocation and the ability to scan a bar-code — all favoring impulse buying. (This happened to me more than once: In a Paris bookstore, if the only copy left of a book I want is worn out, or if the salesperson seems annoyed by my mere presence, I quickly scan the bar-code and order it from Amazon on the spot, right from the store. Apparently, I’m not the only one: about 20% of mobile users admitted they have scanned a bar-code, or taken a picture of a product in a store.) And soon, these features will be supplemented by electronic wallet functions. Think about it: which marketeer wouldn’t dream of having access to such capabilities?

frederic.filloux@mondaynote.com

Google’s looming hegemony

 

If we factor in Google’s geospatial applications + its unique data processing infrastructure + Android tracking, etc., we’re seeing the potential for absolute power over the economy.

Large utility companies worry about Google. Why? Unlike those who mock Google for being a “one-trick pony”, with 99% of its revenue coming from AdWords, they connect the dots. Right before our eyes, the search giant is weaving a web of services and applications aimed at collecting more and more data about everyone and every activity. This accumulation of exabytes (and the ability to process such almost inconceivable volumes) is bound to impact sectors ranging from power generation to transportation and telecommunications.

Consider the following trends. At every level, Western countries are crumbling under their debt load. Nations, states, counties, municipalities become unable to support the investment necessary to modernize — sometimes even to maintain — critical infrastructures. Globally, tax-raising capabilities are diminishing.

In a report about infrastructure in 2030 (500 pages PDF here), the OECD makes the following predictions (emphasis mine):

Through to 2030, annual infrastructure investment requirements for electricity, road and rail transport, telecommunications and water are likely to average around 3.5% of world gross domestic product (GDP).

For OECD countries as a whole, investment requirements in electricity transmission and distribution are expected to more than double through to 2025/30, in road construction almost to double, and to increase by almost 50% in the water supply and treatment sector. (…)

At present, governments are not well placed to meet these growing, increasingly complex challenges. The traditional sources of finance, i.e. government budgets, will come under significant pressure over the coming decades in most OECD countries – due to aging populations, growing demands for social expenditures, security, etc. – and so too will their financing through general and local taxation, as electorates become increasingly reluctant to pay higher taxes.

What’s the solution? The private sector will play a growing role through Public-Private Partnerships (PPPs). In these arrangements, a private company (or, more likely, a consortium of such) builds a bridge, a motorway or a railroad for a city, region or state, at no expense to the taxpayer. It will then reimburse itself from the project’s cash-flow. Examples abound. In France, the elegant €320m ($413m) Millau viaduct was built — and financed — by Eiffage, a €14 billion revenue construction group. In exchange for financing the viaduct, Eiffage was granted a 78-year toll concession with an expected internal rate of return ranging from 9.2% to 17.3%. Across the world, a growing number of projects are built using this type of mechanism.

How can a company commit hundreds of millions of euros, dollars or pounds with an acceptable level of risk over several decades? The answer lies in data analysis and predictive models. Companies engineer credible cash-flow projections using reams of data on operations, usage patterns and component life cycles.
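At its core, that kind of model is a discounted cash-flow projection over the life of the concession, solved for the internal rate of return. Here is a deliberately simplified sketch; every figure in it is invented for illustration (these are not Eiffage’s actual numbers), and the point is simply that better usage data means tighter cash-flow forecasts and a more defensible rate of return:

# Toy concession model: an upfront construction cost followed by decades of toll
# revenue, solved for the internal rate of return (IRR) by bisection on the NPV.
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-6):
    # assumes NPV is positive at rate `lo` and negative at rate `hi`
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Invented example: a 320m construction cost, then 78 years of toll revenue
# growing with forecast traffic.
construction_cost = -320e6
yearly_tolls = [30e6 * 1.02 ** t for t in range(78)]  # assumed 2% yearly growth
print(f"IRR: {irr([construction_cost] + yearly_tolls):.1%}")  # roughly 11%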

What does all this have to do with Google?

Take a transportation company building and managing networks of buses, subways or commuter trains in large metropolitan areas. Over the years, the analysis of tickets and passes will yield tons of data on customer flows, timings, train loads, etc. This is of the essence when assessing the market’s potential for a new project.

Now consider how Google aggregates the data it collects today — and what it will collect in the future. It’s a known fact that cellphones send geolocation data back to Mountain View (or Cupertino). Bouncing from one cell tower to another, catching the signal of a geolocalized wifi transmitter, even if the GPS function is turned off, Android phone users are likely to be tracked in real time. Overlay this (compounded and anonymized) dataset on information-rich maps, including indoor ones, and you get very high-definition profiles of who goes or stays where, anytime.

Let’s push it a bit further. Imagine a big city such as London, operating 500,000 security cameras, which represent the bulk of the 1.85 million CCTVs deployed in the UK — one for every 32 citizens. 20,000 of them are in the subway system. The London Tube is the perfect candidate for partial or total privatization as it bleeds money and screams for renovations. In fact, as several people working at the intersection of geo applications and big data projects told me, Google would be well placed to provide the most helpful datasets. In addition to the circulation data coming from cellphones, Google would use facial recognition technology. As these algorithms are already able to differentiate a woman from a man, they will soon be able to identify (anonymously) ethnicities, ages, etc. Am I exaggerating? Probably not. Mercedes-Benz already has a database of 1.5 million visual representations of pedestrians to be fed into the software of its future self-driving cars. This is a type of application in which, by the way, Google possesses a strong lead, with its fleets of driverless Priuses crisscrossing Northern California and Nevada.

Coming back to the London Tube and its unhappy travelers: we have traffic data, to some degree broken down into demographic clusters; why not then add shopping data (also geo-tagged) derived from search and ad patterns, Street View-related information… Why not also supplement all of the above with smart electrical grid analysis that could refine predictive models even further (every fraction of a percentage point counts…)?

The value of such models is much greater than the sum of their parts. While public transportation operators or utility companies are already good at collecting and analyzing their own data, Google will soon be in the best position to provide powerful predictive models that aggregate and connect many layers of information. In addition, its unparalleled infrastructure and proprietary algorithms give it a unique ability to process these ever-growing datasets. That’s why many large companies around the world are concerned about Google’s ability to soon insert itself into their business.

frederic.filloux@mondaynote.com

 

Schibsted’s extraordinary click machines

 

The Nordic media giant wants to become the worldwide #1 in online classifieds by replicating its high-margin business one market after another, with great discipline.

It all starts in 2005 with a PowerPoint presentation in Paris. At the time, Schibsted ASA, the Norwegian media group, is busy deploying its free newspapers in Switzerland, France and Spain. Schibsted wants its French partner Ouest-France — the largest regional newspaper group — to co-invest in a weird concept: free online classifieds. As always with the Scandinavians, the slide deck is built around a small number of key points. To them, three symptoms attest to the maturity of a market’s online classifieds business: (a) the number one player in the field systematically ranks among the top 10 web sites, regardless of category; (b) it is always much bigger than the number two; (c) it reaps most of the profits in the sector. “Look at the situation here in France”, the Norwegians say, “the first classifieds site ranks far down in the Nielsen rankings. The market is up for grabs, and we intend to get it”. The Oslo and Stockholm executives already had an impressive track record: in 2000, they launched Finn.no in Norway and, in 2003, they acquired Blocket.se in Sweden. Both became incredible cash machines for the group, with margins above 50% and unabated growth. Ouest-France eventually agreed to take a 50% stake in the new venture. In November 2010, it sold its stake back to Schibsted at a €400m valuation. (As we’ll see in a moment, the classifieds site Le Bon Coin is now worth more than twice that number.)

November 2012. I’m sitting in the office of Olivier Aizac, CEO of Le Bon Coin, the French iteration of Schibsted’s free classifieds concept. The office space is dense, scattered over several floors of a building near the Paris Bourse. Since my last visit in 2009 (see a previous Monday Note, Learning from free classifieds), the startup has grown from a staff of 15 to 150 people. And Aizac tells me he plans to hire 70 more in 2013. Crisis or not, the business is booming.

A few metrics: according to Nielsen, LeBonCoin.fr (French for The Right Spot) ranks #9 in France with 17m monthly unique users. With more than 6 billion page views per month, it even ranks #3, behind Facebook and Google. Revenue-wise, Le Bon Coin might hit the €100m mark this year, with a profit margin slightly above… 70%. For the third quarter of this year, the business grew by 50% vs. a year ago.

In terms of competition, it dominates every segment: cars, real estate (twice the size of Axel Springer’s SeLoger.com) and jobs, with about 60,000 classifieds, roughly five times the inventory of a good paid-for job board. (LeBonCoin is not positioned in the upper segment, though; it mostly targets regional small to medium businesses.)

Le Bon Coin’s revenue stream is made of three parts: premium services (you pay to add a picture, get a better ranking, or track your ad); fees coming from the growing number of professionals who flock to LBC (many car dealerships put their entire inventory there); and advertising, for which the primary sectors are banking and insurance, services such as mobile phone carriers or pay-TV, and automobile. Although details are scarce, LBC seems to have given up the usual banner sales, focusing instead on segmented yearly deals: a brand will target a specific demographic and LBC will deliver, for half a million or a million euros per annum.

One preconceived idea depicts Le Bon Coin as sitting at the cheaper end of the consumer market. Wrong. In the car segment, its most active advertiser is Audi, for whom LBC provides tailor-made promotions. (Strangely enough, Renault is much slower to catch the wave.) “We are able to serve any type of market”, says Olivier Aizac, who shows an ad peddling a €1.4m Bugatti, and another for the brand new low-cost Peugeot 301, not yet available in dealerships but offered on LBC for €15,000. Similarly, LBC is the place to go to rent a villa on the Côte d’Azur or a chalet for the ski season. With more than 21 million ads at any given moment, you can find pretty much anything there.

Now, let’s zoom out and look at a broader picture. How far can Le Bon Coin go? And how will its cluster of free classifieds impact Schibsted’s future?

Today, free online classifieds weigh about 25% of Schibsted’s revenue (about 15bn Norwegian kroner, or €2bn, this year), but they account for 47% of the group’s Ebitda (2.15bn NOK, €300m). All online activities now represent 39% of the revenue and 62% of the Ebitda.

The whole strategy can be summed up in these two charts. The first shows the global deployment of the free classifieds business:

Through acquisitions, joint ventures or ex nihilo creations, Schibsted now operates more than 20 franchises. Their development process is highly standardized. Growth phases have been codified in great detail, managers often gather to compare notes, and the Oslo mothership watches everything, providing KPIs, guidelines, etc. The result is the second chart, showing the spread of deployment phases. More than half of the portfolio is still in its infancy, but most of it is likely to follow the path to success:

Source: Schibsted Financial Statements

This global vision, combined with what is seen as near-perfect execution, explains why the financial community is betting so much on Schibsted’s classifieds business.

When assessing the potential of each local brand, analysts project the performance of the best and most mature properties (the Nordic ones) onto the new ones. As an example, see below the number of visits per capita and per month, from web and mobile, since product launch:

Source: Danske Market Equities

For Le Bon Coin’s future, this draws a glowing picture: according to Danske Market Equities, the Norwegian Finn.no today generates ten times more revenue per page view than LBC, and twenty times more when measured by average revenue per user (ARPU). The investment firm believes that Le Bon Coin’s revenue can reach €500m in 2015 while retaining a 65% margin. (As noted by its CEO, Le Bon Coin has yet to tap into the trove of data accumulated over the last six years, which could generate highly valuable consumer profiling information.)
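To see what such a convergence implies, here is a rough Python sketch using only the figures quoted above (6 billion page views a month, roughly €100m in revenue, a tenfold monetization gap with Finn.no); the assumption that LBC closes half of that gap is purely illustrative.

# Back-of-the-envelope headroom estimate using the figures quoted above;
# the "half the gap" convergence assumption is purely illustrative.
lbc_revenue = 100e6               # EUR per year ("might hit the 100m mark this year")
lbc_page_views = 6e9 * 12         # page views per year (6bn per month)

lbc_rev_per_kpv = lbc_revenue / lbc_page_views * 1000   # revenue per 1,000 page views
finn_rev_per_kpv = lbc_rev_per_kpv * 10                 # Finn.no monetizes 10x better

# Illustrative revenue if LBC closed half the monetization gap at constant traffic:
projected = lbc_page_views / 1000 * (lbc_rev_per_kpv + finn_rev_per_kpv) / 2

print(f"LBC revenue per 1,000 page views today: {lbc_rev_per_kpv:.2f} EUR")
print(f"Illustrative revenue at half the gap:   {projected / 1e6:.0f} million EUR")

That crude exercise lands around €550m, the same order of magnitude as Danske’s €500m figure, which is presumably why analysts benchmark every new market against the Nordic properties.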

When translated into valuation projections, the performance of Schibsted’s classifieds businesses far exceeds the weight of its traditional media properties (print and online newspapers). The sum-of-the-parts valuations drawn up by several private equity firms show the classifieds business yielding more than 80% of the total value of this 173-year-old group.

frederic.filloux@mondaynote.com
Disclosure: I worked for Schibsted for nine years altogether between 2001 and 2010; six years indirectly as the editor of 20 minutes and three years afterwards, in a business development unit attached to the international division.
——- 

The Release Windows Archaism

 

The television and media industries are stuck in a wasteful rear-guard fight for the preservation of an analog-era relic: the Release Windows system. Designed to avoid destructive competition among media, it ends up boosting piracy while frustrating honest viewers willing to pay.

A couple of months ago, I purchased the first season of the TV series Homeland from the iTunes Store. I paid $32 for 12 episodes that all landed seamlessly on my iPad. I gulped them down in a few days and was left in a state of withdrawal. Then, on September 30th, when season 2 started, I would have had no alternative but to download free but illegal torrent files. Hundreds of thousands of people anxious to find out the whereabouts of the Marine turncoat pursued by the bipolar CIA operative were in the same quandary (go to the dedicated Guardian blog for more on the series).

In the process, the three losers are:
– The Fox 21 production company that carries the risk of putting the show together (which costs about $36m per season, $3m per episode)
– Apple, which takes its usual cut. (The net loss for both will actually be $64 since the show has been signed up for a third season by the paid-for Showtime channel, and I wonder if I’ll have the patience to wait months for its availability on iTunes.)
– And me, as I would have to go through the painstaking task of finding the right torrent file, hoping that it is not bogus, corrupted, or worse, infected by a virus.

Here, we put our finger on the stupidity of the Release Windows system, a relic of the VHS era. To make a long story short, the idea goes back to the ’80s, when the industry devised a system to prevent different media — at the time, movie theaters, TV networks, cable TV and VHS — from cannibalizing each other. In the case of a motion picture, the Release Windows mechanism called for a four-month delay before its release on DVD, additional months before the release on pay-TV and video-on-demand, and a couple of years before showing up on mainstream broadcast networks (where the film is heavily edited, laced with commercials, dubbed, etc.)

The Western world was not the only one to adopt the Release Windows system. At the last Forum d’Avignon cultural event a couple of weeks ago, Ernst & Young presented a survey titled Mastering tempo: creating long-term value amidst accelerating demand (PDF in English here and in French here).

The graph below shows the state of the windows mechanism in various countries:

Europe should be happy when comparing its situation to India’s. There, it takes half a year to see a movie on DVD, while the box office contributes 75% of a film’s revenue. Ernst & Young expects this number to drop only slightly, to 69%, by 2015 (by comparison, the rate is only 28% in the UK). Even though things are changing fast in India, internet penetration is a mere 11.4% of the population and moviegoing is still a hugely popular entertainment occasion.

In the United States, by comparison, despite the large adoption of cable TV, Blu-ray and VOD, and a 78% internet penetration rate (84% in the UK and higher in Northern Europe), the Release Windows system shows little change: according to the E&Y survey, the window went from 166 days in 2000 to 125 days in 2011:

Does it make sense to preserve a system roughly comparable to India’s in the US or Europe, where the connected digital equipment rate is seven times higher?

Motion pictures should probably be granted a short head start in the release process. But it should coincide with the theatrical lifetime of a production, which is about 3-4 weeks. Even better, it should be adjusted to the box-office life: if a movie performs so well that people keep flocking to theaters, DVDs should wait. Conversely, if the movie bombs, it should be given a chance to resurrect itself online, quickly, sustained by a cheaper but better targeted marketing campaign mostly powered by social networks.

Similarly, movie releases should be simultaneous and global. I see no reason why Apple or Microsoft can make their products available worldwide almost at the same time while a moviegoer has to wait three weeks here or two months there. As for the DVD Release Window, it should coincide with the complete availability of a movie for all possible audiences, worldwide and on every medium. Why? Because the release on DVD systematically opens the piracy floodgates (whereas legitimate availability on Netflix, Amazon Prime or iTunes does not).

As for TV shows such as Homeland and other hits, there is no justification whatsoever for preserving this calendar archaism. They should be made universally available from the day they are aired on TV, period. Otherwise, customers will vote with their mouse and find the right file-sharing sites anyway.

The “Industry” fails to assess three shifts here.

–The first one is the globalization of audiences. Worldwide, about 360m people are native English speakers; for an additional 375m, it is a second language, and 750m more picked up English as a foreign language at school. That’s about 1.5 billion people likely to be interested in English-speaking culture. As a result, a growing proportion of teenagers watch their pirated series without subtitles — or scruples.

–Then, the “spread factor”: once a show becomes a hit in the United States, it gets widely commented on in Europe and elsewhere, not only because a large number of people speak serviceable English, but also because many national websites propagate the US buzz. Hollywood execs would be surprised to see how well young (potential) audiences abroad know their productions months before seeing them.

–And finally, technology is definitely on the side of the foreign consumer: better connectivity (expect 5 minutes to download an episode), high-definition image, great sound… and mobility (just take a high-speed train in Europe and see how many passengers are watching videos on their tablets).

To conclude, let’s have a quick look at the numbers. Say a full season of Homeland costs $40m to produce. Let’s assume the first release is supposed to cover 40% of the costs, that is $16m. Homeland is said to gather 2 million viewers. Each viewer therefore contributes $8 to the program’s economics. Compare that to what I paid through iTunes: my $32 probably leaves about half to the producers; or compare it to the DVD, initially sold for $60 for the season, now discounted to $20. You get my point. Even if the producer nets on average $15 per online viewer, it would need only 1.6 million paid-for viewers worldwide to break even (much less when counting foreign syndication). Even taking into account the unavoidable piracy (which also acts as a powerful promotional channel), with two billion people connected to the internet outside the US, the math heavily favors the end of the counter-productive and honest-viewer-hostile Release Windows archaism.
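For readers who want the arithmetic spelled out, here is a minimal Python restatement of the figures above; the only assumptions are those already quoted (a 40% first-release target and roughly $15 net per online viewer).

# The paragraph's figures, spelled out; nothing here is new data.
season_cost = 40e6                  # production cost of a full season
first_release_share = 0.40          # share the first TV run is assumed to cover
first_release_target = season_cost * first_release_share    # $16m
tv_viewers = 2e6                    # reported audience of the show

print(f"Contribution per TV viewer: ${first_release_target / tv_viewers:.0f}")

producer_net_itunes = 32 / 2        # roughly half of a $32 season pass goes to producers
net_per_online_viewer = 15          # the article's average assumption

viewers_needed = first_release_target / net_per_online_viewer
print(f"Producer's net on a $32 iTunes season: about ${producer_net_itunes:.0f} per buyer")
print(f"Online viewers needed to cover the same $16m: {viewers_needed / 1e6:.2f} million")

On these assumptions, roughly 1.1 million online viewers already cover the first-release target, comfortably below the 1.6 million figure quoted above; a global, day-and-date digital release does not need a huge share of the worldwide audience to pay.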

–frederic.filloux@mondaynote.com