About Frédéric Filloux

Posts by Frédéric Filloux:

What’s the Fuss About Native Ads?

 

In the search for new advertising models, Native Ads are booming. The ensuing Web vs. Native controversy is a festival of fake naïveté and misplaced indignation. 

Native Advertising is the politically correct term for Advertorial, period. Or rather, it’s an upgrade: the digital version of an old practice dating back to the era of typewriters and lead printing presses. Everyone who’s been in the publishing business long enough remembers the tug-of-war with the sales department, which always wants its ads to appear next to editorial content that provides good “context”. This makes the whole “new” debate about Native Ads quite amusing. The magazine sector (more than newspapers) has always referred to “clean” and “tainted” sections (the latter kept expanding over the years). In consumer and lifestyle sections, editorial content produced by the newsroom is often tailored to fit surrounding ads (or to flatter a brand that will buy legit placements).

The digital era pushes the trend several steps further. Today, legacy media brands such as Forbes, Atlantic Media, or the Washington Post have joined the Native Ads bandwagon. Forbes even became the poster child for that business, thanks to the unapologetic approach carried out by its chief product officer Lewis DVorkin (see his insightful blog and also this panel at the recent Paid Content Live conference). Advertising is not the only way DVorkin has revamped Forbes. Last week, Les Echos (the business daily that’s part of the media group I work for) ran an interesting piece about it titled “The Old Press in Start-up Mode” (“La vieille presse en mode start-up”). It details the decisive — and successful — moves by the century-old media house: a downsized newsroom, external contributors (by the thousands, and mostly unpaid) who produce a huge stream of 400 to 500 pieces a day. “In some cases”, wrote Lucie Robequain, Les Echos’ New York correspondent, “the boundary between journalism and advertorial can be thin…” To which Lewis DVorkin retorts: “Frankly, do you think a newspaper that conveys corporate voices is more noble? At Forbes, at least, we are transparent: We know which company the contributor works for and we expose potential conflicts of interest in the first graph…” Maybe. But screening a thousand contributors sounds a bit challenging to me… And Forbes evidently exposed itself as part of the “sold” blogosphere. Les Echos’ piece also quotes Joshua Benton from Harvard’s Nieman Journalism Lab, who finds the bulk of Forbes’ production to be, on average, not as good as it used to be, but concedes the top 10% is actually better…

As for Native Advertising, two years ago, Forbes industrialized the concept by creating BrandVoice. Here is the official definition:

Forbes BrandVoice allows marketers to connect directly with the Forbes audience by enabling them to create content – and participate in the conversation – on the Forbes digital publishing platform. Each BrandVoice is written, edited and produced by the marketer.

In practice, Forbes lets marketers use the site’s Content Management System (CMS) to create their content at will. The commercial deal — from what we can learn — involves volumes and placements that cause the rate to vary between $50,000 and $100,000 per month. The package can also include traditional banners that send traffic back to the BrandVoice page.

At any given moment, there are about 16 brands running on Forbes’ “Voices”. This revenue stream was a significant contributor to the publisher’s financial performance. According to AdWeek (emphasis mine):

The company achieved its best financial performance in five years in 2012, according to a memo released this morning by Forbes Media CEO Mike Perlis. Digital ad revenue, which increased 19 percent year over year, accounted for half of the company’s total ad revenue for the year, said Perlis. Ten percent of total revenue came from advertisers who incorporated BrandVoice into their buys, and by the end of this year, that share is estimated to rise to 25 percent.

Things seemed pretty positive across other areas of Forbes’ business as well. Newsstand sales and ad pages were up 2 percent and 4 percent, respectively, amid industry-wide drops in both areas. The relatively new tablet app recently broke 200,000 downloads.

A closer look gives a slightly bleaker picture: According to the latest data from the Magazine Publishers Association, between Q1 2012 and Q1 2013, Forbes magazine (the print version only) lost 16% in ad revenue (from $50m to $42m). By comparison, Fast Company scored +25%, Fortune +7%, but The Economist -27% and Bloomberg Businessweek -30%. Overall, the titles compiled by the MPA are stable (+0.5%).

I almost never click on banners (except to see whether they work as expected on the sites and apps I’m in charge of). Most of the time their design sucks, terribly so, and the underlying content is usually below grade. However, if the subject appeals to me, I will click on Native Ads or brand content. I’ll read it like any other story, knowing full well it’s promotional material. The big difference between a crude ad and a content-based one is the storytelling dimension. Fact is: Every company has great stories to tell about its products, strategy or vision. And I don’t see why they shouldn’t be told using the same storytelling tools news media use. As long as it’s done properly, with a label explaining the content’s origin, I don’t see the problem (for more on this question, read a previous Monday Note: The Insidious Power of Brand Content). In my view, Forbes blurs the line a bit too much, but Atlantic’s business site Quartz is doing fine in that regard. With the required precautions, I’m certain Native Ads, or branded content, are a potent way to go, especially considering the alarming state of other forms of digital ads. Click-through rates are much better (2%-5% vs. a fraction of a percent for a dumb banner) and the connection to social media works reasonably well.

For news media companies obsessed with their journalistic integrity (some still are…), the development of such new formats makes it more complicated to decide what’s acceptable and what’s not. Ultimately, the editor should call the shots. Which brings us to the governance of media companies. For digital media, the pervasive advertising pressure is likely to keep growing. Today, most rely on a Chief Revenue Officer to decide what’s best for the bottom line: balancing circulation and advertising, or arbitrating between a large audience at low yield and a smaller audience at higher yield, for instance. But, in the end, only the editor must be held accountable for the quality and credibility of the content — which contribute to the commercial worthiness of the media. Especially in the digital field, editors should be shielded from business pressure. Editors should be selected by CEOs and appointed by boards or, better, boards of trustees. Independence will become increasingly scarce.

frederic.filloux@mondaynote.com

A lesson of Public e-Policy

 

The small Baltic republic of Estonia is run like a corporation. But its president believes government must play a crucial role in areas of digital policy such as secure ID.

Toomas Hendrik Ilves must feel one-of-a-kind when he attends international summits. His personal trajectory has nothing in common with the backgrounds of other heads of state. Born in Stockholm in 1953, where his parents had taken refuge from Soviet-controlled Estonia, Ilves was raised mostly in the United States. There, he got a bachelor’s degree in psychology from Columbia University and a master’s degree in the same subject from the University of Pennsylvania. In 1991, when Estonia became independent, Ilves was in Munich, working as a journalist for Radio Free Europe (he is also fluent in English, German and Latin). Two years later, he was appointed ambassador to — where else? — the United States. In 2006, a centrist coalition elected him president of the Republic of Estonia (1.4m inhabitants).

One more thing about Toomas Hendrik Ilves: he programmed his first computer at the age of 13. A skill that would prove decisive for his country’s fate.

Last week in Paris, president Ilves was the keynote speaker at a conference organized by Jouve Group, a 3,000-employee French company specializing in digital distribution. The bow-tied Estonian captivated the audience with his straight talk, the polar opposite of the classic politician’s. Here are excerpts from my notes:

“At the [post-independence] time, the country, plagued by corruption, was rather technologically backward. To give an example, the phone system in the capital [Tallinn] dated back to 1938. One of our first key decisions was to go for the latest digital technologies instead of being encumbered by analog ones. For instance, Finland offered to provide Estonia with much more modern telecommunication switching systems, but still based on analog technology. We declined, and elected instead to buy the latest digital network equipment”.  

Estonia’s ability to build a completely new infrastructure without being dragged down by technologies from the past (and by the old guard defending them) was essential to the nation’s development. When I later asked him about the main resistance factors he had encountered, he mentioned legacy technologies: “You, in France, almost invented the internet with the Minitel. Unfortunately, you were still pushing the Minitel when Mosaic [the first web browser] was invented”. (The videotext-based system was officially retired at last in… 2012. France lost almost a decade by delaying its embrace of Internet Protocols.)

The other key decision was introducing computers in schools and teaching programming on a large scale. Combined with the hunger for openness in a tiny country emerging from 45 years of Soviet domination, this explains why Estonia has become an energetic tech incubator, nurturing big names like Kazaa or Skype (Skype still maintains its R&D center in Tallinn).

“Every municipality in Estonia wanted to be connected to the Internet, even when officials didn’t know what it was. (…) And envy played its part… With neighbors such as Finland or Sweden, the countries of Nokia and Ericsson, we wanted to be like them.”

To further encourage the transition to digital, cities opened Internet centers to give access to people who couldn’t afford computers. If, in Western Europe, the Internet was seen as a prime vector of American imperialism, up in the newly freed Baltic states, it was seen as an instrument of empowerment and access to the world:

“We wanted to take the leap forward and build a modern country from the outset. The first public service we chose to go digital was the tax system. As a result, not only did we eliminate corruption in the tax collection system — a computer is difficult to bribe — but we increased the amount of money the state collected. We put some incentives in: When filing digitally, you’d get your tax refund within two weeks versus several months with paper. Today, more than 95% of tax returns are filed electronically. And the fact that we got more money overcame most of the resistance in the administration and paved the way for future developments”.

“At some point we decided to give every citizen a chip card… In other words, a digital ID card. When I first mentioned this to some Anglo-Saxon government officials, they opposed the classic “Big Brother” argument. Our belief was, if we really wanted to build a digital nation, the government had to be the guarantor of digital authentication by providing everyone with a secure ID. It’s the government’s responsibility to ensure that someone who connects to an online service is the right person. Everything was built on public-key/private-key encryption. In Estonia, a digital ID is a legal signature. The issue of secure ID is essential, otherwise we’ll end up stealing from ourselves. Big Brother is not the State, Big Brother lies in Big Data.”
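
For readers curious about the mechanics, here is a minimal sketch of the sign/verify principle the president describes, written with the Python cryptography library. Estonia’s actual scheme relies on certified smart-card chips and a national PKI; the document and personal code below are purely illustrative.

```python
from cryptography.hazmat.primitives.asymmetric import ed25519

# The private key never leaves the citizen's chip card;
# the matching public key is published through the national PKI.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

document = b"Tax return 2012, citizen 38001085718"  # illustrative payload
signature = private_key.sign(document)

# Any online service can verify the signature against the public key.
# verify() raises InvalidSignature if the document or signature was altered,
# which is what makes the digital ID usable as a legally binding signature.
public_key.verify(signature, document)
print("signature verified")
```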

“In Estonia, every citizen owns his or her data and has full access to it. We currently have about 350 major services securely accessible online. A patient never gets a paper prescription; the doctor loads the prescription onto the card and the patient can go to any pharmacy. The system will soon be extended to Sweden, Denmark, Finland, Norway, as our citizens travel a lot. In addition, everyone can access their medical records. But they can choose which doctors are allowed to see them. I was actually quite surprised when a head of State from Southern Europe told me some paper medical records bear the mention “not to be shown to the patient” [I suspect it was France...]. As for privacy protection, the ID chip card works both ways. If a policeman wants to check on your boyfriend outside the boundaries of a legal investigation, the system will flag it — it actually happened.”

As the Estonian president explained, some good decisions also come out of pure serendipity:

“[In the Nineties], Estonia had the will but not all the financial resources to build all the infrastructure it wanted, such as massive centralized data centers. Instead, the choice was to interconnect, in the most secure way, all the existing government databases. The result has been a highly decentralized network of government servers that prevents most abuses. Again, the citizen can access his health records, his tax records, the DMV [Department of Motor Vehicles], but none of the respective employees can connect to another database”.
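
Two rules are at work here: each agency can only read its own registry, and every access is logged so that the citizen (and the system) can review it. The toy Python sketch below is my own illustration of how such checks could look, not Estonia’s actual protocol:

```python
from datetime import datetime, timezone

# role -> the only registry that role may query
PERMISSIONS = {"doctor": "health", "tax_clerk": "tax", "dmv_clerk": "vehicles"}

audit_log = []  # every attempt is recorded and visible to the citizen

def query(registry, role, official_id, citizen_id, case_id=None):
    allowed = PERMISSIONS.get(role) == registry
    audit_log.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "registry": registry, "role": role, "official": official_id,
        "citizen": citizen_id, "case": case_id, "allowed": allowed,
        # An access with no documented case is flagged for review,
        # like the policeman in the president's anecdote.
        "flagged": allowed and case_id is None,
    })
    if not allowed:
        raise PermissionError(f"{role} may not read the {registry} registry")
    return f"record of citizen {citizen_id} from the {registry} registry"

query("health", "doctor", "D-102", "C-881", case_id="consultation-4411")
print(audit_log[-1]["flagged"])  # False: a documented, legitimate access
```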

The former Soviet Union made the small Baltic state pay a heavy price for its freedom. In that respect, I recommend reading Cyber War by Richard Clarke, a former cyber-security advisor in the Clinton administration, who describes the multiple cyber-attacks suffered by Estonia in 2007. These actually helped the country develop skillful specialists in that field. Since 2008, Tallinn has hosted NATO’s main cyber defense center, in addition to an EU large-scale IT systems center.

Toomas Hendrik Ilves stressed the importance of cyber-defense, both at the public and private sector level:

“Vulnerability to cyber attacks must be seen as a complete market failure. It is completely unacceptable for a credit card company to deduct theft from its revenue base, or for a water supply company to invoke a cyber attack as force majeure. It is their responsibility to protect their systems and their customers. (…) Every company should be aware of this, otherwise we’ll see all our intellectual property ending up in China”.

–frederic.filloux@mondaynote.com

Schibsted’s High Octane Diversification

 

The Norwegian media group Schibsted now aggressively invests in startups. The goal: digital dominance, one market at a time. France is next in line. Here is a look at their strategy.

This thought haunts most media executives’ sleepless nights: “My legacy business is taking a hit from the internet; my digital conversion is basically on track, but it comes with massive value destruction. We need both a growth engine and consolidation. How do we achieve this? What are our core assets to build upon? Should we undertake a major diversification that could benefit from our brand and know-how?” (At that moment, the buzzer goes off, it’s time to go to work.) Actually, such nighttime cogitations are a good sign; they are the privilege of people gifted with a long-term view.

The Scandinavian media powerhouse Schibsted ASA falls into the long-termist category. Key FY 2012 data follow. Revenue: 15bn Norwegian kroner (€2bn or $2.6bn); EBIT margin: 13.5%. The group currently employs 7,800 people spread over 29 countries. 40% of the revenue and 69% of the EBITDA come from online activities. Online classifieds account for 25% of revenue and 52% of the EBITDA; the rest comes from publishing. (The usual disclosure: I worked for Schibsted between 2007 and 2009, in the international division).

The company went through the delicate transition to digital about five years ahead of other media conglomerates in the Western world. To be fair, Schibsted enjoyed unique conditions: profitable print assets, huge penetration in small Nordic markets immune to foreign players, and a solid grasp of all components of the business, from copy sales and subscriptions for newspapers and magazines to advertising and distribution channels. In addition, the group enjoys a stable ownership structure (controlled by a trust), and its board always encourages the management to aim high and take risks. The company is led by a lean team, largely staffed by McKinsey alumni: only 60 people at the Oslo headquarters oversee the entire operation.

The transition began in 1995 when Schibsted came to realize the media sector’s center of gravity would inevitably shift to digital. The move could be progressive for reading habits but it would definitely be swift and hard for critical revenue streams such as classifieds and consumer services. Hence the unofficial motto that still remains at the core of Schibsted’s strategy: Accelerating the inevitable (before the inevitable falls on us). Such a view led to speeding up the demise of print classifieds, for instance, in order to free oxygen for emerging digital products. Not exactly popular at the time but, thanks to methodical pedagogy, the transition went well.

One after the other, business units moved to digital. Then the dot-com crash hit. In Norway and Sweden, Schibsted media properties were largely deployed online, with large dedicated newsrooms and emerging consumer services built from scratch or from acquisitions. Management wondered what to do: Should we opt for a quick and massive downsizing to offset a brutal 50% drop in advertising revenue? Schibsted took the opposite tack: Yes, business is terrible, but this is mostly the result of the financial crisis; the audience is still here, and not only will it not go away but, eventually, it will experience huge growth. This was the basis for two key decisions: pursuing investments in digital journalism while finding ways to monetize it, and doing whatever it took to dominate the classifieds business.

In Sweden, a bright spot kept blinking on Schibsted’s radar. Blocket was growing like crazy. It was a bare-bones classifieds website, offering a mixture of free and premium ads in the simplest and most efficient way. At first, Schibsted Sweden tried to replicate Blocket’s model with the goal of killing it. After all, the group thought, it had all the media firepower needed to lift any brand… Wrong. After a while, it turned out Schibsted’s copycat still lagged behind the original. With the kind of pragmatism allowed by deep pockets, Schibsted decided to acquire Blocket (for a hefty price). The clever classifieds website would become the matrix for the group’s foray into global classifieds.

By 2006, Schibsted had acquired and developed a cluster of consumer-oriented websites, from Yellow Pages-like directories to price-comparison sites and consumer-data services. Until then, the whole assemblage had been built on pure opportunism. It was time to put things in order. Hence, in 2007, the creation of Tillväxmedier, the first iteration of Schibsted Development. (The Norwegian version was launched in 2010 and the French one starts this year.)

Last week in Paris, I met Richard Sandenskog, Tillväxmedier’s investment manager, and Marc Brandsma, the newly appointed CEO of Schibsted Development France. Sandenskog is a former journalist who also spent eight years in London as a product manager for Yahoo! Brandsma is a seasoned French entrepreneur and former venture capitalist. Despite local particularities precluding a dumb replication of Nordic successes, two basic principles remain:

1. Invest in the number one in a niche market, or a potential number one in a larger one. “In the online business, there is no room for number two”, said Richard Sandenskog. “We want to leverage our dominance on a given market to build brands and drive traffic. The goal is to find the best way to expose the new brand in different channels and integrate it in various properties. The keyword is relevant traffic. We don’t care for page views for their sake, but for the value they bring. We see clicks as a currency.”

2. Pick the right product in the right sector. In Sweden, the Schibsted Development portfolio revolves around the idea of empowering the consumer. To sum up: people are increasingly lost in a jungle of pricing, plans, offers and deals for the services they need. It could be cell phones, energy bills, consumer loans… Hence a pattern for acquisitions: a bulk-purchase website for electricity (the Swedish market is largely deregulated, with about 100 utility companies); a helper to find the best cellular carrier plan based on individual usage; a personal finance site that lets consumers shop around for the best loan without degrading their credit rating; a personal factoring service where anyone can auction off invoices, etc.
Most are now #1 in their segment. “We give the power back to the consumer,” sums up Richard Sandenskog. “We are like Mother Teresa, but we make money doing it…” Altogether, Tillväxmedier’s portfolio encompasses about 20 companies that made a billion Swedish kronor (€120m, $155m) in 2012 with a 12% EBITDA margin (several companies are still in the growth phase). All in five years…

France will be a different story. It’s five times bigger than Sweden, a market in which startups can be expensive. But what triggered Schibsted ASA’s decision to create a growth vehicle here is the spectacular performance of the classifieds site LeBoncoin.fr (see a previous Monday Note: Schibsted’s extraordinary click machines): €98m in revenue and a cool 68% EBITDA margin last year. LeBoncoin draws 17m unique visitors (according to Nielsen). Based on this valuable asset, explains Marc Brandsma, the goal is to create the #1 online group in France (besides Facebook and Google). “The typical players we are looking for are B2C companies that already have a proven product — we won’t invest in PowerPoint presentations — driven by a management team aiming to be the leader in their market. Then we acquire it; we buy out all minority shareholders if necessary.” No kolkhoz here; decisions must be made quickly, without interference. “At that point,” adds Brandsma, “we tell managers we’ll take care of growth by providing traffic, brand notoriety, marketing, all based on best practices and proven Schibsted expertise.” Two sectors Marc Brandsma says he won’t touch, though: business-to-business services and news media (ouch…)

frederic.filloux@mondaynote.com

The Mobile Rogue Wave

 

Publishers are concerned: The shift to mobile advertising revenue is lagging way behind the transfer of users to smartphones and tablets. Solutions are coming, but it might take a while before mobile ads catch up with users.
(A mistake in the ad revenue chart has been corrected) 

Last week, at a self-congratulatory celebration held by the French audit bureau of circulation (called OJD), the sports daily l’Equipe was honored for the best progression in mobile audience. (I’m also happy to mention that Les Echos, the business group I’m working for, won the award for the largest growth in overall circulation, with a gain of +3.3% in 2012 — in a national market losing 3.8%.) In terms of mobile page views, l’Equipe is three times bigger than the largest national daily (Le Monde). Unfortunately, its publisher tarnished the end of the ceremony a bit by saying [I'm paraphrasing]: “Well, thanks for the award. But let’s not fool ourselves. The half of our digital traffic that comes from mobile represents only 5% of our overall digital revenue. We better react quickly, otherwise we’ll be dead soon.” While that outburst triggered only reluctant applause, almost everyone in the audience agreed.

Two days before, IREP (an advertising economics research organization) released 2012 data on advertising revenue for all media. Here is a quick look:

All media............€13,300m......-3.5%
TV...................€3,300m.......-4.5%
Print press (all)....€3,209m.......-8.2%
National dailies.....€233m.........-8.9%
Internet display.....€646m.........+4.8%
Internet search......€1,141m.......+7.0%
Mobile...............€43m..........+29%

A few comments:
— The print press is nosediving faster than ever: In 2011, national dailies were losing 3.7% in revenue; in 2012, they lost almost 9%; and Q1 2013 doesn’t look better.
— On the digital side: Search is now almost twice as big as display advertising and it’s growing faster (7% vs. 4.8%). Google is grabbing most of this growth, as the €1.14bn in revenue mentioned by IREP is roughly the equivalent of Google’s revenue in France.
— Mobile revenue is the fastest growing segment (+29%), but weighs only 2% of the entire digital segment (€1,830m revenue in 2012).

Looking at audiences reveals an even bleaker picture. Data compiled by the French circulation bureau for 87 media show that, between February 2012 and February 2013, the mobile applications audience grew 67% in visits and 102% in page views — again, in a segment whose ad revenue only grew 29% in 2012.

The conclusion is dreadful. Not only are audiences massively flocking to mobile (more visits) and spending more time in their favorite media apps (an even greater increase in page views), but each viewer brings in less and less money, as ad revenue grew slower than visits — by a factor of two — and slower than page views — by a factor of three.
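
The squeeze is easy to quantify. A quick back-of-the-envelope computation in Python, using the growth figures above as rough indices (the measurement periods don’t overlap exactly, so treat this as an order of magnitude):

```python
revenue_growth  = 1.29  # French mobile ad revenue, 2012 (+29%)
visits_growth   = 1.67  # mobile app visits, Feb 2012 -> Feb 2013 (+67%)
pageview_growth = 2.02  # mobile app page views, same period (+102%)

print(f"revenue per visit:     {revenue_growth / visits_growth:.0%} of its former level")
print(f"revenue per page view: {revenue_growth / pageview_growth:.0%} of its former level")
# -> roughly 77% and 64%: a mobile page view earns about a third less
#    than it did a year earlier.
```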

At the same time, in order to address this shift in audience, media are allocating more and more resources to mobile: Apps gain in sophistication and have to run on a greater number of devices. By the end of this year, the iOS ecosystem, until recently the simplest to deal with, will have at least five different screen sizes, and Android dozens of possible configurations. To add insult to injury, mobile apps don’t allow cookies, which prevents most measurements, and users tend to randomly switch from their mobile devices to their PC or tablet, making tracking even more difficult…

Where do we go from here?

Publishers have no choice but to follow their readers. But, in doing so, they had better be smart and select the right vectors. The coming months and years are likely to see scores of experiments. Native applications, i.e. those dedicated to a given ecosystem, might not last forever. For now, they still offer superior performance, but web apps, served from the internet regardless of the terminal’s operating system, are gaining traction. They are becoming more fluid, accommodating more functionality and improving their storage of content for offline reading, though it will be a while before they become mainstream. In addition, web apps allow permanent improvement: look at their version numbers and you’ll see publishers pushing new releases on a weekly basis. They do so at will, as opposed to begging Apple to speed up the approval of native applications (not to mention the absence of a direct link to the customer).

Similarly, many publishers are placing serious bets on responsive design sites that dynamically adjust to the screen size (see a previous Monday Note on Atlantic’s excellent business site Quartz). Liquid design, as it is also called, is great in theory but extremely difficult to develop, and the slightest change requires diving into hugely complex HTML code (which also makes pages heavier to download and render).

Technically speaking, in the near future, as rendering engines and processors keep improving, the shift to mobile will no longer be a problem. But solving the low yield of mobile advertising is another matter. The advertising community evangelizes the promise of Real-Time Bidding; RTB basically removes the Ken and Barbie from the transaction process, as demand and supply are matched through automated marketplaces. But RTB is also known to push asset prices further down. As usual in the digital ad business, the likely winner will be Google, along with a few smaller players — before these are eventually crushed by Google.
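
For readers unfamiliar with the mechanism, RTB exchanges typically run a second-price auction for each impression: the highest bidder wins but pays the runner-up’s bid. A toy Python sketch (illustrative only; real exchanges add floors, fees and millisecond deadlines) shows why thin demand drags prices toward the floor:

```python
def run_auction(bids, floor_price):
    """bids: buyer -> CPM bid. Returns (winner, clearing_price), or None if unsold."""
    eligible = {buyer: cpm for buyer, cpm in bids.items() if cpm >= floor_price}
    if not eligible:
        return None  # impression unsold, or handed to a fallback house ad
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    clearing_price = ranked[1][1] if len(ranked) > 1 else floor_price
    return winner, clearing_price

print(run_auction({"brand_a": 2.10, "brand_b": 1.40, "brand_c": 0.55}, floor_price=0.80))
# -> ('brand_a', 1.4): the winner pays the second-highest bid, not its own,
#    so with few bidders the clearing price sinks toward the floor.
```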

The mobile ecosystem will come up with smarter innovations. Some will involve geolocated advertising, but the concept, great in demos, has yet to prove its revenue potential. Data collected through various means are a much more potent vector for stimulating mobile ads. Facebook knows it only too well: in the last quarter of 2012, it made $305m in mobile ads (that’s more than five times the French mobile ad market… in one quarter!); mobile now accounts for 23% of FB’s total revenue.

Other technologies look more far-fetched but quite promising. This article in the MIT Technology Review features a company that could solve a major issue: following users as they jump from one device to another. Drawbridge, Inc. was founded by Kamakshi Sivaramakrishnan, a statistics and probability PhD from Stanford. Her pitch (see a video here): bridging smartphones, tablets and PCs thanks to what she calls a “giant statistical space-time data triangulation technique”. In plain English: a model that generates clusters (based on patterns of usage and collected data) that will be used to create a “match” pinpointing an individual’s collection of devices. The goal is to give advertisers the ability to easily extend their campaigns from PCs to mobile terminals. A high potential indeed. It caught the interest of two major venture capital firms, Kleiner Perkins Caufield & Byers and Sequoia Capital, who together injected $20m into the startup. Drawbridge claims to have already bridged about 540 million devices (at a rate of 800 per minute!)
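
Drawbridge’s model is proprietary, but the underlying intuition can be sketched in a few lines: devices repeatedly observed on the same networks at the same times probably belong to the same person. The Python toy below is entirely my own illustration (device names, thresholds and the Jaccard score are assumptions, not Drawbridge’s method):

```python
from collections import defaultdict
from itertools import combinations

# (device_id, ip_address, hour_bucket) observations, e.g. from ad requests
observations = [
    ("laptop-1", "82.12.0.7", "mon-09"), ("phone-1", "82.12.0.7", "mon-09"),
    ("laptop-1", "10.0.8.3",  "mon-20"), ("phone-1", "10.0.8.3",  "mon-20"),
    ("phone-2",  "193.5.1.9", "mon-09"),
]

sightings = defaultdict(set)
for device, ip, hour in observations:
    sightings[device].add((ip, hour))

def match_score(d1, d2):
    """Jaccard overlap of (network, time) sightings between two devices."""
    a, b = sightings[d1], sightings[d2]
    return len(a & b) / len(a | b)

for d1, d2 in combinations(sightings, 2):
    score = match_score(d1, d2)
    if score > 0.5:
        print(f"{d1} and {d2} probably share an owner (score {score:.2f})")
```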

This could be one of the many boards used to ride the mobile rogue wave and, for many players, a way to avoid drowning.

–frederic.filloux@mondaynote.com

Data in the driver’s seat

 

Autonomous vehicles — fully or partially — will rely on a large variety of data types. And guess who is best positioned to take advantage of this enormous new business? Yep, Google is. 

The Google driverless car is an extraordinary technical achievement. To grasp its scope, watch this video featuring a near-blind man sitting behind the wheel of an autonomous Prius as the car does the driving. Or, to get an idea of the complexity of the system, see this presentation by Sebastian Thrun (one of the main architects of Google’s self-driving car project) going through the multiple systems running inside the car.

Spectacular as it is, this public demonstration is merely the tip of the iceberg. For Google, the economics of self-driving cars lie in a vast web of data that will become a must to operate partially or fully self-driving vehicles on a massive scale. This network of data will require immense computational and storage capabilities. Consider the following needs in the context of Google’s current position in related fields.

Maps. Since the acquisition of Where2 Technologies and Keyhole Inc. in 2004, Google has been refining its mapping system over and over again (see this brief history of Google Maps). After a decade of work, Google Maps features a rich set of layers and functions. Its mapping of the world has been supplemented by crowdsourcing systems that allow corrections as well as the creation of city maps where data do not exist. Street View was launched in 2007 and more than 5 million miles of metropolitan areas have been covered. Today, maps are augmented with satellite imagery, 3D, 45-degree aerial views, and renderings of buildings and infrastructure. All this is now merged: you can plunge from a satellite view down to street level.

Google’s goal is building the most complete and reliable map system in the world. Gradually, the company is replacing geo-data from third-party suppliers with data collected by its own crews around the world. To get an idea of how fast Google progresses, consider the following: In 2008, Google mapping covered 22 countries and offered 13 million miles with driving directions. In 2012, 187 countries were covered, 26 million miles with driving directions, including 29 countries with turn-by-turn directions. On the chart below, you can also see the growing areas of Google-sourced maps (in green) as opposed to licensed data (in red):

Apple’s failure in maps shows that, regardless of the amount of money invested, experience remains a key element. In California and India, Google maintains a staff of hundreds if not thousands of people manually checking key spots in large metropolitan areas and correcting errors. They rely on users whose individual suggestions are manually checked, using Street View imagery as shown here (the operator drags the 360° Street View image to verify signs at an intersection — click to enlarge).

Google’s engineers even developed algorithms aimed at correcting slight misalignments between “tiles” (pieces of satellite imagery stitched together) that could result from… tectonic plate movement — it can happen when two pictures are taken two years apart. Such accuracy is not a prerequisite for current navigation, but it could be important for autonomous cars that will depend heavily on ultra-precise (think what centimeters/inches mean when cars are close on the road) mapping of streets and infrastructure.
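
Google doesn’t publish those algorithms, but one standard way to estimate such an offset between two tiles is phase correlation. A minimal numpy sketch, assuming grayscale tiles of equal size:

```python
import numpy as np

def estimate_shift(tile_a, tile_b):
    """Estimate the (dy, dx) translation of tile_b relative to tile_a."""
    f_a, f_b = np.fft.fft2(tile_a), np.fft.fft2(tile_b)
    r = np.conj(f_a) * f_b
    r /= np.abs(r) + 1e-12                       # keep phase, drop magnitude
    correlation = np.fft.ifft2(r).real           # sharp peak at the offset
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    h, w = tile_a.shape
    dy = dy - h if dy > h // 2 else dy           # unwrap circular indices
    dx = dx - w if dx > w // 2 else dx
    return int(dy), int(dx)

tile = np.random.rand(256, 256)
drifted = np.roll(tile, shift=(3, -2), axis=(0, 1))  # simulate a small drift
print(estimate_shift(tile, drifted))                 # -> (3, -2)
```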

But, one might object, Google is not the only company providing geo-data and great mapping services. True: The Dutch company TomTom, or the Chicago-based Navteq, have been doing this for years. As geo-data became strategically important, TomTom acquired Tele Atlas for $2.9bn in 2008, and Nokia bought Navteq in 2007. But Google intends to move one step ahead by merging its mapping and imagery technologies with its search capabilities. Like in this image:

Accurate, usable and data-rich maps are one thing. Now, when you consider the variety of data needed for autonomous or semi-autonomous vehicles, the task becomes even more enormous. The list goes on:

Traffic conditions will be a key element. It’s pointless to envision fleets of self-driving or assisted-driving cars without systems to manage traffic. This goes along with infrastructure development. For instance, as Dr. Kara Kockelman, professor of transportation engineering at the University of Texas at Austin, explained to me, in the future we might see substantial infrastructure renovation aimed at accommodating autonomous vehicles (or vehicles set on self-driving mode). Dedicated highway corridors would be allocated to “platoons” of cars driving close together, in a faster and safer way than manned cars. Intersections, she said, are also a key challenge, as they are responsible for most traffic jams (and a quarter of accidents). With the advent of autonomous vehicles, we can see cars taken over by intersection management systems that will regroup them in platoons and feed them seamlessly into intersecting traffic flows, like in this spectacular simulation. If traffic lights are still needed, they will change every five or six seconds, just to optimize the flow.

Applied to millions of vehicles, traffic and infrastructure management will turn into a gigantic data and communication problem. Again, Google might be the only entity able to write the required software and to deploy the data centers to run it. Its millions of servers will be of great use to handle weather information, road conditions (as cars might be able to monitor their actual friction on the road and transmit the data to following vehicles, or detect humidity and temperature changes), parking data and fuel availability (gas or electricity). And we can even think of merging all this with day-to-day life elements such as individual calendars, commuting patterns and the geolocation of people through their cell phones.

If the data collection and crunching tasks can conceivably be handled by a Google-like player, communications remain an issue. “There is not enough overlap between car-to-car communication and communication in other fields”, Sven Beiker, director of the Center for Automotive Research at Stanford (CARS), told me (see his recent lecture about The Future of the Car). He is actually echoing executives from Audi (who made a strategic deal with Google), BMW and Ford; together at the Mobile World Congress, they were critical of cell phone carriers’ inability to provide the right 4G (LTE) infrastructure to handle the amount of data required by future vehicles.

Finally, there is the question of an operating system for cars. Experts are divided. Sven Beiker believes the development of self-driving vehicles will depend more on communication protocols than on an OS per se. Others believe that Google, with its fleet of self-driving Priuses criss-crossing California, is building the first OS dedicated to autonomous vehicles. At some point, the search giant could combine its mapping, imagery and local search capabilities with the accumulation of countless self-driven miles, along with scores of specific situations “learned” by the cars’ software. The value thus created would be huge, giving Google a decisive position in yet another field. The search company could become the main provider of both systems and data for autonomous or semi-autonomous cars.

frederic.filloux@mondaynote.com

Growing Forces in Mobile

 

As seen last week in Barcelona, the mobile industry is red hot. The media sector will have to work harder to capture its share of that growth.

The 2013 edition of the Mobile World Congress held last week in Barcelona was as large as the biggest auto show in the world: 1,500 exhibitors and a crowd of 72,000 attendees from 200 countries. The mobile industry is roaring like never before. But the news media industry lags and will have to fight hard to stay in the game. Astonishingly, only two media companies deigned to show up: Pearson, with its huge education business accounting for 75% of its 2012 revenue (vs. 7% for its Financial Times unit); and Agence France-Presse, which is entering the customized application market. No other big media brand in sight, no trade organizations either. Apparently, the information sector is about to miss the mobile train.

Let’s begin with data that piqued my interest, from AT Kearney surveys for the GSM Association.

Individual mobile subscribers: In 2012, the worldwide number of mobile subscribers reached 3.2 billion. A billion subscribers were added in the last four years. While the world population is expected to grow by 1.1% per year between 2008 and 2017, the mobile sector enjoyed an 8.3% CAGR (Compound Annual Growth Rate) for the 2008-2012 period. For the 2012-2017 interval, the expected CAGR is 4.2%. The 4-billion-subscriber mark will be passed in 2018. By that time, 80% of the global population will be connected via a mobile device.

The rise of the machines. When machine-to-machine (M2M) connections are taken into account, growth becomes even more spectacular: In 2012, there were 6.8 billion active SIM cards, 3% of them being M2M connections. In 2017, there will be 9.7 billion active SIM cards and the share of M2M connections will account for 13% with almost 1.3 billion devices talking to each other.
The Asia-Pacific region will account for half of the connection growth, both for individual subscriptions and M2M.

We’ll now turn to stats that could benefit the media industry.

Mobile growth will be mostly driven by data usage. In 2012, the volume of data exchanged through mobile devices amounted to 0.9 exabytes per month (1 exabyte = 1bn gigabytes); that is more than all the preceding years combined! By 2017, it is expected to reach 11.2 exabytes per month, a 66% CAGR!
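
These compound rates are easy to sanity-check: CAGR is simply the growth rate that, compounded annually, links the two endpoint values.

```python
def cagr(start, end, years):
    """Compound annual growth rate linking two endpoint values."""
    return (end / start) ** (1 / years) - 1

print(f"{cagr(0.9, 11.2, 5):.0%}")     # mobile data, 2012 -> 2017: prints 66%
print(f"{cagr(3.2e9, 4.0e9, 6):.1%}")  # subscribers, 2012 -> 4bn in 2018: ~3.8%,
# in the same ballpark as the forecast 4.2% CAGR for 2012-2017
```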

A large part of this volume will come from the deployment of 4G (LTE) networks. Between now and 2017, deploying LTE technology will result in a 4X increase in connection speeds.

For the 2012 – 2017 period, bandwidth distribution is expected to grow as follows:

M2M:..........+89%
Video:........+75%
Gaming:.......+62%
Other data:...+55%
File sharing:.+34%
VoIP:.........+34%

Obviously, the huge growth of video streaming (+75%) points to a great opportunity for the media industry, as users will tend to watch news capsules on the go the same way they look at mobile websites or apps today (these two will be part of the 55% annual growth).

Growing social-network usage on mobile will also be an issue for news media. Here are today’s key figures for active mobile users:

Facebook:...680m 
Twitter:....120m 
LinkedIn:....46m 
Foursquare:..30m

Still, as important as it is, social usage only accounts for 17 minutes per day, vs. 25 minutes for internet browsing and a mere 12 minutes for voice calls. Most likely, the growth of video will impact the use of social networks as Facebook collects more and more videos directly uploaded from smartphones.

A large part of this growth will be driven by the rise of inexpensive smartphones. Last week in Barcelona, the largest stand was obviously Samsung’s. But huge crowds also gathered around Huawei and ZTE, which showed sophisticated Android-powered smartphones — at much lower prices. This came as a surprise to many Westerners like me who don’t have access to these Chinese devices. And for emerging markets, Firefox is coming with an HTML5-based operating system that looked surprisingly good.

In years to come, the growing number of operating systems, screen sizes and features will be a challenge. (At the MWC, the trend was definitely in favor of large screens; read this story in Engadget.) An entire hall was devoted to applications — and to software aimed at producing apps in a more standardized, economical fashion. As a result, we might see three approaches to delivering content on mobile:
– The simplest way will be mobile sites based on HTML5 and responsive design; more features will be embedded in web applications.
– The second stage will consist of semi-native apps, quickly produced using standardized tools, allowing fast updates and adaptations to a broad range of devices.
– The third way will involve expensive deep-coded native apps aimed at supporting sophisticated graphics; they will mainly be deployed by the gaming industry.

In upcoming Monday Notes, we will address two major mobile industry trends not tied to the media industry: Connected Living (home-car-city), a sector likely to account for most machine-to-machine use; and digital education, taking advantage of a happy combination of more affordable handsets and better bandwidth.

frederic.filloux@mondaynote.com

Google News: The Secret Sauce

 

A closer look at Google’s patent for its news retrieval algorithm reveals a greater than expected emphasis on quality over quantity. Can this bias stay reliable over time?

Ten years after its launch, Google News’ raw numbers are staggering: 50,000 sources scanned, 72 editions in 30 languages. Google’s crippled communication machine, plagued by bureaucracy and paranoia, has never been able to come up with tangible facts about its benefits for the news media it feeds on. Its official blog merely mentions “6 billion visits per month” sent to news sites, and Google News claims to connect “1 billion unique users a week to news content” (to put things in perspective, NYT.com or the Huffington Post are cruising at about 40 million UVs per month). Assuming the clicks are sent to a relatively fresh news page bearing higher-value advertising, the six billion visits can translate into about $400 million per year in ad revenue. (This is based on a $5 to $6 revenue per 1,000 pages, i.e. a few dollars in CPM per single ad, depending on format, type of selling, etc.) That’s a very rough estimate. Again: Google should settle the matter and come up with accurate figures for its largest markets. (On the same subject, see a previous Monday Note: The press, Google, its algorithm, their scale.)
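
For the record, the $400 million figure is just this arithmetic (all inputs are the rough assumptions stated above, not Google data):

```python
visits_per_month = 6e9   # Google's claimed monthly referrals to news sites
rpm = 5.5                # midpoint of the $5-$6 revenue per 1,000 pages
pages_per_visit = 1      # conservative: one monetized page per visit

annual_revenue = visits_per_month * pages_per_visit * rpm / 1000 * 12
print(f"${annual_revenue / 1e6:.0f}m per year")  # -> $396m, roughly $400m
```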

But how exactly does Google News work? What kind of media does its algorithm favor most? Last week, the search giant updated its patent filing with a new document detailing the thirteen metrics it uses to retrieve and rank articles and sources for its news service. (Computerworld unearthed the filing; it’s here.)

What follows is a summary of those metrics, listed in the order shown in the patent filing, along with a subjective appreciation of their reliability, vulnerability to cheating, relevancy, etc.

#1. Volume of production from a news source:

A first metric in determining the quality of a news source may include the number of articles produced by the news source during a given time period [week or month]. [This metric] may be determined by counting the number of non-duplicate articles produced by the news source over the time period [or] counting the number of original sentences produced by the news source.

This metric clearly favors production capacity. It benefits big media companies deploying large staffs. But the system can also be cheated by content farms (Google already addressed these questions); new automated content-creation systems are gaining traction, and many of them could now easily pass the Turing test.

#2. Length of articles. Plain and simple: the longer the story (on average), the higher the source ranks. This is bad news for aggregators whose digital serfs cut, paste, compile and mangle abstracts of news stories that real media outlets produce at great expense.

#3. “The importance of coverage by the news source”. To put it another way, this matches the volume of coverage by the news source against the general volume of text generated by a topic. Again, it rewards large resource allocation to a given event. (In New York Times parlance, such an effort is called “flooding the zone”.)

#4. The “Breaking News Score”:   

This metric may measure the ability of the news source to publish a story soon after an important event has occurred. This metric may average the “breaking score” of each non-duplicate article from the news source, where, for example, the breaking score is a number that is a high value if the article was published soon after the news event happened and a low value if the article was published after much time had elapsed since the news story broke.

Beware, slow-moving newsrooms: On this metric, you’ll be competing against more agile, maybe less scrupulous staffs that “publish first, verify later”. This requires smart arbitrage by news producers. Once the first headline has been pushed, they’ll have to decide what’s best: immediately filing a follow-up, or waiting a bit and moving a longer, more value-added story that will rank better on metrics #2 and #3? It depends on elements such as the size of the “cluster” (the number of stories pertaining to a given event).

#5. Usage Patterns:

Links going from the news search engine’s web page to individual articles may be monitored for usage (e.g., clicks). News sources that are selected often are detected and a value proportional to observed usage is assigned. Well known sites, such as CNN, tend to be preferred to less popular sites (…). The traffic measured may be normalized by the number of opportunities readers had of visiting the link to avoid biasing the measure due to the ranking preferences of the news search engine.

This metric is at the core of Google’s business: assessing the popularity of a website thanks to the various PageRank components, including the number of links that point to it.

#6. The “Human opinion of the news source”:

Users in general may be polled to identify the newspapers (or magazines) that the users enjoy reading (or have visited). Alternatively or in addition, users of the news search engine may be polled to determine the news web sites that the users enjoy visiting. 

Here, things get interesting. Google clearly states it will use third-party surveys to detect the public’s preference among various media — not only their websites, but also their “historic” media assets. According to the patent filing, the evaluation could also include the number of Pulitzer Prizes the organization has collected and the age of the publication. That’s for the known part. What lies behind the notion of “human opinion” is a true “quality index” for news sources that is not necessarily correlated to their digital presence. Such factors clearly favor legacy media.

#7. Audience and traffic. Not surprisingly, Google relies on stats coming from Nielsen NetRatings and the like.

#8. Staff size. The bigger a newsroom is (as detected in bylines), the higher the value will be. This metric has the merit of rewarding large investments in news gathering. But it might become more imprecise as “large” digital newsrooms now tend to be staffed with news repackagers bringing little added value.

#9. Number of news bureaus. It’s another way to favor large organizations — even though their footprint tends to shrink, both nationally and abroad.

#10. Number of “original named entities”. This is one of the most interesting metrics. A “named entity” is the name of a person, place or organization. It’s the primary tool for semantic analysis.

If a news source generates a news story that contains a named entity that other articles within the same cluster (hence on the same topic) do not contain, this may be an indication that the news source is capable of original reporting.

Of course, some cheaters insert misspelled entities to create “false” original entities and fool the system (Google took care of it). But this metric is a good way to reward original source-finding.
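
Metric #10 is straightforward to sketch: collect the named entities each source uses in a story cluster, then see which ones appear in a single source only. A toy Python version follows (real systems use trained named-entity recognizers, not hand-made sets; the sources and entities below are invented):

```python
# named entities extracted from each source's article in one story cluster
cluster = {
    "source_a": {"John Smith", "Kalamazoo", "Rand Corporation"},
    "source_b": {"John Smith", "Kalamazoo"},
    "source_c": {"John Smith"},
}

def original_entities(source, cluster):
    """Entities this source mentions that no other source in the cluster does."""
    others = set().union(*(ents for s, ents in cluster.items() if s != source))
    return cluster[source] - others

for source in sorted(cluster):
    print(source, original_entities(source, cluster))
# -> only source_a contributes an entity ("Rand Corporation") its peers lack,
#    a hint of original reporting that raises its score on this metric.
```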

#11. The “breadth” of the news source. It pertains to the ability of a news organization to cover a wide range of topics.

#12. The global reach of the news source. Again, it favors large media that are viewed, linked, quoted, “liked” and tweeted from abroad.

This metric may measure the number of countries from which the news site receives network traffic. In one implementation consistent with the principles of the invention, this metric may be measured by considering the countries from which known visitors to the news web site are coming (e.g., based at least in part on the Internet Protocol (IP) addresses of those users that click on the links from the search site to articles by the news source being measured). The corresponding IP addresses may be mapped to the originating countries based on a table of known IP block to country mappings.

#13. Writing style. In the Google world, this means statistical analysis of contents against a huge language model to assess “spelling correctness, grammar and reading levels”.
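
The patent does not say how the thirteen signals are weighted or combined; a weighted sum over normalized metrics is the simplest plausible reading. The sketch below is purely illustrative: every weight is an assumption, not Google’s.

```python
METRIC_WEIGHTS = {  # placeholder weights for the thirteen patent metrics
    "volume": 1.0, "article_length": 0.8, "coverage_importance": 1.2,
    "breaking_score": 1.5, "usage": 2.0, "human_opinion": 1.5,
    "audience": 1.0, "staff_size": 0.7, "bureaus": 0.5,
    "original_entities": 1.8, "breadth": 0.6, "global_reach": 0.6,
    "writing_style": 0.8,
}

def source_quality(metrics):
    """metrics: metric name -> value normalized to [0, 1]; missing counts as 0."""
    return sum(w * metrics.get(name, 0.0) for name, w in METRIC_WEIGHTS.items())

# A big legacy newsroom scores high on staffing and original reporting...
print(source_quality({"staff_size": 0.9, "original_entities": 0.8, "usage": 0.6}))
```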

What conclusions can we draw? This enumeration clearly shows Google intends to favor legacy media (print or broadcast news) over pure players, aggregators or digital-native organizations. All the recently added features, such as Editors’ Picks, reinforce this bias. The reason might be that legacy media are less prone to tricking the algorithm. For once, a known technological weakness becomes an advantage.

frederic.filloux@mondaynote.com

The Need for a Digital “New Journalism”

 

The survival of quality news calls for a new approach to writing and reporting. Inspiration could come from blogging and magazine storytelling, and also bring back memories of the ’70s New Journalism movement.

News reporting is aging badly. Legacy newsrooms’ style books look stuck in a last-century formalism (I was tempted to write “formalin”). Take a newspaper, print or online. When it comes to news reporting, you see the same old structure dating back to the Fifties or even earlier. For the reporter, there is the same (affected) posture of effacing his/her personality behind facts, and a stiff structure based on a string of carefully arranged paragraphs, color elements, quotes, etc.

I hate useless quotes. Most often, for journalists, such quotes are the equivalent of the time-card hourly workers have to punch. To their editor, the message is ‘Hey, I did my job; I called x, y, z’; and to the reader, ‘Look, I’m humbly putting my personality, my point of view, behind facts as stated by these people’ — people picked by the reporter, which is the primary (and unavoidable) way to twist a story. The result becomes borderline ridiculous when, after a lengthy exposé in the reporter’s voice to compress the sources’ convoluted thoughts, the line of reasoning concludes with a critical validation such as:

“Only time will tell”, said John Smith, director of social studies at the University of Kalamazoo, consultant for the Rand Corporation, and author of “The Cognitive Deficit of Hyperactive Chimpanzees”.

I’m barely making this up. Each time I open a carbon-based newspaper (or read its online version), I’m struck by how old-fashioned news writing remains. Unbeknownst to the masthead (i.e. the top editorial decision-makers) of legacy media, things have changed. Readers no longer demand validating quotes that weigh the narrative down. They want to be taken from A to B, with the best possible arguments, and no distraction or wasted time.

Several factors dictate an urgent evolution in the way newspapers are written.

1/ Readers’ Time Budget. People are deluged with things to read. It begins at 7:00 in the morning and ends late into the night. The combination of professional content (mail, reports, PowerPoint presentations) and social networking feeds has put traditional, value-added content (news, books) under great pressure. Multiple devices, and the variable level of attention each of them entails, create more complications: a publishing house can’t provide the same content for a smartphone screen read in a cramped subway as for a tablet used in lean-back mode at home. More than ever, the publisher is expected to clearly arbitrate between content that should be provided in a concise form and content that justifies a long, elaborate narrative. The same applies to linking and multi-layer constructs: reading a story that opens several browser tabs on a 22-inch screen is pleasant — and completely irrelevant for quick lunchtime mobile reading.

2/ Trust factor / The contract with the Brand. When I pick up a copy of The New York Times, The Guardian, or a major French newspaper, this act materializes my trust (and hope) in the professionalism associated with the brand. In a more granular way, it works the same for the writer. Some are notoriously sloppy, biased, or agenda-driven; others are so good that they become brands in themselves. My point: When I read a byline I trust, I assume the reporter has performed the required legwork — that is, collecting five or ten times the amount of information s/he will use in the end product. I don’t need the reporting to be proven or validated by an editing construct that harks back to the previous century. Quotes should be used only for the relevant opinion of a source, or to make a salient point, not as a feeble attempt to prove professionalism or fairness.

3/ Competition from the inside. Strangely enough, newspapers have created their own gauge for measuring their obsolescence. By encouraging their writing staff to blog, they unleashed new, more personal, more… modern writing practices. Fact is, many journalists became more interesting on their own blogs than in their dedicated newspaper or magazine sections. Again, this trend evaded many editors and publishers, who consider blogging a secondary genre — one that can be put outside a paywall, for instance. (This results in a double whammy: not only doesn’t the paper cash in on its blogs, it also frustrates paying subscribers.)

4/ The influence of magazine writing. Much better than newspapers, magazines have always done a good job of capturing readers’ preferences. They’ve always been ahead in market research, graphic design, and the evolution of concepts and writing. (This observation also applies to the weekend magazines operated by large dailies.) As an example, magazine writers have been quick to adopt first-person accounts that rejuvenated journalism and allowed powerful narratives. In many newspapers, authors and their editors still resist this.

Digital media needs to invent its own journalistic genres. (Note the plural, dictated by the multiplicity of usages and vectors.) The web and its mobile offspring are calling for their own New Journalism, comparable to the one that blossomed in the Seventies. While the blogosphere has yet to find its Tom Wolfe, the newspaper industry still has a critical role to play: It could be at the forefront of this essential evolution in journalism. Failure to do so will only accelerate its decline.

frederic.filloux@mondaynote.com

The Google Fund for the French Press

 

At the last minute, ending three months of tense negotiations, Google and the French Press hammered out a deal. More than yet another form of subsidy, this could mark the beginning of a genuine cooperation.

Thursday night, at 11:00pm Paris time, Marc Schwartz, the mediator appointed by the French government, got a call from the Elysée Palace: Google’s chairman Eric Schmidt was en route to meet President François Hollande the next day in Paris. They both intended to sign the agreement between Google and the French press on Friday at 6:15pm. Schwartz, along with Nathalie Collin, the chief representative of the French Press, was just out of a series of conference calls between Paris and Mountain View: Eric Schmidt and Google’s CEO Larry Page had green-lighted the deal. At 3am on Friday, the final draft of the memorandum was sent to Mountain View. But at 11:00am everything had to be redone: Google had made unacceptable changes, causing Schwartz and Collin to consider calling off the signing ceremony at the Elysée. Another set of conference calls ensued. The final-final draft, unanimously approved by the members of the IPG association (General and Political Information), was printed at 5:30pm, just in time for the gathering at the Elysée half an hour later.

The French President was in a hurry, too: that very evening, he was bound to fly to Mali, where French troops are waging a small but uncertain war to contain Al-Qaeda’s expansion in Africa. Never shy of political calculations, François Hollande seized the occasion to be seen as the one who forced Google to back down. As for Google’s chairman, co-signing the agreement along with the French President was great PR. As a result, negotiators from the Press were kept in the dark until Eric Schmidt’s plane landed in Paris on Friday afternoon, shortly before he headed to the Elysée. Both men underlined what they called “a world premiere”, a “historical deal”…

This agreement ends — temporarily — three months of difficult negotiations. Now comes the hard part.

According to Google’s Eric Schmidt, the deal rests on two components:

“First, Google has agreed to create a €60 million Digital Publishing Innovation Fund to help support transformative digital publishing initiatives for French readers. Second, Google will deepen our partnership with French publishers to help increase their online revenues using our advertising technology.”

As always, the devil lurks in the details, most of which will have to be ironed out over the next two months.

The €60m ($82m) fund will be provided by Google over a three-year period; it will be dedicated to new-media projects. About 150 websites belonging to the IPG association’s members will be eligible for submissions. The fund will be managed by a board of directors that will include representatives from the Press and from Google, as well as independent experts. Specific rules are designed to prevent conflicts of interest. The fund will most likely be chaired by Marc Schwartz, the mediator, who is also a partner at the global audit firm Mazars (all parties praised his mediation and want him to take the job).

Turning to the commercial part of the pact: it is less publicized, but at least as important as the fund itself. In a nutshell, using a wide array of tools ranging from advertising platforms to content distribution systems, Google wants to increase its business with the Press in France and elsewhere in Europe. Until now, publishers have been reluctant to use such tools because they don’t want to increase their reliance on a company they see as cold-blooded and ruthless.

Moving forward, the biggest challenge will be overcoming an extraordinarily high level of distrust on both sides. Google views the Press (especially the French one) as only too eager to “milk” it, and unwilling to genuinely cooperate in order to build and share value from the internet. The engineering-dominated, data-driven culture of the search engine is light-years away from the convoluted, “political” approach of legacy media, which don’t understand, or look down on, the peculiar culture of tech companies.

Dealing with Google requires a mastery of two critical elements: technology (with the associated economics), and the legal aspect. Contractually speaking, it means transparency and enforceability. Let me explain.

Google is a black box. For good and bad reasons, it fiercely protects the algorithms that are key to squeezing money from the internet, sometimes one cent at a time — literally. If Google consents to a cut of, say, the advertising revenue derived from a set of content, the partner can’t really ascertain whether that cut truly reflects the underlying value of the jointly created asset. Understandably, this bothers most of Google’s business partners: they are simply asked to be happy with the monthly payment they get from Google, no questions asked. Specialized lawyers I spoke with told me there are ways to prevent such opacity. While it’s futile to hope Google will lift the veil on its algorithms, inserting an audit clause in every contract can be effective; in practical terms, it means an independent auditor can be appointed to verify the specific financial records pertaining to a business deal.

Another key element: from a European perspective, a contract with Google is virtually impossible to enforce. The main reason: Google won’t give up on a governing-law clause stipulating that disputes are to be “litigated exclusively in the Federal or State Courts of Santa Clara County, California”. In other words: forget about suing Google if things go sour. Your expensive law firm based in Paris, Madrid, or Milan will try to find a correspondent in Silicon Valley, only to be confronted with polite rebuttals: for years now, Google has been parceling out pieces of litigation among local law firms simply to make them unable to litigate against it. Your brave European lawyer will end up finding someone who will ask for several hundred thousand dollars just to prepare, but not litigate, the case. The only way to prevent this is to put an arbitration clause in every contract. Instead of going before a court of law, the parties agree to settle the matter through a private tribunal. Attorneys say arbitration offers multiple advantages: it’s faster, much cheaper, the terms of the settlement are confidential, and it carries the same enforceability as a court order.

Google (and all the internet giants, for that matter) usually refuses arbitration clauses as well as the audit provision mentioned earlier. Which brings us to a critical element: in order to develop commercial relations with the Press, Google will have to find ways to accept collective bargaining instead of segmenting negotiations one company at a time. Ideally, the next round of discussions should come up with a general framework for all commercial dealings. That would be key to restoring some trust between the parties. For Google, it means giving up a measure of the tactical and strategic advantage that is part of its long-term vision. As stated by Eric Schmidt in his upcoming book “The New Digital Age” (the Wall Street Journal had access to the galleys):

“[Tech companies] will also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest.”

European media are warned: they must seriously raise their legal game if they want to partner with Google — and the agreement signed last Friday in Paris could help.

Having said that, I personally believe it could be immensely beneficial for digital media to partner with Google as much as possible. This company spends roughly two billion dollars a year refining its algorithms and improving its infrastructure; thousands of engineers work on them. Contrast this with digital media: small audiences, insufficient stickiness, and low monetization plague both websites and mobile apps; the advertising model for digital information is mostly a failure — and that’s not Google’s fault. The Press should find a way to capture some of Google’s technical firepower and concentrate on what it does best: producing original, high-quality content, a business Google is unwilling (and probably culturally unable) to engage in. Unlike Apple or Amazon, Google is relatively easy to work with (once the legal hurdles are cleared).

Overall, this deal is a good one. First of all, both sides are relieved to have avoided a law (see the previous Monday Note, Google vs. the press: avoiding the lose-lose scenario). A law declaring that snippets and links are to be paid for would have been a serious step backward.

Second, it’s a departure from the notion of “blind subsidies” that has been plaguing the French Press for decades. Three months ago, the discussion started with irreconcilable positions: publishers were seeking absurd amounts of money (€70m per year, the equivalent of the IPG members’ total digital ad revenue) and Google was focused on converting any transfer into business solutions. Now, all the people I talked to this weekend seem genuinely supportive of building projects, boosting innovation, and taking advantage of Google’s extraordinary engineering capabilities. The level of cynicism often displayed by the Press is receding.

Third, Google is changing. The fact that Eric Schmidt and Larry Page jumped in at the last minute to untangle the deal shows a shift of perception towards media. This agreement could be seen as a template for future negotiations between two worlds that still barely understand each other.

frederic.filloux@mondaynote.com

Google vs. the press: avoiding the lose-lose scenario

 

Google and the French press have been negotiating for almost three months now. If there is no agreement within ten days, the government is determined to intervene and pass a law instead. This would mean serious damage for both parties. 

Update: on the new corporate tax system, read this story in Forbes by the author of the report quoted below.

Since last November, about twice a week and for several hours at a time, representatives from Google and the French press have been meeting behind closed doors. To ease tensions, an experienced mediator has been appointed by the government. But mistrust and incomprehension still plague the discussions, and the clock is ticking.

In the currently stalled process, the whole negotiation revolves around cash changing hands. Early on, representatives of media companies were asking Google to pay €70m ($93m) per year for five years. This would be “compensation” for “abusively” indexing and linking their contents and for collecting 20-word snippets (see a previous Monday Note: The press, Google, its algorithm, their scale). For perspective, this €70m is roughly equivalent to the 2012 digital revenue of the newspapers and newsmagazines that constitute the IPG association (General and Political Information).

When the discussion came to structuring and labeling such a cash transfer, IPG representatives dismissively left the question to Google: “Dress it up!”, they said. Unsurprisingly, Google wasn’t ecstatic about this rather blunt approach. Still, the search engine feels this might be the right time to hammer out a deal with the press, instead of perpetuating a latent hostility that could later explode and cost much more. At least, this is how Google’s European team seems to feel. (In the company’s hyper-centralized power structure, management in Mountain View seems slow to warm up to the idea.)

In Europe, bashing Google is more popular than ever. Not just Google, in fact, but all the US-based internet giants, widely accused of killing old businesses (such as Virgin Megastore — a retail chain that also made every possible mistake). But the actual core issue is tax avoidance. Most of these companies hired the best tax lawyers money can buy and devised complex schemes to avoid paying corporate taxes in EU countries, especially the UK, Germany, France, Spain, and Italy. The French Digital Advisory Board — set up by Nicolas Sarkozy and generally business-friendly — estimated last year that Google, Amazon, Apple’s iTunes, and Facebook had a combined French revenue of €2.5bn–€3bn but paid on average only €4m each in corporate taxes, instead of a total of roughly €500m (a rough 20% to 25% tax rate estimate). At a time of fiscal austerity, most governments see this (entirely legal) tax avoidance as politically unacceptable. In such a context, Google is the target of choice. In the UK, for instance, Google made £2.5bn (€3bn or $4bn) in 2011 but paid only £6m (€7.1m or $9.5m) in corporate taxes. To add insult to injury, in an interview with The Independent, Google’s chairman Eric Schmidt defended his company’s tax strategy in the worst possible manner:

“I am very proud of the structure that we set up. We did it based on the incentives that the governments offered us to operate. It’s called capitalism. We are proudly capitalistic. I’m not confused about this.”

Ok. Got it. Very helpful.

Coming back to the current negotiation about the value of the click: the question was quickly handed over to Google’s spreadsheet jockeys, who came up with the required “dressing up”. If the media accepted the use of the full range of Google products, additional value would be created for the company; a certain amount could then be derived from said value. That’s the basis for the deal reached last year with the Belgian press (an agreement shrouded in a stringent confidentiality clause).

Unfortunately, the French press proceeded to eliminate most of the eggs in the basket, one after the other, leaving almost nothing to “vectorize” the transfer of cash. Almost three months into the discussion, we are stuck with antagonistic positions. The IPG representatives are basically saying: we don’t want to subordinate ourselves further to Google by adopting opaque tools that we can find elsewhere. Google retorts: we don’t want to be treated as yet another deep-pocketed “fund” that the French press will tap into forever without any return for our business; plus, we strongly dispute any notion of “damages” to be paid for linking to media sites. Hence the gap between the amount of cash asked by one side and what is (reluctantly) acceptable to the other.

However, I think both parties vastly underestimate what they’ll lose if they don’t settle quickly.

The government’s tax howitzer is loaded with two shells. The first one is a bill (drafted by none other than IPG’s counsel, see PDF here) which introduces the disingenuous notion of “ancillary copyright”. Applied to the snippets Google harvests by the thousands every day, it creates a legal ground of sorts for taxing the company the hard way. The construct is adapted from the music industry, in which the ancillary copyright levy ranges from 4% to 7% of the revenue generated by a sector or a company. A rate of 7% applied to the revenue officially declared by Google in France (€138m) would translate into less than €10m, pocket change for a company that in fact generates about €1.5 billion from its French operations.
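To make the orders of magnitude concrete, here is a minimal back-of-the-envelope sketch; the 7% rate and the two revenue figures come from the paragraph above, and applying the levy to the €1.5bn estimate is my own extrapolation, not part of the bill:

```python
# Back-of-the-envelope estimate of the ancillary-copyright levy,
# using the figures quoted above (illustrative only).

LEVY_RATE = 0.07                # top of the 4%-7% range used in the music industry
declared_revenue = 138e6        # EUR officially declared by Google in France
estimated_revenue = 1.5e9       # EUR actually generated by its French operations

levy_on_declared = LEVY_RATE * declared_revenue
levy_on_estimated = LEVY_RATE * estimated_revenue

print(f"Levy on declared revenue:  €{levy_on_declared / 1e6:.1f}m")   # ≈ €9.7m
print(f"Levy on estimated revenue: €{levy_on_estimated / 1e6:.1f}m")  # ≈ €105m
```

The gap between the two lines is the whole point: taxing the declared base yields pocket change.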

That’s where the second shell could land. Last Friday, the Ministry of Finance released a report on tax policy for the digital economy, titled “Mission d’expertise sur la fiscalité de l’économie numérique” (PDF here). It’s a 200-page opus supported by no fewer than 600 footnotes. Its authors, Pierre Collin and Nicolas Colin, are members of the French public elite (one from the highest jurisdiction, le Conseil d’Etat, the other from the equivalent of the General Accounting Office — Nicolas Colin also being a former tech entrepreneur and a writer). The Collin & Colin Report, as it’s now dubbed, is based on a set of doctrines that are also surfacing in the United States (as demonstrated by the report’s multiple references).

To sum up:
— The core of the digital economy is now the huge amount of data created by users. The report categorizes different types of data: “Collected Data” are gathered through cookies, whether the user allows it or not; such datasets include consumer behaviors, affiliations, personal information, recommendations, search patterns, purchase history, etc. “Submitted Data” are entered knowingly, through search boxes, forms, timelines, or feeds in the case of Facebook or Twitter. Finally, “Inferred Data” are byproducts of various processing, analytics, etc.
— These troves of monetized data are created by the free “work” of users.
— The location of the data collection is independent of the place where the underlying computer code is executed: I create tangible value for Amazon or Google with clicks performed in Paris, while those clicks are processed in a server farm located in the Netherlands or in the United States — and most of the profits land in a tax shelter.
— The location of the value thus created by the “free work” of users is currently dissociated from the location of tax collection. In fact, it escapes taxation altogether. (A schematic rendering of this typology follows below.)
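For readers who think in code, one way to picture the report’s typology is the following minimal sketch; the class and field names are mine, not the report’s, and the example values merely restate the Paris-to-tax-shelter scenario above:

```python
from dataclasses import dataclass
from enum import Enum

# The three data categories named in the Collin & Colin report.
class DataOrigin(Enum):
    COLLECTED = "gathered via cookies, with or without user consent"
    SUBMITTED = "entered knowingly: searches, forms, timelines, feeds"
    INFERRED = "byproducts of processing and analytics"

@dataclass
class UserData:
    origin: DataOrigin
    created_in: str    # where the user clicks (e.g. Paris)
    processed_in: str  # where the code runs (e.g. a Dutch server farm)
    taxed_in: str      # where the profits land (often a tax shelter)

# The report's core argument: created_in and taxed_in rarely coincide.
click = UserData(DataOrigin.COLLECTED, "Paris", "Netherlands", "tax shelter")
print(f"Value created in {click.created_in}, taxed in {click.taxed_in}")
```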

Again, I’m quickly summing up a lengthy analysis, but the conclusion of the Collin & Colin report is obvious: sooner or later, the value created and the various taxes associated with it will have to be reconciled. For Google, the consequences would be severe: instead of the €138m in revenue officially admitted in France, the tax base would grow to €1.5bn in revenue and about €500m in profit; that could translate into €150m in corporate tax alone, instead of the mere €5.5m currently paid by Google. (And I’m not counting the 20% VAT that would also apply.)
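Here, too, a rough sketch helps check the orders of magnitude; the revenue and profit figures are the report’s, and the ~30% rate is simply the one implied by the €150m figure above:

```python
# Rough reconciliation of Google's French corporate tax bill,
# using the Collin & Colin figures quoted above (illustrative).

estimated_profit = 500e6    # EUR profit on a reconciled French tax base
implied_tax_rate = 0.30     # rate implied by the article's €150m figure
current_tax_paid = 5.5e6    # EUR currently paid in France

reconciled_tax = implied_tax_rate * estimated_profit
print(f"Reconciled corporate tax: €{reconciled_tax / 1e6:.0f}m")              # ≈ €150m
print(f"Multiple of current bill: {reconciled_tax / current_tax_paid:.0f}x")  # ≈ 27x
```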

Of course, this intellectual construction will be extremely difficult to translate into enforceable legislation. But the French authorities intend to rally other countries and lobby the EU Commission furiously to bring it around to their view. It might take years, but it could dramatically impact Google’s economics in many countries.

More immediately for Google, a parliamentary debate over the Ancillary Copyright will open a Pandora’s box. From Right to Left, encouraged by François Hollande‘s administration, lawmakers will outbid each other in trashing the search engine and, beyond it, every large internet company.

As for members of the press, “they will lose too”, a senior official tells me. First, because of the complications of setting up the machinery the Ancillary Copyright Act would require, they would have to wait about two years before getting any dividends. Second, governments — the present one as well as the past Sarkozy administration — have always been displeased with what they see as the French press’s “addiction to subsidies”; they intend to drastically reduce the €1.5bn in public aid. If the press gets its way through a law, according to several administration officials, the Ministry of Finance will feel relieved of its obligations towards media companies that don’t innovate much despite large influxes of public money. Conversely, if the parties are able to strike a decent business deal on their own, the French Press will quickly get some “compensation” from Google and might still keep most of its taxpayer subsidies.

As for the search giant, it will indeed have to absorb a small stab, but it will be spared, for a while, the chronic pain of a long and costly legislative fight — and the contagion that goes with it: the French bill would be dissected by neighboring governments only too glad to adapt and improve it.

frederic.filloux@mondaynote.com   

Next week: When dealing with Google, better use a long spoon; Why European media should rethink their approach to the search giant.