After a massive spamming attack this week that put the site offline for three days, we have to turn off comments on the Monday Note for a while. Sorry for the inconvenience.—
Moore’s Law also applies to global development. From futuristic wireless networks for rural Africa to the tracking of water-well drilling, digital technology is a powerful boost for development, as evidenced by a growing number of initiatives.
Last week, The Wall Street Journal unveiled a Google project designed to provide wireless networks in developing countries, more specifically in sub-Saharan Africa and Southeast Asia. According to the Journal, the initiative involves using the airwaves spectrum allocated to television signals or teaming up with cellular carriers already working there. In typical outside-the-box fashion, the project might also rely on high-altitude blimps to cover infrastructure-deprived areas. Coupled with low-cost handsets running the Android operating system, or the brand-new Firefox OS for mobile, this would accelerate the spread of cellular phones in poor countries.
Previously unavailable, mobile access will be a game changer for billions of people. At the last Mobile World Congress in Barcelona, I chatted with an Alcatel-Lucent executive who described the experiments she witnessed in Kenya, such as providing nurses with the equivalent of index cards to upgrade their knowledge of specific treatments; the use of mobile phones translated into an unprecedented reach, even in remote areas where basic handsets are shared among many people. Similarly, tests for access to reading material were conducted by UNESCO, the United Nations branch for education and culture. Short stories, some loaded with interactive features, were sent to phones and, amazingly, kids flocked to read, share and participate. All of this was carried out on “dumb” phones, sometimes with only monochrome displays. Imagine what could be done with smartphones.
Moore’s Law will keep helping. Currently, high-end smartphones are out of reach for emerging markets where users rely on prepaid cards instead of subscriptions. But instead of the $400-$600 handsets (without a two-year contract) currently sold in Western markets, Chinese manufacturers are aiming at a price of $50 for a durable handset, using a slower processor but sporting all the expected features: large screen, good camera, GPS module, accelerometers, and tools for collective use. On such a foundation, dedicated applications can be developed — primarily for education and health.
As an example, the MIT Media Lab has created a system for prescribing eyeglasses that requires only a one-dollar eyepiece attached to a smartphone; compared to professional equipment costing thousands of times more, it delivers a very decent diagnosis. (This is part of the MIT Global Challenge Initiative.)
This, coupled with liquid-filled adjustable glasses such as this one presented at TED a couple of years ago, will help solve vision problems in poor countries for a couple of dollars per person. Other systems aimed at detecting vision-related illnesses such as cataracts or glaucoma are in development. So are blood-testing technologies based on bio-chips tied to a mobile app for data collection.
Last week, I attended Google’s Zeitgeist conference in the UK — two days of enthralling TED-like talks (all videos here). Among many impressive speakers, two got my attention. The first was Sugata Mitra, a professor of educational technology at Newcastle University. In his talk — filled with a mixture of Indian and British humor — he described self-organizing learning experiments in rural India built around basic internet-connected computers. The results are compelling for language learning and a basic understanding of science or geography.
The other speaker was the complete opposite. Scott Harrison has an interesting trajectory: a former New York nightclub promoter, he drastically changed his life seven years ago by launching the organization Charity:Water. Harrison’s completely fresh approach helped him redefine how a modern charitable organization should work. He built his organization around three main ideas. First, 100% of donations should reach a project. To achieve this, he created two separate funding circuits: a public one for projects and another to support operational costs.
Principle number two: build a brand, with all the attributes that go with it, starting with a strong visual identity and a well-designed web site (most of those operated by NGOs are terrible). Charity:Water’s site is rich and attractive; it looks more like an Obama campaign fundraising machine than an NGO’s. (I actually tested Charity:Water’s very efficient donation system by giving $100, curious to see where the money would land.)
The third and probably most innovative idea was to rely on simple, proven digital technologies to guarantee complete project traceability. Donors can find out precisely where their money ends up — whether it is a $60 sand-filter fountain or a $2,000 well. Last, Charity:Water funded a drilling truck equipped with a GPS tracker that makes it visible on Google Maps; in addition, the truck tweets its location in real time. Thanks to $5 million in Google funding, the organization currently works with seven high-tech US companies to develop robust water sensors able to show in real time how much water is running at a given project. About 1,000 of these are to be installed before year-end. This will help detect possible malfunctions, and it also carries promotional (read: fundraising) capabilities: thanks to a mobile app, a kid who helped raise a few hundred bucks among friends can see where his or her water is actually flowing.
As I write this, I can see the comments coming, denouncing the gadgetization of charity, the waste of money on technologies not directly benefiting the neediest, Google’s obscure and mercantile motives, or the future payback for cellular carriers from the mobile initiatives mentioned earlier. Sure, objections must be heard. But anyone who has traveled in poor areas — as I did in India and in sub-Saharan countries such as Senegal, Mauritania and Burkina Faso — comes back with the strong conviction that all means must be used to provide these populations with the basic things we take for granted in the Western world. As for Charity:Water, the results speak for themselves: over six years, the organization has raised almost $100m and provided drinkable water to 3m people (out of the 800m in the world who don’t have access to it — still lots of work left). As in many areas, the benefits of new, disruptive models based on modern technologies far outweigh the disadvantages.
Internet measurement techniques need a complete overhaul. New ways have emerged, potentially displacing older panel-based technologies. This will make it hard for incumbent players to stay in the game.
The web user is the most watched consumer ever. For tracking purposes, every large site drops literally dozens of cookies in the visitor’s browser. In the most comprehensive investigation on the matter, The Wall Street Journal found that the 50 largest web sites in the United States, accounting for 40% of US page views, installed an average of 64 files on a user’s device. (See the WSJ’s What They Know series and a Monday Note about tracking issues.) As for server logs, they record every page sent to the user and tell with great accuracy which parts of a page collect most of the reader’s attention.
But when it comes to measuring a digital viewer’s commercial value, sites rely on old-fashioned panels, that is, limited samples of the user population. Why?
Panels are inherited. They go back to the old days of broadcast radio when, in order to better sell advertising, dominant networks wanted to know which stations listeners tuned in to during the day. In the late thirties, the Nielsen Company made a clever decision: it installed a monitoring box in 1,000 American homes. Twenty years later, Nielsen did the same, on a much larger scale, with broadcast television. The advertising world was happy to be fed plenty of data — mostly unchallenged, as Nielsen dominated the field. (For a detailed history, you can read Rating the Audience, written by two Australian media academics.) As Nielsen expanded to other media (music, film, books and all sorts of polls), moving to internet measurement sounded like a logical step. As of today, Nielsen faces only smaller competitors such as ComScore and others.
I have yet to meet a publisher who is happy with this situation. Fearing retribution, very few people talk openly about it (twisting the dials is so easy, you know…), but they all complain about inaccurate, unreliable data. In addition, the panel system is vulnerable to cheating on a massive scale. Smarty-pants outfits sell a vast array of measurement boosters, from fake users that come in just once a month to be counted as “unique” (they are indeed), to more sophisticated tactics such as undetectable “pop-under” sites that rely on encrypted URLs to deceive the vigilance of panel operators. In France, for instance, 20% to 30% of some audiences can be bogus — or largely inflated. To its credit, Mediametrie — the French Nielsen affiliate that produces the most watched measurements — is expending vast resources to counter the cheating and to make the whole model more reliable. It works, but progress is slow. In August 2012, Mediametrie Net Ratings (MNR) launched a Hybrid Measure that takes site-centric analytics (server logs) into account to rectify panel numbers, but those corrections are still erratic. And it takes more than a month to get the data, which is not acceptable for the real-time-obsessed internet.
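The hybrid idea can be illustrated with a toy calculation. This is a deliberately simplified sketch, not MNR’s actual methodology, and the function name and figures below are made up: the panel provides an estimate of unique visitors, server logs provide exact page-view volumes, and the panel’s unique-visitor figure gets rescaled by the ratio between the two volume counts.

```python
# Toy sketch of a hybrid audience measure (hypothetical, not MNR's actual
# formula): the panel's unique-visitor estimate is rescaled by the ratio
# between the exact page-view count from server logs and the page-view
# count extrapolated from the panel.

def hybrid_uniques(panel_uniques, panel_pageviews, log_pageviews):
    """Correct a panel-based unique-visitor estimate with site-centric data."""
    correction = log_pageviews / panel_pageviews
    return round(panel_uniques * correction)

# The panel extrapolated 1.2m page views and 400,000 unique visitors,
# but the site's own tagged pages recorded 1.5m page views:
print(hybrid_uniques(400_000, 1_200_000, 1_500_000))  # 500000
```

Real systems weigh many more signals (demographics, device mix, deduplication), but the principle of letting exact server-side counts calibrate panel extrapolations is the same.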
Publishers monitor the pulse of their digital properties on a permanent basis. In most newsrooms, Chartbeat (also imperfect, sometimes) displays the performance of every piece of content, and home pages get adjusted accordingly. More broadly, site-centric measures detail all possible metrics: page views, time spent, hourly peaks, engagement levels. This is based on server logs tracking dedicated tags inserted in each served page. But the site-centric measure is also flawed: If you use, say, four different devices — a smartphone, a PC at home, another at work, and a tablet — you will be incorrectly counted as four different users. And if you use several browsers you could be counted even more times. This inherent site-centric flaw is the best argument for panel vendors.
But, in the era of Big Data and user profiling, panels no longer have the upper hand.
The developing field of statistical pairing technology shows great promise. It is now possible to pinpoint a single user browsing the web with different devices in a very reliable manner. Say you use the four devices mentioned earlier: a tablet in the morning and the evening; a smartphone for occasional updates on the move; and two PCs (a desktop at the office and a laptop elsewhere). Each time you visit a new site, an audience analytics company drops a cookie that records every move on every site, from each of your devices. Chances are your browsing patterns will be stable (basically your favorite media diet, plus or minus some services better suited to a mobile device). Not only is your browsing profile determined from your navigation on a given site, it is also quite easy to know which sites you visited before the one currently being monitored, adding further precision to the measurement.
Over time, your digital fingerprint becomes more and more precise. At first, the four cookies are independent of one another. But the analytics firm compiles all the patterns in a single place. By data-mining them, analysts can determine the probability that a cookie dropped in a mobile application, a desktop browser or a mobile web site belongs to the same individual. That’s how multiple pairing works. (For more details on the technical and mathematical side of it, you can read this paper by the founder of Drawbridge Inc.) I recently discussed these techniques with several engineers, both in France and in the United States. All were quite confident that such fingerprinting is doable and that it could be the best way to accurately measure internet usage across different platforms.
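A crude version of the pairing idea can be sketched in a few lines. This is an illustration only: real systems such as Drawbridge’s rely on probabilistic models over far richer signals (timing, location, URL paths), and every name and site below is invented. Here, each cookie is reduced to the set of sites it was seen on, and two cookies are tentatively declared to belong to the same person when their browsing patterns overlap enough.

```python
# Hypothetical sketch of cross-device pairing: cookies whose browsing
# histories overlap beyond a threshold are assumed to be the same person.

def jaccard(a, b):
    """Overlap between two sets of visited sites, from 0.0 to 1.0."""
    return len(a & b) / len(a | b)

def pair_cookies(cookies, threshold=0.5):
    """Return pairs of cookie ids whose site sets look like one individual."""
    ids = list(cookies)
    pairs = []
    for i, x in enumerate(ids):
        for y in ids[i + 1:]:
            if jaccard(cookies[x], cookies[y]) >= threshold:
                pairs.append((x, y))
    return pairs

cookies = {
    "laptop":   {"news.example", "mail.example", "sports.example"},
    "phone":    {"news.example", "mail.example", "maps.example"},
    "stranger": {"shop.example"},
}
print(pair_cookies(cookies))  # [('laptop', 'phone')]
```

The laptop and phone cookies share two of their four distinct sites (a Jaccard overlap of 0.5), so they are paired; the third cookie, with a disjoint media diet, is left alone.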
Obviously, Google is best positioned to perform this task on a large scale. First, its Google Analytics tool is deployed on over 100 million web sites. And Google Ad Planner, even in its public version, already offers a precise view of the performance of many sites in the world. In addition, as one of the engineers pointed out, Google is already performing such pairing simply to avoid showing the same ad twice to someone using several devices. Google is also most likely doing such ranking in order to feed the obscure “quality index” algorithmically assigned to each site. It even does such pairing on a nominative basis by using its half-billion Gmail accounts (425 million in June 2012) and by connecting its Chrome users. As for giving up another piece of internet knowledge to Google, it doesn’t sound like a big deal to me. The search giant already knows much more about sites than most publishers do about their own properties. The only thing that could prevent Google from entering the market of public web rankings would be the prospect of another privacy outcry. But I don’t see why it wouldn’t jump on it — eventually. When that happens, Nielsen will be in big trouble.
Both are great American newspapers, both suffer from the advertising slump and from the transition to digital. But the New York Times’ paywall strategy is making a huge difference.
The Washington Post’s financials provide a good glimpse of the current status of legacy media struggling with the shift to digital. Unlike those of other large dailies, the components of the Post’s P&L appear clearly in its statements; they are not buried under layers of other activities. Product-wise, the Post remains a great news machine, collecting Pulitzer Prizes with clockwork regularity and fighting hard for scoops. The Post also epitomizes an old media house under siege from specialized, more agile outlets such as Politico, ones that break down the once-unified coverage provided by traditional large media houses. In an interview with the New York Times last year, Robert G. Kaiser, a former editor who had been with the paper since 1963, said this:
“When I was managing editor of The Washington Post, everything we did was better than anyone in the business,” he said. “We had the best weather, the best comics, the best news report, the fullest news report. Today, there’s a competitor who does every element of what we do, and many of them do it better. We’ve lost our edge in some very profound and fundamental ways.”
The iconic newspaper has been slow to adapt to the digital era. Its transformation really started around 2008. Since then, it has checked all the required boxes: print and digital production have been integrated; editors are now involved on both sides of news production and relentlessly push the newsroom to write more for the digital version; many blogs covering a wide array of topics have been launched; and the Post now has a good mobile application. The “quant” culture has also set in, with editors now taking into account all the usual metrics and ratios associated with digital operations, including a live update of Google’s most relevant keywords prominently displayed in the newsroom. All this helped the Post collect 25.6 million unique visitors per month, vs. 4 to 5 million for Politico and 35 million for the New York Times, which historically enjoys a more global audience.
Overall, the Washington Post Company still relies heavily on its education business, as shown in the table below:
Revenue:.......$4.0bn (-3% vs. 2011)
Education:.....$2.2bn (-9%)
Cable TV:......$0.8bn (+4%)
Newspaper:.....$0.6bn (-7%)
Broadcast TV:..$0.4bn (+25%)
But the education business is no longer the cash cow it used to be. Not only did its revenue decrease but, last year, it lost $105m vs. a $96m profit in 2011. As for the newspaper operation, it widened its losses to $53m in 2012 from $21m in 2011. And the trend is worsening: for the first quarter of 2013, the newspaper division’s revenue decreased by 4% vs. a year ago and it lost $34m vs. $21m for Q1 2012.
Now, let’s move to a longer-term perspective. The chart below sums up the Post’s (and others legacy media’s) problem:
Translated into a table:
                  Q1-2007   Q1-2013   Change %
Revenue (All):....$219m.....$127m.....-42%
Print Ad:.........$125m.....$49m......-61%
Digital Ad:.......$25m......$26m......+4%
A huge depletion in print advertising and, at best, a flat line for digital advertising: these two elements sum up the equation faced by traditional newspapers going from print to online.
Now, let’s look at the circulation side using a comparison with the New York Times. (Note that it’s not possible to extract the same figures for advertising from the NYT Co.’s financial statements because they aggregate too many items.) The chart below shows the evolution of the paid circulation for the Post between 2007 and 2013:
…and for the NY Times:
Call it the paywall effect: The New York Times now aggregates both print and digital circulation. The latter now amounts to 676,000 digital subscribers recruited through the NYT’s metered system (see previous Monday Notes under the “paywall” tag). (Altogether, digital subscribers to the NYT, the International Herald Tribune and the Boston Globe now number 708,000.) It seems the NYT has found the right formula: its digital subscriber portfolio grows at a rate of 45% per year, thanks to a combination of sophisticated marketing, mining of customer data and aggressive pricing (it even pushes special deals for Mother’s Day). All this adds to the bottom line: if each digital sub brings in $12 a month, the result is about $100m a year that didn’t exist two years ago. But the advertising side doesn’t benefit; it continues to suffer. For the first quarter of 2013 vs. the same period last year, the NYT Company lost 13% in print ad revenue and 4% in digital ads. (As usual in their earnings calls, NYT officials mention the deflationary effect of ad exchanges as one cause of the erosion in digital ads.)
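The back-of-the-envelope math behind that $100m figure checks out, assuming the $12-a-month approximation cited above:

```python
# 676,000 digital subscribers paying roughly $12 a month, annualized.
subscribers = 676_000
monthly_rate = 12  # dollars per subscriber; the approximation cited above
annual_revenue = subscribers * monthly_rate * 12
print(f"${annual_revenue / 1e6:.0f}m a year")  # $97m a year
```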
One additional sign that digital advertising will remain in the doldrums: Politico, too, is exploring alternatives; it will be testing a paywall on a sample of six states and for its readers outside the United States. The system will be comparable to NYT.com’s or FT.com’s, with a fixed number of articles available for free (see Politico’s management internal memo).
It is increasingly clear that readers are more willing than we once thought to pay for content they value and enjoy. With more than 300 media companies now charging for online content in the U.S., the notion of paying to read expensive-to-produce journalism is no longer that exotic for sophisticated consumers.
Jawbone is launching its UP wristband in Europe. Beyond the quirky gadget lies a much larger project: changing healthcare — for better or for worse.
Hyperkinetic as he is, Hosain Rahman, the Jawbone founder, must be saturating his Jawbone UP wristband with data. The rubberized band, nicely designed by Yves Behar, is filled with miniaturized electronics: accelerometers and sensors monitor your activity throughout the day, recording every motion in your life, from walking in the street to the micro-movements of your hand during a paradoxical sleep phase. For the fitness freak, the UP is a great stimulus to sweat even more; for the rest of us, it’s more of an activity and sleep monitoring device. (For a complete product review, see this article from Engadget, and also watch Hosain Rahman’s interview by Kevin Rose; it’s well worth your time.) Last week in Paris, after my meeting with Hosain, I headed straight to the nearest Apple Store to pick up my UP (for €129), with the goal of exploring my sleeping habits in greater depth.
After a couple of days using the device, the app that comes with it tells me I’m stuck in a regime of 5 to 6 hours of bad sleep — including less than three hours of slow-wave sleep, commonly known as deep sleep. Interesting: two years ago, I spent 36 hours covered with electrodes and sensors in a hospital specializing in studying and (sometimes) treating insomnia — after six months on a waiting list to get the test. At one point, to monitor my sleep at home, doctors lent me a cumbersome wristband the size of a matchbox. The conclusion was unsurprising: I was suffering from severe insomnia, and there was very little they could do about it. The whole sleep exploration process must have cost the French public health care system €3,000, 20 times more than the Jawbone gadget (or the ones that do a similar job). I’m not contending that medical monitoring performed by professionals can be matched by a wristband loaded with sensors purchased in an electronics store. But, aside from the cost, there is another key difference: the corpus of medical observations is based on classic clinical tests of a small number of patients. Jawbone, on the other hand, thinks of the UP wristband — to be worn 24/7 by millions of people — in a Big Data frame of mind. Hosain Rahman is, or will soon be, right when he says his UP endeavor contributes to the largest sleep study ever done.
Then it gets interesting. As fun as they can be, existing wearable monitoring devices are in the stone age compared to what they will become in three to five years. When I offered Hosain a list of features that could be embedded in future versions of the UP wristband — such as a GPS module (for precise location, including altitude), heartbeat, blood pressure, skin temperature and acidity sensors, a Bluetooth transmitter — he simply smiled and conceded that my suggestions were not completely off-track. (Before going that far, Jawbone must solve the battery-life issue and most likely design its own dedicated super-low-consumption processor.) But Hosain also acknowledges that his company is fueled by a much larger ambition than simply building a cool piece of hardware aimed at fitness enthusiasts or hypochondriacs.
His goal is nothing less than disrupting the healthcare system.
The VC firms backing Jawbone are on the same page. The funding calendar compiled by Crunchbase speaks for itself: of the stunning $202m raised since 2007, most ($169m) has come since 2011, the year of the first iteration of the UP wristband (a failure due to major design flaws). All the big houses are on board: Khosla Ventures, Sequoia, Andreessen Horowitz, Kleiner Perkins, Deutsche Telekom… They all came with an identical scheme in mind: a massive deployment of the monitoring wristband and a series of deals with the biggest healthcare companies in America to subsidize the device. All this could result in the largest health-related dataset ever built.
The next logical step would be the development of large statistical models based on customers’ recorded data. As far as privacy is concerned, no surprise: Jawbone is pretty straightforward and transparent: see its disclosure here. It collects everything: name, gender, height and weight, location (thanks to the IP address) and, of course, all the information gathered by the device or entered by the user, such as eating habits. A trove of information.
Big Data businesses focusing on health issues drool over what can be done with such a detailed dataset coming from, potentially, millions of people. Scores of predictive morbidity models can be built, from the most mundane — back pain correlated with sleep deprivation — to the most critical, involving heart conditions linked to various lifestyle factors. When asked about privacy issues, Hosain Rahman insists on Jawbone’s obsessive protection of its customers, but he also acknowledges that his company can build detailed population profiles and characterize various risk factors with substantially greater granularity.
This means serious business for the health care and insurance sectors — and equally serious concerns for citizens. Imagine, just for a minute, the impact of such data on the pricing structure of your beloved insurance company. What about your credit rating if you fall into an at-risk category? Or simply your ability to get a job? Of course, the advent of predictive health models potentially benefits everyone. But, at this time, we don’t know if and how the benefits will outweigh the risks.
In the search for new advertising models, Native Ads are booming. The ensuing Web vs. Native controversy is a festival of fake naïveté and misplaced indignation.
Native Advertising is the politically correct term for Advertorial, period. Or rather, it’s an upgrade: the digital version of an old practice dating back to the era of typewriters and lead printing presses. Everyone who’s been in the publishing business long enough has in mind the tug-of-war with the sales department, which always wants its ads to appear next to editorial content that will provide good “context”. This makes the whole “new” debate about Native Ads quite amusing. The magazine sector (more than newspapers) always referred to “clean” and “tainted” sections. (The latter kept expanding over the years.) In consumer and lifestyle sections, editorial content produced by the newsroom is often tailored to fit surrounding ads (or to flatter a brand that will buy legit placements).
The digital era pushes the trend several steps further. Today, legacy media brands such as Forbes, Atlantic Media, or the Washington Post have joined the Native Ads bandwagon. Forbes even became the poster child for that business, thanks to the fully owned approach carried out by its chief product officer Lewis DVorkin (see his insightful blog and also this panel at the recent Paid Content Live conference). Advertising is not the only way DVorkin has revamped Forbes. Last week, Les Echos (the business daily that’s part of the media group I work for) ran an interesting piece about it titled “The Old Press in Startup Mode” (La vieille presse en mode start-up). It details the decisive — and successful — moves made by the century-old media house: a downsized newsroom and external contributors (thousands of them, mostly unpaid) who produce a huge stream of 400 to 500 pieces a day. “In some cases”, wrote Lucie Robequain, Les Echos’ New York correspondent, “the boundary between journalism and advertorial can be thin…” To which Lewis DVorkin retorts: “Frankly, do you think a newspaper that conveys corporate voices is more noble? At Forbes, at least, we are transparent: we know which company the contributor works for and we expose potential conflicts of interest in the first graph…” Maybe. But screening a thousand contributors sounds a bit challenging to me… And Forbes evidently exposed itself as part of the “sold” blogosphere. Les Echos’ piece also quotes Joshua Benton of Harvard’s Nieman Journalism Lab, who finds the bulk of Forbes’ production to be, on average, not as good as it once was, but concedes the top 10% is actually better…
As for Native Advertising, two years ago, Forbes industrialized the concept by creating BrandVoice. Here is the official definition:
Forbes BrandVoice allows marketers to connect directly with the Forbes audience by enabling them to create content – and participate in the conversation – on the Forbes digital publishing platform. Each BrandVoice is written, edited and produced by the marketer.
In practice, Forbes lets marketers use the site’s Content Management System (CMS) to create their content at will. The commercial deal — from what we can learn — involves volumes and placements that make the rate vary between $50,000 and $100,000 per month. The package can also include traditional banners that send traffic back to the BrandVoice page.
At any given moment, about 16 brands are running on Forbes’ “Voices”. This revenue stream has been a significant contributor to the publisher’s financial performance. According to AdWeek (emphasis mine):
The company achieved its best financial performance in five years in 2012, according to a memo released this morning by Forbes Media CEO Mike Perlis. Digital ad revenue, which increased 19 percent year over year, accounted for half of the company’s total ad revenue for the year, said Perlis. Ten percent of total revenue came from advertisers who incorporated BrandVoice into their buys, and by the end of this year, that share is estimated to rise to 25 percent.
Things seemed pretty positive across other areas of Forbes’ business as well. Newsstand sales and ad pages were up 2 percent and 4 percent, respectively, amid industry-wide drops in both areas. The relatively new tablet app recently broke 200,000 downloads.
A closer look gives a slightly bleaker picture: according to the latest data from the Magazine Publishers Association, between Q1 2012 and Q1 2013, Forbes Magazine (the print version only) lost 16% in ad revenue ($50m to $42m). By comparison, Fast Company scored +25%, Fortune +7%, but The Economist -27% and Bloomberg Businessweek -30%. Overall, the titles compiled by the MPA are stable (+0.5%).
I almost never click on banners (except to check that they work as expected on the sites and apps I’m in charge of). Most of the time their design sucks, terribly so, and the underlying content is usually below par. However, if the subject appeals to me, I will click on Native Ads or brand content. I’ll read it like any other story, knowing full well it’s promotional material. The big difference between a crude ad and a content-based one is the storytelling dimension. Fact is, every company has great stories to tell about its products, strategy or vision. And I don’t see why they shouldn’t be told with the same storytelling tools news media use. As long as it’s done properly, with a label explaining the content’s origin, I don’t see the problem (for more on this question, read a previous Monday Note: The Insidious Power of Brand Content). In my view, Forbes does blur the line a bit too much, but Atlantic’s business site Quartz is doing fine in that regard. With the required precautions, I’m certain Native Ads, or branded content, are a potent way to go, especially considering the alarming state of other forms of digital ads. Click-through rates are much better (2%-5% vs. a fraction of a percent for a dumb banner) and the connection to social media works reasonably well.
For news media companies obsessed with their journalistic integrity (some still are…), the development of such new formats makes it more complicated to decide what’s acceptable and what’s not. Ultimately, the editor should call the shots. Which brings us to the governance of media companies. For digital media, the pervasive advertising pressure is likely to keep growing. Today, most rely on a Chief Revenue Officer to decide what’s best for the bottom line: balancing circulation and advertising, or arbitrating between large audience/low yield and smaller audience/higher yield, for instance. But, in the end, only the editor must be held accountable for the content’s quality and credibility — which contribute to the commercial worthiness of the media. Especially in the digital field, editors should be shielded from business pressure. Editors should be selected by CEOs and appointed by boards or, better, boards of trustees. Independence will become increasingly scarce.
The small Baltic republic of Estonia is run like a corporation. But its president believes government must play a crucial role in areas of digital policy such as secure ID.
Toomas Hendrik Ilves must feel one of a kind when he attends international summits. His personal trajectory has nothing in common with the backgrounds of other heads of state. Born in Stockholm in 1953, where his parents had taken refuge from Soviet-controlled Estonia, Ilves was raised mostly in the United States. There, he got a bachelor’s degree in psychology from Columbia University and a master’s degree in the same subject from the University of Pennsylvania. In 1991, when Estonia became independent, Ilves was in Munich, working as a journalist for Radio Free Europe (he is also fluent in English, German and Latin). Two years later, he was appointed ambassador to — where else? — the United States. In 2006, a centrist coalition elected him president of the republic of Estonia (1.4m inhabitants).
One more thing about Toomas Hendrik Ilves: he programmed his first computer at the age of 13. A skill that would prove decisive for his country’s fate.
Last week in Paris, president Ilves was the keynote speaker at a conference organized by Jouve Group, a 3,000-employee French company specializing in digital distribution. The bow-tied Estonian captivated the audience with his straight talk, the polar opposite of the classic politician’s. Here are excerpts from my notes:
“At the [post-independence] time, the country, plagued by corruption, was rather technologically backward. To give an example, the phone system in the capital [Tallinn] dated back to 1938. One of our first key decisions was to go for the latest digital technologies instead of being encumbered by analog ones. For instance, Finland offered to provide Estonia with much more modern telecommunication switching systems, but still based on analog technology. We declined, and elected instead to buy the latest digital network equipment”.
Estonia’s ability to build a completely new infrastructure without being dragged down by technologies from the past (and by the old guard defending them) was essential to the nation’s development. When I later asked him about the main resistance factors he had encountered, he mentioned legacy technologies: “You in France almost invented the internet with the Minitel. Unfortunately, you were still pushing the Minitel when Mosaic [the first web browser] was invented”. (The videotext-based system was finally retired in… 2012. France lost almost a decade by delaying its embrace of Internet protocols.)
The other key decision was introducing computers in schools and teaching programming on a large scale. Combined with the hunger for openness in a tiny country emerging from 45 years of Soviet domination, this explains why Estonia has become an energetic tech incubator, nurturing big names like Kazaa and Skype (Skype still maintains its R&D center in Tallinn.)
“Every municipality in Estonia wanted to be connected to the Internet, even when officials didn’t know what it was. (…) And we looked on with envy… With neighbors such as Finland or Sweden, the countries of Nokia and Ericsson, we wanted to be like them.”
To further encourage the transition to digital, cities opened Internet centers giving access to people who couldn’t afford computers. While in Western Europe the Internet was seen as a prime vector of American imperialism, up in the newly freed Baltic states it was seen as an instrument of empowerment and access to the world:
“We wanted to take a leap forward and build a modern country from the outset. The first public service we chose to take digital was the tax system. As a result, not only did we eliminate corruption in the tax collection system (a computer is difficult to bribe), but we also increased the amount of money the state collected. We put some incentives in: when filing digitally, you’d get your tax refund within two weeks versus several months with paper. Today, more than 95% of tax returns are filed electronically. And the fact that we collected more money overcame most of the resistance in the administration and paved the way for future developments”.
“At some point we decided to give every citizen a chip-card… In other words, a digital ID card. When I first mentioned this to some Anglo-Saxon government officials, they opposed the classic “Big Brother” argument. Our belief was that if we really wanted to build a digital nation, the government had to be the guarantor of digital authentication by providing everyone with a secure ID. It is the government’s responsibility to ensure that someone who connects to an online service is the right person. It is all built on the public key-private key encryption system. In Estonia, a digital ID is a legal signature. The issue of secure ID is essential, otherwise we’ll end up stealing from ourselves. Big Brother is not the State; Big Brother lies in Big Data.”
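The public key-private key mechanism Ilves refers to can be sketched in a few lines. This is a deliberately toy illustration (tiny primes, textbook RSA); a real ID card uses 2048-bit keys or elliptic curves, with the private key locked inside the chip:

```python
import hashlib

# Toy illustration of the signature scheme behind a digital ID.
# The primes are absurdly small; real cards use far larger keys.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent, kept on the chip-card

def sign(message: str) -> int:
    """Signing: only the card holder (who knows d) can produce this."""
    digest = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(digest, d, n)

def verify(message: str, signature: int) -> bool:
    """Verification: anyone with the public key (n, e) can check it."""
    digest = int(hashlib.sha256(message.encode()).hexdigest(), 16) % n
    return pow(signature, e, n) == digest

sig = sign("tax return 2012")
assert verify("tax return 2012", sig)     # genuine signature passes
assert not verify("altered return", sig)  # tampering is detected
```

The key property is the one Ilves calls "legal signature": verification needs only public information, while forging a signature requires the private key that never leaves the card.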
“In Estonia, every citizen owns his or her data and has full access to it. We currently have about 350 major services securely accessible online. A patient never gets a paper prescription; the doctor loads the prescription onto the card and the patient can go to any pharmacy. The system will soon be extended to Sweden, Denmark, Finland and Norway, as our citizens travel a lot. In addition, everyone can access their medical records. But they can choose which doctors can see them. I was actually quite surprised when a head of state from Southern Europe told me some paper medical records bear the mention “not to be shown to the patient” [I suspect it was France…]. As for privacy protection, the ID chip-card works both ways. If a policeman wants to check on your boyfriend outside the boundaries of a legal investigation, the system will flag it — it actually happened.”
As the Estonian president explained, some good decisions also come out of pure serendipity:
“[In the Nineties], Estonia had the will but not all the financial resources to build all the infrastructure it wanted, such as massive centralized data centers. Instead, the choice was to interconnect, in the most secure way, all the existing government databases. The result has been a highly decentralized network of government servers that prevents most abuses. Again, the citizen can access his health records, his tax records, the DMV [Department of Motor Vehicles], but none of the respective employees can connect to another database”.
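The access rule Ilves describes — a citizen can query any database about himself, while an agency employee can only query his own agency’s — amounts to a simple authorization check. A minimal sketch, with hypothetical role and database names (the real Estonian data-exchange layer is of course far more elaborate):

```python
# Which database each (hypothetical) agency role may query.
PERMISSIONS = {
    "health_clerk": {"health"},
    "tax_clerk":    {"tax"},
    "dmv_clerk":    {"dmv"},
}

def may_read(role: str, database: str, requester_id: str, subject_id: str) -> bool:
    # A citizen may read any record about himself or herself...
    if role == "citizen":
        return requester_id == subject_id
    # ...while an employee may only query the database of his own agency.
    return database in PERMISSIONS.get(role, set())

assert may_read("citizen", "health", "EE123", "EE123")        # own record: yes
assert not may_read("tax_clerk", "health", "EE999", "EE123")  # cross-agency: no
```

Decentralization does the rest: since each agency keeps its own servers, a compromised clerk account exposes at most one database, never the whole state.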
The former Soviet Union made the small Baltic state pay a heavy price for its freedom. In that respect, I recommend reading Cyber War by Richard Clarke, a former cyber-security advisor in the Clinton administration, who describes the multiple cyber-attacks suffered by Estonia in 2007. These actually helped the country develop skillful specialists in the field. Since 2008, Tallinn has hosted NATO’s main cyber defense center, in addition to an EU large-scale IT systems center.
Toomas Hendrik Ilves stressed the importance of cyber-defense, at both the public and private sector levels:
“Vulnerability to cyber attacks must be seen as a complete market failure. It is completely unacceptable for a credit card company to deduct theft from its revenue base, or for a water supply company to invoke cyber attack as a force majeure. It is their responsibility to protect their systems and their customers. (…) Every company should be aware of this, otherwise we’ll see all our intellectual property ending up in China”.
The Norwegian media group Schibsted now aggressively invests in startups. The goal: digital dominance, one market at a time. France is next in line. Here is a look at their strategy.
This thought haunts most media executives’ sleepless nights: “My legacy business is taking a hit from the internet; my digital conversion is basically on track, but it comes with massive value destruction. We need both a growth engine and consolidation. How do we achieve this? What are our core assets to build upon? Should we undertake a major diversification that could benefit from our brand and know-how?” (At that moment, the buzzer goes off; it’s time to go to work.) Actually, such nighttime cogitations are a good sign: they are the privilege of people gifted with a long-term view.
The Scandinavian media powerhouse Schibsted ASA falls into the long-termist category. Key FY 2012 data follow. Revenue: 15bn Norwegian kroner (€2bn or $2.6bn); EBIT margin: 13.5%. The group currently employs 7,800 people spread over 29 countries. 40% of the revenue and 69% of the EBITDA come from online activities. Online classifieds account for 25% of revenue and 52% of the EBITDA; the rest comes from publishing. (The usual disclosure: I worked for Schibsted between 2007 and 2009, in the international division).
The company went through the delicate transition to digital about five years ahead of other media conglomerates in the Western world. To be fair, Schibsted enjoyed unique conditions: profitable print assets, huge penetration in small Nordic markets immune to foreign players, a solid grasp of all components of the business, from copy sales and subscriptions for newspapers and magazines to advertising and distribution channels. In addition, the group enjoys a stable ownership structure (controlled by a trust), and its board always encourages the management to aim high and take risks. The company is led by a lean team: only 60 people at the Oslo headquarters, largely staffed by McKinsey alumni, oversee the entire operation.
The transition began in 1995 when Schibsted came to realize the media sector’s center of gravity would inevitably shift to digital. The move might be gradual for reading habits, but it would definitely be swift and hard for critical revenue streams such as classifieds and consumer services. Hence the unofficial motto that still remains at the core of Schibsted’s strategy: Accelerating the inevitable (before the inevitable falls on us). Such a view led to speeding up the demise of print classifieds, for instance, in order to free oxygen for emerging digital products. Not exactly popular at the time but, thanks to methodical pedagogy, the transition went well.
One after the other, business units moved to digital. Then, the dot-com crash hit. In Norway and Sweden, Schibsted media properties were largely deployed online, with large dedicated newsrooms and emerging consumer services built from scratch or through acquisitions. Management wondered what to do: should we opt for a quick and massive downsizing to offset a brutal 50% drop in advertising revenue? Schibsted took the opposite tack: yes, business is terrible, but this is mostly the result of the financial crisis; the audience is still here; not only will it not go away but, eventually, it will experience huge growth. This was the basis for two key decisions: pursuing investments in digital journalism while finding ways to monetize it; and doing whatever it took in order to dominate the classifieds business.
In Sweden, a bright spot kept blinking on Schibsted’s radar. Blocket was growing like crazy. It was a bare-bones classifieds website, offering a mixture of free and premium ads in the simplest and most efficient way. At first, Schibsted Sweden tried to replicate Blocket’s model with the goal of killing it. After all, the group thought, it had all the media firepower needed to lift any brand… Wrong. After a while, it turned out Schibsted’s copycat still lagged behind the original. With the kind of pragmatism allowed by deep pockets, Schibsted decided to acquire Blocket (for a hefty price). The clever classifieds website would become the matrix for the group’s foray into global classifieds.
By 2006, Schibsted had acquired and developed a cluster of consumer-oriented websites, from Yellow-Pages-like directories to price-comparison sites and consumer-data services. Until then, the whole assemblage had been built on pure opportunism. It was time to put things in order. Hence, in 2007, the creation of Tillväxmedier, the first iteration of Schibsted Development. (The Norwegian version was launched in 2010 and the French one starts this year).
Last week in Paris, I met Richard Sandenskog, Tillväxmedier’s investment manager, and Marc Brandsma, the newly appointed CEO of Schibsted Development France. Sandenskog is a former journalist who also spent eight years in London as a product manager for Yahoo! Brandsma is a seasoned French entrepreneur and former venture capitalist. Despite local particularisms precluding a dumb replication of Nordic successes, two basic principles remain:
1. Invest in the number one in a niche market, or a potential number one in a larger one. “In the online business, there is no room for number two”, said Richard Sandenskog. “We want to leverage our dominance on a given market to build brands and drive traffic. The goal is to find the best way to expose the new brand in different channels and integrate it in various properties. The keyword is relevant traffic. We don’t care for page views for their own sake, but for the value they bring. We see clicks as a currency.”
2. Picking the right product in the right sector. In Sweden, the Schibsted Development portfolio revolves around the idea of empowering the consumer. To sum up: people are increasingly lost in a jungle of pricing, plans, offers and deals for the services they need. It could be cell phones, energy bills, consumer loans… Hence a pattern for acquisitions: a bulk-purchase web site for electricity (the Swedish market is largely deregulated, with about 100 utility companies); a helper to find the best cellular carrier plan based on individual usage; a personal finance site that lets consumers shop around for the best loan without degrading their credit rating; a personal factoring service where anyone can auction off invoices, etc.
Most are now #1 in their segment. “We give the power back to the consumer, sums up Richard Sandenskog. We are like Mother Teresa but we make money doing it….” Altogether, Tillväxmedier’s portfolio encompasses about 20 companies that made a billion Swedish kronor (€120m, $155m) in 2012 with a 12% EBITDA margin (several companies are still in their growth phase.) All in five years…
France will be a different story. It’s five times bigger than Sweden, a market in which startups can be expensive. But what triggered Schibsted ASA’s decision to create a growth vehicle here is the spectacular performance of the classifieds site LeBoncoin.fr (see a previous Monday Note, Schibsted’s extraordinary click machines): €98m in revenue and a cool 68% EBITDA margin last year. LeBoncoin draws 17m unique visitors (according to Nielsen). Based on this valuable asset, explains Marc Brandsma, the goal is to create the #1 online group in France (besides Facebook and Google). “The typical players we are looking for are B2C companies that already have a proven product — we won’t invest in PowerPoint presentations — driven by a management team aiming to be the leader in their market. Then we acquire it; we buy out all minority shareholders if necessary”. No kolkhoz here; decisions must be made quickly, without interference. “At that point, adds Brandsma, we tell managers we’ll take care of growth by providing traffic, brand notoriety, marketing, all based on best practices and proven Schibsted expertise”. Two sectors Marc Brandsma says he won’t touch, though: business-to-business services and news media (ouch…)
Publishers are concerned: The shift to mobile advertising revenue is lagging way behind the transfer of users to smartphones and tablets. Solutions are coming, but it might take a while before mobile ads catch up with users.
(A mistake in the ad revenue chart has been corrected)
Last week, at a self-congratulatory celebration held by the French audit bureau of circulation (called OJD), the sports daily l’Equipe was honored for the best progression in mobile audience. (I’m also happy to mention that Les Echos, the business group I’m working for, won the award for the largest growth in overall circulation, with a gain of +3.3% in 2012 — in a national market losing 3.8%.) In terms of mobile page views, l’Equipe is three times bigger than the largest national daily (Le Monde). Unfortunately, its publisher tarnished the end of the ceremony a bit by saying [I’m paraphrasing]: “Well, thanks for the award. But let’s not fool ourselves. The half of our digital traffic that comes from mobile represents only 5% of our overall digital revenue. We better react quickly, otherwise we’ll be dead soon”. While that outburst triggered only reluctant applause, almost everyone in the audience agreed.
Two days before, IREP (an advertising economics research organization) released 2012 data on advertising revenue for all media. Here is a quick look:
All media ........... €13,300m ..... -3.5%
TV .................. €3,300m ...... -4.5%
Print press (all) ... €3,209m ...... -8.2%
National Dailies .... €233m ........ -8.9%
Internet Display .... €646m ........ +4.8%
Internet Search ..... €1,141m ...... +7%
Mobile .............. €43m ......... +29%
A few comments:
— The print press is nosediving faster than ever: in 2011, national dailies were losing 3.7% in revenue; in 2012, they lost almost 9%; and Q1 2013 doesn’t look any better.
— On the digital side: search is now almost twice as big as display ads and it’s growing faster (7% vs. 4.8%). Google is grabbing most of this growth, as the €1.14bn in revenue mentioned by IREP is roughly the equivalent of Google’s revenue in France.
— Mobile revenue is the fastest growing segment (+29%), but accounts for only about 2% of the entire digital segment (€1,830m in revenue in 2012).
Looking at audiences reveals an even bleaker picture. Data compiled by the French circulation bureau for 87 media show that, between February 2012 and February 2013, the mobile applications audience grew 67% in visits and 102% in page views — again, in a segment that only grew 29% for 2012:
The conclusion is dreadful: not only do audiences massively flock to mobile (more visits) and spend more time in their favorite media apps (an even greater increase in page views), but each viewer also brings less and less money, as ad revenue grew slower than visits (by a factor of two) and slower than page views (by a factor of three).
At the same time, in order to address this shift in audience, media are allocating more and more resources to mobile: apps gain in sophistication and have to run on a greater number of devices. By the end of this year, the iOS ecosystem, until recently the simplest to deal with, will have at least five different screen sizes, and Android dozens of possible configurations. To add insult to injury, mobile apps don’t allow cookies, which prevents most measurements, and users tend to switch randomly between their mobile devices and their PC or tablet, making tracking even more difficult…
Where do we go from here?
Publishers have no choice but to follow their readers. But, in doing so, they had better be smart and select the right vectors. The coming months and years are likely to see scores of experiments. Native applications, meaning those dedicated to a given ecosystem, might not last forever. For now, they still offer superior performance, but web apps, served from the internet regardless of the terminal’s operating system, are gaining traction. They are becoming more fluid, accommodating more functionality and improving their storage of content for offline reading, but it will be a while before they become mainstream. In addition, web apps allow continuous improvement; if you look at the version numbers of web apps, you’ll see publishers pushing new releases on a weekly basis. They do so at will, as opposed to begging Apple to speed up the approval of native applications (not to mention the absence of a direct link to the customer.)
Similarly, many publishers are placing serious bets on responsive design sites that dynamically adjust to the screen size (see a previous Monday Note on Atlantic’s excellent business site Quartz). Liquid design, as it is also called, is great in theory but extremely difficult to develop and the slightest change requires diving into hugely complex HTML code (which also makes pages heavier to download and render.)
Technically speaking, in the near future, as rendering engines and processors keep improving, the shift to mobile will no longer be a problem. But solving the low yield of mobile advertising is another matter. The advertising community evangelizes the promise of Real-Time Bidding; RTB basically removes the Ken and Barbie from the transaction process, as demand and supply are matched through automated marketplaces. But RTB is also known to push asset prices further down. As usual in the digital ad business, the likely winner will be Google, along with a few smaller players — before these are eventually crushed by Google.
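At the core of most RTB marketplaces is a second-price auction: the highest bidder wins the impression but pays (roughly) the runner-up’s bid, which is one reason prices tend to drift toward the floor. A minimal sketch, with invented bidder names and prices:

```python
# Sketch of a second-price (Vickrey-style) auction as used in RTB
# exchanges. Real exchanges add fees, increments and price floors per
# placement; this keeps only the core rule.
def run_auction(bids: dict, floor: float = 0.0):
    """Return (winner, clearing_price) or None if no bid meets the floor."""
    eligible = {b: v for b, v in bids.items() if v >= floor}
    if not eligible:
        return None
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # Winner pays the second-highest bid, or the floor if unopposed.
    price = ranked[1][1] if len(ranked) > 1 else floor
    return winner, price

winner, price = run_auction({"brandA": 2.10, "brandB": 1.40, "brandC": 0.90},
                            floor=0.50)
assert winner == "brandA" and price == 1.40  # pays the second-highest bid
```

Note the deflationary mechanics: the clearing price is set by the second bid, so thin demand on an impression immediately translates into a lower price for the publisher.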
The mobile ecosystem will come up with smarter innovations. Some will involve geo-located advertising, but the concept, great in demos, has yet to prove its revenue potential. Data collected through various means are a much more potent vector for stimulating mobile ads. Facebook knows it only too well: in the last quarter of 2012, it made $305m in mobile ads (more than five times the French mobile ad market… in one quarter!), accounting for 23% of FB’s total revenue.
Other technologies look more far-fetched but quite promising. This article in the MIT Technology Review features a company that could solve a major issue: following users as they jump from one device to another. Drawbridge, Inc. was founded by Kamakshi Sivaramakrishnan, a statistics and probability PhD from Stanford. Her pitch (see a video here): bridging smartphones, tablets and PCs thanks to what she calls a “giant statistical space-time data triangulation technique”. In plain English: a model that generates clusters (based on patterns of usage and collected data) used to create a “match” pinpointing an individual’s collection of devices. The goal is to give advertisers the ability to easily extend their campaigns from PC to mobile terminals. High potential indeed. It caught the interest of two major venture capital firms, Kleiner Perkins Caufield & Byers and Sequoia Capital, which together injected $20m into the startup. Drawbridge claims to have already bridged about 540 million devices (at a rate of 800 per minute!)
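The intuition behind this kind of probabilistic matching can be sketched very simply: represent each device by a usage “fingerprint” and pair devices whose patterns are most similar. The vectors and device names below are invented for illustration; Drawbridge’s actual model uses far richer space-time features than these toy hourly activity counts:

```python
import math

def cosine(a, b):
    """Cosine similarity between two usage-pattern vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical activity fingerprints (e.g. visits per time-of-day bucket).
profiles = {
    "alice_phone":  [9, 1, 0, 7],
    "alice_laptop": [8, 2, 0, 6],   # similar rhythm: likely the same user
    "bob_tablet":   [0, 6, 9, 1],
}

def best_match(device, profiles):
    """Return the other device whose pattern is closest to `device`'s."""
    others = {d: v for d, v in profiles.items() if d != device}
    return max(others, key=lambda d: cosine(profiles[device], others[d]))

assert best_match("alice_phone", profiles) == "alice_laptop"
```

Scale this from three devices to hundreds of millions, add probabilistic thresholds to avoid false matches, and you get the shape of the problem Drawbridge is tackling.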
This could be one of the many boards used to ride the Mobile rogue wave and, for many players, avoid drowning.
Fully or partially autonomous vehicles will rely on a large variety of data types. And guess who is best positioned to take advantage of this enormous new business? Yep, Google.
The Google driverless car is an extraordinary technical achievement. To grasp its scope, watch this video featuring a near-blind man sitting behind the wheel of an autonomous Prius as the car does the driving. Or, to get an idea of the complexity of the system, see this presentation by Sebastian Thrun (one of the main architects of Google’s self-driving car project) going through the multiple systems running inside the car.
Spectacular as it is, this public demonstration is merely the tip of the iceberg. For Google, the economics of self-driving cars lie in a vast web of data that will become a must to operate partially or fully self-driving vehicles on a massive scale. This network of data will require immense computational and storage capabilities. Consider the following needs in the context of Google’s current position in related fields.
Maps. Since the acquisition of Where2 Technologies and Keyhole Inc. in 2004, Google has been refining its mapping system over and over again (see this brief history of Google Maps). After a decade of work, Google Maps features a rich set of layers and functions. Its mapping of the world has been supplemented by crowdsourcing systems that allow corrections as well as the creation of city maps where data do not exist. Street View was launched in 2007, and more than 5 million miles of metropolitan areas have been covered. Today, maps are augmented with satellite imagery, 3D, 45-degree aerial views, and renderings of buildings and infrastructure. All of this is now merged: you can plunge from a satellite view down to street level.
Google’s goal is building the most complete and reliable map system in the world. Gradually, the company is replacing geo-data from third-party suppliers with data collected by its own crews around the world. To get an idea of how fast Google progresses, consider the following: in 2008, Google mapping covered 22 countries and offered 13 million miles with driving directions. In 2012, 187 countries were covered, with 26 million miles of driving directions, including 29 countries with turn-by-turn directions. On the chart below, you can also see the growing areas of Google-sourced maps (in green) as opposed to licensed data (in red):
Apple’s failure in maps shows that, regardless of the amount of money invested, experience remains a key element. In California and India, Google maintains a staff of hundreds if not thousands of people manually checking key spots in large metropolitan areas and correcting errors. They rely on users whose individual suggestions are manually checked, using Street View imagery as shown here (the operator drags the 360° Street View image to verify signs at an intersection — click to enlarge.)
Google’s engineers even developed algorithms aimed at correcting slight misalignments between “tiles” (pieces of satellite imagery stitched together) that could result from… tectonic plate movement — it can happen when two pictures are taken two years apart. Such accuracy is not a prerequisite for current navigation, but it could be important for autonomous cars that will depend heavily on ultra-precise mapping of streets and infrastructure (think what centimeters/inches mean when cars are close together on the road).
But, one might object, Google is not the only company providing geo-data and great mapping services. True: the Dutch company TomTom and the Chicago-based Navteq have been doing this for years. As geo-data became strategically important, TomTom acquired Tele Atlas for $2.9bn in 2008, and Nokia bought Navteq in 2007. But Google intends to move one step ahead by merging its mapping and imagery technologies with its search capabilities, as in this image:
Accurate, usable and data-rich maps are one thing. Now, when you consider the variety of data needed for autonomous or semi-autonomous vehicles, the task becomes even more enormous. The list goes on:
Traffic conditions will be a key element. It’s pointless to envision fleets of self-driving or assisted-driving cars without systems to manage traffic. This goes along with infrastructure development. For instance, as Dr. Kara Kockelman, professor of transportation engineering at the University of Texas at Austin, explained to me, in the future we might see substantial infrastructure renovation aimed at accommodating autonomous vehicles (or vehicles set on self-driving mode). Dedicated highway corridors would be allocated to “platoons” of cars driving close together, faster and more safely than manned cars. Intersections, she said, are also a key challenge, as they are responsible for most traffic jams (and a quarter of accidents). With the advent of autonomous vehicles, we can imagine cars being taken over by intersection management systems that regroup them into platoons and feed them seamlessly into intersecting traffic flows, as in this spectacular simulation. If traffic lights are still needed, they will change every five or six seconds just to optimize the flow.
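The intersection management idea described above is often modeled as slot reservation: each approaching platoon requests a window to cross, and the controller grants the earliest window that doesn’t overlap an existing reservation. A hedged sketch with purely illustrative numbers (real reservation systems also model conflicting trajectories, speeds and safety margins):

```python
# Toy slot-based intersection controller: grant each platoon the earliest
# non-overlapping crossing window. All times are in seconds.
def reserve(reservations, arrival, crossing_time):
    """Return the granted start time for a platoon arriving at `arrival`.

    `reservations` is a list of (start, end) windows already granted;
    the new window is appended to it as a side effect.
    """
    start = arrival
    for s, e in sorted(reservations):
        if start + crossing_time <= s:  # fits entirely before this window
            break
        start = max(start, e)           # otherwise wait until it clears
    reservations.append((start, start + crossing_time))
    return start

slots = []
assert reserve(slots, arrival=0.0, crossing_time=4.0) == 0.0    # first platoon
assert reserve(slots, arrival=1.0, crossing_time=4.0) == 4.0    # waits its turn
assert reserve(slots, arrival=10.0, crossing_time=4.0) == 10.0  # gap is free
```

The appeal over traffic lights is that no capacity is wasted on empty phases: every second of the intersection is either reserved or immediately available to the next arrival.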
Applied to millions of vehicles, traffic and infrastructure management will turn into a gigantic data and communication problem. Again, Google might be the only entity able to write the required software and to deploy the data centers to run it. Its millions of servers will be of great use to handle weather information, road conditions (as cars might be able to monitor their actual friction on the road and transmit the data to following vehicles, or detect humidity and temperature change), parking data and fuel availability (gas or electricity). And we can even think of merging all this with day-to-day life elements such as individual calendars, commuting patterns and geolocating people through their cell phones.
If the data collection and crunching tasks can conceivably be handled by a Google-like player, communications remain an issue. “There is not enough overlap between car-to-car communication and work in other fields”, Sven Beiker, director of the Center for Automotive Research (CARS) at Stanford, told me (see his recent lecture about The Future of the Car). He is actually echoing executives from Audi (which made a strategic deal with Google), BMW and Ford; together at the Mobile World Congress, they were critical of cell phone carriers’ inability to provide the right 4G (LTE) infrastructure to handle the amount of data required by future vehicles.
Finally, there is the question of an operating system for cars. Experts are divided. Sven Beiker believes the development of self-driving vehicles will depend more on communication protocols than on an OS per se. Others believe that Google, with its fleet of self-driving Priuses criss-crossing California, is building the first OS dedicated to autonomous vehicles. At some point, the search giant could combine its mapping, imagery and local search capabilities with the accumulation of countless self-driven miles, along with scores of specific situations “learned” by the cars’ software. The value thus created would be huge, giving Google a decisive position in yet another field. The search company could become the main provider of both systems and data for autonomous or semi-autonomous cars.