About Frédéric Filloux

Posts by Frédéric Filloux:

Your smartphone, your moods, their market

 

Coupled with facial imaging, the smartphone could become the ultimate media analytics tool for evaluating editorial content or measuring the effectiveness of ads. Obviously, there are darker sides. 

When it comes to testing new products, most of us have been through the focus group experience. You sit behind a one-way mirror and watch a handpicked group of people dissect your new concept: a magazine redesign, a new website or a communication campaign. It usually lasts a couple of hours during which the session moderator does his best to extract intelligent remarks from the human sample. Inevitably, the client — you, me, behind the glass — ends up questioning the group’s relevance, the way the discussion was conducted, and so on. In the end, everyone makes up their own interpretation of the analyst’s conclusions. As usual, I’m caricaturing a bit; plus, I’m rather in favor of product pre-tests as they always yield something useful. But we all agree the methods could be improved — or supplemented.

Now consider Focus Group 2.0: To a much larger sample (say, a few hundred people), you send a mockup of your next redesign, a new mobile app, or an upcoming ad campaign you’d better not flunk. The big 2.0 difference resides in a software module installed on the tester’s smartphone or computer that uses the device’s camera to decipher the user’s facial expressions.

Welcome to the brave new world of facial imaging. It could change the way visual designs are conceived and tested, making them more likely to succeed as a result. These techniques are based on the work of American psychologist Paul Ekman, who studied emotions and their relation to facial expression. Ekman was the first to work on “micro-expressions”, which yield authentic, impossible-to-suppress reactions.

The human face has about 43 facial muscles that produce about 8,000 different combinations. None of these expressions are voluntary, nor are they dependent on social origin or ethnicity. The muscles react automatically and swiftly — in no more than 10 or 20 milliseconds — to cerebral cortex instructions sent to the facial nerve.

Last month, in Palo Alto, I met Rick Lazansky, a board director at the venture capital firm Sand Hill Angels. In the course of a discussion about advertising inefficiencies (I had just delivered a talk at Stanford underlining the shortcomings of digital ads), Rick told me he had invested in a Swiss-based company called Nviso. Last week, we set up a Skype conference with Tim Llewellyn, founder and CEO of the company. (Nviso is incubated on the campus of the Swiss Federal Institute of Technology in Lausanne, where Dr. Matteo Sorci, Nviso’s chief scientist and co-founder, used to work.)

Facial Imaging’s primary market is advertising, explains the Nviso team. Its technology consists of mapping 143 points on the face, activated by the 43 facial muscles. Altogether, their tiny movements are algorithmically translated into the seven most basic expressions: happiness, surprise, fear, anger, disgust, sadness and neutral, each of them lasting a fraction of a second. In practice, such techniques require careful adjustment as many factors tweak the raw data. But the ability to apply such measurements to hundreds of subjects, in a very short time, ensures the procedure’s statistical accuracy and guarantees consistent results.
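
To make the pipeline concrete, here is a deliberately simplified sketch of the idea: track how the facial landmarks move between a resting frame and the current frame, then score that movement against the seven basic expressions. The landmark count and the expression list come from the description above; the weights and the scoring itself are invented for illustration and have nothing to do with Nviso’s proprietary models.

```python
import numpy as np

# Toy illustration only: score landmark displacements against the seven basic
# expressions. The 143 landmarks and the expression list come from the article;
# the random "weights" stand in for a trained model and are NOT Nviso's method.
EXPRESSIONS = ["happiness", "surprise", "fear", "anger", "disgust", "sadness", "neutral"]
N_LANDMARKS = 143

rng = np.random.default_rng(0)
weights = rng.normal(size=(len(EXPRESSIONS), N_LANDMARKS * 2))

def classify_frame(rest_points, frame_points):
    """Classify one video frame from the displacement of the 143 tracked points."""
    displacement = (frame_points - rest_points).ravel()      # (143, 2) -> (286,)
    scores = weights @ displacement                           # one score per expression
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                                      # softmax over the 7 classes
    return EXPRESSIONS[int(np.argmax(probs))], probs

rest = rng.uniform(size=(N_LANDMARKS, 2))                     # landmarks of a neutral face
frame = rest + rng.normal(scale=0.01, size=(N_LANDMARKS, 2))  # one frame of micro-movement
label, _ = classify_frame(rest, frame)
print(label)
```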

Webcams and, more importantly, smartphone cameras will undoubtedly boost uses of this technology. Tests that once involved a dozen people in a focus group can now be performed using a sample size measured in hundreds, in a matter of minutes. (When scaling up, one issue becomes the volume of data: one minute of video for 200 respondents will generate over 100,000 images to process.)
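
The order of magnitude is easy to check with a quick back-of-the-envelope calculation; the roughly 10 frames-per-second analysis rate is my assumption, not a figure from Nviso.

```python
# Back-of-the-envelope check of the data volume: one minute of video per
# respondent, 200 respondents. The ~10 fps analysis rate is an assumption;
# the article only gives the "over 100,000 images" order of magnitude.
respondents = 200
seconds_per_respondent = 60
frames_per_second = 10   # assumed sampling rate, not a figure from Nviso

total_frames = respondents * seconds_per_respondent * frames_per_second
print(f"{total_frames:,} images to process")   # -> 120,000
```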

Scores of applications are coming. The most solvent field is obviously the vast palette of market research activities. Designers can quickly test logos, layouts, mockups, storyboards. Nviso works with Nielsen in Australia and New Zealand and with various advertisers in Korea. But company execs know many other fields could emerge. The most obvious one is security. Imagine sets of high-speed cameras performing real-time assessment at immigration or at customs in an airport; or a police officer using the same technology to evaluate someone’s truthfulness under interrogation. (The Miranda Warning would need its own serious facelift…) Nviso states that it stays out of this field, essentially because of the high barrier to entry.

Other uses of facial imaging techniques will be less contentious. For instance, they could be of great help to the booming sector of online education. Massive Open Online Courses (Moocs) operators are struggling with two issues: authentication and student evaluation. The former is more or less solved thanks to techniques such as encoding typing patterns, a feature reliably unique to each individual. Addressing evaluation is more complicated. As one Stanford professor told me when we were discussing the fate of Moocs, “Inevitably, after a short while, you’ll have 20% to 30% of the students that will be left behind, while roughly the same proportion will get bored…” Keeping everyone on board is therefore one of the most serious challenges for Moocs. And since Moocs are about scale, such a task has to be handled by machines able to deal with thousands of students at a time. Being able to detect student moods in real time and to guide them to relevant branches of the syllabus’ tree-structure will be essential.

These mood-analysis techniques are still nascent. Besides Nviso, several well-funded companies such as Affectiva compete for the market-research sector. The field will be reinforced by other technologies such as vocal intonation analysis deployed by startups like Beyond Verbal. And there is more in store. This story from Smithsonian.com titled “One day, your smartphone will know if you are happy or sad”, sums up the state of the art with mobile apps designed to decipher your mood based on the way you type, or research conducted by Samsung to develop emotion-sensing smartphones. As far as privacy is concerned, this is just the beginning of the end. Just in case you had a doubt…

frederic.filloux@mondaynote.com

In Bangkok, with the Fast Movers

 

The WAN-IFRA congress in Bangkok showed good examples of the newspaper industry’s transformation. Here are some highlights. 

Last week, I travelled to Bangkok for the 65th congress of the World Association of Newspapers (The WAN-IFRA also includes the World Editors Forum and the World Advertising Forum.) For a supposedly dying industry, the event gathered a record crowd: 1400 delegates from all over the world (except for France, represented by at most a dozen people…) Most presentations and discussions revealed an acceleration in the transformation of the sector.

The transition is now mostly led by emerging countries seemingly eager to rid themselves as quickly as possible of the weight of the past. At a much faster pace than in the West, Latin American and Asian publishers take advantage of their relatively healthy print business to accelerate the online transition. These many simultaneous changes involve spectacular newsroom transformations where the notion of publication gives way to massive information factories equally producing print, web and mobile content. In these new structures, journalists, multimedia producers, developers (a Costa Rican daily has one computer wizard for every five journalists…) are blended together. They all serve a vigorous form of journalism focused on the trade’s primary mission: exposing abuses of power and public or private failures (the polar opposite of the aggregation disease.) To secure and boost the conversion, publishers rethink the newsroom architecture, eliminate walls (physical as well as mental ones), and overhaul long-established hierarchies and desk arrangements (often an inheritance of the paper’s section structure.)

In the news business, modernity no longer resides in the Western hemisphere. In Europe and in the United States, a growing number of readers are indeed getting their news online, but in a terrifyingly scattered way. According to data compiled by media analyst Jim Chisholm, newspapers represent 50.4% of internet consumption when expressed in unique visitors, but only 6.8% in visits, 1.3% in time spent, and 0.9% in page views!… “The whole battle is therefore about engagement”, says WAN-IFRA general manager Vincent Peyregne, who underlines that the level of engagement for digital represents about 5% of what it is for print — which matches the revenue gap. This is consistent with Jim Chisholm’s views stated a year ago in this interview with Ria Novosti [emphasis mine]:

If you see, how often in a month do people visit media, they visit the print papers 16 times, while for digital papers it’s just six. At that time they look at 36 pages in print and just 3.5 in digital. Over a month, print continues to deliver over 50 times the audience intensity of newspaper digital websites.

One of the best ways to solve the engagement equation is to gain a better knowledge of audiences. In this regard, two English papers lead the pack: The Daily Mail and the Financial Times. The first is a behemoth: 119 million unique visitors per month (including 42m in the UK) and the proof that a profusion of vulgarity remains a weapon of choice on the web. Aside from sleaziness, the Mail Online is a fantastic data collection machine. At the WAN conference, its CEO Kevin Beatty stated that DMG, the Mail’s parent company, reaches 36% of the UK population and that, over a 10-day period, the company collects “50 billion things about 43 million people”. The accumulation of data is indeed critical, but all the people I spoke with — I was there to moderate a panel about aggregation and data collection — are quick to denounce an advertising market terribly slow to reflect the value of segmentation. While many media outlets spend a great deal of resources to build data analytics, media buying agencies remain obsessed with volume. For many professionals, the ad market had better quickly understand what’s at stake here; the current status quo might actually backfire as it will favor more direct relationships between media outlets and advertisers. As an example, I asked Casper de Bono, the B2B manager for FT.com, how his company managed to extract value from the trove of user data harvested through its paywall. De Bono used the example of an airline that asked FT.com to extract the people who had logged on to the site from at least four different places served by the airline in the last 90 days. The idea was to target these individuals with specific advertising — anyone can imagine the value of such customers… This is but one example of FT.com’s ultra-precise audience segmentation.
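
The kind of query de Bono describes is straightforward to express once login events carry a location. Below is a purely illustrative pandas sketch; the column names, cities and data are hypothetical, not FT.com’s actual schema.

```python
import pandas as pd

# Illustrative sketch of the FT.com-style segmentation described above:
# subscribers who logged in from at least four different places served by the
# airline over the last 90 days. Column names and data are hypothetical.
logins = pd.DataFrame({
    "user_id": [1, 1, 1, 1, 2, 2, 3],
    "city":    ["London", "Paris", "Dubai", "Singapore", "London", "London", "Paris"],
    "date":    pd.to_datetime(["2013-03-01", "2013-03-20", "2013-04-02", "2013-04-28",
                               "2013-04-01", "2013-04-15", "2013-04-10"]),
})
airline_cities = {"London", "Paris", "Dubai", "Singapore", "Hong Kong"}
cutoff = pd.Timestamp("2013-05-01") - pd.Timedelta(days=90)

recent = logins[(logins["date"] >= cutoff) & (logins["city"].isin(airline_cities))]
cities_per_user = recent.groupby("user_id")["city"].nunique()
target_segment = cities_per_user[cities_per_user >= 4].index.tolist()
print(target_segment)   # -> [1]
```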

Paywalls were also on everyone’s lips in Bangkok. “The issue is settled”, said Juan Señor, a partner at Innovation Media Consulting, “This is not the panacea but we now know that people are willing to pay for quality and depth”. Altogether, he believes that 3% to 5% of a media site’s unique visitors could become digital subscribers. And he underlined a terrible symmetry in the revenue structure of two UK papers: While the Guardian — which resists the idea of paid-for digital readers — is losing £1m per week, The Telegraph makes roughly the same amount (£50m a year, $76m or €59m) in extra revenues thanks to its digital subscriptions… No one believes paywalls will be the one and only savior of online newspapers but, at the very least, paywalls seem to prove quality journalism is back in terms of value for the reader.

frederic.filloux@mondaynote.com

Tech as a boost for development

 

Moore’s Law also applies to global development. From futuristic wireless networks for rural Africa to tracking water well drillings, digital technology is a powerful boost for development as evidenced by a growing number of initiatives.  

Last week, The Wall Street Journal unveiled a Google project designed to provide wireless networks in developing countries, more specifically in sub-Saharan Africa and Southeast Asia. According to the Journal, the initiative involves using the airwaves spectrum allocated for television signals or teaming up with cellular carriers already working there. In typical Google “outside-of-the-box” thinking, the project might also rely on high-altitude blimps to cover infrastructure-deprived areas. Coupled with low-cost handsets using the Android operating system, or the brand new Firefox OS for mobile, this would boost the spread of cellular phones in poor countries.

Previously unavailable, mobile access will be a game changer for billions of people. At the last Mobile World Congress in Barcelona, I chatted with an Alcatel-Lucent executive who explained the experiments she witnessed in Kenya, such as providing the equivalent of index cards to nurses to upgrade their knowledge of specific treatments; the use of mobile phones translated into unprecedented reach, even in remote areas where basic handsets are shared among many people. Similarly, tests for access to reading material were conducted by UNESCO, the United Nations branch for education and culture. Short stories, some loaded with interactive features, were sent to phones and, amazingly, kids flocked to read, share and participate. All of this was carried out on “dumb” phones, sometimes with only monochrome displays. Imagine what could be done with smartphones.

Moore’s Law will keep helping. Currently, high-end smartphones are out of reach for emerging markets where users rely on prepaid cards instead of subscriptions. But instead of the $400-$600 handsets (without a 2-year contract) currently sold in Western markets, Chinese manufacturers are aiming at a price of $50 for a durable handset, using a slower processor but sporting all the expected features: large screen, good camera, GPS module, accelerometers, and tools for collective use. On such a foundation, dedicated applications can be developed — primarily for education and health.

As an example, the MIT Media Lab has created a system for prescribing eyeglasses that requires only a one-dollar eyepiece attached to a smartphone; compared to professional equipment costing thousands of times more, it delivers a very decent diagnosis. (This is part of the MIT Global Challenge Initiative).

This, coupled with liquid-filled adjustable glasses such as this one presented at TED a couple of years ago, will help solve vision problems in poor countries for a couple of dollars per person. Other systems aimed at detecting vision-related illnesses such as cataracts or glaucoma are in development. So are blood-testing technologies based on bio-chips tied to a mobile app for data collection.

Last week, I attended Google’s Zeitgeist conference in the UK — two days of enthralling TED-like talks (all videos here). Among many impressive speakers, two got my attention. The first was Sugata Mitra, a professor of educational technology at Newcastle University. In his talk — filled with a mixture of Indian and British humor — he described self-organizing learning experiments in rural India, built around basic internet-connected computers. The results are compelling for language learning and basic understanding of science or geography.

The other speaker was the complete opposite. Scott Harrison has an interesting trajectory: he is a former New York nightclub promoter who drastically changed his life seven years ago by launching the organization Charity:Water. Harrison’s completely fresh approach helped him redefine how a modern charitable organization should work. He built his organization around three main ideas. First, 100% of donations should reach a project. To achieve this, he created two separate funding circuits: a public one for projects and another to support operational costs.

Principle number two: build a brand, with all the attributes that go with it — a strong visual identity and a well-designed web site (most of those operated by NGOs are terrible). Charity:Water’s site is rich and attractive, and it looks more like an Obama campaign fundraising machine than an NGO’s. (I actually tested Charity:Water’s very efficient donation system by giving $100, curious to see where the money would land.)

The third and probably the most innovative idea was to rely on simple, proven digital technologies to guarantee complete project traceability. Donors can find precisely where their money ends up — whether it is for a $60 sand-filter fountain or a $2,000 well. Lastly, Charity:Water funded a drilling truck equipped with a GPS tracker that makes it visible on Google Maps; in addition, the truck tweets its location in real time. Thanks to $5 million in Google funding, the organization currently works with seven high-tech US companies to develop robust water sensors able to show in real time how much water is running on a given project. About 1,000 of these are to be installed before year-end. This will help detect possible malfunctions and it also carries promotional (read: fundraising) capabilities: thanks to a mobile app, a kid who helped raise a few hundred bucks among friends can see where his or her water is actually flowing.

As I write this, I see the comments coming, denouncing the gadgetization of charity, the waste of money on technologies not directly benefiting the neediest, Google’s obscure and mercantile motives, or the future payback for cellular carriers from the mobile initiatives mentioned earlier. Sure, objections must be heard. But, at this time, everyone who has traveled in poor areas — as I did in India or in sub-Saharan countries such as Senegal, Mauritania and Burkina Faso — comes back with the strong conviction that all means must be used to provide these populations with the basic things we take for granted in the Western world. As for Charity:Water, the results speak for themselves: over six years, the organization has raised almost $100m and it has provided drinkable water to 3m people (out of the 800m people in the world who don’t have access to it — still lots of work left.) As in many areas, the benefits of new, disruptive models based on modern technologies far outweigh the disadvantages.

frederic.filloux@mondaynote.com

Why Google Will Crush Nielsen

 

Internet measurement techniques need a complete overhaul. New ways have emerged, potentially displacing older panel-based technologies. This will make it hard for incumbent players to stay in the game.

The web user is the most watched consumer ever. For tracking purposes, every large site drops literally dozens of cookies in the visitor’s browser. In the most comprehensive investigation on the matter, The Wall Street Journal found that the 50 largest web sites in the United States, accounting for 40% of US page views, each installed an average of 64 files on a user’s device. (See the WSJ’s What They Know series and a Monday Note about tracking issues.) As for server logs, they record every page sent to the user and they tell with great accuracy which parts of a page collect most of the reader’s attention.

But when it comes to measuring a digital viewer’s commercial value, sites rely on old-fashioned panels, that is, limited samples of the user population. Why?

Panels are inherited. They go back to the old days of broadcast radio when, in order to better sell advertising, dominant networks wanted to know which stations listeners tuned in to during the day. In the late thirties, the Nielsen Company made a clever decision: it installed a monitoring box in 1,000 American homes. Twenty years later, Nielsen did the same, on a much larger scale, with broadcast television. The advertising world was happy to be fed with plenty of data — mostly unchallenged as Nielsen dominated the field. (For a detailed history, you can read Rating the Audience, written by two Australian media academics). As Nielsen expanded to other media (music, film, books and all sorts of polls), moving to internet measurement sounded like a logical step. As of today, Nielsen only faces smaller competitors such as ComScore.

I have yet to meet a publisher who is happy with this situation. Fearing retribution, very few people talk openly about it (twisting the dials is so easy, you know…), but they all complain about inaccurate, unreliable data. In addition, the panel system is vulnerable to cheating on a massive scale. Smarty-pants outfits sell a vast array of measurement boosters, from fake users that will come in just once a month to be counted as “unique” (they are indeed), to more sophisticated tactics such as undetectable “pop under” sites that rely on encrypted URLs to deceive the vigilance of panel operators. In France, for instance, 20% to 30% of some audiences can be bogus — or largely inflated. To its credit, Mediametrie — the French Nielsen affiliate that produces the most watched measurements — is expending vast resources to counter the cheating and to make the whole model more reliable. It works, but progress is slow. In August 2012, Mediametrie Net Ratings (MNR) launched a Hybrid Measure taking into account site-centric analytics (server logs) to rectify panel numbers, but those corrections are still erratic. And it takes more than a month to get the data, which is not acceptable for the real-time-obsessed internet.

Publishers monitor the pulse of their digital properties on a permanent basis. In most newsrooms, Chartbeat (also imperfect, sometimes) displays the performance of every piece of content, and home pages get adjusted accordingly. More broadly, site-centric measures detail all possible metrics: page views, time spent, hourly peaks, engagement levels. This is based on server logs tracking dedicated tags inserted in each served page. But the site-centric measure is also flawed: If you use, say, four different devices — a smartphone, a PC at home, another at work, and a tablet — you will be incorrectly counted as four different users. And if you use several browsers you could be counted even more times. This inherent site-centric flaw is the best argument for panel vendors.

But, in the era of Big Data and user profiling, panels no longer have the upper hand.

The developing field of statistical pairing technology shows great promise. It is now possible to pinpoint a single user browsing the web with different devices in a very reliable manner. Say you use the four devices mentioned earlier: a tablet in the morning and the evening; a smartphone for occasional updates on the move, and two PCs (a desktop at the office and a laptop elsewhere). Now, each time you visit a new site, an audience analytics company drops a cookie that will record every move on every site, from each of your devices. Chances are your browsing patterns will be stable (basically your favorite media diet, plus or minus some services that are better fitted to a mobile device.) Not only is your browsing profile determined from your navigation on a given site, but it is also quite easy to know which sites you have been to before the one that is currently monitored, adding further precision to the measurement.

Over time, your digital fingerprint will become more and more precise. At first, the four cookies are independent of one another. But the analytics firm compiles all the patterns in a single place. By data-mining them, analysts can determine the probability that a cookie dropped in a mobile application, a desktop browser or a mobile web site belongs to the same individual. That’s how multiple pairing works. (To get more details on the technical and mathematical side of it, you can read this paper by the founder of Drawbridge Inc.) I recently discussed these techniques with several engineers both in France and in the United States. All were quite confident that such fingerprinting is doable and that it could be the best way to accurately measure internet usage across different platforms.
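
Here is a minimal sketch of the intuition, assuming each cookie is reduced to a vector of visit counts over the same list of sites and that similar browsing diets indicate the same person. Real pairing systems (Drawbridge’s included) use far richer features and probabilistic models; the site names and numbers below are made up.

```python
import numpy as np

# Minimal sketch of the pairing idea: represent each device-bound cookie as a
# vector of visit counts over the same sites, then pair the cookies whose
# browsing patterns are most similar. Purely illustrative data.
sites = ["news-site", "finance-site", "sports-site", "video-site"]
cookies = {
    "office_pc":  np.array([30, 25,  2,  1]),
    "laptop":     np.array([28, 22,  3,  2]),   # similar media diet -> likely same person
    "other_user": np.array([ 1,  0, 40, 35]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

pairs = [(u, v, cosine(cookies[u], cookies[v]))
         for i, u in enumerate(cookies) for v in list(cookies)[i + 1:]]
for u, v, score in sorted(pairs, key=lambda p: -p[2]):
    print(f"{u} <-> {v}: similarity {score:.2f}")
```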

Obviously, Google is best positioned to perform this task on a large scale. First, its Google Analytics tool is deployed on over 100 million web sites. And the Google Ad Planner, even in its public version, already offers a precise view of the performance of many sites in the world. In addition, as one of the engineers pointed out, Google is already performing such pairing simply to avoid showing the same ad twice to someone using several devices. Google is also most likely doing such ranking in order to feed the obscure “quality index” algorithmically assigned to each site. It even does such pairing on a nominative basis by using its half billion Gmail accounts (425 million in June 2012) and connecting its Chrome users. As for giving up another piece of internet knowledge to Google, it doesn’t sound like a big deal to me. The search giant already knows much more about sites than most publishers do about their own properties. The only thing that could prevent Google from entering the market of public web rankings would be the prospect of another privacy outcry. But I don’t see why it wouldn’t jump on it — eventually. When this happens, Nielsen will be in big trouble.

frederic.filloux@mondaynote.com

Two strategies: The Washington Post vs. The NYT

 

Both are great American newspapers, both suffer from the advertising slump and from the transition to digital. But the New York Times’ paywall strategy is making a huge difference. 

The Washington Post’s financials provide a good glimpse of the current status of legacy media struggling with the shift to digital. Unlike at other large dailies, the components of the Post’s P&L appear clearly in its statements; they are not buried under layers of other activities. Product-wise, the Post remains a great news machine, collecting Pulitzer Prizes with clockwork regularity and fighting hard for scoops. The Post also epitomizes an old media under siege from specialized, more agile outlets such as Politico, ones that break down the once-unified coverage provided by traditional large media houses. In an interview with the New York Times last year, Robert G. Kaiser, a former editor who had been with the paper since 1963, said this:

“When I was managing editor of The Washington Post, everything we did was better than anyone in the business,” he said. “We had the best weather, the best comics, the best news report, the fullest news report. Today, there’s a competitor who does every element of what we do, and many of them do it better. We’ve lost our edge in some very profound and fundamental ways.”

The iconic newspaper has been slow to adapt to the digital era. Its transformation really started around 2008. Since then, it has checked all the required boxes: integration of print and digital production; editors are now involved on both sides of news production and all relentlessly push the newsroom to write more for the digital version; many blogs covering a wide array of topics have been launched; and the Post now has a good mobile application. The “quant” culture has also set in, with editors now taking into account all the usual metrics and ratios associated with digital operations, including a live update of Google’s most relevant keywords prominently displayed in the newsroom. All this helped the Post collect 25.6 million unique visitors per month, vs. 4 to 5 million for Politico, and 35 million for the New York Times, which historically enjoys a more global audience.

Overall, the Washington Post Company still relies heavily on its education business, as shown in the table below:

 Revenue:.......$4.0bn (-3% vs. 2011)
 Education:.....$2.2bn (-9%)
 Cable TV:......$0.8bn (+4%)
 Newspaper:.....$0.6bn (-7%)
 Broadcast TV:..$0.4bn (+25%)

But the education business is no longer the cash cow it used to be. Not only did its revenue decrease but, last year, it lost $105m vs. a $96m profit in 2011. As for the newspaper operation, it widened its losses to $53m in 2012 from $21m in 2011. And the trend is worsening: for the first quarter of 2013, the newspaper division’s revenue decreased by 4% vs. a year ago and it lost $34m vs. $21m for Q1 2011.

Now, let’s move to a longer-term perspective. The chart below sums up the Post’s (and other legacy media’s) problem:

Translated into a table:

                  Q1-2007   Q1-2013  Change %
 Revenue (All):....$219m.....$127m.....-42%
 Print Ad:.........$125m.....$49m......-61%
 Digital Ad:.......$25m......$26m......+4%
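
For the record, the “Change %” column is simple percentage arithmetic on the two quarters shown above:

```python
# How the "Change %" column is derived (figures in $m, from the table above).
q1_2007 = {"Revenue (All)": 219, "Print Ad": 125, "Digital Ad": 25}
q1_2013 = {"Revenue (All)": 127, "Print Ad": 49,  "Digital Ad": 26}

for line in q1_2007:
    change = (q1_2013[line] - q1_2007[line]) / q1_2007[line] * 100
    print(f"{line}: {change:+.0f}%")   # -42%, -61%, +4%
```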

A huge depletion in print advertising and a flat line (at best) for digital advertising: these two elements sum up the equation faced by traditional newspapers going from print to online.

Now, let’s look at the circulation side using a comparison with the New York Times. (Note that it’s not possible to extract the same figures for advertising from the NYT Co.’s financial statements because they aggregate too many items.) The chart below shows the evolution of the paid circulation for the Post between 2007 and 2013:

…and for the NY Times:

Call it the paywall effect: The New York Times now aggregates both print and digital circulation. The latter now amounts to 676,000 digital subscribers who have been recruited through the NYT’s metered system (see previous Monday Notes under the “paywall” tag). (Altogether, digital subscribers to the NYT, the International Herald Tribune and the Boston Globe now number 708,000.) It seems the NYT has found the right formula: its digital subscriber base grows at a 45% yearly rate, thanks to a combination of sophisticated marketing, mining of customer data and aggressive pricing (it even pushes special deals for Mother’s Day.) All this adds to the bottom line: if each digital sub brings in $12 a month, the result is about $100m that didn’t exist two years ago. But this does not benefit the advertising side, which continues to suffer. For the first quarter of 2013 vs. the same period last year, the NYT Company lost 13% in print ad revenue and 4% in digital ads. (As usual in their earnings calls, NYT officials mention the deflationary effects of ad exchanges as one cause of the erosion in digital ads.)
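
The back-of-the-envelope behind that $100m figure, using the $12-a-month assumption stated above:

```python
# Rough check of the "$100m that didn't exist two years ago" figure:
# 676,000 NYT digital subscribers at the article's assumed $12 per month.
subscribers = 676_000
monthly_revenue_per_sub = 12      # dollars, the article's working assumption

annual_revenue = subscribers * monthly_revenue_per_sub * 12
print(f"${annual_revenue / 1e6:.0f}m per year")   # -> ~$97m
```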

One additional sign that digital advertising will remain in the doldrums: Politico, too, is exploring alternatives; it will be testing a paywall in a sample of six states and for its readers outside the United States. The system will be comparable to the NYT.com or the FT.com, with a fixed number of articles available for free (see Politico’s management internal memo.)

It is increasingly clear that readers are more willing than we once thought to pay for content they value and enjoy. With more than 300 media companies now charging for online content in the U.S., the notion of paying to read expensive-to-produce journalism is no longer that exotic for sophisticated consumers.

frederic.filloux@mondaynote.com

 

This Wristband Could Change Healthcare

 

Jawbone is launching its UP wristband in Europe. Beyond the quirky gadget lies a much larger project: changing healthcare — for better or for worse. 

Hyperkinetic as he is, Hosain Rahman, the Jawbone founder, must be saturating his Jawbone UP wristband with data. The rubberized band, nicely designed by Yves Behar, is filled with miniaturized electronics: accelerometers and sensors monitor your activity throughout the day, recording every motion in your life, from walking in the street to the micro-movements of your hand during a paradoxical (REM) sleep phase. For the fitness freak, the UP is a great stimulus to sweat even more; for the rest of us, it’s more of an activity and sleep monitoring device. (For a complete product review, see this article from Engadget, and also watch Hosain Rahman’s interview by Kevin Rose, it’s well worth your time.) Last week in Paris, after my meeting with Hosain, I headed straight to the nearest Apple Store to pick up my UP (for €129), with the goal of exploring my sleeping habits in greater depth.

After using the device for a couple of days, the app that comes with it tells me I’m stuck in a regime of 5 to 6 hours of bad sleep — including less than three hours of slow-wave sleep, commonly known as deep sleep. Interesting: two years ago, I spent 36 hours covered with electrodes and sensors in a hospital specializing in studying and (sometimes) treating insomnia — after 6 months on a waiting list to get the test. At one point, to monitor my sleep at home, doctors lent me a cumbersome wristband the size of a matchbox. The conclusion was unsurprising: I was suffering from severe insomnia, and there was very little they could do about it. The whole sleep exploration process must have cost the French public health care system €3,000, 20 times more than the Jawbone gadget (or the ones that do a similar job). I’m not contending that medical monitoring performed by professionals can be matched by a wristband loaded with sensors purchased in an electronics store. But, aside from the cost, there is another key difference: the corpus of medical observations is based on classic clinical tests of a small number of patients. On the other hand, Jawbone thinks of the UP wristband — to be worn 24/7 by millions of people — in a Big Data frame of mind. Hosain Rahman is, or will soon be, right when he says his UP endeavor contributes to the largest sleep study ever done.

Then it gets interesting. As fun as they can be, existing wearable monitoring devices are in the stone age compared to what they will become in three to five years. When I offered Hosain a list of features that could be embedded in future versions of the UP wristband — such as a GPS module (for precise location, including altitude), heartbeat, blood pressure, skin temperature and acidity sensors, a Bluetooth transmitter — he simply smiled and conceded that my suggestions were not completely off-track. (Before going that far, Jawbone must solve the battery-life issue and most likely design its own dedicated, super-low-consumption processor.) But Hosain also acknowledges his company is fueled by a much larger ambition than simply building a cool piece of hardware aimed at fitness enthusiasts or hypochondriacs.

His goal is nothing less than disrupting the healthcare system.

The VC firms backing Jawbone are on the same page. The funding calendar compiled by Crunchbase speaks for itself: out of the stunning $202m raised since 2007, most of it ($169m) has been raised since 2011, the year of the first iteration of the UP wristband (it was a failure due to major design flaws). All the big houses are on board: Khosla Ventures, Sequoia, Andreessen Horowitz, Kleiner Perkins, Deutsche Telekom… They all came with an identical scheme in mind: a massive deployment of the monitoring wristband, and a series of deals with the biggest healthcare companies in America to subsidize the device. All this could result in the largest health-related dataset ever built.

The next logical step would be the development of large statistical models based on customers’ recorded data. As far as privacy is concerned, no surprise: Jawbone is pretty straightforward and transparent; see its disclosure here. It collects everything: name, gender, size and weight, location (thanks to the IP address) and, of course, all the information gathered by the device or entered by the user, such as eating habits. A trove of information.

Big Data businesses focusing on health issues drool over what can be done with such a detailed dataset coming from, potentially, millions of people. Scores of predictive morbidity models can be built, from the most mundane — back pain correlated to sleep deprivation — to the most critical, involving heart conditions linked to various lifestyle factors. When asked about privacy issues, Hosain Rahman insists on Jawbone’s obsessive protection of its customers, but he also acknowledges his company can build detailed population profiles and characterize various risk factors with substantially greater granularity.

This means serious business for the health care and insurance sectors — and equally serious concerns for citizens. Imagine, just for a minute, the impact of such data on the pricing structure of your beloved insurance company. What about your credit rating if you fall into an at-risk category? Or simply your ability to get a job? Of course, the advent of predictive health models potentially benefits everyone. But, at this time, we don’t know if and how the benefits will outweigh the risks.

frederic.filloux@mondaynote.com

What’s the Fuss About Native Ads?

 

In the search for new advertising models, Native Ads are booming. The ensuing Web vs. Native controversy is a festival of fake naïveté and misplaced indignation. 

Native Advertising is the politically correct term for Advertorial, period. Or rather, it’s an upgrade, the digital version of an old practice dating back to the era of typewriters and lead printing presses. Everyone who’s been in the publishing business long enough has in mind the tug-of-war with the sales department, which always wants its ads to appear next to editorial content that will provide good “context”. This makes the whole “new” debate about Native Ads quite amusing. The magazine sector (more than newspapers) has always referred to “clean” and “tainted” sections. (The latter kept expanding over the years.) In consumer and lifestyle sections, editorial content produced by the newsroom is often tailored to fit surrounding ads (or to flatter a brand that will buy legit placements).

The digital era pushes the trend several steps further. Today, legacy media brands such as Forbes, Atlantic Media, or the Washington Post have joined the Native Ads bandwagon. Forbes even became the poster child for that business, thanks to the openly embraced approach carried out by its chief product officer Lewis DVorkin (see his insightful blog and also this panel at the recent Paid Content Live conference.) Advertising is not the only way DVorkin has revamped Forbes. Last week, Les Echos (the business daily that’s part of the media group I work for) ran an interesting piece about it titled “The Old Press in Startup Mode” (La vieille presse en mode start-up). It details the decisive — and successful — moves by the century-old media house: a downsized newsroom, external contributors (by the thousand, and mostly unpaid) who produce a huge stream of 400 to 500 pieces a day. “In some cases”, wrote Lucie Robequain, Les Echos’s New York correspondent, “the boundary between journalism and advertorial can be thin…” To which Lewis DVorkin retorts: “Frankly, do you think a newspaper that conveys corporate voices is more noble? At Forbes, at least, we are transparent: we know which company the contributor works for and we expose potential conflicts of interest in the first graph…” Maybe. But screening a thousand contributors sounds a bit challenging to me… And Forbes evidently exposed itself as part of the “sold” blogosphere. Les Echos’s piece also quotes Joshua Benton from Harvard’s Nieman Journalism Lab, who finds the bulk of Forbes’ production to be, on average, not as good as it used to be, but concedes the top 10% is actually better…

As for Native Advertising, two years ago, Forbes industrialized the concept by creating BrandVoice. Here is the official definition:

Forbes BrandVoice allows marketers to connect directly with the Forbes audience by enabling them to create content – and participate in the conversation – on the Forbes digital publishing platform. Each BrandVoice is written, edited and produced by the marketer.

In practice, Forbes lets marketers use the site’s Content Management System (CMS) to create their content at will. The commercial deal — from what we can learn — involves volumes and placements that cause the rate to vary between $50,000 and $100,000 per month. The package can also include traditional banners that send traffic back to the BrandVoice page.

At any given moment, there are about 16 brands running on Forbes’ “Voices”. This revenue stream has been a significant contributor to the publisher’s financial performance. According to AdWeek (emphasis mine):

The company achieved its best financial performance in five years in 2012, according to a memo released this morning by Forbes Media CEO Mike Perlis. Digital ad revenue, which increased 19 percent year over year, accounted for half of the company’s total ad revenue for the year, said Perlis. Ten percent of total revenue came from advertisers who incorporated BrandVoice into their buys, and by the end of this year, that share is estimated to rise to 25 percent.

Things seemed pretty positive across other areas of Forbes’ business as well. Newsstand sales and ad pages were up 2 percent and 4 percent, respectively, amid industry-wide drops in both areas. The relatively new tablet app recently broke 200,000 downloads.

A closer look gives a slightly bleaker picture: according to the latest data from the Magazine Publishers Association, between Q1 2012 and Q1 2013, Forbes Magazine (the print version only) lost 16% in ad revenue ($50m to $42m). By comparison, Fast Company scored +25%, Fortune +7%, but The Economist -27% and Bloomberg Businessweek -30%. Overall, the titles compiled by the MPA are stable (+0.5%).

I almost never click on banners (except to see if they work as expected on the sites and apps I’m in charge of). Most of the time their design sucks, terribly so, and the underlying content is usually below grade. However, if the subject appeals to me, I will click on Native Ads or brand content. I’ll read it like any other story, knowing full well it’s promotional material. The big difference between a crude ad and a content-based one is the storytelling dimension. Fact is: every company has great stories to tell about its products, strategy or vision. And I don’t see why they shouldn’t be told using the same storytelling tools news media use. As long as it’s done properly, with a label explaining the content’s origin, I don’t see the problem (for more on this question, read a previous Monday Note: The Insidious Power of Brand Content.) In my view, Forbes does blur the line a bit too much, but Atlantic’s business site Quartz is doing fine in that regard. With the required precautions, I’m certain Native Ads, or branded content, are a potent way to go, especially when considering the alarming state of other forms of digital ads. Click-through rates are much better (2%-5% vs. a fraction of a percent for a dumb banner) and the connection to social media works reasonably well.

For news media companies obsessed with their journalistic integrity (some still are…), the development of such new formats makes things more complicated when it comes to deciding what’s acceptable and what’s not. Ultimately, the editor should call the shots. Which brings us to the governance of media companies. For digital media, the pervasive advertising pressure is likely to keep growing. Today, most rely on a Chief Revenue Officer to decide what’s best for the bottom line, such as balancing circulation and advertising, or arbitrating between a large audience with low yield and a smaller audience with higher yield. But, in the end, only the editor must be held accountable for the content’s quality and credibility — which contribute to the commercial worthiness of the media. Especially in the digital field, editors should be shielded from business pressure. Editors should be selected by CEOs and appointed by boards or, better, boards of trustees. Independence will become increasingly scarce.

frederic.filloux@mondaynote.com

A lesson in Public e-Policy

 

The small Baltic republic of Estonia is run like a corporation. But its president believes government must play a crucial role in areas of digital policy such as secure ID. 

Toomas Hendrik Ilves must feel one-of-a-kind when he attends international summits. His personal trajectory has nothing in common with the backgrounds of other heads of state. Born in Stockholm in 1953, where his parents had taken refuge from Soviet-controlled Estonia, Ilves was raised mostly in the United States. There, he got a bachelor’s degree in psychology from Columbia University and a master’s degree in the same subject from the University of Pennsylvania. In 1991, when Estonia became independent, Ilves was in Munich, working as a journalist for Radio Free Europe (he is also fluent in English, German and Latin.) Two years later, he was appointed ambassador to — where else? — the United States. In 2006, a centrist coalition elected him president of the Republic of Estonia (1.4m inhabitants).

One more thing about Toomas Hendrik Ilves: he programmed his first computer at the age of 13. A skill that would prove decisive for his country’s fate.

Last week in Paris, president Ilves was the keynote speaker at a conference organized by Jouve Group, a 3,000-employee French company specializing in digital distribution. The bow-tied Estonian captivated the audience with his straight talk, the polar opposite of the classic politician’s. Here are excerpts from my notes:

“At the [post-independence] time, the country, plagued by corruption, was rather technologically backward. To give an example, the phone system in the capital [Tallinn] dated back to 1938. One of our first key decisions was to go for the latest digital technologies instead of being encumbered by analog ones. For instance, Finland offered to provide Estonia with much more modern telecommunication switching systems, but still based on analog technology. We declined, and elected instead to buy the latest digital network equipment”.  

Estonia’s ability to build a completely new infrastructure without being dragged down by technologies from the past (and by the old-guard defending it) was essential to the nation’s development. When I later asked him about the main resistance factors he had encountered, he mentioned legacy technologies: “You in France, almost invented the internet with the Minitel. Unfortunately, you were still pushing the Minitel when Mosaic [the first web browser] was invented”. (The videotext-based system was officially retired at last in… 2012. France lost almost a decade by delaying its embrace of Internet Protocols.)

The other key decision was introducing computers in schools and teaching programming on a large scale. Combined with the hunger for openness in a tiny country emerging from 45 years of Soviet domination, this explains why Estonia has become an energetic tech incubator, nurturing big names like Kazaa or Skype (Skype still maintains its R&D center in Tallinn.)

“Every municipality in Estonia wanted to be connected to the Internet, even when officials didn’t know what it was. (…) And we played with envy…. With neighbors such as Finland or Sweden, the countries of Nokia and Ericsson, we wanted to be like them.”  

To further encourage the transition to digital, cities opened Internet centers to give access to people who couldn’t afford computers. If, in Western Europe, the Internet was seen as a prime vector of American imperialism, up in the newly freed Baltic states, it was seen as an instrument of empowerment and access to the world:

“We wanted to take the leap forward and build a modern country from the outset. The first public service we chose to go digital was the tax system. As a result, not only did we eliminate corruption in the tax collection system — a computer is difficult to bribe — but we increased the amount of money the state collected. We put some incentives in: when filing digitally, you’d get your tax refund within two weeks versus several months with paper. Today, more than 95% of tax returns are filed electronically. And the fact that we got more money overcame most of the resistance in the administration and paved the way for future developments”. 

“At some point we decided to give every citizen a chip-card… In other words, a digital ID card. When I first mentioned this to some Anglo-Saxon government officials, they raised the classic “Big Brother” argument. Our belief was that if we really wanted to build a digital nation, the government had to be the guarantor of digital authentication by providing everyone with a secure ID. It’s the government’s responsibility to ensure that someone who connects to an online service is the right person. It is all built on the public-key/private-key encryption system. In Estonia, a digital ID is a legal signature. The issue of secure ID is essential, otherwise we’ll end up stealing from ourselves. Big Brother is not the State, Big Brother lies in Big Data.”
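
For readers who want to see what “a digital ID is a legal signature” means mechanically, here is a minimal sketch of the public-key/private-key principle using Python’s cryptography package. It only illustrates signing and verification in general; the Estonian system keeps the private key on the ID card’s chip and relies on a national certificate infrastructure, none of which is shown here, and the document and personal code below are made up.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# Minimal illustration of the public-key / private-key principle behind a
# legal digital signature. Not the Estonian ID-card stack: there, the private
# key lives on the card's chip and a national CA vouches for the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

document = b"Tax return 2013, citizen 38001010000"   # made-up document and code
signature = private_key.sign(
    document,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

try:
    public_key.verify(
        signature,
        document,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")
except InvalidSignature:
    print("signature rejected")
```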

“In Estonia, every citizen owns his or her data and has full access to it. We currently have about 350 major services securely accessible online. A patient never gets a paper prescription; the doctor loads the prescription onto the card and the patient can go to any pharmacy. The system will soon be extended to Sweden, Denmark, Finland and Norway, as our citizens travel a lot. In addition, everyone can access their medical records. But they can choose which doctors get to see them. I was actually quite surprised when a head of state from Southern Europe told me some paper medical records bear the mention “not to be shown to the patient” [I suspect it was France...]. As for privacy protection, the ID chip-card works both ways. If a policeman wants to check on your boyfriend outside the boundaries of a legal investigation, the system will flag it — it actually happened.” 

As the Estonian president explained, some good decisions also come out of pure serendipity:

“[In the Nineties], Estonia had the will but not all the financial resources to build all the infrastructure it wanted, such as massive centralized data centers. Instead, the choice was to interconnect in the most secure way all the existing government databases. The result has been a highly decentralized network of government servers that prevent most abuses. Again, the citizen can access his health records, his tax records, the DMV [Department of Motor Vehicles], but none of the respective employees can connect to another database”.
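
Here is a hypothetical sketch of the two rules those quotes describe: an official can only query the database tied to his or her role, and every lookup lands in an audit trail so out-of-scope accesses (the policeman example above) get flagged. Role names, database names and the personal code are invented.

```python
from datetime import datetime
from typing import Optional

# Hypothetical sketch: an official can only query the database tied to his or
# her role, and every lookup is written to an audit trail so out-of-scope
# accesses get flagged. All names and identifiers are invented.
AUTHORIZED = {
    "health_db": {"doctor"},
    "tax_db": {"tax_officer"},
    "dmv_db": {"dmv_clerk", "police_officer"},
}
audit_log = []

def access_record(role: str, database: str, citizen_id: str, case_id: Optional[str]):
    allowed = role in AUTHORIZED.get(database, set()) and case_id is not None
    audit_log.append({
        "time": datetime.utcnow().isoformat(),
        "role": role, "db": database, "citizen": citizen_id,
        "case": case_id, "flagged": not allowed,
    })
    if not allowed:
        raise PermissionError(f"{role} access to {database} flagged for review")
    return f"record for {citizen_id} from {database}"

print(access_record("doctor", "health_db", "38001010000", case_id="consult-42"))
try:
    access_record("police_officer", "health_db", "38001010000", case_id=None)
except PermissionError as err:
    print(err)   # the lookup is refused and stays in the audit log, flagged
```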

The former Soviet Union made the small Baltic state pay a hard price for its freedom. In that respect, I recommend reading Cyber War by Richard Clarke, a former cyber-security advisor in the Clinton administration, who describes the multiple cyber-attacks suffered by Estonia in 2007. These actually helped the country develop skillful specialists in that field. Since 2008, Tallinn has harbored NATO’s main cyber defense center, in addition to an EU large-scale IT systems center.

Toomas Hendrik Ilves stressed the importance of cyber-defense, both at the public and private sector level:

“Vulnerability to cyber attacks must be seen as a complete market failure. It is completely unacceptable for a credit card company to deduct theft from its revenue base, or for a water supply company to invoke cyber attack as a force majeure. It is their responsibility to protect their systems and their customers. (…) Every company should be aware of this, otherwise we’ll see all our intellectual property ending up in China”. 

frederic.filloux@mondaynote.com

Schibsted’s High Octane Diversification

 

The Norwegian media group Schibsted now aggressively invests in startups. The goal: digital dominance, one market at a time. France is next in line. Here is a look at their strategy. 

This thought haunts most media executives’ sleepless nights: “My legacy business is taking a hit from the internet; my digital conversion is basically on track, but it goes with massive value destruction. We need both a growth engine and consolidation. How do we achieve this? What are our core assets to build upon? Should we undertake a major diversification that could benefit from our brand and know-how?” (At that moment, the buzzer goes off, it’s time to go to work.) Actually, such nighttime cogitations are a good sign, they are the privilege of people gifted with a long-term view.

The Scandinavian media powerhouse Schibsted ASA falls into the long-termist category. Key FY 2012 data follow. Revenue: 15bn Norwegian kroner (€2bn or $2.6bn); EBIT margin: 13.5%. The group currently employs 7,800 people spread over 29 countries. 40% of the revenue and 69% of the EBITDA come from online activities. Online classifieds account for 25% of revenue and 52% of the EBITDA; the rest comes from publishing. (The usual disclosure: I worked for Schibsted between 2007 and 2009, in the international division).

The company went through the delicate transition to digital about five years ahead of other media conglomerates in the Western world. To be fair, Schibsted enjoyed unique conditions: profitable print assets, huge penetration in small Nordic markets immune to foreign players, a solid grasp of all components of the business, from copy sales and subscriptions for newspapers and magazines to advertising and distribution channels. In addition, the group enjoys a stable ownership structure (controlled by a trust), and its board always encourages the management to aim high and take risks. The company is led by a lean team: only 60 people at the Oslo headquarters, largely staffed by McKinsey alumni, oversee the entire operation.

The transition began in 1995 when Schibsted came to realize the media sector’s center of gravity would inevitably shift to digital. The move could be progressive for reading habits, but it would definitely be swift and hard for critical revenue streams such as classifieds and consumer services. Hence the unofficial motto that still remains at the core of Schibsted’s strategy: accelerating the inevitable (before the inevitable falls on us). Such a view led to speeding up the demise of print classifieds, for instance, in order to free oxygen for emerging digital products. Not exactly popular at the time but, thanks to methodical pedagogy, the transition went well.

One after the other, business units moved to digital. Then, the dot-com crash hit. In Norway and Sweden, Schibsted media properties were largely deployed online, with large dedicated newsrooms and emerging consumer services built from scratch or from acquisitions. Management wondered what to do: should we opt for a quick and massive downsizing to offset a brutal 50% drop in advertising revenue? Schibsted took the opposite tack: yes, business is terrible, but this is mostly the result of the financial crisis; the audience is still here, and not only will it not go away but, eventually, it will experience huge growth. This was the basis for two key decisions: pursuing investments in digital journalism while finding ways to monetize it, and doing whatever it took in order to dominate the classifieds business.

In Sweden, a bright spot kept blinking on Schibsted’s radar. Blocket was growing like crazy. It was a bare-bones classifieds website, offering a mixture of free and premium ads in the simplest and most efficient way. At first, Schibsted Sweden tried to replicate Blocket’s model with the goal of killing it. After all, the group thought, it had all the media firepower needed to lift any brand… Wrong. After a while, it turned out Schibsted’s copycat still lagged behind the original. With the kind of pragmatism allowed by deep pockets, Schibsted decided to acquire Blocket (for a hefty price). The clever classifieds website would become the template for the group’s foray into global classifieds.

By 2006, Schibsted had acquired and developed a cluster of consumer-oriented websites, from Yellow-Pages-like directories to price-comparison sites and consumer-data services. Until then, the whole assemblage had been built on pure opportunism. It was time to put things in order. Hence, in 2007, the creation of Tillväxmedier, the first iteration of Schibsted Development. (The Norwegian version was launched in 2010 and the French one starts this year).

Last week in Paris, I met Richard Sandenskog, Tillväxmedier’s investment manager, and Marc Brandsma, the newly appointed CEO of Schibsted Development France. Sandenskog is a former journalist who also spent eight years in London as a product manager for Yahoo! Brandsma is a seasoned French entrepreneur and former venture capitalist. Despite local particularisms precluding a dumb replication of Nordic successes, two basic principles remain:

1. Invest in the number one in a niche market, or a potential number one in a larger one. “In the online business, there is no room for number two”, said Richard Sandenskog. “We want to leverage our dominance on a given market to build brands and drive traffic. The goal is to find the best way to expose the new brand in different channels and integrate it in various properties. The keyword is relevant traffic. We don’t care for page views for their sake, but for the value they bring. We see clicks as a currency.”

2. Pick the right product in the right sector. In Sweden, the Schibsted Development portfolio revolves around the idea of empowering the consumer. To sum up: people are increasingly lost in a jungle of pricing, plans, offers and deals for the services they need. It could be cell phones, energy bills, consumer loans… Hence a pattern for acquisitions: a bulk-purchase web site for electricity (the Swedish market is largely deregulated, with about 100 utility companies); a helper to find the best cellular carrier plan based on individual usage; a personal finance site that lets consumers shop around for the best loan without degrading their credit rating; a personal factoring service where anyone can auction off invoices, etc.
Most are now #1 in their segment. “We give the power back to the consumer,” sums up Richard Sandenskog. “We are like Mother Teresa, but we make money doing it…” Altogether, Tillväxmedier’s portfolio encompasses about 20 companies that made a billion Swedish kronor (€120m, $155m) in 2012 with a 12% EBITDA margin (several companies are still in the growth phase.) All in five years…

France will be a different story. It’s five times bigger than Sweden, a market in which startups can be expensive. But what triggered Schibsted ASA’s decision to create a growth vehicle here is the spectacular performance of the classifieds site LeBoncoin.fr (see a previous Monday Note, Schibsted’s extraordinary click machines): €98m in revenue and a cool 68% EBITDA margin last year. LeBoncoin draws 17m unique visitors (according to Nielsen). Based on this valuable asset, explains Marc Brandsma, the goal is to create the #1 online group in France (besides Facebook and Google). “The typical players we are looking for are B2C companies that already have a proven product — we won’t invest in PowerPoint presentations — driven by a management team aiming to be the leader in their market. Then we acquire it; we buy out all minority shareholders if necessary”. No kolkhoz here; decisions must be made quickly, without interference. “At that point,” adds Brandsma, “we tell managers we’ll take care of growth by providing traffic, brand notoriety, marketing, all based on best practices and proven Schibsted expertise”. Two sectors Marc Brandsma says he won’t touch, though: business-to-business services and news media (ouch…)

frederic.filloux@mondaynote.com