
News: Mobile Trends to Keep In Mind

 

For publishers, developing an all-out mobile strategy has become both more necessary and more challenging. Today, we look at key data points and trends for such a task. 

#1 The Global Picture
— 1.7bn mobile phones (feature phones and smartphones) were sold in 2012 alone
— 3.2bn people use a mobile phone worldwide
— Smartphones gain quickly as phones are replaced every 18 to 24 months
— PCs are completely left in the dust as shown in this slide from Benedict Evans’ excellent Mobile is Eating the World presentation:

[Slide: Benedict Evans, Mobile is Eating the World]

The yellow line has three main components:
— 1 billion Android smartphones are said to be in operation worldwide (source: Google)
— 700 million iOS devices have been sold over time, with 500 million still in use, which corresponds to the number of iTunes accounts (source: Asymco, one of the best references for the mobile market.)
— 450 million Symbian-based feature phones are in operation (Asymco.)

#2 The Social Picture 

Mobile phone usage for news consumption is increasingly tied to social networks. Here are some key numbers:
— Facebook: about 1.19bn users; we don't know exactly how many are active
— Twitter: 232 million users
— LinkedIn: 259 million users

When it comes to news consumption in a social environment, these three channels contribute differently. This chart, drawn from a Pew Research report, shows the penetration of each social network and the proportion of the US population that gets news from it.

[Chart: Pew Research, social network penetration and share of US adults getting news on each]

One of the most notable data points in the Pew Report is the concentration of sources for social news:
— 65% say they get their news from a single social site
— 26% from two sites
— 9% from three or more (such as Google+ or LinkedIn)

But, at the same time, these sources are completely intertwined. Again, based on the Pew survey, Twitter appears to be the best distributor of news.

Among those who get their news from Twitter:
— 71% also get their news on Facebook
— 27% on YouTube
— 14% on Google+
— 7% on LinkedIn

Put another way, Facebook collects more than half of the adult population’s news consumption on social networks.

But a closer look at demographics slightly alters the picture, because not all social networks are equal when it comes to education and income segmentation:

If you want to reach the Bachelor+ segment, you will get:
— 64% of them on LinkedIn
— 40% on Twitter
but…
— only 30% on Facebook
— 26% on G+
— 23% on YouTube

And if you target the highest income segment (more than $75K per year), you will again favor LinkedIn, which collects 63% of news consumers in this slice, more than Facebook (41%).

Coming back to the mobile strategy issue, despite Facebook's huge adoption, Twitter appears to be the best bet for news content. According to another Pew survey, the Twitter user is more mobile:

Mobile devices are a key point of access for these Twitter news consumers. The vast majority, 85%, get news (of any kind) at least sometimes on mobile devices. That outpaces Facebook news consumers by 20 percentage points; 64% of Facebook news consumers use mobile devices for news. The same is true of 40% of all U.S. adults overall. Twitter news consumers stand out for being younger and more educated than both the population overall and Facebook news consumers.

And, as we saw earlier, Twitter redistributes extremely well to other social platforms. It's a no-brainer: any mobile site or app should carry a set of hashtags, whether it's a stream of information produced by the brand or prominent bylines known for their insights.

#3 The Time Spent Picture

Here is why news is so complicated to handle in mobile environments. According to Flurry Analytics, the American user spends 2 hours and 38 minutes each day on smartphones and tablets. Apps account for 80% of that time, and news accounts for just 2% of app consumption. The remaining 20% is spent in a browser, where we can assume the share of news to be much higher. But even under the most optimistic hypothesis, news consumption amounts to around 5% to 6% of time spent on a mobile device (a figure corroborated by other sources such as Nielsen). Note that this proportion seems to be decreasing: in May 2011, Flurry Analytics stated that news accounted for 9% of time spent in the app ecosystem.
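A quick back-of-the-envelope check makes the 5% to 6% figure plausible. The news share of browser time is not in the Flurry data; the 20% used below is an assumption, chosen only to show how the total can land in that range:

```python
# Back-of-the-envelope check of the Flurry figures above.
# ASSUMPTION: the news share of browser time (20%) is a guess,
# not a Flurry number.

total_minutes = 2 * 60 + 38               # 158 min/day on smartphone + tablet
app_minutes = total_minutes * 0.80        # apps: 80% of mobile time
browser_minutes = total_minutes * 0.20    # browser: remaining 20%

news_in_apps = app_minutes * 0.02         # news is 2% of app time (~2.5 min)
news_in_browser = browser_minutes * 0.20  # assumed news share of browser time

news_share = (news_in_apps + news_in_browser) / total_minutes
print(f"News share of mobile time: {news_share:.1%}")  # ~5.6%
```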

This view is actually consistent with broader pictures of digital news consumption, such as these two provided by Nielsen, which show that while users spend 50 minutes per month on CNN (thanks to its broad appeal and its video content), they spend only 18 minutes on the NYT and a mere 8 minutes on the Washington Post:

[Charts: Nielsen, monthly time spent per user on major news and web destinations]

All of the above compares to the 6 hours and 42 minutes spent on Facebook, or the 2 hours spent on YouTube or Yahoo sites.

In actionable terms, this shows the importance of having smartphone apps (or mobile web sites) sharply aimed at providing news in the most compact and digestible way. The "need to know" focus is therefore essential in mobile because catching eyeballs and attention has become increasingly challenging. That's why The New York Times is expected to launch a compact version of its mobile app (currently dubbed N2K, Need to Know, precisely), aimed at the market's youngest segment and most likely priced just below $10 a month. (The Times is also doing it because the growth of digital subscriptions aimed at the upper market is slowing down.) At the other end of the spectrum, the NYT is also said to be working on a digital magazine for the iPad, featuring rich multimedia narratives in (very) long form, such as the Pulitzer-winning Snow Fall (on that matter, the Nieman analysis is worth a read).

This also explains why the most astute digital publishers go for newsletters designed for mobile that are carefully – and wittily – edited by humans. (One example is the Quartz Daily Brief; it's anecdotal, but everyone I recommended this newsletter to now reads it on a daily basis.) I personally no longer believe in automated newsletters that repackage web site headlines, regardless of their quality. On smartphones, the fairly sophisticated users (read: educated and affluent) sought by large media demand time-saving services: to-the-point content, neatly organized in an elegant visual layout, and — that's a complicated subject — tailored to their needs.

#4 The ARPU View

On mobile devices, the Average Revenue Per User should be a critical component when shaping a mobile strategy. First, let's settle the tablet market question. Even though the so-called "cheap Android" segment ($100-150 for a plastic device running an older version of Android) thrives in emerging markets, when it comes to extracting significant money from users, the iPad runs the show. It accounts for 80% of tablet web traffic in the US, UK, Germany, France, Japan, and even China (source: Adobe).

The smartphone picture is more complicated. A year ago, several studies by App Annie and Flurry Analytics showed that the iPhone ecosystem brought in four times more revenue than Android's. More recently, Flurry Analytics ran a story stating that the average app price was $0.06 on Android vs. $0.19 on the iPhone and $0.50 on the iPad.

The gap is closing as Android terminals attract a growing number of affluent users. Still, compared to iOS, it is notoriously difficult to sell paid-for apps and services in the Android ecosystem, and Android ads remain cheaper. This is likely to remain the case for quite a while, as iOS devices should stay much more expensive than Android ones, and therefore better able to attract high-end demographics and the ads that go with them.

How this impacts a smartphone strategy: publishers might consider different business models for the two main ecosystems. They could go for fairly sophisticated apps in the iOS world, served by a well-oiled payment system allowing many flavors of in-app add-ons. By contrast, the Android environment favors a more go-for-volume approach; but things could evolve quickly as the Android share of the high-end audience grows and as the Play Store gains in sophistication and becomes as friction-free as the App Store.

frederic.filloux@mondaynote.com

Why Google Will Crush Nielsen

 

Internet measurement techniques need a complete overhaul. New ways have emerged, potentially displacing older panel-based technologies. This will make it hard for incumbent players to stay in the game.

The web user is the most watched consumer ever. For tracking purposes, every large site drops literally dozens of cookies in the visitor's browser. In the most comprehensive investigation on the matter, The Wall Street Journal found that the 50 largest web sites in the United States, accounting for 40% of US page views, installed an average of 64 tracking files on a user's device. (See the WSJ's What They Know series and a Monday Note about tracking issues.) As for server logs, they record every page sent to the user, and they tell with great accuracy which parts of a page collect most of the reader's attention.

But when it comes to measuring a digital viewer's commercial value, sites rely on old-fashioned panels, that is, limited samples of the user population. Why?

Panels are a legacy. They go back to the old days of broadcast radio when, in order to better sell advertising, dominant networks wanted to know which stations listeners tuned in to during the day. In the late thirties, the Nielsen Company made a clever decision: it installed a monitoring box in 1,000 American homes. Twenty years later, Nielsen did the same, on a much larger scale, with broadcast television. The advertising world was happy to be fed with plenty of data — mostly unchallenged, as Nielsen dominated the field. (For a detailed history, you can read Rating the Audience, written by two Australian media academics.) As Nielsen expanded to other media (music, film, books, and all sorts of polls), moving to internet measurement sounded like a logical step. As of today, Nielsen faces only smaller competitors such as ComScore.

I have yet to meet a publisher who is happy with this situation. Fearing retribution, very few people talk openly about it (twisting the dials is so easy, you know…), but they all complain about inaccurate, unreliable data. In addition, the panel system is vulnerable to cheating on a massive scale. Smarty-pants outfits sell a vast array of measurement boosters, from fake users that will come in just once a month to be counted as "unique" (they are indeed), to more sophisticated tactics such as undetectable "pop under" sites that rely on encrypted URLs to deceive the vigilance of panel operators. In France, for instance, 20% to 30% of some audiences can be bogus — or largely inflated. To its credit, Mediametrie — the French Nielsen affiliate that produces the most watched measurements — is expending vast resources to counter the cheating and to make the whole model more reliable. It works, but progress is slow. In August 2012, Mediametrie Net Ratings (MNR) launched a Hybrid Measure that takes site-centric analytics (server logs) into account to rectify panel numbers, but those corrections are still erratic. And it takes more than a month to get the data, which is not acceptable for the real-time-obsessed internet.
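The principle behind such a hybrid measure can be sketched simply, even if Mediametrie's actual methodology is far more elaborate and not public. A minimal sketch, with invented numbers: use the panel to estimate how many cookies a real person generates, then rescale the raw site-centric cookie count accordingly:

```python
# A minimal sketch of the calibration idea behind a hybrid measure.
# All figures are made up; the real MNR methodology is far richer.

panel_persons = 2_000        # panel members who visited the site
panel_cookies = 3_100        # cookies those same members generated
census_cookies = 4_500_000   # cookies counted in the site's server logs

persons_per_cookie = panel_persons / panel_cookies     # ~0.65
hybrid_audience = census_cookies * persons_per_cookie
print(f"Hybrid unique-audience estimate: {hybrid_audience:,.0f}")
```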

Publishers monitor the pulse of their digital properties on a permanent basis. In most newsrooms, Chartbeat (also imperfect, sometimes) displays the performance of every piece of content, and home pages get adjusted accordingly. More broadly, site-centric measures detail all possible metrics: page views, time spent, hourly peaks, engagement levels. This is based on server logs tracking dedicated tags inserted in each served page. But the site-centric measure is also flawed: if you use, say, four different devices — a smartphone, a PC at home, another at work, and a tablet — you will be incorrectly counted as four different users. And if you use several browsers, you could be counted even more times. This inherent site-centric flaw is the best argument for panel vendors.
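To make the flaw concrete, here is a toy illustration with hypothetical visitors: cookie-based analytics counts browser instances, not people:

```python
# Toy illustration of the site-centric flaw: each (device, browser)
# pair carries its own cookie and is reported as a distinct "unique".
# Visitors and devices below are invented.

visits = [
    ("alice", "smartphone/safari"),
    ("alice", "home-pc/chrome"),
    ("alice", "work-pc/firefox"),
    ("alice", "tablet/safari"),
    ("bob",   "smartphone/chrome"),
]

actual_people = len({person for person, _ in visits})
reported_uniques = len({device for _, device in visits})  # one cookie each
print(f"Real visitors: {actual_people}, reported uniques: {reported_uniques}")
# Real visitors: 2, reported uniques: 5
```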

But, in the era of Big Data and user profiling, panels no longer have the upper hand.

The developing field of statistical pairing technology shows great promise. It is now possible to pinpoint a single user browsing the web with different devices in a very reliable manner. Say you use the four devices mentioned earlier: a tablet in the morning and the evening; a smartphone for occasional updates on the move; and two PCs (a desktop at the office and a laptop elsewhere). Each time you visit a new site, an audience analytics company drops a cookie that records every move on every site, from each of your devices. Chances are your browsing patterns will be stable (basically your favorite media diet, plus or minus some services that are better fitted to a mobile device). Not only is your browsing profile determined from your navigation on a given site, but it is also quite easy to know which sites you visited before the one currently being monitored, adding further precision to the measurement.

Over time, your digital fingerprint will become more and more precise. At first, the four cookies are independent of one another. But the analytics firm compiles all the patterns in a single place. By data-mining them, analysts can determine the probability that a cookie dropped in a mobile application, a desktop browser, or a mobile web site belongs to the same individual. That's how multiple pairing works. (For more detail on the technical and mathematical side, you can read this paper by the founder of Drawbridge Inc.) I recently discussed these techniques with several engineers, both in France and in the United States. All were quite confident that such fingerprinting is doable and that it could be the best way to accurately measure internet usage across different platforms.
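As a rough illustration of the idea (and emphatically not Drawbridge's actual model, which is probabilistic and far richer), one could score two cookies by how much their browsing patterns overlap and pair them when the overlap clears a threshold. All data below is invented:

```python
# A naive pairing heuristic: score two cookies by the Jaccard
# similarity of the domains they visit, and pair them when the
# overlap is high enough. Domains and threshold are illustrative.

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of visited domains, from 0 to 1."""
    return len(a & b) / len(a | b)

phone_cookie = {"nytimes.com", "mondaynote.com", "twitter.com", "qz.com"}
laptop_cookie = {"nytimes.com", "mondaynote.com", "qz.com", "asymco.com"}
stranger_cookie = {"espn.com", "reddit.com", "youtube.com"}

PAIRING_THRESHOLD = 0.5  # arbitrary cutoff for this sketch
for name, other in [("laptop", laptop_cookie), ("stranger", stranger_cookie)]:
    score = jaccard(phone_cookie, other)
    verdict = "likely same person" if score >= PAIRING_THRESHOLD else "different"
    print(f"phone vs {name}: {score:.2f} -> {verdict}")
# phone vs laptop: 0.60 -> likely same person
# phone vs stranger: 0.00 -> different
```

In the real world, the signal is noisier, so the probability would be estimated over many sessions and features (timing, location, visit order), not a single similarity score.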

Obviously, Google is best positioned to perform this task on a large scale. First, its Google Analytics tool is deployed on over 100 million web sites. And the Google Ad Planner, even in its public version, already offers a precise view of the performance of many sites in the world. In addition, as one of the engineers pointed out, Google is already performing such pairing simply to avoid showing the same ad twice to someone using several devices. Google is also most likely doing such ranking in order to feed the obscure "quality index" algorithmically assigned to each site. It even does such pairing on a nominative basis by using its half-billion Gmail accounts (425 million in June 2012) and by connecting its Chrome users. As for giving up another piece of internet knowledge to Google, it doesn't sound like a big deal to me. The search giant already knows much more about sites than most publishers do about their own properties. The only thing that could prevent Google from entering the market of public web rankings would be the prospect of another privacy outcry. But I don't see why it won't jump on it — eventually. When this happens, Nielsen will be in big trouble.

frederic.filloux@mondaynote.com