
Brace For The Corporate Journalism Wave

 

 [Updated with fresh data]

Corporations are tempted to take over journalism with increasingly better content. For the profession, this carries both dangers and hopes for new revenue streams.

Those who fear Native Advertising or Branded Content will dread the unavoidable rise of Corporate Journalism. At first glance, associating the two words sounds like an oxymoron of the worst possible taste, an offense punishable by tarring and feathering. But, as I will now explain, the idea deserves a careful look.

First, consider the chart below, lifted from an Economist article titled Slime-slinging: Flacks vastly outnumber hacks these days. Caveat lector, published in 2011. The numbers are a bit old (I tried to update them without success), but the trend was obvious and is likely to have continued:

[Chart: public relations specialists vs. journalists in the US — The Economist, 2011]

Update:
As several readers pointed out, I failed to mention a Pew Research story by Alex T. Williams that contains recent data further confirming the trend (emphasis mine):

There were 4.6 public relations specialists for every reporter in 2013, according to the [Bureau of Labor Statistics] data. That is down slightly from the 5.3 to 1 ratio in 2009 but is considerably higher than the 3.2 to 1 margin that existed a decade ago, in 2004.

[Over the last 10 years], the number of reporters decreased from 52,550 to 43,630, a 17% loss according to the BLS data. In contrast, the number of public relations specialists during this timeframe grew by 22%, from 166,210 to 202,530.

 Williams also exposes the salary gap between PR people and news reporters:

In 2013, according to BLS data, public relations specialists earned a median annual income of $54,940 compared with $35,600 for reporters.

And I should also mention this excellent piece in the Weekend FT, on the invasion of corporate news.

In short, while journalistic staffing is shrinking dramatically in every mature market (US, Europe), the public relations crowd is rising in spectacular fashion. It grows along two dimensions. The first is the spinning side, with more highly capable people, most often seasoned former writers willing to become spin-surgeons, both disappointed by the evolution of their noble trade and attracted by higher compensation. The second is the growing inclination of PR firms, communication agencies and corporations themselves to build fully-staffed newsrooms with an editor-in-chief, writers, photo and video editors.

That’s the first issue.

The second trend is the evolution of corporate communication. Slowly but steadily, it is departing from the traditional advertising codes that ruled the profession for decades, shifting toward a more subtle and mature approach based on storytelling. Like it or not, that’s exactly what branded content is about: telling great stories about a company in an intelligent way rather than simply extolling a product’s merits.

I’m not saying that one will disappear at the other’s expense. Communication agencies will continue to plan, conceive and produce scores of plain, product-oriented campaigns. This is first because brands need them, but also because there is often no other way to promote a product than showing it in the most effective (and sometimes aesthetic) fashion. But the fact is, whether it is to stage the manufacturing process of a luxury watch or the engineering behind a new medical imaging device, more and more companies are getting into full-blown storytelling. To do so, they (or their surrogates) are hiring talent — which happens to be in rather large supply these days.

The rise of digital media is no stranger to this trend. In the print era, for practical reasons, it would have been inconceivable to intertwine classic journalism with such editorial treatments. In the digital world, things are completely different. Endless space, the ability to link, and insertable, expandable formats all open new possibilities when it comes to accommodating large, rich, multimedia content.

This evolution carries both serious hazards for traditional journalism as well as tangible economic opportunities. Let’s start with the business side.

Branded content (or native advertising) has achieved significant traction in the modern media business — even if the quality of its implementation varies widely. Some companies (that I will refrain from naming) screwed up big time by failing to properly identify paid-for content as opposed to genuine journalistic production. And a misled reader is a lost reader (especially if there is a pattern). But for those who pull off a good execution, both in terms of ethics and products, native ads carry much better value than banners, billboards, pushdowns, interstitials, or other pathetic “creations” massively rejected by readers. I know of several media companies selling dumb IAB formats that found they can achieve rates 5x to 8x higher by relying on high-quality, bespoke branded content. These more parsimonious and non-invasive products achieve much better audience acceptance than traditional formats.

For media companies, going decisively for branded content is also a way to regain control of their own business. Instead of getting avalanches of ready-to-eat campaigns from media buying agencies, they retain more control over the creation of advertising elements by dealing with the creative agencies or even with the brands themselves. Such a move comes with constraints, though. Entering branded content at a credible scale requires investment. To serve its advertising clients, BuzzFeed maintains 50 people in its own design studio. Relative to the size of their entire staff, many other new media companies (including Quartz) decided from the outset to build fairly large creative teams. That’s precisely why I believe most legacy media will miss this train (again). Focused on short-term cost control, and under pressure from conservative newsrooms who see branded content as the Antichrist, they will delay the move. In the meantime, pure players will jump on the opportunity.

Newsrooms have reasons to fear Corporate Journalism — in the sense of the ultimate form of branded content, entirely packaged by the advertiser — but not for the reasons editors usually put forward. Dealing with the visual segregation of native ads vs. editorial is not terribly complicated; it depends mostly on the mutual understanding between the head of sales (or the publisher) and the editor; the latter needs to be credible enough among his peers to impose his or her choices without yielding to corporatism-induced demagoguery.

But the juxtaposition of articles (or multimedia content) produced on one side by the newsroom and on the other by a sponsor willing to build its storytelling at any cost might trigger another kind of conflict, around means and sources.

In the end, journalism is all about access. Beat reporters from a news media outlet will do their best to circumvent the PR fence to get access to sources, while at the same time the PR team will order a bespoke story from its own staff writers. Both teams might actually find themselves in competition. Let’s say a media outlet wants to write a piece on the strategy shift of a major energy conglomerate with respect to global warming; the news team will talk to scores of specialists outside the company: financial analysts who challenge management’s choices, shareholders who object to expensive diversification, advocacy groups that monitor operations in sensitive areas, unions, etc. They will also try to gain access to those who decide the fate of the company, i.e. top management, strategic committees, etc. Needless to say, such access will be tightly controlled.

On the corporate journalism side, the story will be told differently: strategists and managers will talk openly and in a very interesting way (remember, they are interviewed by pros). At the same time, a well-crafted on-site video shot in an oil field in Borneo, or on a solar farm in Africa, will reinforce the message, 60 Minutes-style. The whole package won’t carry silly corporate messages; it will be rich, carefully balanced for credibility, and well-staged. Click-wise, it is also likely to be quite attractive with its glowing, sleek videos and great text that will have the breadth (but not the substance) of professional reporting.

I’m painting this in broad strokes. But you get my point: authentic news reporting and corporate journalism are bound to compete, as audiences could increasingly enjoy informative, well-designed corporate productions over drier journalistic work — even though the former is labelled as such. Of course, corporate journalism will remain small compared to the editorial content produced by a newsroom, but it could be quite effective in the long run.

frederic.filloux@mondaynote.com

The Browser Is The OS: 19 Years Later

 

So it was declared in the early days: Web apps will win over native apps. Why let the facts cloud an appealing theory?

Marc Andreessen, the Netscape co-founder, is credited with many bold, visionary claims such as “Everyone Will Have the Web” (ca. 1992), “Web Businesses Will Live in the Cloud” (1999), “Everything Will Be Social” (2004, four years before joining Facebook’s Board), and “Software Will Eat the World” (2011).

But not all of Andreessen’s predictions are as ringing and relevant. His 1995 proclamation that “The Browser Will Be the Operating System” still reverberates around the Web, despite the elusiveness of the concept.

The idea is that we can rid our computing devices of their bulky, buggy operating systems by running apps in the Cloud and presenting the results in a Web browser. The heavy lifting is performed by muscular servers while our lightweight devices do nothing more than host simple input/output operations. As a result, our devices will become more agile and reliable, they’ll be less expensive to buy and maintain, we’ll never again have to update their software.

The fly in the ointment is the word connected. As Marc Andreessen himself noted in a 2012 Wired interview [emphasis mine]:

[I]f you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model.

So what do we do until we have ubiquitous, high-speed wireless connectivity?

We must build off-line capabilities into our devices, local programs that provide the ability to format and edit text documents, spreadsheets, and presentations in the absence of a connection to the big App Engines in the Cloud. Easy enough, all you have to do is provide a storage mechanism (a.k.a. a file system), local copies of your Cloud apps, a runtime environment that can host the apps, a local Web server that your Browser can talk to… The inventory of software modules that are needed to run the “Browser OS” in the absence of a connection looks a lot like a conventional operating system… but without a real OS’s expressive power and efficiency.
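To make that inventory concrete, here is a minimal sketch of the “local copies of your Cloud apps” component, using the browser’s (then-emerging) Service Worker API. The file names are hypothetical; this illustrates the offline-caching pattern, not any particular vendor’s implementation:

```typescript
// sw.ts — compiled to JS and registered as a service worker by the page.
// At install time it downloads local copies of the "Cloud app";
// afterwards it serves those copies whenever the network is gone.
const CACHE = "app-shell-v1";
const ASSETS = ["/", "/app.js", "/app.css"]; // hypothetical app shell

self.addEventListener("install", (event: any) => {
  // Cache the whole app shell up front.
  event.waitUntil(caches.open(CACHE).then((c) => c.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  // Serve the local copy first; fall back to the network when connected.
  event.respondWith(
    caches.match(event.request).then((hit) => hit ?? fetch(event.request))
  );
});
```

Note how much operating-system machinery hides in those few lines: a cache (a file system in disguise), local copies of the app, and a runtime to host them — exactly the inventory listed above.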

For expressive power, think of media-intensive applications. Photoshop is a good example: it could never work with a browser as the front end; it requires too much bandwidth, and the fidelity of the image is too closely tied to the specifics of the display.

With regard to efficiency, consider the constant low-level optimizations required to conserve battery power and provide agile user interaction, none of which can be achieved in a browser plug-in.

Certainly, there are laudable arguments in support of The Browser Is The OS theory. For example: unified cross-platform development. True, developing an app that runs on a standardized platform decreases development costs but, let’s think again: do we really want to go for the lowest common denominator? A single standard sounds comfy and economical, but it throttles creativity; it discourages the development of apps that take advantage of a device’s specialized hardware.

Similarly, a world in which you never have to update your device because the Cloud always has the latest software is a comforting thought… but, again, what about when you’re off-line? Also, a growing number of today’s computing devices automatically update themselves.

In any case, the discussion may be moot: The people who pay our salaries — customers — blithely ignore our debates. A recent Flurry Analytics report shows that “Six years into the Mobile Revolution” apps continue to dominate the mobile Web. We spend 86% of our time using apps on our mobile devices and only 14% in our browsers:

[Chart: Flurry Analytics — share of time spent on mobile: apps 86%, browser 14%]

…and app use is on the rise, according to the Flurry Analytics forecast for 2014:

[Chart: Flurry Analytics — apps vs. mobile web time spent, 2013 vs. 2014 forecast]

So how did Andreessen get it so wrong, why was his prediction so wide of the mark? It turns out he wasn’t wrong… because he never said “The Browser Will Be the Operating System”. Although it has been chiseled into the tech history tablets, the quote is apocryphal.

While doing a little bit of research for this Monday Note, I found a 1995 HotWired article by Chip Bayers, strangely titled “Why Bill Gates Wants to Be the Next Marc Andreessen”. (Given Microsoft’s subsequent misses and Marc Andreessen’s ascendancy, perhaps we ought to look for other Chip Bayers prophecies…) The HotWired piece gives us a clear “asked and answered” Andreessen quote [emphasis mine]:

“Does the Web browser become something like an operating system?

No, it becomes a new type of platform. It doesn’t try to do the things an operating system does. Instead of trying to deal with keyboards, mouses, memory, CPUs, and disk drives, it deals with databases and files that people want to secure – transactions and things like that. We’re going to make it possible for people to plug in anything they want.”

Nearly two decades later, we still see stories that sonorously expound “The Browser Is The OS” theory. Just google the phrase and you’ll be rewarded with 275M results such as “10 reasons the browser is becoming the universal OS” or “The Browser Is The New Operating System”. We also see stories that present Google’s Chrome and Chromebooks as the ultimate verification that the prediction has come true.

The Browser Is The OS is a tech meme, an idea that scratches an itch. The nonquote was repeated, gained momentum, and, ultimately, became “Truth”. We’ll be polite and say that the theory is “asymptotically correct”… while we spend more energy figuring out new ways to curate today’s app stores.

JLG@mondaynote.com

The Quartz Way (2)

 

Last week, we looked at Atlantic Media’s business site Quartz (qz.com) from an editorial and product standpoint. Today, we focus on its business model based on an emerging form of advertising. 

The Quartz business model is simple: it’s free and therefore entirely ad supported. Why? Doesn’t qz.com target a business readership that shouldn’t mind spending nine dollars a month? “It was part of the original equation: Mobile first, and free, embracing the open web”, explains publisher Jay Lauf, whom I met in Paris a couple of weeks ago. Jay is also an Atlantic Media senior vice-president and the group publisher (he once was Wired’s publisher).


Jay Lauf, Publisher (Photo: Quartz)

According to him, launching Quartz was the latest iteration of a much grander plan. Four years ago, Atlantic Media held a meeting aimed at defining its strategy: “What we will do, but also what we will not do”, says Jay Lauf. The group came up with three key priorities: #1, be a growth company (as opposed to passively managing the shift from print to digital) — an idea greatly helped by Atlantic’s ownership structure, controlled by David Bradley. #2, “Digitally lead for everything”, which was not obvious for such an old publication — The Atlantic Monthly was created in 1857. #3, focus on “decision makers and influential people”.

Today, the goals set four years ago translate into a cluster of media brands reaching, every month, a highly solvent readership of 30 million people:

  • The Atlantic, the digital version of the eponymous magazine.
  • The Atlantic Wire, aimed at a younger generation that mostly relies on social media.
  • The Atlantic Cities, which focuses on urban centers and urban planning.
  • The National Journal, which itself includes several publications, mostly about politics and society.
  • Government Executive Media, which operates a number of niche publications covering the federal government (including its use of technology).
  • Atlantic Media Strategies, an independent division offering a full catalogue of advertising and marketing solutions, ranging from analytics to social media campaigns and content creation, such as this one with General Electric, in which a dedicated site features America’s economic futures – according to GE.


All this brings us to Quartz’s business model. It relies entirely on native advertising, also known as branded or sponsored content (see a previous Monday Note, What’s the Fuss About Native Ads?). Quartz’s implementation is straightforward: a small number of advertisers, served with high-yield campaigns.

Below is yesterday’s screenshot of Quartz’s endless scroll, featuring regular displays of branded content (in this case Boeing):

[Screenshot: branded content from Boeing within Quartz’s endless scroll]

Most of the time, the content is made or adapted especially for Quartz with variable involvement from its advertising division (the branded content operations are kept segregated from the editorial department). Quartz staff involvement ranges from collaborating on the ad content to setting up the HTML5 integration. Quartz purposely maintains a staff of copywriters and graphic designers assigned to assist brands with their communication. While ad spaces are clearly identified, their content is never completely dissociated from surrounding articles. Quite often, it reflects the newsroom’s “Obsessions”. Such precautions, plus the Quartz layout, warrant good click rates and high prices. Quartz people are discreet about the KPIs, but sources in the ad community say that CPMs for its native ad content could be roughly ten times higher than for traditional display ads.

Atlantic Media’s weight and bargaining power helped jumpstart the ad pump. A year ago, the site started with four brands: Chevron, Boeing, Credit Suisse and Cadillac. Today, Quartz has more than twenty advertisers from the same league. Unlike other multi-page websites, its one-scroll structure not only proposes a single format, it also re-creates scarcity. (Plus, the fact that Quartz does not have any mobile apps greatly simplifies the commercial process.) Still, it can be a double-edged sword: scarcity does translate into high prices, but it also limits the number of available slots, therefore capping the revenue stream. Quartz’s publisher and head of sales made a tough choice — high rates vs. high volume — and so far it seems to work fine, as the site is close to break-even ahead of schedule.

How far it can go remains to be seen. Quartz is a relatively small operation (50 people altogether, including 25 journalists producing 35-40 stories a day, plus a nice location in NYC’s Soho district). My guess is it shouldn’t burn more than $10m a year. Extrapolating from the site’s audience, profitability sounds within reach of Quartz’s current “value model”. But the asymptote — factoring in ad rates, number of slots, advertisers’ “dimension”, and traffic — could also be near, and therefore constrain Quartz’s ability to scale up. That’s why the publication is now entering the crowded conference sector with “Quartz Live”, featuring its customary exclusive attendance and editorially rich approach. Will Quartz escape the temptation to launch paid-for products? Its journalistic content leaves many opportunities open in that field. For example, a mixture of semantically assembled, high-end briefings tailored to carefully profiled segments of its audience could generate a nice revenue stream; so could ebooks and long-form features.
To be continued next year…

frederic.filloux@mondaynote.com

 

Memo #3 to Jeff — Data & User Profiling for The Washington Post

 

For customer-related technologies, the financial and intellectual backing of Jeff Bezos, and his Amazon experience can give The Post a huge competitive advantage. Here is what should be at the top of the to-do list. 

Every digital manager must plan to tap into Amazon’s fantastic engineering firepower. (Even though Bezos bought the newspaper out of his own pocket, the first thing he’ll do — if he hasn’t already — will be to draft some of his techies as “advisors” to The Post.) The key point being: the influx of engineering brainpower must not be limited to the digital side of the house, or to the newspaper’s IT infrastructure. It should impact all activities: editorial, marketing, subscriptions and paid-for products. Let’s dive into details.

Turbo-boosting the editorial. Let’s start with the basics: What characterizes media outlets playing in The Washington Post’s league? Their ability to line up top journalistic resources to cover stories that matter, in depth, with multiple angles and treatment modes (text, feature stories, photographs, graphics, multimedia storytelling, live blogging, opinions, etc.), while deploying the best expertise on the topics covered. These are the items that make the difference between the bulk of pure players and true legacy media.

In many ways, the above is anti-economic; it is loaded with inherent inefficiencies — dry holes, dead ends, time wasted on promising leads — that drive nuts the “quant zealots” obsessed with KPIs and productivity measurements. At this point, the difference between great newsroom managers (i.e. editors) and average ones lies in their ability to make some room for “managed inefficiencies”. An editor’s key, delicate duty is weighing the purpose of resource-intensive tasks such as flummoxing the competition, pursuing a worthy story, or launching a months-long journalistic project aimed at a Pulitzer Prize. Unfortunately, weak leadership — balking at tough choices and yielding instead to a sorry attempt to spread an even level of (dis)satisfaction among constituencies — causes inefficiencies to grow like weeds.

The foremost goal of technology-enhanced news content is smartly weaving together all components of a topic. The idea is to keep the reader aboard by encouraging multiple levels of reading, with different angles on a subject, calls to essential archives or to other forms of journalism such as blogs or infographics. In this field, Amazon is light-years ahead of the news industry. Almost twenty years of refinements to Amazon’s e-commerce recommendation engine will undoubtedly benefit The Post by raising the number of editorial treatments seen by the reader.
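As a toy illustration of what such weaving machinery does — and only a toy, since Amazon’s actual engine is proprietary and vastly more sophisticated — here is a minimal recommender that scores archive items by tag overlap with the article being read:

```typescript
// A toy "related content" engine: rank archive items by Jaccard
// similarity of their tag sets with the current article.
type Item = { id: string; tags: Set<string> };

function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter((t) => b.has(t)).length;
  return inter === 0 ? 0 : inter / (a.size + b.size - inter);
}

function related(current: Item, archive: Item[], k = 3): Item[] {
  return archive
    .filter((i) => i.id !== current.id)
    .sort(
      (x, y) => jaccard(current.tags, y.tags) - jaccard(current.tags, x.tags)
    )
    .slice(0, k);
}

// Example: a climate story surfaces the archive piece it overlaps most.
const story = { id: "s1", tags: new Set(["climate", "energy", "policy"]) };
const archive = [
  { id: "s2", tags: new Set(["energy", "policy", "lobbying"]) },
  { id: "s3", tags: new Set(["sports"]) },
];
console.log(related(story, archive).map((i) => i.id)); // ["s2", "s3"]
```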

Another key item will be the level of news personalization. What should a Post reader mostly see? News that matters to him or her, or everything the paper’s staff collects? How to define mostly? Fully tailored content based on past navigation? Stated preferences combined with the preserved serendipity that together make up the core of news construction? This is a deeply involved problem — and the subject of a future Monday Note.

Reader profiling. All digital publishers dream of knowing exactly which reader sees what content, where, at what time of the day and on which vector: web, smartphone, tablet. The finer the granularity, the better. Slicing and dicing readership into segments of age, profession, residence, income, and interests yields three types of uses:

  • increasing news content stickiness by serving customized content as mentioned earlier
  • smarter customized advertising, as opposed to dumbly drowning users in a flood of ads for months based on data collected during the shopping season. This practice, known as “retargeting”, is one of the internet’s “seven plagues” and the most potent repellent to advertising
  • channelling the reader to the catalogue of ancillary products any news outlet should operate. For example: once a reader is identified (even anonymously) as working in the legal field, for a media group struggling to fill the last seats of its conference on privacy laws, why not show this loyal reader a one-time, 50%-discounted ticket, valid for 24 hours only? Simplistic as this example might seem, its large-scale application is far from trivial: it requires super-accurate analytics and the deployment of “event engines” that will trigger the display of the right offer, at the right time, to the right segment of the population (see the sketch after this list). Fortunately, this is the kind of work Amazon geeks are particularly good at.
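Here is what such an “event engine” might look like at its very simplest, with the privacy-law conference example coded as a rule. Every name, field and threshold is hypothetical:

```typescript
// A toy event engine: match an identified (even anonymous) reader
// profile against one-shot offer rules.
type Profile = { segment: string; loyal: boolean };
type Offer = { id: string; discountPct: number; validHours: number };
type Rule = { when: (p: Profile) => boolean; offer: Offer };

const rules: Rule[] = [
  {
    // The example from the text: a loyal reader working in the legal
    // field gets a 24-hour, half-price conference ticket.
    when: (p) => p.segment === "legal" && p.loyal,
    offer: { id: "privacy-law-conf", discountPct: 50, validHours: 24 },
  },
];

function offersFor(p: Profile): Offer[] {
  return rules.filter((r) => r.when(p)).map((r) => r.offer);
}

console.log(offersFor({ segment: "legal", loyal: true })); // the ticket offer
```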

For The Washington Post, the benefits are numerous. Research shows that serving the right ad to the right profile can raise its value by a factor of 1.5x to 2x. And the performance of ancillary products (conferences, business events, news-related ebooks or professional products, education packages, etc.) will become easier to measure.

Impact on paywall and subscription models. Paywall theory can be summarized as follows:

  • deploying a wide range of tactics, all aimed at significantly raising the number of news content items (not necessarily articles) a reader views every month. Let’s make no mistake: the main dial is under the newsroom’s control; marketing wizardry won’t do the trick
  • finding the readers most likely to convert to a paid-for subscription and, week after week, serving them (I write serving, not bombarding) offers they can’t refuse: an extended trial period, or a news-related bonus that reflects the breadth of the company’s line of products.

As with most theories, practice is much harder. A paid-for system is a long-term, investment-intensive, staffing-critical effort. Two legacy media did it particularly well: The Financial Times and The New York Times. The former built a digital subscription base that now surpasses the paper’s; the latter added $100m a year in revenue that did not exist three years ago. Most paywall strategies underperform for two reasons: first, an error in predicting the editorial content’s ability to retain readers beyond a free threshold of 10, 15, or 20 stories a month; second, a failure to build the data-driven infrastructure that is mandatory for any paid-for product. The Washington Post does relatively well on the first test. For the second, the backing of Amazon tech brains will give it the best chance to succeed.
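Mechanically, a metered threshold is simple; the hard part is everything around it. A minimal sketch, assuming a 15-story quota and in-memory counters (a real system would track this server-side, per account, across devices):

```typescript
// A toy metered paywall: each reader gets a free monthly quota of
// items; beyond it, the subscription offer is shown instead.
const FREE_ITEMS_PER_MONTH = 15; // placeholder threshold (10, 15 or 20)

const counters = new Map<string, { month: string; count: number }>();

function mayReadForFree(userId: string, now = new Date()): boolean {
  const month = `${now.getFullYear()}-${now.getMonth() + 1}`;
  const c = counters.get(userId);
  if (!c || c.month !== month) {
    // First item of a new month: reset the meter.
    counters.set(userId, { month, count: 1 });
    return true;
  }
  c.count += 1;
  return c.count <= FREE_ITEMS_PER_MONTH; // false => serve the offer
}
```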

frederic.filloux@mondaynote.com

Your smartphone, your moods, their market

 

Coupled with facial imaging, the smartphone could become the ultimate media analytics tool for evaluating editorial content or measuring the effectiveness of ads. Obviously, there are darker sides.

When it comes to testing new products, most of us have been through the focus group experience. You sit behind a one-way mirror and watch a handpicked group of people dissect your new concept: a magazine redesign, a new website or a communication campaign. It usually lasts a couple of hours, during which the session moderator does his best to extract intelligent remarks from the human sample. Inevitably, the client — you, me, behind the glass — ends up questioning the group’s relevance, the way the discussion was conducted, and so on. In the end, everyone makes up their own interpretation of the analyst’s conclusions. As usual, I’m caricaturing a bit; plus, I’m rather in favor of product pre-tests as they always yield something useful. But we can all agree the methods could be improved — or supplemented.

Now consider Focus Group 2.0: to a much larger sample (say, a few hundred people), you send a mockup of your next redesign, a new mobile app, or an upcoming ad campaign you’d better not flunk. The big 2.0 difference resides in a software module installed on the tester’s smartphone or computer that will use the device’s camera to decipher the user’s facial expressions.

Welcome to the brave new world of facial imaging. It could change the way visual designs are conceived and tested, making them more likely to succeed as a result. These techniques are based on the work of American psychologist Paul Ekman, who studied emotions and their relation to facial expression. Ekman was the first to work on “micro-expressions”, which yield authentic, impossible-to-suppress reactions.

The human face has about 43 facial muscles that produce about 8,000 different combinations. None of these expressions is voluntary, nor are they dependent on social origin or ethnicity. The muscles react automatically and swiftly — in no more than 10 or 20 milliseconds — to cerebral cortex instructions sent to the facial nerve.

Last month, in Palo Alto, I met Rick Lazansky, a board director at the venture capital firm Sand Hill Angels. In the course of a discussion about advertising inefficiencies (I had just delivered a talk at Stanford underlining the shortcomings of digital ads), Rick told me he had invested in a Swiss-based company called Nviso. Last week, we set up a Skype conference with Tim Llewellyn, founder and CEO of the company. (Nviso is incubated on the campus of the Swiss Federal Institute of Technology in Lausanne, where Dr. Matteo Sorci, Nviso’s chief scientist and co-founder, used to work.)

Facial imaging’s primary market is advertising, explains the Nviso team. Its technology consists of mapping 143 points on the face, activated by the 43 facial muscles. Altogether, their tiny movements are algorithmically translated into the seven most basic expressions: happiness, surprise, fear, anger, disgust, sadness and neutral, each lasting a fraction of a second. In practice, such techniques require careful adjustment, as many factors skew the raw data. But the ability to apply such measurements to hundreds of subjects in a very short time ensures the procedure’s statistical accuracy and guarantees consistent results.
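Nviso does not disclose its algorithms, so purely as an illustration of the general idea — reduce the tracked points’ movements to a feature vector, then match it against labelled references — here is a toy nearest-centroid classifier over the seven basic expressions:

```typescript
// A toy expression classifier: pick the labelled reference vector
// (centroid) closest to the measured landmark-displacement features.
// Real pipelines (143 tracked points, trained models) are far richer.
const EXPRESSIONS = [
  "happiness", "surprise", "fear", "anger", "disgust", "sadness", "neutral",
] as const;
type Expression = (typeof EXPRESSIONS)[number];

function distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

function classify(
  features: number[],
  centroids: Record<Expression, number[]>
): Expression {
  // Reduce over the label list, keeping the nearest centroid so far.
  return EXPRESSIONS.reduce((best, e) =>
    distance(features, centroids[e]) < distance(features, centroids[best])
      ? e
      : best
  );
}
```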

Webcams and, more importantly, smartphone cameras will undoubtedly boost uses of this technology. Tests that once involved a dozen people in a focus group can now be performed on a sample measured in hundreds, in a matter of minutes. (When scaling up, one issue becomes the volume of data: at, say, ten frames per second, one minute of video for each of 200 respondents already yields over 100,000 images to process.)

Scores of applications are coming. The most solvent field is obviously the vast palette of market research activities. Designers can quickly test logos, layouts, mockups, storyboards. Nviso works with Nielsen in Australia and New Zealand and with various advertisers in Korea. But company execs know many other fields could emerge. The most obvious one is security. Imagine sets of high-speed cameras performing real-time assessments at immigration or customs in an airport; or a police officer using the same technology to evaluate someone’s truthfulness under interrogation. (The Miranda Warning would need its own serious facelift…) Nviso states that it stays out of this field, essentially because of the high barrier to entry.

Other uses of facial imaging techniques will be less contentious. For instance, it could be of great help to the booming sector of online education. Massive Open Online Course (Mooc) operators are struggling with two issues: authentication and student evaluation. The former is more or less solved thanks to techniques such as encoding typing patterns, a feature reliably unique to each individual. Addressing evaluation is more complicated. As one Stanford professor told me when we were discussing the fate of Moocs, “Inevitably, after a short while, you’ll have 20% to 30% of the students left behind, while roughly the same proportion will get bored…” Keeping everyone on board is therefore one of the most serious challenges for Moocs. And since Moocs are about scale, such a task has to be handled by machines able to deal with thousands of students at a time. Being able to detect student moods in real time and to guide them to relevant branches of the syllabus’s tree structure will be essential.
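The typing-pattern authentication mentioned above can be sketched in a few lines. The feature (gaps between keystrokes) and the tolerance are illustrative stand-ins; production keystroke-dynamics systems rely on trained statistical models:

```typescript
// A toy keystroke-dynamics check: compare the timing gaps between a
// user's keystrokes to a stored profile of the same typed phrase.
function gaps(timestampsMs: number[]): number[] {
  return timestampsMs.slice(1).map((t, i) => t - timestampsMs[i]);
}

function meanAbsDiff(a: number[], b: number[]): number {
  const n = Math.min(a.length, b.length);
  let sum = 0;
  for (let i = 0; i < n; i++) sum += Math.abs(a[i] - b[i]);
  return n === 0 ? Infinity : sum / n;
}

function sameTypist(
  profileTimestamps: number[],
  sampleTimestamps: number[],
  toleranceMs = 25 // hypothetical threshold
): boolean {
  return (
    meanAbsDiff(gaps(profileTimestamps), gaps(sampleTimestamps)) < toleranceMs
  );
}
```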

These mood-analysis techniques are just nascent. Besides Nviso, several well-funded companies such as Affectiva compete for the market-research sector. The field will be reinforced by other technologies, such as the vocal intonation analysis deployed by startups like Beyond Verbal. And there is more in store. This story on Smithsonian.com, titled “One day, your smartphone will know if you are happy or sad”, sums up the state of the art, with mobile apps designed to decipher your mood based on the way you type, and research conducted by Samsung to develop emotion-sensing smartphones. As far as privacy is concerned, this is just the beginning of the end. Just in case you had a doubt…

frederic.filloux@mondaynote.com

Why Google Will Crush Nielsen

 

Internet measurement techniques need a complete overhaul. New ways have emerged, potentially displacing older panel-based technologies. This will make it hard for incumbent players to stay in the game.

The web user is the most watched consumer ever. For tracking purposes, every large site drops literally dozens of cookies in the visitor’s browser. In the most comprehensive investigation on the matter, The Wall Street Journal found that the 50 largest web sites in the United States, accounting for 40% of US page views, each installed an average of 64 files on a user’s device. (See the WSJ’s What They Know series and a Monday Note about tracking issues.) As for server logs, they record every page sent to the user, and they tell with great accuracy which parts of a page collect most of the reader’s attention.

But when it comes to measuring a digital viewer’s commercial value, sites rely on old-fashioned panels — that is, limited samples of the user population. Why?

Panels are inherited. They go back to the old days of broadcast radio when, in order to better sell advertising, dominant networks wanted to know which stations listeners tuned in to during the day. In the late thirties, the Nielsen Company made a clever decision: it installed a monitoring box in 1,000 American homes. Twenty years later, Nielsen did the same, on a much larger scale, with broadcast television. The advertising world was happy to be fed plenty of data — mostly unchallenged, as Nielsen dominated the field. (For a detailed history, you can read Rating the Audience, written by two Australian media academics.) As Nielsen expanded to other media (music, film, books and all sorts of polls), moving to internet measurement sounded like a logical step. As of today, Nielsen faces only smaller competitors such as ComScore.

I have yet to meet a publisher who is happy with this situation. Fearing retribution, very few people talk openly about it (twisting the dials is so easy, you know…), but they all complain about inaccurate, unreliable data. In addition, the panel system is vulnerable to cheating on a massive scale. Smarty-pants outfits sell a vast array of measurement boosters, from fake users who come in just once a month to be counted as “unique” (they are, indeed), to more sophisticated tactics such as undetectable “pop-under” sites that rely on encrypted URLs to deceive the vigilance of panel operators. In France, for instance, 20% to 30% of some audiences can be bogus — or largely inflated. To its credit, Mediametrie — the French Nielsen affiliate that produces the most watched measurements — is expending vast resources to counter the cheating and to make the whole model more reliable. It works, but progress is slow. In August 2012, Mediametrie Net Ratings (MNR) launched a Hybrid Measure that takes site-centric analytics (server logs) into account to rectify panel numbers, but those corrections are still erratic. And it takes more than a month to get the data, which is not acceptable for the real-time-obsessed internet.

Publishers monitor the pulse of their digital properties on a permanent basis. In most newsrooms, Chartbeat (also imperfect, sometimes) displays the performance of every piece of content, and home pages get adjusted accordingly. More broadly, site-centric measures detail all possible metrics: page views, time spent, hourly peaks, engagement levels. This is based on server logs tracking dedicated tags inserted in each served page. But the site-centric measure is also flawed: if you use, say, four different devices — a smartphone, a PC at home, another at work, and a tablet — you will be incorrectly counted as four different users. And if you use several browsers, you could be counted even more times. This inherent site-centric flaw is the best argument for panel vendors.

But, in the era of Big Data and user profiling, panels no longer have the upper hand.

The developing field of statistical pairing technology shows great promise. It is now possible to pinpoint a single user browsing the web with different devices in a very reliable manner. Say you use the four devices mentioned earlier: a tablet in the morning and the evening; a smartphone for occasional updates on the move; and two PCs (a desktop at the office and a laptop elsewhere). Now, each time you visit a new site, an audience analytics company drops a cookie that will record every move on every site, from each of your devices. Chances are your browsing patterns will be stable (basically your favorite media diet, plus or minus some services that are better suited to a mobile device). Not only is your browsing profile determined from your navigation on a given site, it is also quite easy to know which sites you visited before the one currently monitored, adding further precision to the measurement.

Over time, your digital fingerprint will become more and more precise. Until then, the four cookies are independent of each other. But the analytics firm compiles all the patterns in a single place. By data-mining them, analysts can determine the probability that a cookie dropped in a mobile application, a desktop browser or a mobile web site belongs to the same individual. That’s how multiple pairing works. (To get more details on the technical and mathematical side of it, you can read this paper by the founder of Drawbridge Inc.) I recently discussed these techniques with several engineers, both in France and in the United States. All were quite confident that such fingerprinting is doable and that it could be the best way to accurately measure internet usage across different platforms.
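As a toy version of such pairing — real systems like Drawbridge’s layer probabilistic models, timing and location on top — two cookies can be compared by how similar their site-visit patterns look; a high score suggests the same person:

```typescript
// A toy pairing score: each cookie is a site -> visit-count vector;
// cosine similarity measures how alike two "media diets" are.
type Visits = Map<string, number>;

function cosine(a: Visits, b: Visits): number {
  let dot = 0, normA = 0, normB = 0;
  for (const [site, v] of a) {
    dot += v * (b.get(site) ?? 0);
    normA += v * v;
  }
  for (const v of b.values()) normB += v * v;
  return normA && normB ? dot / Math.sqrt(normA * normB) : 0;
}

// Hypothetical threshold: above it, treat the two cookies — say one
// from a tablet, one from an office desktop — as the same individual.
function likelySameUser(a: Visits, b: Visits, threshold = 0.8): boolean {
  return cosine(a, b) >= threshold;
}
```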

Obviously, Google is best positioned to perform this task on a large scale. First, its Google Analytics tool is deployed on over 100 million web sites. And Google Ad Planner, even in its public version, already offers a precise view of the performance of many sites in the world. In addition, as one of the engineers pointed out, Google already performs such pairing simply to avoid showing the same ad twice to someone using several devices. Google is also most likely doing such ranking to feed the obscure “quality index” algorithmically assigned to each site. It even does such pairing on a nominative basis, using its half billion Gmail accounts (425 million as of June 2012) and its connected Chrome users. As for giving up another piece of internet knowledge to Google, it doesn’t sound like a big deal to me. The search giant already knows much more about sites than most publishers do about their own properties. The only thing that could prevent Google from entering the market of public web rankings would be the prospect of another privacy outcry. But I don’t see why it wouldn’t jump on it — eventually. When that happens, Nielsen will be in big trouble.

frederic.filloux@mondaynote.com

What’s the Fuss About Native Ads?

 

In the search for new advertising models, Native Ads are booming. The ensuing Web vs. Native controversy is a festival of fake naïveté and misplaced indignation. 

Native Advertising is the politically correct term for Advertorial, period. Or rather, it’s an upgrade: the digital version of an old practice dating back to the era of typewriters and lead printing presses. Everyone who’s been in the publishing business long enough has in mind the tug-of-war with the sales department, which always wants its ads to appear next to editorial content that provides good “context”. This makes the whole “new” debate about Native Ads quite amusing. The magazine sector (more than newspapers) has always referred to “clean” and “tainted” sections. (The latter kept expanding over the years.) In consumer and lifestyle sections, editorial content produced by the newsroom is often tailored to fit surrounding ads (or to flatter a brand that will buy legit placements).

The digital era pushes the trend several steps further. Today, legacy media brands such as Forbes, Atlantic Media, or the Washington Post have joined the Native Ads bandwagon. Forbes even became the poster child for that business, thanks to the fully assumed approach carried out by its chief product officer Lewis DVorkin (see his insightful blog and also this panel at the recent Paid Content Live conference). Advertising is not the only way DVorkin has revamped Forbes. Last week, Les Echos (the business daily that’s part of the media group I work for) ran an interesting piece about it, titled “The Old Press in Startup Mode” (La vieille presse en mode start-up). It details the decisive — and successful — moves by the century-old media house: a downsized newsroom and external contributors (by the thousand, and mostly unpaid) who produce a huge stream of 400 to 500 pieces a day. “In some cases”, wrote Lucie Robequain, Les Echos’ New York correspondent, “the boundary between journalism and advertorial can be thin…” To which Lewis DVorkin retorts: “Frankly, do you think a newspaper that conveys corporate voices is more noble? At Forbes, at least, we are transparent: we know which company the contributor works for and we expose potential conflicts of interest in the first graph…” Maybe. But screening a thousand contributors sounds a bit challenging to me… And Forbes evidently exposed itself as part of the “sold” blogosphere. Les Echos’ piece also quotes Joshua Benton of Harvard’s Nieman Journalism Lab, who finds the bulk of Forbes’ production, on average, not as good as it used to be, but concedes the top 10% is actually better…

As for Native Advertising, two years ago, Forbes industrialized the concept by creating BrandVoice. Here is the official definition:

Forbes BrandVoice allows marketers to connect directly with the Forbes audience by enabling them to create content – and participate in the conversation – on the Forbes digital publishing platform. Each BrandVoice is written, edited and produced by the marketer.

Practically, Forbes lets marketers use the site’s Content Management System (CMS) to create their content at will. The commercial deal — from what we can learn — involves volumes and placements that cause the rate to vary between $50,000 and $100,000 per month. The package can also include traditional banners that send traffic back to the BrandVoice page.

At any given moment, there are about 16 brands running on Forbes’ “Voices”. This revenue stream is a significant contributor to the publisher’s financial performance. According to AdWeek (emphasis mine):

The company achieved its best financial performance in five years in 2012, according to a memo released this morning by Forbes Media CEO Mike Perlis. Digital ad revenue, which increased 19 percent year over year, accounted for half of the company’s total ad revenue for the year, said Perlis. Ten percent of total revenue came from advertisers who incorporated BrandVoice into their buys, and by the end of this year, that share is estimated to rise to 25 percent.

Things seemed pretty positive across other areas of Forbes’ business as well. Newsstand sales and ad pages were up 2 percent and 4 percent, respectively, amid industry-wide drops in both areas. The relatively new tablet app recently broke 200,000 downloads.

A closer look gives a slightly bleaker picture: according to the latest data from the Magazine Publishers Association, between Q1 2012 and Q1 2013, Forbes magazine (the print version only) lost 16% in ad revenue ($50m down to $42m). By comparison, Fast Company scored +25% and Fortune +7%, but The Economist lost 27% and Bloomberg Businessweek 30%. Overall, the titles compiled by the MPA were stable (+0.5%).

I almost never click on banners (except to see if they work as expected on the sites and apps I’m in charge of). Most of the time their design sucks, terribly so, and the underlying content is usually below grade. However, if the subject appeals to me, I will click on Native Ads or branded content. I’ll read it like any other story, knowing full well that it’s promotional material. The big difference between a crude ad and a content-based one is the storytelling dimension. The fact is: every company has great stories to tell about its products, strategy or vision. And I don’t see why they shouldn’t be told with the same storytelling tools news media use. As long as it’s done properly, with a label explaining the content’s origin, I don’t see the problem (for more on this question, read a previous Monday Note: The Insidious Power of Brand Content). In my view, Forbes does blur the line a bit too much, but Atlantic’s business site Quartz is doing fine in that regard. With the required precautions, I’m certain Native Ads, or branded content, are a potent way to go, especially considering the alarming state of other forms of digital ads. Click-through rates are much better (2%-5% vs. a fraction of a percent for a dumb banner) and the connection to social media works reasonably well.

For news media companies obsessed with their journalistic integrity (some still are…), the development of such new formats makes it more complicated to decide what’s acceptable and what’s not. Ultimately, the editor should call the shots. Which brings us to the governance of media companies. For digital media, the pervasive advertising pressure is likely to keep growing. Today, most rely on a Chief Revenue Officer to decide what’s best for the bottom line, such as balancing circulation and advertising, or arbitraging between large audience/low yield and smaller audience/higher yield. But, in the end, only the editor must be held accountable for the content’s quality and credibility — both of which contribute to the commercial worthiness of the media. Especially in the digital field, editors should be shielded from business pressure. Editors should be selected by CEOs and appointed by boards or, better, boards of trustees. Independence will become increasingly scarce.

frederic.filloux@mondaynote.com