Why the right use of Big Data can change the economics of digital publishing. 

Digital publishing is vastly undervalued. Advertising has yet to fulfill its promises -- it is nosediving on the web and it failed on mobile (read JLG's previous column Mobile Advertising: The $20 billion Opportunity Mirage). Readers come, and often go, as many digital publications are unable to retain them beyond a few dozen articles and about thirty minutes per month. Most big names in the digital news business are stuck with single-digit ARPUs (Average Revenue Per User). People do flock to digital, but cash doesn't follow -- at least, not in the amounts required to sustain the production of quality information. Hence the crux of the situation: if publishers are unable to extract significantly more money per user than they do now, most of them will simply die. As a result, the bulk of the population -- with the notable exception of the educated wealthy -- will rely on high-audience web sites that merely act as echo chambers for shallow commodity news snippets.

The solution, the largest untapped value, resides right before publishers' eyes: reader profiles and content, all matched against the "noise" of the internet.

Extracting such value is a Big Data problem. But, before we go any further, what is Big Data? The simplest answer: data sets too large to be ingested and analyzed by conventional database management tools. At first, I was suspicious; this sounded like a marketing concept devised by large IT players struggling to rejuvenate their aging brands. I changed my mind when I met people with hands-on experience, from large corporations down to a 20-staff startup. They work on tangible things: collecting data streams from fleets of cars or airplanes, processing them in real time and, in some cases, matching them against other contexts. Patterns emerge and, soon, manufacturers predict what is likely to break in a car, find ways to refine the maintenance cycle of a jet engine, or realize which software modification is needed to increase the braking performance of a luxury sedan.
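To make this concrete, here is a minimal sketch, in Python, of the kind of real-time stream processing involved: a rolling window of sensor readings from a fleet, flagged whenever a value drifts far from its recent norm. The sensor, thresholds and data are all invented for illustration; a real system would run thousands of such monitors and correlate their alerts.

```python
from collections import deque
import statistics

WINDOW = 50        # how many recent readings to keep (arbitrary choice)
Z_THRESHOLD = 3.0  # flag readings more than 3 standard deviations out

class SensorMonitor:
    """Tracks one sensor stream and flags statistical outliers."""
    def __init__(self):
        self.window = deque(maxlen=WINDOW)

    def observe(self, value):
        alert = False
        if len(self.window) >= 10:  # wait for some history before judging
            mean = statistics.mean(self.window)
            stdev = statistics.stdev(self.window)
            if stdev > 0 and abs(value - mean) / stdev > Z_THRESHOLD:
                alert = True  # a candidate early sign of a failing part
        self.window.append(value)
        return alert

# Hypothetical usage: one monitor per (vehicle, sensor) pair in the fleet
monitor = SensorMonitor()
for reading in [70.1, 70.3, 69.8] * 10 + [88.5]:  # simulated brake temperatures
    if monitor.observe(reading):
        print(f"anomaly detected: {reading}")
```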

Phone carriers and large retail chains have been using such techniques for quite a while and have adjusted their marketing as a result. Just for fun, read this New York Times Magazine piece depicting, among other things, the predictive pregnancy model developed by Target (a large US supermarket chain). Through powerful data mining, the aptly named Target corporation is able to pinpoint customers reaching their third month of pregnancy, a pivotal moment in their consumption habits. Or look at Google Flu Trends, which tracks flu outbreaks better than any government agency.

Now, let's narrow the scope back to the subject of today's column and see how these technologies could be used to extract more value from digital news.

The internet already provides the necessary tools to see who is visiting a web site, what they like, and so on. The idea is to know the user with greater precision and to anticipate their needs.

Let's draw an analogy with Facebook. By carefully analyzing the "content" produced by its users -- statements, photos, links, interactions among friends, "likes", "pokes", etc. -- the social network has been able to develop spectacular predictive models. It can detect a change in someone's status (single, married, engaged, etc.) even if the person never mentions it explicitly. Similarly, Facebook can predict with great accuracy the probability that two people chatting casually on the network will become romantically involved. The same applies to a change in someone's financial situation or to health incidents. Without anyone saying a word, semantic analysis correlated with millions of similar behaviors will detect who is newly out of a job, depressed, bipolar, broke, high, elated, pregnant, or engaged. Unbeknownst to them, online behavior makes people completely transparent. For Facebook, this can translate into an unbearable level of intrusiveness, such as showing embarrassing ads or making silly recommendations -- ones seen by everyone.
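As a toy illustration of such a predictive model, here is a minimal sketch using scikit-learn's logistic regression: given a few behavioral features extracted from a user's activity, estimate the probability of a life event. The feature names, the training data and the event itself (a job change) are all invented here; a real system would learn from millions of labeled activity histories.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical features per user: [posting_rate_change, new_contacts_last_month,
#                                  late_night_activity_ratio, job_keyword_mentions]
X_train = [
    [0.1, 2, 0.05, 0],   # no event
    [-0.3, 1, 0.10, 0],  # no event
    [0.8, 25, 0.40, 3],  # changed jobs
    [0.6, 18, 0.35, 4],  # changed jobs
    [0.0, 3, 0.08, 1],   # no event
    [0.9, 30, 0.50, 5],  # changed jobs
]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = user changed jobs that month

model = LogisticRegression()
model.fit(X_train, y_train)

# Score a user who never announced anything explicitly
new_user = [[0.7, 22, 0.45, 2]]
print(model.predict_proba(new_user)[0][1])  # estimated probability of a job change
```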

Applied to news content, the same techniques could help refine what is known about readers. For instance, a website could detect someone's job change by matching their reading patterns against millions of other monthly site visits. On this basis, if Mrs. Laura Smith is spotted, with a 70% probability, as having been promoted to marketing manager in a San Diego-based biotech startup (five items), she can be served targeted advertising, especially if she also appears to be an active hiker (a sixth item). More importantly, over time, the website could slightly tailor itself: of course, Mrs. Smith will see more biotech stories in the business section than the average reader, but the Arts & Leisure section will select more content likely to fit her taste, and the Travel section will look more like an outdoor magazine than a guide for compulsive urbanites. Progressively, the content Mrs. Smith gets will become both more useful and more engaging.
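Here is a minimal sketch of the tailoring step, under the assumption that an interest profile has already been inferred: score each candidate article by the cosine similarity between the reader's interest vector and the article's topic vector, then rank. The topic dimensions, weights and article titles are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# Interest vector inferred from months of reading:
# dimensions = [biotech, marketing, hiking, urban travel]
reader_profile = [0.8, 0.6, 0.7, 0.1]

# Candidate articles, each tagged along the same topic dimensions
articles = {
    "Gene-therapy startup raises new funding": [0.9, 0.2, 0.0, 0.0],
    "Ten rooftop bars to visit this summer":   [0.0, 0.1, 0.0, 0.9],
    "The best trails around San Diego":        [0.0, 0.0, 0.9, 0.2],
}

# Rank the articles for this particular reader
ranked = sorted(articles.items(),
                key=lambda item: cosine(reader_profile, item[1]),
                reverse=True)
for title, topics in ranked:
    print(f"{cosine(reader_profile, topics):.2f}  {title}")
```

In practice, a site would blend such scores with editorial priorities and a dose of randomness, to preserve the serendipity discussed below.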

The economic consequences are obvious. Advertising -- or, better, advertorial content branded as such (users are sick of banners) -- will be sold at a much higher price by the web site, and more relevant content will induce Mrs. Smith to read more pages per month. (Ad-targeting companies already do this, but in such a crude and saturating way that it is now backfiring.) And since Mrs. Smith now makes more money, her growing interest in the web site could make her a good candidate for a premium subscription; she'll be served a tailor-made offer at the right time.

Unlike Facebook, which openly soaks up its users' intimacy under the pretext that they are willing to give up their privacy in exchange for a great service (a good deal for now, a terrible one in the future), news publishers will have to be more careful. First, readers will be served ads and content that they alone will see -- not their 435 Facebook "friends". This is a big difference, and one that requires a sophisticated level of customization. Also, when it comes to reading, preserving serendipity is essential. By this I mean that no one will enjoy a 100% tailor-made site; inevitably, it will feel a bit creepy and cause the reader to go elsewhere for refreshing material.

Even with this sketchy description, you get my point: by compiling and analyzing millions of behavioral data points, it is possible to make a news service far more attractive for the reader -- and much more profitable for the publisher.

How far-reaching is this? In the news sector, Big Data is still in its infancy. But as Moore's Law keeps working, making the required amounts of computing power more affordable, it will become more accessible to publishers. Twenty years ago, only the NSA could handle large data sets, with its stadium-size private data centers. Now, publishers can work with small companies that rent CPU time and storage from Amazon Web Services and use Hadoop, the open-source implementation of Google's MapReduce distributed-computing framework, to pore over millions of records. That's why Big Data is booming, and why it provides news companies with new opportunities to improve their business model.
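For a sense of how simple the programming model is, here is a minimal Hadoop Streaming sketch in Python that counts monthly page views per reader from raw visit logs. The log format ("timestamp user_id article_id") is assumed for illustration; Hadoop Streaming itself really does pipe text lines through a mapper and a sorted reducer like these.

```python
#!/usr/bin/env python
# mapper.py -- emits one count per visit, keyed by reader.
# Assumed log line format: "timestamp user_id article_id"
import sys

for line in sys.stdin:
    parts = line.split()
    if len(parts) == 3:
        _, user_id, _ = parts
        print(f"{user_id}\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- sums the counts. Hadoop sorts mapper output by key,
# so all lines for a given reader arrive consecutively.
import sys

current_user, count = None, 0
for line in sys.stdin:
    user_id, value = line.rstrip("\n").split("\t")
    if user_id != current_user:
        if current_user is not None:
            print(f"{current_user}\t{count}")
        current_user, count = user_id, 0
    count += int(value)
if current_user is not None:
    print(f"{current_user}\t{count}")
```

The same two scripts run unchanged on a laptop (cat logs.txt | python mapper.py | sort | python reducer.py) or across a rented cluster; that elasticity is precisely what puts this kind of analysis within publishers' reach.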

frederic.filloux@mondaynote.com
