About Frédéric Filloux


Posts by Frédéric Filloux:

Quartz: Interesting… and uncertain

 

Atlantic Media’s new digital venture, Quartz, is aimed at global business people. It innovates in many radical ways, but its business model remains dicey.

Two years ago, Atlantic Media’s president Justin Smith was interviewed by the New York Times. The piece focused on the digital strategy he successfully executed:

“We imagined ourselves as a Silicon Valley venture-backed startup whose mission was to attack and disrupt The Atlantic. In essence, we brainstormed the question: What would we do if the goal was to aggressively cannibalize ourselves?”

In most media companies, that kind of statement would have launched a volley of rotten tomatoes. Atlantic’s disruptive strategy has now produced an offspring: Quartz (URL: qz.com), launched a couple of weeks ago.

Quartz is a fairly light operation based in New York and headed by Kevin Delaney, a former managing editor at WSJ.com. Its staff of 25 was pulled together from great brands in business journalism: Bloomberg, The Wall Street Journal, The Economist and the New York Times. According to the site’s official introduction, this is a team with a record of reporting in 119 countries and speaking 19 languages. Not exactly the regular gang of digital serfs or unpaid contributors that most digital pure players are built on.

This professional maturity, along with the backing of the Atlantic Media Company, a 155-year-old organization, might explain the set of rather radical options that make Quartz so interesting.

Here are a few:

Priority on mobile use. Quartz is the first of its kind to deliberately reverse the old hierarchy of traditional web (for PC) first, mobile interfaces second. This has become a big digital publishing debate, as many of us strongly believe we should go mobile-first and design our services accordingly (I fall into that category).

Quartz’s founders mention market research showing that their main target — people on the road interested in the global economy — uses 4.21 mobile devices on average (I love those decimals…): one laptop, one iPad, and two (!) BlackBerrys. (Based on multiple observations, I’d rather say one BlackBerry and one iPhone.)

No native mobile app. Similarly, Quartz went for an open HTML5 design instead of apps. We went through this before in the Monday Note. Apps are mandatory for CPU intensive features such as heavy graphics, 3D rendering and games. For news, HTML5 — as messy as it is — does the job just fine. In addition, Quartz relies on “responsive design”, one that allows a web site to dynamically morph in response to the specific connected device (captures are not to scale):

Here is how it looks on a desktop screen:

… on an iPad in landscape mode:

… on an iPad in portrait mode:

… on a small tablet:

… on an iPhone:

… and on a small phone:

(I used Matt Kersley’s Responsive Design Test site to capture these Quartz renderings; it’s an excellent tool to see how your site will look on various devices.)

A river-like visual structure. Quartz is an endless flow of stories that automatically load one below the other as you scroll down. The layout is therefore pretty straightforward: no page-jumps, no complicated navigational tools, just a lateral column with the latest headlines and a main window where articles concatenate. Again, the priority given to mobile use dictates design purity.

A lightweight technical setup. Quartz does not rely on a complex Content Management System for its production but on WordPress. In doing so, it shows the level of sophistication reached by what started as a simple blog platform. Undoubtedly, the Quartz design team invested significant resources in finding the best WP developers, and the result speaks for itself (despite a few bugs, sure to be short-lived…).

Editorial choices. Instead of the traditional news “beats” (national, foreign, economy, science…), Quartz went boldly for what it calls “obsessions”. This triggered a heated debate among media pundits: among others, read C.W. Anderson’s piece What happens when news organizations move from “beats” to “obsessions”? on the Nieman Journalism Lab. Admittedly, the notion of “beats” sounds a bit old-fashioned. Those who have managed newsrooms know beats encourage fiefdoms, fence-building and bureaucracy… Editors love them because they’re much simpler to manage on a day-to-day basis: editorial meetings can be conducted on the basis of a rigid organizational chart, and it’s much easier to deal with a beat reporter or his/her desk chief than with some fuzzy “obsession” leader. At Quartz, the current “Obsessions” appear in a discreet toolbar. They include China Slowdown, The Next Crisis, Modern States, Digital, Money, Consumer Class, Startups, etc.

To me, this “obsessive” way of approaching news is far more modern than the traditional “beat” mode. First, it conveys the notion of adjustability to news cycles, as “obsessions” can — and should — vary. Second, it breeds creativity and transversal treatment among writers (most business publications are quite boring precisely because of their “silo culture”). Third, digital journalism is intrinsically prone to “obsession”, i.e. strong choices, angles, decisions. For sure, facts are sacred, but they are everywhere: when reporting on the latest alarming report from the World Bank, there is no need to repeat what lies just one click away — just sum up the main facts and link back to the original source. Still, this shouldn’t preclude balanced treatment, fairness and everything else in the basic ethics formulary. (Having said that, let’s be realistic: managing a news flow through “obsessions” is fine for an editorial staff of 20, certainly not so for hundreds of writers.)

Quartz’s business side. Quartz is a free publication: no paywall, no subscription, nothing. Very few ads either. Again, it opted for a decisive model by getting rid of the dumb banner. And it’s a good thing: traditional display advertising kills designs, and crappy targeting practices irritate readers while bringing in less and less money. (Most news sites are now down to single-digit CPMs [Cost Per Thousand page views], and it will get worse as ad exchanges keep gaining power, buying remnant inventory in bulk and reselling it for next to nothing.) Instead, Quartz started with four sponsors — Chevron, Boeing, Credit Suisse and Cadillac — all showing quality brand content. It’s obviously too early to assess this strategy. But Quartz’s business people opted for being extremely selective in their choice of sponsors (one car-maker, one bank, etc.), with rates negotiated accordingly.

Second, brands are displayed prominently, with embedded content instead of the usual formats. Quartz is obviously shooting for very high CPMs. At the very least, they are right to try. I recently met a European newspaper that extracts €60 to €100 CPMs by tailoring ads and making special placements for a small list of advertisers.

Again: such a strategy is fine for a relatively small operation; as it stands, Quartz should not burn more than $3-4M a year. Betting on high CPMs is way more difficult for large websites — but niches can be extremely profitable. (For more on Quartz’s economics, read Ken Doctor’s piece, also on Nieman.)
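To put these CPM figures in perspective, here is a back-of-the-envelope calculation. All of the traffic and rate numbers are illustrative assumptions of mine, not Quartz’s actual figures:

```python
# Back-of-the-envelope: monthly ad revenue at commodity vs. premium CPMs.
# Traffic and rates are illustrative assumptions, not Quartz's actual numbers.

def monthly_ad_revenue(pageviews, cpm):
    """CPM is dollars per 1,000 page views."""
    return pageviews / 1000 * cpm

pageviews = 10_000_000  # assumed monthly page views for a niche site

commodity = monthly_ad_revenue(pageviews, 5)   # single-digit CPM, remnant-level
premium = monthly_ad_revenue(pageviews, 80)    # tailored sponsor placements

print(f"Commodity CPM ($5): ${commodity:,.0f}/month")
print(f"Premium CPM ($80):  ${premium:,.0f}/month")
print(f"Covers a $4M/year burn at premium rates: {premium * 12 >= 4_000_000}")
```

At commodity rates, ten million monthly page views barely pay for a handful of journalists; at sponsor-level rates, the same audience covers the kind of burn rate mentioned above.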

To sum up, four elements will be key to Quartz’s success.

1. Quickly build a large audience. Selected advertisers are not philanthropists; they want eyeballs, too. Because of its editorial choices, Quartz will never attract HuffPo-like audiences. To put things in perspective, The Economist gets about 7M unique browsers a month (far fewer unique visitors) and has 632,000 readers on its app.

2. Quartz bets on foreign audiences (already 60% of the total). Fine. But doing so is extremely challenging. Take The Guardian: 60 million unique visitors per month — one third in the UK, another third in the US, the rest elsewhere — a formidable journalistic firepower, and a mere £40m in revenue (versus $160m in advertising alone for NYTimes.com, with half of The Guardian’s audience — that’s a 5-to-1 ratio per reader).

3. Practically, this means Quartz will have to deploy the most advanced techniques to qualify its audience: it will be doomed if it is unable to tell its advertisers (more than four, we hope) that it can identify a cluster of readers traveling to Dubai more than twice a year, or a high-income group living in London and primarily interested in luxury goods and services (see a previous Monday Note on extracting readers’ value through Big Data).

4. In the end, Quartz is likely to face a growth question: stay in a niche, or broaden its reach (and its content, and its staff) to satisfy the ad market. Once its audience levels off, it might have no choice but to find a way to make its readers pay. That should not be a problem, as Quartz focuses on a rather solvent segment.
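The per-reader comparison in point 2 checks out arithmetically. A quick sanity check, assuming a circa-2012 exchange rate of roughly $1.60 per pound (my assumption, not a figure from either publisher):

```python
# Sanity-checking the "5 to 1" per-reader revenue ratio from point 2.
# Assumption: roughly $1.60 per British pound (circa 2012).

GBP_TO_USD = 1.60

guardian_revenue_usd = 40_000_000 * GBP_TO_USD   # £40m in revenue
guardian_uniques = 60_000_000                    # 60M unique visitors/month

nyt_revenue_usd = 160_000_000                    # $160m, advertising alone
nyt_uniques = guardian_uniques / 2               # "half of The Guardian's audience"

guardian_per_reader = guardian_revenue_usd / guardian_uniques  # about $1.07
nyt_per_reader = nyt_revenue_usd / nyt_uniques                 # about $5.33

print(round(nyt_per_reader / guardian_per_reader, 1))
```

Roughly $5.33 per NYTimes.com reader versus about $1.07 per Guardian reader: the 5-to-1 ratio holds.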

frederic.filloux@mondaynote.com

Facebook’s Gen Y Nightmare

 

Generation Y will — paradoxically — pay a high price for giving up its privacy to Facebook.

Taos, New Mexico, fall 2012. At 18, Tina Porter has been on Facebook for four years. Duly briefed by her parents, a teacher and a therapist, she takes great care not to post content — remarks on her wall, photos, videos — that could expose her in an unwanted manner.

Still. Spending about 30 hours a month on the social network, she has become as transparent as a pane of glass. It will impact the cost of her health insurance, her ability to get a loan and her ability to find a job.

Denver, Colorado, spring 2018. Tina is now 24. She’s finishing her law degree at Colorado State University. She’s gone through a lot: experimenting with substances, being pulled over for speeding a couple of times, relying on pills to regain some sleep after being dumped by her boyfriend. While Tina had her share of downs, she also had her ups. Living in Denver, she never missed an opportunity to go hiking, mountain biking, or skiing — except when she had to spend 48 gruesome hours in the dark, alone with a severe migraine. But she remains fit, and she likes to record her sports performances on health sites — all connected to Facebook — and compare them with friends.

Seattle, winter 2020. In a meeting room overlooking the foggy Puget Sound, Alan Parsons, head of human resources at the law firm Wilson, McKenzie & Whitman, holds his monthly review of the next important hires. Parsons is with Marcus Chen, a senior associate at Narrative Data Inc.; both are poring over a selection of resumés. Narrative Data was created in 2015 by a group of MIT graduates. Still headquartered in Cambridge, Massachusetts, the startup now helps hundreds of corporations pick the right talent.

Narrative Data doesn’t track core competencies. The firm is more into character and personality analysis; it assesses the ability to sustain stress and to make the right decision under pressure. To achieve this, Narrative Data is staffed with linguists, mathematicians, statisticians, psychologists, sociologists and neuroscientists. What they basically do is data-mine the social internet: blogs, forums, Twitter and, of course, Facebook. Over the years, they’ve drawn a map of behaviors based on the language people use. Thanks to Narrative Data’s algorithms, anyone aged above 20 can have his or her life unfolded like a gigantic electronic papyrus scroll. HR people and recruiters love it. So do insurance companies and banks.

Of course, in 2015 no one will be dumb enough to write on his Facebook wall something like “Gee, bad week ahead, I’m heading to my third chemotherapy session”. But Narrative Data is able to pinpoint anyone’s health problems by weaving together language patterns. For instance, it pores over health forums where people talk, openly but anonymously, about their conditions. By analyzing millions of words, Narrative Data has mapped what it calls Health Clusters: data aggregates that reveal health conditions with remarkable accuracy. The Cambridge company is even working on a black program able to “de-anonymize” health forum members by cross-matching language patterns with Facebook pages. But the project raises too many privacy issues to be rolled out — yet.

Tina Porter’s resumé popped up thanks to LinkedIn Expert, the social network’s high-end professional service. LinkedIn, too, has developed its own technology to data-mine resumés for specific competencies. Tina’s research on trade disputes between Korea and the United States caught everyone’s interest at Wilson, McKenzie. That’s why her “3D Resumé” — a Narrative Data trademark — sits at the top of the pile displayed on a large screen in the meeting room.

Narrative’s Marcus Chen does the pitch:
“Tina Porter, 26. She’s what you need for the transpacific trade issues you just mentioned, Alan. Her dissertation speaks for itself, she even learned Korean…”
He pauses.
“But?…” asks the HR guy.
“She’s afflicted with acute migraine. It occurs at least a couple of times a month. She’s good at concealing it, but our data shows it could be a problem,” Chen says.
“How the hell do you know that?”
“Well, she falls into this particular Health Cluster. In her Facebook babbling, she sometimes refers to a spike in her olfactory sensitivity — a known precursor of a migraine attack. In addition, each time, for a period of several days, we see a slight drop in the number of words she uses in her posts: her vocabulary shrinks a bit, and her tweets, usually sharp, become less frequent and more nebulous. That’s an obvious pattern for people suffering from serious migraines. On top of that, the Zeo Sleep Manager website and the stress-management site HeartMath — both now connected with Facebook — suggest she suffers from insomnia. In other words, Alan, we think you can’t take Ms. Porter into the firm. Our Predictive Workforce Expenditure Model shows that she will cost you at least 15% more in lost productivity. Not to mention the patterns in her Facebook entries suggesting a 75% chance of her becoming pregnant in the next 18 months, again according to our models.”
“Not exactly a disease from what I know. But OK, let’s move on”.

I stop here. You might think I’m over the top with this little tale. But the (hopefully) fictitious Narrative Data Inc. could be the offspring of existing large consumer research firms combined with semantic and data-mining experts such as Recorded Future. This Gothenburg (Sweden)-based company — with a branch in… Cambridge, Mass. — provides real-time analysis of about 150,000 sources (news services, social networks, blogs, government web sites). The firm takes pride in its ability to predict a vast array of events (see this Wired story).

Regarding “de-anonymizing” the web: two years ago in Paris, I met a mathematician working on pattern-detection models. He focused on locating individuals simply through their cell phone habits. Even if a person buys a cell phone with a fake ID and uses it with great care, his or her real identity can be recovered in a matter of weeks, based on past behavior. (As for Facebook, it recently launched a snitching program aimed at getting rid of pseudonyms — cool.)
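For the technically curious, the language-pattern matching imagined above can be sketched in a deliberately naive way: comparing an anonymous post against known writing profiles using word-frequency cosine similarity. Every name and text below is invented, and real stylometric systems rely on far richer signals (function words, syntax, posting rhythms):

```python
# Toy "language pattern cross-matching": score an anonymous post against
# known author profiles with word-frequency cosine similarity.
# All names and texts here are invented for illustration.
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

profiles = {
    "user_a": Counter("love hiking and biking every weekend in the mountains".split()),
    "user_b": Counter("quarterly earnings beat estimates the stock rallied".split()),
}
anonymous_post = Counter("went biking in the mountains this weekend".split())

# Attribute the anonymous post to the most similar known profile
best = max(profiles, key=lambda u: cosine(profiles[u], anonymous_post))
print(best)  # → user_a
```

The toy version already attributes the post correctly; scale the same idea to millions of profiles and richer features, and the privacy implications of the scenario above become plausible.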

Expanding such capabilities is only a matter of refining algorithms, setting up the right data hoses and lining up the processing power required to deal with petabytes of unstructured data. Not an issue anymore. Moore’s Law is definitely on the Inquisitors’ side.

frederic.filloux@mondaynote.com

The value is in the reader’s Big Data

 

Why the right use of Big Data can change the economics of digital publishing. 

Digital publishing is vastly undervalued. Advertising has yet to fulfill its promises: it is nosediving on the web and it has failed on mobile (read JLG’s previous column Mobile Advertising: The $20 billion Opportunity Mirage). Readers come, and often go, as many digital publications are unable to retain them beyond a few dozen articles and about thirty minutes per month. Most big names in the digital news business are stuck with single-digit ARPUs. People do flock to digital, but cash doesn’t follow — at least not in the amounts required to sustain the production of quality information. Hence the crux of the situation: if publishers are unable to extract significantly more money per user than they do now, most of them will simply die. As a result, the bulk of the population — with the notable exception of the educated wealthy — will rely on high-audience web sites merely acting as echo chambers for shallow commodity news snippets.

The solution — the largest untapped value — resides right before publishers’ eyes: reader profiles and content, all matched against the “noise” of the internet.

Extracting such value is a Big Data problem. But before we go any further, what is Big Data? The simplest answer: data sets too large to be ingested and analyzed by conventional database management tools. At first, I was suspicious; this sounded like a marketing concept devised by large IT players struggling to rejuvenate their aging brands. I changed my mind when I met people with hands-on experience, from large corporations down to a 20-person startup. They work on tangible things: collecting data streams from fleets of cars or airplanes, processing them in real time and, in some cases, matching them against other contexts. Patterns emerge and, soon, manufacturers predict what is likely to break in a car, find ways to refine the maintenance cycle of a jet engine, or realize which software modification is needed to increase the braking performance of a luxury sedan.

Phone carriers and large retail chains have been using such techniques for quite a while and have adjusted their marketing as a result. Just for fun, read this New York Times Magazine piece depicting, among other things, the predictive pregnancy model developed by Target (a large US supermarket chain). Through powerful data mining, the aptly named Target corporation is able to pinpoint customers reaching their third month of pregnancy, a pivotal moment in their consuming habits. Or look at Google Flu Trends, which tracks flu outbreaks better than any government agency.

Now, let’s narrow the scope back to the subject of today’s column and see how these technologies could be used to extract more value from digital news.

The internet already provides the necessary tools to see who is visiting a web site, what he or she likes, etc. The idea is to know users with greater precision and to anticipate their needs.

Consider the analogy with Facebook. By carefully analyzing the “content” produced by its users — statements, photos, links, interactions among friends, “likes”, “pokes”, etc. — the social network has been able to develop spectacular predictive models. It can detect a change in someone’s status (single, married, engaged, etc.) even if the person never mentions it explicitly. Similarly, Facebook can predict with great accuracy the probability that two people exchanging casually on the network will become romantically involved. The same applies to changes in someone’s financial situation, or to health incidents. Without telling anyone, semantic analysis correlated across millions of similar behaviors will detect who is newly out of a job, depressed, bipolar, broke, high, elated, pregnant, or engaged. Unbeknownst to them, online behavior makes people completely transparent. For Facebook, this could translate into an unbearable level of intrusiveness, such as showing embarrassing ads or making silly recommendations — ones seen by everyone.

Applied to news content, the same techniques could help refine what is known about readers. For instance, a website could detect someone’s job change by matching his or her reading patterns against millions of other monthly site visits. Based on this, if Mrs. Laura Smith is spotted, with a 70% probability, to have been promoted to marketing manager in a San Diego-based biotech startup (five items), she can be served with targeted advertising — especially if she also appears to be an active hiker (sixth item). More importantly, over time, the website could slightly tailor itself: of course, Mrs. Smith will see more biotech stories in the business section than the average reader, but the Art & Leisure section will also select more content likely to fit her tastes, and the Travel section will look more like an outdoor magazine than a guide for compulsive urbanites. Progressively, the content Mrs. Smith gets will become both more useful and more engaging.
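A minimal sketch of the tailoring step just described: turning a reader’s observed section visits into serving proportions. The section names, counts and smoothing constant are all invented for illustration:

```python
# Toy content-tailoring: convert a reader's visit history per section into
# serving weights. All names and numbers below are invented for illustration.

def section_weights(history, smoothing=1.0):
    """Turn raw visit counts into proportions; smoothing keeps every
    section alive so serendipity is never fully squeezed out."""
    total = sum(history.values()) + smoothing * len(history)
    return {s: (n + smoothing) / total for s, n in history.items()}

# Hypothetical reading history for "Mrs. Smith"
laura = {"biotech": 42, "markets": 10, "travel": 18, "arts": 5}
weights = section_weights(laura)

for section, share in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{section:8s} {share:.0%}")
```

The smoothing term is the crude stand-in for the serendipity concern raised further down: even sections the reader rarely visits keep a nonzero share.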

The economic consequences are obvious. Advertising — or, better, advertorial content branded as such (users are sick of banners) — will be sold at a much higher price by the web site, and more relevant content will induce Mrs. Smith to read more pages per month. (Ad-targeting companies are doing this already, but in such a crude and saturating way that it is now backfiring.) And since Mrs. Smith makes more money, her growing interest in the web site could make her a good candidate to become a premium subscriber; she’ll then be served with a tailor-made offer at the right time.

Unlike Facebook, which openly soaks up its users’ intimacy under the pretext that they are willing to give up their privacy in exchange for a great service (a good deal for now, a terrible one in the future), news publishers will have to be more careful. First, readers will be served with ads and content that they alone will see — not their 435 Facebook “friends”. This is a big difference, one that requires a sophisticated level of customization. Also, when it comes to reading, preserving serendipity is essential. By this I mean that no one will enjoy a 100% tailor-made site; inevitably, it will feel a bit creepy and cause the reader to go elsewhere to find refreshing stuff.

Even with this sketchy description, you get my point: by compiling and analyzing millions of behavioral data, it is possible to make a news service way more attractive for the reader — and much more profitable for the publisher.

How far-reaching is this? In the news sector, Big Data is still in its infancy. But as Moore’s Law keeps working, making large amounts of computing power ever more affordable, it will become more accessible to publishers. Twenty years ago, only the NSA was able to handle large data sets, with its stadium-size private data centers. Now publishers can work with small companies that outsource CPU time and storage to Amazon Web Services and use Hadoop, the open source counterpart of Google’s distributed data-processing framework, to pore over millions of records. That’s why Big Data is booming and provides news companies with new opportunities to improve their business model.
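Hadoop’s programming model can be shown in miniature. The sketch below is a pure-Python map / shuffle / reduce word count, the canonical MapReduce example; Hadoop runs these same three phases distributed across many machines:

```python
# MapReduce in miniature: the canonical word count, in pure Python.
# Hadoop distributes these same three phases across a cluster.
from collections import defaultdict

records = ["big data is booming", "data is everywhere", "big big data"]

# Map phase: emit a (word, 1) pair for every word in every record
mapped = [(word, 1) for line in records for word in line.split()]

# Shuffle phase: group all values by key
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: combine each group's values into a single result
counts = {word: sum(values) for word, values in groups.items()}
print(counts["big"], counts["data"])  # → 3 3
```

Replace the three toy strings with millions of reader-behavior records and the shape of the computation stays exactly the same; that is what makes the model attractive for publishers renting cluster time.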

frederic.filloux@mondaynote.com

Google’s Amazing “Surveywall”

 

How Google could reshape online market research and also reinvent micro-payments. 

Eighteen months ago — under non-disclosure — Google showed publishers a new transaction system for inexpensive products such as newspaper articles. It works like this: to gain access to a web site, the user is asked to participate in a short consumer research session — a single question, a set of images leading to a quick choice. Here are examples Google recently made public when launching its Google Consumer Surveys:

Fast, simple and efficient. As long as the question is concise and sharp, it can be about anything: pure market research for a packaging or product feature, surveying a specific behavior, evaluating a service, an intention, an expectation, you name it.

This caused me to wonder how such a research system could impact digital publishing and how it could benefit web sites.

We’ll start with the big winner: Google, obviously. The giant wins on every side. First, Google’s size and capillarity put it in a unique position to probe millions of people in a short period of time. Indeed, the more marketeers rely on its system, the more Google gains in reliability, accuracy and granularity (i.e., the ability to probe a segment of blue-collar pet owners in Michigan or urbanite coffee-drinkers in London). The bigger it gets, the better it performs. In the process, Google disrupts the market research sector with its customary deflationary hammer. By playing on volume, automation (no more phone banks) and algorithms (as opposed to panels), the search engine is able to drastically cut prices — by 90% compared to traditional surveys, says Google. Expect $150 for 1,500 responses drawn from the general US internet population; targeting a specific group can cost five times as much.
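In per-response terms, the pricing cited above works out as follows (the five-times multiplier for targeted segments is the upper bound mentioned in the text):

```python
# Per-response cost of Google Consumer Surveys, per the figures above:
# $150 buys 1,500 responses from the general US internet population;
# targeting a specific segment can cost up to five times as much.

base_cost, base_responses = 150, 1500
per_response = base_cost / base_responses      # general population
targeted_per_response = per_response * 5       # upper bound for targeted groups

print(f"General population: ${per_response:.2f} per response")
print(f"Targeted segment:   up to ${targeted_per_response:.2f} per response")
```

Ten cents per answer is the deflationary hammer in a single number: no phone-bank survey comes anywhere close.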

Second upside for Google: it gets a bird’s-eye view of every possible subject of consumer research. Aggregated, anonymized, recompiled, sliced every possible way, these multiple datasets further deepen Google’s knowledge of consumers — which is nice for a company that sells advertising. By the way, Google gets paid for research it then aggregates into its own data vault. Each answer collected contributes a smallish amount of revenue; it will be a long while, if ever, before such activity shows up in Google’s quarterly results. But the value is not there: it resides in the data the company gets to accumulate.

The marketeers’ food chain should be happy. With the notable exception of those who make a living selling surveys, every company, business unit or department in charge of a product line or a set of services will be able to run a poll quickly, efficiently and cheaply. Of course, legacy pollsters will argue that Google Consumer Surveys are crude and inaccurate. They will be right — for now. Over time the system will refine itself, and Google will have put a big lock on yet another market.

What’s in Google’s Consumer Surveys for publishers whose sites will host a surveywall? In theory, the mechanism finally solves the old quest for tiny, friction-free transactions: replace the paid-for zone with a survey zone through which access is granted after answering a quick question. Needless to say, it can’t be recommended for all sites. We can’t reasonably expect a general news site, not to mention a business news one, to adopt such a scheme: it would immediately irritate users and somehow taint the content.

But a young audience should be more inclined to accept such a surveywall. Younger surfers will always resist any form of payment for digital information, regardless of quality, usefulness, relevance. Free is the norm. Or its illusion. Young people have already demonstrated their willingness to give up their privacy in exchange for free services such as Facebook — they have yet to realize they paid the hard price, but that’s another subject.
On the contrary, a surveywall would be at least more straightforward, more honest: users give a split second of their time by clicking on an image or checking a box to access the service (whether it is an article, a video or a specific zone). The system could even be experienced as fun, as long as the question is cleverly put.
Economically, having a survey pop up from time to time — for instance, when the user reconnects to a site — makes sense. Viewed from a spreadsheet (I ran simulations with specific sites and varying parameters), it could yield more money than the cheap ads currently in use. This, of course, assumes broad deployment by Google, with thousands of market research sessions running at the same time.
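Here is a crude, one-screen version of that spreadsheet simulation. Every parameter (traffic, CPM, survey take rate, revenue per answer) is an assumption of mine, not a figure from Google or any publisher:

```python
# Surveywall vs. cheap display ads: a crude monthly revenue comparison.
# Every parameter below is an assumption for illustration only.

monthly_visits = 2_000_000
pages_per_visit = 4
ad_cpm = 4.0                # $ per 1,000 page views, remnant-level pricing
survey_rate = 0.5           # share of visits that trigger and complete a survey
revenue_per_answer = 0.05   # assumed publisher's cut per completed answer

ad_revenue = monthly_visits * pages_per_visit / 1000 * ad_cpm
survey_revenue = monthly_visits * survey_rate * revenue_per_answer

print(f"Display ads: ${ad_revenue:,.0f}/month")
print(f"Surveywall:  ${survey_revenue:,.0f}/month")
```

With these (debatable) inputs, one answered question per two visits already beats a full load of remnant-priced banners; the sensitivity to the take rate and the per-answer cut is what a real simulation would explore.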

A question crosses my mind: how come Facebook didn’t invent the surveywall?

–frederic.filloux@mondaynote.com

 

 

Why newspapers must raise their price

For quite a while, I’ve been advocating a newspaper price hike. My take: the news market is undergoing an irreversible split. On one side, digital distribution (on the web, mobile and tablets) will thrive through higher volumes and deeper penetration; revenue is not easy to squeeze out of digital subscribers and advertisers but, as some consolation, serving one or ten million customers costs about the same.

On the other side, print is built on a different equation: gaining audience is costly; every additional reader comes with tangible industrial costs (printing, shipping, home delivery). Having said that, each print reader carries a much better ARPU than his or her online counterpart (bet on a 5 to 15 times higher yield, depending on the product). And, for a long time, a significant share of the audience will favor the print version almost regardless of price. Those are super-loyal and super-solvent readers.

Last week, my little solo tune about price hikes received independent support from people much better equipped to define prices and value. Global marketing consultants Simon-Kucher & Partners released conclusions from an in-depth study of newspaper price evolution and its impact on circulation (PDF summary here). The headline: “Calling all newspapers: A premium model is your best hope”, which the authors, Andre Weber and Kyle Poyar, sum up thusly:

Newspapers are in an unenviable, but not uncommon position: raising print prices may shrink their already anemic readership base, but may also be their best hope for staying afloat.

Headquartered in Germany, with 23 branches across the world, Simon-Kucher specializes in marketing, sales and pricing strategies. They rely on thorough analysis and models to help their clients value a wide range of products and services. For this study, they surveyed the 25 largest US newspapers (ABC’s listing here). Before that, they’d worked on quality newspapers in the UK. Their findings:

– When technological disruption causes an irrevocable market decline, “it’s almost prudent to raise prices”. To support this claim, SKP mentions AOL which, at a critical point in its existence, raised its rates and generated large amounts of cash; this helped the online service finance major shifts in its business. By contrast, Kodak continuously lowered the price of its film products, found itself unable to invest in a digital turnaround and finally went bankrupt.

– There is no elasticity in newspaper prices. In other words, a significant price hike won’t necessarily translate into a material drop in circulation. But the extra money raised in the process will provide welcome help for investments in digital technologies.

– Raising prices discourages price wars. Many sectors are engaged in a downward spiral that doesn’t always translate into higher volume, but guarantees weaker revenues.

They conclude:

The print business is not your legacy, it’s your bank.

For publishing companies with struggling print divisions, SKP’s motto might appear a bit overstated, but it still contains valuable truths.

Let’s come back to the price elasticity issue. It’s an endless debate within publishing houses. The fact is, there is virtually none. For the US market, here are the effects of specific price hikes on circulation revenues:

 

…In an earlier UK market study, SKP looked at the consequences of price increases between 2007 and 2010 for these quality papers:

                Price        Variation in     Variation in 
                Increase     Circ. volume     Circ. revenue
 The Times        +54%         -24%             +16.7%
 The Guardian     +43%         -19%             +15.8%
 The Independent  +43%         -21%             +13%
 The Telegraph    +43%         -25%             +7%
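The table’s arithmetic is worth making explicit: if the price rises by p and circulation volume falls by v, circulation revenue changes by (1 + p)(1 − v) − 1. The small discrepancies with the published figures (e.g. +17.0% computed vs. +16.7% reported for The Times) come from rounding in the reported percentages:

```python
# Circulation revenue change when price rises by p and volume falls by v:
# revenue multiplier = (1 + p) * (1 - v), expressed here as a net change.

def revenue_change(price_up, volume_down):
    return (1 + price_up) * (1 - volume_down) - 1

# (price increase, circulation volume drop) from the UK table above
uk_papers = {
    "The Times":       (0.54, 0.24),
    "The Guardian":    (0.43, 0.19),
    "The Independent": (0.43, 0.21),
    "The Telegraph":   (0.43, 0.25),
}
for name, (p, v) in uk_papers.items():
    print(f"{name:16s} {revenue_change(p, v):+.1%}")
```

The exercise also shows the break-even point: a 43% price hike absorbs a volume loss of up to 30% before circulation revenue turns negative.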

When I spoke with Andre Weber and Kyle Poyar, the authors of the study, they were reluctant to evaluate which part of the circulation drop was attributable to the natural erosion of print and which part was linked to the price hikes. They were also careful not to venture into the consequences of the circulation drop on advertising (as ad rates are tied to circulation).

However, they didn’t dispute that the bulk of the drop in circulation was linked to the erosion of print caused by the shift to digital. If there is any remaining doubt, watch this chart compiled by the Pew Research Center:

With the left scale showing the percentage drop (!), the plunge is obvious, even though a change in the counting system by the Audit Bureau of Circulations embellishes the situation a bit.

The price equation for print newspapers can be summed up as follows:

#1 Price hikes –both for street price and subscriptions– only marginally impact circulation already devastated by the conversion to digital.

#2 Additional revenue coming from price hikes far outpaces the loss in circulation (which will occur anyway). Ten or twenty years ago, US newspapers drew most of their revenue (70%, sometimes 80%) from advertising. Now the revenue structure is more balanced. The NY Times, for instance, is evolving toward an evenly split revenue structure, as shown in its Q2 2012 financial statement:

#3 There is room for further price increases. When asked about the threshold that could trigger a serious loss in readership, Andre Weber and Kyle Poyar opined that the least loyal customers are already gone, and that we have not yet reached the critical level that will discourage the remaining base of loyal readers.

#4 Advertising is indeed an issue but, again, its decline will occur regardless of circulation strategies. The main reason (other than difficult economic conditions): the gap between the time audiences spend on print and the advertising money still allocated to it, an imbalance that will inevitably correct itself at print's expense.

(source: Mary Meeker’s State of the Internet, KPCB)

#5 High prices on print versions will help maintain decent prices for digital paid-for content: subscriptions, paywalls, etc. As Weber and Poyar point out, for a publisher, the quality of print and digital products must remain connected; the two must work together (even though digital subscription prices will always be substantially lower than print ones).

#6 When it comes to pricing strategies, quality rules the game. Simon-Kucher’s conclusions apply to high-end products. The New York Times, The Guardian, or The Sydney Morning Herald won’t have problems raising their prices by substantial amounts. But for tabloids or low-end regional papers filled with cheap content and newswire fodder, it’ll be another story.

#7 Pricing issues can’t be insulated from distribution. In many countries, publishers of national dailies should consider refocusing their distribution maps on major cities only. The move would save shipping costs without much of an impact on the advertising side, as the solvent readership — the one dearly loved by advertisers — is mostly urban.

–frederic.filloux@mondaynote.com

Why Murdoch’s The Daily Doesn’t Fly

Is there a future for The Daily? According to last week’s reports by The New York Observer and The New York Times, News Corp’s “tablet newspaper” is on probation: Rupert Murdoch might pull the plug on The Daily, which loses $30 million a year. But in an open email to the publication’s staff, Jesse Angelo, its editor-in-chief, was quick to deny such rumors.

Eighteen months ago, The Daily was held up as embodying the news media’s future. It was the first publication designed for the iPad, it bore the blessing of Steve Jobs himself (quite notable for someone who usually loathed the news sector), and it had the backing of the deep-pocketed News Corporation conglomerate. Its success would be measured over time (five years), supported by a considerable amount of funding. It had everything it needed to succeed.

Fact is, The Daily never took off. Six months after its high-wattage launch, it claimed only 80,000 paying subscribers. Today, Jesse Angelo mentions a mere 100,000 subs. That is both far from the 500,000 necessary to break even and totally out of step with the growth of the iPad (and iPhone, and Android) installed base.

Something’s wrong with The Daily’s concept.

I subscribed. Twice, actually. At 99 cents a week ($39.99 a year), it was supposed to be a painless addition to my vast set of digital subscriptions. Strangely, it never succeeded in becoming part of my reading habits.

For The Daily, this might be its first problem: it is everything and nothing special at the same time. It’s not a tabloid, but it doesn’t carry in-depth, enterprise journalism either. It’s a sophisticated container for commodity news — i.e. the news you can get everywhere, in real time and for free. If I crave celebrity fodder, I go to TMZ or the Huffington Post. If I want business news, I’ll find everything on CNN Money or Business Insider, all very efficiently and appealingly edited. No need to go through the tedious download of a 100-plus-page issue.

The Daily’s inherent competition with the web (and mobile) was completely underestimated. Real time is now mandatory, and so is the ability to generate conversations. For The Daily, a comparison of last weekend’s news cycle is cruel. Its coverage of the Mitt Romney tax return controversy triggered 179 comments on The Daily vs. 28,464 comments on the Huffington Post. (Note the HuffPo built it on a 150-word Associated Press story and a one-minute video segment from CNN — that’s the digital version of the multiplication of the loaves…)

The Daily is an old concept in a high-tech package. Some draw a parallel with USA Today, launched in 1982. Two things made that paper a success: its positioning as the first truly national newspaper in the United States, and its innovative format and layout; USA Today was designed for quick reads and explanatory journalism enhanced by graphics. That uniqueness was key to installing the paper in America’s news landscape.

By contrast, The Daily does not enjoy the specificity of a “visually attractive convenience”. Its sophistication and cognitive profusion lead to an excess of complexity that ends up flattening its content. The Daily has no rough edges, nothing to hold the reader’s attention in a recurring manner. A medium is driven by its affinity with its audience — an intellectual, cultural or political affinity, or a combination of all three. The Daily lacks such engines. Even its Murdoch-induced anti-Obama stance fails to keep readers coming back.

Another key question for The Daily’s future pertains to its business model. On average, according to the Newspaper Association of America, 70% of the revenue of US dailies comes from advertising, and 14% of that ad revenue comes from digital. By that measure, The Daily should have been loaded with ads. There are almost none, including in the new “WKND” edition. This is worrisome: many papers draw a third or even half of their revenue from their weekend editions. The per-copy price won’t help either. At 99 cents a week, it’s pocket change; after Apple’s 30% cut, this leaves less than 10 cents per issue. Even with a streamlined newsroom of 150, it can’t fly. (For more about The Daily’s economics, read Ken Doctor’s January 2011 Newsonomics article, or last week’s piece by Staci Kramer in PaidContent; both explain the issue pretty well.)
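The per-copy arithmetic above can be verified in a few lines (assuming a seven-issue week, since The Daily published daily):

```python
# Back-of-the-envelope check of The Daily's per-copy economics:
# 99 cents a week, minus Apple's 30% cut, spread over 7 daily issues.
weekly_price = 0.99          # dollars per subscriber per week
apple_cut = 0.30             # Apple's share of App Store subscriptions
issues_per_week = 7

net_weekly = weekly_price * (1 - apple_cut)     # what News Corp keeps per week
net_per_issue = net_weekly / issues_per_week    # per-issue revenue

print(f"News Corp keeps ${net_weekly:.3f} per week, "
      f"${net_per_issue:.3f} per issue")        # just under 10 cents per issue
```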

The Daily also illustrates the difficulty of building a digital media brand. Many tried, few succeeded. Slate and Salon, excellent as they are journalistically speaking, never took off audience-wise. The Huffington Post made it through a unique combination of unscrupulous “aggrelooting” of content from a variety of willing and unwilling sources, legions of unpaid bloggers, Arianna’s media footprint, and unparalleled audience-building techniques (see the previous Monday Note: Transfer of Value). A combination that proved very complicated to reproduce — even for someone as resourceful as Rupert Murdoch.

The Australian-born media mogul thought he could launch a new breed of news product from scratch. But in his quest for bold digital efficiency, he failed to see that a news product with no history, no breadth, no soul, no character could only face an uncertain future.

frederic.filloux@mondaynote.com

Transfer of Value

This is a story of pride vs. geekiness: traditional newspapers that move online are about to lose the war against pure players and aggregators. Armed with the conviction that their intellectual superiority makes them immune to digital modernity, newspapers have neglected today’s internet driving forces: relying on technology to build audiences, and the ability to coalesce a community around any range of subjects — even the most mundane ones.

When I discuss this with seasoned newsroom people on both sides of the Atlantic, most still firmly believe the quality of their work guarantees their survival against a techno-centric approach to digital contents.

I’m afraid they are wrong. Lethally so.

We are facing a culture shock. On one side, legacy media: great franchises that grew on strong values such as “pristine” journalism, independence, storytelling, fact-checking, solid editing, respect for copyright… Along the way, they made their share of mistakes but, overall, the record is admirable. After all, at the height of the Fourth Estate’s power, the population was better informed than today’s Facebook cherry-pickers. Now this (aging) fraternity faces a new generation of media people who build their fiefdoms on a completely different set of values. For instance, the notion of copyright has become exceedingly elastic. A few months ago, Flipboard began aggregating content from French news organizations, taking large excerpts — roughly capturing the essence of a story — along with a token link back to the original content. Publishers sent polite letters saying, in substance: ‘Guys, although we are fond of your iOS applications, you can’t simply pick up our stuff without permission; we need to talk first…’

Publishers’ attitude toward aggregators has always been ambiguous. Google is the perfect example: on one hand, publishers complain about the search giant’s power; at the same time, they spend huge sums of money optimizing their sites and purchasing relevant keywords, all to make the best use of the very power they criticize. In Belgium, publishers challenged Google in court over the Google News product before realizing how much they depended on it, and begging for reintegration into the Google traffic cauldron.

Another example of the culture shock: reliance on technology. It’s a religion for the newcomers but merely a support function for traditional editors. Unfortunately, evidence shows how wrong it is to snub the traffic-building arsenal. Here are a few examples.

On July 5th, The Wall Street Journal ran an editorial piece about Mitt Romney’s position on Obamacare.

The rather dull and generic “Romney’s Tax Confusion” headline on this 1,000-word article attracted a respectable 938 comments.

But look at what the Huffington Post did: a 500-word treatment consisting of a 300-word article plus a 200-word excerpt of the WSJ opinion and a link back (largely useless). Unlike the Journal, though, the HuffPo ran a much sexier headline:

A choice of words that takes into account all Search Engine Optimization (SEO) prerequisites, using high-yield words such as “Squandering” and “Snafu” in conjunction with much-sought-after topics such as “Romney” and “Health Care”. Altogether, this guarantees a nice blip on Google’s radar — and a considerable audience: 7,000+ comments (7x more than the original), 600 Facebook shares, etc.

HuffPo’s editors take no chances: the headlines they pick are algorithm-designed to yield the best results in Google. The aggregator invested a lot in SEO tools: I was told that every headline is matched in real time against Google’s most-searched items right before being posted. If the editor’s choice scores low in SEO terms, the system suggests better ones. In some instances the HuffPo will A/B test headlines: it serves different versions of a page to a couple of random groups and, after five minutes, the better-performing headline is selected. Here is an explanation from Whitney Snyder, HuffPost’s senior news editor, found on Quora:

The A/B testing was custom built. We do not, however, A/B test every headline. We often use it to see if our readers are familiar with a person’s name (i.e. John Barrasso vs GOP Senator), or to play up two different aspects of a story and see which one interests readers more. We also A/B test different images.
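The mechanism Snyder describes can be sketched in a few lines. This is a minimal simulation, not HuffPost’s actual system: each visitor is served one of two headline variants at random for a trial window, then the variant with the higher observed click-through rate wins. The headlines and their “true” click probabilities below are invented for illustration.

```python
import random

# Minimal A/B headline test sketch: split traffic randomly across variants,
# count views and clicks, then keep the best observed click-through rate.
def ab_test_headline(variants, trial_visitors=1000):
    """variants: list of (headline, true_ctr) pairs; returns the winner."""
    clicks = {headline: 0 for headline, _ in variants}
    views = {headline: 0 for headline, _ in variants}
    for _ in range(trial_visitors):
        headline, true_ctr = random.choice(variants)  # random, roughly even split
        views[headline] += 1
        if random.random() < true_ctr:                # simulated visitor click
            clicks[headline] += 1
    # pick the variant with the best observed click-through rate
    return max(clicks, key=lambda h: clicks[h] / max(views[h], 1))

variants = [
    ("Romney's Tax Confusion", 0.02),               # dull, generic (invented CTR)
    ("Romney Squandering Health Care Edge", 0.08),  # SEO-friendly (invented CTR)
]
print(ab_test_headline(variants))
```

A real system would measure live traffic rather than simulated visitors and would also need a significance check before declaring a winner after only five minutes.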

Other examples prove the effectiveness of HuffPo’s approach. Here is a media story about a TV host whose position is in jeopardy; the Daily News version is a 500-word article that looks like this:

The Huffington Post summed it up in a 175-word item, but introduced it with a much more potent headline built on strong, Google-friendly phrases:

Results speak for themselves:

Daily News original version: 2 comments, 1 tweet, 1 Facebook share
HuffingtonPost version: 4,601 comments, 79 tweets, 155 shares

Like no one else, the HuffPo masters eye-grabbing headlines such as these:
Watch Out Swimmers: Testicle-Eating Fish Species Caught in US Lake (4,000 Facebook recommendations), or: Akron Restaurant Owner Dies After Serving Breakfast To Obama (3,300 comments), or yesterday’s home page: LEPAGE LOSES IT: IRS ‘THE NEW GESTAPO’, displayed in an 80-point font; this adaptation from Maine’s daily Press Herald generated about 6,000 comments on the aggregator.

The point is not to criticize the Huffington Post for being extremely efficient at optimizing its work. They invested a lot and they trained their people well. Of course, the bulk of HuffPo’s content comes from: a) unpaid bloggers — 9,884 new ones last year alone, according to Arianna’s count; b) content borrowed from other media and re-engineered by 170 “journalists”, a term that encompasses various kinds of news producers plus a bunch of true writers and editors; c) a small percentage of original reporting. Each day, all this adds up to “over 1,000 stories published” that translate into 1.4 million Facebook referrals and 250,000 comments. Staggering numbers indeed. With some downsides, too: 16,000 comments (!) on a 200-word article about Barack Obama asking to turn off Fox News during a campaign tour is not likely to attract enviable demographics, advertising-wise. The HuffPo might make a billion page views per month, but most of them yield only dimes.

The essence of what we’re seeing here is a transfer of value. Original stories get very little traffic due to the poor marketing tactics of old-fashioned publishers. But once they are swallowed by the HuffPo’s clever traffic-generation machine, the same journalistic items perform tens or hundreds of times better traffic-wise. Who is right? Who can look forward to the better future in the digital world? The virtuous author carving language-smart headlines, or the aggregator generating eye-gobbling phrases thanks to high-tech tools? Your guess. Maybe it’s time to wake up.

frederic.filloux@mondaynote.com

Lessons from ProPublica

Paul Steiger is one of the men I admire most in my profession. Five years ago, at the age of 65 and after a 16-year tenure as the Wall Street Journal’s managing editor, he seized the opportunity to create a new form of investigative journalism. Steiger created ProPublica, a non-profit newsroom dedicated to the public interest and to deep-dive reporting. He hired a bunch of young staffers (coached by seasoned editors and reporters) who could help him lift data journalism and computer-assisted reporting to the highest level. Thanks to wisely managed partnerships, he gave ProPublica a wide audience, and the quality and breadth of his newsroom’s reporting landed it scores of awards, including two Pulitzer Prizes. ProPublica was the first online news organization to receive such a seal of approval.

All this in five years, with now 33 journalists. Kudos.

Last Wednesday, at the end of a quick hop to New York, I paid Paul Steiger a visit. His corner office sits on the 23rd floor of a Broadway building, overlooking Wall Street’s canyons and Manhattan’s southern tip. At 70, Steiger has a twinkle in his eye that you don’t often find in reporters half his age, especially when he speaks about ProPublica’s most shining journalistic moments.

In late 2006, the Sandler Foundation approached Steiger with a wish to allocate a fraction of its immense wealth to the funding of investigative reporting. The newsman made four recommendations:

– The first one was to rely on a permanent staff as opposed to hired guns. “To do the kind of journalism we wanted to do, you must have people comfortable enough to stay on the story as long as needed. You also must accept dry holes. Freelancers will starve in such conditions!”

– Two, for the biggest stories, he wanted to partner with one or two large news organizations that could be granted some exclusivity over a short period of time in exchange for good visibility.

– Three, in order to guarantee the widest reach, Paul Steiger wanted to distribute the material freely on the web.

– Four, he would be solely responsible for content; funders and other contributors would not be involved in selecting stories. (Actually, at ProPublica’s first board meeting, none of the financial backers knew what the newsroom was working on.)

The partnership model proved to be a great idea and expanded much further than anticipated. It relied quite a lot on the personal connections of Paul Steiger and Stephen Engelberg (ProPublica’s editor-in-chief). Quite often, Steiger explained, once a story neared completion, he’d place a call directly to a key editor at a major publication. “Sometimes, I could feel the excitement over the phone,” he laughs. He had to be very careful not to say too much before hammering out the deal. I asked him how he handles confidential sources: “Well, we do not mind giving sources’ names to editors and lawyers, but less so to beat reporters… You know, reporters are human, and they might be tempted to call the sources themselves…”

Cooperation with other media turned out to breed an unexpected advantage: transforming good national stories into local ones. The best example is the Dollars for Docs project. In a nutshell: a sizable portion of pharmaceutical firms operating in the United States are now required to reveal all direct contributions to doctors. (It’ll be 100% next year.) Needless to say, they complied reluctantly, providing a sloppy, almost useless database. As a result, the two reporters assigned to the subject were at a loss when it came to retrieving relevant data. Then a young ProPublica in-house data specialist joined the team and solved the puzzle in a few weeks. The story was published by ProPublica’s five partners: The Chicago Tribune, The Boston Globe, PBS, NPR and Consumer Reports. Why Consumer Reports? “Because they had polling capabilities,” Steiger said. “Pharmaceutical companies were saying patients didn’t mind if doctors got fees from them; we proved patients actually care…” After a few days’ exclusivity window for the key partners, the database was released on the web on October 19, 2010. In an easily searchable form, it showed the status of 17,000 doctors receiving a total of $750 million. A small stimulus to keep the flow of prescriptions smooth and steady — and to contribute to the indecent cost of healthcare in America.

Then the local mechanics kicked in. In the months that followed, no fewer than 125 local outlets picked up the story, extracting relevant local information from the database and adding context. That’s one of the most interesting aspects of ProPublica’s work: its ability to make a national interest story percolate down to local news organizations which, in turn, give the story more depth by connecting it to their communities. (ProPublica now has 78 partners.)

I asked Paul Steiger whether he believes this model could be translated into a classic business. After all, why not gather half a dozen non-competing news outlets, happy to split the cost of large journalistic projects — each running from $100,000 to $200,000 to produce — in addition to paying a small fee for local versions? Paul Steiger says it cannot be made to work. To sum it up, by asking a newspaper or a TV network to pay, ProPublica would compete directly with its clients’ internal economics. Inevitably, someone would say: hey, last year we paid x thousand dollars in fees for ProPublica’s stories; that’s the equivalent of y staffers. Not to mention the state of the news industry, with, in fact, very few companies willing (and able) to pay extra editorial costs. The consequence would be a downward spiral: deprived of the vast audience it now enjoys, the organization would have a hard time attracting clients for its content, and would struggle to attract donations as well. Fact is, such syndication doesn’t work: California Watch, which operates on the same beat as ProPublica, burns more than $2 million a year but collects less than… $30,000 in syndication fees.

That’s why ProPublica plans to stick to its original structure. Next year, Paul Steiger will step down as ProPublica’s editor-in-chief and chief executive to become executive chairman, a position in which he will spend most of his time raising money for the organization. As it stands today, ProPublica is on a sound path. The first two years of operation were solely funded by the Sandler family, at about $10 million a year. This year, their contribution will be down to $4 million, with $6 million coming from other sources; in 2013, the breakdown will be $3 million and $7 million. Not only did ProPublica put itself at the forefront of public-interest, high-quality, digitally boosted modern journalism, it also created a sustainable way to support it.

frederic.filloux@mondaynote.com

The Insidious Power of Brand Content

Dassault Systemes is one of the French industry’s greatest successes. Every day, unbeknownst to most of us, we use products designed with DS software: cars, gadgets, buildings and even clothes. This €2bn company provides all the necessary tools for what has become known as Product Lifecycle Management: from the initial design, to the software that runs the manufacturing process, to distribution logistics and, at the end of a product’s life, its disposal.

Hence a simple question: what should such a company communicate about? The performance of its latest CAD release? Its simulation capabilities?

No. Dassault Systemes opted to communicate through a science-fiction-flavored iceberg project. The pitch: a French engineer — the old-fashioned type, a dreamer who barely speaks English — envisions capturing an iceberg from a Greenland glacier and tugging it down to the thirsty Canary Islands. The DS mission (should it choose to really accept it): devise all the relevant techniques for the job, minimize melting, maximize fuel efficiency. The result is a remarkable and quite entertaining documentary, a 56-minute high-tech festival of solutions to this daunting task’s numerous challenges. I watched it in HD on my iPad, in exchange for my email address (the one I dedicate to marketers). It’s a huge, multimillion-dollar video production, with scores of helicopter shots, superb views of Greenland and, of course, spectacular 3D imaging, the core DS business. The budget is so high and the project so ambitious that the documentary was co-produced by several large European TV channels, such as France Televisions and Germany’s ZDF. Quite frankly, it meets the standards of public television — for such a genre.

But this is neither journalism nor National Geographic film-making. It’s a Brand Content operation.

In advertising, Brand Content is the new black. You can’t bump into an ad exec without hearing about it. It’s the new grail, the replacement for the other formats that failed and the latest hope for an ailing industry. But there are side effects.

Let’s have a closer look.

1/ What defines Brand Content as opposed to traditional advertising?

In a good BC product, the brand can be almost absent; it’s the content that’s front and center. In France, advertisers often cite a series made by the French bank BNP-Paribas titled “Mes Colocs” (My Roommates). The title says it all. Launched two years ago, it featured 20 short episodes, later supplemented by… 30 bonus ones, all broadcast on YouTube and DailyMotion. Mes Colocs became such a success that two cable TV channels picked it up. The brand name does not appear — except in the opening credits. But far from being a philanthropic operation, its performance was carefully monitored. BNP-Paribas’ goal was obvious: raising its awareness among young people. And it seems to have worked: the operation translated into a 1.6% increase in account openings and a 6.5% rise in the number of loans granted to young adults (details in this promotional parody produced by the agency).

This dissociation between brand and content is essential. One historic French brand has rightly been celebrated for doing brand content decades before the term was coined: Michelin, whose eponymous guides provided a genuine service without promoting its tires (read Jean-Louis’ Monday Note, Why Apple Should Follow Michelin).

The following contrast can be drawn between traditional advertising and content-based messages:

2/ Why the hype?

First of all, media are increasingly fragmented. Advertisers and marketers have a hard time targeting the right audience. BC is a good way to let the audience build itself — for instance, through virality. It is much more subtle than relying on the heavily (and easily) corrupted blogosphere.

Second, most digital formats are faltering. Display advertising is spiraling down due to well-known factors: unlimited inventories, poor creativity, excessive discounts, bulk purchasing, cannibalization by value-killing ad networks, etc. Behavioral targeting is technically spectacular, but people get irritated by invasive tracking techniques (see my previous take: Pro (Advertising) Choice).

Three, marketers have matured. The caricatural advertorial grossly extolling a product is long gone. Today’s content is much smarter: it provides information (real or a respectable imitation of it) and good entertainment. Everything is increasingly well-crafted. Why? Because — and this is reason #4 for BC’s growth — there is a lot of available talent out there. As news media shrink, advertising agencies find an abundance of writers, producers and film-makers, all eager to work for much more money than they could hope to get in their former jobs. Coming in with a fresh mindset, not (yet) brainwashed by marketing, they do their jobs professionally, accepting “minor” constraints in exchange for great working conditions — no penny-pinching when you do a web series for a global brand.

Five, compared to traditional advertising, Brand Content is cheap. As an example, see the making of a recent and highly conceptual Air France commercial shot in Morocco; the cost ran into seven figures. Now imagine how many brand content products can be made with the same investment. Brand Content allows an advertiser to place multiple bets at the same time.

3/ The risks. (Here comes the newsman’s point of view)

Brand Content is the advertiser’s dream come true. The downfall of the print press has opened the floodgates: publishers have become less and less scrupulous about blurring the line between editorial and promotion — which is precisely what ad agencies have always shot for. Most women’s magazines, the luxury press, and now mainstream glossies allocate between 30% and 70% of their editorial to such “tainted” coverage: nice “journalistic” treatment in exchange for favors on the advertising side. I’m not blaming publishers, who do their best to save their business; I’m just stating the facts.

The consequence is obvious: readers are not informed as they should be about products — less and less so (although islands of integrity like Consumer Reports remain). That is not good for the print media, as it feeds the public’s distrust. While many publications lose what’s left of their credibility by being too cosy with their advertisers, brands are becoming increasingly savvy at producing quality content that mimics traditional editorial. As brands tend to become full-blown media, the public will get confused. Sooner or later, it will be difficult to distinguish between a genuine, editorially driven prime-time TV show and one sponsored by an advertiser. Call it the ever-shrinking journalism.

frederic.filloux@mondaynote.com

Off The eBook Shelf

Readers are voting with their wallets: the eBook is winning. In the US, eBook sales are topping hardcovers for the first time (story in TechCrunch). Not everywhere, of course. According to the Bowker Global eBook Research, the global market for eBooks is driven — in that order — by India, Australia, the UK and the United States. The laggards are Japan and (no surprise) France. The chart below shows the percentage of the internet population reporting the purchase of a digital book in the six months prior to the survey.

Interestingly, for most population samples, the level of purchases is not correlated with awareness. France enjoys the highest level of awareness, but its internet population buys five times fewer eBooks than India’s. Once an Indian internet user finds an attractive digital book offer, he or she will most likely jump on it. This could lead to the following: in emerging countries, the cellular phone became the main communication tool by leapfrogging the deployment of land lines; similarly, we could see eBooks bypassing print in countries like India, where a large segment of the population is becoming both literate and connected at a fast pace. (Actually, Bowker also reports that over 50% of respondents in India and Brazil are likely to buy an eBook in the next six months, ten times more than in France.)

If the rise of the eBook happily broadens access to knowledge in emerging countries, the picture is more nuanced in countries with a long history and high penetration of printed books.

For instance, let’s have a look at ISBN registration data for the United States. The chart below, again drawn from Bowker (full PDF table here), shows a steady progression:

Between 2002 and 2011, in the US market, ISBN registrations grew by 61%, reaching 347,178 new titles. (A technical note: I’m only taking into account books that fall into an identified category, such as Arts, Biography, Business, etc. I’m excluding the huge segment labeled “nontraditional”, which includes reprints, public domain titles and titles printed on demand; that segment grew by over 3,500% to 1.2 million registrations, which would distort the picture.)

We clearly see the impact of mainstream e-readers such as the Kindle and the iPad. Without any doubt, they contributed to the growth in registrations. (Unfortunately, ISBN counts do not provide a breakdown between print and digital.) Over the last nine years, some book publishing segments fared better than others. See the chart below:

Fiction is doing twice as well as all other categories combined. The digital book is the medium of choice for fiction: a) eBooks tend to be cheaper than print, and price elasticity is now a proven fact: the cheaper a book is, the more likely a reader is to try it; b) e-commerce breeds impulse buying (cf. the famous One-Click® feature); c) readers can sample the product more efficiently than in the printed world, as Amazon and the iBooks Store make large excerpts available for free. No surprise, then, to see the Fiction category holding up well.

No surprise either in seeing the three worst performers also be prime victims of the digital era. History books have to compete with the vast trove of material available on the web; that’s the Encyclopaedia Britannica syndrome: going out of print after 244 years of duty, demoted by the 11-year-old Wikipedia. Like it or not, most history book publishers will meet the same fate.

Similarly, Travel and Computer books are in direct competition with mostly free online services. Who will buy a “how-to” computer book today, when there are plenty of video tutorials explaining how to replace a hard drive or how to wrestle with Photoshop? And let’s not even mention the Travel segment, with its tons of guides, reviews, price comparators and transaction services. As for the language sections of the bookstore: again, a simple Google query can help with spelling, translation and grammar… Even the precious Roget’s Thesaurus is online, and rather efficiently so. I’ll just venture that French Canadians did Roget one better: a company called Druide publishes a suite of applications for PCs, tablets and smartphones called Antidote, an unusually clever combination of dictionary, thesaurus, quotations, etymology and more. I wondered for a while about the name Antidote — until I realized the Québécois saw the product as an antidote to… English. An old struggle.

The main eBook casualty is likely to be bookstores. In the New York of the Fifties, about 330 bookstores were in business; now they are down to 30 or fewer, laments André Schiffrin, former head of Pantheon Books, in his recent book Words & Money. Countries like France and Germany have laws that protect independent bookstores: from Bordeaux to Berlin, citizens are thankful to find warmer and more relevant recommendations than the algorithm-based suggestions provided by Amazon. But how long will it last?

frederic.filloux@mondaynote.com