About Frédéric Filloux

Posts by Frédéric Filloux:

Twitter, Facebook and Apps Scams

Here is the latest Twitter scam I’ve heard about this week. Consider two fictitious media outlets, the Gazette and the Tribune, operating on the same market, targeting the same demographics, competing for the same online eyeballs (and the brains behind those). Our two online papers rely on four key traffic drivers:

  1. Their own editorial efforts, aimed at building the brand and establishing a trusted relationship with the readers. Essential but, by itself, insufficient to reach the critical mass needed to lure advertisers.
  2. Getting in bed with Google, with a two-strokes tactic: Search Engine Optimization (SEO), which helps climb to the top of search results page; and Search Engine Marketing (SEM), in which a brand buys keywords to position its ads in the best possible context.
  3. An audience acquisition strategy that will artificially grow page views as well as the unique visitors count. Some sites will aggregate audiences that are remotely related to their core product, but that will better dress them up for the advertising market (more on this in a forthcoming column).
  4. An intelligent use of social media such as Facebook, Twitter and LinkedIn, as well as of the apps ecosystem.

Coming back to the Tribune vs. Gazette competition, let’s see how they deal with the latter item.

For both, Twitter is a reasonable source of audience, worth a few percentage points. More importantly, Twitter is a strong promotional vehicle. With 27,850 followers, the Tribune lags behind the Gazette and its 40,000 followers. Something must be done. The Tribune decides to work with a social media specialist. Over a couple of months, the firm gets the Tribune to follow (in the Twitter sense) most of the individuals who already are Gazette followers. This mechanically translates into a “follow-back” effect powered by implicit flattery: ‘Wow, I’ve been spotted by the Tribune, I must have a voice of some sort…’ In doing so, the Tribune will be able to vacuum up about a quarter or a third — that’s a credible rate of follow-back — of the Gazette’s followers. Later, the Tribune will “unfollow” the defectors to cover its tracks.

Compared to other more juvenile shenanigans, that’s a rather sophisticated scam. After all, in our example, one media outlet is exploiting its competitor’s audience the way it would buy a database of prospects. It’s not ethical but it’s not illegal. And it’s effective: a significant part of the followers thus “converted” to the Tribune are likely to stick with it, as the two media cover the same beat.

Sometimes, only size matters. Last December, the French blogger Cyroul (also a digital media consultant) uncovered a scam performed by Fred & Farid, one of the hippest advertising agencies. In his post (in French), Cyroul explained how the ad agency got 5,000 followers in a matter of five days. As in the previous example, the technique is based on “mass following” but, this time, it has nothing to do with recruiting some form of “qualified” audience. Fred & Farid arranged to follow robots that, in turn, followed their account. The result is a large number of new followers from Japan or China, all sharing the same characteristic: a following/followed ratio of about one, which is, Cyroul says, the signature of bot-driven mass following. Pathetic indeed. His conclusion:

One day, your “influence” will be measured against real followers or fans, as opposed to bot-induced or artificial accounts. Then, brands will weep as their fan pages turn out to be worth nothing; ad agencies will cry as well when they realize that Twitter is worth nothing.
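Cyroul’s bot signature — a following/followed ratio hovering around one — lends itself to a simple heuristic. Here is a minimal sketch; the account names and numbers are invented for illustration, and the 10% tolerance is my own assumption:

```python
def looks_like_mass_follow_bot(following: int, followers: int,
                               tolerance: float = 0.1) -> bool:
    """Flag accounts whose following/followers ratio is close to 1,
    the signature Cyroul attributes to bot-driven mass following."""
    if followers == 0:
        return False
    return abs(following / followers - 1.0) <= tolerance

# Hypothetical accounts: (following, followers)
accounts = {
    "@bot_like_account": (5120, 5098),   # ratio ~1.00 -> suspicious
    "@regular_reader":   (180, 2400),    # ratio ~0.08 -> looks organic
}
suspicious = [name for name, (fing, fers) in accounts.items()
              if looks_like_mass_follow_bot(fing, fers)]
print(suspicious)  # ['@bot_like_account']
```

Of course, a real audit would look at many more signals (account age, tweet frequency, geography), but the ratio alone already separates the two profiles above.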

But wait, there are higher numbers on the crudeness scale: type “increase Facebook fans” into Google and you’ll get swamped with offers. Wading through the search results, I spotted one vendor carrying a wide range of products: 10,000 views on YouTube for €189; 2,000 Facebook “Likes” for €159; 10,000 followers on Twitter for €890, etc. You provide your URL, you pay on a secure server, it stays anonymous, and the goods are delivered within 5 to 30 days.

The private sector is now allocating huge resources to fight the growing business of internet scams. Sometimes, it has to be done in an opaque way: one of the reasons why Google says so little about its ranking algorithm is, precisely, to prevent fraud.

As for Apple, its application ecosystem faces the same problem. Over time, its ranking system became questionable as bots and download farms joined the fray. In a nutshell, as with Facebook fan harvesting, the more you were willing to pay, the more notoriety you got, thanks to inflated rankings and bogus reviews. Last week, Apple issued this warning to its developer community:

Adhering to Guidelines on Third-Party Marketing Services

Feb 6, 2012
Once you build a great app, you want everyone to know about it. However, when you promote your app, you should avoid using services that advertise or guarantee top placement in App Store charts. Even if you are not personally engaged in manipulating App Store chart rankings or user reviews, employing services that do so on your behalf may result in the loss of your Apple Developer Program membership.

Evidently, Apple has a reliability issue with how its half million apps are ranked and evaluated by users. Eventually, it could affect its business, as the App Store could become a bazaar in which the true value of a product gets lost in a quagmire of mediocre apps. This, by the way, is a push in favor of the Apple-curated guide described in the Monday Note by Jean-Louis (see Why Apple Should Follow Michelin). In the UK, several print publishers have detected the need for independent reviews; there, newsstands carry a dozen app review magazines, covering not only Apple but the Android market as well.

Obviously there is a market for that.

Because they depend heavily on advertising, preventing scams is critical for social networks such as Facebook or Twitter. In Facebook’s pre-IPO filing, I saw no mention of scams in the Risk Factors section, except in the vaguest of terms. As for Twitter, all we know is that its true audience is much smaller than the company claims: Business Insider calculated that, of the 175 million accounts claimed by Twitter, 90 million have zero followers.

For now, the system still holds up. Brands remain convinced that their notoriety is directly tied to the number of fans/followers they claim — or that their ad agency has been able to channel to them. But how efficient is this, really? How large is the proportion of bogus audiences? Today there appears to be no reliable metric to assess the value of a fan or a follower. And if there is one, no one wants to know.

frederic.filloux@mondaynote.com

Strange Facebook Economics

Exactly three years ago, Charlie Rose interviewed Marc Andreessen, the creator of Netscape and a Facebook board member. In his trademark rapid-fire talk, Marc shared his views on Facebook. (Keep the February 2009 context in mind: the social network had 175 million users and Microsoft had just made an investment setting Facebook’s valuation at $15 billion.)

About Mark Zuckerberg’s vision:

The big vision basically is — I mean the way I would articulate it is connect everybody on the planet, right? So I mean [there are] 175 million people on the thing now. Adding a huge number of users every day. 6 billion people on the planet. Probably 3 billion of them with modern electricity and maybe telephones. So maybe the total addressable market today is 3 billion people. 175 million to 3 billion is a big challenge. A big opportunity.

Indeed.
About monetization:

There’s a lot of confusion out there. Facebook is deliberately not taking the kind of normal brand advertising that a lot of Web sites will take. So you go to a company like Yahoo which is another fantastic business and they’ve got these banner ads and brand ads all over the place, Facebook has made a strategic decision not to take a lot of that business in favor of building its own sort of organic business model; and it’s still in the process of doing that and if they crack the code, which I think that they will, then I think it will be very successful and will be very large. The fallback position is to just take normal advertising. And if Facebook just turned on the spigot for normal advertising today, it’d be doing over a billion dollars in revenue. So it’s much more a matter of long term (…) It could sell out the homepage and it would start making just a gigantic amount of money. So there’s just tremendous potential and it’s just a question exactly how they choose to exploit it. What’s significant about that is that Mark [Zuckerberg] is very determined to build a long term company.

In another interview last year, commenting on Facebook’s generous cumulative funding ($1.3 billion as of January 2011), Andreessen said the whole amount was actually a shrewd investment, as it translated into an acquisition cost of “one or two dollars per user” ($1.53 to be precise), which sounded perfectly acceptable to him.

Now, take a look at last week’s pre-IPO filing: Marc Andreessen was right both in 2009 and in 2011.

Last year, each of the 845 million active members brought in $4.39 in revenue and $1.18 in net income. Even better, based on the $3.9 billion in cash and marketable securities on FB’s balance sheet, each of these users generated a cosy cash input of $1.53.

How much is the market expected to value each user after the IPO? Based on the projected $100 billion valuation, each Facebooker would carry a value of $118. Keep this number in mind.

How does it compare with other media and internet properties?

Take LinkedIn: the social network for professionals is far less glamorous than Facebook, a fact reflected in its members’ valuation. Today, LinkedIn has about 145 million users for a $7.7 billion market cap; that’s a value of $57 per user, half a Facebooker. A bit strange considering LinkedIn’s demographics, in theory much more attractive than Facebook’s advertising-wise. (See a detailed analysis here.) Per user and per year, LinkedIn makes $3.50 in revenue and $0.78 in profit.

Let’s now switch to traditional media. Some, like the New York Times, were put on “deathwatch” by Marc Andreessen three years ago.

Assessing the number of people who interact with NYT brands is quite difficult. For the company’s numerous websites, you have to deal with domestic and global reach: 43 million UVs for the Times globally, 60 million for its guide site About.com, etc. Then, you must take into account print circulation for the NY Times and the Boston Globe, the number of readers per physical copy, audience overlaps between businesses, etc.

I’ll throw out an approximate figure of 50 million people worldwide who, one way or the other, are in some form of regular contact with one of the NYT’s brands. Based on today’s $1.14 billion market cap, this yields a valuation of $23 per NYT customer, five times less than Facebook’s. That’s normal, many would say. Except for one fact: in 2011, each NYT customer brought in $46 in revenue, almost ten times more than Facebook’s. As for profit (a meager $56 million for the NYT), each customer brought in a little more than a dollar.

I did the same math with various media companies operating in print, digital, broadcast and TV. Gannett Company, for instance, makes between $50 and $80 per year in revenue per customer, and, depending on the way you count, the market values that customer at about $50.
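The per-user arithmetic behind this comparison is easy to reproduce. A quick sketch using the figures quoted above (note that the LinkedIn computation lands a bit below the $57 cited, which suggests a slightly smaller user count behind that number):

```python
# (market valuation $, users, yearly revenue per user $), as quoted above
properties = {
    "Facebook": (100e9,  845e6, 4.39),
    "LinkedIn": (7.7e9,  145e6, 3.50),   # column quotes ~$57/user valuation
    "NYT":      (1.14e9,  50e6, 46.0),   # 50M is my rough audience estimate
}
for name, (valuation, users, rev_per_user) in properties.items():
    value_per_user = valuation / users
    print(f"{name}: valued at ${value_per_user:.0f} per user, "
          f"${rev_per_user:.2f} in yearly revenue per user")
```

The striking inversion shows up immediately: Facebook carries the highest valuation per user and the lowest revenue per user.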

Indeed, measured by trends (double-digit growth), global reach and hype, Facebook or LinkedIn are flying high while traditional media are struggling; where Facebook achieves a 47% profit margin, Gannett or News Corp are in the 10% range.

Still. If we pause at today’s snapshot, Facebook economics appear out of touch with reality: each customer brings in ten times less than with legacy media, and the market values that customer at up to five times more. And where News Corp gets a P/E of 17 and Gannett a P/E of 8, Facebook is preparing to offer shares at a multiple of 100 times its earnings and 25 times its revenue. Even by Silicon Valley’s ambitious standards, market expectations for Facebook seem excessive: Apple is worth 13 times its earnings, and Google 20 times.

Facebook remains a stunning achievement: it combines long term vision, remarkable execution, and a ferociously focused founder. But, even with a potential of 3 billion internet-connected people in 2016 vs. 1.6 billion in 2010 (a Boston Consulting Group projection), it seems the market has put Facebook in a dangerous bubble of its own.

frederic.filloux@mondaynote.com

Refining the Model

Let’s come back to the business model question. My January 15 column featuring a Simple Model for digital newspapers triggered a number of emails and comments, many questioning my assumptions (my thanks to the readers of the Monday Note who take the time to make insightful contributions to the discussion).

Let’s see if we can sort through the questions and come up with a few helpful answers.

1 / Advertising revenue. Let me set the backdrop here. My model projects what I’ll call a mature market. First and foremost, time spent vs. ad spending for print, web and mobile, which currently looks like this…

Source: Internet Trends, Mary Meeker, KPCB Oct 2011

… will have morphed into a graph showing more balance between categories. In my projections, ad spending converges toward the time effectively spent on the various media. Also, we’ll see a sharp rise of the mobile segment, and a sub-segment made of tablets will carry its own specific business model (apps, subscriptions, ads).
This will happen at the expense of print media, a sector that, considering the time people now spend on it, is still vastly over-invested. Dailies are bound to suffer more than weeklies (or Sunday editions) because their primary function (delivering news) collides with mobile devices. Having said that, newspapers will survive (after further shrinkage) thanks to an unabated base of loyal readers ready to pay almost any price for their favorite daily. This is the rationale behind recent price hikes (see Cracking the Paywall). In Europe, I see all quality papers priced at €2 within two to three years, and I don’t believe such prices will accelerate reader depletion. Holding print prices up might be critical for survival.

On this topic, this is the email I received from Jim Moroney, publisher and CEO of the Dallas Morning News:

  • On May 1, 2009, The Dallas Morning News raised home delivery rates across the board by 40%. The price increase was even greater for the most geographically distant delivery.
  • We doubled daily single copy price to $1.00 and Sunday single copy price to $3.00 in two steps each.
  • Today we yield 93% of our retail rate, i.e., we are doing very little discounting. Lots of papers claim to raise their home delivery rates and then turn around and offer discount after discount. If the most valuable asset we have is the content we originate, as an industry, why do we keep deeply discounting it as if it were damaged goods?
  • Our home delivery rate is $36.95 per month, making it the third highest priced metro in the U.S. after NYT and Boston Globe.
  • In March, we made all access to what we distribute digitally paid access.
  • Website, iPad and smartphone are $9.99 each per month. All digital access is $16.95 per month.
  • So there is a lowly metro doing something akin to the NYT and FT.

Also, because of its unique advertising value proposition, I won’t sell print media short. In a nutshell, no one expects a Dior campaign to look as gorgeous on a computer screen or on the 4-inch display of a smartphone as it does in quality print. For such high-priced ads, print is likely to remain vastly superior for a long time — and should therefore be part of any well-rounded business strategy.

Coming back to digital media: in my view, a mature market also means a clean one. Today, many news website URLs have very little to do with editorial content. At some sites, the number of URLs whose only purpose is to gather “eyeballs” represents as much as 30% to 40% of all page views.
Look at what Le Monde does: when you view a web page through Readability (an app that basically extracts the relevant text), you see every verb appear in red, linking to… Le Monde’s verb conjugation service:

That’s good for SEO shenanigans. Nothing is too petty to churn audience numbers (and Le Monde is no worse than its competitors).

To sum up, here is why I think prices on the internet are likely to go up in the near future (2-3 years):

  • a cleaner internet will yield a much better performance advertising-wise than it does today,
  • inventories will have to be limited (read: closed down). No market whatsoever can withstand the kind of unlimited supply we see today on the web. In our current oversupply situation, we often see more than half of all pages sold for a CPM below one dollar or euro,
  • as discussed before, we can expect a strong adjustment of ad spending vs. time spent; it will benefit digital media,
  • the ad market suffers greatly from current economic conditions (debt, political tensions abroad, elections in several countries, uncertainties everywhere…). Those won’t last forever.

My mention of a $20 CPM sounded overly optimistic to many readers. It is by today’s standards, no doubt. But once a number of adverse factors are attended to, I think the $20 assumption will hold (and, by the way, I’m referring to revenue per page, not per ad module).

2 / Subscription revenue. Many are challenging my 10% transformation rate (one reader out of ten willing to pay $10 a month in my model). Objection taken. Again, my projections go beyond today’s deflated market. It will take a while to get to 10% when a large site such as the New York Times is at 1% or 2%. And converting readers to pay something, somehow, will require imagination beyond single pricing; I’m told large newspapers charging $15 or $25 a month are considering low-cost subscription plans, as low as $5 per month, to capture young readers and boost their conversion rate. From an editorial product perspective, though, I’m a bit skeptical. What will such a downgraded offer look like: a stricter paywall; low-cost apps?

3 / Mobile apps. Although I explored this issue in previous Monday Notes (see The Capsule’s Price and Mobile First, and a Mag), I should have been more forthcoming about mobile apps. My belief is this: over time, thanks to their greater ability to carry subscriptions and high-yield ads, apps, not web sites, will be the path to decent ARPUs.

I will acknowledge another misconception in my plans and leave it to Vin Crosbie, new media professor at the S.I. Newhouse School of Public Communications at Syracuse University, New York, who commented on my piece in the Guardian.

Here’s the crux: even if Frederic’s model could work for a national daily, will it scale to work for the average newspaper? Maybe the NYT, WSJ, or USAToday could eke out a 2% profit margin using it, but what of the other 1,412 daily newspapers in the U.S., the average size of which is 18,000 daily circulation? Do the math. [...] Look at the paltry signup rate the NYT has achieved. Scaled to an 18,000-circulation daily, the NYT’s results would mean fewer than 180 paying online subscribers.

Vin is basically right. One of the tragedies of the digital media model is this: unlike the newspaper model, it doesn’t scale down well. There are plenty of local web sites faring well, but none comes close to supporting a 200-staff newsroom costing $25 to $27 million to operate.

frederic.filloux@mondaynote.com

Piracy is part of the digital ecosystem

In the summer of 2009, I found myself invited to a small party in an old bourgeois apartment with breathtaking views of the Champ-de-Mars and the Eiffel Tower. The gathering was meant to be an informal discussion among media people about Nicolas Sarkozy’s push for the HADOPI anti-piracy bill. The risk of a heated debate was very limited: everyone in this little crowd of artists, TV and movie producers, and journalists was on the same side, that is, against the proposed law. HADOPI was of the same breed as the now comatose American PIPA (Protect Intellectual Property Act) and SOPA (Stop Online Piracy Act). The French law was based on a three-strikes-and-you-are-disconnected system, aimed at the most compulsive downloaders.

The discussion started with a little tour de table, in which everyone had to explain his or her view of the law. I used the standard Alcoholics Anonymous introduction: “I’m Frederic, and I’ve been downloading for several years. I started with the seven seasons of The West Wing, and I keep downloading at a sustained rate. Worse, my kids inherited my reprehensible habit and I failed to curb their bad behavior. Even worse, I harbor no intent to give up, since I refuse to wait until next year to see a dubbed version of Damages on a French TV network… I can’t stand Glenn Close speaking French, you see…” It turned out that everybody admitted to copious downloading, making this little sample of the anti-Sarkozy media elite a potential target for HADOPI enforcers. (Since then, parliamentary filibustering has managed to emasculate the bill.)

When it comes to digital piracy, there is a great deal of hypocrisy. One way or another, everyone is involved.

For some large players — allegedly on the plaintiffs’ side — the sinning even takes on industrial proportions. Take the music industry.

In October 2003, Wired ran an interesting piece about a company specializing in tracking entertainment content over the internet. BigChampagne, located in Beverly Hills, is for the digital era what Billboard magazine was in the analog world. Except that BigChampagne essentially tracks illegal content circulating on the web. It does so with incredible precision by matching IP addresses to zip codes, finding out what’s hot on peer-to-peer networks. In his Wired piece, Jeff Howe explains:

BigChampagne’s clients can pull up information about popularity and market share (what percentage of file-sharers have a given song). They can also drill down into specific markets – to see, for example, that 38.35 percent of file-sharers in Omaha, Nebraska, have a song from the new 50 Cent album.

No wonder some clients pay BigChampagne up to $40,000 a month for such data. They use BigChampagne’s valuable intelligence to apply gentle pressure on local radio stations to air the very tunes favored by downloaders. For a long time, illegal file-sharing has been a powerful market and promotional tool for the music industry.

For the software industry, tolerance of pirated content has been part of the ecosystem for quite a while as well. Many of us recall relying on pirated versions of Photoshop, Illustrator or QuarkXPress to learn how to use those products. It is widely assumed that Adobe and Quark floated new releases of their products to spread word-of-mouth among creative users. And it worked fine. (Now, everyone relies on a much more efficient and controlled mechanism of test versions, free trials, video tutorials, etc.)

There is no doubt, though, that piracy inflicts a great deal of harm on the software industry. Take Microsoft and the Chinese market. For the Seattle firm, the US and Chinese markets are roughly the same size: 75 million PC shipments in the US for 2010, 68 million in China. But in China, 78% of PC software is pirated, vs. 20% in the US; as a result, Microsoft makes the same revenue from China as from… the Netherlands.
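A rough way to see why: treat PC shipments as a proxy for potential license sales and apply the piracy rates above. This ignores price differences and the installed base, so it is only an order-of-magnitude illustration:

```python
us_pcs, cn_pcs = 75e6, 68e6            # 2010 PC shipments
us_piracy, cn_piracy = 0.20, 0.78      # share of pirated PC software

# Paid seats = shipments that actually come with a purchased license
us_paid_seats = us_pcs * (1 - us_piracy)   # ~60 million
cn_paid_seats = cn_pcs * (1 - cn_piracy)   # ~15 million
print(f"US generates roughly {us_paid_seats / cn_paid_seats:.1f}x "
      f"more paid seats than China")
```

A four-to-one gap on markets of comparable size: enough to shrink China down to Netherlands-sized revenue.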

More broadly, how large is piracy today? At the last Consumer Electronic Show, the British market intelligence firm Envisional Ltd. presented its remarkable State of Digital Piracy Study (PDF here). Here are some highlights:
- Pirated content accounts for 24% of worldwide internet bandwidth consumption.
- The biggest chunk is carried by BitTorrent (the protocol used for file sharing); it weighs in at about 40% of the illegitimate traffic in Europe and 20% in the US (downstream and upstream included). Worldwide, BitTorrent gets 250 million UVs per month.
- The second tier is made up of the so-called cyberlockers (5% of global bandwidth), among them the infamous MegaUpload, raided a few days ago by the FBI and the New Zealand police. Of the 500 million unique visitors per month to cyberlockers, MegaUpload drained 93 million UVs. (To put things in perspective, the entire US newspaper industry gets about 110 million UVs per month.) The cyberlocker segment has twice the users of BitTorrent but consumes eight times less bandwidth, simply because files are much bigger on the peer-to-peer system.
- The third significant segment of piracy is illegal video streaming (1.4% of global bandwidth).

There are three ways to fight piracy: endless legal actions, legally blocking access, or creating alternative legit offers.

The sue-them-until-they-die approach is mostly a US-centric one. It will never yield great results (aside from huge legal fees) due to the decentralized nature of the internet (there is no central server for BitTorrent) and to some countries’ tolerance for harboring cyberlockers.

As for law-based enforcement systems such as the French HADOPI or the American SOPA/PIPA, they don’t work either. HADOPI proved to be as porous as chalk, and the US lawmakers had to yield to the public outcry. Both bills were poorly designed and inefficient.

The figures compiled by Envisional Ltd. are indeed a plea for the third approach, that is, the creation of legitimate offers.

Take a look at the figures below, which show the peak bandwidth distribution between the US and Europe. You will notice that the paid-for Netflix service takes exactly the same share of US traffic as BitTorrent does in Europe!

US Bandwidth Consumption:

Europe Bandwidth Consumption:

Source : Envisional Ltd

These stats offer compelling proof that creating legitimate commercial alternatives is a good way to contain piracy. The conclusion is hardly news. The choice between pirated and legit content is a combination of ease-of-use, pricing and availability on a given market. For content such as music, TV series or movies, services like Netflix, iTunes or even the BBC iPlayer go in the right direction. But one key obstacle remains: the balkanized internet (see a previous Monday Note, Balkanizing the Web), i.e. the country zoning system. By slicing the global audience into regional markets, both the industry (Apple, for instance) and local governments neglect a key fact: today’s digital audience is increasingly multilingual, or at least more eager to consume content in English as soon as it is released. Today we have entertainment products, carefully designed to fit a global audience, waiting months before becoming available on the global market. As long as this absurdity remains, piracy will flourish. As for the price, it has to match the ARPU generated by advertising-supported broadcast. For that matter, I doubt a TV viewer of the Breaking Bad series comes close to yielding advertising revenue that matches the $34.99 Apple is asking for the entire season four. Maintaining such a gap also fuels piracy.

I want Netflix, BBC iPlayer and an unlocked and cheaper iTunes everywhere, now. Please. In the meantime, I keep my Vuze BitTorrent downloader on my computer. Just in case.

frederic.filloux@mondaynote.com

Trying a Simple Model

Advertising still dominates the newspaper revenue model. Depending upon the country, it is not uncommon to see print dailies getting 70% to 80% of their revenue from advertising. In the early days of the digital era, when business plans were driven by “eyeballs”, everybody hoped to replicate the tried-and-true print advertising revenue model. Now, the collective hallucination has dissipated; a more down-to-earth vision prevails: publishers willing to preserve high-quality (read: costly) journalism recognize they have no choice but to get their users to pay for it, one way or another. The pendulum has swung back.

It’s a chicken-and-egg problem. You’ll be able to charge readers only if you put yourself in a position to offer exclusive, unique content. To do so, you’ll have to put together a strong line-up of professionals, as opposed to a blogger army whose output no one will ever pay a dime for. Next questions: how much to charge? Is it 10 (dollars, euros, pounds), 20 or more? What free-to-pay conversion rate should we aim at? Can we shoot for 5%, 10% or more of the overall audience? What does a fully digital operation look like?

Let’s dive into numbers for a back-of-the-envelope exercise.

First, assumptions: the following is based on my observations of markets in Europe (France, UK, Scandinavia) and the United States; numbers may vary but I trust none are wildly off the mark.

In the print world, costs break down as follows:

Newsroom........................25%
Production, printing............25%
Distribution....................20%
Marketing promotion.............20%
Administration..................10%
...............................100%

Now let’s move to a fully digital operation derived from a traditional one in terms of journalistic firepower and standards.

To produce it, we’ll settle for a 200-staff newsroom, with writers, editors, data journalists, information-graphics designers, videographers, etc. We removed the staff working on the dead-tree product. With 200 dedicated people working for an online operation, you can really shoot for the stars. Such a setup costs between $25 and $30 million a year, all expenses included. Let’s settle on a middle value of $27 million.

Production costs fall sharply once the carbon-based version is gone. The old 45% share for production, printing and distribution morphs into a conservative 15% for serving web pages and applications. We’ll assume all other costs (marketing, promotion, administration) remain at the same level.

The cost table now looks like this:

Newsroom...............27M$......40% of the total
Production, technical..10M$......15%
Marketing promotion....20M$......30%
Administration.........10M$......15%
Total Costs............67M$ 

Now, let’s turn to the revenue side.

First: advertising revenue. We assume a real audience of 5 million unique visitors per month. By real audience, I mean no cheating, no bogus viewers, reasonable SEM and excellent SEO. People come to the site, stick to it and come back. Each user sees at least 20 pages a month. That’s on the high side. By comparison, Google Ad Planner gives the following page views per UV:

NYT.........15 pageviews per user and per month (distant paywall)
WSJ.........14 (some paid-for section)
FT.com......11 (strict paywall)
Guardian....14 (free)

20 pages is therefore an ambitious goal. I’m convinced it can be achieved through high-performance recommendation engines (look at Amazon’s ability to get people to click on related items).

5 million UVs multiplied by 20 page views gives (thank you) 100 million PVs a month. Now, let’s assume each page generates a CPM (across its several ad modules) of $20. That’s an average, as not all pages yield the same amount: parts of the inventory will go unsold, but pages served to high-value, paid-for subscribers will generate twice that amount. This translates into a yearly revenue of $24 million, that is, around 5 advertising dollars per visitor per year.

Again: it will vary, but it is consistent with what we see on the market for high-quality, branded publications. (By contrast, even the greatest blogs yield only one or two dollars per user.)
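The advertising arithmetic above can be checked in a few lines; this is a sketch of the column’s own assumptions, not a general model:

```python
unique_visitors = 5e6        # monthly UVs, the "real audience"
pages_per_visitor = 20       # ambitious, vs. 11-15 for NYT/WSJ/FT/Guardian
cpm = 20.0                   # average $ revenue per 1,000 pages served

monthly_page_views = unique_visitors * pages_per_visitor       # 100M PVs
yearly_ad_revenue = monthly_page_views / 1000 * cpm * 12       # $24M
ad_dollars_per_visitor = yearly_ad_revenue / unique_visitors   # ~$4.8
print(f"${yearly_ad_revenue / 1e6:.0f}M a year, "
      f"${ad_dollars_per_visitor:.1f} per visitor per year")
```

Each assumption is a dial: halve the CPM and the ad side drops to $12 million, which is why the clean-inventory argument above matters so much.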

Two: subscription revenue. Since our audience is solid and loyal to the brand, we will assume 10% of all readers will be willing to pay. Make no mistake: that is the transformation rate a newspaper such as the New York Times is aiming at (it is currently at 1%; still a long way to go). My take is that a general news operation will be price-sensitive, meaning the transformation rate at $9.99 a month will be significantly higher than at $15 or $20 per month; by contrast, a specialized publication is less rate-sensitive and can be pricier.

In my model of a general news product, I set the price to $10 a month, which makes the one-tenth conversion rate more realistic. Then, I factor in two items:
- 15% taxes (rates range from 8% in the US to 20% in France)
- a 13% platform cost including transaction, database, etc. (that should be a goal, as Google OnePass charges 10%); this line is distinct from the technical costs applied to the entire digital operation.

All of the above taken into account, a digital subscriber paying $10 a month will generate a net ARPU of $89 a year for the company. Multiplied by 0.5 million paid-for users (i.e. 10% of the global audience), this translates into a revenue of $44 million from digital subscribers.
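As a sketch, the net ARPU computation above, assuming the tax and the platform cut are applied successively (that is the assumption that reproduces the ~$89 figure):

```python
# Net ARPU per digital subscriber, using the column's assumptions.
price_per_month = 10.0
tax_rate = 0.15        # blended assumption (8% in the US to 20% in France)
platform_rate = 0.13   # transaction, database, etc.

gross_per_year = price_per_month * 12                              # $120
net_arpu = gross_per_year * (1 - tax_rate) * (1 - platform_rate)   # ~$89

paying_users = 500_000                           # 10% of the 5M audience
subscription_revenue = net_arpu * paying_users   # ~$44 million
print(f"net ARPU: ${net_arpu:.2f}, "
      f"subscription revenue: ${subscription_revenue/1e6:.1f}M")
```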

The revenue table now looks like this:

Advertising......24M$...35% of the total
Subscription.....44M$...65%
Total............68M$...100%

$68 million in revenue for a cost of $67 million (all numbers rounded) leaves a mere 2% operating margin. Nothing to brag about. It could easily translate into an accounting loss, especially since it will take a while to reach several of these goals: a 10% free-to-paid transformation rate and a high number of pages per viewer are both several years (of losses) away for many publications.
But these are the only dials I set on the ambitious side; the rest (subscription price, audience) is rather conservative. For instance, if you simply set the subscription rate at $12 a month instead of $10 — that is, roughly fifty cents per weekday — the operating margin jumps to 13%.
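A quick sensitivity sketch of that claim, reusing the model’s assumptions ($24M advertising, $67M costs, 500,000 subscribers, taxes and platform cut as above):

```python
# Operating margin as a function of the monthly subscription price,
# with all other dials of the model held fixed.
def operating_margin(price_per_month):
    net_arpu = price_per_month * 12 * (1 - 0.15) * (1 - 0.13)  # taxes, then platform cut
    revenue = 24e6 + net_arpu * 500_000   # advertising + subscriptions
    return (revenue - 67e6) / revenue     # costs fixed at $67M

print(f"$10/month -> {operating_margin(10):.0%} margin")   # ~2%
print(f"$12/month -> {operating_margin(12):.0%} margin")   # ~13%
```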
And I also set aside many things I firmly believe in, like keeping some print operation in the form of a compact, high-end weekly (with a staff of 200, it sounds feasible), or developing ancillary products such as digital book publishing.

Once again, while I feel my numbers are well-grounded, others will find this little model simplistic and questionable. The simulation is aimed at showing there is a life after the death of the daily print edition. Success is a “mere matter” of persistence.

frederic.filloux@mondaynote.com

Cracking the Paywall

(This version corrects an error in the percentage for the price increase of the FT)

Every newspaper, magazine or website is working on a paywall of sorts and closely monitoring what everyone else is doing. In almost every news company, execs are morosely watching advertising projections and finding numbers that are not exactly encouraging. For digital media, there is no way around this year’s weak outlook: the bad economic climate only adds to the downward price pressure exerted by the ever growing inventory of web and mobile pages. In a best-case scenario, volumes and prices will remain flat. On the print circulation side, Western newspapers are likely to witness a continuing readership erosion at a rate of several percentage points.

But here is the interesting point: The strongest players don’t just bow to the inevitable, they accelerate their transition to digital. This week, I was struck by the fact that two such leaders made the same move: The New York Times and the Financial Times both announced serious price hikes for their newsstand prices (respectively 25% and 13.6%):
- The NYT moves from $2.00 (€1.57) to $2.50 (€1.96) from Monday to Saturday, with no change for the Sunday edition still priced at $5 (€3.92) in New York, and $6 (€4.72) elsewhere.
- The FT goes from £2.20 ($3.39 or €2.66) to £2.50 ($3.85 or €3.03) on weekdays, as the weekend edition moves from £2.80 ($4.32 or €3.39 ) to £3 ($4.62 or €3.63).

Those numbers are really meaningful: a 10% increase every two years or so can be seen as an inflation adjustment — a generous one, considering inflation in those countries runs at about 2.5%-3.5%. A 25% increase is a strategic decision aimed at accelerating the switch to digital. (The paper version of the FT now costs 25% more than it did last October.)

Interestingly enough, for a New York Times addict, reading the paper online with the cheapest package ($15 a month) is now 40% to 50% cheaper than the home-delivered version and 70% cheaper than buying the paper each day at a newsstand. As for the FT, the standard digital version is now 21% cheaper than the print subscription and 68% less than the newsstand price.

Both are working hard at converting readers to the digital paid-for model. The FT is heading full steam into digital, furiously data-mining its base of 4 million registered users to convert them into paid-for subscribers (250,000 according to the most recent count). The FT’s tactic is simple: readers are relentlessly pushed toward the paywall by a diminishing number of stories available for free: from 30 free articles per month in 2007, it is now down to 8; the other bold move is making registration mandatory in order to access even a single story.

Last year, the New York Times came up with a less readable strategy: the adjustable paywall. And it seems to work. The NYT has been able to collect 324,000 paid-for digital subscribers in nine months. Considering the NYT has about a quarter as many non-paying registered digital users as the FT (and therefore a smaller conversion potential), this is not bad.

The Times builds its paid-for strategy on three key factors:

1 / The uniqueness of its content. Let’s put it this way: The New York Times has no equivalent in the world when it comes to great journalism, period. This valued content helped collect 34 million unique visitors a month in its domestic market, and 47 million worldwide. More than any other newspaper in the world, the NYT has a huge base of loyal users. If it manages to convert only 5% of its global audience, say 2.4 million people, and extracts an ARPU (combined subscription and advertising) of $150 per year, it will gross $360 million, which largely covers the cost of its newsroom ($200 million a year, by far the largest in the world).

2 / The managed porosity of its paywall. One key requirement in building the digital subscription system for the Times was keeping as many of its readers as possible. There are two main reasons for this: high audience numbers are critical for advertising revenue; and the visibility factor is crucial for a news brand. This led to a system that targets the heaviest users. But even those can easily game the system (by using several browsers on several devices, I never bump into the paywall, with no particular desire to avoid it). Similarly, prices vary from $15 to $35… for exactly the same content — this is typical of a price structure aimed at audiences with flexible purchasing powers (it is widely established that richer people tend to opt for the most expensive package, regardless of its true value).

3 / Getting in bed with Apple. Since the early iPad days, The New York Times has been working closely with Apple for applications, subscriptions, and the nascent Newsstand. Again: thanks to its unique brand and the trust it carries, the NYT experiences no trouble collecting the precious customer data the app’s default settings fail to provide. In doing so, the Times benefited from Apple’s huge promotional vortex. The Apple system is highly beneficial when it comes to building an audience. But it does so at the expense of the essential customer relationship, and at a huge cost of 30% when the goal should rather be in the 10% range.
That was the Financial Times’ rationale for breaking the Apple leash. Last week, the FT went even further: it acquired the software firm Assanka, well-known for developing FT.com’s remarkable web app, which ensured its crucial independence from Apple (story in PaidContent). In itself, the move demonstrates the FT’s commitment to mobile products: HTML5 development remains difficult, and the FT decided it was critical to integrate Assanka’s development tools.

Of these three factors, the uniqueness of content remains the most potent one. With the inflation of aggregators and of social reading habits, the natural replication of information has turned into an overwhelming flood. The production of specific content — and its protection — thus becomes a key element in building value. As for price structures, there is no magic formula. Usually, the simpler the better (as Apple demonstrated) — especially for businesses that start from scratch. But, with pre-existing and different audience segments such as individual and corporate users, pricing decisions become more complicated and a diversified price list can prevent cannibalization. As for the Apple vs. independent app issue, my personal take is that sleeping with Apple is a quick short-term win, an easier strategy. But, in the long run, the independent way (which, after all, is an article of faith for Apple itself) will yield better results.

frederic.filloux@mondaynote.com

My 2012 Watch List

When it comes to cracking the digital media code, 2011 involved more testing than learning. Media companies seem to be locked in a feverish search mode. Their sense of urgency is reinforced by the continuous depletion of worldwide fundamentals: digital advertising’s encephalogram remains flat (at best); and when audiences grow, revenues do not necessarily follow. As for legacy media such as large quality newspapers, which still draw 70-80% of their revenue from print, they are caught in a double jeopardy: shrinking circulation plus looming downward price pressure on ads. We see an unforgiving mechanism at work: in mature markets such as Europe or North America, print media currently absorbs about 25% of ad spending while time spent on newspapers falls well below 10%. On digital media the balance is just the opposite: the web takes roughly 20% of ad investments for 25% of time spent; as for mobile devices, almost no ad money is spent there (<1%), but people spend about 10% of their time on their smartphones — and the growth is exponential.

Last year, we saw many efforts in the “right” direction—“right” being rapidly redefined. Below is a subjective list of moves, trends, innovations and attempts that burgeoned in 2011 and are likely to become more sharply defined in the coming year.

#1 Paid-for news. Many are trying, but no one has cracked the code—yet. Part of the problem is we are in a model that’s just the opposite of one-size-fits-all. We are likely to witness the emergence of many different ways of charging readers for quality content. Variables in the equation are many and sometimes hard to quantify:

- National vs. local
- General news vs. specialized
- Typologies of contents
- Most likely prime-time reading
- Most likely prime device used
- Target group structure.

Go figure a reliable business model with so many factors in the formula…

Paywalls come in different flavors. The prize for complexity goes to the New York Times’ Digital Subscription Plan launched March 17. According to the Times, its crystal-clear equation can be summed up as follows:

Once readers click on their 21st article [in a 4-week period], they will have the option of buying one of three digital news packages — $15 every four weeks for access to the Web site and a mobile phone app (or $195 for a full year), $20 for Web access and an iPad app ($260 a year) or $35 for an all-access plan ($455 a year). All subscribers who take home delivery of the paper will have free and unlimited access across all Times digital platforms except, for now, e-readers like the Amazon Kindle and the Barnes & Noble Nook.
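The yearly figures in that quote follow from the Times billing every four weeks, i.e. thirteen periods a year; a quick sanity check:

```python
# 13 four-week billing periods per year reproduce the quoted yearly prices.
plans = {"web + phone": (15, 195), "web + iPad": (20, 260), "all-access": (35, 455)}
for name, (per_period, yearly) in plans.items():
    assert per_period * 13 == yearly
    print(f"{name}: ${per_period} x 13 = ${per_period * 13}")
```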

Weirdly enough, this overly complex and pricey scheme seems to work: by the end of Q3, the Times had harvested 324,000 paid digital subscribers. This has to be viewed in the context of a site getting 47 million unique visitors per month on average, 33 million in the US alone. As for mobile access, 11 million iPhone apps and 3 million iPad apps have been downloaded.

To watch in 2012: how fast the NYT will recruit new paid digital subscribers. To get a good view of the key elements in NYT’s digital revenues, see Ken Doctor’s analysis in Newsonomics. Plus, after the sudden resignation of its CEO (Janet Robinson), the NYT might be entering a new era; she could be replaced by a predominantly digital person.

#2 The Web App Movement. The boldest move of the year was made by the Financial Times: in June, it unveiled a web app for iPad and iPhone, independent of Apple’s closed ecosystem. Among its many advantages, the web app allows FT.com to foster a close relationship with all its customers. In five months, FT.com has collected over 250,000 paying digital subscribers. Its digital operations now account for 30% of its revenue. (More on FT.com’s economics in this PaidContent story.)

To watch in 2012: The outlook seems quite good for the FT.com. Its marketing division is working hard to tap into a huge database of 4 million registered users, including 1 million for the independent web app, half of them putting it on the home screen of their device.

#3 Apple’s Newsstand is another item on the 2012 watch list. The project responded to publishers’ wish to see their prestigious titles rise above the crowd of garage apps, and to be able to propose long-term subscription plans. In October, Apple came up with its digital kiosk, essentially a shortcut for publishers’ apps displayed on a wooden shelf. For good measure, Apple added an exclusive feature: automated downloading. In short, it is a success for magazines, which register massive hikes in their digital sales, but much less so for dailies, which remain a bit shy. (We went through this in a previous Monday Note.)

To watch in 2012: the key issue for a massive move to Apple’s Newsstand remains customer data. Either Apple and the publishers will be able to work out a scheme in which about 70% of customers agree to provide their coordinates (see Apple’s Newsstand: Wait for 2.0), or the independent web app movement (FT.com-like) is likely to gain traction.

#4 The switch to Digital Editions, as opposed to dumb PDF, might play a critical role in the development of tangible revenue for the industry. Here, I spoke highly of great examples of tablet-specific applications such as BloombergBusinessWeek+ or the Guardian’s iPad version.

To watch in 2012: the adoption of Digital Editions will depend on three factors: 1) The publishers’ willingness to invest significantly in projects not profitable in the short term, 2) The advertising community’s ability to understand that digital editions will bring their clients much higher benefits than PDF versions or even web sites do, 3) The acceptance by various Audit Bureaus of Circulation that reader engagement is incomparably higher for designed-for-tablets editions (for more on the subject, read our recent column Unaccounted For Readers). If these three items are checked, 2012 is likely to be The Year of Digital Editions.

#5 The Huffington Post contagion. Its acquisition by AOL for $315m has propelled the HuffPo to new highs. The content—largely based on unabashed aggregation and legions of unpaid bloggers—remains mediocre, but no one really seems to care. As in the pre-bubble era, only eyeballs and hype count. The HuffPo has plenty of both. (OK, when you look at the numbers, as Ken Doctor did in this piece, you’ll see a HuffPo visitor brings in 3.5 times less money than an NYT one…)

To watch in 2012: This is the year the Huffington Post goes legit. Everyone is now kissing Arianna’s ring, including large media companies such as Le Monde, El País and Die Zeit, plus a couple of others in Europe that will help Arianna go global. As appetizing as an alliance between Alain Ducasse and McDonald’s. Sometimes the search for a strategy goes haywire…

frederic.filloux@mondaynote.com

The Best of Curation

I love talking about the things I enjoy using. The emerging ecosystem in which a bunch of smart people curate long-form journalism is definitely one of those things. The companies are called Instapaper, Longreads, Longform. I love the material they find for me, and I’m indebted to the developers who wrote the neat applications that help me manage my very own library of great stories.

My selection process for long articles (say, above 2,500 words) goes like this. It starts with installing the Read Later bookmarklet, developed by Instapaper, on all my internet browsers. When I stumble on something I have no time to dive into, I hit the Read Later tab in the browser’s bookmarks bar (below):

This causes the piece to be stored in the cloud. (There is another service/app of the same kind called Read it Later. I just got it this weekend and haven’t had much time to use it yet.)

Then I loaded the Instapaper app on my iPhone and my iPad; it works just fine. The stories I don’t have time to read at work are now available on my two nomad devices for my daily commute, my chronic insomnia, after-dinner relaxation or long flights. Unsurprisingly, topics center around business stories, media, tech; but they also extend to neuroscience and in-depth profiles of creative people in a wide range of fields. In doing so, I have re-created my own serendipitous environment; as I open the app, I always find something interesting I put aside a couple of weeks earlier.

My second source of good stories is the editors’ picks on long-form curation sites. Instapaper has its own Browse section, and my two favorites are Longreads and Longform. There are two other such sites I use less often: The Browser and Give Me Something to Read. They’re all built on the same idea: a self-organized community of thousands of people (see graph below) who pick articles they like and put them on Twitter (and also on Facebook and Tumblr); the feeds are then re-aggregated and curated by the sites’ editors. The process looks like this:

This system combines the best of Twitter (gathering a community that selects relevant content) with the final responsibility of human editors. Just as important, Read Later and Read It Later rely on hundreds of third-party applications that use their APIs (the interfaces that allow apps to talk to each other).

Then two questions arise:
– Does this model benefit publishers?
– What kind of business models can the aggregators hope for?

To the first question, the answer is yes and no. From their respective sites, these companies play a referrer role as they send traffic back to the original publishers. But when it comes to their mobile apps, these services become value killers: content is displayed in the apps without advertising. See screenshots from the iPhone Instapaper app below:

As for the Read It Later application, it offers (below) a web view and a reformatted text view. No need to be a certified ergonomist to guess which one will be used the most:

For good measure, let’s say Apple is not the last entity to add features that kill value by removing ads; below the same NYT web page in normal and “Reader” mode:

For now, publishers don’t seem to care much about this type of value hijacking. The rationale: such apps are still limited to early adopters. In a study released last week, Read It Later said it recorded a total of 47 million “saves” between May and October 2011 (and 36% growth between the first and the last month). Weirdly enough, most of the “saves” recorded involve tech-related stories from blogs such as LifeHacker and Gizmodo (both part of Gawker Media) or TechCrunch. Long-form journalism appears too small to be accounted for. Equally weird, when Read It Later takes a closer look at data coming from the New York Times, we see this:

Great writers indeed, but hardly long-form journalism. We would have expected a predominance of long feature stories; we get columnists and tech writers instead.

Similarly, Longreads.com gets about 100,000 unique visitors a month, founder Mark Armstrong told me. Over the last week, publishers altogether got 21,230 referrals from Longreads’ curated picks. Despite this modest volume, Longreads’ community of 40,000+ referrers is growing rapidly, at a rate of about a thousand every two weeks.

Let’s talk business model. The Longreads team includes former McCann Erickson creative director Joyce King Thomas (story in AdAge here). She seems more interested in good journalism than in loading the elegant Longreads with a Christmas tree of ads. In short, Longreads’ business future lies more in a membership system than in anything else — maybe some sponsorship, Armstrong acknowledges. The content Longreads promotes through its links addresses a solvent audience, one that knows great journalism comes with a price, and so do good tools to mine it. It shouldn’t be a problem to extract €10 or $20 a year, directly or via an app.

Having said that, I remain a bit skeptical of Longreads’ avoidance (for now) of the classic startup venture-capital route. Because barriers to entry in its type of business are low, Longreads ought to quickly build on its momentum and on the undisputed quality of its product. This means promotion, and also technology to extend the reach of the service, to secure control of the distribution channel — and to make it more mainstream.

frederic.filloux@mondaynote.com

Datamining Twitter

On its own, Twitter builds an image for companies; very few are aware of this fact. When a big surprise happens, it is too late: a corporation suddenly sees a facet of its business — most often a looming or developing crisis — flare up on Twitter. As always when a corporation is involved, there is money to be made by converting the problem into an opportunity: Social network intelligence is poised to become a big business.

In theory, when it comes to assessing the social media presence of a brand, Facebook is the place to go. But as brands flock to the dominant social network, the noise becomes overwhelming and the signal — what people really say about the brand — becomes hard to extract.

By comparison, Twitter more swiftly reflects the mood of users of a product or service. Everyone in the marketing/communication field becomes increasingly eager to know what Twitter is saying about a product defect, the perception of a strike or an environmental crisis. Twitter is the echo chamber, the pulse of public feelings. It therefore carries tremendous value.

Datamining Twitter is not trivial. By comparison, diving into newspaper or blog archives is easy: phrases are (usually) well constructed, names are spelled in full, slang words and just-invented jargon are relatively rare. On Twitter, by contrast, the 140-character limit forces a great deal of creativity. The Twitter lingo constantly evolves, and new names and characterizations flare up all the time, which rules out straightforward full-text analysis. The 250 million tweets per day are a moving target. A reliable quantitative analysis of the current mood is a big challenge.

Companies such as DataSift (launched last month) exploit the Twitter fire hose by relying on the 40-plus metadata fields included in a post. Because, in case you didn’t know it, an innocent-looking tweet like this one…

…is a rich trove of data. A year ago, Raffi Krikorian, a developer on Twitter’s API Platform team (spotted thanks to this story in ReadWriteWeb) revealed what lies behind the 140 characters. The image below…

…is a tear-down of a much larger one (here, on Krikorian’s blog) showing the depth of metadata associated with a tweet. Each tweet comes with information such as the author’s biography, level of engagement, popularity, assiduity, location (which can be quite precise in the case of a geotagged hotspot), etc. In this WiredUK interview, DataSift’s founder Nick Halstead mentions the example of people tweeting from Starbucks cafés:

I have recorded literally everything over the last few months about people checking in to Starbucks. They don’t need to say they’re in Starbucks, they can just be inside a location that is Starbucks, it may be people allowing Twitter to record where their geolocation is. So, I can tell you the average age of people who check into Starbucks in the UK.
Companies can come along and say: “I am a retail chain, if I supply you with the geodata of where all my stores are, tell me what people are saying when they’re near it, or in it”. Some stores don’t get a huge number of check-ins, but on aggregate over a month it’s very rare you can’t get a good sampling.

Well, think about it next time you tweet from a Starbucks.

DataSift further refined its service by teaming up with Lexalytics, a firm specialized in the new field of “sentiment analysis”, which measures the emotional tone of a text — very useful to assess the perception of a brand or a product.

Mesagraph, a Paris-based startup with a beachhead in California, plans a different approach. Instead of trying to guess the feelings of a Twitter crowd, it creates a web of connections between people, terms and concepts. Put another way, it creates a “structured serendipity” in which the user naturally expands the scope of a search way beyond the original query. Through its web-based application called Meaningly, Mesagraph is set to start a private beta this week, and a public one next January.

Here is how Meaningly works: it starts with the timelines of tens of thousands of Twitter feeds. When someone registers, Meaningly crawls his Twitter timeline and adds a second layer composed of the people the new user follows. It can grow very quickly. In this ever-expanding corpus of twitterers, Meaningly detects the influencers, i.e. the people most likely to be mentioned and retweeted, and who have the largest number of qualified followers. To do so, the algorithm applies an “influence index” based on specialized outlets such as Klout or Peer Index that measure someone’s influence on social media. (I have reservations regarding the actual value of such secret sauces: I see insightful people I follow lag well behind compulsive self-promoters.) Still, such metrics are used by Meaningly to reinforce a recommendation.

Then there is the search process. To solve the problem of the ever-morphing vernacular used on Twitter, Mesagraph opted to rely on Wikipedia (in English) to analyze the data it targets. Why Wikipedia? Because it’s vast (736,000 subjects), it’s constantly updated (including with the trendiest parlance), it’s linked, and it’s copyright-free. From it, Mesagraph’s crew extracted a first batch of 200,000 topics.

To find tweets on a particular subject, you first fill in the usual search box; Meaningly proposes a list of predefined topics, some expressed in its own terminology; then it shows a list of tweets based on the people you’re following, the people they follow, and “influencers” detected by Meaningly’s recommendation engine. Each tweet comes with a set of tags derived from the algorithm’s mapping table. These tags help further refine the search with terms users would not have thought of. Naturally, it is possible to create all sorts of custom queries that will capture relevant tweets as they show up, each feeding a specific timeline of tweets pertaining to the subject. At least that’s the idea; the pre-beta version I had access to last week only gave me a sketchy view of the service’s performance. I will do a full test-drive in due course.

Datamining Twitter has great potential for the news business. Think of it: instead of painstakingly building a list of relevant people who sometimes prattle endlessly, you’ll capture in your web of interests only the relevant tweets produced by your group and the groups it follows, all adding up in real time. This could be a great tool for following developing stories and enhancing live coverage. A permanent, precise and noise-free view of what’s hot on Twitter is a key component of the 360° view of the web every media outlet should now offer.

frederic.filloux@mondaynote.com

Unaccounted For Readers

Newspaper publishers need to quickly solve a troublesome equation. As carbon-based readership keeps dwindling, the growing legion of digital readers is poorly accounted for. This benefits advertisers who pay less for their presence.

Putting aside web site audience measurement, we’ll focus instead on the currently ill-defined notion of digital editions. A subject of importance, since digital editions are poised to play a key role in the future of online information.

First, definitions. The International Federation of Audit Bureaus of Circulation (IFABC) makes several distinctions that are adopted by most certification agencies around the world. The most straightforward is the “Digital Version” of a publication based on PDF. To be counted in the paid circulation of a newspaper or a magazine, a Digital Version must carry the same editorial content as well as the same advertising (volume and placement) as the paper version.

The second category, “Digital Edition”, is much fuzzier. Digital Editions come in different sizes and shapes, tailored for tablets or smartphones. Examples include The Guardian for iPad, Bloomberg Business Week+ and The Economist versions for iPad or iPhone (see previous Monday Note The Capsule’s Price). These editions have little to do with the print version. They are usually loaded with the same set of stories as their paper sibling, but add more pictures and, sometimes, animated infographics and video. The layout is designed to fit gesture-based navigation. Ads are different, too: far fewer modules, but with multiple screens and multimedia packages. The idea is fewer ads, each carrying more value per unit.

Here comes the absurdity.

Digital Versions (in PDF) are often hosted by digital kiosks carrying hundreds of publications, most often magazines in PDF facsimile. On many such kiosks, the best-selling product is the all-you-can-eat flat-rate plan; for users, the 20 dollars or euros per month plan encourages indiscriminate downloading. I chose the word users on purpose; readers would be presumptuous. In their first month, users will download 60 to 80 publications. After a quarter or so, downloads stabilize at about 30 publications a month. Are those actually read? Maybe some, but the rest of the bulk is barely leafed through. As a result, the value of the advertising carried by these glanced-at publications tends to zero (the value of an ad being — at least in theory — a function of the eyeballs it captures). It’s ridiculous to expect a “reader” who gulps down 30 publications to memorize a stack of 40 ad pages.

Nevertheless, such Digital Versions fall into the crucial “paid circulation” category which is still, unfortunately, the main gauge of market performance.

Noticing the absurdity of the open-bar kiosks, various circulation bureaus across the world have worked on ways to account for the behavior of this super-fly-by readership. In France, the OJD says that, in order to be counted as sold, the revenue derived from a digital publication must be higher than 25% of the single-copy price, all taxes included. As an example, take a user who opted for a €20 monthly unlimited plan, downloading 40 magazines in one month, each priced at €4.00 at a physical newsstand. To be counted as a valid sale, each magazine should bring in at least €1.00. But a consumption of 40 magazines for €20 only yields €0.50 per copy, half of the required minimum. Therefore, the OJD will only count half of the volume sold.
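The OJD rule lends itself to a small sketch (the 25% threshold is as described above; the actual audit formula may differ in its details):

```python
# Fraction of downloaded copies the OJD counts as paid circulation.
def counted_fraction(monthly_plan, downloads, cover_price):
    revenue_per_copy = monthly_plan / downloads
    threshold = 0.25 * cover_price   # revenue must reach 25% of the single-copy price
    return min(1.0, revenue_per_copy / threshold)

# Euro 20 unlimited plan, 40 magazines at 4.00 each:
# 0.50 per copy against a 1.00 minimum, so only half the volume is counted.
print(counted_fraction(20, 40, 4.00))  # 0.5
```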

These audit agencies’ efforts are fine but, regardless of all the tweaks in the way copies are counted, they don’t solve the underlying problem: the advertising remains vastly inefficient.

Turning now to Digital Editions, their adaptation to the needs of tablets and smartphones much improves advertising performance. Modules will be fewer, but far more engaging. Interactive ads will lead to what marketers call transformation, which is when someone actually orders an item or interacts with a seller (by requesting a test-drive of a car, for instance). All such things are impossible with a static ad embedded in a PDF.

In addition, Digital Editions can point to an individual reader. When I subscribed to the iPad version of the Guardian, or of BusinessWeek, I actually gave permission to what I consider trusted editorial brands to get my coordinates from Apple. (For high quality publications, the rate of opt-in is said to be above 50%. Not bad.) Practically speaking, it means the publisher will be able to directly interact with me. And, in the near future, for my very own digital edition, that same publisher will inject ads tailored to my socio-demographic profile, my location, etc. (don’t rush folks, I can wait for this type of targeting).

Summing up: we have Digital Versions, which are basically PDF images of the original print publications, and Digital Editions, which are more sophisticated and built, for obvious reasons, on a different structure.

And guess what? Most circulation bureaus segregate the two products: static ones are counted in the paid-circulation line — and consolidated with the paper's global circulation — but the tablet- or smartphone-designed versions appear in a separate line.

No big deal, you might say. But it actually is.

The problem is that media buyers almost exclusively consider the aggregated figure. They tend to overlook the value of itemized lines. As a consequence, the most sophisticated products, the ones able to deliver engagement and value to advertisers, are simply ignored.

Hence the publishers' furious lobbying to convince circulation bureaus to include Digital Editions in their global circulation numbers.

The British Audit Bureau of Circulations was quick to understand the importance of aggregating all forms of circulation on the sole basis of the editorial content, probably because many UK publishers have developed good tablet and smartphone editions. Just a year ago, it issued this unambiguous communiqué:

ABC announced today that it has agreed new Reporting Standards that will allow publishers to present both print and digital editions on one certificate. This offers more flexibility to publishers in how they can claim digital editions.

The new Cross Platform Certificate of Circulation enables publishers to provide a single view of their circulation figures. This includes the circulation of digital editions of magazines designed especially for mobile devices such as Apple’s iPad.

In many countries the issue is still on the table. To their consternation, newspaper and magazine publishers see the constant erosion of their paper versions; at the same time, they are required to serve the booming tablet and smartphone markets with dedicated digital editions that remain undervalued by the advertising community. Frustrating.

frederic.filloux@mondaynote.com