Browsing Category: online publishing

Ebooks Winners & Losers

online publishing. November 14, 2010.

Let’s come back to the ebook with more questions. There is no doubt: the digital book will find its place under the sun; its prospects look much better than those of the online press. In the first place, there isn’t an ingrained, now decade-old, habit of reading news for free on the internet. Second, the book (in its physical form) is the centuries-old incarnation of the “cognitive container”, with its unparalleled convenience and with a value attached to it. And third, it can’t be unbundled.

For the online press, on the contrary, more than 90% of online newspapers are available for free. The “cognitive container” is totally impractical in terms of size and readability; the interface sucks: most broadsheets’ stories run over two pages, but many readers don’t go beyond the jump. Lastly, the daily news is begging for unbundling (look at the Sunday edition of your favorite newspaper, with its ten-plus sections).

What does the book gain by switching to the electronic format?
Three things:
– new formats with rich media appealing to reluctant book readers (mainly the current Generations X and Y)
– enhanced capabilities such as search, the ability to create a personal table of contents, or to extract and index snippets
– a complete overhaul of the production system, which will breed new market opportunities as editorial works, once finished, will enjoy instant worldwide availability.

Of course, obstacles remain. A recent survey conducted by Bain & Co listed eight obstacles in the way of widespread ebook adoption (PDF here).

Interestingly enough, said its authors Patrick Behar and Laurent Colombani, the nostalgia for the “paper experience” is disconnected from the generation factor: all age groups continue to enjoy the book as a physical object. This guarantees some level of coexistence between the two media. But the authors also admit the next two barriers – the price of the device and reading comfort – will fade quickly as Moore’s Law still rules, both for mass-produced devices and for screen quality (see for instance Qualcomm’s Mirasol display, combining the advantages of electronic ink with the color depth of LCD screens — see also this story in The New York Times). On device prices, Bain & Co sees the following evolution and points to the thresholds required to convert purchase intents:

Using this backdrop, let’s now try to see how the different participants might fare. (For a close-up on the digital rights issue, see last week’s Monday Note.)

Manufacturers: uncertain. Users expect to pay around a hundred dollars or euros for an e-reader and will soon expect three times this amount for a full-color, full-feature tablet. To put things in perspective, a teardown analysis by iSuppli shows the cost of components for an Amazon Kindle is $176, versus $264 for the iPad and $214 for the Samsung Galaxy Tab. This gives an idea of how thin margins are likely to be in the future. In other words, manufacturers who can’t sell the blades (i.e. content) along with the razor will have a hard time making any money.


ebooks: trading digital rights, not files

online publishing. November 8, 2010.

There are many reasons to be bullish on ebooks. On the device side, the iPad set the standard (rather high) and triggered intense competition among manufacturers and operating system providers. On the people side, just take New York’s subway, or a high-speed train in Europe. And we’ve seen nothing yet: tablet prices will go down as cell phone carriers – and eventually media companies – subsidize e-readers. Before year-end, European telcos will offer the Samsung Galaxy — an Android-powered tablet — for €300 or less, preloaded with access to online bookstores and electronic newsstands. For the industry, this Christmas season is critical: tablet makers must secure defensible market territory before Apple’s probable roll-out of its next-generation iPad.

The content side remains more complicated to figure out. A first phase is likely to consist of an extension of what we have today, i.e. a transaction system based on book files: text-based books or richer media products. The main players will remain Amazon and the Apple iBooks store. But, in five to ten years, this way of dealing with intellectual content will be seen as primitive.

The true revolution will be a shift from a file transaction system to a rights transaction system. This transformation involves radical changes in the way we think of digital content, whether books, videos or even games.

For now, let’s focus on books. Here is how it could work.

We’re now in 2015. I read book-related content on a number of different devices: my smartphone, my high-definition tablet, and even my PC sometimes. (I personally do not believe in the TV for such products.) I want to spend a long weekend in Rome. Instead of buying a couple of books – one to organize my trip and another to use on location – I will buy rights to both.

As I download the books I bought rights to onto an iPad or a Samsung Galaxy, the content takes advantage of specific screen features and displays large pictures, some in 360°, zoomable panoramic format. My Microsoft tablet uses the extraordinary DeepZoom technology connected to Bing Maps Live View.
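The rights-versus-files distinction can be made concrete with a toy model: the purchase records a right to a work, not a copy of a file, and each device then redeems the rendition suited to it. Everything here — class names, work IDs, device classes — is invented for illustration; it is a sketch of the idea, not any actual store’s API.

```python
# A toy model of the rights-based transaction the note envisions:
# buying grants a right to a work; any device redeems a fitting rendition.
renditions = {  # work_id -> device class -> rendition delivered
    "rome-guide": {"phone": "text", "tablet": "rich-media-360"},
}

class RightsLocker:
    def __init__(self):
        self.rights = set()  # work_ids the user holds rights to

    def buy_rights(self, work_id):
        self.rights.add(work_id)

    def redeem(self, work_id, device):
        # No file is "owned"; the right is checked at delivery time.
        if work_id not in self.rights:
            raise PermissionError("no rights held for " + work_id)
        return renditions[work_id][device]

locker = RightsLocker()
locker.buy_rights("rome-guide")
print(locker.redeem("rome-guide", "tablet"))  # → rich-media-360
```

The same right yields plain text on a phone and the zoomable panoramic version on a tablet, which is the scenario the paragraph above describes.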


Expanding Into New Territories

advertising, newspapers, online publishing. October 24, 2010.

In defining business strategies for modern media such as online newspapers, the most difficult part is finding the right combination of revenue streams. Advertising, pay-per-view, flat fee… all are part of the new spectrum media companies now have to deal with.

The gamut looks like this:

As we can see, newspapers mostly consist of one product line, confined to the mainstream, value-added news category. By going digital, this segment is likely to lose most of its value (expect a 60% meltdown expressed in revenue per reader). Therefore, for these companies, it becomes critical to expand into new territories already taken over by other players. For instance, big media outlets endowed with strong brands should go into commodity news and participatory/social content. This doesn’t mean a frontal attack on Facebook or Twitter, obviously; instead, the new reality dictates using and monetizing through them (see last week’s Monday Note on Facebook monetization).

Ancillary publishing should also be considered a natural expansion: news outlets retain large editorial staffs that could be harnessed to produce high value digital books (see this earlier Monday Note on Profitable Long Form Journalism). The “Events” item, on the list/graph above, is more questionable, but it remains a significant source of potential income tied to the brand’s notoriety. I left aside the classifieds business: except for a few media groups (Schibsted all over Europe or Le Figaro Group in France) that boarded the train on time, positions are now too entrenched to justify an investment to gain a position in that segment.

Advertising is likely to remain the biggest money maker for the two dominant categories: Commodity/Participatory/Social Media and Mainstream Value-Added. Unfortunately, in its digital form, advertising has run in deflationary mode for the past decade due to flat (at best) CPMs, with huge inventories putting further pressure on prices.

Print doesn’t look great either as investments shift en masse to digital; this reflects the growing imbalance between time spent by users on print and advertising investments in the medium. According to Nielsen Media Research, the Internet now accounts for 38% of time spent but only for 8% of ad spending; newspapers are on a symmetrical trend as they captured 20% of advertising dollars for only 8% of users’ time.


Aggregators: the good ones vs. the looters

journalism, online publishing. September 19, 2010.

News aggregators come in all shapes and forms. Some truly help the producers of original content, but others simply amount to electronic ransacking.

My daily media routine starts on Techmeme. It is a pure aggregator — actually an aggrefilter, as coined by Dan Farber, at the time editor-in-chief of Cnet, who recommended it. This little site combines a simple concept with sophisticated execution. As shown in its “Leaderboard”, it crawls about a hundred sources and applies a clever algorithm using 600 parameters. More importantly, it adds a human editing layer. In this Read Write Web interview, Techmeme’s founder Gabe Rivera recently discussed his views on the importance of human editing and how it allowed him to fine-tune his site’s content. The result is one of the most useful ways of monitoring the tech sector. And, since Gabe Rivera also launched Mediagazer last year, I use it to watch the media space. (Another iteration of the concept, Memeorandum, aggregates political news; for reasons I don’t quite understand yet, it doesn’t work as well as the two others.)

Techmeme and Mediagazer benefit the news outlets they mention. Story excerpts are short enough to avoid being self-sufficient, and the hierarchical structure works. (Self-sufficient excerpts result in the aggregator not sending traffic back to the source — I’ll come to that later.) These twin sites are definitely among the best of their kind, resulting in a sound six-person business; not the next Google News, but doing OK financially.

In fact, in their very own fields, Techmeme and Mediagazer are more useful than Google News. By crawling through so many sources with the sole help of a powerful (but aging) algorithm, Google News ends up lacking finesse, precision and selectiveness. It’s a pure product of the engineering culture the search giant is built on, where obsessive hardcore binary thinking sweeps away words like “nuance”, “refinement”, “gradation”.

At the other end of the aggregator spectrum, we have The Huffington Post, one of the smartest digital news machines ever and, at the same time, the mother of all internet news impostures.

In France, where true journalism is in a state of exhaustion, everybody wants to make “Un Huffington Post à la Française“. The dream hardly comes from the best and the brightest. No, the fantasy agitates click-freaks building “traffic machines” on the generous losses their investors are willing to put up with. So, in spite of the red ink, why do they yearn for their Huffington Post so much? One word: numbers. As recalled in a Newsonomics story, in one year, the HuffPo doubled its audience. And now, the HuffPo is nibbling at the Times’ ankle: 13m unique visitors/month (Nielsen) vs. 19m for the Times. The HuffPo is a privately-held company with abundant funding and therefore does not release financial numbers. Revenues are said to be in the $15m range, and profitability is “near”…, this according to fascinated bloggers who kissed HuffPo CEO Eric Hippeau’s ring.


A Toolkit for the Cognitive Container

mobile internet, online publishing. August 29, 2010.

We now live in an apps world. “The web is dead” shouts Chris Anderson, Wired’s editor-in-chief. To make his point, he teamed up with Michael Wolff, a Vanity Fair writer. According to his latest theory, the internet is being taken over by mobile applications, and the web as we know it will soon be dead. Wired produces a Cisco-originated graph (below) showing the decrease in “web” traffic, down to a quarter of total internet traffic. The other 75%, says Anderson, includes video, peer-to-peer, gaming, voice-over-IP telephony, a large part of it encapsulated in apps, blah-blah-blah.

Well. Two things. To begin with, Chris Anderson isn’t the first to notice the rise in applications used to access the internet. Every news outlet’s digital division witnesses a sharp increase in its apps-related traffic. Here in France, Le Monde just said its iPhone apps now contribute about 20% of its entire traffic; its iPad application (a bit crude, but an efficient reader) has been downloaded 150,000 times. This is just the beginning, as publishers are working on new apps for the iPhone and the iPad, but also for Android, Windows 7 for Mobile and even Bada, Samsung’s proprietary OS. Many publishers forecast that 30% of their traffic will originate from mobile devices. This is consistent with Morgan Stanley’s prediction of smartphone shipments overtaking PCs two years from now (see below).

Such trends, when repackaged with Chris Anderson’s craft, ascend close to papal encyclical status (that’s Anderson’s particular skill; in a recent lecture, the British journalism professor George Brock called him “a professional exaggerator”). Never mind that the data he presents are not of the utmost rigor. As we can see here, he magnifies the demise of the web.

But byte-flow analysis is misleading. A more accurate measure would be time spent on the traditional web versus apps. For instance, neither Anderson nor the graph says in which category Facebook traffic falls. Is it an app? A web-based service? All we know is that American users spend a quarter of their time on it. I wouldn’t dare wreck such an attractive intellectual scaffolding with mere facts, but we can’t compare video and text-based pages on the basis of their byte-streams. I did the test: 3 minutes of YouTube video weigh 16 megabytes; the same time spent on text requires only a 20-kilobyte page, 800 times lighter. (The 8,000-word Anderson/Wolff story — devoured in 15 minutes at normal reading speed — weighs only 117 kilobytes.) When measuring things, the metric does alter the perspective…
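The 800-times figure above is simple arithmetic, using the article’s own numbers (decimal megabytes and kilobytes):

```python
# Back-of-the-envelope: why byte counts mislead when comparing video and text.
video_bytes = 16_000_000   # ~3 minutes of YouTube video, per the figure above
text_bytes = 20_000        # a text page read in roughly the same 3 minutes
ratio = video_bytes / text_bytes
print(ratio)  # → 800.0: byte-flow makes equal attention look wildly unequal
```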

Nevertheless, Anderson’s fatwa is gaining traction, as did, in its time, his Long Tail theory. Later, Anderson amended that postulate with the concept of a “strong head” (mandatory if you expect to make money with the tail). His “Free!” edict was also updated with the Freemium notion – a paid-for model tied to an incentive. But no more sarcasm; such silicon snake oil is a charming ingredient of our e-times.

Caution with Anderson’s theory aside, there is no doubt the app phenomenon will significantly impact the way we consume news: apps might become their main cognitive container.


The Facebook Gravitational Effect

online publishing, social networks. August 2, 2010.

Over the next twelve months, the media industry is likely to be split between those who master the Facebook system and those who don’t. A decade or so ago, for a print publication, going on the internet was seen as the best way to rejuvenate its audience; today, as web news audiences reach a plateau, Facebook is viewed as the most potent traffic booster.

If you are looking for the ultimate cyber black hole, point your browser toward Facebook. Beyond the 500-million-user milestone, an even more significant gravitational pull awaits the media industry. Here are a few facts to keep in mind.

— While the average online newspaper is viewed about 30 minutes per month (see data from the NAA), users spend 12 times more on Facebook: a worldwide average of 5hrs 52 minutes, 6hrs 02 minutes in the United States and 4hrs 12 minutes in France. Globally, social networks represent about 10% of total internet time, and 2/3 of the internet population visit one such network at least once a month. And growth is about 30% per year; compounded over three years, that’s 220% of today’s figure, a multiplication by 2.2!
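The multiplication-by-2.2 claim checks out; it is plain compounding of the 30% annual figure:

```python
# Compounding ~30% annual growth over three years.
growth = 1.30
factor = growth ** 3      # 1.3 * 1.3 * 1.3 ≈ 2.197
print(round(factor, 2))   # → 2.2, i.e. 220% of the starting audience
```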

— Facebook dwarfs other social networks: worldwide, measured in time per month, it weighs 6 times MySpace, 12 times Twitter and 30 times LinkedIn.

— Of the half billion users, 250 million log in every day, for about 34 minutes.

— Just as important, or more, 150 million access Facebook through their mobile phone.

— In June alone, on the US market, users spent more time on Facebook than on sites owned by Google, Microsoft and Yahoo combined (source: Nielsen).

Update Aug. 2: Nielsen just released this study showing that Americans spend 23% of their internet time on social media, vs. 16% a year ago.

The time-spent numbers are always spectacular… but some view them as misleading considering how users interact with Facebook: uploading videos or photographs inherently takes more time than glancing over Google News. Granted. Let’s then consider more media-related metrics.


Zero tolerance for latency

online publishing. July 18, 2010.

The big battle of the coming years will be a battle for time. For media-related software, as for web design, the fight will be for customers’ or readers’ attention; the challenge will be to prevent them from fleeing elsewhere and to give them more in less time.

More than ever, we are in a business where speed is key.

Look at how critical the speed factor has been in recent tech successes. To be sure, Google has been able to dominate search thanks to the quality of its algorithm. But Google’s win also came from its ability to deliver results faster than anyone else. This speed comes from the combined performance of three pieces of software: MapReduce, which slices a request into multiple chunks and assigns the work to multiple CPUs; the Google File System, at the core of Google’s distributed architecture; and the main database, BigTable — all supported by an unprecedented hardware deployment of several million servers. (A Cornell University paper by a Google Fellow gives a comprehensive description of the company’s architecture, PDF here). All three components are dedicated to speed of service, one of Google’s edges and a must to conquer cloud computing, where waiting for a file update or data transfer is not an option.
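The MapReduce model mentioned above is easy to illustrate with a toy word count. This single-process Python sketch mimics the shape of the computation — map over chunks, then merge the partial results — and is in no way Google’s actual implementation:

```python
# Toy MapReduce: "map" fans work out over input chunks,
# "reduce" merges the partial results into one answer.
from collections import Counter
from functools import reduce

def map_chunk(chunk):
    # Map step: count words in one slice of the input.
    return Counter(chunk.split())

def reduce_counts(a, b):
    # Reduce step: merge partial counts from two chunks.
    return a + b

chunks = ["the quick brown fox", "the lazy dog", "the fox"]
total = reduce(reduce_counts, (map_chunk(c) for c in chunks))
print(total["the"])  # → 3
```

In the real system each `map_chunk` call would run on a different machine against a slice of a huge dataset, which is where the speed comes from.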

Speed is also a key success factor for Amazon and iTunes — the two kings of friction-free transactions — but also for the video streaming service Hulu, and for business applications such as Salesforce. As for hardware, Apple taught us that speed and fluidity weigh more than a long feature list. Look at the four iterations of the iPhone (leaving aside the antenna issue for once): most of the hardware improvements have been aimed at increasing speed and fluidity of use (OK, battery life as well).

Two of the reasons why the iPad is about to take over the netbook market are its near-zero boot time and its instant application launch. Both are a blow to the PC, which remains stuck in its passé architecture despite huge increases in processing power (or perhaps because abundant power facilitates wasteful programming practices). This contributed to the pace of iPad adoption: it took 28 days for the iPad to reach a million users, vs. 180 days for the netbook. Consumers love speedy devices.

And, as a final example of speed-related services, we can mention the classifieds websites derived from the highly successful Blocket in Sweden (Blocket is owned by my former employer Schibsted). The site is so huge that the equivalent of about 5.5% of the entire Swedish GNP goes through it! As for the French version, Le Bon Coin (see Monday Note’s story), it delivers 2000 pages per second while relying on fairly modest hardware. But the site is entirely coded in C, a language that provides lightning-fast data delivery (users view 40 pages per visit on average, eight to ten times more than on any news site).

Let’s face it, consumers’ tolerance for latency is dropping to zero. They want ultra-fast boot times, quick network access, fast page display and downloads.

Sadly, this issue has yet to percolate to the surface of the news media industry’s consciousness. Too many web publishers remain convinced that the quality of their editorial is far more important than the underlying technology that supports it. This became even more obvious with the inception of the iPad and its applications. The vast majority of news media publishers have not focused enough on speed and seamlessness. Let me repeat a point I’ve already made here several times: the digital news sector needs more investment in technology and techies.


A New Gallic Idea: Taxing Google

online publishing. January 10, 2010.

The French cultural elite has come up with a bunch of ideas to stimulate the legal consumption of digital goods. The basic principles are stunningly original: subsidize and tax. These creations are detailed in a report ordered by the Président de la République from the Ministry of Culture. This is the way it works here: when a problem plagues the private sector, the executive branch tasks clever, carefully picked fellows with writing a report. It involves hearings — about a hundred in this case — held behind closed doors, off the record; no one can figure out who stands for what.

This time, the selected authors of the report, titled “Création et Internet” (available here), are: Patrick Zelnik, a music producer; Jacques Toubon, a 69-year-old former all-purpose minister (including Culture in 1993-1995); and Guillaume Cerutti, the CEO of Sotheby’s France. Not exactly digital front-runners. As a music producer, Zelnik has brilliantly missed the digital train; Toubon has seen more mice in government offices than on his desk; and Cerutti runs an auction house where sales are concluded with a hammer blow, not a touchpad click.

One of the most spectacular strokes of inspiration involves the creation of a taxpayer-subsidized “Online Music Card”. It could work like this: a young internet user, a compulsive music downloader, buys a card for €20-€25. But the card carries a face value of €50. Then, after a while — expect a few years for roughly a million young people above 24 — the magic happens: this crowd mutates into legal-download addicts and forgets the appeal of illegal Net music (which, in France, is 20 times bigger than the legal variety). That’s a hell of a piece of good news for Apple: its iTunes cards could be bought in bulk with French taxpayers’ money. Bear with me: that’s Christopher Columbus’ Egg. How come we didn’t think of it earlier? Flooding the young addicted-to-free generation with subsidies to reverse the anything-goes, culture-copyright-looting tsunami! You know what? Sometimes, I’m proud of my country.

Second idea, my favorite: taxing Google. The concept, so to speak, is the following.


The 2010 Media Watch List

advertising, magazines, mobile internet, newspapers, online publishing. January 3, 2010.

No predictions, just a few of many hot topics for the newborn year.

Paywalls. 2010 could see a significant number of newspapers jumping into the paid-for option. Among the conditions to be met:

– Grouping around a toll collector. It could be Journalism Online in the US, a big media group in Europe, or even Google — should a truce occur between the search giant and publishers. From the user’s standpoint, the payment intermediary must be friction free, able to operate on any platform (web, mobile) and across brands.
– Publishers will have to devise a clever price structure. If a knee-jerk move takes them back to the tired basic-content vs. premium-content duality, they are doomed.
– State-of-the-art web analytics affords much more refined tactics around users, platforms segmentation, etc. In addition, a paid-for system must be able to deal with many sources of income, such as monthly subscriptions, pay by-the-click, metering system based on downloads, time spent, etc.
– Publishers must act in concert. In every market, the biggest players will have to carefully coordinate their moves to paid models: everybody must jump at the same time. This is easier said than done: there is always the risk that a rogue player will “cheat”, that is, break the pact in order to secure a better market position. Also, too much “coordination” could encourage a disgruntled competitor to sue on anti-trust grounds.

Daily newspapers shifting to periodicals. How many dailies in the world will shift from seven or five issues a week to three or two? Undoubtedly, many. This is a better trend than it sounds. For breaking news, print is no longer relevant, but it will remain the medium of choice for long-form pieces. Newspapers publishing a few times a week will gain by becoming more magazine-like in their news coverage; they’ll save their story-breaking capabilities for web versions. In this regard, the mobile web will soon become bigger than the original, PC-based variant.
The instant web. Twitter and its offspring will thrive in 2010. The likeliest offshoot is video-twittering, as pocket-size camcorders continue to spread (see Gizmodo comparison here). These will be supplemented by an upcoming generation of high-definition devices with Net connectivity through wifi or 3G networks.

Advertising Disintermediation. The media buying side is definitely not the sector to be in for the next decade. First of all, ad spending will continue its adjustment to the actual time spent on various media. In 2008, print captured 20% of advertising dollars for only 8% of the time spent; in comparison, digital got 29% of our time but only 8% of ad spending. Those numbers, those discrepancies, tell us the correction is far from over.
Unless they devise smarter ways to analyze web audiences (see below, the audience measurement issue) and, as a result, clearly define the true value of each group of users, there is no longer a need for the media buyers’ costly intermediation. The trend is there: the most agile websites will go directly to brands and advertisers, proposing sophisticated integration mechanisms for their sites and mobile platforms. So do social networks such as France’s Skyrock, with its 25m users (see our case study).
Anyway, Google will settle the intermediation issue, as its boss candidly puts it in Ken Auletta’s book (1): “Google wants to be the agent that sells the ads on all distribution platforms, whether it is print, television, radio or the internet. (…) As our technology gets better, we will be able to replace some of their [large companies] internal captive sales forces”. Media buyers, consider yourselves notified: you’re toast.
As for the creative side, we hope advertising agencies will, at last, wake up and think of new ways to integrate their messages in digital media layouts (as in print), rather than trying to divert users away from media sites (see previous Monday Note on the inherent design flaws of the internet).


Measuring time spent on a web page

online publishing. May 24, 2009.

How much time is actually spent on websites? New technologies are emerging, starting with time spent on individual pages and drilling down to page segments. Such technologies will lead to improved monetization; they could even spell good news for paid sites.  Here is why.

First, display ads. Banners and other modules still represent 30% to 45% of the sector (depending on the market). For a brand, display ads remain the best way to actually be seen on a web page, as an ad is seen in a magazine or a newspaper. At least in theory. In fact, there are several catches. The first one is the discrepancy between the size of the average computer screen and the length of the average web page. It takes about 5-6 scrolls to get to the bottom of a page. (Some sites require as many as 25-30 scrolls – the gateway to carpal tunnel syndrome.)
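The scroll count is easy to approximate from viewport and page heights; the pixel figures below are my own assumptions for a typical 2009-era setup, not numbers from the article:

```python
# Rough estimate behind the "5-6 scrolls" figure.
viewport_px = 900   # assumed visible height of an average screen
page_px = 5000      # assumed total height of an average article page
scrolls = page_px / viewport_px
print(round(scrolls, 1))  # → 5.6 full-screen scrolls to reach the bottom
```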

Evidently, not all modules get the same amount of viewer attention. As a result, not all modules hold the same value for the media outlet, nor do they create the same ROI for the advertiser. However, today, ad spaces are sold at roughly the same price, the main variable being the type of page and the editorial context (home page or article page; in the sports, business or politics section of the site).

The second catch is the advertising module’s actual goal: is it supposed to be just seen (Chanel brand awareness, for instance) or clicked on (win a trip to the Bahamas)? The latter sort, the clicked-on module, conflicts with the editorial environment. On the one hand, the media outlet (its editors, at least) works hard at making the content compelling, relevant and interesting; the last thing it wants is visitors clicking away to the Bahamas vacation site. On the other hand, the advertiser wants an editorial context without too much power to retain attention: so-so content will make visitors more inclined to click on the Bahamas banner. In these conflicting goals lies one of the main problems of internet advertising: a growing number of advertisers want to pay for performance, i.e. only when people actually click on the module.

This makes time-measurement relevant.
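One plausible mechanism for such measurement — a sketch of the general technique, not a description of any specific product; the heartbeat interval and function names are invented: the browser sends a small beacon every few seconds while a page (or page segment) is visible, and the server sums the intervals, discarding gaps where the reader was away.

```python
# Derive time spent from periodic "heartbeat" beacons sent by the page.
HEARTBEAT_SECONDS = 5  # assumed beacon interval

def time_spent(beacon_timestamps, max_gap=2 * HEARTBEAT_SECONDS):
    # Sum gaps between consecutive beacons, ignoring gaps where the
    # reader left (more than two missed heartbeats in a row).
    ts = sorted(beacon_timestamps)
    return sum(b - a for a, b in zip(ts, ts[1:]) if b - a <= max_gap)

# Beacons at t=0, 5, 10 seconds, then the tab goes idle, then 60, 65:
print(time_spent([0, 5, 10, 60, 65]))  # → 15 seconds of actual attention
```

Applied per page segment rather than per page, the same idea yields the module-level attention data that would let the clicked-on and just-seen ads of the previous paragraphs be priced differently.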