Carriers Whine: We Wuz Robbed!

[First: No (new) iPad report, yet. In the meantime you can feast your eyes, or nurse your dyspepsia, by googling “iPad 3” or “new iPad”. This will tell you almost everything (minus the Fingerspitzengefühl, the all-important gut-feel) about the product, and definitely everything about the kommentariat. If we thought we’d plumbed the nadir with the iPhone 4S…]

Dictionary.com unfolds the historical and linguistic links to We Wuz Robbed, and translates: We were cheated out of a victory; we were tricked or outsmarted.

The gist of the carriers’ lament is this: We do the hard work and someone else is making all the money. And by someone else they mean a certain interloping personal computer company that has, without the slightest experience in the technical (and deal-making) intricacies of the mobile phone industry, inexplicably lucked into the smartphone business and pocketed an unfair share of the cash.

The PR flacks go to work and give us this “rich” WSJ article, “How the iPhone Zapped Carriers”, from which I extract a few of its many gems:

Americans are glued to their mobile devices, obsessively calling, texting, emailing and downloading applications. So why is the U.S. wireless industry in such straits, as shown by AT&T Inc.’s crucial but failed plan to buy T-Mobile USA?
A big reason is that carriers are losing power to the device and software makers riding the smartphone boom.
For the most part, it’s really been a wealth transfer from AT&T shareholders to Apple shareholders…
Device makers and app developers are having the fun, while the carriers are doing the grunt work.

Nowhere does the churnalist entertain anything other than the carrier party line. Not a word from users, from Google, from the device makers and software freeloaders who are “riding” the boom and having all the fun.

The article’s bias is clear, in its language and innumeracy. First, AT&T’s “crucial but failed” attempt to buy T-Mobile was a bid to restrict competition and raise prices. Had the merger gone through, customers would be the ones crying “We wuz robbed.”

Second, the article mentions a decrease in the sacrosanct monthly ARPU (Average Revenue Per User) to $46.09, down $2 from the previous year. Behold once more the lack of respect for the reader exhibited by the fake four-digit precision. But beyond the attempted intimidation, what is the meaning of the $46.09 average, what ingredients does it mix together?

Curious, I ask the oracle a simple question: ATT ARPU. The first hit is a happy — triumphant almost — AT&T press release for Q4, 2010:

AT&T Reports Record 2.8 Million Wireless Net Adds, Strong U-verse Sales, Continued Revenue Gains in the Fourth Quarter
…This marked the eighth consecutive quarter AT&T has posted a year-over-year increase in postpaid ARPU.

And the news gets better. The Q4 2011 investor presentation (download it here) yields these morsels:


The average ARPU for smartphones on AT&T’s network is 1.9 times that of the company’s non-smartphone devices.

The $64 ARPU is for all wireless devices, smart (with their 1.9x revenue premium) and dumb. Two years ago, AT&T CEO Randall Stephenson pronounced himself happy with the “over $100” ARPU from iPhone subscribers. So how much does AT&T get for its iPhones today?

From Apple’s latest earnings release, we know the iPhone ASP (Average Selling Price) is about $660. AT&T subscribers pay $200, plus $20 or more in accessories, directly to Apple. This leaves $440 to be fronted by AT&T. Subtract that $440 number from $2880 (the customary 24 month x $120 agreement), and there’s $2440 left — or about $100 per month of “real” iPhone ARPU.
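For the arithmetic-minded, here’s that back-of-the-envelope calculation as a small Python sketch. The figures are this column’s estimates, not AT&T disclosures:

```python
# Back-of-the-envelope "real" iPhone ARPU, using the column's estimates.

iphone_asp = 660                      # Apple's average selling price per iPhone
paid_by_subscriber = 200 + 20         # upfront price plus accessories, paid to Apple
subsidy = iphone_asp - paid_by_subscriber   # fronted by AT&T: $440

contract_months = 24
monthly_bill = 120
contract_revenue = contract_months * monthly_bill   # the customary $2,880

net_revenue = contract_revenue - subsidy            # $2,440 left for AT&T
real_monthly_arpu = net_revenue / contract_months   # about $100 a month

print(f"Subsidy: ${subsidy}, real iPhone ARPU: ${real_monthly_arpu:.2f}/month")
```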

But there’s a problem with my back-of-the-envelope calculations. Data consumption makes up 40% of AT&T’s service revenue; I can’t prove it, but I suspect that iPhone subscribers are more “generous”, that they consume more data, than AT&T’s other customers. In Q4 2011, that probably worked out to a nice ARPU of about $120 per iPhone — and users will pay even more now that the “unlimited” plans are no longer offered.

What about Verizon? Are they being “zapped,” too? The oracle obligingly responds to “Verizon ARPU” with a few good links, such as this one, trumpeting Verizon’s robust health:

US carrier landscape in Q3: Verizon records biggest ARPU

Jumping to Verizon’s Q4 2011 numbers and to the January 24th earnings call transcript (courtesy of Seeking Alpha), we needn’t shed tears. The company’s smartphone business is doing well:

Ever since the iPhone barged in, we’ve heard carriers cry extortion; they complain that Apple’s prices — to them — are too high. But they took the iPhone and its prices for two simple reasons: higher ARPUs and fear of losing subscribers to a competitor, the cost of watching your most “productive” subscribers — the ones who contribute to the 1.9x ARPU factor — go elsewhere. It’s better to bet the company on the iPhone than on not having it.

Sprint agrees. According to our WSJ story, Sprint has committed $15B to the purchase of iPhones for a period ending in 2014. (Another WSJ story says $20B, but what’s $5B these days?) As obverse evidence, we have T-Mobile’s simple explanation for its subscriber losses: No iPhone.

Carriers subsidize smartphones because it’s what their customers want, and they put up with the iPhone’s higher price because it’s what their customers want most. Just last week, a T-Mobile exec called for an end to smartphone subsidies — but refused to go first. When/if the iPhone becomes less desired, the carrier subsidies will subside.

I introduce this thought by way of providing context for another statement in the WSJ article:

“… subsidizing a customer buying an iPhone would cost 40%, or about $200, more than another kind of phone, on average.”

(Let’s see: $200 divided by 24 months, that’s about $8 a month. Will an iPhone customer yield the extra $8 in monthly ARPU? The carriers’ accountants seem to think so.)
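A quick sanity check of the accountants’ bet, again with this column’s rough numbers:

```python
# Does the extra $200 iPhone subsidy pay for itself over a contract?

extra_subsidy = 200
contract_months = 24
extra_monthly_cost = extra_subsidy / contract_months   # about $8 a month

# For comparison, AT&T says smartphone ARPU is 1.9x that of dumbphones;
# on a roughly $64 blended ARPU, that premium dwarfs $8 a month.
print(f"Extra monthly cost of the iPhone subsidy: ${extra_monthly_cost:.2f}")
```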

Still on subsidies, try this thought experiment: You walk into an AT&T or Verizon store with a fully paid, unlocked phone. Will you get a lower monthly deal? I asked and, in both cases, the answer was a polite no. See this online chat with a Verizon representative:

No deal on an unsubsidized phone. The logic is impeccable: We complain about subsidies while using them to tie customers up.

Carriers want to imagine a world in which the “excess” $200-per-iPhone subsidy moves back to its rightful home: the carriers’ coffers. With 180 million iPhones sold so far, that’s $36B of pure carrier profit…and Apple would still enjoy an ASP of more than $400 for its iPhone. Cosmic order would be restored.

Back in this reality, carriers complain about excessive subsidies and threats of disintermediation, of attempts to make them “dumb pipes”. But nowhere do we see a discussion of the ratio between the cost of an additional cell tower and the new revenue it generates. We can be sure carriers know this number, but they’re not sharing. It must be a good one: We now see carriers eager to offer their new LTE infrastructure as data pipes for the (unsubsidized) new iPad.

As a final offering, regard this handy chart from Fierce Wireless:

Some observations and a little math, in no particular order.

  • The first four carriers, Verizon, AT&T, Sprint, and T-Mobile, have 295M subscribers, 90% of the total US market.
  • If we multiply each carrier’s ARPU by its number of subs, sum the results, and divide by the 295M, the overall ARPU works out to about $50 (our journalist would write $49.69).
  • For the two leading carriers, the churn rate (people leaving) is quite low, about 1%. Compare this to the iPhone-less competitors: Before Sprint got the iPhone 4S, its churn was about twice AT&T’s; T-Mobile’s is close to three times Verizon’s.
  • AT&T continues to benefit from its early bet on the iPhone and, in Q3, added more subs than Verizon.
  • Both Verizon and AT&T get about 40% of their service revenue from data.
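The weighted-average calculation in the second bullet looks like this in Python. The per-carrier ARPU figures below are illustrative placeholders, not the actual Fierce Wireless chart values (which aren’t reproduced here), so the blended result only approximates the ~$50 quoted above:

```python
# Blended ARPU across the four leading US carriers, weighted by subscribers.
# Subscriber totals match the 295M in the text; ARPU values are placeholders.

carriers = {
    # name: (subscribers in millions, monthly ARPU in dollars)
    "Verizon":  (108, 53.14),
    "AT&T":     (101, 48.00),
    "Sprint":   (53,  47.00),
    "T-Mobile": (33,  46.00),
}

total_subs = sum(subs for subs, _ in carriers.values())
blended_arpu = sum(subs * arpu for subs, arpu in carriers.values()) / total_subs

print(f"{total_subs}M subscribers, blended ARPU of about ${blended_arpu:.2f}")
```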


Ebooks: The Giant Disruption

(Part of a series)

In the last twelve months, I’ve bought fewer printed books than ever — and read more books than ever. I have switched to ebooks. My personal library is with me at all times, on my iPad and my iPhone (and in the cloud), allowing me to switch reading devices as conditions dictate. I also own a Kindle; I use it mostly during the summer, to read in broad daylight: an iPad won’t work on a sunny café terrace.

I don’t care about the device itself (I let the market decide), but I do care about a few key features. Screen quality is essential: in that respect, the iPhone’s Retina Display is unbeatable in the LED-backlit world, and the Kindle’s e-ink is just perfect in natural light. Because I often devour at least two books in parallel, I don’t want to struggle to land on the page I was reading when I switch devices. They must sync seamlessly, period, even over an imperfect cellular network. (And most of the time, they do.)

I’m an ebook convert. Not by ideology (I love dead-tree books, and I enjoy giving them to friends and family), just pragmatism. Ebooks are great for impulse buying. Let’s say I read a story in a magazine and find the author particularly brilliant, or want to drill further down into the subject thanks to a pointer to a nicely rated book: I cut and paste the reference into the Amazon Kindle store or Apple’s iBooks store and, one-click™ later, the book is mine. Most of the time, it’s much cheaper than the print version (especially in the case of imported books).

This leads to a thought about the coming ebook disruption: We’ve seen nothing yet. Eighteen months ago, I was asked to run an ebooks roundtable for the Forum d’Avignon (an ultra-elitist cultural gathering judiciously set in the Palais des Papes). Preparing for the event, I visited most of the French publishers and came to realize how blind they were to the looming earthquake. They viewed their ability to line up great authors as a seawall against the digital tsunami. In their minds, they might, at some point, have to make a deal with Amazon or Apple in order to channel digital distribution of their oeuvres to geeks like me. But the bulk of their production would sagely remain stacked on bookstore shelves. Too many publishing industry professionals still hope for a soft transition.

How wrong.

In less than a year, the ground has shifted in ways the players didn’t foresee, unraveling the book publishing industry and disrupting key components of the food chain such as deal structures and distribution arrangements.

Let’s just consider what’s going on in self-publishing.

“Vanity publishing” was often seen as the lousiest way to land on a bookstore shelf. In a country such as France, with its strong history of magisterial publishing houses, confessing to being published “à compte d’auteur” (at the writer’s expense) results in social banishment. In the United Kingdom or the US, this is no longer the case. Trade blogs and publications are filled with tales of out-of-nowhere self-publishing hits, or of prominent authors switching to DIY mode, cutting off agent and publisher at once.

And guess who is this trend’s grand accelerator? Amazon. To get the idea, read these two articles: last October’s piece in the New York Times, Amazon Signs Up Authors, Writing Publishers Out of Deal, and a recent Bloomberg BusinessWeek cover story on Amazon’s Hit Man. The villain of those tales is former über-literary agent Larry Kirshbaum, hired last May by the e-retailing giant to corral famous writers using six-figure advances. (By the way, the BBW piece is subtitled “A tale of books, betrayal, and the (alleged) secret plot to destroy literature”, a hard-sell come-on…) Of course, you can also read the tale of self-publishing’s poster child in this excellent profile of Amanda Hocking in the Guardian.

Here is what’s going on:

– Amazon is intent on taking over the bulk of the publishing business by capturing key layers of intermediation. For the market’s upper crust, by deploying agents under the leadership of Mr. Kirshbaum and his regional surrogates, Amazon will at some point “own” the entire talent-scouting food chain. For the bottom end, a tech company like Amazon is well-positioned for real-time monitoring and early detection of an author gaining traction in e-sales, agitating on the blogosphere, or buzzing on social networks. (Pitching such a scheme to French éditeurs is like speaking Urdu to them.)

– For authors, the growth of e-publishing makes the business model increasingly attractive. Despite a dizzying price deflation (with ebooks selling for $2.99), higher volumes and higher royalty percentages change the game. In the too-good-to-be-an-example Amanda Hocking story, here is the math as told in the Guardian piece:

Though [a $2.99 price is] cheap compared with the $10 and upwards charged for printed books, [Hocking] gained a much greater proportion of the royalties. Amazon would give her 30% of all royalties for the 99-cent books, rising to 70% for the $2.99 editions – a much greater proportion than the traditional 10 or 15% that publishing houses award their authors. You don’t have to be much of a mathematician to see the attraction of those figures: 70% of $2.99 is $2.09; 10% of a paperback priced at $9.99 is 99 cents. Multiply that by a million – last November Hocking entered the hallowed halls of the Kindle Million Club, with more than 1m copies sold – and you are talking megabucks.

Again, aspiring (or proven) authors need to cool down when looking at such numbers. The Kindle Million Club mentioned above counts only 11 members to date — and most were best-selling authors in the physical world beforehand.
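For the record, here’s the Guardian’s royalty arithmetic restated in a few lines of Python:

```python
# The self-publishing vs. traditional royalty math from the Hocking story.

ebook_price = 2.99
ebook_royalty_rate = 0.70            # Amazon's rate for $2.99 editions
paperback_price = 9.99
paperback_royalty_rate = 0.10        # typical traditional publishing royalty

per_ebook = ebook_price * ebook_royalty_rate              # about $2.09 a copy
per_paperback = paperback_price * paperback_royalty_rate  # about 99 cents a copy

copies = 1_000_000   # Kindle Million Club territory
print(f"Self-published: ${per_ebook * copies:,.0f} "
      f"vs. traditional: ${per_paperback * copies:,.0f}")
```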

– But at some point, the iceberg will capsize and the eBook will become the publishing market’s primary engine. Authors will go digital-first and the most successful will land a traditional book deal with legacy publishers.

Shift happens, brutally sometimes.


Next week: the editing equation, and how the rise of e-publishing will segment the craftsmanship of book-making.

Apple’s Grand User Experience Unification

Apple just announced Mountain Lion, the 10.8 version of the Mac operating system, scheduled for delivery in late summer of this year. I dutifully installed the developer preview; it works, mostly (see here for PCMag’s list of notable features, and here for a quick video tour). More important: less than a year after the introduction of OS X 10.7, we now have two data points and can draw a line…and the slope confirms our expectations: Mac OS X begat iOS but, now, iOS fathers Apple’s Unified User Experience.

iOS leadership came about for two reasons.

First, the numbers. You’ve probably seen this “viral” Asymco graph, compliments of Horace Dediu, that compares the installed base growth for various Apple products, alive and historic:

Quoting Horace:

The iOS platform overtook the OS X platform in under four years, and more iOS devices were sold in 2011 (156 million) than all the Macs ever sold (122 million).

No one, Apple execs included, expected such an explosion. But here we are: The son of OS X is now the Big Daddy and everything else must line up behind it. Imagine an alternate universe in which Scott Forstall, Apple’s iOS czar, hadn’t won the decision to pick a version of Mac OS X as the software engine for the iPhone. (Scott is also the “father” of Siri. He convinced Jobs to buy the company and to put substantial resources behind it after the acquisition.)

Just as important, iOS provides a fresh (or “fresh-ish”) start. iOS is a rebirth, rid of (many) sins of the past. Because it must run on less of everything — RAM, MIPS, screen, power — engineers were “forced” to shed the layers of software silt that accumulate inside any OS. This gave iOS designers and coders the opportunity to rethink the User Experience (UX), and to pass these ideas back to the Mac.

As examples: The multi-finger trackpad gestures, inherited from iOS, are welcome additions to OS X; they help us find our way in a maze of application windows. So are the full-screen apps with their felicitous and subtly size-conscious ways of hiding and revealing menubars and the Dock. The animation may differ between the smallest 11.6” MacBook Air and a large 27” screen, but physically it feels the same.

Under the hood, we discern an iOS-inspired way of installing and uninstalling applications. In another trick learned from iOS, Lion manages application state from fully on to fully off and, more interestingly, various levels of readiness in between.

[For an in-depth and opinionated discussion of the technical aspects of OS X Lion — including glimpses into the Mac’s possible future — you can spend $4.99 on Mac OS X 10.7 Lion: the Ars Technica Review. It’s available in Kindle ebook form, but not as an Apple iBook. You can also turn to Fraser Speirs’ lucid discussion of iOS multitasking here, with videos here.]

In 2007, while clearly coming from the same company, the Mac and the iPhone had markedly different UXs. The phone’s small screen was the biggest reason for the differences. When the iPad came out in 2010, some folks joked that the new device was simply a Brobdingnagian iPhone, perfect for the fat-fingered. But the size-appropriate translation of the iOS UX onto a much bigger screen hinted at things to come…and, indeed, later that year Apple announced its intention to further adapt iOS user interface ideas and fold them into the Mac.

If the Mac is a now-traditional personal computer, the iPad is a more personal one, and the iPhone is really personal. (This should please Messrs. Ballmer and Shaw at Microsoft. According to their hymnal, there is no shift to a post-PC era, it’s turtles, err… PCs all the way down to smartphones.)

For a company that prides itself on simplicity and elegance, it only makes sense that Apple would offer a consistent UX across all its devices, a GUUX, a Grand Unified User Experience. Apple customers should be able to move easily and naturally from one device to another, selecting the best tool for the task at hand. Add another unification, iCloud storage services, and Apple can offer more reasons to buy more of its products.

It’s a lovely, soothing theory.

In reality, the Grand Unification isn’t there yet. We still face antiquated limitations, bad bugs, aging applications, and capricious flourishes.

Let’s start with the menubar at the top of the OS X screen. It worked well on the original Mac with its small screen and lack of multitasking, but on today’s 21.5” or 27” displays, with the many applications they contain, the menubar is bad ergonomics and leads to confusion. Novice and experienced users alike are often misled: If you unintentionally click outside the app window, the menubar at the top of the screen becomes associated with another app, or with the Finder:

On apps such as Pages, it gets worse: You have to deal with two menubars, the one inside the app window, and the one at the top of the screen. Why does Apple cling to this antiquity?

(Friends tell me that it would be difficult to move the top menubar into the app. Perhaps…but more difficult than moving from the undebuggable OS 9 to the Unix/NextStep-based OS X?)

In Microsoft’s Windows, each app window carries its own menubar; there’s no need to move to the top of the big screen to access the File menu, and there’s no confusion about the context of your action. Furthermore, when you close an app’s last window, the app quits. Apple recently started doing something similar, but it’s apparently limited to a few utility programs; big apps don’t quit when their last window is closed.

Why not take a few good ideas from Windows?

Moving to bad bugs, the Mac’s Mail app is still an abomination, an app that was either poorly architected or poorly implemented or both. It keeps quitting or freezing on my machines. All on its own — meaning with no prodding by this user — Mail will spin the dreaded beachball for tens of seconds. Is it talking to itself?

Another of my favorite apps, Preview, will suddenly lose part of its mind:

With the Mountain Lion announcement, Apple execs tell us that OS X is now on a once-a-year release regimen. Great…but what about iWork apps? When will they be updated?

I have a long list of iWork bugs, and some are really embarrassing. Take a simple Numbers graph and copy it into Pages:

Works fine…but it loses its title and legend when copied into Word. It must be Microsoft’s fault, right? No, the same thing happens when the chart is moved to Apple’s own Preview:

(When I tried it again, just to make sure this wasn’t a “luser” error, Preview crashed on me.)

Speaking of Microsoft Word, the US version knows the punctuation rules for both US English and French. Not my version of Pages…which is why I have to keep Word around.

Some apps aren’t merely not improving, they seem to be going downhill. The Lion version of Address Book made it harder to manage multiple books, and the app ignores some of Apple’s own UI conventions, such as double-clicking at the top of the window to minimize it.

I’ll finish this litany with Apple’s skeuomorphic flourishes. This apparently is a new fashion: Make computer objects look more like the “real” thing in order to provide familiarity. Sometimes, as with the faux stitched leather and bits of torn paper in the iCal app, familiarity breeds contempt:

The Address Book is even worse, I won’t reproduce it here.

Sure, a good UX needs to extend a welcome mat, but we don’t need extraneous, functionally pointless simulacra of the physical world. Perhaps these details are just a case of brainstorm hysteria in Cupertino: “Idea: Put a rod and hoops at the top of each window, hang drapes on the side and give users a choice of styles!”

Apple must choose between its established Bauhaus elegance and ’70s Rich Corinthian Leather:

Let’s end on more measured notes.

  • Bugs and brain flatulence aside, a Grand Unified UX is the right idea. Who will argue against making it easier to move from one Apple device to another? Especially when using fresh and successful iPhone/iPad constructs as the model.
  • Lion and Mountain Lion are transitional versions, and the awkwardness shows…but they’re moving in the right direction. Mountain Lion, even in its buggy preview form, shows a large number of nice improvements over Lion.
  • It’s been a very long time — three years — since the last iWork release. But this lull is very likely due to Apple’s focus on the first set of iOS releases. Sooner or later, we’ll see a fresh iWork that cures the most glaring bugs — and that makes OS X and iOS file formats more compatible.

Lastly, having spent a little more time with Mountain Lion, I hope we’ll get the newer version of Safari ASAP. At the top of the list of neat improvements: we’ll be granted the ability to search directly from the URL bar. Yes, finally, just like Opera, Firefox, Chrome and Internet Explorer…


Twitter, Facebook and Apps Scams

Here is the latest Twitter scam, one I heard about this week. Consider two fictitious media outlets, the Gazette and the Tribune, operating in the same market, targeting the same demographics, competing for the same online eyeballs (and the brains behind them). Our two online papers rely on four key traffic drivers:

  1. Their own editorial efforts, aimed at building the brand and establishing a trusted relationship with the readers. Essential but, by itself, insufficient to reach the critical mass needed to lure advertisers.
  2. Getting in bed with Google, with a two-stroke tactic: Search Engine Optimization (SEO), which helps a site climb to the top of the search results page; and Search Engine Marketing (SEM), in which a brand buys keywords to position its ads in the best possible context.
  3. An audience acquisition strategy that will artificially grow page views as well as the unique visitors count. Some sites will aggregate audiences that are remotely related to their core product, but that will better dress them up for the advertising market (more on this in a forthcoming column).
  4. An intelligent use of social media such as Facebook, Twitter, and LinkedIn, as well as of the apps ecosystem.

Coming back to the Tribune vs. Gazette competition, let’s see how they deal with the last item.

For both, Twitter is a reasonable source of audience, worth a few percentage points of traffic. More importantly, Twitter is a strong promotional vehicle. With 27,850 followers, the Tribune lags behind the Gazette and its 40,000 followers. Something must be done. The Tribune decides to work with a social media specialist. Over a couple of months, the firm gets the Tribune to follow (in the Twitter sense) most of the individuals who already are Gazette followers. This mechanically translates into a “follow-back” effect powered by implicit flattery: ‘Wow, I’ve been spotted by the Tribune, I must have a voice of some sort…’ In doing so, the Tribune will be able to vacuum up about a quarter to a third — a credible follow-back rate — of the Gazette’s followers. Later, the Tribune will “unfollow” the defectors to cover its tracks.

Compared to other, more juvenile shenanigans, that’s a rather sophisticated scam. After all, in our example, one outlet is exploiting its competitor’s audience the way it would buy a database of prospects. It’s not ethical, but it’s not illegal. And it’s effective: a significant portion of the followers so “converted” to the Tribune are likely to stick with it, since the two outlets cover the same beat.

Sometimes, only size matters. Last December, the French blogger Cyroul (also a digital media consultant) uncovered a scam performed by Fred & Farid, one of the hippest advertising agencies. In his post (in French), Cyroul explained how the ad agency gained 5,000 followers in a matter of five days. As in the previous example, the technique is based on mass following but, this time, it has nothing to do with recruiting some form of “qualified” audience. Fred & Farid arranged to follow robots that, in turn, follow their account. The result is a large number of new followers from Japan or China, all sharing the same characteristic: the ratio of accounts followed to followers is about one, which is, Cyroul says, the signature of bot-driven mass following. Pathetic indeed. His conclusion:

One day, your “influence” will be measured against real followers or fans, as opposed to bot-induced or artificial accounts. Then brands will weep as their fan pages turn out to be worth nothing; ad agencies will cry as well when they realize that Twitter is worth nothing.
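Cyroul’s bot signature is easy to turn into a crude filter. A minimal Python sketch, in which the account data and the 10% tolerance are purely illustrative assumptions:

```python
# Flag accounts whose following/followers ratio sits suspiciously close to 1,
# the signature Cyroul attributes to bot-driven mass following.

def looks_like_mass_follow_bot(following: int, followers: int,
                               tolerance: float = 0.10) -> bool:
    """True when the following/followers ratio is within `tolerance` of 1."""
    if followers == 0:
        return False  # zero-follower accounts are a different pathology
    ratio = following / followers
    return abs(ratio - 1.0) <= tolerance

# Illustrative accounts, not real data.
accounts = {
    "probable_bot":   (5_120, 5_018),    # ratio close to 1
    "normal_reader":  (310, 45),         # follows far more than follow back
    "popular_outlet": (1_200, 40_000),   # followed far more than it follows
}

for name, (following, followers) in accounts.items():
    print(name, looks_like_mass_follow_bot(following, followers))
```

A real detector would also look at account age, posting activity, and follower geography, but the ratio alone already separates the three profiles above.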

But wait, there are higher numbers on the crudeness scale: If you type “increase Facebook fans” in Google, you’ll get swamped with offers. Wading through the search results, I spotted one carrying a wide range of products: 10,000 views on YouTube for €189; 2,000 Facebook “Likes” for €159; 10,000 followers on Twitter for €890, etc. You provide your URL, you pay on a secure server, it stays anonymous, and the goods are delivered within 5 to 30 days.

The private sector is now allocating huge resources to fight the growing business of internet scams. Sometimes, it has to be done in an opaque way. One of the reasons why Google is not saying much about its ranking algorithm is — also — to prevent fraud.

As for Apple, its application ecosystem faces the same problem. Over time, its ranking system became questionable as bots and download farms joined the fray. In a nutshell, as with Facebook fan harvesting, the more you were willing to pay, the more notoriety you got, thanks to inflated rankings and bogus reviews. Last week, Apple issued this warning to its developer community:

Adhering to Guidelines on Third-Party Marketing Services

Feb 6, 2012
Once you build a great app, you want everyone to know about it. However, when you promote your app, you should avoid using services that advertise or guarantee top placement in App Store charts. Even if you are not personally engaged in manipulating App Store chart rankings or user reviews, employing services that do so on your behalf may result in the loss of your Apple Developer Program membership.

Evidently, Apple has a reliability issue with how its half million apps are ranked and evaluated by users. Eventually, it could affect its business: the App Store could become a bazaar in which the true value of a product gets lost in a quagmire of mediocre apps. This, by the way, is a push in favor of the Apple-curated guide Jean-Louis described in a previous Monday Note (see Why Apple Should Follow Michelin). In the UK, several print publishers have detected the need for independent reviews; there, newsstands carry a dozen app review magazines, covering not only Apple but the Android market as well.

Obviously there is a market for that.

Because they depend heavily on advertising, preventing scams is critical for social networks such as Facebook and Twitter. In Facebook’s pre-IPO filing, I saw no mention of scams in the Risk Factors section, except in the vaguest of terms. As for Twitter, all we know is that its true audience is much smaller than the company says it is: Business Insider calculated that, of the 175 million accounts claimed by Twitter, 90 million have zero followers.

For now, the system still holds up. Brands remain convinced that their notoriety is directly tied to the number of fans and followers they claim — or that their ad agency has been able to channel to them. But how efficient is this, really? How large is the proportion of bogus audiences? Today, there appears to be no reliable metric to assess the value of a fan or a follower. And if there is, no one wants to know.


Strange Facebook Economics

Exactly three years ago, Charlie Rose interviewed Marc Andreessen, the creator of Netscape and a Facebook board member. In his trademark rapid-fire talk, Marc shared his views on Facebook. (Keep the February 2009 context in mind: the social network had 175 million users, and Microsoft had just made an investment setting Facebook’s valuation at $15 billion.)

About Mark Zuckerberg’s vision:

The big vision basically is — I mean the way I would articulate it is connect everybody on the planet, right? So I mean [there are] 175 million people on the thing now. Adding a huge number of users every day. 6 billion people on the planet. Probably 3 billion of them with modern electricity and maybe telephones. So maybe the total addressable market today is 3 billion people. 175 million to 3 billion is a big challenge. A big opportunity.

About monetization:

There’s a lot of confusion out there. Facebook is deliberately not taking the kind of normal brand advertising that a lot of Web sites will take. So you go to a company like Yahoo which is another fantastic business and they’ve got these banner ads and brand ads all over the place, Facebook has made a strategic decision not to take a lot of that business in favor of building its own sort of organic business model; and it’s still in the process of doing that and if they crack the code, which I think they will, then I think it will be very successful and will be very large. The fallback position is to just take normal advertising. And if Facebook just turned on the spigot for normal advertising today, it’d be doing over a billion dollars in revenue. So it’s much more a matter of long term (…) It could sell out the homepage and it would start making just a gigantic amount of money. So there’s just tremendous potential and it’s just a question exactly how they choose to exploit it. What’s significant about that is that Mark [Zuckerberg] is very determined to build a long term company.

In another interview last year, commenting on Facebook’s generous cumulative funding ($1.3 billion as of January 2011), Andreessen said the whole amount was actually a shrewd investment, as it translated into an acquisition cost of “one or two dollars per user” ($1.53 to be precise), which sounded perfectly acceptable to him.

Now, take a look at last week’s pre-IPO filing: Marc Andreessen was right both in 2009 and in 2011.

Last year, each of the 845 million active members brought in $4.39 in revenue and $1.18 in net income. Even better, based on the $3.9 billion in cash and marketable securities on FB’s balance sheet, each of these users generated a cosy cash input of $1.53.

How much is the market expected to value each user after the IPO? Based on the projected $100 billion valuation, each Facebooker would carry a value of $118. Keep this number in mind.

How does it compare with other media and internet properties?

Take LinkedIn: The social network for professionals is far less glamorous than Facebook, a fact reflected in its members’ valuation. Today, LinkedIn has about 145 million users and a $7.7 billion market cap; that’s a value of $53 per user, about half that of a Facebooker. A bit strange, considering that LinkedIn’s demographics are, in theory, much more attractive than Facebook’s advertising-wise. (See a detailed analysis here.) Per user and per year, LinkedIn makes $3.50 in revenue and $0.78 in profit.

Let’s now switch to traditional media. Some, like the New York Times, were put on “deathwatch” by Marc Andreessen three years ago.

Assessing the number of people who interact with NYT brands is quite difficult. For the company’s numerous websites, you have to deal with domestic and global reach: 43 million UVs for the Times globally, 60 million for its guide site About.com, etc. Then you must take into account print circulation for the NY Times and the Boston Globe, the number of readers per physical copy, audience overlaps between businesses, etc.

I’ll venture an approximate figure of 50 million people worldwide who, one way or another, are in some form of regular contact with one of the NYT’s brands. Based on today’s $1.14 billion market cap, this yields a valuation of $23 per NYT customer, five times less than Facebook’s. That’s normal, many would say. Except for one fact: In 2011, each NYT customer brought in $46 in revenue, almost ten times more than Facebook’s. As for profit (a meager $56 million for the NYT), each customer brought in a little more than a dollar.
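The per-user arithmetic in this piece is simple division; here is a quick sketch using the figures quoted above (I use $3.71B for Facebook’s 2011 revenue so the $4.39-per-user figure falls out exactly, and back-compute NYT revenue from the $46-per-customer number):

```python
# Redo the per-user division with the figures quoted in this piece.
# Facebook: 845M users, $100B projected valuation, $3.71B revenue, ~$1B profit.
# NYT: ~50M estimated customers, $1.14B market cap, $2.3B revenue, $56M profit.
figures = {
    #            users,  market value, revenue, profit
    "Facebook": (845e6,  100e9,        3.71e9,  1.0e9),
    "NYT":      (50e6,   1.14e9,       2.3e9,   56e6),
}

for name, (users, value, revenue, profit) in figures.items():
    print(f"{name:8s} value/user ${value / users:6.0f}  "
          f"revenue/user ${revenue / users:5.2f}  profit/user ${profit / users:4.2f}")
```

The output makes the inversion explicit: the market values a Facebook user at roughly five times a NYT customer, while the NYT customer generates roughly ten times the revenue.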

I did the same math with various media companies operating in print, digital, broadcast and TV. Gannett Company, for instance, makes between $50 and $80 per year in revenue per customer, and, depending on how you count, the market values that customer at about $50.

Indeed, measured by trends (double-digit growth), global reach and hype, Facebook and LinkedIn are flying high while traditional media are struggling; where Facebook achieves a 47% profit margin, Gannett and News Corp are in the 10% range.

Still. If we pause at today’s snapshot, Facebook’s economics appear out of touch with reality: each customer brings in ten times less than legacy media’s, and the market values that customer at up to five times more. And while News Corp gets a P/E of 17 and Gannett a P/E of 8, Facebook is preparing to offer shares at a multiple of 100 times its earnings and 25 times its revenue. Even by Silicon Valley’s ambitious standards, market expectations for Facebook seem excessive: Apple is worth 13 times its earnings and Google 20 times.

Facebook remains a stunning achievement: it combines long term vision, remarkable execution, and a ferociously focused founder. But, even with a potential of 3 billion internet-connected people in 2016 vs. 1.6 billion in 2010 (a Boston Consulting Group projection), it seems the market has put Facebook in a dangerous bubble of its own.


Facebook: The Revenge of the Nerds

We’ll look at the other side of the coin in a moment, but first let’s give credit where it’s due and admire the obverse: I’m delighted to see Facebook going public, just deserts for Mark Zuckerberg and his group of very smart techies.

If you have the time and inclination, take a walk through Facebook’s SEC S-1 filing in preparation for its IPO; you won’t regret it. Pay particular attention to the manifesto Zuckerberg calls The Hacker Way, and allow this aging geek (I’ll soon be 68) to sing its praises. Consider this verse:

We have a saying: “Move fast and break things.” The idea is that if you never break anything, you’re probably not moving fast enough.

Where others have stumbled as they shuffled, Zuckerberg and his gang have raced to create a technical giant. The infrastructure required to support 845M “monthly active” users who upload 250M photos each day might not be Google-size (yet), but it’s definitely Google-class. To show off this plumbing, Zuckerberg & Co. took a few pages from Apple’s (and Google’s) stylebook: They stuck to a simple, clean UI, unlike Myspace and its pavement-pizza chic.

Facebook’s success isn’t just a sweet retort to Zuckerberg’s critics, it’s a confirmation of what makes Silicon Valley tick: techies, geeks, and nerds. While the technoïds aren’t always right — far from it — the great ones end up making and running great companies. The establishment bluestockings may roll their eyes at the hoodies and bare feet, but look at what happens when the suits take over. Look at HP, Yahoo!, or Cisco; regard Apple during its dark age.

It wasn’t very long ago, I recall gleefully, that the kommentariat cluck-clucked disapprovingly over the founder’s “obvious” immaturity, his tactless management style, his poor public-speaking manner. But when you read Facebook’s S-1, you’ll realize how good a negotiator Zuckerberg must have been early on. Since its inception, the company has raised about $1.5B, an unusually large amount for a startup, and well above the threshold that usually translates into management castration as investors demand a bigger share of the spoils, ransom for their assumption of greater risk.

Instead, Zuckerberg got investors to go for the radius of the pizza as opposed to the angle of the slice, their ownership percentage. Zuckerberg may own “only” 28% of Facebook, but he manufactured agreements that give him effective control of the company with 57% of voting rights.

Some will downplay the achievement: ‘He must have gotten good advice.’ Of course…but he followed it. When you’re in charge, the quality of the advice is no excuse for bad performance; conversely, good advice shouldn’t be used to dismiss good results.

Speaking of which, in 2011, the company’s revenue was $3.7B, with a tidy $1B profit and $3.8B in cash – to which they’ll be adding at least $5B in the upcoming IPO. This is a nicely profitable company. The Washington Post’s Wonkblog put Facebook’s performance in graphic perspective:

Take a look at the number of employees: a mere 3,200. With $3.7B in revenue, that works out to $1.2M per worker. Turning to cash per worker ($3.9B / 3,200 = $1.2M), Facebook is about as rich as Uncle Apple, with its $1.3M in cash per “full-time equivalent” employee. It’s a remarkable achievement for any company, and unheard of for one so young.
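Redoing the per-employee division with the numbers just cited, as a quick sanity check:

```python
# Facebook per-employee figures, from the numbers cited above.
employees = 3200
revenue, cash = 3.7e9, 3.9e9

rev_per_head = revenue / employees    # ~ $1.16M
cash_per_head = cash / employees      # ~ $1.22M

print(f"revenue/employee: ${rev_per_head / 1e6:.2f}M")
print(f"cash/employee:    ${cash_per_head / 1e6:.2f}M")
```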

But it’s not all roses.

As Zuckerberg’s Letter To Investors properly contends, Facebook can “change how people relate to their governments and social institutions” and “improve how people connect to businesses and the economy”. Making tons of money in the process is totally legit…as long as a key condition is met: informed consent. And “informed consent” means just that: information that a reasonably attentive individual — as opposed to an Apple patent attorney — can understand.

On this count, Facebook’s actions have been less than transparent. Perhaps it’s a consequence of the Hacker Way: Ship first, ask questions later. Or perhaps Facebook is betting we’re too lazy and ignorant to read the fine print, just like wireless carriers who try to dazzle us with their sleight-of-plan hoodwinks.

Furthermore, Facebook’s ubiquity and power raise the spectre of yet another Walled Garden: Is Zuckerberg’s company killing the Open Web by superimposing a proprietary lattice of connections between users, including companies that use Facebook to do business with its community? Many have noted that Google can’t really index the Facebook web. As John Battelle puts it:

Sure, Google can crawl Facebook’s “public pages,” but those represent a tiny fraction of the “pages” on Facebook, and are not informed by the crucial signals of identity and relationship which give those pages meaning.

(True. But does Google want to index Facebook? Behind the Open posture stands Google’s real aim: Bulldozing anything and anyone standing between their ad engines and their targets.)

Lastly, let’s consider the Web 2.0 proverb: If the product is free, You are the product. With that in mind, I couldn’t help wincing at the opening of Zuckerberg’s Letter To Investors:

Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected.

It reminded me of the Don’t Be Evil puffery in Google’s own S-1:

Don’t be evil. We believe strongly that in the long term, we will be better served — as shareholders and in all other ways — by a company that does good things for the world even if we forgo short term gains. This is an important aspect of our culture and is broadly shared within the company.

When I read those words back in 2004, I thought Google was either incredibly naive or a little too obvious in their do-good posture. Either way, we know what has happened: Google needs to be all things to all people, all the time, everywhere, on every device, in order to irradiate us with their advertising photons. Google’s motto should be Disintermediation R’Us. Instead, their mission statement reads:

Organize the world’s information and make it universally accessible and useful.

…all in the name of selling ads.

In his letter, Zuckerberg comes up with a similarly lofty sentiment:

There is a huge need and a huge opportunity to get everyone in the world connected, to give everyone a voice and to help transform society for the future.

I don’t mean to diminish Zuckerberg’s accomplishments. He’s built an epoch-making company, I’m delighted by the team of highly skilled technologists he’s assembled — a team that includes some dear friends of mine — and the tech culture they evince. He’s surrounded himself with sharp business people and extracted oodles of money from strong investors; he’s Bill Gates/Larry Ellison/Page+Brin caliber or above…and I’m thrilled to see the former naysayers now eating out of his hand.

So why not just say something like…

We help people connect in safe, convenient, and innovative ways. In doing so, we’ve built a business of historic proportions. We make money selling advertising that is finely tuned to reach our users in cost-competitive ways. Because we believe in Facebook’s unlimited potential, we will manage ourselves for the long term rather than for short-term profit. We have built an ownership and control structure to accomplish this goal.

There’s good evidence that the people who buy Amazon, Google, and Facebook shares are willing to let these companies run for the long term rather than for the next quarter. Smart people don’t need lofty mission statements to guide their investments; they watch what the execs do and decide whether they’re using “the long term” as an excuse or really aiming for it.


Piracy is part of the digital ecosystem

In the summer of 2009, I found myself invited to a small party in an old bourgeois apartment with breathtaking views of the Champ-de-Mars and Eiffel Tower. The gathering was meant to be an informal discussion among media people about Nicolas Sarkozy’s push for the HADOPI anti-piracy bill. The risk of a heated debate was very limited: everyone in this little crowd of artists, TV and movie producers, and journalists was on the same side, that is, against the proposed law. HADOPI was of the same breed as the now-comatose American PIPA (Protect IP Act) and SOPA (Stop Online Piracy Act). The French law was based on a three-strikes-and-you’re-disconnected system, aimed at the most compulsive downloaders.

The discussion started with a little tour de table, in which everyone had to explain his/her view of the law. I used the standard Alcoholics Anonymous introduction: “I’m Frederic, and I’ve been downloading for several years. I started with the seven seasons of The West Wing, and I keep downloading at a sustained rate. Worse, my kids inherited my reprehensible habit and I failed to curb their bad behavior. Even worse, I harbor no intent to give up since I refuse to wait until next year to see a dubbed version of Damages on a French TV network… I can’t stand Glenn Close speaking French, you see…” It turned out that everybody admitted to copious downloading, making this little sample of the anti-Sarkozy media elite a potential target for HADOPI enforcers. (Since then, a parliamentary filibuster managed to emasculate the bill.)

When it comes to digital piracy, there is a great deal of hypocrisy. One way or another, everyone is involved.

For some large players — allegedly on the plaintiff side — the sinning even takes on industrial proportions. Take the music industry.

In October 2003, Wired ran an interesting piece about a company specializing in tracking entertainment content over the internet. BigChampagne, located in Beverly Hills, is to the digital era what Billboard magazine was to the analog world. Except that BigChampagne essentially tracks the illegal content that circulates on the web. It does so with incredible precision, matching IP addresses and zip codes to find out what’s hot on peer-to-peer networks. In his Wired piece, Jeff Howe explains:

BigChampagne’s clients can pull up information about popularity and market share (what percentage of file-sharers have a given song). They can also drill down into specific markets – to see, for example, that 38.35 percent of file-sharers in Omaha, Nebraska, have a song from the new 50 Cent album.

No wonder some clients pay BigChampagne up to $40,000 a month for such data. They use BigChampagne’s valuable intelligence to apply gentle pressure on local radio stations to air the very tunes favored by downloaders. For a long time, illegal file-sharing has been a powerful marketing and promotional tool for the music industry.

For the software industry, tolerance of pirated content has been part of the ecosystem for quite a while as well. Many of us recall relying on pirated versions of Photoshop, Illustrator or QuarkXPress to learn how to use those products. It is widely assumed that Adobe and Quark have floated new releases of their products to spread word-of-mouth among creative users. And it worked fine. (Now, everyone relies on a much more efficient and controlled mechanism of test versions, free trials, video tutorials, etc.)

There is no doubt, though, that piracy inflicts a great deal of harm on the software industry. Take Microsoft and the Chinese market. For the Seattle-area firm, the US and Chinese markets are roughly the same size: 75 million PC shipments in the US for 2010, 68 million in China. But in China, 78% of PC software is pirated, vs. 20% in the US; as a result, Microsoft makes the same revenue from China as from… the Netherlands.
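A back-of-the-envelope sketch shows how piracy shrinks the paying market. The shipment and piracy rates are the ones quoted above; the one-paid-license-per-non-pirated-PC simplification is mine, not Microsoft’s actual accounting:

```python
# Rough sketch: effective paid-software base, assuming one paid license
# per non-pirated PC shipment (a simplification for illustration).
us_shipments, us_piracy = 75e6, 0.20
cn_shipments, cn_piracy = 68e6, 0.78

us_paid = us_shipments * (1 - us_piracy)   # 60M paying PCs
cn_paid = cn_shipments * (1 - cn_piracy)   # ~15M paying PCs

print(f"US paying base:    {us_paid / 1e6:.0f}M PCs")
print(f"China paying base: {cn_paid / 1e6:.0f}M PCs")
print(f"China is ~{cn_paid / us_paid:.0%} of the US paying market")
```

Similar shipment volumes, but a paying base four times smaller: roughly the gap between a major market and a Netherlands-sized one.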

More broadly, how large is piracy today? At the last Consumer Electronics Show, the British market intelligence firm Envisional Ltd. presented its remarkable State of Digital Piracy Study (PDF here). Here are some highlights:
- Pirated content accounts for 24% of worldwide internet bandwidth consumption.
- The biggest chunk is carried by BitTorrent (the protocol used for file sharing); it accounts for about 40% of the illegitimate traffic in Europe and 20% in the US (including downstream and upstream). Worldwide, BitTorrent gets 250 million UVs per month.
- The second tier comprises the so-called cyberlockers (5% of global bandwidth), among them the infamous MegaUpload, raided a few days ago by the FBI and the New Zealand police. Of the 500 million unique visitors per month to cyberlockers, MegaUpload drained 93 million UVs. (To put things in perspective, the entire US newspaper industry gets about 110 million UVs per month.) The cyberlocker segment has twice the users but consumes eight times less bandwidth than BitTorrent, simply because files are much bigger on the peer-to-peer system.
- The third significant segment of piracy is illegal video streaming (1.4% of global bandwidth).
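The users-versus-bandwidth contrast in those bullets can be made concrete with the Envisional numbers (European bandwidth shares; the per-user ratio is my own derived figure, not one from the study):

```python
# Ratios from the Envisional figures quoted above (European bandwidth shares).
bt_share, cl_share = 0.40, 0.05      # share of total bandwidth, Europe
bt_users, cl_users = 250e6, 500e6    # monthly unique visitors, worldwide

bandwidth_ratio = bt_share / cl_share                            # 8x
per_user_ratio = (bt_share / bt_users) / (cl_share / cl_users)   # 16x

print(f"BitTorrent carries {bandwidth_ratio:.0f}x the bandwidth of cyberlockers")
print(f"with half as many users: ~{per_user_ratio:.0f}x more bandwidth per user")
```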

There are three ways to fight piracy: endless legal actions, legally blocking access, or creating alternative legit offers.

The sue-them-until-they-die approach is mostly a US-centric one. It will never yield great results (aside from huge legal fees), due to the decentralized nature of the internet (there are no central servers for BitTorrent) and to the tolerance of some countries for harboring cyberlockers.

As for law-based enforcement systems such as the French HADOPI or the American SOPA/PIPA, they don’t work either. HADOPI proved to be as porous as chalk, and US lawmakers had to yield to the public outcry. Both bills were poorly designed and inefficient.

The figures compiled by Envisional Ltd. are indeed a plea for the third approach: the creation of legitimate offers.

Take a look at the figures below, which show the peak bandwidth distribution in the US and Europe. You will notice that the paid-for Netflix service takes the same share of US traffic as BitTorrent does of Europe’s!

US Bandwidth Consumption:

Europe Bandwidth Consumption:

Source: Envisional Ltd

These stats offer compelling proof that creating legitimate commercial alternatives is a good way to contain piracy. The conclusion is hardly news: the choice between pirated and legit content comes down to a combination of ease-of-use, pricing, and availability in a given market.

For content such as music, TV series or movies, services like Netflix, iTunes or even the BBC iPlayer go in the right direction. But one key obstacle remains: the balkanized internet (see a previous Monday Note, Balkanizing the Web), i.e. the country zoning system. By slicing the global audience into regional markets, both the industry (Apple, for instance) and local governments neglect a key fact: today’s digital audience is increasingly multilingual, or at least more eager to consume content in English as it is released. Today we have entertainment products, carefully designed to fit a global audience, waiting months before becoming available on the global market. As long as this absurdity remains, piracy will flourish.

As for the price, it has to match the ARPU generated by advertising-supported broadcast. On that score, I doubt a TV viewer of the Breaking Bad series comes anywhere close to yielding advertising revenue that matches the $34.99 Apple is asking for the entire season IV. Maintaining such a gap also fuels piracy.

I want Netflix, BBC iPlayer and an unlocked and cheaper iTunes everywhere, now. Please. In the meantime, I keep my Vuze BitTorrent downloader on my computer. Just in case.


2011: Shift Happens

Whatever 2011 was, it wasn’t The Year Of The Incumbent. The high-tech world has never seen the ground shift under so many established companies. This causes afflicted CEOs to exhibit the usual symptoms of disorientation: reorg spasms, mindless muttering of old mantras and, in more severe cases, speaking in tongues, using a secret language known only to their co-CEO.

Let’s start with the Wintel Empire.

Intel. The company just re-organized its mobile activities, merging four pre-existing groups into a single business unit. In a world where mobile devices are taking off while PC sales flag, Intel has effectively lost the new market to ARM. Even if, after years of broken promises, Intel finally produces a low-power x86 chip that meets the requirements of smartphones and tablets, it won’t be enough to take the market back from ARM.

Here’s why: The Cambridge company made two smart decisions. First, it didn’t fight Intel on its sacred PC ground; and, second, it licensed its designs rather than manufacture microprocessors. Now, ARM licensees are in the hundreds and a rich ecosystem of customizing extensions, design houses and silicon foundries has given the architecture a dominant and probably unassailable position in the Post-PC world.

We’ll see if Intel recognizes the futility of trying to dominate the new theatre of operations with its old weapons and tactics, or if it goes back and reacquires an ARM license. Even this wouldn’t solve all its problems: customers of ARM-based Systems on a Chip (SoC) are used to flexibility (customization) and low prices. The first ingredient isn’t in evidence in the culture of a company accustomed to dictating terms to PC makers. The second, low prices, is trouble for the kind of healthy margins Intel derives from its Wintel quasi-monopoly. Speaking of which…

Microsoft. The company also reorged its mobile business: Andy Lees, formerly President of its Windows Phone division, just got benched. The sugar-coating is that Andy keeps his President title, in “a new role working for me [Ballmer] on a time-critical opportunity focused on driving maximum impact in 2012 with Windows Phone and Windows 8”. Right.

Ballmer once predicted Windows Mobile would achieve 40% market share by 2012; Andy Lees pays the price for failing to achieve traction with Windows Phone: according to Gartner, Microsoft’s new mobile OS got a 1.6% market share in Q2 2011.

Microsoft will have to buy Nokia in order to fully control its destiny in this huge new market currently dominated by Android-based handset makers (with Samsung in the lead) and by Apple. In spite of efforts to “tax” Android licensees, the old Windows PC licensing model won’t work for Microsoft. The vertical, integrated, not to say “Apple” approach works well for Microsoft in its flourishing Xbox/Kinect business; it could also work for MicroNokia phones. Moreover, what will Microsoft do once Googorola integrates Moto hardware + Android system software + Google applications and Cloud services?
In the good old PC business, Microsoft’s situation is very different: it’s still on top of the world. But the high-growth years are in the past. In the US, for Q2 2011, PC sales declined by 4.2%; in Europe, for Q3 this time, PC sales went down by 11.4% (both numbers are year-over-year comparisons).

At the same time, according to IDC, the tablet market grew 264.5% in Q3 (admire the idiotic .5% precision, and consider that tablets started from a small 2010 base). Worldwide, including the newly launched Kindle Fire, 2011 tablet shipments will be around 100 million units. Of which Microsoft will have nothing, or close to nothing if we include the small number of niche Tablet PC devices. The rise of tablets is causing clone makers such as Dell, Samsung and Asus (but not Acer) to give up on netbooks.

In 2012, Microsoft is expected to launch a Windows 8 version suited to tablets. That version will be different from the desktop product: in a break with its monogamous Wintel relationship, Windows 8 will support ARM-based tablets. This “forks” Windows and many applications into two different flavors. Here again, the once-dominant Microsoft has lost its footing and is forced to play catch-up with a “best of both worlds” (or not optimized for either) product.

In the meantime, Redmond clings to a PC-centric party line, calling interloping smartphones and tablets “companion products”. One can guess how different the chant would be if Microsoft dominated smartphones or tablets.

Still, like Intel, Microsoft is a growing, profitable and cash-rich company. Even if one is skeptical of their chances to re-assert themselves in the Post-PC world, these companies have the financial means to do so. The same cannot be said of the fallen smartphone leaders.

RIM: ‘Amateur hour is over.’ This is what the company imprudently claimed when introducing its PlayBook tablet. The tablet is an expensive failure ($485M written off last quarter), but RIM’s co-CEOs remain eerily bullish: ‘Just you wait…’ For next quarter’s new phones, for the new BlackBerry 10 OS (based on QNX), for a software update for the PlayBook…

I remember being in New York City in early January 2007 (right before the iPhone introduction). Jet-lagged after flying in from Paris, I got up very early and walked along Avenue of The Americas. Looking left, looking right, I saw Starbucks signs. I walked into the closest coffee shop and saw everyone in the line ahead of me holding a BlackBerry, a.k.a. the CrackBerry for its addictive nature. By mid-December 2011, RIM shares were down 80% from February of the same year:

Sammy the Walrus IV provides a detailed timeline of RIM’s fall on his blog; it’s painful.

On Horace Dediu’s Asymco site, you’ll find a piece titled “Does the phone market forgive failure?”. Horace’s answer is a clear and analytical No. Which raises the question: What’s next for RIM? The company has relatively low cash reserves ($1.5B) and few friends, now, on financial markets. It is attacked at the low end by Chinese Android licensees and, above, by everyone from Samsung to Nokia and Apple. Not a pretty picture. Vocal shareholders demand a change in management to turn the company around. But to do what? Does anyone want the job? And, if you do, doesn’t it disqualify you?

Nokia: The company has more cash, about €10B ($13B), and a big partner in Microsoft. The latest Nokia financials are here; they show the company’s business decelerating on all fronts, this in a booming market. And while initial reactions to the newest Windows Phone handsets aren’t said to be wildly enthusiastic, it is a bit early to draw conclusions. But Wall Street (whose wisdom is less than infinite) has already passed judgment:

Let’s put it plainly: No one but RIM needs RIM; but Microsoft’s future in the smartphone (and, perhaps, tablet) market requires a strong Nokia. Other Windows Phone “partners” such as Samsung are happily pushing Android handsets; they don’t need Microsoft the way PC OEMs still need Windows. Why struggle with a two-headed hydra when you can acquire Nokia and have only one CEO fully in charge? Would this be Andy Lees’ mission?

All this stumbling takes place in the midst of the biggest wave of growth, innovation and disruption the high-tech industry has ever seen: the mobile devices + Cloud + social graph combination is destroying (most) incumbents on its path. Google, Apple, Facebook, Samsung and others such as Amazon are taking over. 2012 should be an interesting year for bankers and attorneys.


HP Kicks webOS To The Kerb

We strongly believe that the best days for webOS are still ahead.

Thus spake Meg Whitman in her memo to the troops, an intramural rendition of HP’s official announcement that webOS will be “contributed” to the Open Source community.

…the executive team has been working to determine the best path forward for this highly respected software. We looked at all the options in the market today…By providing webOS to the open source community…we have the potential to fundamentally change the landscape.

Either she thinks we’re dimwits, or she’s being cleverly cheeky. Does she think we’ll fall for the tired corpospeak? “Victory! WhatWereWeThinking v3.0 has been released to the Open Source community”. Or is she slyly fessing up? “After much abuse inside the HP cage, it’s clear that webOS can only be restored to health if released into the wild.”

Releasing a product as Open Source isn’t always an admission of failure; see exhibits Linux and, more recently, WebKit. But the successful Open Source offerings were created in Open Source form. They weren’t “contributed” in a last-ditch effort to save face after unsuccessful attempts to monetize a proprietary version.

Furthermore, there’s real money to be made with an Open Source product…if you know what you’re doing. Look at Red Hat: nicely profitable, with nearly a $10B market cap. They make a lot of money selling Linux…or, more accurately, by selling a Linux “distro”, a suite of products and services that surround the free Linux kernel. They make money the iTunes way: Customers won’t pay for tunes that are otherwise (more or less legally) freely available, but they will pay for services around the music.

So is Open Source the way to go for webOS? I don’t think so.

Let’s look at Symbian, a product that’s similar to webOS in its complicated history: Born at Psion; moved to a Nokia-Motorola-Ericsson-Matsushita-Psion joint venture; thrown into Open Source by the Symbian Foundation, an even more complicated JV. Lately, things have become even murkier as Symbian appears to have been “outsourced to Accenture”.

Adobe’s Flex is another kicked-to-the-kerb example. When HTML5 appeared to displace Flash, Adobe officially open-sourced Flex to the non-profit Apache Software Foundation.

Even the success of Firefox, certainly the most visible Open Source application, might not be as indisputable as we first thought. With net assets of $120M at the end of 2009, the “non-profit” Mozilla Foundation, Firefox’s progenitor, has been the great Open Source success story. 2009 revenues were $104 million, most of which was generated by sending searches to Google from the Firefox browser. In other words, Google has been Firefox’s sugar daddy as the Mountain View company battles Microsoft’s Internet Explorer quasi-monopoly.

But things have changed. Google Chrome is in the ascendant; Google points to security holes in Firefox. Firefox served at Google’s pleasure, but is no longer needed.

Not exactly a bona fide Open Source success.

(Ironically — or at least amusingly — Meg Whitman singled out Firefox as an example of Open Source success in a post-announcement interview. To add tech credentials to appearance, she had HP director, venture investor, and Netscape founder Marc Andreessen sitting by her side. We won’t dwell on the admission that trotting out Andreessen represents.)

A closer look at HP’s official statements makes things even less clear:

HP will engage the open source community to help define the charter of the open source project under a set of operating principles:
. The goal of the project is to accelerate the open development of the webOS platform
. HP will be an active participant and investor in the project
. Good, transparent and inclusive governance to avoid fragmentation
. Software will be provided as a pure open source project
HP also will contribute ENYO, the application framework for webOS, to the community in the near future along with a plan for the remaining components of the user space.
Beginning today, developers and customers are invited to provide input and suggestions at http://developer.palm.com/blog/.

This is language designed to obfuscate rather than clarify, filled with qualifiers and weasel words. Read it again and ask yourself: Is there even one actionable sentence? Are we given numbers, dates, any measurable commitment?

No. Instead, we get lame HR-like phrases:

. HP will engage the open source community — in what kind of embrace?
. active participant and investor — by how much and when?
. transparent and inclusive governance — why not opaque and exclusionary?
. a pure open source project — as opposed to yesterday’s impure and proprietary?
. near future… along with a plan — we don’t know, we’re just saying

Nowhere does Whitman state how much money, how many people, or when things might coalesce.

Allow me to translate:

We tried and tried and found no takers for webOS. Android is too strong, our old partner Microsoft leaned on us, and webOS is seen as damaged goods. We used the Open Source exit to get kudos from vocal enthusiasts. We know it’s cynical, but what do you want us to say? Goodbye and good luck?

The charade (and cynicism) doesn’t stop there. Now we’re told HP might make webOS-powered tablets. Not in 2012; that year’s roadmap has been inked, and HP is committed to Windows 8 tablets. Maybe in 2013. That, ladies and gentlemen, attests to HP’s unwavering commitment to webOS.

By 2013 there will be tablets coming from all the usual suspects (except RIM): Samsung, Googorola and other Android players, Amazon, Microsoft’s OEMs and newly acquired subsidiary Nokia…and, of course, Apple’s iPad HD2.

When I hear Whitman make such statements, I’m reminded of the old joke about the difference between a computer salesperson and a used-car salesman: The used-car gent knows he’s lying. For my alma mater’s sake, for HP’s good, let’s hope Meg Whitman knows she’s putting us on.


Datamining Twitter

Whether companies like it or not, Twitter builds an image of them; very few are aware of this fact. By the time the big surprise happens, it is too late: a corporation suddenly sees a facet of its business — most often a looming or developing crisis — flare up on Twitter. As always where corporations are involved, there is money to be made in converting the problem into an opportunity: social network intelligence is poised to become a big business.

In theory, when it comes to assessing the social media presence of a brand, Facebook is the place to go. But as brands flock to the dominant social network, the noise becomes overwhelming and the signal — what people really say about the brand — becomes hard to extract.

By comparison, Twitter more swiftly reflects the mood of users of a product or service. Everyone in the marketing/communication field is increasingly eager to know what Twitter is saying about a product defect, the perception of a strike, or an environmental crisis. Twitter is the echo chamber, the pulse of public feeling. It therefore carries tremendous value.

Datamining Twitter is not trivial. Diving into newspaper or blog archives is comparatively easy: phrases are (usually) well-constructed, names are spelled in full, slang words and just-invented jargon are relatively rare. On Twitter, by contrast, the 140-character limit forces a great deal of creativity. The Twitter lingo constantly evolves, and new names and characterizations flare up all the time, which rules out straightforward full-text analysis. The 250 million tweets per day are a moving target; a reliable quantitative analysis of the current mood is a big challenge.

Companies such as DataSift (launched last month) exploit the Twitter fire hose by relying on the 40-plus metadata fields included in a post. Because, in case you didn’t know it, an innocent-looking tweet like this one…

…is a rich trove of data. A year ago, Raffi Krikorian, a developer on Twitter’s API Platform team (spotted thanks to this story in ReadWriteWeb) revealed what lies behind the 140 characters. The image below…

…is an excerpt of a much larger one (here, on Krikorian’s blog) showing the depth of metadata associated with a tweet. Each tweet comes with information such as the author’s biography, level of engagement, popularity, assiduity, and location (which can be quite precise in the case of a geotagged hotspot). In this WiredUK interview, DataSift’s founder Nick Halstead mentions the example of people tweeting from Starbucks cafés:

I have recorded literally everything over the last few months about people checking in to Starbucks. They don’t need to say they’re in Starbucks, they can just be inside a location that is Starbucks, it may be people allowing Twitter to record where their geolocation is. So, I can tell you the average age of people who check into Starbucks in the UK.
Companies can come along and say: “I am a retail chain, if I supply you with the geodata of where all my stores are, tell me what people are saying when they’re near it, or in it”. Some stores don’t get a huge number of check-ins, but on aggregate over a month it’s very rare you can’t get a good sampling.

Well, think about it next time you tweet from a Starbucks.
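To make the “rich trove” concrete, here is a minimal sketch of the kind of extraction a brand-monitoring pipeline performs. The field names follow Twitter’s classic REST API tweet object (`user`, `entities`, `coordinates`); the sample values and the helper function are invented for illustration.

```python
# Illustrative sketch: pulling analytics-relevant metadata out of a tweet
# payload. Field names follow Twitter's classic REST API tweet object;
# the sample values are made up.

sample_tweet = {
    "text": "Morning latte #coffee",
    "created_at": "Mon Nov 28 09:12:45 +0000 2011",
    "coordinates": {"type": "Point", "coordinates": [-0.1278, 51.5074]},
    "user": {
        "screen_name": "example_user",
        "followers_count": 412,
        "location": "London",
        "description": "Coffee drinker",
    },
    "entities": {"hashtags": [{"text": "coffee"}]},
}

def extract_metadata(tweet):
    """Flatten the handful of fields a brand-monitoring service cares about."""
    user = tweet.get("user", {})
    coords = tweet.get("coordinates") or {}
    return {
        "author": user.get("screen_name"),
        "followers": user.get("followers_count", 0),
        "declared_location": user.get("location"),
        "geo": coords.get("coordinates"),  # [longitude, latitude] when geotagged
        "hashtags": [h["text"] for h in tweet.get("entities", {}).get("hashtags", [])],
    }

print(extract_metadata(sample_tweet))
```

Note that the user never typed “London” or the coordinates into the tweet itself; that is exactly the point Halstead makes about Starbucks check-ins.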

DataSift further refined its service by teaming up with Lexalytics, a firm specializing in the new field of “sentiment analysis”, which measures the emotional tone of a text — very useful for assessing the perception of a brand or a product.
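Lexalytics’s actual technology is proprietary; as a toy illustration of what the simplest form of sentiment analysis looks like, here is a crude lexicon-based scorer. The word lists and sample tweets are invented.

```python
# A minimal lexicon-based sentiment scorer -- not Lexalytics's method,
# just an illustration of the basic idea: count emotionally charged
# words and report a tone (+1 positive, -1 negative, 0 neutral) per tweet.

POSITIVE = {"love", "great", "awesome", "good", "nice"}
NEGATIVE = {"hate", "awful", "broken", "bad", "worst"}

def sentiment(text):
    """Score one tweet from its word counts (deliberately crude)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return (score > 0) - (score < 0)  # sign of the score

tweets = [
    "I love the new phone, great battery",
    "Screen arrived broken. Worst support ever.",
    "Just a phone.",
]
print([sentiment(t) for t in tweets])  # → [1, -1, 0]
```

Real systems must also handle negation (“not great”), sarcasm, and Twitter’s shifting slang — which is precisely why the field is hard enough to sustain specialized firms.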

Mesagraph, a Paris-based startup with a beachhead in California, plans a different approach. Instead of trying to guess the feelings of a Twitter crowd, it creates a web of connections between people, terms, and concepts. Put another way, it creates a “structured serendipity” in which the user naturally expands the scope of a search well beyond the original query. Through its web-based application, Meaningly, Mesagraph is set to start a private beta this week and a public one next January.

Here is how Meaningly works: It starts with the timelines of tens of thousands of Twitter feeds. When someone registers, Meaningly crawls their Twitter timeline and adds a second layer composed of the people the new user follows. The corpus can grow very quickly. In this ever-expanding body of twitterers, Meaningly detects the influencers, i.e. the people most likely to be mentioned and retweeted, and who have the largest number of qualified followers. To do so, the algorithm applies an “influence index” based on specialized outlets such as Klout or PeerIndex that measure someone’s influence on social media. (I have reservations regarding the actual value of such secret sauces: I see insightful people I follow lag well behind compulsive self-promoters.) Still, Meaningly uses such metrics to reinforce a recommendation.
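The Klout and PeerIndex formulas are secret, so any concrete version is guesswork; still, a toy stand-in shows the general shape of such a metric, and why weighting engagement over raw audience size matters. Everything below — the formula, the weights, the sample numbers — is invented.

```python
import math

# Toy "influence index": the real Klout/PeerIndex formulas are proprietary.
# This sketch only illustrates the shape of such a metric: reward being
# retweeted and mentioned, and apply diminishing returns to follower count.

def influence(followers, retweets_per_week, mentions_per_week):
    reach = math.log10(1 + followers)            # diminishing returns on audience
    engagement = retweets_per_week + 2 * mentions_per_week
    return round(reach * (1 + engagement), 1)

# A large but passive audience vs. a smaller, widely retweeted account:
big_passive = influence(followers=50_000, retweets_per_week=1, mentions_per_week=0)
small_active = influence(followers=3_000, retweets_per_week=40, mentions_per_week=15)
print(big_passive, small_active)  # the smaller, engaged account scores higher
```

Even this toy version shows how easily such an index can be gamed — a compulsive self-promoter who farms retweets will outrank a quietly insightful analyst, which is exactly the reservation voiced above.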

Then there is the search process. To solve the problem of Twitter’s ever-morphing vernacular, Mesagraph opted to rely on Wikipedia (in English) to analyze the data it targets. Why Wikipedia? Because it’s vast (736,000 subjects), constantly updated (including with the trendiest parlance), richly linked, and freely licensed. From it, Mesagraph’s crew extracted a first batch of 200,000 topics.

To find tweets on a particular subject, you first fill in the usual search box; Meaningly proposes a list of predefined topics, some expressed in its own terminology; it then shows a list of tweets based on the people you’re following, the people they follow, and the “influencers” detected by its recommendation engine. Each tweet comes with a set of tags derived from the algorithm’s mapping table. These tags help refine the search further, with terms users would not have thought of. Naturally, it is possible to create all sorts of custom queries that will capture relevant tweets as they show up, each building a specific timeline of tweets pertaining to the subject. At least that’s the idea; the pre-beta version I had access to last week gave me only a sketchy view of the service’s performance. I will do a full test-drive in due course.
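The core trick — mapping Twitter’s unstable vocabulary onto a fixed inventory of Wikipedia-derived topics — can be sketched in a few lines. This is my reading of the approach, not Mesagraph’s actual algorithm; the topic table and the matching rule are invented for illustration (the real system would use its 200,000-topic batch and far subtler matching).

```python
# Sketch of the topic-tagging step: match a tweet's words against a fixed
# inventory of Wikipedia-derived topic labels, so that search operates on
# stable topics instead of Twitter's ever-shifting slang. The tiny table
# below is a hand-made stand-in for Mesagraph's real 200,000-topic batch.

TOPICS = {
    "iphone": "Apple iPhone",
    "ios": "Apple iOS",
    "webos": "HP webOS",
    "android": "Google Android",
}

def tag_tweet(text):
    """Return the sorted list of known topics mentioned in a tweet."""
    words = {w.strip("#@.,!?").lower() for w in text.split()}
    return sorted({TOPICS[w] for w in words if w in TOPICS})

print(tag_tweet("Trying #webOS on a cheap Android tablet"))
# → ['Google Android', 'HP webOS']
```

Once every tweet carries such tags, a “custom query” is just a standing filter on the tag stream — which is what makes the per-subject timelines described above feasible.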

Datamining Twitter has great potential for the news business. Think of it: instead of painstakingly building a list of relevant people who sometimes prattle endlessly, you’ll capture in your web of interests only the relevant tweets produced by your group and the groups it follows, all adding up in real time. This could be a great tool for following developing stories and enhancing live coverage. A permanent, precise and noise-free view of what’s hot on Twitter is a key component of the 360° view of the web every media organization should now offer.