In Bangkok, with the Fast Movers


The WAN-IFRA congress in Bangkok showed good examples of the newspaper industry’s transformation. Here are some highlights. 

Last week, I travelled to Bangkok for the 65th congress of the World Association of Newspapers. (WAN-IFRA also includes the World Editors Forum and the World Advertising Forum.) For a supposedly dying industry, the event gathered a record crowd: 1,400 delegates from all over the world (except for France, represented by at most a dozen people…). Most presentations and discussions revealed an acceleration in the transformation of the sector.

The transition is now mostly led by emerging countries seemingly eager to rid themselves of the weight of the past as quickly as possible. At a much faster pace than in the West, Latin American and Asian publishers take advantage of their relatively healthy print businesses to accelerate the online transition. These many simultaneous changes involve spectacular newsroom transformations in which the notion of a publication gives way to massive information factories producing print, web and mobile content in equal measure. In these new structures, journalists, multimedia producers and developers (a Costa Rican daily has one computer wizard for every five journalists…) are blended together. They all serve a vigorous form of journalism focused on the trade’s primary mission: exposing abuses of power and public or private failures (the polar opposite of the aggregation disease). To secure and boost the conversion, publishers rethink newsroom architecture, eliminate walls (physical as well as mental ones), and overhaul long-established hierarchies and desk arrangements (often an inheritance of the paper’s section structure).

In the news business, modernity no longer resides in the Western hemisphere. In Europe and in the United States, a growing number of readers are indeed getting their news online, but in a terrifyingly scattered way. According to data compiled by media analyst Jim Chisholm, newspapers represent 50.4% of internet consumption when expressed in unique visitors, but only 6.8% in visits, 1.3% in time spent, and 0.9% in page views! “The whole battle is therefore about engagement”, says WAN-IFRA general manager Vincent Peyregne, who underlines that the level of engagement for digital represents about 5% of what it is for print — which matches the revenue gap. This is consistent with the views Jim Chisholm stated a year ago in this interview with RIA Novosti [emphasis mine]:

If you look at how often in a month people visit media, they visit the print papers 16 times, while for digital papers it’s just six. At that time they look at 36 pages in print and just 3.5 in digital. Over a month, print continues to deliver over 50 times the audience intensity of newspaper digital websites.

One of the best ways to solve the engagement equation is to gain a better knowledge of audiences. In this regard, two English papers lead the pack: The Daily Mail and the Financial Times. The first is a behemoth: 119 million unique visitors per month (including 42m in the UK) and proof that a profusion of vulgarity remains a weapon of choice on the web. Sleaziness aside, Mail Online is a fantastic data collection machine. At the WAN conference, its CEO Kevin Beatty stated that DMG, the Mail’s parent company, reaches 36% of the UK population and, over a 10-day period, collects “50 billion things about 43 million people”. The accumulation of data is indeed critical, but all the people I spoke with — I was there to moderate a panel about aggregation and data collection — are quick to denounce an advertising market terribly slow to reflect the value of segmentation. While many media outlets spend a great deal of resources building data analytics, media buying agencies remain obsessed with volume. For many professionals, the ad market had better quickly understand what’s at stake here; the current status quo might actually backfire as it will favor more direct relationships between media outlets and advertisers. As an example, I asked Casper de Bono, the FT’s B2B manager, how his company managed to extract value from the trove of user data harvested through its paywall. De Bono used the example of an airline that asked to identify the people who had logged on to the site from at least four different places served by the airline in the last 90 days. The idea was to target these individuals with specific advertising — anyone can imagine the value of such customers… This is but one example of the FT’s ultra-precise audience segmentation.
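
De Bono’s airline query is, at heart, a simple segmentation over login records. Here is a minimal sketch in plain Python; the data, field names and thresholds are hypothetical illustrations, not the FT’s actual systems:

```python
from datetime import date, timedelta

# Hypothetical login records: (user_id, city, login_date).
logins = [
    ("u1", "London",    date(2013, 5, 1)),
    ("u1", "Paris",     date(2013, 5, 3)),
    ("u1", "Frankfurt", date(2013, 5, 20)),
    ("u1", "Madrid",    date(2013, 6, 2)),
    ("u2", "London",    date(2013, 5, 7)),
    ("u2", "London",    date(2013, 6, 1)),
    ("u3", "Rome",      date(2012, 1, 1)),  # too old to count
]

# Cities the (hypothetical) airline serves.
served_cities = {"London", "Paris", "Frankfurt", "Madrid", "Rome"}

def frequent_flyers(logins, served, today, window_days=90, min_places=4):
    """Return users seen in at least `min_places` distinct served
    cities within the trailing `window_days` window."""
    cutoff = today - timedelta(days=window_days)
    places = {}  # user_id -> set of distinct served cities
    for user, city, day in logins:
        if day >= cutoff and city in served:
            places.setdefault(user, set()).add(city)
    return {u for u, cities in places.items() if len(cities) >= min_places}

print(frequent_flyers(logins, served_cities, today=date(2013, 6, 10)))
# Only u1 qualifies: four distinct served cities within 90 days.
```

The real version would of course run against billions of log rows, but the logic (distinct qualifying locations per user within a trailing time window) is the same.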

Paywalls were also on everyone’s lips in Bangkok. “The issue is settled”, said Juan Señor, a partner at Innovation Media Consulting. “This is not the panacea, but we now know that people are willing to pay for quality and depth”. Altogether, he believes that 3% to 5% of a media site’s unique visitors could become digital subscribers. And he underlined a terrible symmetry in the revenue structure of two UK papers: While the Guardian — which resists the idea of paid-for digital readers — is losing £1m per week, The Telegraph makes roughly the same amount (£50m a year, i.e. $76m or €59m) in extra revenue thanks to its digital subscriptions… No one believes paywalls will be the one and only savior of online newspapers but, at the very least, they seem to prove that quality journalism has regained value in readers’ eyes.

Android vs. Apple. Market Share vs. Profit Share, Part 255


Conventional wisdom and badly reconstructed history can lead to seemingly comfortable but in reality fragile conclusions. Prepare to be confused. 

Ever since the Android platform emerged as the only real competitor to Apple’s iOS devices, we’ve been treated to a debate which I’ll oversimplify: If Apple makes all the money but Android gets all the volume, who will win? A cursory survey of tech journals and blogs would lead one to believe that the case is closed: Market Share trumps Profit Share. It always does.

So Apple should call it a day? I’m skeptical. Not about the conclusion — Market Share isn’t exactly a dark horse — but about the arguments that are trotted out. False memories of Apple’s past have become a template for its future. For example, a recent Wall Street Journal article ends thus [and, sorry, you need a subscription to see the entire article]:

“Unfortunately, Apple has seen this movie before. A generation ago, it also had a top product whose market share was undercut by cheap, inferior rivals. It hopes the iPhone’s story isn’t a sequel to the Mac’s.”

(I emailed the WSJ writer asking three simple, clarifying questions. No answer, but that’s standard practice, as witheringly described by Philip Elmer-DeWitt at the end of this post.)

I was there “a generation ago”. In 1981, when IBM introduced the IBM PC, I was starting Apple France. Big Blue had made startling changes to its old ways, boldly calling its new machine The Personal Computer (we thought the “The” was ours). In an even bolder move, IBM loosened its tie and its dress code, and tried (successfully) to speak to the “common man” by using a Charlie Chaplin imitator as a mascot:

An interesting choice, particularly when juxtaposed with the real Chaplin’s cine-commentary on “labor-saving devices”:

The original PC from IBM's Boca Raton group was a faithful homage to the Apple ][, right down to the cassette interface. But it wasn't a cheap imitation. There was one important difference: Where the Apple ][ used an 8-bit 6502 processor, IBM splurged on the much more powerful 16-bit Intel chip.

Almost overnight, the pages of InfoWorld, previously replete with salivating reviews of Apple products, were filled with IBM PC articles. The new machine got a major boost with the launch of Lotus 1-2-3, a multi-function spreadsheet that became the gold standard for office applications, especially on desktops that sported hard disks and large color screens. Against the Apple ][, the IBM PC was a superior product -- and deftly marketed.

For the next few years, the Apple ][ family stumbled. The Apple ///, beset by early hardware failures, didn't answer the 16-bit question. It wasn't the modernization of the Apple ][ that the company had promised. The Apple II GS was even worse, not compatible enough with the Apple ][ and not powerful enough to attract developers, particularly Bill Gates who saw no potential for Microsoft applications.

That brings us to 1984. The Macintosh changed the game, right?

Hardly. At its coming out party, the Mac was two years behind schedule. I recall the "Mac's Last Slip" jibes at company meetings. No one would deny the obvious potential, the elegance, the innovative user interface, the clean square pixels on the bit-mapped screen, the fonts, the LaserWriter connection... But the Mac didn't support external hard drives until 1986, and it would be another year before internal disks, additional modularity, and a great Trinitron color monitor were added.

By that time, IBM had had the market to itself for half a decade, and its PC creation had morphed into the Wintel clone industry.

Contrary to the revisionist WSJ story, the "generation ago" Mac never had a market share to undercut. Apple's flagship product -- innovative, elegant, a generation ahead -- was a dreamer's machine. Down-to-earth market wisdom said the Mac was perfect for Stanford undergrads, but not serious enough for real business use. The common view was that application developers wouldn't be able to afford the investment in time and hardware. Starved of competitive software, the Macintosh was doomed to irrelevance and, ultimately, failure.

It almost happened, especially after Apple's desperate attempt to prop up platform share numbers by licensing Mac clones, a move that resulted in a brutal drop in Apple's margins. Market share vs. Profit Share...

The Mac was saved by Gil Amelio's unintentionally self-sacrificing decision to hand the Apple reins back to Steve Jobs. What followed was the most amazing turnaround our industry has ever seen, and it started with two controversial moves: Jobs rescinded the Mac OS license, and he made a deal with the Microsoft Devil. He convinced Gates' company to "invest" $150M in non-voting Apple shares and develop new Mac versions of the Explorer browser and Office apps (although, in reality, the agreement was part of a settlement of an older IP dispute).

We know the rest of the story, including a meme-averse fact: For close to seven years, the Mac has consistently gained market share at the expense of PC clones.

Since the advent of another flagship product, the iPhone this time, the riches-to-rags Mac meme has led to predictions of a similar fate: Death by drowning in a sea of "cheap" Android clones. Apple's high price ($650 per iPhone on average) gives too much low-end room for competitors. The price will be undercut, there will be a decline in unit share that, in turn, will lead to lower profits, lower developer interest, lower ability to invest in future products. The road to irrelevance is paved with high margins and low market share.

Never mind two differences. First, the iPhone never lacked apps, 750,000 of them at last count. Second, it is immensely profitable: Apple is embarrassingly flush with more cash than all its high-tech colleagues combined. The pundits won't accept evidence as an answer. Market Share will trump Profit Share. Why let facts cloud a good argument?

One is tempted to point to the race to the bottom that PC clone makers have experienced over the past decade. HP enjoys the largest Market Share of all PC makers, but it also "enjoys" less than 4% operating profit for its efforts. Meanwhile, Apple's margin is in the 25% range for its Mac line. That may not be as enjoyable as the 60% margin for the iPhone, but it's a solid business, particularly when you consider that the clone makers, HP and Dell foremost, are angling to get out of the business altogether. (See an earlier MN: Post-PC: Wall Street Likes the View.)

Returning to the iOS vs Android debate, I will state an opinion -- not to be confused with a prediction, let alone The Truth: I think the vertical simplicity of Apple's business will tilt the field in its favor as the complicated Android world devolves into anarchy. Apple vs Google isn't Apple vs Microsoft/Intel/IBM.

Let's back up a bit. Google's 2005 acquisition of Android was a visionary move. (Some say Google's vision was sharpened by Eric Schmidt's presence on Apple's Board as the company worked on the future iPhone. Jobs was furious about Google's decision and summarily asked Schmidt to leave.) Android's unprecedented growth -- more than 50% share of the smartphone market in the US, and even more worldwide -- is a testament to the "open" approach. Google gives away the Open Source Android OS; processors are another kind of "open", custom-designed under ARM licenses open to all payers.

But Android is a "cushion shot", it's an indirect way for Google to make money. Android is a Trojan horse that infects smartphones so it can install services that collect the user data that feeds Google's true business: advertising.

Now, Google faces several problems. Android's openness leads to incompatibilities between devices, a problem for developers that didn't happen under Microsoft's rule in the PC era. Worse (for Google), the many diverging versions of Android (a.k.a. forks) -- especially those created in China -- carry no Google services. They harvest no data and so they bring no advertising revenue potential back to Google.

This is clearly a concern for Google, so much so that the company now offers "pure" Android smartphones by Samsung (for $650) and HTC (for $599) on its Google Play site.

On the other hand, Android 2013 is a mature, stable OS. It isn't Windows 95, which was nothing more than a shell bolted on top of DOS. While the Mac's system software wasn't fully developed when it first came out, many saw it as superior -- or potentially superior -- to Microsoft's OS. Android is a tougher competitor than Windows was at the same age.

Then there is Google's subsidiary Motorola Mobility and the relationship with Samsung, the most powerful Android handset maker. As discussed last week, Motorola's stated intention is to push Android phone prices well below the $650 (unsubsidized) level. Is Samsung in a position to wag the Android dog? And if so, how will they react to Motorola's moves?

Let's not forget "the small matter of execution", one that might prove more important than lofty "strategic" considerations. And, to further complicate predictions, we have the herd's tendency to assume Company X will make all the mistakes while its competitors will play a perfect game.

Confused? Then I have accomplished one of my goals, to show how unhelpful the old bromides are when trying to guess what will happen next.


PS: I'd be remiss if I didn't direct you to the recently discovered articles by John Kirk, who calls himself a recovering attorney and indeed writes tightly reasoned posts on Techpinions. I'll whet your appetite with two quotes. One from Does The Rise Of Android's Market Share Mean The End of Apple's Profits? [emphasis mine]:

Steve Jobs wanted, and Apple wants, market share. But they want the RIGHT market share. Apple wants customers who are willing to pay for their products. And Apple wants customers who are good for their platform. In other words, Apple wants market share in their target demographic. Based on the fact that Apple is taking in 72% of the mobile phone profits with only 8% or 9% of the market share, it sure sounds like they’ve acquired the right market share to me.

Does the rise of Android’s market share mean the end of Apple’s profits? Hardly. You can argue as loudly as you like that developers and profit share must necessarily follow market share. But the facts will shout you down.

The other is from 4 Mobile Business Models, 4 Ways To Keep Score where he concludes:

And if you’re going to prophesy that market share alone gives Google data that will someday, somehow, be worth something to someone, then you need to go back and re-read how the “razor-and-blades” business model is scored.

What we desperately need in analyzing mobile computing is far more attention paid to profits and far less attention paid to prophets.


Please, Please Uncle Tim, Tell Us A Story…


I’m back from D11, the 11th yearly edition of the Wall Street Journal’s tech conference. The conference site gives you the complete speaker roster, commentary, and full videos of the on-stage interviews as well as demos and hallway conversations.

With such a complete and well-organized reproduction of the event, why even go?

For the schmoozing, the in-the-moment impressions of speakers, the audience reactions… This is the only conference I attend (I’ve only missed it once). I enjoy rubbing scales with aging crocodiles and watching new and old saurians warily eying one another.
Speaking of attendees, I’m struck again by the low, almost non-existent European participation. Most pointedly, Fleur Pellerin, France’s Minister in charge of the Digital Economy, wasn’t there… even though she will be in the Valley this week. Had Minister Pellerin spent a day or two with us in Rancho Palos Verdes, she would have seen, heard, felt, and learned more than in the half dozen limousine hops she’ll make from one Valley HQ to another where she’ll be subjected to frictionless corporate presentations that have been personalized with a quick Search and Replace insertion of her name and title.
At D11, the rules of Aristotelian Unities apply: unity of action, place, and time. The entire ecosystem is represented: entrepreneurs, investors, CEOs of large companies, consultants, investment bankers, journalists, headhunters. What better place to contemplate the Knowledge Economy’s real workings, its actors and its potential to lift France out of its unemployment malaise?

(Of course, Pellerin might also be looking to mend fences after Yahoo’s attempt to acquire DailyMotion was blocked by another French minister. My own view is that the French government did Yahoo a favor. From what I think I know about the company and the political climate surrounding it, Marissa Mayer and Henrique De Castro, her COO, probably had no idea what awaited them.)

The conference formula is refreshingly simple: Walt Mossberg, the Journal’s tech guru, and Kara Swisher, his co-executive editor, sit down and interview an industry notable (or sometimes two). No speeches allowed, no PowerPoints…
In the early days, I felt the questions were a little too soft — with the regrettable exception of Kara’s condescending grilling of Mark Zuckerberg four years ago. She clearly didn’t take him seriously. But Uncle Walt once told me he trusts his audience to do our job, to correctly decode the answers, the body language — and to look at one another and roll our eyes on occasion.
Once again, we were treated to phenomenal speakers. I liked Dick Costolo, Twitter’s witty, deeply smart (and best-dressed) CEO; and was impressed by Facebook COO Sheryl Sandberg’s deft handling of questions about business, gender, and politics. Sandberg is a veteran of Washington, where she worked for Treasury Secretary Larry Summers, her thesis adviser at Harvard, and of Google, where she worked for another Larry. Reading her best-selling and inevitably controversial Lean In doesn’t replace seeing her on stage.

Another highlight was the one exception to the No PowerPoint rule: Mary Meeker’s high-speed walk through the freshest version of her rightly celebrated Internet Trends deck. And, while we’re at it, take a look at this astounding (no exaggeration, I promise) zettabyte (1 billion terabytes, 10^21 bytes) Internet traffic projection by Cisco.
Then we have the perplexing interview with Dennis Woodside and Regina Dugan, CEO and Sr. VP, respectively, of Motorola Mobility, now a Google subsidiary. (Regular attendees will recall that Dugan was on stage at D9 as Director of DARPA, the Defense Advanced Research Projects Agency that gave birth to the Internet.)

Woodside stated that Motorola would deliver a range of new phones later this year, including a “hero device” with better integration of sensors into the User Interface as well as class-leading autonomy. He also added that Motorola would sell it for much less than the various $650 smartphones available today, probably meaning the no-contract Samsung, HTC and Apple top-of-the-line phones at Verizon and other carriers.
A smarter-but-much-cheaper phone… it’s a bold but credible claim. Keep in mind that Motorola doesn’t exist to make money for itself. It’s part of what I call Google’s 115% Business Model: Advertising makes 115% of Google’s profits and everything else brings the number back down to 100%. The smartphone market could become even more interesting if, after making a free smartphone OS, Google subsidizes the hardware as well.

Less credibly, however, Woodside insisted that Google has not and will not give its captive Motorola special access to Android code, because this is something Google simply doesn’t do. Perhaps he doesn’t recall that Google gave advanced access to upcoming Android builds to chosen partners such as Samsung, HTC, and, if memory serves, LG.
Just as interesting, if a bit troubling, Regina Dugan gave us insights into individual identification research work at Motorola. She proudly displayed a tattoo on her forearm that incorporates an RFID (Radio Frequency Identification) antenna that lets you log onto services without the usual annoyances. Or you can swallow an “authentication” pill that’s powered by digestive acids. As Dugan puts it, “your entire body becomes your authentication token.” Hmmm… A tattoo on one’s forearm, a pill that emits an ID signal that you can’t turn off (for a while)…

Last but not least, Tim Cook’s interview. The low point in the Apple CEO’s appearance came during the Q&A section at the end (it’s around the 1:10:35 mark if you want to fast forward). A fund manager (!!) plaintively begged Cook to make him dream, to tell him stories about the future, like Google does. “Otherwise, we’ll think Mike Spindler and Gil Amelio…” (I’m paraphrasing a bit).

Cook refused to bite. As he’d done many times in the interview, he declined to make announcements; he only allowed that TV and wearable devices were areas of “intense interest”. And, when asked if Apple was working on more “game changers” like the iPhone or the iPad, he had no choice but to promise more breakthroughs. Nothing new here; this has been Apple’s practice for years.
Which raises a question: What was Apple’s CEO doing at D11 less than two weeks before the company’s Worldwide Developer Conference where, certainly, announcements will be made? What did the organizers and audience expect, that Tim Cook would lift his skirt prematurely?
Actually, there was a small morsel: Cook, discussing Apple TV, claimed 13 million current generation devices had been sold to date, half of them in the past year… but that’s food for another Monday Note.
Audience and media reactions to the lack of entertainment were mixed.

For my part, perhaps because of my own thin skin, I find Tim Cook’s preternatural calm admirable. Taunted with comparisons to Spindler and Amelio, dragged onto the Senate floor, called a liar by a NYT columnist, constantly questioned about his ability to lead Apple to new heights of innovation… nothing seems to faze him. More important, nothing extracts a word of complaint from him.
This is much unlike another CEO, Larry Page, who constantly whines about the “negativity” directed at Google, conduct unbecoming the leader of a successful company that steamrolls everything in its path.
I have my own ideas about Cook’s well-controlled behavior; they have to do with growing up different in Mobile, Alabama. But since he’s obviously not keen to discuss his personal life, I’ll leave it at that and envy his composure.
New Apple products are supposed to come out later this year. You can already draft the two types of stories: If they’re strong, this will be Tim Cook’s Apple; if not, it’ll be We Told You So.

Tech as a boost for development


Moore’s Law also applies to global development. From futuristic wireless networks for rural Africa to the tracking of water-well drilling, digital technology is a powerful boost for development, as evidenced by a growing number of initiatives.

Last week, The Wall Street Journal unveiled a Google project designed to provide wireless networks in developing countries, more specifically in sub-Saharan Africa and Southeast Asia. According to the Journal, the initiative involves using the airwaves spectrum allocated for television signals or teaming up with cellular carriers already working there. In its typical “outside-of-the-box” thinking, the project might also rely on high-altitude blimps to cover infrastructure-deprived areas. Coupled with low-cost handsets using the Android operating system, or the brand new Firefox OS for mobile, this would boost the spread of cellular phones in poor countries.

For billions of people, previously unavailable mobile access will be a game changer. At the last Mobile World Congress in Barcelona, I chatted with an Alcatel-Lucent executive who described the experiments she witnessed in Kenya, such as providing nurses with the equivalent of index cards to upgrade their knowledge of specific treatments; the use of mobile phones translated into unprecedented reach, even in remote areas where basic handsets are shared among many people. Similarly, tests for access to reading material were conducted by UNESCO, the United Nations branch for education and culture. Short stories, some loaded with interactive features, were sent to phones and, amazingly, kids flocked to read, share and participate. All of this was carried out on “dumb” phones, sometimes with only monochrome displays. Imagine what could be done with smartphones.

Moore’s Law will keep helping. Currently, high-end smartphones are out of reach for emerging markets, where users rely on prepaid cards instead of subscriptions. But instead of the $400-$600 handsets (without a 2-year contract) currently sold in Western markets, Chinese manufacturers are aiming at a price of $50 for a durable handset, using a slower processor but sporting all the expected features: large screen, good camera, GPS module, accelerometers, and tools for collective use. On such a foundation, dedicated applications can be developed — primarily for education and health.

As an example, the MIT Media Lab has created a system for prescribing eyeglasses that requires only a one-dollar eyepiece attached to a smartphone; compared to professional equipment costing thousands of times more, it runs a very decent diagnostic. (This is part of the MIT Global Challenge initiative.)

This, coupled with liquid-filled adjustable glasses such as this one presented at TED a couple of years ago, will help solve vision problems in poor countries for a couple of dollars per person. Other systems aimed at detecting vision-related illnesses such as cataracts or glaucoma are in development. So are blood-testing technologies based on bio-chips tied to a mobile app for data collection.

Last week, I attended Google’s Zeitgeist conference in the UK — two days of enthralling TED-like talks (all videos here). Among many impressive speakers, two got my attention. The first was Sugata Mitra, a professor of educational technology at Newcastle University. In his talk — filled with a mixture of Indian and British humor — he described self-organizing learning experiments in rural India built around basic internet-connected computers. The results are compelling for language learning and a basic understanding of science or geography.

The other speaker was the complete opposite. Scott Harrison has an interesting trajectory: he is a former New York nightclub promoter who drastically changed his life seven years ago by launching the organization Charity:Water. Harrison’s completely fresh approach helped him redefine how a modern charitable organization should work. He built his organization around three main ideas. First, 100% of donations should reach a project. To achieve this, he created two separate funding circuits: a public one for projects and another to support operational costs.

Principle number two: build a brand, with all the attributes that go with it: a strong visual identity and a well-designed web site (most of those operated by NGOs are terrible). Charity:Water’s site is rich and attractive; it looks more like an Obama campaign fundraising machine than an NGO’s. (I actually tested its very efficient donation system by giving $100, curious to see where the money would land.)

The third and probably most innovative idea was to rely on simple, proven digital technologies to guarantee complete project traceability. Donors can find out precisely where their money ends up — whether it pays for a $60 sand-filter fountain or a $2,000 well. Last, Charity:Water funded a drilling truck equipped with a GPS tracker that makes it visible on Google Maps; in addition, the truck tweets its location in real time. Thanks to $5 million in Google funding, the organization currently works with seven high-tech US companies to develop robust water sensors able to show in real time how much water is running on a given project. About 1,000 of these are to be installed before year-end. This will help detect possible malfunctions, and it also carries promotional (read: fundraising) capabilities: thanks to a mobile app, a kid who helped raise a few hundred bucks among his or her friends can see where the water is actually flowing.

As I write this, I can see the comments coming, denouncing the gadgetization of charity, the waste of money on technologies not directly benefiting the neediest, Google’s obscure mercantile motives, or the future payback for cellular carriers from the mobile initiatives mentioned earlier. Sure, objections must be heard. But everyone who has traveled in poor areas — as I have in India and in sub-Saharan countries such as Senegal, Mauritania and Burkina Faso — comes back with the strong conviction that all means must be used to provide these populations with the basics we take for granted in the Western world. As for Charity:Water, the results speak for themselves: Over six years, the organization has raised almost $100m and provided drinkable water to 3m people (out of the 800m in the world who don’t have access to it — still lots of work left). As in many areas, the benefits of new, disruptive models based on modern technologies far outweigh the disadvantages.

Post-PC: Wall Street Likes the View


The conventional PC business is now in decline, and yet share prices of key players Microsoft and HP are moving up. Why?

In an April press release, IDC painted a bleak picture for the PC. Compared to last year’s first quarter, worldwide shipments of PCs are down 13.9%, the “steepest decline ever in a single quarter”. US numbers are about the same: -12.7%. On a graph, the trend is unmistakable:

Is this a trend Wall Street likes?

When you consider Microsoft, it seems so. In a corporate blog post titled Windows 8 at 6 months, the company proudly claims to have “recently surpassed the 100 million licenses sold mark for Windows 8.” This is an interesting number. A quarter ago, MS announced it had sold 60 million licenses, meaning that only 40 million were sold in the last three months. That’s a 33% drop… hardly a rousing success. (The “licenses sold” phrase requires caution; it doesn’t only mean “sold with new PCs”, it also covers updates to existing machines, installed with or without enthusiasm for the new Windows OS.)

“Ignore the Windows 8 numbers and IDC analysis”, says Wall Street. While the tech-heavy Nasdaq climbed only 6.6% in the last 60 days, Microsoft shares went up by 21%.

The same apparent illogic holds for Hewlett-Packard. Last week, the largest PC maker disclosed its second quarter numbers. Compared to the same quarter last year, they’re not exactly pretty:

Revenue down by 10% to $27.6B
Operating Margin at 5.8%, down by about 20% (HP prefers “down 1.4 points”)
EPS (Earnings Per Share) at 55 cents, down 31%

Zeroing in on HP’s PC business, things look worse:

Revenue down by 20% to $7.6B
Operating Margin at 3.2%, down 44% (“down 2.2 points” sounds better)

As one would expect, Wall Street reacted, and HP shares went…up. By 17.8% the day after the announcement:

What was the good news for investors? Resorting to one of the usual bromides, HP “handily beat Street expectations” by posting Earnings Per Share (EPS) of $0.55 vs. a projected $0.30 to $0.40.

As discussed in the December 16th Monday Note, Chapter 2 of the Turnaround Artist Manual prescribes exactly what we’re seeing: Drastically lower expectations within days of taking on the job. “Things are worse than I was told. We’ll have to touch bottom before we bounce back…”

Following the script, HP CEO Meg Whitman called 2013 a “fix and rebuild year”. Everyone should expect a “broad-based profit decline”. But a 17% rebound in the stock price can’t be explained solely by a collective sigh of relief when the actual numbers aren’t as bad as the CEO had led everyone to expect.

(In its earnings release, HP still calls itself “The world’s largest technology company”. I guess they think smartphones and tablets aren’t “technology”, but PCs and printers are…)

As quoted in a VentureBeat post, Whitman thinks that the other US PC maker, Dell, is in no better shape:

“You saw a competitor, Dell, completely crater earnings,” Whitman said in response to a question. “Maybe that is what you do when you are going private. We are setting up the company for the long term.”

Ironically, and without a hint of self-awareness, she accuses Dell of playing the Setting Artificially Low Expectations game:

She implied that Dell did that on purpose, since Michael Dell is motivated to repurchase shares in the company as cheaply as possible, and deliberately lowering earnings is a good way to get the share prices to fall.

Actually, Whitman must envy what Dell is attempting to do: Get out of the PC clone Race To The Bottom. Because PCs make up half of Dell’s revenue, exiting that hopelessly commoditized business would cause trouble if done in public. Going private allows Dell to close the curtain, perform the unappetizing surgery out of view and, later, return to Wall Street as a smaller company endowed with a more robust earnings engine, focused on higher-margin enterprise gear and services.

This helps explain the apparent paradox: Wall Street likes HP and Microsoft shares not despite their lower PC numbers but because of them. Investors want to believe that future earnings (the ones they count on when buying shares today) will come from “Post-PC” products and services instead of being weighed down by shrinking PC volumes and margins. In particular, those who buy HP shares must believe that the company will sooner or later exit the PC clone business. For Microsoft, the bet is that the company will artfully manage a smooth transition to higher Enterprise and Entertainment revenues and their fatter margins.

I’m not fond of the “Post-PC” label; it lacks nuance and it’s premature. The desktop and laptop machines we’ve known for more than three decades may no longer be the sole incarnations of our personal computing – our affection, time, and money have shifted to smartphones and tablets – but the PC will continue to live in our offices and homes.

Consider Lenovo, the Chinese company that acquired IBM’s PC business when Big Blue decided to exit the race. They’re doing quite well, posting a record $34B in revenue for this year.

There is life left in the PC business, just not for US incumbents.


Why Google Will Crush Nielsen


Internet measurement techniques need a complete overhaul. New ways have emerged, potentially displacing older panel-based technologies. This will make it hard for incumbent players to stay in the game.

The web user is the most watched consumer ever. For tracking purposes, every large site drops literally dozens of cookies in the visitor’s browser. In the most comprehensive investigation on the matter, The Wall Street Journal found that each of the 50 largest web sites in the United States, which together account for 40% of US page views, installed an average of 64 files on a user’s device. (See the WSJ’s What They Know series and a Monday Note about tracking issues.) As for server logs, they record every page sent to the user, and they tell with great accuracy which parts of a page collect most of the reader’s attention.

But when it comes to measuring a digital viewer’s commercial value, sites rely on old-fashioned panels, that is, limited samples of the user population. Why?

Panels are inherited from the old days of broadcast radio when, in order to better sell advertising, dominant networks wanted to know which stations listeners tuned in to during the day. In the late thirties, the Nielsen Company made a clever decision: it installed a monitoring box in 1,000 American homes. Twenty years later, Nielsen did the same, on a much larger scale, with broadcast television. The advertising world was happy to be fed plenty of data — mostly unchallenged, as Nielsen dominated the field. (For a detailed history, read Rating the Audience, written by two Australian media academics.) As Nielsen expanded to other media (music, film, books and all sorts of polls), moving to internet measurement sounded like a logical step. As of today, Nielsen faces only smaller competitors such as ComScore.

I have yet to meet a publisher who is happy with this situation. Fearing retribution, very few people talk openly about it (twisting the dials is so easy, you know…), but they all complain about inaccurate, unreliable data. In addition, the panel system is vulnerable to cheating on a massive scale. Smarty-pants outfits sell a vast array of measurement boosters, from fake users that come in just once a month to be counted as “unique” (which they are, technically), to more sophisticated tactics such as undetectable “pop under” sites that rely on encrypted URLs to deceive the vigilance of panel operators. In France, for instance, 20% to 30% of some audiences can be bogus, or at least largely inflated. To its credit, Mediametrie — the French Nielsen affiliate that produces the most-watched measurements — is expending vast resources to counter the cheating and to make the whole model more reliable. It works, but progress is slow. In August 2012, Mediametrie Net Ratings (MNR) launched a Hybrid Measure that takes site-centric analytics (server logs) into account to correct panel numbers, but those corrections are still erratic. And it takes more than a month to get the data, which is not acceptable for the real-time-obsessed internet.

Publishers monitor the pulse of their digital properties on a permanent basis. In most newsrooms, Chartbeat (also imperfect, sometimes) displays the performance of every piece of content, and home pages get adjusted accordingly. More broadly, site-centric measures detail all possible metrics: page views, time spent, hourly peaks, engagement levels. This is based on server logs tracking dedicated tags inserted in each served page. But the site-centric measure is also flawed: If you use, say, four different devices — a smartphone, a PC at home, another at work, and a tablet — you will be incorrectly counted as four different users. And if you use several browsers you could be counted even more times. This inherent site-centric flaw is the best argument for panel vendors.

But, in the era of Big Data and user profiling, panels no longer have the upper hand.

The developing field of statistical pairing technology shows great promise. It is now possible to pinpoint a single user browsing the web with different devices in a very reliable manner. Say you use the four devices mentioned earlier: a tablet in the morning and the evening; a smartphone for occasional updates on the move; and two PCs (a desktop at the office and a laptop elsewhere). Each time you visit a new site, an audience analytics company drops a cookie that records every move on every site, from each of your devices. Chances are your browsing patterns will be stable (basically your favorite media diet, plus or minus some services better suited to a mobile device.) Not only is your browsing profile determined from your navigation on a given site, it is also quite easy to know which sites you visited before the one currently being monitored, adding further precision to the measurement.

Over time, your digital fingerprint becomes more and more precise. Until then, the four cookies are independent of each other. But the analytics firm compiles all the patterns in a single place. By data-mining them, analysts can determine the probability that a cookie dropped in a mobile application, a desktop browser or a mobile web site belongs to the same individual. That’s how multiple pairing works. (For more details on the technical and mathematical side, read this paper by the founder of Drawbridge Inc.) I recently discussed these techniques with several engineers, both in France and in the United States. All were quite confident that such fingerprinting is doable and that it could be the best way to accurately measure internet usage across different platforms.
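To make the mechanism concrete, here is a deliberately simplified sketch. The site names, the similarity measure (Jaccard overlap) and the threshold are all illustrative assumptions, not Drawbridge’s actual method; real systems mine far richer signals (timing, location, content categories):

```python
# Toy illustration of statistical cookie pairing across devices.
# Each cookie is reduced to the set of sites it has visited;
# cookies with strongly overlapping patterns are assumed to
# belong to the same person. (Illustrative only.)

def jaccard(a, b):
    """Overlap between two sets of visited sites: 0 = disjoint, 1 = identical."""
    return len(a & b) / len(a | b)

# One cookie per device, each recording the sites it has seen.
cookies = {
    "tablet":     {"nytimes.com", "lemonde.fr", "weather.com", "allrecipes.com"},
    "smartphone": {"nytimes.com", "lemonde.fr", "weather.com", "transit.example"},
    "office_pc":  {"nytimes.com", "lemonde.fr", "weather.com", "linkedin.com"},
    "other_user": {"espn.com", "reddit.com", "youtube.com"},
}

def pair_cookies(cookies, threshold=0.4):
    """Return the pairs of cookies similar enough to be deemed one user."""
    names = sorted(cookies)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if jaccard(cookies[a], cookies[b]) >= threshold
    ]

print(pair_cookies(cookies))
```

In practice the probability model is far more sophisticated, but the principle is the same: a stable browsing pattern acts as a fingerprint that survives the device boundary.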

Obviously, Google is best positioned to perform this task on a large scale. First, its Google Analytics tool is deployed on over 100 million web sites. And Google Ad Planner, even in its public version, already offers a precise view of the performance of many sites around the world. In addition, as one of the engineers pointed out, Google already performs such pairing simply to avoid showing the same ad twice to someone using several devices. Google is also most likely doing such ranking to feed the obscure “quality index” algorithmically assigned to each site. It can even do such pairing on a nominative basis, using its half billion Gmail accounts (425 million in June 2012) and its signed-in Chrome users. As for giving up another piece of internet knowledge to Google, it doesn’t sound like a big deal to me. The search giant already knows much more about sites than most publishers do about their own properties. The only thing that could prevent Google from entering the market of public web rankings would be the prospect of another privacy outcry. But I don’t see why it won’t jump on it — eventually. When it does, Nielsen will be in big trouble.

Otellini’s Striking Confession


We know Intel shunned ARM processors and played virtually no role in the smartphone revolution. But we now learn Steve Jobs asked Intel to build the iPhone microprocessor. Paul Otellini, Intel’s departing CEO, admits he should have followed his gut — a decision that would have made the smartphone world a very different place.

CEO valedictions follow a well-known script: My work is done here, great team, all mistakes are mine, all good deeds are theirs, I leave the company in strong hands, the future has never been brighter… It’s an opportunity for a leader to offer a conventional and contrived reminiscence, what the French call la toilette des souvenirs (which Google crudely translates as toilet memories instead of the affectionate and accurate dressing up memories).

For his farewell, Otellini chose the interview format, with The Atlantic’s senior editor Alexis Madrigal. The result is a long (5,700+ words) but highly readable piece titled Paul Otellini’s Intel: Can the Company That Built the Future Survive It?


The punctuation mark at the title’s end refers to the elephantine question in the middle of Otellini’s record: Why did Intel miss out on the smartphone? Why did the company that so grandly dominates the PC market sit by while ARM architecture totally, and perhaps irretrievably, took over the new generation of phones — and most other embedded applications?

According to Otellini, it was the result of Intel’s inertia: It took a while to move the machine.

Madrigal backfills this uneasy explanation with equal unease:

“The problem, really, was that Intel’s x86 chip architecture could not rival the performance per watt of power that designs licensed from ARM based on RISC architecture could provide. Intel was always the undisputed champion of performance, but its chips sucked up too much power. In fact, it was only this month that Intel revealed chips that seem like they’ll be able to beat the ARM licensees on the key metrics.”

Note the tiptoeing: Intel’s new chips “seem like” they’ll be fast enough and cheap enough. Madrigal charitably fails to note how Intel, year after year, kept promising to beat ARM at the mobile game, and failed to do so. (See these 2010, 2011 and 2012 Monday Notes.) Last year, Intel was still at it, dismissively predicting “no future for ARM or any of its competitors“. Tell that to ARM Holdings, whose licensees shipped 2.6 billion chips in the first quarter of this year.

Elsewhere in the article, Otellini offers a striking revelation: Fresh from anointing Intel as the microprocessor supplier for the Mac, Steve Jobs came back and asked Intel to design and build the CPU for Apple’s upcoming iPhone. (To clarify the chronology, the iPhone was announced in early January 2007; the CPU conversation must have taken place two years prior, likely before the June 2005 WWDC where Apple announced the switch to x86. See Chapter 36 of Walter Isaacson’s Jobs bio for more.)

Intel passed on the opportunity [emphasis mine]:

“We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we’d done it, […]

Indeed, the world would have been different. Apple wouldn’t be struggling through a risky transition away from Samsung, its frenemy CPU supplier; the heart of the iPhone would be Made in America; and Intel would have supplied processors for more than 500 million iOS devices, sold even more such chips to other handset makers, and become as major a player in the smartphone (and tablet) space as it is in the PC world.

Supply your own adjectives…

Indulging briefly in more What If reverie, compare the impact of Intel’s wrong turn to a better one: What would the world look like if, at the end of 1996, Gil Amelio hadn’t handed Apple back to Steve Jobs? (My recollection of the transaction’s official wording could be faulty.)

So, again, what happened?

At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.

A little later, Otellini completes the train of thought with a wistful reverie, a model of la toilette des souvenirs:

“The lesson I took away from that was, while we like to speak with data around here, so many times in my career I’ve ended up making decisions with my gut, and I should have followed my gut,” he said. “My gut told me to say yes.”

The frank admission is meant to elicit respect and empathy. Imagine being responsible for missing the opportunity to play a commanding role in the smartphone revolution.

But perhaps things aren’t as simple as being a “gut move” short of an epochal $100B opportunity.

Intel is a prisoner of its x86 profit model and Wall Street’s expectations. Its dominant position in the x86 space gives Intel the pricing power to command high margins. There’s no such thing in the competitive ARM space; prices are lower. Even factoring in the lower inherent cost of the somewhat simpler devices (simpler for the time being; they’ll inevitably grow more complex), the profit per ARM chip is too thin to sustain Intel’s business model.

(Of course, this assumes a substitution, an ARM chip that displaces an x86 device. As it turns out, the smartphone business could have been largely additive, just as we now see with tablets that cannibalize classical PCs.)

Another factor is the cultural change that would have been required were Intel to have gotten involved in making ARM devices. As both the designer and manufacturer of generation after generation of x86 microprocessors, Intel can wait until they’re good and ready before they allow PC makers to build the chips into their next products. The ARM world doesn’t work that way. Customers design their own chips (often called a System on a Chip, or SoC), and then turn to a semiconductor manufacturer (a foundry) to stamp out the hardware. Taking orders from others isn’t in Intel’s DNA.

And now?

The answer might lie in another French expression: L’histoire ne repasse pas les plats. Google Translate is a bit more felicitous this time: History does not repeat itself. I prefer the more literal image — History doesn’t come around offering seconds — but the point remains: Will there be seconds at the smartphone repast?

Officially, Intel says its next generation of x86 processors will (finally!) topple the ARM regime, that their chips will offer more computing might with no cost or power dissipation penalty. In their parlance “the better transistor” (the basic unit of logic processing) will win.

I doubt it. The newer x86 devices will certainly help Microsoft and its OEMs make Windows 8 devices more competitive, but that won’t prevent the spread of ARM in the legion of devices on which Windows is irrelevant. For these, Intel would have to adopt ARM, a decision Otellini has left to the new tandem leadership of Brian Krzanich (CEO) and Renée James (President). Will they stick to the old creed, to the belief Intel’s superior silicon design and manufacturing technology will eventually overcome the disadvantages of the more complex x86 architecture? Or will they take the plunge?

They might be helped by a change in the financial picture.

In 2006, that is, after throwing Jobs into Samsung’s arms (pun unintended), Intel sold its ARM business, the XScale line, to Marvell. The reason was purely financial: for similar capital expenditures (costly fabs), ARM processors achieved much lower per-unit profit, because the ARM scene is much more competitive than the x86 space.

Now, if Intel really wants to get a place at the smartphone table with new and improved x86 devices, the company will have to price those to compete with established ARM players. In other words, Intel will have to accept the lower margins they shunned in 2006. Then, why not do it with the ARM-based custom processors Apple and others require?


(I’ll confess a weakness for The Atlantic and, in particular, for its national correspondent James Fallows, a literate geek and instrument-rated pilot who took it upon himself to live in Beijing for a while and, as a result, can speak more helpfully about China than most members of the Fourth Estate. Going back to last week’s reference to the Gauche Caviar, when my Café de Flore acquaintances fall into their usual rut of criticizing my adopted country for its lack of “culture”, I hold up The Atlantic — which sells briskly at the kiosk next door — as one of many examples of American journalistic excellence.

And, if you’re interested in more strange turns, see this other Alexis Madrigal piece in the same Atlantic: The Time Exxon Went Into the Semiconductor Business (and Failed). I was there, briefly running an Exxon Information Systems subsidiary in France and learning the importance of corporate culture.)–JLG

Two strategies: The Washington Post vs. The NYT


Both are great American newspapers, both suffer from the advertising slump and from the transition to digital. But the New York Times’ paywall strategy is making a huge difference. 

The Washington Post’s financials provide a good glimpse of the current state of legacy media struggling with the shift to digital. Unlike other large dailies, the components of the Post’s P&L appear clearly in its statements; they are not buried under layers of other activities. Product-wise, the Post remains a great news machine, collecting Pulitzer Prizes with clockwork regularity and fighting hard for scoops. The Post also epitomizes an old media company under siege from specialized, more agile outlets such as Politico, which break down the once-unified coverage provided by traditional large media houses. In an interview with the New York Times last year, Robert G. Kaiser, a former editor who had been with the paper since 1963, said this:

“When I was managing editor of The Washington Post, everything we did was better than anyone in the business,” he said. “We had the best weather, the best comics, the best news report, the fullest news report. Today, there’s a competitor who does every element of what we do, and many of them do it better. We’ve lost our edge in some very profound and fundamental ways.”

The iconic newspaper has been slow to adapt to the digital era. Its transformation really started around 2008. Since then, it has checked all the required boxes: integration of print and digital production; editors are now involved on both sides of the news operation and relentlessly push the newsroom to write more for the digital version; many blogs covering a wide array of topics have been launched; and the Post now has a good mobile application. The “quant” culture has also set in, with editors taking into account all the usual metrics and ratios associated with digital operations, including a live update of Google’s most relevant keywords prominently displayed in the newsroom. All this helped the Post collect 25.6 million unique visitors per month, vs. 4 to 5 million for Politico and 35 million for the New York Times, which historically enjoys a more global audience.

Overall, the Washington Post Company still relies heavily on its education business, as shown in the table below:

 Revenue:.......$4.0bn (-3% vs. 2011)
 Education:.....$2.2bn (-9%)
 Cable TV:......$0.8bn (+4%)
 Newspaper:.....$0.6bn (-7%)
 Broadcast TV:..$0.4bn (+25%)

But the education business is no longer the cash cow it used to be. Not only did its revenue decrease but, last year, it lost $105m vs. a $96m profit in 2011. As for the newspaper operation, it widened its losses to $53m in 2012 from $21m in 2011. And the trend is worsening: in the first quarter of 2013, the newspaper division’s revenue decreased by 4% vs. a year ago, and it lost $34m vs. $21m for Q1 2012.

Now, let’s move to a longer-term perspective. The chart below sums up the Post’s (and other legacy media’s) problem:

Translated into a table:

                  Q1-2007   Q1-2013  Change %
 Revenue (All):....$219m.....$127m.....-42%
 Print Ad:.........$125m.....$49m......-61%
 Digital Ad:.......$25m......$26m......+4%

A huge depletion in print advertising and, at best, a flat line for digital advertising: these two elements sum up the equation faced by traditional newspapers moving from print to online.

Now, let’s look at the circulation side using a comparison with the New York Times. (Note that it’s not possible to extract the same figures for advertising from the NYT Co.’s financial statements because they aggregate too many items.) The chart below shows the evolution of the paid circulation for the Post between 2007 and 2013:

..and for the NY Times:

Call it the paywall effect: The New York Times now aggregates both print and digital circulation. The latter now amounts to 676,000 digital subscribers recruited through the NYT’s metered system (see previous Monday Notes under the “paywall” tag). (Altogether, digital subscribers to the NYT, the International Herald Tribune and the Boston Globe now number 708,000.) It seems the NYT has found the right formula: its portfolio of digital subscribers grows at a rate of 45% per year, thanks to a combination of sophisticated marketing, customer-data mining and aggressive pricing (it even pushes special deals for Mother’s Day.) All this adds to the bottom line: if each digital sub brings in $12 a month, the result is about $100m a year that didn’t exist two years ago. But the advertising side doesn’t benefit; it continues to suffer. For the first quarter of 2013 vs. the same period last year, the NYT Company lost 13% in print ad revenue and 4% in digital ads. (As usual in their earnings calls, NYT officials mention the deflationary effect of ad exchanges as one cause of the erosion in digital ads.)
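The $100m figure is a simple annualization; a quick check of the arithmetic (the $12 monthly figure is the note’s own rough assumption):

```python
digital_subscribers = 676_000   # NYT metered-paywall subscribers cited above
monthly_revenue_per_sub = 12    # dollars, a rough assumption

annual_revenue = digital_subscribers * monthly_revenue_per_sub * 12
print(annual_revenue)   # 97344000, i.e. roughly $100m a year
```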

One additional sign that digital advertising will remain in the doldrums: Politico, too, is exploring alternatives; it will be testing a paywall in a sample of six states and for its readers outside the United States. The system will be metered, with a fixed number of articles available for free (see Politico’s management internal memo.)

It is increasingly clear that readers are more willing than we once thought to pay for content they value and enjoy. With more than 300 media companies now charging for online content in the U.S., the notion of paying to read expensive-to-produce journalism is no longer that exotic for sophisticated consumers.


Elon Musk’s Sweet Revenge


Elon Musk, Tesla’s CEO, saw his latest creation, the Model S – and himself – criticized by traditional media. Now Tesla has just scored its first profitable quarter, and Consumer Reports has put the Model S at the top of its rankings, making it possible for Musk’s company to become more than a niche player.

Palo Alto is known, primarily, as the cradle of high-tech. Its birth registry stretches from pre-World War II Hewlett-Packard, to Cisco, Sun Microsystems (after Stanford University Network), Logitech, and on to Google and Facebook.

But there’s an aspect of the town that’s rarely remarked upon. As a happy Palo Alto resident for 25 years as well as a half-century regular at the Café de Flore and Au Sauvignon, I can attest that Palo Alto vies with Paris’ Left Bank as the cynosure of the Gauche Caviar — the Caviar Left, the Volvo Liberals as they were known eons ago. Palo Altans, like the residents of the sixth arrondissement, have money and they’re willing to spend it (this isn’t constipated New England, after all) — but they only spend it in the proper way. And there’s no better way to demonstrate that you’re spending your money in a seemly fashion than to be seen driving the proper car.

The combination of tech culture, money, and sincere (if easily lampooned) social/ecological awareness makes Palo Alto an interesting place to watch automotive fashion wax and wane.

Walking Palo Alto’s leafy streets in the early 2000s, I witnessed the rise of the Prius. Rather than grafting “green” organs onto a Camry or a disinterred Tercel, Toyota’s engineers had designed a hybrid from the tires up…and they gave the car a distinctive, sui generis look. It was a stroke of genius, and it tickled us green. What better way to flaunt our concern for the environment while showing off our discerning tech taste than to be spotted behind the wheel of a Prius? (I write “us” without irony: I owned a Gen I and a Gen II Prius, and drive a Prius V in France.) Palo Alto was Prius City years before the rest of the world caught on. (The Prius is now the third best-selling car worldwide; more than a million were sold in 2012.)

The cute but artificial Volkswagen Beetle came and went. The Mini, on the other hand, has been a success. A coupling of British modesty and German engineering (the car is built by BMW), the Mini proved that Americans could fall in love with a small car.

The Smart, an even smaller car, hasn’t fared well at all. There are now more older Citroëns than Smarts on our streets. I also see some tiny Fiat 500s, but too few so far to call it a durable trend.

Then there’s Tesla. In 2008, when the Tesla Roadster came out, I watched it with mixed feelings: some in my neighborhood ended up on flatbeds, but I smiled as I saw Roadsters smoothly (and silently) outrun a Porsche when the traffic light turned green.

As much as I admired Elon Musk, Tesla’s founder and a serial entrepreneur of PayPal fame, I was skeptical. A thousand-pound battery and electric drive train in a Lotus frame…it felt like a hack. This was a beta release car, a $100k nano-niche vehicle. It wasn’t seemly.

Musk muscled his way through, pushed his company onto firmer financial ground, and, in June 2012, Tesla began delivery of the Model S. This is a “real” car with four doors, a big trunk (two, actually, front and back), and a 250 mile (400 km) range. Right away, the sales lot at Tesla’s corporate store in nearby Menlo Park was packed. I started to see the elegant sedan on our streets, and within a few months there were three Model Ss in the parking garage at work. With their superior range, they rarely feed from the EV charging stations. (The Nissan Leaf, on the other hand, is a constant suckler.)

This was a big deal. The company had jumped straight from beta to Tesla 2.0. The bigwigs in the automotive press agreed: Motor Trend and Automobile Magazine named the Model S their 2012 Car of the Year.

Actually, not all the bigwigs agreed. The New York Times’ John Broder gushed over the Model S’s futuristic engineering (“The car is a technological wonder”), but published an ultimately negative story titled Stalled Out on Tesla’s Electric Highway. The battery wouldn’t hold a charge, the car misreported its range, Tesla support gave him bad information… The car ended up being hauled off on a flatbed.

Broder’s review didn’t evince much empathy from Elon Musk, a man who clearly doesn’t believe the meek will inherit the Earth. In a detailed blog post backed up by the data logged by the car, Tesla’s CEO took Broder to task for shoddy and fallacious reporting:

As the State of Charge log shows, the Model S battery never ran out of energy at any time, including when Broder called the flatbed truck…
During the second Supercharge… he deliberately stopped charging at 72%. On the third leg, where he claimed the car ran out of energy, he stopped charging at 28%.

More unpleasantness ensued, ending with an uneasy statement from Margaret Sullivan, The NYT’s Public Editor: Problems With Precision and Judgment, but Not Integrity, in Tesla Test, and with Musk claiming that the NYT story had cost Tesla $100M in market cap.

Other writers, such as David Thier in Forbes, rushed to Broder’s defense for no reason other than an “inclination”:

I’m inclined to trust the reporter’s account of what happened, though at this point, it barely matters. The original story is so far removed that mostly what we have now is a billionaire throwing a temper tantrum about someone who said mean things about him.

In “Why the great Elon Musk needs a muzzle” (sorry, no link; the article is iPad only), Aaron Robinson of Car and Driver magazine condemns Musk for the sin of questioning the infallibility of the New York Times.

(There’s no need to pile onto this argument, but let’s note that the NYT’s foibles are well-documented, such as, I can’t resist, its tortured justification for not using the word “torture” when dealing with “enhanced interrogation”.)

None of this dampened the enthusiasm of customers living in our sunnier physical and psychological clime. I saw more and more Model Ss on the streets and freeways. Most telling, the Model S became a common sight in the parking lot at Alice’s Restaurant up the hill in Woodside, a place where bikers and drivers of fashionable cars, vintage and cutting edge, gather to watch and be watched.

Publishing deadlines can be cruel. A few days after Robinson’s story appeared in Car and Driver, Tesla released its quarterly numbers for Q1 2013:

Tesla’s $555M in revenue is an astonishing 20x increase compared to the same quarter a year ago. Tesla is now profitable; shares jumped by more than 37% in two trading sessions. On Wall Street paper, the company’s $8.77B market cap makes it worth about 20% of GM’s $42.93B capitalization… Musk got his “lost $100M” back and more.

Curiously, the numbers also show that while Operations were in the red, the company recorded a Net Income of $11M. How is that possible? The explanation is “simple”: If your car company manufactures vehicles that surpass (in a good way) California’s emissions standards, the state hands you Zero Emissions Vehicle Credits for your good behavior. You can then sell your virtue to the big car companies – Chrysler, Ford, GM, Honda — who must comply with ZEV regulations. For Tesla, this arrangement resulted in “higher sales of regulatory credits including $67.9 million in zero emission credit sales”.

Tesla is careful to note that this type of additional income is likely to disappear towards the end of 2013. (For a more detailed analysis of Tesla’s numbers see this post from The Truth About Cars, a site that recommends itself for not being yet another industry mouthpiece.)

The numbers point to a future where Tesla can leave its niche and become a leading manufacturer in a too-often stodgy automotive industry. And, of course, we Silicon Valley geeks take great pleasure in a car that updates its software over the air, like a smartphone; that has a 17″ touchscreen; and that’s designed and built right here (the Tesla factory is across the Bay in the NUMMI plant that was previously occupied by Toyota and GM).

A last dollop of honey in Elon’s revenge: Coinciding with the Car and Driver screed, Consumer Reports gave the Model S its top test score. After driving a friend’s Model S at adequate freeway speeds, I agree: it’s a wonderful car, a bit of the future available today.

Some say the Model S is still too pricey, that it’s only for the very well-off who can afford a third vehicle, that it will never reach a mass audience. It’s a reasonable objection, but consider Ferrari: It sold 7,318 cars in 2012 and says it will restrict output in 2013 to fewer than 7,000 to “keep its exclusivity” — in other words, it must adapt to the slowing demand in Europe and, perhaps, Asia. Last year, Land Rover sold about 43,000 cars in the US. By comparison, Tesla will sell about 20,000 cars this year and expects to grow further as it opens international distribution.

One more thing: Elon Musk is also the CEO of SpaceX, a successful maker of another type of vehicle: space-launch rockets.