
MSFT Hardware Futures

December 22, 2014


(Strangely, the WordPress software gives me a “Bad Gateway 502” error message when I fully spell the name of the Redmond company)

by Jean-Louis Gassée

Microsoft’s hardware has long been a source of minor profit and major pain. In this last 2014 Monday Note, we’ll look at the roles Microsoft’s hardware devices will play — or not —  in the company’s future.

Excluding keyboards and the occasional Philippe Starck mouse, Microsoft makes three kinds of hardware: Game consoles, PC-tablet hybrids, and smartphones. We’ll start with the oldest and least problematic category: Game consoles.

Building on the success of DOS and its suite of business applications, Microsoft brought forth the MSX reference platform in 1983. This was a Bill Gates-directed strategic move: he didn’t want to leave the low end of the market “unguarded”. Marketed as “home computers”, meaning less capable than a “serious” PC, MSX-branded machines were manufactured by the likes of Sony and Yamaha, but the platform’s only serious impact was in gaming. As the Wikipedia article says, “MSX was the platform for which major Japanese game studios, such as Konami and Hudson Soft, produced video game titles.”

For the next two decades, gaming remained a hobby for Microsoft. This changed in 2001 when the company took the matter into its own hands and built the Xbox. Again, the company wanted to guard against “home invasions”.

With its Intel processor and customized version of Windows, the first iteration of the Xbox was little more than a repackaged PC. The 2005 Xbox 360 was a heartier offering: It featured an IBM-designed PowerPC-derivative processor and what some call a “second-order derivative” of Windows 2000 ported to the new CPU.

Now we have the Xbox One. Launched in 2013, the platform is supported by a full-fledged ecosystem of apps, media store, and controllers such as the remarkable Kinect motion sensor.

Success hasn’t been easy. The first Xbox sold in modest numbers, 24 million units in about five years. Sales of the second generation Xbox 360 were better — almost 80 million through 2013 — but it was plagued with hardware problems, colloquially known as the Red Ring of Death. Estimates of the number of consoles that were afflicted range from 23% to more than 54%. Predictably, poor reliability translated into heavy financial losses, as much as $2B annually. Today’s Xbox One fares a little better: It lost only $800M for the first eight months of its life, selling 11.7M units in the process.

Microsoft’s latest numbers bundle Xbox game consoles and Surface tablet-PCs into a single Computing & Gaming category that makes up $9.7B of the company’s $87B in revenue for the 2014 Fiscal Year. This means Xbox consoles contribute less than 10% of total sales, which is probably why Satya Nadella, Microsoft’s new CEO, has carefully positioned the Xbox business as less than central to the company’s business:

“I want us to be comfortable to be proud of Xbox, to give it the air cover of Microsoft, but at the same time not confuse it with our core.”

In other words, the Xbox business can continue… or it could disappear. Either way, it won’t have much effect on Microsoft’s bottom line or its future.

For the moment, and with the assistance of a holiday price cut, Xbox One sales are topping those of the Sony PS4, but that shouldn’t take our attention away from a more important trend: The rise of mobile gaming. Smartphones are gaining in raw computing power, connectivity, display resolution, and, as a result, support from game developers on both Android and iOS platforms. Larger, more capable game consoles aren’t going away, but their growth is likely to slow down.

The history of Xbox problems, Nadella’s lukewarm embrace of the series, the ascendancy of mobile gaming… by comparison, the Surface tablet should look pretty good.

It doesn’t.

When Steve Ballmer introduced the Surface device in June, 2012, he justified Microsoft’s decision to compete with its own Windows licensees by the need to create a “design point”, a reference for a new type of device that would complement the “re-imagined” Windows 8.


Two and a half years later, we know two things: Surface tablet sales have been modest (about $2B in the 2014 Fiscal Year ended June 30th), and Windows 8 frustrated so many users that Microsoft decided to re-re-imagine it and will re-introduce it as Windows 10, scheduled to be released in mid-2015.

Microsoft believes its Surface combines the best of the PC with the best of a tablet. While the hybrid form has given rise to some interesting explorations by PC makers, such as the Yoga 3 Pro by Lenovo, many critics — and not just Apple — condemn the hybrid as a compromise, as a neither-nor device that sub-optimizes both its tablet and its PC functions (see the tepid welcome given to the HP Envy).

What would happen if Microsoft stopped making Surface Pro tablets? Not much… perhaps a modest improvement in the company’s profit picture. While the latest quarter of Surface Pro 3 sales appears to have brought a small positive gross margin, Surface devices have cost Microsoft about $1.7B over the past two years. Mission accomplished for the “design point”.

We now turn to smartphones.

Under the Ballmer regime, Microsoft acquired Nokia rather than let its one and only real Windows Phone licensee collapse. It was a strategic move: Microsoft was desperate to achieve any sort of significance in the smartphone world after seeing its older Windows Mobile platform trounced by Google’s Android and Apple’s iOS.

In the latest reported quarter (ended September 30th 2014), Windows Phone hardware revenue was $2.6B. For perspective, iPhone revenue for the same period was $23.7B. Assuming that Apple enjoys about 12% of the world smartphone market, quarterly worldwide revenue for the sector works out to about $200B… of which Microsoft gets 1.3%. Perhaps worse, a recent study says that Microsoft’s share of the all-important China smartphone market is “almost non-existent at 0.4 percent”. (China now has more than twice as many smartphone users, 700M, as the US has people, 319M.)
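The arithmetic behind that 1.3% figure is simple enough to check on the back of an envelope. Here is a quick sketch in Python (the 12% Apple revenue share is, as the article notes, an assumption):

```python
# Back-of-the-envelope check of the smartphone revenue shares cited above.
# Revenue figures come from the article; the 12% Apple share is an assumption.
iphone_revenue = 23.7e9        # Apple iPhone revenue, quarter ended Sept 30, 2014
apple_share = 0.12             # assumed Apple share of worldwide smartphone revenue
windows_phone_revenue = 2.6e9  # Microsoft Windows Phone hardware revenue, same quarter

# If Apple's $23.7B is ~12% of the sector, the whole sector is:
sector_revenue = iphone_revenue / apple_share   # ~$197.5B, i.e. "about $200B"
microsoft_share = windows_phone_revenue / sector_revenue

print(f"Worldwide quarterly smartphone revenue: ${sector_revenue / 1e9:.0f}B")
print(f"Microsoft's slice: {microsoft_share:.1%}")  # ~1.3%
```

Change the assumed Apple share and Microsoft’s slice barely moves; the order of magnitude is the story.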

Hardware development costs are roughly independent of volume, as is the cost of running an OS development organization. But per-unit hardware production costs are unfavorably impacted by low volumes. Windows Phones sell in smaller numbers and cost more to make, putting Microsoft’s smartphone business in a dangerous downward spiral. As Horace Dediu once remarked, the phone market doesn’t forgive failure. Once a phone maker falls into the red, it’s nearly impossible to climb back into the black.
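The cost asymmetry described above can be sketched with a toy model: a fixed development cost amortized over unit volume, plus a per-unit production cost. All numbers below are hypothetical, purely for illustration:

```python
# Toy per-unit cost model: fixed development cost amortized over volume,
# plus a variable production cost. All figures are hypothetical.

def unit_cost(fixed_cost, variable_cost, units):
    """Per-unit cost = amortized fixed cost + per-unit production cost."""
    return fixed_cost / units + variable_cost

fixed = 2e9       # hypothetical OS + hardware development budget
variable = 180.0  # hypothetical per-unit production cost at scale

for units in (10e6, 50e6, 150e6):
    print(f"{units / 1e6:>5.0f}M units -> ${unit_cost(fixed, variable, units):.0f} per phone")
```

At 10M units a year, the (hypothetical) fixed costs add $200 to every phone; at 150M, barely $13. That, in one loop, is why low volume is lethal in this business.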

What does all this mean for Microsoft?

Satya Nadella, the company’s new CEO, uses the phrase “Mobile First, Cloud First” to express his top-level strategy. It’s a clear and relevant clarion call for the entire organization, and Microsoft seems to do well in the Cloud. But how does the Windows Phone death spiral impact the Mobile First part?

In keeping with its stated strategy, the company came up with Office apps on iOS and Android, causing bewilderment and frustration among Windows Phone loyalists who felt they’d been left behind. Versions of Office on the two leading mobile platforms ensure Microsoft’s presence on most smartphones, so why bother making Windows Phones?

Four and a half years ago, in a Monday Note titled Science Fiction: Nokia Goes Android, I fantasized that Nokia ought to drop its many versions of Symbian and adopt Android instead. Nokia insiders objected that embracing a “foreign OS” would cause them to lose control of their destiny. But that’s exactly what happened to them anyway when they jumped into bed with Stephen Elop and, a bit later, with Windows Phone. This started a process that severely damaged phone sales, ending with Microsoft’s acquisition of what was already a captive licensee.

Now the Android question rises again.

Should Microsoft pursue what looks like a manly but losing Windows Phone hardware strategy or switch to making and selling Android phones? Or should it drop an expensive smartphone design, manufacturing, and distribution effort altogether, and stay focused on what it does already, Mobile First, Cloud First applications?


The Rise of AdBlock Reveals A Serious Problem in the Advertising Ecosystem

December 8, 2014


By Frédéric Filloux

Seeing a threat to their ecosystem, French publishers follow their German colleagues and prepare to sue startup Eyeo GmbH, the creator of anti-advertising software AdBlock Plus. But they cannot ignore that, by using ABP, millions of users actively protest against the worst forms of advertising. 

On grounds that it represents a major economic threat to their business, two groups of French publishers are considering a lawsuit against AdBlock Plus creator Eyeo GmbH. (Les Echos broke the news in this story, in French.)
Plaintiffs are said to be the GESTE and the French branch of the Internet Advertising Bureau. The first is known for its aggressive stance against Google via its contribution to the Open Internet Project. (To be clear, GESTE says it is at a “legal consulting stage”; no formal complaint has been filed yet.) By its actions, the second plaintiff is in fact acknowledging its failure to tame the excesses of the digital advertising market.

Regardless of its validity, the legal action misses a critical point. By downloading the plug-in AdBlock Plus (ABP) on a massive scale, users do vote with their mice against the growing invasiveness of digital advertising. Therefore, suing Eyeo, the company that maintains ABP, is like using Aspirin to fight cancer. A different approach is required but very few seem ready to face that fact.

I use AdBlock Plus on a daily basis. I’m not especially proud of this, nor do I support anti-advertising activism; I use the ad-blocker for practical, not ideological, reasons. On too many sites, the invasion of pop-up windows and heavily animated ad “creations” has become an annoyance, a visual and a technical one. When a page loads, the HTML code “calls” all sorts of modules, sometimes 10 or 15. Each sends a request to an ad server, and sometimes, for the richest content, the ad elements trigger the activation of a third-party plug-in like Adobe’s Shockwave, which works hard to render the animated ads. Most of the time, these ads are poorly optimized because creative agencies don’t waste their precious time on such trivial tasks as providing clean, efficient code to their clients. As a consequence, the computer’s CPU is heavily taxed; it overheats, making the fans buzz loudly. Suddenly, you feel like your MacBook Pro is about to take off. That’s why, with a couple of clicks, I installed AdBlock Plus. ABP has spared me several thousand ad exposures. My surfing is now faster, crash-free, and web pages look better.

I asked around and I couldn’t find a friend or a colleague not using the magic plug-in. Everyone seems to enjoy ad-free surfing. If this spreads, it could threaten the very existence of a vast majority of websites that rely on advertising.

First, a reality check: How big and dangerous is the phenomenon? PageFair, a startup based in Dublin, Ireland, comes up with some facts. Here are key elements drawn from a 17-page PDF document available here.





Put another way, if your site or your apps are saturated with pop-up windows and screaming videos that are impossible to mute or skip, you are encouraging the adoption of AdBlock Plus — and once it’s installed on a browser, do not expect any turning back. As an example of an unwitting ABP advocate:


Eyeo’s AdBlock Plus takes advertising rejection into its own hands — but these are greedy and dirty ones. Far from being the work of a selfless white knight, Eyeo’s business model borders on racketeering. In its Acceptable Ads Manifesto, Eyeo extols the virtues of what the company feels are tolerable formats:

1. Acceptable Ads are not annoying.
2. Acceptable Ads do not disrupt or distort the page content we’re trying to read.
3. Acceptable Ads are transparent with us about being an ad.
4. Acceptable Ads are effective without shouting at us.
5. Acceptable Ads are appropriate to the site that we are on.

Who could disagree? But such blandishments go with a ruthless business model that attests to the merits of straight talk:

We are being paid by some larger properties that serve non-intrusive advertisements that want to participate in the Acceptable Ads initiative.
Whitelisting is free for all small and medium-sized websites and blogs. However, managing this list requires significant effort on our side and this task cannot be completely taken over by volunteers as it happens with common filter lists.
Note that we will never whitelist any ads that don’t meet these criteria. There is no way to buy a spot in the whitelist. Also note that whitelisting is free for small- and medium-sized websites.
In addition, we received startup capital from our investors, like Tim Schumacher, who believe in Acceptable Ads and want to see the concept succeed.

Of course, there is no public rate card. Eyeo doesn’t provide any measure of what defines “small and medium-sized websites” either. A site with 5 million monthly uniques can be small in the English-speaking market but huge in Finland. And the number of “larger properties”, and the amount they had to pay to be whitelisted, remains a closely guarded secret. According to some German websites, Eyeo is said to have snatched $30m from big internet players; not bad for an operation of fewer than 30 people (depending on the recurrence of this “compliance fee”, for lack of a better term).

There are several issues here.

One, a single private entity cannot decide what is acceptable or not for an entire sector. Especially in such an opaque fashion.

Two, we must admit that Eyeo GmbH is filling a vacuum created by the incompetence and sloppiness of the advertising community, namely creative agencies, media buyers, and the organizations that are supposed to coordinate the whole ecosystem (such as the Internet Advertising Bureau).

Three, the rise of ad blockers is the offspring of two major trends: a continual deflation of digital ads economics, and the growing reliance on ad exchanges and Real Time Bidding, both pushing prices further down.

Even Google begins to realize that the explosion of questionable advertising formats has become a problem. Proof is its recent Contributor program that proposes ad-free navigation in exchange for a fee ranging from $1 to $3 per month (read this story on NiemanLab, and more in a future Monday Note).

The growing rejection of advertising that AdBlock Plus is built upon is indeed a threat to the ecosystem, and it needs to be addressed decisively: for example, by bringing publishers and advertisers to the same table to design ways to clean up the ad mess. But the entity and leaders who can do the job have yet to be found.


Apple Watch: Hard Questions, Facile Predictions

December 8, 2014


by Jean-Louis Gassée

Few Apple products have agitated forecasters and competitors as much as the company’s upcoming watch. The result is an escalation of silly numbers – and one profound observation from a timepiece industry insider.

Apple Watch 2015 sales predictions are upon us: 10 million, 20 million, 24 million, 30 million, even 40 million! Try googling “xx million apple watch”, you won’t be disappointed. Microsoft’s Bing doesn’t put a damper on the enthusiasm either: It finds a prediction for first year sales of 60 million Apple Watches!

These are scientific, irony-free numbers, based on “carefully weighed percentages of iPhone users” complemented by investigations into “supplier orders” and backed up by interviews with “potential buyers”. Such predictions reaffirm our notion that the gyrations and divinations of certain anal-ists and researchers are best appreciated as black comedy — cue PiperJaffray’s Gene Munster with his long-running Apple TV Set gag.

Fortunately, others are more thoughtful. They consider how the product will actually be experienced by real people and how the new Apple product will impact the watch industry.

As you’ll recall from the September 14th “Apple Watch Is And Isn’t”, Jean-Claude Biver, the LVMH executive in charge of luxury watch brands such as Hublot and TAG Heuer, offered his frank opinion of the “too feminine” AppleWatch:

“To be totally honest, it looks like it was designed by a student in their first trimester.” 

At the time, it sounded like You Don’t Need This sour grapes from a disconcerted competitor. But recently, Biver has also given us deeper, more meaningful thoughts:

“A smartwatch is very difficult for us because it is contradictory,” said Mr. Biver. “Luxury is supposed to be eternal … How do you justify a $2,000 smart watch whose technology will become obsolete in two years?” he added, waving his iPhone 6. 

Beautiful. All the words count. Luxury and Eternity vs. Moore’s Law.

To help us think about the dilemma that preoccupies the LVMH exec, let’s take a detour through another class of treasured objects: Single Lens Reflex cameras.



Unless you were a photojournalist or fashion photographer taking hundreds of pictures a day, these cameras lasted forever. A decade of use would come and go without impact on the quality of your pictures or the solid feel of the product. People treasured their Hasselblads, Leicas (not an SLR), Canons, and more obscure marques such as the Swiss Alpa. (I’m a bit partial here: I bought a Nikon exactly like the one pictured above back in 1970.)

These were purely mechanical marvels. No battery; the light sensor was powered by… light.

Then, in the mid-nineties, digital electronics began to sneak in. Sensor chips replaced silver-halide film; microcomputers automated more and more of the picture-taking process.

The most obvious victim was Eastman Kodak, a company that had dominated the photographic film industry for more than a century – and filed for bankruptcy in 2012. (A brief moment of contemplation: Kodak owned many digital photography patents and even developed the first digital camera in 1975, but “…the product was dropped for fear it would threaten Kodak’s photographic film business.” [Wikipedia].)

The first digital cameras weren’t so great. Conventional film users rightly criticized the lack of resolution, the chromatic aberrations, and other defects of early implementations. But better sensors, more powerful microprocessors, and clever software won the day. A particular bit of cleverness that has saved a number of dinner party snapshots was introduced in the late-nineties: A digital SLR sends a short burst of flash to evaluate the scene, and then uses the measurements to automatically balance shutter speed and aperture, thus correcting the classical mistake of flooding the subject in the foreground while leaving the background in shadows.
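For the curious, here is a highly simplified sketch of that pre-flash balancing idea (the logic and numbers are illustrative only, not any camera maker’s actual metering algorithm):

```python
# Toy model of pre-flash fill balancing: fire a weak pre-flash, measure how
# much it brightens the foreground, then scale the main flash so the
# foreground's exposure roughly matches the metered background level.
# Purely illustrative; real cameras meter zones, distances, and more.

def main_flash_power(ambient_fg, preflash_fg, target, preflash_power=1.0):
    """
    ambient_fg:  foreground luminance with no flash
    preflash_fg: foreground luminance measured during the pre-flash burst
    target:      desired foreground luminance (e.g. the background reading)
    Returns the main-flash power, in units of the pre-flash power.
    """
    gain_per_unit_power = (preflash_fg - ambient_fg) / preflash_power
    needed = max(target - ambient_fg, 0.0)  # no flash needed if already bright
    return needed / gain_per_unit_power

# A dim foreground (20) against a brighter background (100): the pre-flash at
# power 1.0 lifts the foreground to 30, so each unit of power adds 10.
print(main_flash_power(ambient_fg=20, preflash_fg=30, target=100))  # 8.0
```

The point of the trick is exactly what the paragraph describes: the camera learns, from one cheap measurement, how much the flash actually reaches the subject, then exposes foreground and background consistently instead of blasting the former and losing the latter.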

Digital cameras have become so good we now have nostalgia “film packs” that recreate the defects — sorry, the ambiance — of analog film stock such as Ektachrome or Fuji Provia.

But Moore’s Law exacts a heavy price. At the high end, the marvelous digital cameras from Nikon, Canon, and Sony are quickly displaced year after year by new models that have better sensors, faster microprocessors, and improved software. Pros and prosumers can move their lenses — the most expensive pieces of their equipment — from last year’s model to this one’s, but the camera body is obsolete. In this regard, the most prolific iterator seems to be Sony, today’s king of sensor chips; the company introduces new SLR models once or twice a year.

At the medium to low end, the impact of Moore’s law was nearly lethal. Smartphone cameras have become both so good and so convenient (see Chase Jarvis’ The Best Camera is the One That’s With You) that they have displaced almost all other consumer picture taking devices.

What does the history of cameras say for watches?

At the high-end, a watch is a piece of jewelry. Like a vintage Leica or Canon mechanical camera, a Patek watch works for decades, it doesn’t use batteries, and it doesn’t run on software. Mechanical watches have even gained a retro chic among under-forty urbanites who have never had to wind a stem. (A favorite of techies seems to be the Officine Panerai.)

So far, electronic watches haven’t upended the watch industry. They’ve mostly replaced a spring with a battery and have added a few functions and indicator displays – with terrible user interfaces. This is about to change. Better/faster/cheaper organs are poised to invade watches: sensors, microprocessors + software, wireless links…

Jean-Claude Biver is right to wonder how the onslaught of ever-improving technology will affect the “eternity” of the high-end, fashion-conscious watch industry… and he’ll soon find out: He’s planning a (yet-to-be-announced) TAG Heuer smartwatch.

With this in mind, Apple’s approach is intriguing: The company plays the technology angle, of course, and has loaded their watch with an amazing — some might say disquieting — amount of hardware and software, but they also play the fashion and luxury game. The company invited fashion writers to the launch; it hosted a celebrity event at Colette in Paris with the likes of Karl Lagerfeld and Anna Wintour in attendance. The design of the watch, the choice of materials for the case and bands/bracelets… Apple obviously intends to offer customers a differentiated combination of traditional fashion statement and high-tech functions.

But we’re left with a few questions…

Battery life is one question — we don’t know what it will be. The AppleWatch user interface is another.

The product seems to be loaded with features and apps… will users “get” the UI, or will they abandon hard-to-use functions, as we’ve seen in many of today’s complicated watches?

But the biggest question is, of course, Moore’s Law. Smartphone users have no problem upgrading every two years to new models that offer enticing improvements, but part of that ease is afforded by carrier subsidies (and the carriers play the subsidy game well, despite their disingenuous whining).

There’s no carrier subsidy for the AppleWatch. That could be a problem when Moore’s Law makes the $5K high-end model obsolete. (Expert Apple observer John Gruber has wondered if Apple could just update the watch processor or offer a trade-in — that would be novel.)

We’ll see how all of this plays out with regard to sales. I’ll venture that the first million or so AppleWatches will sell easily. I’ll certainly buy one, the entry-level Sports model with the anodized aluminum case and elastomer band. If I like it, I’ll even consider the more expensive version with a steel case and ingenious Marc Newson link bracelet — reselling my original purchase should be easy enough.

Regardless of the actual sales, first-week numbers won’t matter. It’s what happens after that that matters.

Post-purchase Word of Mouth is still the most potent marketing device. Advertising might create awareness, but user buzz is what makes or breaks products such as a watch or phone (as opposed to cigarettes and soft drinks). It will take a couple of months after the AppleWatches arrive on the shelves before we can judge whether or not the product will thrive.

Only then can we have a sensible discussion about how the luxury segment of the line might plan to deal with the eternity vs. Moore’s Law question.


Hard Comparison: Legacy Media vs. Digital Native

November 24, 2014


by Frédéric Filloux

From valuations to management cultures, the gap between legacy media companies and digital-native ones seems to widen. The chart below maps the issues and shows where efforts should focus.

At conferences and workshops in Estonia, Spain, and the US, most of the discussions I’ve had recently ended up zeroing in on the cultural divide between legacy media and internet natives. About fifteen years into the digital wave, the tectonic plates seem to drift farther apart than ever. On one side, most media brands — the surviving ones — are still struggling with an endless transition. On the other, digital-native companies, all with deeply embedded technology, expand at an incredible pace. Hence the central question: can legacy media catch up? What are the most critical levers to pull in order to accelerate change?

Once again, it’s not a matter of a caricatural opposition between fossilized media brands and agile, creative media startups. The reality is far more complex. I come from a world in which information had a price and a cost; facts were verified; seasoned editors called the shots; readers were demanding and loyal — and journalists occasionally autistic. I come from the culture of great stories, intense competition (now gone), and the certitude of the important role of great journalism in society.

That said, I simply had the luck to be in the right place at the right time to embrace the new culture: Small companies, starting on a blank slate with the unbreakable faith and systemic understanding that combine into a vision of growth and success, all wrapped-up in the virtues of risk-taking. I always wanted to believe that the two cultures could be compatible — in fact, I hoped the old world would be able to morph swiftly and efficiently enough to catch the wave, deal with new kinds of readers, with a wider set of technologies and a proteiform competition. I still want to believe this.

In the following chart, I list the most critical issues and pinpoint the areas of transformation that are both the most urgent and the easiest to address.



1. Funding: The main reason why newcomers are able to quickly leave the incumbents in the dust. When venture firms compete to provide $160m to Flipboard, $61m to Vox Media, or $96m to BuzzFeed, the consequences are not just staggering valuations. Abundant funds translate into the ability to hire more and better-qualified people. Just one example: Netflix’s recommendation system — critical to ensure both viewer engagement and retention — can count on a $150m yearly budget, far more than the entire revenue of many mid-sized media companies. The fact is, old media companies in transition will never be able to attract such levels of funding, due to inherent scalability limitations (it is extremely rare to see a legacy media corporation suddenly jump out of its ancestral business).

2. Resource Allocation. Typically, the management team of a legacy media company will assign just enough resources to launch a product or service and hope for the best. This deliberate scarcity has several consequences. From the start, the project team will be in fight-for-survival mode internally (vs. other projects or “historical” operations). Second, in the (likely) case of a failure, it will be difficult to find the cause: Was the product or service inherently flawed? Or did it fail to achieve “ignition” because the approach was too cautious? The half-baked, half-supported legacy product might stagnate forever, neither making sufficient money to be seen as a success nor losing enough to justify a termination. By contrast, a digital-native corporation will go at full throttle from day one, with scores of managers, engineers, and marketers, and sufficient development time for tests, market research, promotion, etc. The idea is to succeed — or to fail, but fast and clearly.

3. Approach to timing. The tragedy for the vast majority of legacy media is that they no longer have the luxury of long-term thinking. Shareholder pressure and weak P&Ls impose quick results. By contrast, most digital companies are built for the long term: Their management is asked to grow, conquer, secure market positions, and only then monetize. It can take years, as seen in many instances, from Flipboard to Amazon (which might have pushed the envelope a bit too far).

4. Scalability vs. sustainability. Many reasons — readership structure, structurally constrained markets — explain the difficulty for legacy media to scale up. At the polar opposite, disrupters like Uber or AirBnB, or super-optimizers such as BuzzFeed or The Huffington Post are designed and built to scale — globally.

5. Customer relations. On this aspect, the digital world has reset the standard. All of a sudden, legacy media companies appeared outdated when it comes to customer satisfaction, from poor subscription handling to the virtuous circle of acquisition-engagement-retention of customers.

In the chart above, my allocation of purple dots (feasibility) illustrates the height of the hurdles facing large, established media brands. Many components remain extremely hard to move — I personally experience that on a daily basis. But there is no excuse not to take better care of customers, reward the risk-taking of committed staffers, assign resources decisively, or instill a better sense of competition.


Clayton Christensen Becomes His Own Devil’s Advocate

November 24, 2014


by Jean-Louis Gassée

Every generation has its high-tech storytellers, pundits who ‘understand’ why products and companies succeed and why they fail. And each successive generation tosses out the stories of its elders. Perhaps it’s time to dispense with “Disruption”.

“I’m never wrong.”

Thus spake an East Coast academic, who, in the mid- to late-eighties, parlayed his position into a consulting money pump. He advised — terrorized, actually — big company CEOs with vivid descriptions of their impending failure, and then offered them salvation if they followed his advice. His fee was about $200K per year, per company; he saw no ethical problem in consulting for competing organizations.

The guru and I got into a heated argument while walking around the pool at one of Apple’s regular off-sites. When I disagreed with one of his wild fantasies, his retort never varied: I’m never wrong.

Had I been back in France, I would have told him, in unambiguous and colorful words, what I really thought, but I had acclimated myself to the polite, passive-aggressive California culture and used therapy-speak to “share my feelings of discomfort and puzzlement” at his Never Wrong posture. “I’ve always been proved right… sometimes it simply takes longer than expected”, was his comeback. The integrity of his vision wasn’t to be questioned, even if reality occasionally missed its deadline.

When I had entered the tech business a decade and a half earlier, I marveled at the prophets who could part the sea of facts and reveal the True Way. Then came my brief adventures with the BCG-advised diversification of Exxon into the computer industry.

Preying on the fear of The End of Oil in the late-seventies, consultants from the prestigious Boston company hypnotized company executives with their chant: Information Is The Oil of The 21st Century. Four billion dollars later (a lot of money at the time), Exxon finally recognized the cultural mismatch of the venture and returned to the well-oiled habits of its hearts and minds.

It was simply a matter of time, but the BCG was ultimately proved right — we now have our new Robber Barons of zeroes and ones. But they were wrong about something even more fundamental but slippery, something they couldn’t divine from their acetate foils: culture.

A little later, we had In Search of Excellence, the 1982 best-seller that turned into a cult. Tom Peters, the more exuberant of the book’s two authors, was a constant on pledge-drive public TV. As I watched him one Sunday morning with the sound off, his sweaty fervor and cutting gestures reminded me of the Bible-thumping preacher, Jimmy “I Sinned Against You” Swaggart. (These were my early days in California; I flipped through a lot of TV channels before Sunday breakfast, dazzled by the excess.)

Within a couple of years, several of the book’s exemplary companies — NCR, Wang, Xerox — weren’t doing so well. Peters’ visibility led to noisy accusations and equally loud denials of faking the data, or at least of carefully picking particulars.

These false prophets commit abuses under the color of authority. They want us to respect their craft as a form of science, when what they’re really doing is what Neil Postman, one of my favorite curmudgeons, views as simple storytelling: They felicitously arrange the facts in order to soothe anxiety in the face of a confusing if not revolting reality. (Two enjoyable and enlightening Postman books: Conscientious Objections, a series of accessible essays, and Amusing Ourselves To Death, heavier, very serious fare.)

A more recent and widely celebrated case of storytelling in a scientist’s lab coat is Clayton Christensen’s theory of disruptive innovation. In order to succeed these days — and, especially, to pique an investor’s interest — a new venture must be disruptive, with extra credit if the disrupter has attended the Disrupt conference and bears a Renommierschmiss from the Startup Battlefield.


Christensen’s body of work is (mostly) complex, sober, and nuanced storytelling that’s ill-served by the overly-simple and bellicose Disruption! battle cry. Nonetheless, I’ll do my share and provide my own tech world simplification: The incumbency of your established company is forever threatened by lower cost versions of the products and services you provide. To avoid impending doom, you must enrich your offering and engorge your price tag. As you abandon the low end, the interloper gains business, muscles up, and chases you farther up the price ladder. Some day — and it’s simply a matter of time — the disruptor will displace you.

According to Christensen, real examples abound. The archetypes, in the tech world, are the evolution of the disk drive and the disruptive ascension from mainframe to minicomputer to PC, and today’s SDN (Software Defined Networking) entrants.

But recently, skeptical voices have disrupted the Disruption business.

Ben Thompson (@monkbent) wrote a learned paper that explains What Clayton Christensen Got Wrong. In essence, Ben says, disruption theory is an elegant explanation of situations where the customer is a business that’s focused on cost. If the customer is a consumer, price is often trumped by the ineffable values (ease-of-use, primarily) that can only be experienced, that can’t be described in a dry bullet list of features.

More broadly, Christensen came under attack by Jill Lepore, the New Yorker staff writer who, like Christensen, is a Harvard academic. In a piece titled The Disruption Machine, What the gospel of innovation gets wrong, Lepore asserts her credentials as a techie and then proceeds to point out numerous examples where Christensen’s vaunted storytelling is at odds with facts [emphasis and edits mine]:

“In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, “more than all of its U.S. competitors combined,” according to an industry report. In 1997, the year Christensen published ‘The Innovator’s Dilemma,’ Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars. Last year, Seagate shipped its two-billionth disk drive. Most of the entrant firms celebrated by Christensen as triumphant disrupters, on the other hand, no longer exist.

Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the ‘Herculean managerial effort’ of its C.E.O., Stuart Mahon. But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed. 

MiniScribe, founded in 1980, started out selling 5.25-inch drives and saw quick success. ‘That was MiniScribe’s hour of glory,’ the company’s founder later said. ‘We had our hour of infamy shortly after that.’ In 1989, MiniScribe was investigated for fraud and soon collapsed; a report charged that the company’s practices included fabricated financial reports and ‘shipping bricks and scrap parts disguised as disk drives.’”

Echoes of the companies that Tom Peters celebrated when he went searching for excellence.

Christensen is admired for his towering intellect and also for his courage facing health challenges — one of my children has witnessed both and can vouch for the scholar’s inspiring presence. Unfortunately, his reaction to Lepore’s criticism was less admirable. In a BusinessWeek interview, Christensen sounds miffed and entitled:

“I hope you can understand why I am mad that a woman of her stature could perform such a criminal act of dishonesty—at Harvard, of all places.”

At Harvard, of all places. Hmmm…

In another attempt to disprove Jill Lepore’s disproof, a San Francisco-based investment banker wrote a scholarly rearrangement of Disruption epicycles. In his TechCrunch post, the gentleman glows with confidence in his use of the theory to predict venture investment successes and failures:

“Adding all survival and failure predictions together, the total gross accuracy was 84 percent.”


“In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch.”

Why the venture industry hasn’t embraced the model, and why the individual hasn’t become richer than Warren Buffett as a result of the unflinching accuracy, remains a story to be told.

Back to the Disruption sage: he didn’t help his case when, as soon as the iPhone came out, he predicted that Apple’s new device was vulnerable to disruption:

“The iPhone is a sustaining technology relative to Nokia. In other words, Apple is leaping ahead on the sustaining curve [by building a better phone]. But the prediction of the theory would be that Apple won’t succeed with the iPhone. They’ve launched an innovation that the existing players in the industry are heavily motivated to beat: It’s not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited.”

Not truly disruptive? Five years later, in 2012, Christensen had an opportunity to let “disruptive facts” enter his thinking. But no, he stuck to his contention that Modularity always defeats integration:

“I worry that modularity will do its work on Apple.”

In 2013, Ben Thompson, in his already quoted piece, called Christensen out for sticking to his theory:

“[…] the theory of low-end disruption is fundamentally flawed. And Christensen is going to go 0 for 3.”

Perhaps, like our poolside guru, Christensen believes he’s always right…but, on rare occasions, he’s simply wrong on the timing.

Apple will, of course, eventually meet its maker, whether through some far-off, prolonged mediocrity or by a swift, regrettable decision. But such predictions are useless; they’re storytelling – and a bad, facile kind at that. What would be really interesting and courageous would be a detailed scenario of Apple’s failure, complete with a calendar of the main steps toward the preordained ending. No more Wrong on the Timing excuses.

A more interesting turn for a man of Christensen’s intellect and reach inside academia would be to become his own Devil’s Advocate. Good lawyers pride themselves on researching their cases so well they could plead either side. Perhaps Clayton Christensen could explain, with his usual authority, how the iPhone defines a new theory of innovation. Or why the Macintosh has prospered and ended up disrupting the PC business by sucking up half of the segment’s profits. He could then draw comparisons to other premium goods that are happily chosen by consumers, from cars to clothes and…watches.


Tim Cook Free At Last

Uncategorized By November 2, 2014 10 Comments


by Jean-Louis Gassée

Trading one’s privacy for the benefit of others isn’t an easy decision. Tim Cook just made such a swap, and the reverberations are beginning to be heard.

I’m happy and relieved that Tim Cook decided to “come out”, to renounce his cherished privacy and speak of his sexual orientation in plain terms rather than veiled, contorted misdirections. The unsaid is toxic.

If you haven’t done so already, please take the time to read Tim’s I’m Proud to Be Gay Businessweek editorial. Soberly written and discreetly moving, the piece concludes with:

“…I’m doing my part, however small, to help others. We pave the sunlit path toward justice together, brick by brick. This is my brick.”

It’s an admirable cause…but why should I care? Why does this 70-year-old French-born American, a happily married-up father of three adult and inexplicably civilized children, care that Cook’s sexuality is now part of the public record?


First, I like and respect Cook for what he does, how he does it, and the way he handles his critics. For the past three years he’s been bombarded by questions about Apple’s slowing growth and the absent Next Big Thing, he’s been criticized for both hastening and impeding the inevitable commoditization of All Things Apple, he’s been called a liar by the NYT. Above all, he’s had to suffer the hidden — and occasionally blatant — accusation: You’re no Steve Jobs.

Throughout it all, Cook has displayed a preternatural calm in refusing to take the bait. In a previous Monday Note, I attributed his ability to deflect the cruel jibes to his having grown up “different” in Alabama. In his editorial, Cook confirms as much:

“It’s been tough and uncomfortable at times… [but] it’s also given me the skin of a rhinoceros, which comes in handy when you’re the CEO of Apple.”

Second, I’ve seen the ravages of homophobia at close range. A salient and personal example is the young gay architect of our first Palo Alto house. He quickly sensed he could be open with us, and would tease my wife Brigitte by showing her pictures of a glorious group of young bucks on vacation in Greece, adding, “What a loss for females”. But he also told us of his shame when he became aware of his desires in his adolescence, that he kneeled down every night to pray that his god would have mercy and make him “normal”. His parents rejected him and refused to keep in touch, even after the HIV virus made him perilously sick.

One morning when we were driving to his place in San Francisco to deliver a painting Brigitte had made for him, his partner called and told us not to come. Our friend had just passed away, still unaccepted by his parents.

Another personal example. A local therapist, a gay Buddhist, told me he couldn’t work as an M.D. in his native Caracas because the oppressive culture wouldn’t allow a gay man to so much as touch another man — even as a doctor. When he decided to tell his parents he was gay, he had to take them to a California mountain and mellow them with a certain herb before they would hear him out, and even then they didn’t entirely embrace his “choice” of sexuality.

Years of conversation with the fellow — who’s exactly my age — in a setting that facilitates honesty have brought empathy and insights that aren’t prevalent or even encouraged in the Parisian culture I come from, even in the supposedly liberated Left Bank that has been the home of lionized gay men such as Yves Saint-Laurent and Karl Lagerfeld. (I recommend Alicia Drake’s The Beautiful Fall: Lagerfeld, Saint Laurent, and Glorious Excess in 1970s Paris, a well-documented and beautifully written parallel life history.)

This leads me to my third point, brought up by my wife. Gays have always been accepted in creative milieus. In many fields — fashion, certainly, but even in high tech — it’s almost expected that a “designer” is homosexual. Despite counterexamples such as Christian Lacroix, or our own Sir Jony, the stereotype endures.

According to the stereotype, it’s okay for “artistes” (I’ve learned the proper dismissive pronunciation, an elongated ‘eee’ after the first ’t’) to be unconventional, but serious business people must be straight. When I landed in Cupertino in 1985, I became acquainted with the creative <=> gay knee jerk. True-blue business people who didn’t like Apple took to calling us “fags” because of our “creative excesses” and disregard of the establishment.

What Brigitte likes most about Cook’s coming out is that it portends a liberation of the Creative Ghetto. Cook isn’t just outing himself as a gay executive; he’s declaring that being gay — or “creatively excessive”, or unconventional — is fully appropriate at the very top of American business. It helps, she concludes, that Apple’s CEO has made his statement from a position of strength, at a time when the company’s fortunes have reached a new peak and his leadership is more fully recognized than ever.

The ripples now start. Perhaps they’ll bring retroactive comfort to many execs such as former BP CEO John Browne who, in 2007, left his job in fear of a revelation about his lifestyle – and an affirmation to myriads of “different” people at the bottom of the pyramid.

Tim Cook brings hope of a more accepting world – both inside and outside of business. For this he must be happy, and so am I.

And, while I’m at it, Happy Birthday.


Science Fiction: Apple Makes A Toaster Fridge…

Uncategorized By October 27, 2014 12 Comments


…a supremely elegant one, naturally.

Plummeting iPad sales rekindle fantasies of a hybrid device, a version that adopts PC attributes, something like a better execution of the Microsoft Surface Pro concept. Or not.

For a company that has gained a well-deserved reputation for its genre-shifting — even genre-creating — devices, it might seem odd that these devices evolve relatively slowly, almost reluctantly, after they’ve been introduced.

It took five years for the iPhone to grow from its original 3.5” screen in 2007, to a doubled 326 ppi on the same screen size for the June 2010 iPhone 4, to a 4” screen for the 2012 iPhone 5.

In the meantime, Samsung’s 5.3” Galaxy Note, released in 2011, was quickly followed by a 5.5” phablet version. Not to be outdone, Sony’s 2013 Xperia Z Ultra reached 6.4” (160 mm). And nothing could match the growth spurt of the long-forgotten (and discontinued) Dell Streak: from 5” in 2010 to 7” a year later.

Moreover, Apple’s leadership has a reputation — again, well-deserved — of being dismissive of the notion that its inspired creations need to evolve. While dealing with the iPhone 4 antenna fracas at a specially convened press event in 2010, a feisty Steve Jobs took the opportunity to ridicule Apple’s Brobdingnagian smartphone rivals, calling them “Hummers” and predicting that no one would buy a phone so big “you can’t get your hand around it”.

A smaller iPad? Nah, you’d have to shave your fingertips. Quoting the Grand Master in October 2010 [emphasis mine]:

“While one could increase the resolution to make up some of the difference, it is meaningless unless your tablet also includes sandpaper, so that the user can sand down their fingers to around one-quarter of their present size. Apple has done expensive user testing on touch interfaces over many years, and we really understand this stuff.

There are clear limits of how close you can place physical elements on a touch screen, before users cannot reliably tap, flick or pinch them. This is one of the key reasons we think the 10-inch screen size is the minimum size required to create great tablet apps.”

For his part, Tim Cook has repeatedly used the “toaster-fridge” metaphor to dismiss the idea that the iPad needs a keyboard… and to diss hybrid tablet-PC devices such as Microsoft’s Surface Pro, starting with an April 2012 Earnings Call [emphasis and stitching mine]:

“You can converge a toaster and a refrigerator, but those aren’t going to be pleasing to the user. […] We are not going to that party, but others might from a defensive point of view.”

Recently, however, Apple management has adopted a more nuanced position. In a May 2013 AllThingsD interview, Tim Cook cautiously danced around the iPhone screen size topic — although he didn’t waste the opportunity to throw a barb at Samsung [insert and emphasis mine]:

“We haven’t [done a bigger screen] so far, that doesn’t shut off the future. It takes a lot of really detailed work to do a phone right when you do the hardware, the software and services around it. We’ve chosen to put our energy in getting those right and have made the choices in order to do that and we haven’t become defocused working multiple lines.”

Sixteen months later, Apple’s Fall 2014 smartphone line-up sports three screen sizes: the 4” iPhone 5C and 5S, the new 4.7” iPhone 6, and the 5.5” iPhone 6 Plus phablet.

Is this apostasy? Fecklessness?

Remarking on Jobs’ quotable but not-always-lasting pronouncements, Cook gives us this:

“[Jobs] would flip on something so fast that you would forget that he was the one taking the 180 degree polar [opposite] position the day before. I saw it daily. This is a gift, because things do change, and it takes courage to change. It takes courage to say, ‘I was wrong.’ I think he had that.”

That brings us to the future of the iPad. In the same interview (in 2012) Cook expressed high hopes for Apple’s tablet:

“The tablet market is going to be huge… As the ecosystem gets better and better and we continue to double down on making great products, I think the limit here is nowhere in sight.”

Less than two years after the sky-is-the-limit pronouncement, iPad unit sales started to head south and have now fallen for three quarters in a row (-2.3%, -9%, and -13% for the latest period). This isn’t to say that the iPad is losing ground to its competitors, unless you include $50 models. Microsoft just claimed $903M in Surface Pro revenue for the quarter ended last September, which, at $1K per hybrid, would be 0.9M units, or double that number if the company only sold its $499 year-old model. For reference, 12.3M iPads were sold in the same period (I don’t know any company, other than Apple, that discloses its tablet unit volume).
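The units-from-revenue estimate above can be sketched as a back-of-the-envelope calculation. The $903M revenue figure comes from Microsoft’s reported quarter; the per-unit prices are the column’s assumptions, not disclosed Microsoft numbers:

```python
# Rough Surface unit estimate from reported revenue.
# Revenue is from the text; average selling prices are assumptions.
revenue = 903e6  # Surface revenue for the September quarter, in dollars

scenarios = {
    "all $1K Surface Pro hybrids": 1000,
    "all $499 year-old models": 499,
}

for label, avg_price in scenarios.items():
    units_millions = revenue / avg_price / 1e6
    print(f"{label}: ~{units_millions:.1f}M units")
```

Either way, the implied volume is a small fraction of the 12.3M iPads sold in the same period.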

As Andreessen Horowitz’s Benedict Evans felicitously tweets it: “There’re 2 tablet markets: next-gen computing vision, where Apple has 80%, and, bigger but quite separate, the cheap TV/casual games device.”

Still, the concern remains. Does the iPad own 80% of a shrinking market, or can the Cupertino team reboot sales and fulfill Tim Cook’s The Limit Is Nowhere In Sight promise?

What’s missing?

A hint might lie in plain sight at the coffee shop next door. We see laptops, a Kindle reader or two, and iPads – many with an attached keyboard. Toaster-fridges!

But here’s Craig Federighi, Apple’s Sr. VP of Software Engineering, who is fond of dismissing talk of touch-screen Macs:

“We don’t think it’s the right interface, honestly.”

I find Federighi’s remark a bit facile. Yes, touching the screen makes much more ergonomic sense for a tablet than for a laptop, but in view of the turnabouts discussed above, I don’t quite know what to make of the honestly part.

Federighi may be entombed in the OS X and iOS software caves, but can he honestly ignore the beautiful Apple Wireless Keyboard proposed as an iPad accessory, or the many Logitech, Incase, and Belkin keyboards offered in the company’s on-line store? (Amazon ranks such keyboards between #20 and #30 in their bestsellers lists.) Is he suborning others to commit the crime of toaster-fridging?

In any case, the iPad + keyboard combo is an incomplete solution. It’s not that the device suffers from a lack of apps. Despite its poor curation, the App Store’s 675,000 iPad apps offer productivity, entertainment, education, graphic composition and editing, music creation, story-telling, and many other tools. As Father Horace (Dediu) likes to put it, the iPad can be “hired to do interesting jobs”.

No, what’s missing is that the iOS user interface building blocks are not keyboard-friendly. And when you start to list what needs to be done, such as adding a cursor, the iPad hybrid looks more and more like a Mac…but a Mac with smaller margins. The 128GB iPad plus an Apple Keyboard rings up at $131 less than an 11”, 128GB MacBook Air. (As an added benefit, perhaps the Apple toaster-fridge would come bundled with Gene Munster’s repeatedly predicted TV Set.)

On to better science fiction.

Let’s imagine what might happen next quarter when Intel finally ships the long-promised Broadwell processors. The new chips’ primary selling point is reduced power consumption. The Broadwell probably won’t dislodge ARM SoCs from smartphones, but a reduced appetite for electricity could enable a smaller, slimmer, lighter MacBook Air 2, with or without a double (linear) density Retina display.

Now consider last quarter’s iPad and Mac numbers, compared to the previous year:


Mac units grew 25% year-on-year, while iPads experienced a 7% decrease.

You’re in Apple’s driver seat: Do you try to make the iPad feel more like a Mac despite the risks on many levels (internal engineering, app developers, UI issues), or do you let nature take its course and let the segment of more demanding users gravitate to the Mac, cannibalizing iPad sales as a result? Put another way, are you willing to risk the satisfaction of users who enjoy “pure tablet” simplicity in order to win over customers who will naturally choose a nimbler Mac?

PS: John Kirk just published a column titled The Apple Mac Takes Its Place In The Post-PC World where he digs up a prophetic Gates quote and explains the rise of the Mac as the weapon of choice for power users.


The two things that could hurt Google 

Uncategorized By October 26, 2014 7 Comments


Google’s recent Search Box feature is but one example of the internet giant’s propensity to use weird ideas to inflict damage upon itself. This sheds light on two serious dangers for Google: Its growing disconnection from the real world and its communication shortcomings. 

At first, the improved Google search box discreetly introduced on September 5 sounded like a terrific idea: you enter the name of a retailer — say Target or Amazon — and, within Google’s search results page, a second, dedicated search box appears in which you can search the retailer’s inventory. Weirdly enough, this new feature wasn’t announced in a press release, just in a casual Google Webmaster Central Blog post aimed at the tech in-crowd.

Evidently, it was also supposed to be a serious commercial enhancer for the search engine. Here is what it looked like as recently as yesterday:



Google wins on both ends: it keeps users on its own site (a good way to bypass the Amazon gravity well) while, in passing, cashing in on ad modules purchased, in this case, both by Amazon itself bidding for the keyword “perceuse” (drill), and by Amazon’s competitors offering the same appliance (whose bids were lower).

In all fairness, the Google Webmaster Blog explains how to bypass the second step and make a search land directly on the retailer’s site. Many US e-commerce sites did so. Why Amazon didn’t is still unclear.

Needless to say, this new feature triggered outrage from many e-commerce sites, especially in Europe. (I captured these screenshots because no ads showed up for US retailers, most likely because I’m browsing from Paris.)

For Google’s opponents, it was welcome ammunition. Immediately, the Open Internet Project summoned a press conference (last Thursday, Oct. 23), inviting journalists seen as supportive of their cause. In a previous Monday Note (see Google and the European media: Back to the Ice Age), I told the story of this advocacy group, mostly controlled by the German publishing giant Axel Springer AG and the French media group Lagardère Active. The latter’s CEO, Denis Olivennes, is well-known for his deft political maneuvers, much less so for his business acumen, as he missed scores of digital trains in his long career in retail (he headed French retailer Fnac) and in the media business.

Realizing its mistake, Google quickly pulled back, removing the search box on several retailers’ sites, and announcing (though unofficially) that it was working on an opt-out system.

This incident is the perfect illustration of two major Google liabilities.

One: Google’s disconnect from the outside world keeps growing. More than ever, it looks like an insulated community, nurturing its own vision of the digital world, with less and less concern for its users who also happen to be its customers. It looks like Google lives in its own space-time (which is not completely a figure of speech since the company maintains its own set of atomic clocks to synchronize its data centers across the world independently from official time sources).

You can actually feel it when hanging around its vast campus, where large luxury buses coming from San Francisco pour out scores of young people, mostly male (70%), mostly white (61%), produced by the same set of top universities (in that order: Stanford, UC Berkeley, Carnegie Mellon, MIT, UCLA…). They are pampered in the best possible way, with free food, on-site dental care, etc. They see the world through the mirrored glass of their office, their computer screen, and the reams of data that constitute their daily reality.

Google is a brainy but also messy company where the left hemisphere ignores what the other one does. Since the right one (the engineers) is particularly creative and productive, the left brain suffers a lot. In this recent case, a group of techies working in the huge search division (several thousand people) came up with this idea of an improved search box. Higher up, near the top, someone green-lighted the idea, which went live in early September. Many people from the left hemisphere — communication, legal, public affairs — might have been kept in the dark, not even willfully by the engineering team, but simply by natural cockiness (or naiveté). However, I also suspect the business side of the company was in the loop (“Google” and “candor” make a solid oxymoron).

Two: Google has a chronic communication problem. The digital ecosystem is known for quickly testing and learning (as opposed to legacy media that are more into staying and sinking). In practical terms, they fire first and reflect afterwards. And sometimes retract. In the search box incident, the right attitude would have been to put up a communiqué saying basically, “Our genuine priority was to improve the user experience [the mandatory BS], but we found out that many e-retailers strongly disliked this new feature. As a result, we took the following steps, blablabla.” Instead, Google did nothing of the sort, only getting its engineering staff to quietly remove the offending search box.

There is a pattern to Google’s inability to properly communicate. You almost discover by accident that these people are doing stunning things in many fields. When the company is questioned, it almost never responds by providing solid data to make its point — which is simply unbelievable from a company so obsessed with its reliance on hard facts. Recall Google’s internal adoption of W. Edwards Deming’s motto: In God we trust, all others bring data.

In parallel, the company practices access journalism, picking the writer of its choosing and giving him or her a heads-up on a specific subject, hoping for a good story. Here are two examples from Wired and The Atlantic.



These long-read “exclusive” and timely features were reported on location from New Zealand and Australia, respectively. They are actually great and balanced pieces, since both Wired’s Steven Levy and The Atlantic’s Alexis Madrigal are fine journalists.

While it never misses an opportunity to mention its vulnerability, Google is better than anyone else at nurturing it. As Mikhail Gorbachev used to say about the crumbling USSR: “The steering is not connected to the wheels.” We all know what happened.