About Jean-Louis Gassée

Posts by Jean-Louis Gassée:

Apple Software Quality Questions

 

by Jean-Louis Gassée

A flurry of recent software accidents in iOS and OS X raises questions about Apple’s management of its relentless increase in R&D spending.

For the past six months or so, I’ve become increasingly concerned about the quality of Apple software. From the painful gestation of OS X 10.10 (Yosemite) with its damaged iWork apps, to the chaotic iOS 8 launch, iCloud glitches, and the trouble with Continuity, I’ve gotten a bad feeling about Apple’s software quality management. “It Just Works”, the company’s pleasant-sounding motto, became an easy target, giving rise to jibes of “it just needs more work”.

I felt this was an appropriate Monday Note topic but kept procrastinating. Then came the holiday break, including time on a boat with worse than no Internet – meaning frustratingly unpredictable and slow when it was on.

Coming back to the Valley, I read Marco Arment’s January 4th, 2015 post titled Apple has lost the functional high ground:

“We don’t need major OS releases every year. We don’t need each OS release to have a huge list of new features. We need our computers, phones, and tablets to work well first so we can enjoy new features released at a healthy, gradual, sustainable pace.

I fear that Apple’s leadership doesn’t realize quite how badly and deeply their software flaws have damaged their reputation, because if they realized it, they’d make serious changes that don’t appear to be happening. Instead, the opposite appears to be happening: the pace of rapid updates on multiple product lines seems to be expanding and accelerating.”

(Unfortunately, this well-meaning, reasoned critique from a respected Apple developer became fodder for the usual click-baiters, leading Arment to regret that he wrote it. This is sad.)

Arment isn’t the only one lamenting Apple’s software quality. See Glenn Fleishman’s well-documented list of nontrivial issues, or Michael Tsai’s compilation of comments from developers and engineers, such as this one from Geoff Wozniak (no relation to Woz):

“At this point, my default position on Apple software in OS X has moved from ‘probably good’ to ‘probably not OK’. They seem more interested in pumping out quantity by way of more upgrades. It’s death by a thousand cuts, but it’s death nonetheless.”

I’m late to this discussion but I’d like to add a few detailed observations of my own, examples of questionable design decisions, poor implementation, and other “broken windows”. Boredom may ensue.

We’ll start with Apple’s Pages word processor. When it was introduced ten years ago, I found it mostly pleasant and easy for my limited use, and it progressively improved over a succession of releases, with welcome features such as Google Search, Wikipedia, and Dictionary/Thesaurus integration.

Curiously, however, Pages did some things differently. Hyperlink creation, for example, was inconsistent with Apple Mail, TextEdit, and Microsoft Word conventions. With these “older” products, you select some text, press cmd-K, paste the URL of the desired destination, and you’re good to go:

[Screenshot: the standard ⌘-K hyperlink dialog]

In the new Pages, no cmd-K joy. You have to bring up the Inspector, paste the link in the URL field, and press Enter.

[Screenshot: the Pages Inspector link field]

It’s not overly complicated, but why abandon the simple ⌘-K convention used elsewhere on the Mac?

With each Pages update I hoped for a return to the ancient ways, and when Pages 5 came out in late 2013, I thought my prayers had been answered. I select some text, type ⌘-K, and up pops the link editor:

[Screenshot: the Pages 5 link editor]

I paste the target URL into the Link field, press Enter, and I’m done, right? I’ve just created a link to a MacWorld story.

But, no. If I go back to the link I just entered, I see this:

[Screenshot: the link reverted to www.apple.com]

The MacWorld URL I entered is gone, replaced by the “helpful” default, www.apple.com. I also try clicking on Go to Page; indeed, it takes me to www.apple.com.

This can’t be right… I click Edit and go through the process again; this time, the intended link sticks. Out of fear of having stumbled on an unreproducible phantom quirk, I carefully step through the procedure several times from different angles.

If I tiptoe to the File menu and click Save after I’ve pasted the URL but without pressing Enter, the intended link stays; it’s not replaced by www.apple.com:

[Screenshot: the intended link preserved after Save]

However, this only works if I Tab into the Link field and paste my URL. If I double-click on the pre-filled www.apple.com, paste the URL, and Save from the File menu, the link is gone. (Again, I carefully reproduced the procedure.)

This is madness.

But it doesn’t stop there.

Befuddled users found they couldn’t send Pages 5 files through Gmail. It’s now fixed, as the What’s New in Pages screen proudly claims…

[Screenshot: the What’s New in Pages screen]

…but how could such an obvious, non-esoteric bug escape Apple’s attention in the first place?

Then we have “deprecated” features. Gone are the convenient Writing Tools:

[Screenshot: the Writing Tools menu in the older Pages]

Search with Google is still there, but it’s harder to find; a Look-Up function bundles the Dictionary and Wikipedia but, believe it or not, there’s no Thesaurus. I also liked the Search function in Pages 4.3:

[Screenshot: the Search function in Pages 4.3]

It’s gone in Pages 5. Admittedly, this might not be a big deal for most users, but it allowed me to have kremlinology fun with executive abuse of words such as “incredible” and other platitudinous phrases.

We know the official excuse for removing features: iOS compatibility. It’s a noble goal on paper, and it sounds good on stage and in Keynote slides, but iWork on iOS is far from a godsend. Creating even a moderately complex document on an iPad is an unpleasant, frustrating experience.

Even if we concede that iOS compatibility may mean some amount of “dumbing down” (and we’ll note that the MacWorld review was careful to call Pages 5 a different product rather than a mere update), why didn’t Apple catch more of the obvious bugs? I’d like to have a quiet one-on-one with the Pages product manager to hear his/her explanations for the state of the product.

I can’t leave Pages without a stop at the iCloud version. (Apple, probably taking a page from Google’s old playbook, labels all three iWork products “beta”.) I tried writing a Monday Note article in iCloud. Impossible: no links. If I turn to the version of Microsoft Word on their OneDrive service… It Just Works:

[Screenshot: hyperlink editing in Word on OneDrive]

Imagine Microsoft running an ad campaign: I’m OneDrive, You’re iCloud…

To be complete, Microsoft’s Office Online isn’t without its own quirks. It loves me so much it refuses to sign me out:

[Screenshot: Office Online refusing to sign out]

We now turn to iTunes. Pages might not concern a majority of Mac users, but iTunes sure does, and it presents an even sorrier spectacle than Apple’s productivity apps.

A good product allows its users to build a mental model of what it does and how it does it. Paraphrasing Alan Kay, the user forms a what/how idea at the product’s door, then walks in and finds an Ali Baba cave full of pleasant surprises. How this applies to iTunes is left to the reader. iTunes is a mess, an accumulation of debris and additions without a discernible backbone. I won’t go as far as the Valley wag who calls iTunes Apple’s Vista, but iTunes reflects poorly on a company that takes pride in the fit and finish of its products.

For example, this is what I see when I open iTunes on my Mac:

[Screenshot: my iTunes library, with duplicated playlist entries]

If you squint, you’ll see the same Bach Orchestral Suites repeated six times, and Mozart’s Requiem four times. Entries in the Playlist are duplicated for no apparent reason. And let’s not even try to make and manage folders to group playlists by artist or other criteria. Nor can I make sense of the presentation of TV Show episodes. On my Apple TV, iTunes sometimes shows episodes in natural order, but then reverses them for no reason.

No need to continue the litany; the One Cockroach Theory tells us there are many more under the sink. Such as, I can’t resist, iMessage inconsistencies between devices.

Of course, making bug lists is easier than finding solutions, particularly if we want to avoid “all you have to do” bromides. So, we’ll proceed with caution and look at some numbers.

In 2012, Apple revenue grew by 45% to $156.5B and R&D went up by 39% to $3.4B.
In 2013, revenue grew 9% to $171B but R&D went up 32% to $4.5B.
In 2014, revenue went up 7% to $183B while R&D grew 35% to $6B.
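A quick back-of-the-envelope check, using only the figures above, shows R&D intensity (R&D as a share of revenue) climbing every year even as revenue growth decelerates:

```python
# R&D as a share of revenue, from the figures quoted above (in $B).
figures = {
    2012: {"revenue": 156.5, "rnd": 3.4},
    2013: {"revenue": 171.0, "rnd": 4.5},
    2014: {"revenue": 183.0, "rnd": 6.0},
}

for year, f in sorted(figures.items()):
    print(f"{year}: R&D = {100 * f['rnd'] / f['revenue']:.1f}% of revenue")

# 2012: R&D = 2.2% of revenue
# 2013: R&D = 2.6% of revenue
# 2014: R&D = 3.3% of revenue
```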

Such a relentless increase in R&D spending isn’t “free”: it means hiring lots of people and starting many projects, or, worse, piling more people onto existing ones. This results in management problems: less visibility over a larger number of teams and, vertically, more opaque layers and less ability to diagnose people problems.

Another consideration is priorities. The received wisdom is that Apple engineers hail from Lake Wobegon: They’re “all above average”. But in a fight for resources, where do you put your best soldiers, on iOS or OS X? On Pages or Mail?

Apple execs aren’t indifferent to the company’s software quality problems, and they’re not unaware of the management pitfalls in fixing them. Take Apple Mail: For several years (close to five by my memory of conversations with Bertrand Serlet, then Apple’s head of OS development), Apple Mail had been a painful, many-times-a-day irritant. It consumed so much computing power that the Activity Monitor on my MacBook Pro sometimes showed a CPU usage number as high as 257%, with fans spinning loudly and general mail operations getting mysteriously stuck. Messages would disappear from a mailbox and yet be found by Spotlight, the Mac’s internal search engine.

A recent OS X update seems to have fixed these problems. A better manager was put in charge, people decisions were finally made, and Apple Mail is now (almost) boringly normal, receiving, sending, deleting, and sorting junk without fuss.

Let’s just hope that the all-important iTunes development team gets the “cure” it deserves, and iWork after that.

Last, there is the mixed bag of comparisons. One side of the coin is that Apple’s numbers are splendid. The quarterly results that will be disclosed next week (January 27th) are likely to show strong iPhone 6 sales and a continuation of Mac progress. And despite my bug list, Apple software still compares favorably to Windows 8 and Android offerings.

The other view is that the quality lapses we observe are the beginning of a slide into satisfied mediocrity, into organizations and projects that “run themselves”, that are allowed to continue for political reasons without regard for the joy of customers.

I know what I hope for. I don’t expect perfection, I’ve lived inside several sausage factories and remember the smell. If Apple were to spend a year concentrating on solid fixes rather than releasing software that’s pushed out to fit a hardware schedule, that would show an ascent rather than a slide.

JLG@mondaynote.com

The iPhone’s 8th Anniversary

 

by Jean-Louis Gassée

Smartphones existed before Steve Jobs introduced the iPhone on January 9th, 2007. But by upending existing technology platforms, application distribution, and carrier business models, he kickstarted a new era of computing whose impact is yet to be fully understood.

I knew one of the victims of the Charlie Hebdo massacre: Bernard Maris. We weren’t friends, just pleasantly casual acquaintances through the in-law side of my family. Typical Parisian dinner conversations “rearranging the world” led to a Palo Alto visit and an interview for a small Charlie Hebdo piece, complete with the requisite risqué drawing.

[Photo: Bernard Maris]

After several false starts writing about the events in Paris, I’ve come to the conclusion that I’m too angry at too many targets, starting with certain cowards in the media who don’t understand that the fear of antagonizing oppressors perpetuates their power, that no good culture can exist without a dose of bad taste, that the demand to never be offended is inhumane. As Cardinal André Vingt-Trois, archbishop of Paris, puts it: “A caricature, even in bad taste, criticism, even extremely unfair, cannot be put on the same plane as murder.”

(Lovers of ironic detail will note that Cardinal Vingt-Trois was once the titular bishop of Thibilis, Algeria. In partibus infidelium.)

Instead, I will turn to a more positive train of thought: The beginning of the Smartphone 2.0 era.

Eight years ago, Steve Jobs walked onto the stage at MacWorld San Francisco and gave a masterful performance. His presentation is worth revisiting from time to time, a benchmark against which to evaluate a PowerPoint-addled CEO pitch or a product intro cum dance number.

In his talk, Jobs tells us that the iPhone is one of those products that, like the Mac and the iPod before it, “changes everything”. He was right, of course, but one wonders… even with his enormous ambition, did Jobs envision that the iPhone would not only transform Apple and an entire industry, but that it would affect the world well beyond the boundaries of the tech ecosystem?

If the last sentence sounds a bit grand, let’s look at the transformation of the smartphone industry, starting with Apple.

In 2006, the year before the iPhone, Apple revenue was $19B (for the Fiscal Year ending in September). That year, iPod revenue exceeded the Mac, $7.7B to $7.3B…but no one claimed that Apple had become an iPod company.

In 2007, revenue climbed to $24B, a nice 26% progression. Mac sales retook the lead ($10.3B vs. $8.3B for the iPod), and iPhone sales didn’t register ($123M) as shipments started late in the Fiscal Year and accounting’s treatment of revenue blurred the picture.

In 2008, revenue increased to $32.5B, up 35%. iPhone revenue began to weigh in at $1.8B, far behind $9B for the iPod and $14.3B for the Mac (a nice 39% uptick).

In 2009, revenue rose by a more modest 12%, to $36.5B — this was the financial crisis. iPod declined to $8B (- 11%) as its functionality was increasingly absorbed by the iPhone, and the Mac declined a bit to $13.8B (- 3%). But these shortfalls were more than compensated for by iPhone revenue of $6.8B (+ 266%), allowing the company to post a $4B increase for the year. This was just the beginning. (And even the beginning was bigger than originally thought: Due to a change in revenue recognition esoterica, 2009 iPhone revenue would be recalculated at $13.3B.)

In 2010, iPhone revenue shot up to $25B, pushing Apple’s overall revenue up by a phenomenal 52% to $65B. The iPhone now represented more than 1/3rd of total revenue.

In 2011, growth accelerates, revenue reaches $108B (+ 66%), more than five times the pre-iPhone 2006 number. iPhone reaches $47B (+ 87%), now almost half of the company’s total.

For 2012, sales shoot up to $156.5B (+ 45%), and the iPhone reaches $80.5B (+ 71%). At such massive absolute numbers, 45% and 71% growth look almost unnatural as they appear to violate the Law of Large Numbers. As this happens, the iPhone crosses the 50% of total revenue threshold, and accounts for probably 2/3rd of Apple’s total profit.

Apple’s growth slowed in 2013 to a modest + 9%, with $171B overall revenue. The iPhone, weighing in at $91.3B (+ 16%), provides most ($12.6B) of the modest ($14B) overall revenue increase and 53% of total sales.

Last year, growth slows just a bit more: $182.8B (+ 7%) with the iPhone reaching $102B (+12%). Once again, the iPhone contributes most of the total revenue growth ($10.7B of $11.9B) and fetches 56% of the company’s sales. Notably, the iPad shows a 5% decrease and, at $2.3B, the iPod is becoming less and less relevant. (Although, how many companies would kill for $2.3B in music player revenue?)
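A quick check of the last two fiscal years, using the rounded figures above (small rounding differences explain the $11.8B vs. $11.9B gap), confirms both the “more than half of sales” and the “most of the growth” claims:

```python
# Apple FY2013 vs. FY2014, rounded figures from the text (in $B).
total = {2013: 171.0, 2014: 182.8}
iphone = {2013: 91.3, 2014: 102.0}

for year in (2013, 2014):
    print(f"{year}: iPhone = {100 * iphone[year] / total[year]:.0f}% of revenue")

iphone_growth = iphone[2014] - iphone[2013]   # ~$10.7B
total_growth = total[2014] - total[2013]      # ~$11.8B with these rounded inputs
print(f"iPhone's share of revenue growth: {100 * iphone_growth / total_growth:.0f}%")

# 2013: iPhone = 53% of revenue
# 2014: iPhone = 56% of revenue
# iPhone's share of revenue growth: 91%
```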

The excellent Statista portal gives us a picture of the iPhone’s emergence as Apple’s key product:

[Chart: iPhone revenue as a share of Apple’s total, via Statista]

While the company is about ten times larger than it was before the iPhone came out, the smartphone industry has become a nearly trillion dollar business. Depending on how we count units and dollars, if we peg Apple at 12% market share, that means the worldwide number across the smartphone industry reaches $800B. If we grant Apple just a 10% share, we have our $1T number.
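The arithmetic behind those two estimates is plain division; a sketch, assuming roughly $100B a year in iPhone revenue:

```python
# Implied worldwide smartphone revenue from Apple's assumed share.
iphone_annual_revenue = 100e9  # roughly $100B a year

for share in (0.12, 0.10):
    market = iphone_annual_revenue / share
    print(f"At {share:.0%} share, the worldwide market ≈ ${market / 1e9:.0f}B")

# At 12% share, the worldwide market ≈ $833B
# At 10% share, the worldwide market ≈ $1000B
```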

For reference, still according to Statista, the two largest auto companies, Toyota and the Volkswagen Group, accounted for $485B in revenue in 2013:

[Chart: Toyota and Volkswagen Group 2013 revenue, via Statista]

However we calculate its size, whether we place it at $800B or $1T, what we mustn’t do is think that the smartphone industry merely grew to this number. Today’s smartphone business has little in common with what it was in 2006.

Consider that Motorola “invented” the cell phone. Now Motorola is (essentially) gone: Acquired by Google, pawned off to Lenovo, likely to do well in its new owner’s Chinese line.

Nokia: The Finnish company stole the crown from Motorola when cell phones became digital and once shipped more than 100M phones per quarter. Since then, Nokia was Osborned by its new CEO, Stephen Elop, an ex-Microsoft exec, and is now owned by Elop’s former employer. With 5% or less market share, Nokia is a waste of Microsoft resources and credibility… unless they switch to making Android phones as a vehicle for the company’s “Cloud-First, Mobile-First” apps.

Palm, a company that made a credible smartphone by building on their PDA expertise, was sold to HP and destroyed by it. They’re worse than dead, with a necrophiliac owner (TCL), and LG humping other parts of the corpse for their WebOS TVs and a WebOS smartwatch.

And then there’s the BlackBerry. Once the most capable of all the smartphones with a Personal Information Manager that was ahead of its time, it was rightly nicknamed CrackBerry by its devoted users. Now BlackBerry Limited is worth less than 1/100th of Apple, and is trying to find a niche – or a seeker of body parts.

The change in the industry is, of course, far from being solely Apple’s “fault”. In many ways, Google destroyed more incumbents than Apple. Google acquired Android in 2005, well before the iPhone appeared. According to the always assertive Tomi Ahonen, China now sports more than 2000 (!) phone brands, all based on some Android derivative. And let’s not forget the voraciousness of Apple’s giant Korean frenemy Samsung, which acts as both a supplier of key iPhone components and a competitor.

But is the industry now settled? Are any of the current incumbents, including Apple, unassailable? Market-leading Samsung appears to be challenged by both Apple at the high end and Xiaomi from below, and has recently warned of more troubles. Our friend Tomi argues that Xiaomi isn’t the new Apple but that Lenovo and Huawei are the ones to watch. And, of course, Apple is seen as a “hits” company, a business that lives and dies by its next box-office numbers — and while the numbers for the new iPhone 6 aren’t in yet, they’re likely to be very strong.

Regardless of any individual company’s business case, the overall impact of the smartphone on the world is what counts the most. In a blog post titled Tech’s Most Disruptive Impact Over the Next Five Years, Tim Bajarin argues that the real Next Big Thing isn’t the Internet of Things, Virtual Reality, or Bitcoin. These are all important advances, but nothing compared to the impact of smartphones [emphasis mine]:

“Another way to think of this is that smart phones or pocket computers connecting the next two billion people to the internet is similar to what the Gutenberg Press and the Bible were to the masses in the Middle Ages.”

As Horace Dediu notes, we’re on track to 75% US smartphone penetration by the end of 2014. The big impact to come will be getting the entire world to reach and exceed this degree of connectivity, especially in areas where there’s little or no wired connectivity.

This is what Steve Jobs started eight years ago by upending established players and carrier relationships.

JLG@mondaynote.com

MSFT Hardware Futures

 

(Strangely, the WordPress software gives me a “Bad Gateway 502” error message when I fully spell the name of the Redmond company)

by Jean-Louis Gassée

Microsoft’s hardware has long been a source of minor profit and major pain. In this last 2014 Monday Note, we’ll look at the roles Microsoft’s hardware devices will play — or not —  in the company’s future.

Excluding keyboards and the occasional Philippe Starck mouse, Microsoft makes three kinds of hardware: Game consoles, PC-tablet hybrids, and smartphones. We’ll start with the oldest and least problematic category: Game consoles.

Building on the success of DOS and its suite of business applications, Microsoft brought forth the MSX reference platform in 1983. This was a Bill Gates-directed strategic move: he didn’t want to leave the low end of the market “unguarded”. Marketed as “home computers”, which meant less capable than a “serious” PC, MSX-branded machines were manufactured by the likes of Sony and Yamaha, but the platform’s only serious impact was in gaming. As the Wikipedia article says, “MSX was the platform for which major Japanese game studios, such as Konami and Hudson Soft, produced video game titles.”

For the next two decades, gaming remained a hobby for Microsoft. This changed in 2001 when the company took the matter into its own hands and built the Xbox. Again, the company wanted to guard against “home invasions”.

With its Intel processor and customized version of Windows, the first iteration of the Xbox was little more than a repackaged PC. The 2005 Xbox 360 was a heartier offering: It featured an IBM-designed PowerPC derivative processor and what some call a “second-order derivative” of Windows 2000 ported to the new CPU.

Now we have the Xbox One. Launched in 2013, the platform is supported by a full-fledged ecosystem of apps, media store, and controllers such as the remarkable Kinect motion sensor.

Success hasn’t been easy. The first Xbox sold in modest numbers, 24 million units in about five years. Sales of the second generation Xbox 360 were better — almost 80 million through 2013 — but it was plagued with hardware problems, colloquially known as the Red Ring of Death. Estimates of the number of consoles that were afflicted range from 23% to more than 54%. Predictably, poor reliability translated into heavy financial losses, as much as $2B annually. Today’s Xbox One fares a little better: It lost only $800M for the first eight months of its life, selling 11.7M units in the process.

Microsoft’s latest numbers bundle Xbox game consoles and Surface tablet-PCs into a single Computing & Gaming category that makes up $9.7B of the company’s $87B in revenue for the 2014 Fiscal Year. This means Xbox consoles contribute less than 10% of total sales, which is probably why Satya Nadella, Microsoft’s new CEO, has carefully positioned the Xbox business as less than central to the company’s business:

“I want us to be comfortable to be proud of Xbox, to give it the air cover of Microsoft, but at the same time not confuse it with our core.”

In other words, the Xbox business can continue… or it could disappear. Either way, it won’t have much effect on Microsoft’s bottom line or its future.
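The “less than 10%” figure above takes a small amount of inference, since Microsoft doesn’t break the Xbox out of the bundled segment; a rough sketch, using the segment number and the roughly $2B of Surface revenue discussed further down:

```python
# Rough split of Microsoft's FY2014 Computing & Gaming segment (in $B).
# The ~$2B Surface figure is the estimate cited later in this piece;
# Microsoft doesn't report Xbox revenue separately, so this is approximate.
total_revenue = 87.0
computing_and_gaming = 9.7
surface_estimate = 2.0

print(f"Computing & Gaming: {100 * computing_and_gaming / total_revenue:.1f}% of revenue")
print(f"Xbox alone (approx.): {100 * (computing_and_gaming - surface_estimate) / total_revenue:.1f}%")

# Computing & Gaming: 11.1% of revenue
# Xbox alone (approx.): 8.9%
```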

For the moment, and with the assistance of a holiday price cut, Xbox One sales are topping those of the Sony PS4, but that shouldn’t take our attention away from a more important trend: The rise of mobile gaming. Smartphones are gaining in raw computing power, connectivity, display resolution, and, as a result, support from game developers on both Android and iOS platforms. Larger, more capable game consoles aren’t going away, but their growth is likely to slow down.

The history of Xbox problems, Nadella’s lukewarm embrace of the series, the ascendancy of mobile gaming… by comparison, the Surface tablet should look pretty good.

It doesn’t.

When Steve Ballmer introduced the Surface device in June, 2012, he justified Microsoft’s decision to compete with its own Windows licensees by the need to create a “design point”, a reference for a new type of device that would complement the “re-imagined” Windows 8.

[Image: Surface commercial]

Two and a half years later, we know two things: Surface tablet sales have been modest (about $2B in the 2014 Fiscal Year ended June 30th), and Windows 8 frustrated so many users that Microsoft decided to re-re-imagine it and will re-introduce it as Windows 10, scheduled to be released in mid-2015.

Microsoft believes its Surface combines the best of the PC with the best of a tablet. While the hybrid form has given rise to some interesting explorations by PC makers, such as the Yoga 3 Pro by Lenovo, many critics — and not just Apple — condemn the hybrid as a compromise, as a neither-nor device that sub-optimizes both its tablet and its PC functions (see the tepid welcome given to the HP Envy).

What would happen if Microsoft stopped making Surface Pro tablets? Not much… perhaps a modest improvement in the company’s profit picture. While the latest quarter of Surface Pro 3 sales appears to have brought a small positive gross margin, Surface devices have cost Microsoft about $1.7B over the past two years. Mission accomplished for the “design point”.

We now turn to smartphones.

Under the Ballmer regime, Microsoft acquired Nokia rather than let its one and only real Windows Phone licensee collapse. It was a strategic move: Microsoft was desperate to achieve any sort of significance in the smartphone world after seeing its older Windows Mobile platform trounced by Google’s Android and Apple’s iOS.

In the latest reported quarter (ended September 30th 2014), Windows Phone hardware revenue was $2.6B. For perspective, iPhone revenue for the same period was $23.7B. Assuming that Apple enjoys about 12% of the world smartphone market, quarterly worldwide revenue for the sector works out to about $200B… of which Microsoft gets 1.3%. Perhaps worse, a recent study says that Microsoft’s share of the all-important China smartphone market is “almost non-existent at 0.4 percent”. (China now has more than twice as many smartphone users, 700M, as the US has people, 319M.)
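The 1.3% figure falls straight out of the two quarterly revenue numbers and the 12% share assumption:

```python
# Implied worldwide quarterly smartphone revenue and Microsoft's slice (in $B).
iphone_quarterly = 23.7          # Apple, quarter ended September 2014
windows_phone_quarterly = 2.6    # Microsoft, same quarter
apple_revenue_share = 0.12       # assumed worldwide share

worldwide = iphone_quarterly / apple_revenue_share
print(f"Worldwide smartphone revenue ≈ ${worldwide:.0f}B per quarter")
print(f"Microsoft's slice ≈ {100 * windows_phone_quarterly / worldwide:.1f}%")

# Worldwide smartphone revenue ≈ $198B per quarter
# Microsoft's slice ≈ 1.3%
```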

Hardware development costs are roughly independent of volume, as is running an OS development organization. But hardware production costs are unfavorably impacted by low volumes. Windows Phones sell less and they cost more to make, putting Microsoft’s smartphone business in a dangerous downward spiral. As Horace Dediu once remarked, the phone market doesn’t forgive failure. Once a phone maker falls into the red, it’s nearly impossible to climb back into the black.

What does all this mean for Microsoft?

Satya Nadella, the company’s new CEO, uses the phrase “Mobile First, Cloud First” to express his top-level strategy. It’s a clear and relevant clarion call for the entire organization, and Microsoft seems to do well in the Cloud. But how does the Windows Phone death spiral impact the Mobile First part?

In keeping with its stated strategy, the company came up with Office apps on iOS and Android, causing bewilderment and frustration among Windows Phone loyalists who feel they’ve been left behind. Versions of Office on the two leading mobile platforms ensure Microsoft’s presence on most smartphones, so why bother making Windows Phones?

Four and a half years ago, in a Monday Note titled Science Fiction: Nokia Goes Android, I fantasized that Nokia ought to drop its many versions of Symbian and adopt Android instead. Nokia insiders objected that embracing a “foreign OS” would cause them to lose control of their destiny. But that’s exactly what happened to them anyway when they jumped into bed with Stephen Elop and, a bit later, with Windows Phone. This started a process that severely damaged phone sales, ending with Microsoft’s acquisition of what was already a captive licensee.

Now the Android question rises again.

Should Microsoft pursue what looks like a manly but losing Windows Phone hardware strategy or switch to making and selling Android phones? Or should it drop an expensive smartphone design, manufacturing, and distribution effort altogether, and stay focused on what it does already, Mobile First, Cloud First applications?

The Intel Enigma

 

by Jean-Louis Gassée

Intel once turned down the opportunity to become the sole supplier of iPhone processors. Why haven’t they let go of their defocused search for the Next Big Thing and, instead, used All Means Necessary to regain the account?

Intel is a prosperous company. For the quarter ended last September, Intel scored $14.6B in Sales, 65% Gross Margin and $4.5B in Operating Income, a nice progression from the same period a year ago:

[Table: Intel quarterly results]

A 65% Gross Margin is enviable for any company, and exceptional for a hardware maker: Intel’s GM is up in software territory. By comparison, Apple’s Gross Margin – considered too comfortable by followers of the Church of Market Share – stands at 38.6% for the 2014 Fiscal Year ended last September.

But when we take a closer look at the numbers, the picture isn’t as rosy.  Nearly 90% of Intel’s revenue — $12.9B of the total $14.6B  — comes from two groups: PC and Data Center (servers, networking, storage). Intel’s presence in the mobile world? Nonexistent:

[Table: Intel Mobile and Communications Group results]

Essentially no revenue for Mobile and Communications, and a $1B loss. Looking at the past four quarters, Intel has lost about $4B in the pursuit of the mobile market (Daniel Eran Dilger says $7B in the past two years).

How did Intel handle the problem? By sweeping it under the rug. In November, Intel CEO Brian Krzanich announced that the company was merging Mobile into the PC group and would discontinue its $51 per Android tablet subsidy in 2015. This came just weeks after Krzanich had proclaimed Mission Accomplished in the tablet field:

“‘We’ve made good progress getting into tablets’ Krzanich told reporters ahead of the annual Intel Developer Forum in San Francisco. ‘We’ve gone from nothing to something where I consider us a real tablet manufacturer.’”

The company’s inability to break into the mobile field — into any field other than PCs and servers — isn’t new, and it has worried Intel for decades. Company execs and strategists aren’t happy being the hardware half of Wintel, yoked to Microsoft’s fortunes. They like the money, but they want a “second source” for their profits, something other than the x-86 market, so they’ve embarked on a never-ending quest for the next stage in the Intel rocket.

(Of course, the company isn’t blind to the benefits of the Wintel alliance: Given two processors of equal merit, the one running Windows fetches the higher price, hence the ferocious tactics that have landed the company in court on several occasions.)

In its search for the Next Big Thing, Intel has tried alternatives to the x-86 architecture and come up with failures such as the iAPX 432 and the Itanium high-end server processor. The latter, a puzzling bet on an HP-co-designed successor to PA-RISC, was quickly dubbed Itanic by tech wags as results failed to match lofty launch projections.

Intel has tried server farms, modems, networking equipment and, I kid you not, toy microscopes, but they somehow never got around to mobile. In the pre-iPhone days of the mobile world, the dominant players — Nokia, Motorola, Palm, Blackberry — all used processors based on the ARM architecture, processors that were too small and inexpensive to interest Intel. No money there, they cost 1/10th or less of a PC processor.

Steve Jobs offered Intel a chance to get into the mobile game: He asked the company to bid on an ARM-derivative for the iPhone. As Paul Otellini, Intel’s CEO at the time, wistfully and gallantly recounted, he gave the opportunity a pass, thinking the numbers (price and quantity) were too low. (An ex-Intel acquaintance told me that the business people felt they should go after Nokia, instead, because of its huge volume at the time.)

In 2006, after missing the iPhone, Intel sold its ARM processor business to Marvell.

When iPhones and Android-based smartphones took off, Intel insisted they weren’t concerned, that they would triumph in the end: We will win because our unapproachable manufacturing technology will produce x-86 processors that are superior in every way to ARM-based competitors.

We’ve heard this line every year since. The latest version is summarized in this slide from a November Investor Meeting:

[Slide: Intel’s claimed process technology lead]

What Intel contends here is that they always have a three-year lead over their competition — it’s just a given. What company execs fail to explain is why smartphone manufacturers have failed to see the light, and why Android tablet makers had to be bribed.

Now it seems that Intel has discovered the Internet of Things… and Wearables, of course. If you have the patience, flip through this 66-slide presentation that tells us that IoT will be huge because the objects around us will all become intelligent (a story we’ve already heard from companies such as Cisco — which is also looking for its Next Big Thing).

[Slide: Intel’s “IoT everywhere” pitch]

Naturally, wearables are in there:

[Slide: Intel’s wearables pitch]

This is painful. The whole presentation is an Everything And The Kitchen Sink assemblage of unoriginal ideas. There’s no focus in Intel’s Theory of Everything, no way to see when, where, and how the company will actually rise above the IoT noise.

As for wearables — now fashionable in more ways than one — Intel touts its new MICA bracelet:

[Image: the Intel MICA bracelet]

You can “pre-order” yours at Opening Ceremony and have it delivered in time for Christmas.

Let’s not forget Intel’s partnership with Google for the next-gen Google Glass, nor the company’s acquisition of Basis, a maker of fitness wearables.

Certainly, the more “initiatives” Intel throws at the wall the higher the chances that one of them will stick. But from the outside, it feels like Intel is being driven by courtiers and PowerPoint makers, that senior management really doesn’t know what to do – and what not to do. (Krzanich says he green-lighted the MICA project because his wife approved of it “after using it for several days”.)

Of all the things Intel should and shouldn’t have done, the Apple element figures mightily. Since Intel offered a whopping $51 Android tablet subsidy, a charity that landed its mobile activities $7B in the red over two years, why didn’t the company offer Apple a $10 or $20 subsidy per processor as a way to get the manufacturing relationship restarted? ‘We’ll beat Samsung’s prices, we’ll be your second source.’ If Intel’s 14nm process is so superior, how come Intel execs didn’t convince Apple to dump frenemy Samsung?

I see three possible answers.

One is that the 14 nanometer process is woefully late. Deliveries of some Broadwell chips (the nickname of the next round of x-86 processors) are now slated for early- to mid-2015. Apple might feel that Intel’s process needs to mature before it can deliver 300M units.

The second is that Intel’s claim of a three-year technology lead might be less than reliable. Samsung could be closer to delivering 14nm chips than Intel would like us (and itself) to believe.

Or perhaps Intel sees Apple as a real adversary that’s intent on designing all of its own processors, even for laptops and desktops that are currently powered by x-86 chips. But even so, why not become the preferred fabricator?

The Intel enigma remains: There’s no clear, resounding answer to the What’s Next? question, only some lingering puzzlement over What Happened?

JLG@mondaynote.com

Apple Watch: Hard Questions, Facile Predictions

 

by Jean-Louis Gassée

Few Apple products have agitated forecasters and competitors as much as the company’s upcoming watch. The result is an escalation of silly numbers – and one profound observation from a timepiece industry insider.

Apple Watch 2015 sales predictions are upon us: 10 million, 20 million, 24 million, 30 million, even 40 million! Try googling “xx million apple watch”; you won’t be disappointed. Microsoft’s Bing doesn’t put a damper on the enthusiasm either: It finds a prediction for first-year sales of 60 million Apple Watches!

These are scientific, irony-free numbers, based on “carefully weighed percentages of iPhone users” complemented by investigations into “supplier orders” and backed up by interviews with “potential buyers”. Such predictions reaffirm our notion that the gyrations and divinations of certain anal-ists and researchers are best appreciated as black comedy — cue PiperJaffray’s Gene Munster with his long-running Apple TV Set gag.

Fortunately, others are more thoughtful. They consider how the product will actually be experienced by real people and how the new Apple product will impact the watch industry.

As you’ll recall from the September 14th “Apple Watch Is And Isn’t”, Jean-Claude Biver, the LVMH executive in charge of luxury watch brands such as Hublot and TAG Heuer, offered his frank opinion of the “too feminine” AppleWatch:

“To be totally honest, it looks like it was designed by a student in their first trimester.” 

At the time, it sounded like You Don’t Need This sour grapes from a disconcerted competitor. But recently, Biver has also given us deeper, more meaningful thoughts:

“A smartwatch is very difficult for us because it is contradictory,” said Mr. Biver. “Luxury is supposed to be eternal … How do you justify a $2,000 smart watch whose technology will become obsolete in two years?” he added, waving his iPhone 6. 

Beautiful. All the words count. Luxury and Eternity vs. Moore’s Law.

To help us think about the dilemma that preoccupies the LVMH exec, let’s take a detour through another class of treasured objects: Single Lens Reflex cameras.

[Photo: Nikon F Photomic FTn]

 

Unless you were a photojournalist or fashion photographer taking hundreds of pictures a day, these cameras lasted forever. A decade of use would come and go without impact on the quality of your pictures or the solid feel of the product. People treasured their Hasselblads, Leicas (not an SLR), Canons, and more obscure marques such as the Swiss Alpa. (I’m a bit partial here: I bought a Nikon exactly like the one pictured above back in 1970.)

These were purely mechanical marvels. No battery: the light sensor was powered by… light.

Then, in the mid-nineties, digital electronics began to sneak in. Sensor chips replaced silver-halide film; microcomputers automated more and more of the picture-taking process.

The most obvious victim was Eastman Kodak, a company that had dominated the photographic film industry for more than a century – and filed for bankruptcy in 2012. (A brief moment of contemplation: Kodak owned many digital photography patents and even developed the first digital camera in 1975, but “…the product was dropped for fear it would threaten Kodak’s photographic film business.” [Wikipedia].)

The first digital cameras weren’t so great. Conventional film users rightly criticized the lack of resolution, the chromatic aberrations, and other defects of early implementations. But better sensors, more powerful microprocessors, and clever software won the day. A particular bit of cleverness that has saved a number of dinner party snapshots was introduced in the late-nineties: A digital SLR sends a short burst of flash to evaluate the scene, and then uses the measurements to automatically balance shutter speed and aperture, thus correcting the classical mistake of flooding the subject in the foreground while leaving the background in shadows.
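In spirit (this is only an illustrative toy model, not any camera maker’s actual metering code), the trick amounts to metering the scene twice, once with ambient light alone and once during a low-power pre-flash, then solving for a shutter speed and flash power that land foreground and background near the same exposure:

```python
def balance_flash(ambient_bg, preflash_fg, preflash_power=0.05, target=250.0):
    """Toy model of late-nineties pre-flash metering (illustrative only).

    ambient_bg:     background brightness metered with ambient light alone
    preflash_fg:    extra foreground brightness measured during the pre-flash
    preflash_power: fraction of full flash power used for the pre-flash burst
    target:         brightness we want both subject and background to reach
    """
    # Expose for the background: dimmer ambient light means a longer shutter,
    # capped at 1/30 s to keep the shot hand-holdable.
    shutter_s = min(1 / 30, (target / max(ambient_bg, 1e-6)) * (1 / 250))

    # Scale the main flash so the foreground also reaches the target.
    fg_at_full_power = preflash_fg / preflash_power
    flash_power = min(1.0, target / max(fg_at_full_power, 1e-6))
    return shutter_s, flash_power

# Dim restaurant background, subject well lit by even a weak pre-flash:
shutter, power = balance_flash(ambient_bg=40, preflash_fg=60)
print(f"shutter ≈ 1/{round(1 / shutter)} s, main flash at {power:.0%} power")
# shutter ≈ 1/40 s, main flash at 21% power
```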

Digital cameras have become so good we now have nostalgia “film packs” that recreate the defects — sorry, the ambiance — of analog film stock such as Ektachrome or Fuji Provia.

But Moore’s Law exacts a heavy price. At the high end, the marvelous digital cameras from Nikon, Canon, and Sony are quickly displaced year after year by new models that have better sensors, faster microprocessors, and improved software. Pros and prosumers can move their lenses — the most expensive pieces of their equipment — from last year’s model to this one’s, but the camera body is obsolete. In this regard, the most prolific iterator seems to be Sony, today’s king of sensor chips; the company introduces new SLR models once or twice a year.

At the medium to low end, the impact of Moore’s law was nearly lethal. Smartphone cameras have become both so good and so convenient (see Chase Jarvis’ The Best Camera is the One That’s With You) that they have displaced almost all other consumer picture taking devices.

What does the history of cameras say for watches?

At the high-end, a watch is a piece of jewelry. Like a vintage Leica or Canon mechanical camera, a Patek watch works for decades, it doesn’t use batteries, and it doesn’t run on software. Mechanical watches have even gained a retro chic among under-forty urbanites who have never had to wind a stem. (A favorite of techies seems to be the Officine Panerai.)

So far, electronic watches haven’t upended the watch industry. They’ve mostly replaced a spring with a battery and have added a few functions and indicator displays – with terrible user interfaces. This is about to change. Better/faster/cheaper organs are poised to invade watches: sensors, microprocessors + software, wireless links…

Jean-Claude Biver is right to wonder how the onslaught of ever-improving technology will affect the “eternity” of the high-end, fashion-conscious watch industry…and he’ll soon find out:  He’s planning a (yet-to-be announced) TAG Heuer smartwatch.

With this in mind, Apple’s approach is intriguing: The company plays the technology angle, of course, and has loaded their watch with an amazing — some might say disquieting — amount of hardware and software, but they also play the fashion and luxury game. The company invited fashion writers to the launch; it hosted a celebrity event at Colette in Paris with the likes of Karl Lagerfeld and Anna Wintour in attendance. The design of the watch, the choice of materials for the case and bands/bracelets… Apple obviously intends to offer customers a differentiated combination of traditional fashion statement and high-tech functions.

But we’re left with a few questions…

Battery life is one question — we don’t know what it will be. The AppleWatch user interface is another.

The product seems to be loaded with features and apps… will users “get” the UI, or will they abandon hard-to-use functions, as we’ve seen in many of today’s complicated watches?

But the biggest question is, of course, Moore’s Law. Smartphone users have no problem upgrading every two years to new models that offer enticing improvements, but part of that ease is afforded by carrier subsidies (and the carriers play the subsidy game well, despite their disingenuous whining).

There’s no carrier subsidy for the AppleWatch. That could be a problem when Moore’s Law makes the $5K high-end model obsolete. (Expert Apple observer John Gruber has wondered if Apple could just update the watch processor or offer a trade-in — that would be novel.)

We’ll see how all of this plays out with regard to sales. I’ll venture that the first million or so AppleWatches will sell easily. I’ll certainly buy one, the entry-level Sports model with the anodized aluminum case and elastomer band. If I like it, I’ll even consider the more expensive version with a steel case and ingenious Marc Newson link bracelet — reselling my original purchase should be easy enough.

Regardless of the actual sales, first-week numbers won’t matter. It’s what happens after that that matters.

Post-purchase Word of Mouth is still the most potent marketing device. Advertising might create awareness, but user buzz is what makes or breaks products such as a watch or phone (as opposed to cigarettes and soft drinks). It will take a couple months after the AppleWatches arrive on the shelves before we can judge whether or not the product will thrive.

Only then can we have a sensible discussion about how the luxury segment of the line might plan to deal with the eternity vs. Moore’s Law question.

JLG@mondaynote.com

Clayton Christensen Becomes His Own Devil’s Advocate

 

by Jean-Louis Gassée

Every generation has its high tech storytellers, pundits who ‘understand’ why products and companies succeed and why they fail. And each next generation tosses out the stories of their elders. Perhaps it’s time to dispense with “Disruption”.

“I’m never wrong.”

Thus spake an East Coast academic, who, in the mid- to late-eighties, parlayed his position into a consulting money pump. He advised — terrorized, actually — big company CEOs with vivid descriptions of their impending failure, and then offered them salvation if they followed his advice. His fee was about $200K per year, per company; he saw no ethical problem in consulting for competing organizations.

The guru and I got into a heated argument while walking around the pool at one of Apple’s regular off-sites. When I disagreed with one of his wild fantasies, his retort never varied: I’m never wrong.

Had I been back in France, I would have told him, in unambiguous and colorful words, what I really thought, but I had acclimated myself to the polite, passive-aggressive California culture and used therapy-speak to “share my feelings of discomfort and puzzlement” at his Never Wrong posture. “I’ve always been proved right… sometimes it simply takes longer than expected”, was his comeback. The integrity of his vision wasn’t to be questioned, even if reality occasionally missed its deadline.

When I had entered the tech business a decade and a half earlier, I marveled at the prophets who could part the sea of facts and reveal the True Way. Then came my brief adventures with the BCG-advised diversification of Exxon into the computer industry.

Preying on the fear of The End of Oil in the late-seventies, consultants from the prestigious Boston company hypnotized company executives with their chant: Information Is The Oil of The 21st Century. Four billion dollars later (a lot of money at the time), Exxon finally recognized the cultural mismatch of the venture and returned to the well-oiled habits of its hearts and minds.

It was simply a matter of time, but the BCG was ultimately proved right — we now have our new Robber Barons of zeroes and ones. But they were wrong about something even more fundamental but slippery, something they couldn’t divine from their acetate foils: culture.

A little later, we had In Search of Excellence, the 1982 best-seller that turned into a cult. Tom Peters, the more exuberant of the book’s two authors, was a constant on pledge-drive public TV. As I watched him one Sunday morning with the sound off, his sweaty fervor and cutting gestures reminded me of the Bible-thumping preacher, Jimmy “I Sinned Against You” Swaggart. (These were my early days in California; I flipped through a lot of TV channels before Sunday breakfast, dazzled by the excess.)

Within a couple of years, several of the book’s exemplary companies — NCR, Wang, Xerox — weren’t doing so well. Peters’ visibility led to noisy accusations and equally loud denials of faking the data, or at least of carefully picking particulars.

These false prophets commit abuses under the color of authority. They want us to respect their craft as a form of science, when what they’re really doing is what Neil Postman, one of my favorite curmudgeons, views as simple storytelling: They felicitously arrange the facts in order to soothe anxiety in the face of a confusing if not revolting reality. (Two enjoyable and enlightening Postman books: Conscientious Objections, a series of accessible essays, and Amusing Ourselves To Death, heavier, very serious fare.)

A more recent and widely celebrated case of storytelling in a scientist’s lab coat is Clayton Christensen’s theory of disruptive innovation. In order to succeed these days — and, especially, to pique an investor’s interest — a new venture must be disruptive, with extra credit if the disrupter has attended the Disrupt conference and bears a Renommierschmiss from the Startup Battlefield.

[Photo: Clayton Christensen (credit: www.claytonchristensen.com)]

Christensen’s body of work is (mostly) complex, sober, and nuanced storytelling that’s ill-served by the overly-simple and bellicose Disruption! battle cry. Nonetheless, I’ll do my share and provide my own tech world simplification: The incumbency of your established company is forever threatened by lower cost versions of the products and services you provide. To avoid impending doom, you must enrich your offering and engorge your price tag. As you abandon the low end, the interloper gains business, muscles up, and chases you farther up the price ladder. Some day — and it’s simply a matter of time — the disruptor will displace you.

According to Christensen, real examples abound. The archetypes, in the tech world, are the evolution of the disk drive, and the disruptive ascension from mainframe to minicomputer to PC – and today’s SDN (Software Defined Networking) entrants.

But recently, skeptical voices have disrupted the Disruption business.

Ben Thompson (@monkbent) wrote a learned paper that explains What Clayton Christensen Got Wrong. In essence, Ben says, disruption theory is an elegant explanation of situations where the customer is a business that’s focused on cost. If the customer is a consumer, price is often trumped by the ineffable values (ease-of-use, primarily) that can only be experienced, that can’t be described in a dry bullet list of features.

More broadly, Christensen came under attack by Jill Lepore, the New Yorker staff writer who, like Christensen, is a Harvard academic. In a piece titled The Disruption Machine, What the gospel of innovation gets wrong, Lepore asserts her credentials as a techie and then proceeds to point out numerous examples where Christensen’s vaunted storytelling is at odds with facts [emphasis and edits mine]:

“In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, “more than all of its U.S. competitors combined,” according to an industry report. In 1997, the year Christensen published ‘The Innovator’s Dilemma,”’Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars. Last year, Seagate shipped its two-billionth disk drive. Most of the entrant firms celebrated by Christensen as triumphant disrupters, on the other hand, no longer exist

Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the ‘Herculean managerial effort’ of its C.E.O., Stuart Mahon. But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed. 

MiniScribe, founded in 1980, started out selling 5.25-inch drives and saw quick success. ‘That was MiniScribe’s hour of glory,’ the company’s founder later said. ‘We had our hour of infamy shortly after that.’ In 1989, MiniScribe was investigated for fraud and soon collapsed; a report charged that the company’s practices included fabricated financial reports and ‘shipping bricks and scrap parts disguised as disk drives.’”

Echoes of the companies that Tom Peters celebrated when he went searching for excellence.

Christensen is admired for his towering intellect and also for his courage facing health challenges — one of my children has witnessed both and can vouch for the scholar’s inspiring presence. Unfortunately, his reaction to Lepore’s criticism was less admirable. In a BusinessWeek interview Christensen sounds miffed and entitled:

“I hope you can understand why I am mad that a woman of her stature could perform such a criminal act of dishonesty—at Harvard, of all places.”

At Harvard, of all places. Hmmm…

In another attempt to disprove Jill Lepore’s disproof, a San Francisco-based investment banker wrote a scholarly rearrangement of Disruption epicycles. In his TechCrunch post, the gentleman glows with confidence in his use of the theory to predict venture investment successes and failures:

“Adding all survival and failure predictions together, the total gross accuracy was 84 percent.”

and…

“In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch.”

Why the venture industry hasn’t embraced the model, and why the individual hasn’t become richer than Warren Buffett as a result of the unflinching accuracy, remains a story to be told.

Back to the Disruption sage, he didn’t help his case when, as soon as the iPhone came out, he predicted Apple’s new device was vulnerable to disruption:

“The iPhone is a sustaining technology relative to Nokia. In other words, Apple is leaping ahead on the sustaining curve [by building a better phone]. But the prediction of the theory would be that Apple won’t succeed with the iPhone. They’ve launched an innovation that the existing players in the industry are heavily motivated to beat: It’s not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited.”

Not truly disruptive? Five years later, in 2012, Christensen had an opportunity to let “disruptive facts” enter his thinking. But no, he stuck to his contention that Modularity always defeats integration:

“I worry that modularity will do its work on Apple.”

In 2013, Ben Thompson, in his already quoted piece, called Christensen out for sticking to his theory:

“[…] the theory of low-end disruption is fundamentally flawed. And Christensen is going to go 0 for 3.”

Perhaps, like our poolside guru, Christensen believes he’s always right…but, on rare occasions, he’s simply wrong on the timing.

Apple will, of course, eventually meet its maker, whether through some far-off, prolonged mediocrity, or by a swift, regrettable decision. But such predictions are useless; they’re storytelling – and a bad, facile kind at that. What would be really interesting and courageous would be a detailed scenario of Apple’s failure, complete with a calendar of main steps towards the preordained ending. No more Wrong on the Timing excuses.

A more interesting turn for a man of Christensen’s intellect and reach inside academia would be to become his own Devil’s Advocate. Good lawyers pride themselves on researching their cases so well they could plead either side. Perhaps Clayton Christensen could explain, with his usual authority, how the iPhone defines a new theory of innovation. Or why the Macintosh has prospered and ended up disrupting the PC business by sucking up half of the segment’s profits. He could then draw comparisons to other premium goods that are happily chosen by consumers, from cars to clothes and… watches.

JLG@mondaynote.com

Payment Systems Adventures – Part II: Counting Friends And Foes

 

by Jean-Louis Gassée

It’s still too early to tell if Apple Pay will square the circle and emerge as a payment system that’s more secure, more convenient, and widely accepted. MCX, a competing solution that faces more challenges than Apple Pay, helps shed light on the problem.

Apple Pay was announced on September 9th with the new iPhone 6, and rolled out on October 20th.

Where it works, it works well. The roster of banks and merchants that accept Apple’s new payment system is impressive, with big names such as Visa, American Express, Bank of America, Macy’s, Walgreens, and Whole Foods.

But it doesn’t work everywhere.

At launch, Apple Pay covered just a corner of the territory blanketed by today’s debit and credit cards. Then we had a real surprise. Within 24 hours of the roll-out, a handful of merchants, notably CVS, Rite-Aid, Target, and Wal-Mart, pulled the plug on Apple Pay. Apparently, these retailers suddenly remembered they had signed an exclusive agreement with Merchant Customer Exchange (MCX), a consortium of merchants that’s developing a competing payment system and mobile app called CurrentC. How a company as well-managed as CVS could have “forgotten” about its contract with MCX, and what the threatened consequences were for this lapse of memory aren’t known…yet.

We could wade through the professions of good faith and sworn allegiance (“We are committed to offering convenient, reliable, and secure payment methods that meet the needs of our customers”, says Rite Aid PR flack Ashley Flower), but perhaps we’re better off just listing MCX’s Friends and Foes.

Let’s start with the Foes: MCX hates credit cards. As Ron Shevlin of Snarketing 2.0 reports, the hatred isn’t even veiled:

“At last year’s BAI Retail Delivery conference…I asked Mr. Scott [Lee Scott, former Wal-Mart CEO] why, in the face of so many failed consortia before it, would MCX succeed? He said: ‘I don’t know that it will, and I don’t care. As long as Visa suffers.’”

This open animosity is understandable. When we look at Wal-Mart’s latest financials, we see that the company’s net income is 3.1% of sales. A typical Visa transaction costs them 1.51% of the amount that was charged. (See Credit Card Processing Fees & Rates for mind-numbing esoterica.)
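On a per-sale basis, those two percentages make the animosity concrete; a rough sketch:

```python
# What a 1.51% card fee means next to a 3.1% net margin, per $100 sale.
sale = 100.00
net_margin = 0.031      # Wal-Mart net income as a share of sales
card_fee_rate = 0.0151  # typical cost of a Visa transaction

net_income = sale * net_margin
card_fee = sale * card_fee_rate
print(f"Net income on a ${sale:.0f} sale: ${net_income:.2f}")
print(f"Card fee on the same sale:   ${card_fee:.2f}")
print(f"The fee is roughly {card_fee / net_income:.0%} of what the merchant nets")

# Net income on a $100 sale: $3.10
# Card fee on the same sale:   $1.51
# The fee is roughly 49% of what the merchant nets
```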

For Wal-Mart and other big merchants, this 1.51% “donation” cuts too close to the bone, which is why they banded together to form the MCX consortium.
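To make the squeeze concrete, here’s a back-of-the-envelope sketch in Python. The numbers are the 3.1% and 1.51% figures quoted above, applied to a hypothetical $100 sale; it’s an illustration, not an accounting of Wal-Mart’s actual cost structure:

```python
# Back-of-the-envelope: what a 1.51% card fee means to a retailer with a 3.1% net margin.
# Illustrative only -- uses the percentages quoted above on a hypothetical $100 sale.

sale = 100.00                 # hypothetical ticket
net_margin_rate = 0.031       # Wal-Mart net income as a share of sales
card_fee_rate = 0.0151        # typical Visa transaction cost cited above

net_income = sale * net_margin_rate    # ~$3.10 kept per $100 of sales
card_fee = sale * card_fee_rate        # ~$1.51 paid to process the charge

print(f"Net income per $100 sale: ${net_income:.2f}")
print(f"Card fee per $100 sale:   ${card_fee:.2f}")
print(f"Fee relative to net income: {card_fee / net_income:.0%}")   # roughly half
```

On those numbers, the processing fee is roughly half the size of the retailer’s entire net margin, which goes a long way toward explaining the appetite for an alternative.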

So we know who MCX’s Foes are…but does it have any Friends?

Not really. Counting the MCX merchants themselves as Friends is a bit of a circular argument — no sin there, it’s business — but it doesn’t build a compelling case for the platform.

What about consumers?

On paper, the MCX idea is simple: You download the CurrentC app onto your mobile phone and connect it to a bank account (ABA routing and account number). When it comes time to pay for a purchase, CurrentC displays a QR code that you present to the cashier. The code is scanned, there’s a bit of network chatter, and money is pumped directly out of your bank account.
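The flow is simple enough to sketch in a few lines of Python. To be clear, MCX hasn’t published an API; every name, number, and data structure below is invented purely to visualize the steps just described:

```python
# Hypothetical sketch of the CurrentC-style flow described above.
# All names and data are invented for illustration; the real protocol surely differs.

from dataclasses import dataclass

@dataclass
class CheckingAccount:
    routing_number: str    # ABA routing number supplied at enrollment
    account_number: str    # bank account number supplied at enrollment

def enroll(routing_number: str, account_number: str) -> CheckingAccount:
    """Step 1: the app is linked directly to a bank account, not to a card."""
    return CheckingAccount(routing_number, account_number)

def present_qr_code(account: CheckingAccount, amount: float) -> str:
    """Step 2: at the register, the app displays a code for the cashier to scan."""
    # In a real system this would be an opaque, short-lived token, not raw account data.
    return f"CURRENTC:{hash((account.account_number, amount)) & 0xFFFFFFFF:08x}"

def settle(qr_payload: str, amount: float) -> None:
    """Step 3: after some network chatter, the amount is pulled straight
    from the shopper's bank account (an ACH-style debit)."""
    print(f"Debiting ${amount:.2f} against payload {qr_payload}")

account = enroll("123456789", "000111222333")   # made-up numbers
settle(present_qr_code(account, 42.50), 42.50)
```

The notable part isn’t the mechanics; it’s step 1: the shopper hands over live bank account credentials rather than a card number.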

Set-up details are still a bit sketchy. For example, the CurrentC trial run required the customer’s social security and driver’s license numbers in addition to the bank info. MCX says it doesn’t “expect” to have these additional requirements when CurrentC launches in early 2015, but I’m not sure that it matters. The requirement that the customer supply full banking details and then watch as money is siphoned off without delay is essentially no different from a debit card — but with a middle man inserted into the process. And while debit card use surpassed credit cards as far back as 2007, US shoppers are loath to leave the warm embrace of their credit cards when it comes to big-ticket purchases (average debit card charge in 2012: $37; credit card: $97; see here for yet more esoterica).

What do MCX and CurrentC offer that would entice consumers to abandon their credit and debit cards and give merchants direct access to their bank accounts? The consortium can’t offer much in the way of financial incentives, not when the whole point is to escape Visa’s 1.51% processing fee.

Now let’s look at Apple Pay; first, consumers.

Apple has recognized the strong bond between consumers and their credit cards: The average wallet contains 3.7 cards, with a balance of $7.3K outstanding. Apple Pay doesn’t replace credit cards so much as it makes the relationship more secure and convenient.

Set-up is surprisingly error-free — and I’m always expecting bugs (more on that in a future note). The credit card that’s connected to your iTunes account is used by default; all you have to do is launch Passbook and re-enter the CVV number on the back. If you want to use a different credit card account, you take a picture of the card and Passbook verifies it with the issuer. Debit cards also work, although you have to call the bank…as in an actual telephone call. In my case, the bank had a dedicated 877 number. Less than 30 seconds later, a confirmation appeared on my device.

Paying is simple: Gently tap the phone on a compatible, NFC-enabled point-of-sale terminal and place a registered finger on the TouchID button; the phone logs the transaction in Passbook and then vibrates pleasantly to confirm.

344-1

On the security side, Apple Pay doesn’t store your credit card number, either on your phone or on Apple’s servers. Instead, the card is represented by an encrypted token; the most you can ever see are the last four digits of the card — even on an unlocked phone, even when you’re deleting a card from your Passbook.

Simplifying a bit (or a lot), during a transaction this encrypted token is sent through the NFC terminal back to your bank where it’s decrypted. Not even the merchant can see the card.
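Here’s a rough model of that idea in code. This is a simplified sketch, not Apple’s actual implementation (the real system involves device account numbers, per-transaction cryptograms, and the card networks’ token vaults); it only illustrates the principle that the phone and the merchant handle a token while the bank alone can map it back to the card:

```python
# Simplified model of tokenized payment, in the spirit of the description above.
# Not Apple's implementation: the names, keys, and token scheme are invented.

import secrets

class IssuingBank:
    """The bank keeps the real card number; everyone else sees a token."""
    def __init__(self):
        self._vault = {}   # token -> real card number, known only to the bank

    def tokenize(self, card_number: str) -> tuple[str, str]:
        token = secrets.token_hex(16)     # opaque stand-in for the card
        self._vault[token] = card_number
        return token, card_number[-4:]    # the device stores the token + last four digits

    def authorize(self, token: str, amount: float) -> bool:
        card = self._vault.get(token)     # only the bank can map token -> card
        return card is not None and amount > 0

class Terminal:
    """The merchant's terminal relays the token; it never sees the card either."""
    def __init__(self, bank: IssuingBank):
        self.bank = bank

    def charge(self, token: str, amount: float) -> bool:
        return self.bank.authorize(token, amount)

class Phone:
    """The phone holds the token and the last four digits, never the card number."""
    def __init__(self, token: str, last4: str):
        self.token, self.last4 = token, last4

    def tap_to_pay(self, terminal: Terminal, amount: float) -> bool:
        return terminal.charge(self.token, amount)   # only the token travels over NFC

bank = IssuingBank()
token, last4 = bank.tokenize("4111111111111111")   # standard test card number
phone = Phone(token, last4)
print(phone.tap_to_pay(Terminal(bank), 25.00))     # True -- the merchant saw only the token
```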

We can also count the banks and credit card companies as Friends of Apple Pay. For them, nothing much changes. A small fee goes to Apple (0.15%, or $1 for every $700). Apple Pay isn’t meant to make money in itself; its goal is to make iDevices more pleasant and more secure.

Banks also like the potential for cutting down on fraud. In 2013, payment card fraud was pegged at $14B globally, with half of that in the US. How deeply Apple Pay will cut into this number isn’t known, but the breadth and warmth of Apple Pay adoption by financial institutions speak to their expectations. Wells Fargo, for example, put up a large billboard over the 101 freeway and promoted the service on social media.

What about merchants? This is a mixed bag; some seem to be fully on board although, as ever, we mustn’t judge by what they say, for the flackery on the left is just as disingenuous as the flackery on the right. Consider the declaration from pro-Apple Pay Walgreens: “Incorporating the latest mobile technology into our business is another way we are offering ultimate convenience for our customers.” Sound familiar?

Others, such as Wal-Mart, are resolute Foes. As for the fence-sitters, time will tell whether they’ll jump into the Apple Pay camp or stay away. It’s still very early.

Questions remain regarding “loyalty” programs, a cynical word if there ever was one when considering the roach motels of frequent flyer miles. A quick look at in-app payments provides a possible answer.

One such example, no surprise, is Apple’s own App Store app where you can pay with Apple Pay after scanning an accessory’s barcode. The app triggers a confirmation email that shows that the merchant, Apple, is aware of the transaction. Other merchants can, and will, build their own apps, but there’s still the question of how a loyalty program will work for point-of-sale transactions where merchants can’t see your data.

In a clumsily worded comparison, MCX CEO Dekkers Davidson tries to imply that his company’s exclusivity requirement is much like AT&T’s arrangement with Apple in the early days of the iPhone, an arrangement that wasn’t permanent and that worked out well for both parties. In the meantime, one can visualize Apple engaging in an encircling action, patiently adding partners and features quarter after quarter.

We’ll know soon if this battle is won before it’s even fought.

JLG@mondaynote.com

Cultural Adventures In Payment Systems – Part I

 

by Jean-Louis Gassée

Payment systems and user behaviors have evolved over the past three decades. In this first of a two-part Monday Note, I offer a look at the obstacles and developments that preceded the Apple Pay launch.

When I landed in Cupertino in 1985, I was shocked, shocked to find that so much gambling was going on in here. But it wasn’t the Rick’s Café Américain kind of gambling, it was the just-as-chancy use of plastic: Colleagues would heedlessly offer their credit card numbers to merchants over the phone; serious, disciplined executives would hand their AmEx Platinums to their assistants without a second thought.

This insouciant way of doing business was unheard of in my Gallic homeland. The French (and most Europeans) think that trust is something that must be earned, that it has a value that is debased when it’s handed out too freely. They think an American’s trusting optimism is naïve, even infantile.

After I got over my shock, I came to see that my new countrymates weren’t such greenhorns. They understood that if you want to lubricate the wheels of commerce, you have to risk an occasional loss, that the rare, easily-remedied abuses are more than compensated for by a vibrant business. It wasn’t long before I, too, was asking my assistant to run to the store with my Visa to make last-minute purchases before a trip.

(On the importance of Trust and its contribution to The Wealth of Nations — or their poverty — see Alain Peyrefitte’s La Société de Confiance [The Society of Trust]. Unfortunately the work hasn’t been translated into English, unlike two of Peyrefitte’s other books, The Trouble with France and the prophetic 1972 best-seller The Immobile Empire. The title of the latter is a deplorable translation of Quand la Chine s’éveillera… Le monde tremblera, “When China Awakes, The World Will Shake”, a foreboding attributed to Napoleon.)

The respective attitudes towards trust point to a profound cultural difference between my two countries. But I also noticed other differences that made my new environment feel a little antiquated.

For example, direct deposit and direct deduction weren’t nearly as prevalent in America as in France. In Cupertino, I received a direct deposit paycheck, but checks to cover expenses were still “cut”, and I had to write checks for utilities and taxes and drop them in the mailbox.

Back in Paris, everything had been directly wired into and out of my bank account. Utilities were automatically deducted ten days after the bill was sent, as mandated by law (the delay allowed for protests and stop-payments if warranted). Paying taxes was ingeniously simple: Every month through October, a tenth of last year’s total tax was deducted from your bank account. In November and December, you got a reprieve for Holiday spending fun (or, if your income had gone up, additional tax payments to Uncle François — Mitterrand at the time, not Hollande).

Like a true Frenchman, I once mocked these “primitive” American ways in a conversation with a Bank of America exec in California. A true Californian, she smiled, treated me to a well-rehearsed Feel-Felt-Found comeback, and then, dropping the professional mask, told me that the distrust of electronic commerce that so astonished me here in Silicon Valley (of all places) was nothing compared to Florida, where it’s common for retirees to cash their Social Security checks at the bank, count the physical banknotes and coins, and then deposit the money into their accounts.

Perhaps this was the heart of the “Trust Gap” between Europe and the US: Europeans have no problem trusting electronic commerce as long as it doesn’t involve people; Americans trust people, not machines.

My fascination with electronic payment modes preceded my new life in Silicon Valley. In 1981, shortly after starting Apple France, I met Roland Moreno, the colorful Apple ][ hardware and software developer who invented the carte à puce (literally “chip card”, but better known as a “smart card”) that’s found in a growing number of credit cards, and in mobile phones where it’s used as a Subscriber Identity Module (SIM).

343_jlg

The key to Moreno’s device was that it could securely store a small amount of information, hence its applicability to payment cards and mobile phones.

I carried memories of my conversations with Moreno with me to Cupertino. In 1986, we briefly considered adding a smart card reader to the new ADB Mac keyboard, but nothing came of it. A decade later, Apple made a feeble effort to promote the smart card for medical applications such as a patient ID, but nothing came of that, either.

The results of the credit card industry’s foray into smart card technology were just as tepid. In 2002, American Express introduced its Blue smart card in the US with little success:

“But even if you have Blue (and Blue accounts for nearly 10% of AmEx’s 50 million cards), you may still have a question: What the hell does that chip (and smart cards in general) do?

The answer: Mostly, nothing. So few stores have smart-card readers that Blue relies on its magnetic strip for routine charges.”

In the meantime, the secure smart chip found its way into a number of payment cards in Europe, thus broadening the Trust Gap between the Old and New Worlds, and heightening Roland’s virtuous and vehement indignation.

(Moreno, who passed away in 2012, was a true polymath; he was an author, gourmand, inventor of curious musical instruments, and, I add without judgment, an ardent connoisseur of a wide range of earthly delights).

Next came the “Chip and PIN” model. Despite its better security — the customer had to enter a PIN after the smart card was recognized — Chip and PIN never made it to the US, not only because there were no terminals into which the customers could type their PINs (let alone that could read the smart cards in the first place), but, just as important, because there was a reluctance on the part of the credit card companies to disturb ingrained customer behavior.

It appeared that smart cards in the US were destined to butt up against these two insurmountable obstacles: The need for a new infrastructure of payment terminals and a skepticism that American customers would change their ingrained behavior to accept them.

In 2003, I made a bad investment in the payment system field on behalf of the venture company I had just joined. The entrepreneur that came to us had extensive “domain knowledge” and proposed an elegant way to jump over both the infrastructure and the customer behavior obstacles by foregoing the smart card altogether. Instead, he would secure the credit card’s magnetic stripe.


Tim Cook Free At Last

 

by Jean-Louis Gassée

Trading one’s privacy for the benefit of others isn’t an easy decision. Tim Cook just made such a swap, and the reverberations are beginning to be heard.

I’m happy and relieved that Tim Cook decided to “come out”, to renounce his cherished privacy and speak of his sexual orientation in plain terms rather than veiled, contorted misdirections. The unsaid is toxic.

If you haven’t done so already, please take the time to read Tim’s I’m Proud to Be Gay Businessweek editorial. Soberly written and discreetly moving, the piece concludes with:

“…I’m doing my part, however small, to help others. We pave the sunlit path toward justice together, brick by brick. This is my brick.”

It’s an admirable cause…but why should I care? Why does this 70-year-old French-born American, a happily married-up father of three adult and inexplicably civilized children, care that Cook’s sexuality is now part of the public record?

342-Tim_cook

First, I like and respect Cook for what he does, how he does it, and the way he handles his critics. For the past three years he’s been bombarded by questions about Apple’s slowing growth and the absent Next Big Thing, he’s been criticized for both hastening and impeding the inevitable commoditization of All Things Apple, he’s been called a liar by the NYT. Above all, he’s had to suffer the hidden — and occasionally blatant — accusation: You’re no Steve Jobs.

Throughout it all, Cook has displayed a preternatural calm in refusing to take the bait. In a previous Monday Note, I attributed his ability to deflect the cruel jibes to his having grown up “different” in Alabama. In his editorial, Cook confirms as much:

“It’s been tough and uncomfortable at times… [but] it’s also given me the skin of a rhinoceros, which comes in handy when you’re the CEO of Apple.”

Second, I’ve seen the ravages of homophobia at close range. A salient and personal example is the young gay architect of our first Palo Alto house. He quickly sensed he could be open with us, and would tease my wife Brigitte by showing her pictures of a glorious group of young bucks on vacation in Greece, adding, “What a loss for females”. But he also told us of his shame when he became aware of his desires in his adolescence, that he kneeled down every night to pray that his god would have mercy and make him “normal”. His parents rejected him and refused to keep in touch, even after the HIV virus made him perilously sick.

One morning when we were driving to his place in San Francisco to deliver a painting Brigitte had made for him, his partner called and told us not to come. Our friend had just passed away, still unaccepted by his parents.

Another personal example. A local therapist, a gay Buddhist, told me he couldn’t work as an M.D. in his native Caracas because the oppressive culture wouldn’t allow a gay man to so much as touch another man — even as a doctor. When he decided to tell his parents he was gay, he had to take them to a California mountain and mellow them with a certain herb before they would hear him out, and even then they didn’t entirely embrace his “choice” of sexuality.

Years of conversation with the fellow — who’s exactly my age — in a setting that facilitates honesty have brought empathy and insights that aren’t prevalent or even encouraged in the Parisian culture I come from, even in the supposedly liberated Left Bank that has been home to lionized gay men such as Yves Saint-Laurent and Karl Lagerfeld. (I recommend Alicia Drake’s The Beautiful Fall: Lagerfeld, Saint Laurent, and Glorious Excess in 1970s Paris, a well-documented and beautifully written parallel life history.)

This leads me to my third point, brought up by my wife. Gays have always been accepted in creative milieus. In many fields — fashion, certainly, but even in high tech — it’s almost expected that a “designer” is homosexual. Despite counterexamples such as Christian Lacroix, or our own Sir Jony, the stereotype endures.

According to the stereotype, it’s okay for “artistes” (I’ve learned the proper dismissive pronunciation, an elongated ‘eee’ after the first ’t’) to be unconventional, but serious business people must be straight. When I landed in Cupertino in 1985, I became acquainted with the creative <=> gay knee jerk. True-blue business people who didn’t like Apple took to calling us “fags” because of our “creative excesses” and disregard of the establishment.

What Brigitte likes most about Cook’s coming out is that it portends a liberation of the Creative Ghetto. Cook isn’t just outing himself as a gay executive; he’s declaring that being gay — or “creatively excessive”, or unconventional — is fully appropriate at the very top of American business. It helps, she concludes, that Apple’s CEO has made his statement from a position of strength, at a time when the company’s fortunes have reached a new peak and his leadership is more fully recognized than ever.

The ripples now start. Perhaps they’ll bring retroactive comfort to many execs such as former BP CEO John Browne who, in 2007, left his job in fear of a revelation about his lifestyle – and an affirmation to myriads of “different” people at the bottom of the pyramid.

Tim Cook brings hope of a more accepting world – both inside and outside of business. For this he must be happy, and so am I.

And, while I’m at it, Happy Birthday.

JLG@mondaynote.com

Science Fiction: Apple Makes A Toaster Fridge…

 

…a supremely elegant one, naturally.

Plummeting iPad sales rekindle fantasies of a hybrid device, a version that adopts PC attributes, something like a better execution of the Microsoft Surface Pro concept. Or not.

For a company that has gained a well-deserved reputation for its genre-shifting — even genre-creating — devices, it might seem odd that these devices evolve relatively slowly, almost reluctantly, after they’ve been introduced.

It took five years for the iPhone to grow from its original 3.5” screen in 2007, to a doubled 326 ppi on the same screen size for the June 2010 iPhone 4, to a 4” screen for the 2012 iPhone 5.

In the meantime, Samsung’s 5.3” Galaxy Note, released in 2011, was quickly followed by a 5.5” phablet version. Not to be outdone, Sony’s 2013 Xperia Z Ultra reached 6.4” (160 mm). And nothing could match the growth spurt of the long-forgotten (and discontinued) Dell Streak: from 5” in 2010 to 7” a year later.

Moreover, Apple’s leadership has a reputation — again, well-deserved — of being dismissive of the notion that their inspired creations need to evolve. While dealing with the iPhone 4 antenna fracas at a specially convened press event in 2010, a feisty Steve Jobs took the opportunity to ridicule Apple’s Brobdingnagian smartphone rivals, calling them “Hummers” and predicting that no one would buy a phone so big “you can’t get your hand around it”.

A smaller iPad? Nah, you’d have to shave your fingertips. Quoting the Grand Master in October 2010 [emphasis mine]:

“While one could increase the resolution to make up some of the difference, it is meaningless unless your tablet also includes sandpaper, so that the user can sand down their fingers to around one-quarter of their present size. Apple has done expensive user testing on touch interfaces over many years, and we really understand this stuff.

There are clear limits of how close you can place physical elements on a touch screen, before users cannot reliably tap, flick or pinch them. This is one of the key reasons we think the 10-inch screen size is the minimum size required to create great tablet apps.”

For his part, Tim Cook has repeatedly used the “toaster-fridge” metaphor to dismiss the idea that the iPad needs a keyboard… and to diss hybrid tablet-PC devices such as Microsoft’s Surface Pro, starting with an April 2012 Earnings Call [emphasis and stitching mine]:

“You can converge a toaster and a refrigerator, but those aren’t going to be pleasing to the user. […] We are not going to that party, but others might from a defensive point of view.”

Recently, however, Apple management has adopted a more nuanced position. In a May 2013 AllThingsD interview, Tim Cook cautiously danced around the iPhone screen size topic — although he didn’t waste the opportunity to throw a barb at Samsung [insert and emphasis mine]:

“We haven’t [done a bigger screen] so far, that doesn’t shut off the future. It takes a lot of really detailed work to do a phone right when you do the hardware, the software and services around it. We’ve chosen to put our energy in getting those right and have made the choices in order to do that and we haven’t become defocused working multiple lines.”

Sixteen months later, Apple’s Fall 2014 smartphone line-up sports three screen sizes: the 4” iPhone 5C and 5S, the new 4.7” iPhone 6, and the 5.5” iPhone 6 Plus phablet.

Is this apostasy? Fecklessness?

Remarking on Jobs’ quotable but not-always-lasting pronouncements, Cook gives us this:

“[Jobs] would flip on something so fast that you would forget that he was the one taking the 180 degree polar [opposite] position the day before. I saw it daily. This is a gift, because things do change, and it takes courage to change. It takes courage to say, ‘I was wrong.’ I think he had that.”

That brings us to the future of the iPad. In the same interview (in 2012) Cook expressed high hopes for Apple’s tablet:

“The tablet market is going to be huge… As the ecosystem gets better and better and we continue to double down on making great products, I think the limit here is nowhere in sight.”

Less than two years after the sky-is-the-limit pronouncement, iPad unit sales started to head South and have now plummeted for three quarters in a row (-2.3%, -9%, and -13% for the latest period). This isn’t to say that the iPad is losing ground to its competitors, unless you include $50 models. Microsoft just claimed $903M in Surface Pro revenue for the quarter ended last September, which, at $1K per hybrid, would be 0.9M units, or double that number if the company only sold its $499 year-old model. For reference, 12.3M iPads were sold in the same period (I don’t know any company, other than Apple, that discloses its tablet unit volume).
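For the curious, the implied-unit arithmetic is easy to reproduce. The $1K and $499 price points are the ones cited above; Microsoft doesn’t disclose the actual mix, so this is only a bracketing exercise:

```python
# Rough implied-unit math for the Surface figures quoted above.
surface_revenue = 903_000_000            # Microsoft's reported revenue for the quarter

units_at_1k  = surface_revenue / 1_000   # if every unit were a ~$1K Surface Pro
units_at_499 = surface_revenue / 499     # if every unit were the $499 year-old model

print(f"{units_at_1k / 1e6:.1f}M units at $1,000 each")   # ~0.9M
print(f"{units_at_499 / 1e6:.1f}M units at $499 each")    # ~1.8M, i.e. double

ipad_units = 12_300_000
print(f"iPad units vs. the high estimate: {ipad_units / units_at_499:.1f}x")   # ~6.8x
```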

As Andreessen Horowitz’s Benedict Evans felicitously tweets it: “There’re 2 tablet markets: next-gen computing vision, where Apple has 80%, and, bigger but quite separate, the cheap TV/casual games device.”

Still, the concern remains. Does the iPad own 80% of a shrinking market, or can the Cupertino team reboot sales and fulfill Tim Cook’s The Limit Is Nowhere In Sight promise?

What’s missing?

A hint might lie in plain sight at the coffee shop next door. We see laptops, a Kindle reader or two, and iPads – many with an attached keyboard. Toaster-fridges!

But here’s Craig Federighi, Apple’s Sr. VP of Software Engineering, who is fond of dismissing talk of touch-screen Macs:

“We don’t think it’s the right interface, honestly.”

I find Federighi’s remark a bit facile. Yes, touching the screen makes much more ergonomic sense for a tablet than for a laptop, but in view of the turnabouts discussed above, I don’t quite know what to make of the honestly part.

Federighi may be entombed in the OS X and iOS software caves, but can he honestly ignore the beautiful Apple Wireless Keyboard proposed as an iPad accessory, or the many Logitech, Incase, and Belkin keyboards offered in the company’s on-line store? (Amazon ranks such keyboards between #20 and #30 in their bestsellers lists.) Is he suborning others to commit the crime of toaster-fridging?

In any case, the iPad + keyboard combo is an incomplete solution. It’s not that the device suffers from a lack of apps. Despite its poor curation, the App Store’s 675,000 iPad apps offer productivity, entertainment, education, graphic composition and editing, music creation, story-telling, and many other tools. As Father Horace (Dediu) likes to put it, the iPad can be “hired to do interesting jobs”.

No, what’s missing is that the iOS user interface building blocks are not keyboard-friendly. And when you start to list what needs to be done, such as adding a cursor, the iPad hybrid looks more and more like a Mac…but a Mac with smaller margins. The 128GB iPad plus an Apple Keyboard rings up at $131 less than an 11”, 128GB MacBook Air. (As an added benefit, perhaps the Apple toaster-fridge would come bundled with Gene Munster’s repeatedly predicted TV Set.)

On to better science fiction.

Let’s imagine what might happen next quarter when Intel finally ships the long-promised Broadwell processors. The new chips’ primary selling point is reduced power consumption. Broadwell probably won’t dislodge ARM SoCs from smartphones, but a reduced appetite for electricity could enable a smaller, slimmer, lighter MacBook Air 2, with or without a double (linear) density Retina display.

Now consider last quarter’s iPad and Mac numbers, compared to the previous year:

341_jlg_table

Mac units grew 25% year-on-year, while iPads experienced a 7% decrease.

You’re in Apple’s driver’s seat: Do you try to make the iPad feel more like a Mac despite the risks on many levels (internal engineering, app developers, UI issues), or do you let nature take its course and let the segment of more demanding users gravitate to the Mac, cannibalizing iPad sales as a result? Put another way, are you willing to risk the satisfaction of users who enjoy “pure tablet” simplicity in order to win over customers who will naturally choose a nimbler Mac?

JLG@mondaynote.com

PS: John Kirk just published a column titled The Apple Mac Takes Its Place In The Post-PC World where he digs up a prophetic Gates quote and explains the rise of the Mac as the weapon of choice for power users.