About Jean-Louis Gassée

Posts by Jean-Louis Gassée:

New iWork: Another Missed Opportunity To Set Expectations

 

With the 5.0 iWork suite we revisit Apple’s propensity to make lofty claims that fall short of reality. The repetition of such easily avoidable mistakes is puzzling and leads us to question what causes Apple executives to squander the company’s well-deserved goodwill.

Once upon a time, our youngest child took it upon herself to sell our old Chevy Tahoe. She thought her father was a little too soft in his negotiations on the sales lot, too inclined to leave money on the table in his rush to end the suffering.

We arrive at the dealership. She hops out, introduces herself to the salesperson, and then this kid — not yet old enough to vote — begins her pitch. She starts out by making it clear that the car has its faults: a couple dents in the rear fender, a stubborn glove compartment door, a cup holder that’s missing a flange. Flaws disclosed, she then shows off the impeccable engine, the spotless interior, the good-as-new finish (in preparation, she’d had the truck detailed inside and out, including the engine compartment).

The dealer was charmed and genuinely complimentary. He said my daughter’s approach was the opposite of the usual posturing. The typical seller touts the car’s low mileage, the documented maintenance, the vows of unimpeachable driver manners. The seller tries to hide the tired tires and nicked rims, the white smoke that pours from the tail pipe, the “organic” aroma that emanates from the seat cushions — as if these flaws would go unnoticed by an experienced, skeptical professional.

‘Give the bad news first,’ said the gent. ‘Don’t let the buyer discover the flaws; it puts you on the defensive. Start the conversation at the bottom and end with a flourish.’ (Music to this old salesman’s ears. My first jobs were in sales after an “unanticipated family event” threw me onto the streets 50 years ago. I’m still fond of the trade, happiest when well executed, sad when not.)

The fellow should have a word or two with Apple execs. They did it again: They bragged about their refurbished iWork suite, only to let customers discover that the actual product fails to meet expectations.

We’ll get into details in a moment, but a look into past events will help establish the context for what I believe to be a pattern, a cultural problem that starts at the top (and all problems of culture within a company begin at the executive level).

Readers might recall the 2008 MobileMe announcement, incautiously pitched as Exchange For The Rest of Us. When MobileMe crashed, the product team was harshly criticized by the same salesman, Steve Jobs, who touted the product in the first place. We’ll sidestep questions of the efficacy of publicly shaming a product team, and head to more important matters: What were Jobs and the rest of Apple execs doing before announcing MobileMe? Did they try the product? Did they ask real friends — meaning non-sycophantic ones — how they used it, for what, and how they really felt?

Skipping some trivial examples, we land on the Maps embarrassment. To be sure, it was well handled… after the fact. Punishment was meted out and an honest, constructive apology made. The expression of regret was a welcome departure from Apple’s usual, pugnacious stance. But the same questions linger: What did Apple execs know and when did they know it? Who actually tried Apple Maps before the launch? Were the execs who touted the service ignorant and therefore incompetent, or were they dishonest, knowingly misrepresenting its capabilities? Which is worse?

This is a pattern.

Perhaps Apple could benefit from my daughter’s approach: Temper the pitch by confessing the faults…

“Dear customers, as you know, we’re playing the long game. This isn’t a finished product, it’s a work in progress, and we’ll put your critical feedback to good use.”

Bad News First, Calibrate Expectations. One would think that (finally!) the Maps snafu would have seared this simple logic into the minds of the Apple execs.

But, no.

We now have the iWork missteps. Apple calls its new productivity suite “groundbreaking”. Eddy Cue, Apple’s head of Internet Software and Services, is ecstatic:

“This is the biggest day for apps in Apple’s history. These new versions deliver seamless experiences across devices that you can’t find anywhere else and are packed with great features…” 

Ahem… Neither in the written announcement nor during the live presentation will one find a word of caution about iWork’s many unpleasant “features”.

The idea, as best we can discern through the PR fog, is to make iOS and OS X versions of Pages, Numbers, and Keynote “more compatible” with each other (after Apple has told us, for more than two years, how compatible they already are).

To achieve this legitimate, long game goal, the iWork apps weren’t just patched up, they were re-written.

The logic of a fresh, clean start sounds compelling, but history isn’t always on the side of rewriting-from-scratch angels. A well-known, unfortunate example is what happened when Lotus tried a cross-platform rewrite of its historic Lotus 1-2-3 productivity suite. Quoting from a Wikipedia article:

“Lotus suffered technical setbacks in this period. Version 3 of Lotus 1-2-3, fully rewritten from its original macro assembler into the more portable C language, was delayed by more than a year as the totally new 1-2-3 had to be made portable across platforms and fully compatible with existing macro sets and file formats.”

The iWork rewrite fares no better. The result is a messy pile of missing features and outright bugs that elicited many irate comments, such as these observations by Lawrence Lessig, a prominent activist, Harvard Law professor, and angry Apple customer [emphasis and edits mine]:

“So this has been a week from Apple hell. Apple did a major upgrade of its suite of software — from the operating system through applications. Stupidly (really, inexcusably stupid), I upgraded immediately. Every Apple-related product I use has been crippled in important ways.

… in the ‘hybrid economy’ that the Internet is, there is an ethical obligation to treat users decently. ‘Decency’ of course is complex, and multi-faceted. But the single dimension I want to talk about here is this: They must learn to talk to us. In the face of the slew of either bugs or ‘features’ (because as you’ll see, it’s unclear in some cases whether Apple considers the change a problem at all), a decent company would at least acknowledge to the public the problems it identifies as problems, and indicate that they are working to fix it.”

Lessig’s articulate blog post, On the pathological way Apple deals with its customers (well worth your time), enumerates the long litany of iWork offenses.

[Screenshot: Strange paste behavior in Pages 5.0]

[About that seemingly errant screenshot, above...keep reading.]

Shortly thereafter, Apple issued a support document restating the reasons for the changes:

“…applications were rewritten from the ground up to be fully 64-bit and to support a unified file format between OS X and iOS 7 versions” 

and promising fixes and further improvements:

“We plan to reintroduce some of these features in the next few releases and will continue to add brand new features on an ongoing basis.”

Which led our Law Professor, who had complained about the “pathologically constipated way in which Apple communicates with its customers”, to write another (shorter) post and thank the company for having at last “found its voice”…

Unfortunately, Lessig’s list of bugs is woefully short of the sum of iWork’s offenses. For example, in the old Pages 4.0 days, when I clicked on a link I was transported to the intended destination. In Pages 5.0, instead of the expected jump, I get this…

[See above.]

Well, I tried…CMD-CTRL-Shift-4, frame the shot, place the cursor, CMD-V… Pages 5.0 insists on pasting it smack in the middle of a previous paragraph [again, see above].

Pages has changed its click-on-a-link behavior; I can get used to that, but…it won’t let me paste at the cursor? That’s pretty bad. Could there be more?

There’s more. I save my work, restart the machine, and the Save function in Pages 5.0 acts up:

[Screenshot: Pages 5.0 autosave bug]

What app has changed my file? Another enigma. I’m not sharing with anyone, just saving my work in my Dropbox, something that has never caused trouble before.

Another unacceptable surprise: Try sending a Pages 5.0 file to a Gmail account. I just checked, it still doesn’t work. Why wasn’t this known in advance – and why isn’t it fixed by now?

I have to stop. I’ll leave the comparison of the even more crippled iCloud version of iWork with the genuinely functional Web version of Office 365 for another day, and conclude.

First. Who knew, or should have known, about iWork’s bugs and undocumented idiosyncrasies? (I’ll add another: Beware the new autocorrect.)

Second. Why brag instead of calmly making a case for the long game and telling loyal customers about the dents they will inevitably discover?

Last and most important, what does this new fiasco say about Apple’s management culture? The new iPhones, iPads, and iOS 7 speak well of the company’s justly acclaimed attention to both strategy and implementation. Perhaps there were no cycles, no neurons, no love left for iWork. Perhaps a wise general puts the best troops on the most important battles. Then why not regroup, wait six months, and come up with an April 2014 announcement worthy of Apple’s best work?

JLG@mondaynote.com

———

This hasn’t been a good week using Apple products and services. I’ve had trouble loading my iTunes Music library on an iPad, run into Mail and other Mavericks glitches, struggled to move data and apps from one computer to another, and dealt with a phantom Genius Bar appointment in another city and a stubborn refusal to change my Apple ID. At every turn, Apple support people, in person, on the phone, or by email, were unfailingly courteous and helpful. I refrained from mentioning iWork to these nice people.

 

Intel Is Under New Management – And It Shows

 

Intel rode the PC wave with Microsoft and built a seemingly insurmountable lead in the field of “conventional” (PCs and laptops) microprocessors. But, after his predecessor missed the opportunity to supply the CPU chip for Apple’s iPhone, Intel’s new CEO must now find a way to gain relevance in the smartphone world.

In last May’s The Atlantic magazine, Intel’s then-CEO Paul Otellini confessed to a mistake of historic proportions. Apple had given Intel the chance to be part of the smartphone era, to supply the processor for the first iPhone… and Otellini said no [emphasis and light editing mine]:

“The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do… At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”
“…while we like to speak with data around here, so many times in my career I’ve ended up making decisions with my gut, and I should have followed my gut. [...] My gut told me to say yes.”

That Otellini found the inner calm to publicly admit his mistake — in an article that would be published on his last day as CEO, no less — is a testament to his character. More important, Otellini’s admission unburdened his successor, Brian Krzanich, freeing him to steer the company in a new direction.

And Krzanich is doing just that.

First: House cleaning. Back in March 2012, the Wall Street Journal heralded Intel as The New Cable Guy. The idea was to combine an Intel-powered box with content in order to serve up a quality experience not found elsewhere (read Apple, Netflix, Roku, Microsoft…). To head the project, which was eventually dubbed OnCue, Intel hired Erik Huggers, a senior industry executive and former head of BBC Online.

At the All Things D conference in February, Huggers announced that the TV service would be available later this year. The Intel TV chief revealed no details about how the service OnCue would differ from existing competitors, or how much the thing would cost…but he assured us that the content would be impressive (“We are working with the entire industry”), and the device’s capabilities would be comprehensive (“This is not a cherry-pick… this is literally everything”).

Intel seemed to be serious. We found out that more than 1,000 Intel employees in Oregon had been engaged in testing the product/service.

Then Krzanich stepped in, and applied a dose of reality:

“Intel continues to look at the business model…. we are not experts in the content industry and we’re being careful.” [AllThingsD: New Intel CEO Says Intel TV Sounds Great in Theory. But …]

Indeed, to those of us who have followed the uneasy dance between Apple and content providers since the first Apple TV shipped in 2007, the Intel project sounded bold, to say the least.

In late September, the project was put on hold and, last week, the news came that OnCue had been cancelled and allegedly offered to Verizon, whose V Cast media distribution feats come to mind…

Even before OnCue’s cancellation was made official, the well-traveled Erik Huggers appeared to show an interest in the Hulu CEO job. (If Mr Huggers happens to be reading this: I’d be more than happy to relieve you of the PowerPoints that you used to pitch the project to Intel’s top brass, not to mention the updates on the tortuous negotiations for content, and the reports from the user testing in Oregon. These slides must make for fascinating corpospeak logic.)

Krzanich quickly moved from doubt to certainty. He saw that OnCue would neither make money by itself, nor stimulate sales or margins for its main act, x86 processors. OnCue would never be an Apple TV “black puck”, a supporting character whose only mission is to make the main personal computers (small, medium and large; smartphones, tablets and conventional PCs) more useful and pleasant.

So he put an end to the impossible-to-justify adventure.

That was easy.

Tackling Intel’s failure to gain a significant role in the (no longer) new world of smartphones is a much more complicated matter.

With its x86 processors, Intel worked itself into a more-than-comfortable position as part of the Wintel ecosystem. The dominant position achieved by the Microsoft-Intel duopoly over two decades yielded correspondingly high margins for both.

But smartphones changed the game. ARM processors proved themselves better than x86 at the two tasks that are integral to personal, portable devices: lowering power consumption and customization. The ARM architecture didn’t have to wait for the iPhone and Android handsets to dominate the cell phone business. Just as Windows licensing spawned a large number of PC makers, ARM licensing contributed to the creation of a wide range of processor design and manufacturing companies. The ARM site claims 80 licensees for its newer Cortex family and more than 500 for its older Classic Arm processors. No monopoly means lower margins.

Intel saw the unattractive margins offered by ARM processors and didn’t want to commit the billions of dollars required by a fab (a chip manufacturing plant) for a product that would yield profits that were well below Wall Street expectations.

The prospect of bargain basement margins undoubtedly figured in Otellini’s decision to say no to the iPhone. In 2006, no one could have predicted that it could have been made up in volume, that there would be a billion smartphone sales in 2014. (I’m basing the 1B number for the entire industry on Horace Dediu’s estimate of 250 million iOS devices for 2014.)

Even if the Santa Clara company had had the foresight to accept lower margins in order to ensure their future in the smartphone market, there would still have been the problem of customization.

Intel knows how to design and manufacture processors that are used “as is” by PC makers. No customization, no problems.

This isn’t how the ARM world works. Licensees design processors that are customized for their specific device, and they send the design to a manufacturer. Were Intel to enter this world, they would no longer design processors, just manufacture them, an activity with less potential for profit.

This explains why Intel, which had an ARM license and made XScale processors, sold the business to Marvell in 2006 – a fateful date when looking back on the Apple discussions.

But is Intel’s new CEO rethinking the “x86 and only x86” strategy? Last week, a specialty semiconductor company called Altera announced that Intel would fabricate some of its chips containing a 64-bit ARM processor. The company’s business consists of offering faster development times through “programmable logic” circuits. Instead of a “hard circuit” to be designed, manufactured, tested, debugged, modified and sent back to the manufacturing plant in lengthy and costly cycles, you buy a “soft circuit” from Altera and similar companies (Xilinx comes to mind). This more expensive device can be reprogrammed on the spot to assume a different function, or to correct the logic in the previous iteration. Pay more and get functioning hardware sooner, without slow and costly turns through a manufacturing process.

With this in mind, what Intel will someday manufacture for Altera isn’t the 64-bit ARM processor that excited some observers: “Intel Makes 14nm ARM for Altera“. The Stratix 10 circuits Altera contracts to Intel manufacturing are complicated and expensive ($500 and up) FPGA (Field Programmable Gate Array) devices where the embedded ARM processor plays a supporting, not central, role. This isn’t the $20-or-less price level arena in which Intel has so far declined to compete.

Manufacturing chips for Altera might simply be work-for-hire, a quick buck for Intel, but I doubt it. Altera’s yearly revenue is just shy of $2B; Intel is a $50B company. The newly announced device, just one in Altera’s product lines, will not “move the needle” for Intel — not in 2014 (the ship date isn’t specified), or ever.

Instead, I take this as a signal, a rehearsal.  250M ARM SoCs at $20 each would yield $5B in revenue, 10% of Intel’s current total…
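
For the curious, that back-of-envelope arithmetic can be restated in a few lines of Python. The figures below are simply the ones assumed in this paragraph and the one above ($20 per SoC, 250 million units, roughly $50B in Intel revenue); this is a rough sanity check, not a forecast:

# Rough, illustrative arithmetic only: the $20 unit price, 250M yearly
# volume, and ~$50B Intel revenue are the assumptions used in the text,
# not official figures.
arm_soc_price = 20                # dollars per SoC (assumed)
annual_volume = 250_000_000       # units per year (assumed)
intel_revenue = 50_000_000_000    # dollars, approximate annual total

hypothetical_arm_revenue = arm_soc_price * annual_volume
share_of_intel = hypothetical_arm_revenue / intel_revenue

print(f"${hypothetical_arm_revenue / 1e9:.0f}B in revenue, "
      f"{share_of_intel:.0%} of Intel's current total")
# -> $5B in revenue, 10% of Intel's current total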

This might be what Krzanich had in mind when he inked the “small” manufacturing agreement with Altera; perhaps he was weighing the smaller margins of ARM processors against the risk of slowing PC sales.

His predecessor having graciously freed him from the past, it’s hard to see how Intel’s new CEO won’t take the plunge and use the company’s superb manufacturing technology to finally make ARM processors.

JLG@mondaynote.com

 

Security Shouldn’t Trump Privacy – But I’m Afraid It Will

 

The NSA and security agencies from other countries are shooting for total surveillance, for complete protection against terrorism and other crimes. This creates the potential for too much knowledge falling, one day, into the wrong hands.

An NSA contractor, Edward Snowden, takes it upon himself to gather a mountain of secret internal documents that describe our surveillance methods and targets, and shares them with journalist Glenn Greenwald. Since May of this year, Greenwald has provided us with a trickle of Snowden’s revelations… and our elected officials, both here and abroad, treat us to their indignation.

What have we learned? We Spy On Everyone.

We spy on enemies known or suspected. We spy on friends, love interests, heads of state, and ourselves. We spy in a dizzying number of ways, both ingenious and disingenuous.

(Before I continue, a word on the word “we”. I don’t believe it’s honest or emotionally healthy to say “The government spies”. Perhaps we should have been paying more attention, or maybe we should have prodded our solons to do the jobs we elected them for… but let’s not distance ourselves from our national culpability.)

You can read Greenwald’s truly epoch-making series On Security and Liberty in The Guardian and pick your own approbations or invectives. You may experience an uneasy sense of wonder when contemplating the depth and breadth of our methods, from cryptographic and social engineering exploits (doubly the right word), to scooping up metadata and address books and using them to construct a security-oriented social graph.

We manipulate technology and take advantage of human foibles; we twist the law and sometimes break it, aided by a secret court without opposing counsel; we outsource our spying by asking our friends to suck petabytes of data from submarine fiber cables, data that’s immediately combed for keywords and then stored in case we need to “walk back the cat”.

[Image: NSA and Merkel’s phone – Sunday’s home page of the German site Die Welt]

The reason for this panopticon is simple: Terrorists, drugs, and “dirty” money can slip through the tiniest crack in the wall. We can’t let a single communication evade us. We need to know everything. No job too small, no surveillance too broad.

As history shows, absolute anything leads to terrible consequences. In a New York Review of Books article, James Bamford, the author of noted books on the NSA, quotes Senator Frank Church who, way back in 1975, was already worried about the dangers of absolute surveillance [emphasis mine]:

“That capability at any time could be turned around on the American people and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know. Such is the capability of this technology…. I don’t want to see this country ever go across the bridge. I know the capacity that is there to make tyranny total in America, and we must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is the abyss from which there is no return.”

From everything we’ve learned in recent months, we’ve fallen into the abyss.

We’ve given absolute knowledge to a group of people who want to keep the knowledge to themselves, who seem to think they know best for reasons they can’t (or simply won’t) divulge, and who have deemed themselves above the law. General Keith Alexander, the head of the NSA, contends that “the courts and the policy-makers” should stop the media from exposing our spying activities. (As Mr. Greenwald witheringly observes in the linked-to article, “Maybe [someone] can tell The General about this thing called ‘the first amendment’.”)

Is the situation hopeless? Are we left with nothing but to pray that we don’t elect bad guys who would use surveillance tools to hurt us?

I’m afraid so.

Some believe that technology will solve the problem, that we’ll find ways to hide our communications. We have the solution today! they say: We already have unbreakable cryptography, even without having to wait for quantum improvements. We can hide behind mathematical asymmetry: Computers can easily multiply very large numbers to create a key that encodes a message, but it’s astronomically difficult to reverse the operation.
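
To make that asymmetry concrete, here is a minimal Python sketch, with toy key sizes and a deliberately naive “attack”; it illustrates the principle behind public-key cryptography, not how real cryptosystems or real attacks are implemented:

import random

def toy_prime(bits):
    # Naive prime generation: a random odd candidate with the top bit set,
    # screened by small-divisor checks and a single Fermat test.
    # Good enough for an illustration, not for real cryptography.
    while True:
        n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if all(n % d for d in range(3, 1000, 2)) and pow(2, n - 1, n) == 1:
            return n

p, q = toy_prime(256), toy_prime(256)
n = p * q   # The easy direction: one 512-bit multiplication, done in microseconds.

def naive_factor(n):
    # The hard direction, done by trial division: for the 512-bit n above
    # there are on the order of 2**255 candidate divisors, so this will
    # never finish. Real attacks are far smarter, but the best known
    # algorithms still take impossibly long for properly sized keys.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d
        d += 2
    return n

# naive_factor(n)  # Don't run this on n; it would not return in our lifetimes.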

Is it because of this astronomic difficulty — but not impossibility — that the NSA is “the largest employer of mathematicians in the country“? And is this why “civilian” mathematicians worry about the ethics of those who are working for the Puzzle Palace?

It might not matter. In a total surveillance society, privacy protection via unbreakable cryptography won’t save you from scrutiny or accusations of suspicious secrecy. Your unreadable communication will be detected. In the name of State Security, the authorities will knock on your door and demand the key.

Even the absence of communication is suspect. Such mutism could be a symptom of covert activities. (Remember that Bin Laden’s compound in Abbottabad was thoroughly unwired: No phones, no internet connection.)

My view is that we need to take another look at what we’re pursuing. Pining for absolute security is delusional, and we know it. We risk our lives every time we step into our cars — or even just walk down the street — but we insist on the freedom to move around. We’re willing to accept a slight infringement on our liberties as we obey the rules of the road, and we trust others will do the same. We’re not troubled by the probability of ending up mangled while driving to work, but the numbers aren’t unknown (and we’re more than happy to let insurance companies make enormous profits by calculating the odds).

Regarding surveillance, we could search for a similar risk/reward balance. We could determine the “amount of terror” we’re willing to accept and then happily surrender just enough of our privacy to ensure our safety. We could accept a well-defined level of surveillance if we thought it were for a good cause (as in keeping us alive).

Unfortunately, this pleasant-sounding theory doesn’t translate into actual numbers, on either side of the equation. We have actuarial tables for health and automotive matters, but none for terrorism; we have no way of evaluating the odds of, say, a repeat of the 9/11 terrorist attack. And how do you dole out measures of privacy? Even if we could calculate the risk and guarantee a minimum of privacy, imagine that you’re the elected official who has to deliver the message:

In return for guaranteed private communication with members of your immediate family (only), we’ll accept an X% risk of a terrorist attack resulting in Y deaths and Z wounded in the next T months.

In the absence of reliable numbers and courageous government executives, we’re left with an all-or-nothing fortress mentality.

Watching the surveillance exposition unfold, I’m reminded of authoritarian regimes that have come and gone (and, in some cases, come back). I can’t help but think that we’ll coat ourselves in the lubricant of social intercourse: hypocrisy. We’ll think one thing, say another, and pretend to ignore that we’re caught in a bad bargain.

JLG@mondaynote.com

 

iPhone 5S surprises

 

“I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions.”

I wrote those words a little over a month ago. I’ve now played customer for the requisite two weeks — I got an iPhone 5S on October 3rd — and I’m prepared to report.

But first, some context.

iPhone launches always generate controversy; there’s always something to complain about: Antennagate for the iPhone 4, the Siri beta for the 4S, the deserved Maps embarrassment last year – with a clean, dignified Tim Cook apology.

(Whether these fracas translate into lost revenue is another matter).

As I sat in the audience during the introduction of the original iPhone, back in January, 2007, I thought the demo was too good, that Steve was (again) having his way with facts. I feared that when the product shipped a few months later, the undistorted reality would break the spell.

We know now that the iPhone that Steve presented on the stage was unfinished, that he trod a careful path through a demo minefield. But the JesusPhone that Apple shipped — unfinished in many ways (no native apps, no cut-and-paste) — was more than a success: It heralded the Smartphone 2.0 era.

[Image: iPhone 5S]

This year, Tim Cook introduced the riskiest hardware/software combination since the original iPhone. The iPhone 5S wants to be more than just “new and improved”, it attempts to jump off the slope with its combination of two discontinuities: a 64-bit processor and a new 64-bit iOS. Will it work, or will it embarrass itself in a noisome backfire?

First surprise: It works.

Let me explain. I have what attorneys call “personal knowledge” of sausage factories: I’ve been accountable for a couple and a fiduciary for several others. I have first-hand experience with the sights, the aromas, the tumult of the factory floor, so I can’t help but wince when I approach a really new product; I worry in sympathy with its progenitors. The 5S isn’t without its “aromas” (we’ll get to those later), but the phone is sleek and attractive, the house apps are (mostly) solid, and the many new Application Programming Interfaces (APIs) promise novel applications. Contrary to some opinions, there are fewer warts than anyone could have expected.

Surprise #2, the UI: I had read the scathing critiques of the spartan excesses, and, indeed, I feel the drive for simplicity occasionally goes too far. The buttons on the built-in timer are too thin, too subdued. When I meditate in the dark I can’t distinguish Start from Cancel without my glasses. But I’m generally happy with the simpler look. Windows and views get out of the way quickly and gracefully, text is neatly rendered, the removal of skeuomorphic artifacts is a relief.

The next surprise is the fingerprint sensor a.k.a. Touch ID. Having seen how attempts to incorporate fingerprint recognition into smartphones and laptops have gone nowhere, I had my doubts. Moreover, Apple had acquired AuthenTec, the company that created the fingerprint sensor, a mere 15 months ago. Who could believe that Apple would be able to produce a fingerprint-protected iPhone so quickly?

But it works. It’s not perfect, I sometimes have to try again, or use another finger (I registered three on my right hand and two on my left), but it’s clear that Apple has managed to push Touch ID into the category of “consumer-grade technology”: It works often enough and delivers enough benefit to offset the (small) change in behavior.

A personal favorite surprise is Motion Sensing.

When Apple’s Marketing Supremo Phil Schiller described the M7 motion processor, I didn’t think much of it; I was serving the last days of my two-month sentence wearing the Jawbone UP bracelet mentioned in a previous Monday Note. (A friend suggested I affix it to his dog’s collar to see what the data would look like.)

Furthermore, the whole “lifestyle monitoring” business didn’t seem like virgin territory. The Google/Motorola Moto X smartphone introduced last August uses a co-processor that, among other things, monitors your activities, stays awake even when the main processor is asleep, and adjusts the phone accordingly. A similar co-processing arrangement is present in Moto X’s predecessors, the Droid Maxx, Ultra and Mini.

But then, about a week after I had activated my iPhone 5S, I saw a Twitter exchange about Motion Sensing apps. One thumb touch later, the free Pedometer++ app asked for my permission to use motion data (granted) and immediately told me how many steps I’d taken over the past seven days.

I went to the chauffeured iPhone on my wife’s desk and installed the app. I did the same on friends’ devices. The conclusion was obvious: The M7 processor continuously generates and stores motion data independent of any application. A bit of googling shows that there are quite a few applications that use the motion data that’s obligingly collected by the M7 processor; I downloaded a number of these apps and the step counts are consistent.

(Best in class is the ambitious MotionX 24/7. Philippe Kahn’s company FullPower Technologies licenses MotionX hardware and software to many motion-sensing providers, including Jawbone and, perhaps, Apple. Wearable technologies aren’t just for our wrists…we carry them in our pockets.)

My wife asked if her iPhone would count steps from within her handbag. Ever the obliging husband, I immediately attended to this legitimate query, grabbed her handbag, and stepped out of the house for an experimental stroll. A conservatively dressed couple walked by, gave me a strange look, and didn’t respond to my evening greeting, but, indeed, the steps were counted.

A question arises: Does Apple silently log my movements? No, my iPhone records my locomotion, but the data stays within the device — unless, of course, I let a specific application export them. One must be aware of the permissions.

Other 5S improvements are welcome but not terribly surprising. The camera has been smartly enhanced in several dimensions; search finally works in Mail; and, to please Sen. McCain, apps update themselves automatically.

All of this comes with factory-fresh bugs, of course, a whiff of the sausage-making apparatus. iPhoto crashed on launch the first three or four times I tried it, but has worked without complaint since then.  A black Apple logo on a white background appeared and then quickly disappeared — too brief to be a full reboot, too sparse to be part of an app.

I’ve had to reboot the 5S to recover a dropped cellular connection, and have experienced hard-to-repeat, sporadic WiFi trouble that seems to spontaneously cure itself. (“How did you fix it?” asks my wife when her tech chauffeur gets the sullen device to work again. “I don’t know, I poke the patient everywhere until it responds.”)

From my admittedly geeky perspective, I’m not repelled by these glitches, they didn’t lose my data or prevent me from finishing a task. They’re annoying, but they’re to be expected given the major hardware and software changes. And I expect that the marketplace (as opposed to the kommentariat) will shrug them off and await the bug fixes that will take care of business.

So, yes, overall, the “discontinuous” 5S works.

[I'm also using a pre-release of Mavericks, the upcoming 10.9 version of OS X, on two Macs. There, I wonder if I'm not seeing the opposite of the iPhone 5S: less risk, more bugs. I hope things straighten out for the public release. I'll report if and when warranted.] [I can't resist: The Washington Post's Wonkblog calls the iPhone's third color... Dignified Gold. I wonder: Is it a compliment to Sir Jony's unerring taste? Or a clever, indirect ethnic slur?]

JLG@mondaynote.com

Microsoft Mission Impossible

 

You’re Microsoft’s new CEO. How do you like staring at the abyss between two mutually exclusive ways of making money? The old business model, Windows and Office licensing, is going away. The Devices and Services future puts you in direct competition against the likes of Google and Apple as well as former licensing vassals such as HP and Dell. Can you take the company to the other side, or will you fall to the bottom of the business model transition canyon?

Life used to be simple and immensely profitable at Microsoft. As its name implies, the company started as a supplier of microcomputer software. Simplifying a bit, it all started with the BASIC interpreter, which found its way into many early personal computers including the Apple ][. After that came DOS, the operating system for IBM’s Personal Computer; and Multiplan, an early foray into desktop productivity. DOS begat Windows, and Multiplan was succeeded in steps by the full Office suite. Through a series of astute business and lawyerly maneuvers, the Windows + Office combo eventually spread to virtually all PC clones.

This made Microsoft the most successful software company the world had ever seen, and its founding CEO, Bill Gates, became the richest man on the planet. In 2000, the company’s market capitalization reached $540B (approximately $800B in today’s dollars). As this Wikinvest graph shows, Microsoft dwarfed all other tech companies:

[Chart: Wikinvest market capitalization comparison, circa 2000]

(At the time, the NASDAQ index of mostly tech stocks stood a little above 4,000; it closed at 3,792 this past Friday.)

Back then, Windows + Office licensing was the only money pump that really mattered. Everything else — all other software products and even sales of enterprise servers — either depended on Microsoft’s huge PC installed base, or didn’t move the needle. Hardware and entertainment lines of business were largely immaterial; online activities weren’t yet the money sink we’ve seen in recent years.

According to the company’s 2000 Annual Report, the combination of the “Windows Platforms” and “Productivity Applications” accounted for $19.3B in revenue ($9.3B and $10B, respectively). That’s 84% of the company’s $23B total revenue and, even more important, 98% of Microsoft’s Operating Income!

Moving to Q1 2013, the market capitalization picture has drastically changed:

[Chart: Market capitalization comparison, Q1 2013]

Google is in many ways becoming Microsoft 2.0, Oracle has grown nicely, and Apple is now on top.

What happened?

Mobile personal computing happened. Smartphones and tablets are displacing conventional PCs, desktops, and laptops.

To put it even more succinctly: the iPhone did it.

When Steve Jobs stepped onto the stage at MacWorld in January, 2007, there were plenty of smartphones on the market. Windows Mobile, Palm Treo, Nokia, Blackberry… But Apple’s iPhone was different. It really was a personal computer with a modern operating system. While the iPhone didn’t initially support third party apps, a Software Development Kit (SDK) and App Store were soon introduced.

Android quickly followed suit, the Smartphone 2.0 race was on, and the incumbents were left to suffer grievous losses.

Riding on the iPhone’s success and infrastructure, the iPad was introduced, with Android-powered tablets not far behind. These new, mobile personal computers caused customers to Think Different, to re-examine their allegiance to the one-and-only PC.

As these products flooded the market, Microsoft went through its own version of the Stages of Grief, from denial to ultimate acceptance.

First: It’s Nothing. See Steve Ballmer memorably scoffing at the iPhone in 2007. Recall ODM Director Eddie Wu’s 2008 prediction that Windows Mobile would enjoy 40% market share by 2012.

Second: There is no Post-PC…”Plus is the new ‘Post’“. Smartphones and tablets are mere companion devices that will complement our evergreen PCs. The party line was eloquently asserted two years ago by Frank Shaw, Microsoft’s VP of Communications:

“So while it’s fun for the digerati to pronounce things dead, and declare we’re post-PC, we think it’s far more accurate to say that the 30-year-old PC isn’t even middle aged yet, and about to take up snowboarding.”

Next comes Bargaining: Microsoft makes a tablet, but with all the attributes of a PC. Actually, they make two Surface devices, one using an ARM processor, the other a conventional Intel CPU.

Today comes Acceptance: We’re indeed in a Post-PC era. PCs aren’t going to disappear any time soon, but the 30-year epoch of year after year double digit growth is over. We’re now a Devices and Services company!

It’s a crisp motto with a built-in logic: Devices create demand for Microsoft services that, in turn, will fuel the market’s appetite for devices. It’s a great circular synergy.

But behind the slick corpospeak lurks a problem that might seriously maim the company: Microsoft wants to continue to license software to hardware makers while it builds a Devices business that competes with these same licensees. They want it both ways.

Real business model transitions are dangerous. By real transition I don’t mean adding a new line of peripherals or accessories, I mean moving to a new way of making money that negatively impacts the old one. The old money flow might dry up before the new one is able to replace it, causing an earnings trough.

For publicly traded companies, this drought is unacceptable. Rather than attempt the transition and face the ire of Wall Street traders, some companies slowly sink into irrelevance. Others take themselves private to allow the blood-letting to take place out of public view. When the curtain lifts some months later, a smaller, healthier outfit is relaunched on the stock market. Dell is a good example of this: Michael Dell gathered investors, himself included, to buy the company back and adapt its business model to a Post-PC world behind closed doors.

Microsoft can’t abandon its current model entirely: It can’t stop selling software licenses to hardware makers. But the company realizes that it also has to get serious about making its own hardware if it wants to stay in the tablets and smartphone race.

The key reason for Microsoft’s dilemma is Android. Android is inexpensive enough (if not exactly free) that it could kill Redmond’s mobile licensing business. (Microsoft might get a little bit of money from makers of Android-powered hardware thanks to its patent portfolio, but that doesn’t change the game.) This is why Microsoft offered “platform support payments” to Nokia, which essentially made Windows Phone free. And now we have the belated, under-duress acquisition of Nokia’s smartphone business, complete with 32,000 angry Finns.

(Microsoft is rumored to have approached HTC with an offer to dual-boot Windows Phone on HTC’s Android handsets. It’s not a very believable rumor — two competing operating systems on the same smartphone? But it has a satisfying irony: In an earlier incarnation I saw Microsoft play legal hardball against anyone who tried to sell PCs with both Windows and another OS installed at the factory…)

Another example of trying to keep one foot on each side of the abyss is the Surface tablet. Microsoft tried to create a hybrid “best-of-both-worlds” PC/tablet, complete with two different UIs. I bought one and found what many experienced: It doesn’t have the simplicity and agility of a genuine tablet, nor does it offer the classic workflow found on Windows 7. We’ll have to see how helpful the upcoming Windows 8.1 is in that regard.

So… What about our new CEO?

  • S/he finds a company that’s in the middle of a complicated structural and cultural reorganization.
  • The legacy PC business is slowing down, cannibalized by mobile personal computers.
  • Old OEM partners aren’t pleased with the company’s new direction(1). They have to be kept inside the tent while the Surface tablets experiment plays out. Success will let Microsoft discard Legacy PC makers. Failure will lead Redmond to warmly re-embrace its old vassals.
  • The Windows Phone licensing business lost its clients as a result of the Nokia acquisition.
  • Integrating Nokia will be difficult, if not a slow-moving disaster.
  • The Windows Phone OS needs work, including a tablet version that has to compete with straight tablets from Android licensees and from Apple.
  • Employees have to be kept on board.
  • So do shareholders.

How would you like the job?

JLG@mondaynote.com

(1) HP’s Meg Whitman now sees Microsoft as a competitor — and introduces a Google-powered Chromebook. What we think this will do for HP’s Personal Systems Group revenue and profit is best left unsaid.

Apple Under Siege

 

Two years after Steve Jobs left us, Apple now wears Tim Cook’s imprint and, for all the doubt and perpetual doomsaying, seems to wear it well. One even comes to wonder if the Cassandras aren’t in fact doing Apple a vital favor.

Last Friday, Tim Cook issued a somber remembrance to Apple employees:

Team-
Tomorrow marks the second anniversary of Steve’s death. I hope everyone will reflect on what he meant to all of us and to the world. Steve was an amazing human being and left the world a better place. I think of him often and find enormous strength in memories of his friendship, vision and leadership. He left behind a company that only he could have built and his spirit will forever be the foundation of Apple. We will continue to honor his memory by dedicating ourselves to the work he loved so much. There is no higher tribute to his memory. I know that he would be proud of all of you.
Best,
Tim

I am one of the many who are in Steve’s debt and I miss him greatly. I consider him the greatest creator and editor of products this industry has ever known, and am awed by how he managed the most successful transformation of a company — and of himself — we’ve ever seen. I watched his career from its very beginning, I was fortunate to have worked with him, and I thoroughly enjoyed agreeing and disagreeing with him.

I tried to convey this in an October 9th, 2011 Monday Note titled Too Soon. I just re-read it and hope you’ll take the time to do the same. You’ll read words of dissent by Richard Stallman and Hamilton Nolan, but you’ll mostly find praise by Barack Obama, Jon Stewart, Nicholson Baker in the New Yorker, and this elegant image by Jonathan Mak:

[Image: Jonathan Mak’s Steve Jobs silhouette tribute]

Two years later, we can look at Apple under Tim Cook’s leadership. These haven’t been an easy twenty-four months: Company shares have gone on a wild ride, execs have been shown the door, there was the Maps embarrassment and apology, and there has been a product drought for most of the last fiscal year (ending in September).

All of this has provided fodder for the Fox Newsstands of the Web, for netwalkers seeking pageviews. The main theme is simple and prurient, hence its power: Without Steve, Apple is on the decline. The variations range from the lack of innovation — Where’s the Apple TV?, the iWatch?, the next Big Thing? — to The Decline of The Brand, Android Is Winning, and Everything Will Be Commoditized.

Scan Philip Elmer-DeWitt’s Apple 2.0 or John Gruber’s Daring Fireball and treat yourself to intelligent repudiations of this incessant “claim chowder”, these discredited pontifications. I’ll extract a few morsels from my own Evernote stash:

Apple’s press conference showed a brand unraveling, or so said VentureBeat in March, 2012. Eighteen months later, Apple passed Coca-Cola to become the world’s most valuable brand.

How Tim Cook can save himself (and Apple), subtitled, for good measure: What the confused Apple CEO can do to avoid getting canned and having to slink away with nothing but his $378 million compensation package as comfort. Penned by a communications consultant who “teaches public relations at NYU”, the article features an unexpected gem: Cook should buy a blazer. You know, “to break the deleterious chokehold of the Steve Jobs’ [sic] legacy”.

Apple: The Beginning of a Long Decline? (note the hedging question mark.) This LinkedIn piece, which questions the value of the fingerprint sensor, ends with a lament:

There was no sign of a watch. So those of us in Silicon Valley are left watching, wondering, and feeling a little empty inside… Jobs is gone. It looks like Apple’s magic is slowly seeping away now too.

Shortly thereafter, Samsung’s iWatch killer came out…and got panned by most reviewers.

Last: Apple’s Board of Directors is concerned about Apple’s pace of innovation, says Fox Business News’ Charlie Gasparino, who claims to have “reliable sources”.

Considering how secretive the company is, can anyone imagine a member of Apple’s Board blabbing to a Fox Business News irrespondent?

Despite the braying of the visionary sheep, Tim Cook never lost his preternatural calm; he never took the kommentariat’s bait. Nor have his customers: They keep buying, enjoying, and recommending Apple’s products. And they do so in such numbers — 9 million new iPhones sold in the launch weekend — that Apple had to file a Form 8-K with the Securities and Exchange Commission (SEC) to “warn” shareholders that revenue and profits would exceed the guidance they had provided just two months ago when management reviewed the results of the previous quarter.

In Daniel Eran Dilger’s words, Data bites dogma: Apple’s iOS ate up Android, Blackberry U.S. market share losses this summer:

Apple’s increase accounted for 1.5 of the 1.6 percentage points that Android and Blackberry collectively lost. This occurred a full month before the launch of Apple’s iPhone 5s and 5c and the deployment of iOS 7.

Regarding the “Apple no longer innovates” myth, Jay Yarow tells us why Apple Can’t Just ‘Innovate’ A New Product Every Other Year. His explanation draws from a substantial New York Times Magazine article in which Fred Vogelstein describes the convergence of company-wide risk-taking and engineering feats that resulted in the iPhone:

It’s hard to overstate the gamble Jobs took when he decided to unveil the iPhone back in January 2007. Not only was he introducing a new kind of phone — something Apple had never made before — he was doing so with a prototype that barely worked. Even though the iPhone wouldn’t go on sale for another six months, he wanted the world to want one right then. In truth, the list of things that still needed to be done was enormous. 

It’s a great read. But even Vogelstein can’t resist the temptation of inserting a word of caution: “And yet Apple today is under siege…” 

This is something I heard 33 years ago when I signed up to start Apple France in 1980, and I’ve heard it constantly since then. I’ll again quote Horace Dediu, who best summarizes the concern:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

I recently experienced a small epiphany: I think the never-ending worry about Apple’s future is a good thing for the company. Look at what happened to those who were on top and became comfortable with their place under the sun: Palm, Blackberry, Nokia…

In ancient Rome, victorious generals marched in triumph to the Capitol. Lest the occasion go to the army commander’s head, a slave would march behind the victor, murmuring in his ear, memento mori, “remember you’re mortal”.

With that in mind, one can almost appreciate the doomsayers — well, some of them. They might very well save Apple from becoming inebriated with its own prestige and, instead, force the company to remember, two years later and counting, how it earned that prestige.

JLG@mondaynote.com

 

Microsoft Directors Have Much Explaining To Do

 

Blaming Steve Ballmer for Microsoft’s string of mistakes won’t do. Why did the Board of Directors keep him on the job for thirteen years, only to let him “retire” in the midst of several dangerous transitions — without naming a successor? What does this say about the Board’s qualifications to pick Microsoft’s next CEO?

For more than a decade, a team of physicians has been ministering to a patient who was once vital and robust, but now no longer thrives. Recurring diagnostic errors, stubborn inattention to symptoms, improper prescriptions haven’t yet killed the object of their care but, lately, the patient’s declining health has become so obvious that the doctors, now embarrassed and desperate, have scheduled a heart transplant.

Now comes the test: Would you entrust the patient’s future to such a confederacy of dunces?

With this metaphor in mind, let’s contemplate the record of Microsoft Directors since Steve Ballmer assumed the mantle 13 years ago, and ask if they’re qualified to appoint a successor.

Consider the Directors’ obdurate passivity while they watched the company miss opportunities, take one wrong turn after another, and fail to execute crucial transitions. Search was conceded to Google; digital music (players and distribution) is dominated by Apple; social networking belongs to Facebook, Twitter, and LinkedIn; the smartphone market was handed over to Google’s Android and Apple’s iPhone; tablets from the same duo are now bleeding the Windows + Office Golden Goose; Windows Vista and now Windows 8; Surface tablets… Even the once mighty Internet Explorer browser has been displaced by Google’s Chrome running on all desktop and mobile platforms.

Blaming (and forgiving) the CEO for one or two mistakes is reasonable. But if these missteps were entirely Ballmer’s fault, why did the Directors keep him at the helm? This raises the question: How much of the company’s value did the Directors themselves let Google, Apple, and others run away with? Is Microsoft’s Board a danger to the company?

The latter question comes into sharper relief when looking at the timing and manner of Ballmer’s exit.

[Photo: Steve Ballmer]

On July 11th, Ballmer announces a major company reorganization. More than just the usual medley of beheadings and redistribution of spoils, Microsoft was to restructure itself away from its old divisional arrangement and move towards the type of functional organization used by companies such as Apple. In addition, the new company motto became Devices and Services, evoking a virtuous circle: Best-of-class branded devices would sell more great Microsoft services, while the latter would give a boost to Microsoft devices.

A week later, on July 18th, Microsoft releases pedestrian quarterly numbers, the lowlight of which is a $900M write-off attributed to very poor sales of Surface PC/tablets.

On August 23rd, Ballmer announces his sooner-than-planned retirement — sometime in the following 12 months. No word of a successor.

And, to top everything off, on September 3rd, with Ballmer on his way out, the Board approves the emergency acquisition of Nokia’s handset business, complete with 32,000 angry Finns. (We’ll discuss their misdirected anger in a future Monday Note.)

A drastic company reorganization makes sense. Instead of one more turn of the optimizing crank, Microsoft acknowledges that it needs to Think Different.

Writing off unsold inventory is the sensible recognition of a problem; it removes an impediment by facilitating a fire sale.

There was a clear and present danger for Nokia’s handset business to fail, or to become the walking dead. Microsoft bought it to avoid the possible collapse of the Windows Phone platform. In theory (i.e., ignoring cultural realities), the acquisition gives Microsoft more control over its smartphone future.

All rational moves.

But letting Ballmer go right in the middle of two huge and complicated transitions — and without immediately appointing a successor? On its face, the timing and manner of Ballmer’s exit defies common business sense. It also raises questions about the Board’s failure to adequately plan for Ballmer’s succession. Supposedly, Succession Planning is a key component of good Corporate Governance. In plain language, a Board of Directors is obligated to identify and groom successors for key positions, starting with the CEO.

Which raises a few more questions.

Microsoft undertakes two risky, company-redefining moves: a profound structural and strategic reorganization, followed by its most foreign, most people-intensive acquisition ever. What was the overwhelming need to announce Ballmer’s departure – without naming a successor – right in the middle of such instability?

Considering its résumé, what makes Microsoft’s Board qualified to pick a new CEO?

And what are the parameters of the search for Mr. Right? Assuming Microsoft hires an industry heavyweight, will this individual be given the space and power to be her or his own woman or man, that is, to reshuffle the Board? And what about the freedom from deferring to the company’s Founder?

And what must the mood be like at Microsoft? “When you receive an order, do absolutely nothing and wait for the countermanding directive.” This ancient Army saying must now be popular in Redmond. It’s not that people working there don’t care, but they just don’t know what the next CEO will want, and they certainly don’t know when. How can one not expect damaging paralysis and politicking when the CEO is let go without a successor?

All interesting questions.

JLG@mondaynote.com

————————-

[I'll leave alone rumors such as Ford's CEO Alan Mulally replacing Ballmer. Notwithstanding the obligatory congratulations, there would be much giggling in Mountain View and Cupertino. Competent management is a necessary but not sufficient condition...see Ballmer.]

64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months

 

Apple’s A7 processor, the new iOS 7 and “house” apps are all industry firsts: genuine, shipping 64-bit mobile hardware and software. As we’ve seen before with the iPhone or the iPad, this new volley of Apple products is first met with the customary bursts of premature evaluations and counterfactual dismissals.

On September 10th, Apple revealed that the new iPhone 5S would be powered by its new 64-bit A7 processor. The initial reactions were less than enthusiastic. We were treated to exhumations of lazy bromides…

“I don’t drink Kool-Aid. Never liked the stuff and I think we owe it to ourselves to collectively question whether or not Apple’s ‘reality distortion field’ is in effect when we consider how revolutionary the iPhone 5S is and if Apple’s 64-bit A7 processor under its shiny casing will be all its [sic] cracked up to be when the device hits the market in volume.” [Forbes]

…and equally lazy “markitecture” accusations…

“With current mobile devices and mobile apps, there really is no advantage [to 64 bits] other than marketing — the ability to say you’re the first to have it…” [InfoWorld]

…and breezy brush-offs, such as this tweet from an industry expert:

“We’ll see just how good Apple’s marketing team is trying to leverage 64-bit. 64-bit add more memory and maybe registers. Period.” [Twitter]

Rather than wonder what these commenters were drinking, let’s turn to AnandTech, widely regarded as one of the best online hardware magazines.

Founded by Anand Lal Shimpi when he was all of 14 years old, AnandTech is known for its exhaustive (and sometimes exhausting) product reviews. The 14-section September 17th iPhone 5S review doesn't disappoint. Among other things, it provides detailed iPhone 5S vs. iPhone 5 performance comparisons such as this:

[Chart: GeekBench comparison, iPhone 5S vs. iPhone 5 (AnandTech)]

There are many other charts, comparisons, and considerations of the new 64-bit ARMv8 instruction set, the move from 16 to 32 floating-point NEON 128-bit registers, the hardware acceleration of cryptography operations… It’s a very long read, but not a boring one (at least not for interested geeks).
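
To make the shift concrete, here is a minimal C sketch of my own (an illustration, not anything from AnandTech or Apple): compile the same file for 32-bit ARM (armv7) and for the A7's 64-bit ARM (arm64/AArch64), and the data model changes under your feet — int stays at 32 bits while long and pointers widen to 64, with the code now running on the wider ARMv8 register file.

    /* Minimal illustration (not from the article): how the armv7 -> arm64
     * transition looks from C. Compile for both architectures and compare. */
    #include <stdio.h>

    int main(void) {
    #if defined(__arm64__) || defined(__aarch64__)
        puts("Built for 64-bit ARM (ARMv8 AArch64, LP64 data model)");
    #elif defined(__arm__)
        puts("Built for 32-bit ARM (ILP32 data model)");
    #else
        puts("Built for some other architecture");
    #endif
        printf("sizeof(int)    = %zu\n", sizeof(int));    /* 4 on both       */
        printf("sizeof(long)   = %zu\n", sizeof(long));   /* 4 -> 8 on arm64 */
        printf("sizeof(void *) = %zu\n", sizeof(void *)); /* 4 -> 8 on arm64 */
        return 0;
    }

Compiled once for each architecture and bundled into a single fat binary, the same app runs 32-bit on older iPhones and 64-bit on the A7.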

The bottom line is plain: The A7 processor is a substantial improvement that’s well supported by the 64-bit iOS7. (And I’d like to meet the author and bow to his encyclopedic knowledge.)

Was it because of AnandTech’s cool analysis that the doubters have changed their tune?

“As I predicted, Apple A7 benchmarks well due to CPU arch (for IPC), new GPU, ARM v8.”

Now that the A7 has become a Benchmarking Beast, the author of the previous week's brush-off tweet (“more memory and maybe registers. Period”) has revised his position [emphasis mine]:

“The improvements Apple made with the A7 are truly incredible, and they really went against the grain in their choices. With an industry obsessed with more cores, they went with fewer, larger and efficient cores. With people expecting v8 and 64-bit ARM in late 2014, Apple brings it out in 2013 with full Xcode support and many performance optimizations.” [...] “Apple has done it again, but this time in unexpected fashion.”

That all-purpose defense, unexpected, provides a key to the wrong-footing of many “experts”.

When Apple entered the microprocessor field a mere five years ago with its acquisition of Palo Alto Semiconductor, the move was panned: Apple had no future competing with established industry leaders such as Intel, Qualcomm, Nvidia, and Samsung.

But with the successive, increasing refinement of the A4, A5, and A6, the designs were ultimately viewed as good, very good, roughly on par with the rest of the industry. What these processors lacked in raw power was more than made up for by the way they were integrated into Apple's notion of a purposeful, usable mobile device: enhanced UI responsiveness, reduced power consumption, obeisance to the unique requirements of media and communications.

The expectation was that Apple would either fail or produce a “competent” (meaning not particularly interesting) iteration of the previous A4-5-6 designs. No one expected a shipping 64-bit processor, much less one with all of Apple's in-house apps running in 64-bit mode from day one.

But let’s back up and rewrite a bit of history, ourselves:

On September 10th, Samsung announced its flagship 64-bit Exynos processor, supported by Android 5.0, the 64-bit version of Google’s market-leading mobile OS. The new Galaxy S64 smartphone, which will ship on September 20th, features both 64-bit hardware and software components. Samsung and Google receive high praise:

“Supercomputer-class processor… Industry-leading performance… Tightly integrated 64-bit software and hardware open a new era of super high-performance applications previously impossible on mobile devices…”

And Apple gets its just deserts:

“Once again, Apple gets out-innovated…This confirms the trend we’ve seen since Tim Cook took over… iPhones have become second-class devices… The beginning of a long decline…”

Apple can be thankful this is fantasy: The real world would never treat it like this (right?).

My fantasy isn’t without basis: Within 24 hours of Apple’s September announcement, Samsung’s mobile business chief Shin Jong-kyun said his company will have its own 64-bit Exynos processor:

“Not in the shortest time. But yes, our next smartphones will have 64-bit processing functionality…” [The Korea Times]

As for Android support, no problem: 64-bit versions of the underlying Linux kernel already exist. Of course, the system software layer that resides on top of the Linux kernel — the layer that is Android — will also need to be converted to take advantage of the 64-bit processor, as will the Software Development Kit (SDK) that third-party developers use to create apps. It's a sizable challenge, but one that's well within the Android team's skills and resources; the process has certainly been under way for a while already.
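
To see why that conversion is more than a recompile, here is a hypothetical example (mine, not Google's) of the kind of latent bug that native code (NDK libraries, JNI glue, drivers) must be audited for before a 64-bit switch: anything that quietly assumes a pointer fits in an int works on a 32-bit build and silently truncates addresses under LP64.

    /* Hypothetical example of a classic 32-to-64-bit porting hazard. */
    #include <stdint.h>
    #include <stdio.h>

    static int value = 42;

    int main(void) {
        int *p = &value;

        /* Buggy habit: stashing a pointer in an int. On a 32-bit build this
         * "works"; on a 64-bit build the upper half of the address is lost:
         *     int handle = (int)p;
         */

        /* Portable version: uintptr_t is defined to be wide enough to
         * round-trip an object pointer on either data model. */
        uintptr_t handle = (uintptr_t)p;
        int *q = (int *)handle;

        printf("%d\n", *q); /* prints 42 on both 32- and 64-bit builds */
        return 0;
    }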

The real trouble starts outside of Google. Which 64-bit processor? Intel’s (the company says it will add 64-bit “capabilities” to Android)? Samsung’s? Qualcomm’s?

Who writes and supports device drivers for custom SoC modules? This sounds a lot like Windows device driver complications, but the complexity is multiplied by Google’s significantly weaker control over hardware variants.

Apple’s inherent control over all of the components in its platform will pay dividends in the speed and quality of the transition. There will be glitches — there will always be new, factory-fresh bugs — but the new 64-bit hardware is designed to run existing 32-bit apps, and it seems to actually do so in practice.

Now let’s go beyond the iPhone 5S. In his September 10th presentation, Phil Schiller, Apple’s Marketing Supremo, called the A7′s performance “desktop class”. These words were carefully calibrated, rehearsed, and approved. This isn’t a “Can’t innovate anymore? My asssaeta, blurted while seized by religious fervor at last Spring’s Apple Developers Conference.

Does “desktop class” imply that Apple could use future versions of its 64-bit processor to replace Intel chips in its Mac devices?

In the AnandTech post quoted above, several benchmarks compare Apple's A7 to a new x86 chip, Intel's Bay Trail, with interesting results:

[Chart: Apple A7 vs. Intel Bay Trail benchmarks (AnandTech)]

So, yes, in theory, a future Apple 64-bit processor could be fast enough to power a Mac.

But let’s consider a 3GHz iMac running a high-end media creation application such as Photoshop or Autodesk. The processor doesn’t want to be constrained by power consumption requirements, it’s optimized for performance (this even ignores the upcoming MacPro and its thermal management prowess).

Can we see a split in the Mac product line? The lower, more mobile end would use Apple's processors, while the high-end, no-holds-barred, always-plugged-into-the-wall desktop devices would still use x86 chips. With two code bases to maintain and OS X applications to port? Probably not.

Apple could continue to cannibalize its (and others’) PC business by producing “desktop-class” tablets. Such speculation throws us back to a well-known problem: How do you compose a complex document without a windowing system and a mouse or trackpad pointer?

We’ve seen the trouble with Microsoft’s hybrid PC/tablet, its dual Windows 8 UI which is considered to be “confusing and difficult to learn (especially when used with a keyboard and mouse instead of a touchscreen).”

The best suggestion I've seen so far comes from “a veteran design and management surgeon” who calls himself Kontra and proposes An interim solution for iOS ‘multitasking’, based on a multi-slot clipboard.

If Apple provides a real way to compose complex documents on a future iPad, a solution that normal humans will embrace, then it will capture desktop-class uses and users.

Until such time, Macs and iPads are likely to keep using different processors and different interaction models.

JLG@mondaynote.com

 

Apple Market Share: Facts and Psychology

 

Remember netbooks? When Apple was too greedy and stupid to make a truly low-cost Macintosh? Here we go again: Apple refuses to make a genuinely affordable iPhone. There will be consequences — similar to what happened when the Mac refused to join netbooks circling the drain.

My first moments with the iPad back in April 2010 were mistaken attempts to use it as a Mac. Last year, it took a long overdue upgrade to my eyeglasses before I warmed to the nimbler iPad mini, never to go back to its older sibling.

With that in mind, I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions.

While I wait to put my mitts on the new device, I'll address the conventional hand-wringing over the 5C's $549 price tag (“It's Too Damned High!” cry the masses).

[Image: the iPhone 5C]

Henry Blodget, who pronounced the iPhone Dead In Water in April 2011, is back sounding the alarm: Apple Is Being Shortsighted — And This Could Clobber The Company. His argument, which is echoed by a number of pundits and analysts, boils down to a deceptively simple equation:

Network Effect + Commoditization = Failure

The Network Effect posits that the power of a platform is an exponential function of the number of users. Android, with 80% of the smartphone market, will (clearly) crush iOS by sucking all resources into its gravitational well.

Commoditization means that given an army of active, resourceful, thriving competitors, all smartphones will ultimately look and feel the same. Apple will quickly lose any qualitative advantage it now enjoys, and by having to compete on price it could easily fall behind.

Hence the preordained failure.

As a proof-of-concept, the naysayers point to the personal computer battle back in the pre-mobile dark ages: Didn't we see the same thing when the PC crushed the Mac? Microsoft owned the personal computer market; PC commoditization drove prices into the bargain basement…

Interpret history how you will, but the facts show something different. Yes, the Redmond Death Star claimed 90% of the PC market, but it failed to capture all the resources in the ecosystem. There was more than enough room for the Mac to survive despite its small market share.

And, certainly, commoditization has been a great equalizer and price suppressant — within the PC clone market. Microsoft kept most of the money with the de facto monopoly enjoyed by its Windows + Office combo, while it let hardware manufacturers race to the bottom (netbooks come to mind). Last quarter, this left HP, the (still) largest PC maker, with a measly 3% operating profit for its Personal Systems Group. By contrast, Apple’s share of the PC market may only be 10% or less, but the Mac owns 90% of the $1000+ segment in the US and enjoys a 25% to 35% margin.

After surviving a difficult birth, a ruthlessly enforced Windows + Office platform, and competition from PC makers large and small, the Mac has ended up with a viable, profitable business. Why not look at iDevices in the same light and see a small but profitable market share in its future?

Or, better yet, why not look at more than one historical model for comparison? For example, how is it that BMW has remained so popular and profitable with its One Sausage, Three Lengths product line strategy? Aren’t all cars made of steel, aluminium (for Sir Jony), plastic, glass, and rubber? When the Bavarian company remade the Mini, were they simply in a race to the bottom with Tata’s Nano, or were they confidently addressing the logical and emotional needs of a more affluent — and lasting — clientèle?

Back to the colorful but “expensive” 5C, Philip Elmer-DeWitt puts its price into perspective: For most iPhone owners, trading up to the 5C is ‘free‘ due to Apple’s Reuse and Recycle program. We’ll have to see if The Mere Matter of Implementation supports the theory, and where these recycled iPhones end up. If the numbers work, these reborn iPhones could help Apple gain a modest foothold in currently underserved price segments.

Still thinking about prices, I just took a look at the T-Mobile site where, surprise, the 5C is “free”, that is, no money down and 24 months at $22 — plus a $10 “SIM Kit” (read the small print). You can guess what AT&T offers: 24 months at $22/month (again, whip out your reading glasses). Verizon is more opaque, with a terrible website. Sprint also offers a no-money-down iPhone 5C, although with more expensive voice/data plans.

This is an interesting development: Less than a week ago, Apple introduced the iPhone 5C with a “posted price” of $99 — “free” a few days later.

After much complaining to the media about “excessive” iPhone subsidies, carriers now appear to agree with Horace Dediu who sees the iPhone as a great “salesman” for carriers, because it generates higher revenue per user (ARPU). As a result, the cell philanthropists offer lower prices to attract and keep users — and pay Apple more for the iPhone sales engine.

Of course, none of this will dispel the anticipation of the Cupertino company's death. We could simply dismiss the Apple doomsayers as our industry's nattering nabobs of negativism, but let's take a closer look at what lurks under the surface. Put another way, what are the emotions that cause people to reason against established facts, to feel that the small market share that allowed the Mac to prosper at the higher end will inevitably spell failure for iDevices?

I had a distinct recollection that Asymco’s Horace Dediu had offered a sharp insight into the Apple-is-doomed mantra. Three searches later, first into my Evernote catchall, then to Google, then to The Guardian, I found a Juliette Garside article where Horace crisply states the problem [the passage quoted here is from a longer version that's no longer publicly available; emphasis and elision mine]:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

This feels right, a legitimate analysis of the analysts’ fearmongering: Some folks can’t get past the “fact” that Apple needs hit products to survive because — unlike Amazon, as an example — it doesn’t own a lasting franchise.

In the meantime, we can expect to see more hoses attached to Apple’s money pump.

Next week, I plan to look at iOS and 64-bit processing.

JLG@mondaynote.com

Apple’s Wearables Future

 

Wearable technologies have a huge future. For Apple, they'll create a new product category with an iPhone-like revenue stream! Not so fast. Smartwatches and other wearable consumer products lack key attributes for breaking out of the novelty prison.

‘I think the wrist is interesting.’ Thus spake Tim Cook on the opening night of last May's D11 conference.

When pressed to discuss his company’s position on wearable technologies, Cook was unusually forthcoming: Instead of pleading Apple’s Fifth, Cook launched into a substantial discussion of opportunities for his company to enter the field, calling wearables “a very key branch of the tree”.

But when asked about the heavily publicized Google Glass he parried the question by suggesting that people who don’t otherwise wear glasses might be reluctant to don such an accoutrement.

I don’t find Tim Cook’s dismissal of eyewear very insightful: Just go to a shopping center and count the eyewear stores. Many belong to the same rich Italian conglomerate, Luxottica, a company with about ten house brands such as Oakley, Persol, and Ray-Ban, and a supplier to more than twenty designer labels ranging from Armani to Versace. (As the perturbing Sixty Minutes exposé on Luxottica pointed out, the company nicely rounds out its vertical dominance of the sector through its ownership of EyeMed, a vision insurance business.)

Eyewear, necessary or not, is a pervasive, fashionable, rich product category, a fact that hasn't escaped Google's eye for numbers. The company is making an effort to transmute its geeky spectacles into fashion accessories. Courtesy of Counternotions, I offer this picture of Sergey Brin and fashionista Diane von Furstenberg proudly donning the futuristic eyewear at NY Fashion Week:

[Photo: Sergey Brin and Diane von Furstenberg wearing Google Glass at NY Fashion Week]

On a grander scale, we have a Vogue article, Google Glass and a Futuristic Vision of Fashion:

[Image: Google Glass as featured in Vogue]

The company’s efforts to make Google Glass fashionable might be panned today for pushing the envelope a little too far but, in a not-too-distant future, they stand a chance of being viewed as truly visionary.

If eyewear doesn’t excite Tim Cook, what does? To him, the wrist feels more natural, more socially acceptable. We all wear one or more objects around our wrist(s).

The wristwear genre isn't new (recall Microsoft's 2004 SPOT watch). Ask Google to show you pictures of smartwatches and you get 23M results, screen after screen like this one:

[Image: a page of Google image-search results for smartwatches]

The genre seems to be stuck in the novelty stage. Newer entries such as Samsung's Galaxy Gear have gotten mixed reviews. Others contend that a 2010 iPod nano with a wristband makes a much nicer smartwatch.

Regardless, by comparison, pre-iPod MP3 players and pre-iPhone smartphones were getting better press – and more customers. Considering the putative iWatch, the excitement about Apple getting into this class of devices appears to be excessive.

The litmus test for the potential of a device is the combination of pervasiveness and frequency of use. Smartphones are a good example: they're always with us, and we look at their screens often (too often, say critics who pretend to ignore the relationship between human nature and the Off button).

The iWatch concept makes two assumptions: a) we’ll wear one and, b) we’ll only wear that one.

Checking around, we see young adults who no longer wear watches (they have a smartphone), and middle-agers who treat watches as jewelry, owning more than one. This defeats both the pervasiveness and the frequency-of-use requirements.

Then there’s the biometry question: How much useful information can a wearable device extract from its wearer?

To get a better idea of what's actually available (as opposed to fantasized), I bought a Jawbone UP wristband a little over a month ago. With its accelerometers and embedded microprocessors, UP purports to tell you how many steps you took and how long you've been inactive during the day; it logs your stretches of light and deep sleep, and even “makes it fun and easy to keep track of what you eat”. Once or twice a day, you plug it into your smartphone and it syncs with an app that displays your activity in graphic form and tells you how well you're doing versus various goals and averages. It also suggests that you log your mood in order to “discover connections that affect how you feel.”

At first, I found the device physically grating. I couldn't accept it the way I'm oblivious to my watch, and I even found it on the floor next to my bed a couple of mornings. But I stuck with it. The battery life is as promised (10 days) and I've experienced none of the first version's troubles. I traveled, hiked, and showered with it without a hitch, other than the cap covering the connecting pin getting a bit out of alignment.

Will I keep using it? Probably not.

Beyond the physical discomfort, I haven’t found the device to be very useful, or even accurate. It’s not that difficult to acquire a useful approximation of hours slept and distance walked during the day — you don’t need a device for these things.

As for accuracy, the other day it declared that I had exhibited a substantial level of physical activity… while I was having breakfast. (I may be French, but I no longer move my hands all that much as I speak.)

The app’s suggestion that I log my food consumption falls into the magical thinking domain of dieting. A Monday morning step on a scale tells us what we know already: Moderation is hard, mysterious, out of the reach of gadgets and incantations.

For a product to start a new worthy species for a company as large as Apple, the currency unit to consider is $10B. Below that level, it’s either an accessory or exists as a member of the ecosystem’s supporting cast. The Airport devices are neat accessories; the more visible Apple TV supports the big money makers — Macs, iPads and iPhones — by enhancing their everyday use.

With this in mind, will “wearables” move the needle, will they cross the $10B revenue line in their second or third year, or does their nature direct them to the supporting cast or accessory bins?

Two elements appear to be missing for wearable technologies to have the economic impact that companies such as Apple would enjoy:

  • The device needs to be easily, naturally worn all the time, even more permanently than the watch we tend to take off at night.
  • It needs to capture more information than devices such as the Jawbone do.

A smartwatch that’s wirelessly linked to my smartphone and shows a subset of the screen in my pocket…I’m not sure this will break out of the novelty category where the devices have been confined thus far.

Going back to Tim Cook’s oracular pronouncement on wearables being “a very key branch of the tree”, I wonder: Was he having fun misdirecting his competition?

JLG@mondaynote.com

—————————————–

PS: After two July Monday Notes on the company, I'll wait for the Microsoft centipede to drop one or two more shoes before I write about the Why, When, How and Now What of Ballmer's latest unnatural acts. There is an Analyst Day coming September 19th — and the press has been disinvited.

PPS: In coming days, to keep your sanity when trying to drink from the Apple kommentariat fire hydrant, you can safely direct your steps to three sites/blogs:

  • Apple 2.0, where Philip Elmer-DeWitt provides rational news and commentary, skewers idiots, and links to other valuable fodder.
  • Asymco, where Horace Dediu provides the absolute best numbers, graphs and insights into the greatest upheaval the tech industry has ever seen. Comments following his articles are lively but thoughtful and civilized.
  • Apple Insider. You might want to focus on learned, detailed editorials by Daniel Eran Dilger such as this one where he discusses Microsoft and Google (partially) shifting to an Apple-like business model. Daniel can be opinionated, animated even, but his articles come with tons of well-organized data.