
The Internet of Things: Look, It Must Work


For twenty-five years, we’ve been promised a thoroughly connected world in which our “things” become smarter and safer while saving energy. But progress doesn’t seem to match the glowing predictions.

The presentation is straightforward and enticing:

Picture this: A 25¢ smart chip inside a light-bulb socket. Networked through the 110V wires, it provides centralized on-off control and monitors the bulb’s “health” by constantly measuring electrical resistance. Imagine the benefits in a large office, with thousands, or even tens of thousands of fixtures. Energy is saved as lighting is now under central, constantly adaptable control. Maintenance is easier, pinpointed, less expensive: Bulbs are changed at precisely the right time, just before the filament burns out.
Now, add this magic chip to any and all appliances and visualize the enormity of the economic and ease-of-use benefits. This is no dream. . . we’re already working on agreements in energy-conscious Scandinavia.

When did this take place?

There is a one-word giveaway to this otherwise timeless pitch: filament. Incandescent lights have been regulated out of existence, replaced first by CFLs (compact fluorescent lamps — expensive and not so pleasant) and then by LEDs (still expensive, but much nicer).

The pitch, reproduced with a bit of license, took place in 1986. It’s from the business plan of a company called Echelon, the brainchild of Mike Markkula, Apple’s original angel investor and second CEO.

The idea seemed obvious, inevitable: The relentless physics of Moore’s Law would make chips smaller, more powerful, and less expensive. Connected to a central household brain, these chips would control everything from lightbulbs and door locks to furnaces and stoves. Our lives would be safer and easier. . . and we’d all conserve energy.

The idea expresses itself in variations of the core Home Automation concept, the breadth of which you can visualize by googling “home automation images”:

[Image: collage of home automation product photos]

In 1992, Vint Cerf, our beloved Internet pioneer, posed with his famous IP On Everything t-shirt:

[Image: Vint Cerf wearing his “IP On Everything” t-shirt]

This was a modern, ringing restatement of Echelon’s vision: The objects in our homes and offices will have sensors and actuators. . . and a two-way connection to the Internet, to a world of data, applications, people (and, inevitably, marketing trolls).

It’s been a quarter century since Echelon started, more than two decades since Vint Cerf’s pithy yet profound prophecy. We now speak of the Internet Of Things and make bold predictions of billions of interconnected devices.

Earlier this year, Cisco invited us to “capture our share” of the $14.4T (yes, T as in trillion) business opportunity that The Internet of Everything (IoE) will create in the coming decade. Dave Evans, Cisco’s chief futurist, tells us that within ten years we’ll see “50 billion connected things in the world, with trillions of connections among them”.

Maybe. . . but that’s a lot of “things”.

As Network World points out, “[m]ore than 99 percent of physical objects are not now connected to the Internet”. The exact percentage matters less than the existential truth that the obvious, attractive, inevitable idea of a universe of interconnected objects is taking a long, long time to materialize.

Does the concept need a Steve Jobs to coalesce the disparate components into a coherent, vibrant genre? Are important pieces still missing? Or, like Artificial Intelligence (rebranded as Machine Learning in an attempt to soothe the pain of repeated disappointments), are we looking at an ever-receding horizon?

Echelon’s current state (the company went public in 1998) serves as a poster child for the gulf between the $14.4T vision and today’s reality.

First, some context: Mike Markkula, who is still Vice Chairman of Echelon, has assembled a strong Board of Valley veterans who have relevant experience (I know several of them well — these aren’t just “decorative directors”). The company’s Investor Overview offers an impressive Corporate Profile [emphasis mine]:

“Echelon Corporation is an energy control networking company, with the world’s most widely deployed proven, open standard, multi-application platform, selling complete systems and embedded sub-systems for smart grid, smart city and smart building applications. Our platform is embedded in more than 100 million devices, 35 million homes, and 300,000 buildings and powers energy savings applications for smart grids, smart cities and smart buildings. We help our customers reduce operational costs, enhance satisfaction and safety, grow revenues and prepare for a dynamic future.”

But the latest Earnings Call Presentation paints a different picture:

[Image: Echelon Q3 FY2013 earnings highlights slide]

The Gross Margin is good (58.5%), as is the company’s cash position ($56.7M). . . but Echelon’s business is a tiny $18M — about a millionth of Cisco’s predicted motherlode. That’s a decrease of 38% compared to the same quarter last year.

So, we have a company that’s in the hands of competent technologists who have deep knowledge of the domain; a company with real, proven products that have been deployed in millions of homes and offices — but with little revenue to show for its technology and experience.

This seems to be the case for the Everything Connected industry in general. There’s no General Electric, no Microsoft, no Google (the latter abandoned its PowerMeter initiative in 2011).

Why not? The answer might lie in the Echelon presentation already mentioned:

[Image: slide from Echelon’s Internet of Things presentation]

After more than 25 years of developing devices and platforms, Echelon concludes that the Internet of Things isn’t going to be felt as a direct, personal experience. Instead, it will be mostly invisible: components and subsystems in factories, warehouses, fleets of trucks and buses, office buildings. . .

Consumers certainly don’t have to be sold on the benefits of connected devices. We can’t function without our smartphones, tablets, and PCs. But once we stray outside the really personal computer domain, the desirability of connected devices drops dramatically.

The dream of giving sensors, actuators, and an Internet connection to everyday objects feels good, until one looks at matters of practical and commercial implementation. Will the software in my smart toaster be subject to a licensing agreement? Will it stop toasting if I don’t renew my subscription? (This isn’t just a dystopian strawman; one electric car manufacturer says it can remotely disable the battery if you don’t pay up.)

And then there are the (very real) security and privacy concerns. Could our appliances be hacked? Could my toaster spy on me and collect data to be used to peddle related goods?

Home automation and security systems seem like a natural fit for the Internet of Things, but they’re still expensive, complicated, and fragile – if not hopelessly primitive. Some connected thermostats, such as the Nest (with its smoke and carbon monoxide detector), work well, but most of them are stubbornly user-hostile.

When we wander into the realm of connected appliances, what we see are novelties, fit only for hobbyists and technofetishists (do we really need a toaster that sends a tweet when it’s done?). This is nothing like the smartphone wave, for a simple reason: Appliances are just that, appliances. It’s a word we use as an insult to describe a boring car.

JLG@mondaynote.com


Intel Is Under New Management – And It Shows


Intel rode the PC wave with Microsoft and built a seemingly insurmountable lead in the field of “conventional” (PC and laptop) microprocessors. But, after his predecessor missed the opportunity to supply the CPU chip for Apple’s iPhone, Intel’s new CEO must now find a way to gain relevance in the smartphone world.

In last May’s The Atlantic magazine, Intel’s then-CEO Paul Otellini confessed to a mistake of historic proportions. Apple had given Intel the chance to be part of the smartphone era, to supply the processor for the first iPhone… and Otellini said no [emphasis and light editing mine]:

“The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do… At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”
“…while we like to speak with data around here, so many times in my career I’ve ended up making decisions with my gut, and I should have followed my gut. [...] My gut told me to say yes.”

That Otellini found the inner calm to publicly admit his mistake — in an article that would be published on his last day as CEO, no less — is a testament to his character. More important, Otellini’s admission unburdened his successor, Brian Krzanich, freeing him to steer the company in a new direction.

And Krzanich is doing just that.

First: House cleaning. Back in March 2012, the Wall Street Journal heralded Intel as The New Cable Guy. The idea was to combine an Intel-powered box with content in order to serve up a quality experience not found elsewhere (read Apple, Netflix, Roku, Microsoft…). To head the project, which was eventually dubbed OnCue, Intel hired Erik Huggers, a senior industry executive and former head of BBC Online.

At the All Things D conference in February, Huggers announced that the TV service would be available later this year. The Intel TV chief revealed no details about how the OnCue service would differ from existing competitors, or how much the thing would cost… but he assured us that the content would be impressive (“We are working with the entire industry”), and the device’s capabilities would be comprehensive (“This is not a cherry-pick… this is literally everything”).

Intel seemed to be serious. We found out that more than 1,000 Intel employees in Oregon had been engaged in testing the product/service.

Then Krzanich stepped in, and applied a dose of reality:

“Intel continues to look at the business model… we are not experts in the content industry and we’re being careful.” [AllThingsD: New Intel CEO Says Intel TV Sounds Great in Theory. But …]

Indeed, to those of us who have followed the uneasy dance between Apple and content providers since the first Apple TV shipped in 2007, the Intel project sounded bold, to say the least.

In late September, the project was put on hold and, last week, the news came that OnCue had been cancelled and allegedly offered to Verizon, whose V Cast media distribution feats come to mind…

Even before OnCue’s cancellation was made official, the well-traveled Erik Huggers appeared to show an interest in the Hulu CEO job. (If Mr. Huggers happens to be reading this: I’d be more than happy to relieve you of the PowerPoints that you used to pitch the project to Intel’s top brass, not to mention the updates on the tortuous negotiations for content, and the reports from the user testing in Oregon. These slides must be fascinating specimens of corpospeak logic.)

Krzanich quickly moved from doubt to certainty. He saw that OnCue would neither make money by itself, nor stimulate sales or margins for its main act, x86 processors. OnCue would never be an Apple TV “black puck”, a supporting character whose only mission is to make the main personal computers (small, medium and large; smartphones, tablets and conventional PCs) more useful and pleasant.

So he put an end to the impossible-to-justify adventure.

That was easy.

Tackling Intel’s failure to gain a significant role in the (no longer) new world of smartphones is a much more complicated matter.

With its x86 processors, Intel worked itself into a more-than-comfortable position as part of the Wintel ecosystem. The dominant position achieved by the Microsoft-Intel duopoly over two decades yielded correspondingly high margins for both.

But smartphones changed the game. ARM processors proved themselves better than x86 at the two tasks that are integral to personal, portable devices: lowering power consumption and customization. The ARM architecture didn’t have to wait for the iPhone and Android handsets to dominate the cell phone business. Just as Windows licensing spawned a large number of PC makers, ARM licensing contributed to the creation of a wide range of processor design and manufacturing companies. The ARM site claims 80 licensees for its newer Cortex family and more than 500 for its older Classic ARM processors. No monopoly means lower margins.

Intel saw the unattractive margins offered by ARM processors and didn’t want to commit the billions of dollars required by a fab (a chip manufacturing plant) for a product that would yield profits that were well below Wall Street expectations.

The prospect of bargain basement margins undoubtedly figured in Otellini’s decision to say no to the iPhone. In 2006, no one could have predicted that it could have been made up in volume, that there would be a billion smartphone sales in 2014. (I’m basing the 1B number for the entire industry on Horace Dediu’s estimate of 250 million iOS devices for 2014.)

Even if the Santa Clara company had had the foresight to accept lower margins in order to ensure their future in the smartphone market, there would still have been the problem of customization.

Intel knows how to design and manufacture processors that are used “as is” by PC makers. No customization, no problems.

This isn’t how the ARM world works. Licensees design processors that are customized for their specific device, and they send the design to a manufacturer. Were Intel to enter this world, they would no longer design processors, just manufacture them, an activity with less potential for profit.

This explains why Intel, which had an ARM license and made XScale processors, sold that business to Marvell in 2006 – a fateful date when looking back on the Apple discussions.

But is Intel’s new CEO rethinking the “x86 and only x86” strategy? Last week, a specialty semiconductor company called Altera announced that Intel would fabricate some of its chips containing a 64-bit ARM processor. The company’s business consists of offering faster development times through “programmable logic” circuits. Instead of a “hard circuit” to be designed, manufactured, tested, debugged, modified and sent back to the manufacturing plant in lengthy and costly cycles, you buy a “soft circuit” from Altera and similar companies (Xilinx comes to mind). This more expensive device can be reprogrammed on the spot to assume a different function, or to correct the logic in the previous iteration. Pay more and get functioning hardware sooner, without slow and costly turns through a manufacturing process.
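To make the “soft circuit” idea concrete, here’s a minimal sketch of the lookup table, or LUT, that serves as the basic cell of programmable logic (written in Swift purely for illustration; real FPGAs are described in hardware languages such as Verilog or VHDL). Rewriting the table turns the same silicon into a different gate, with no new mask set and no trip back to the factory:

    // A 2-input lookup table (LUT): the atom of programmable logic.
    // Its 4-entry truth table can be rewritten in the field, which is
    // what lets a "soft circuit" assume a new function on the spot.
    struct LUT2 {
        var truthTable: [Bool]  // indexed by the 2-bit input (a, b)
        func eval(_ a: Bool, _ b: Bool) -> Bool {
            truthTable[(a ? 2 : 0) | (b ? 1 : 0)]
        }
    }

    var gate = LUT2(truthTable: [false, false, false, true])  // behaves as AND
    print(gate.eval(true, true))   // true
    gate.truthTable = [false, true, true, false]              // reprogrammed as XOR
    print(gate.eval(true, true))   // false: same "hardware", new logic

Tile hundreds of thousands of such cells with a programmable interconnect and you have, in essence, what Altera sells.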

With this in mind, what Intel will someday manufacture for Altera isn’t the 64-bit ARM processor that excited some observers: “Intel Makes 14nm ARM for Altera“. The Stratix 10 circuits Altera contracts to Intel manufacturing are complicated and expensive ($500 and up) FPGA (Field Programmable Gate Array) devices where the embedded ARM processor plays a supporting, not central, role. This isn’t the $20-or-less price level arena in which Intel has so far declined to compete.

Manufacturing chips for Altera might simply be work-for-hire, a quick buck for Intel, but I doubt it. Altera’s yearly revenue is just shy of $2B; Intel is a $50B company. The newly announced device, just one in Altera’s product lines, will not “move the needle” for Intel — not in 2014 (the ship date isn’t specified), or ever.

Instead, I take this as a signal, a rehearsal.  250M ARM SoCs at $20 each would yield $5B in revenue, 10% of Intel’s current total…

This might be what Krzanich had in mind when he inked the “small” manufacturing agreement with Altera; perhaps he was weighing the smaller margins of ARM processors against the risk of slowing PC sales.

Graciously freed from the past by his predecessor, Intel’s new CEO seems bound to take the plunge and use the company’s superb manufacturing technology to finally… make ARM processors.

JLG@mondaynote.com


iPhone 5S Surprises


“I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions”.

I wrote those words a little over a month ago. I’ve now played customer for the requisite two weeks — I got an iPhone 5S on October 3rd — and I’m prepared to report.

But first, some context.

iPhone launches always generate controversy; there’s always something to complain about: Antennagate for the iPhone 4, the Siri beta for the 4S, the deserved Maps embarrassment last year – with a clean, dignified Tim Cook apology.

(Whether these fracas translate into lost revenue is another matter).

As I sat in the audience during the introduction of the original iPhone, back in January, 2007, I thought the demo was too good, that Steve was (again) having his way with facts. I feared that when the product shipped a few months later, the undistorted reality would break the spell.

We know now that the iPhone that Steve presented on the stage was unfinished, that he trod a careful path through a demo minefield. But the JesusPhone that Apple shipped — unfinished in many ways (no native apps, no cut-and-paste) — was more than a success: It heralded the Smartphone 2.0 era.

[Image: iPhone 5S]

This year, Tim Cook introduced the riskiest hardware/software combination since the original iPhone. The iPhone 5S wants to be more than just “new and improved”, it attempts to jump off the slope with its combination of two discontinuities: a 64-bit processor and a new 64-bit iOS. Will it work, or will it embarrass itself in a noisome backfire?

First surprise: It works.

Let me explain. I have what attorneys call “personal knowledge” of sausage factories: I’ve been accountable for a couple and a fiduciary for several others. I have first-hand experience with the sights, the aromas, the tumult of the factory floor, so I can’t help but wince when I approach a really new product; I worry in sympathy with its progenitors. The 5S isn’t without its “aromas” (we’ll get to those later), but the phone is sleek and attractive, the house apps are (mostly) solid, and the many new Application Programming Interfaces (APIs) promise novel applications. Contrary to some opinions, there are fewer warts than anyone could have expected.

Surprise #2, the UI: I had read the scathing critiques of the spartan excesses, and, indeed, I feel the drive for simplicity occasionally goes too far. The buttons on the built-in timer are too thin, too subdued. When I meditate in the dark I can’t distinguish Start from Cancel without my glasses. But I’m generally happy with the simpler look. Windows and views get out of the way quickly and gracefully, text is neatly rendered, the removal of skeuomorphic artifacts is a relief.

The next surprise is the fingerprint sensor a.k.a. Touch ID. Having seen how attempts to incorporate fingerprint recognition into smartphones and laptops have gone nowhere, I had my doubts. Moreover, Apple had acquired AuthenTec, the company that created the fingerprint sensor, a mere 15 months ago. Who could believe that Apple would be able to produce a fingerprint-protected iPhone so quickly?

But it works. It’s not perfect, I sometimes have to try again, or use another finger (I registered three on my right hand and two on my left), but it’s clear that Apple has managed to push Touch ID into the category of “consumer-grade technology”: It works often enough and delivers enough benefit to offset the (small) change in behavior.

A personal favorite surprise is Motion Sensing.

When Apple’s Marketing Supremo Phil Schiller described the M7 motion processor, I didn’t think much of it; I was serving the last days of my two-month sentence wearing the Jawbone UP bracelet mentioned in a previous Monday Note. (A friend suggested I affix it to his dog’s collar to see what the data would look like.)

Furthermore, the whole “lifestyle monitoring” business didn’t seem like virgin territory. The Google/Motorola Moto X smartphone introduced last August uses a co-processor that, among other things, monitors your activities, stays awake even when the main processor is asleep, and adjusts the phone accordingly. A similar co-processing arrangement is present in Moto X’s predecessors, the Droid Maxx, Ultra and Mini.

But then I saw a Twitter exchange about Motion Sensing apps about a week after I had activated my iPhone 5S. One thumb touch later, the free Pedometer++ app asked for my permission to use motion data (granted) and immediately told me how many steps I’d taken over the past seven days.

I went to the chauffeured iPhone on my wife’s desk and installed the app. I did the same on friends’ devices. The conclusion was obvious: The M7 processor continuously generates and stores motion data independent of any application. A bit of googling shows that there are quite a few applications that use the motion data that’s obligingly collected by the M7 processor; I downloaded a number of these apps and the step counts are consistent.

(Best in class is the ambitious MotionX 24/7. Philippe Kahn’s company FullPower Technologies licenses MotionX hardware and software to many motion-sensing providers, including Jawbone and, perhaps, Apple. Wearable technologies aren’t just for our wrists…we carry them in our pockets.)

My wife asked if her iPhone would count steps from within her handbag. Ever the obliging husband, I immediately attended to this legitimate query, grabbed her handbag, and stepped out of the house for an experimental stroll. A conservatively dressed couple walked by, gave me a strange look, and didn’t respond to my evening greeting, but, indeed, the steps were counted.

A question arises: Does Apple silently log my movements? No, my iPhone records my locomotion, but the data stays within the device — unless, of course, I let a specific application export them. One must be aware of the permissions.
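For the technically curious, here’s roughly what those pedometer apps do. A minimal sketch using the CoreMotion pedometer interface, written in Swift (which, along with this exact API, postdates this note; iOS 7 apps used the equivalent Objective-C step-counting calls). The first query is what triggers the permission prompt:

    import CoreMotion

    let pedometer = CMPedometer()
    let weekAgo = Date(timeIntervalSinceNow: -7 * 24 * 60 * 60)

    // The M7 logs motion data continuously, app or no app; we merely query it.
    if CMPedometer.isStepCountingAvailable() {
        pedometer.queryPedometerData(from: weekAgo, to: Date()) { data, error in
            if let steps = data?.numberOfSteps {
                print("Steps over the past seven days: \(steps)")
            }
        }
    }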

Other 5S improvements are welcome but not terribly surprising. The camera has been smartly enhanced in several dimensions; search finally works in Mail; and, to please Sen. McCain, apps update themselves automatically.

All of this comes with factory-fresh bugs, of course, a whiff of the sausage-making apparatus. iPhoto crashed on launch the first three or four times I tried it, but has worked without complaint since then.  A black Apple logo on a white background appeared and then quickly disappeared — too brief to be a full reboot, too sparse to be part of an app.

I’ve had to reboot the 5S to recover a dropped cellular connection, and have experienced hard-to-repeat, sporadic WiFi trouble that seems to spontaneously cure itself. (“How did you fix it?” asks my wife when her tech chauffeur gets the sullen device to work again. “I don’t know, I poke the patient everywhere until it responds.”)

From my admittedly geeky perspective, I’m not repelled by these glitches, they didn’t lose my data or prevent me from finishing a task. They’re annoying, but they’re to be expected given the major hardware and software changes. And I expect that the marketplace (as opposed to the kommentariat) will shrug them off and await the bug fixes that will take care of business.

So, yes, overall, the “discontinuous” 5S works.

[I'm also using a pre-release of Mavericks, the upcoming 10.9 version of OS X, on two Macs. There, I wonder if I'm not seeing the opposite of the iPhone 5S: less risk, more bugs. I hope things straighten out for the public release. I'll report if and when warranted.]

[I can't resist: The Washington Post's Wonkblog calls the iPhone's third color... Dignified Gold. I wonder: Is it a compliment to Sir Jony's unerring taste? Or a clever, indirect ethnic slur?]

JLG@mondaynote.com

Apple Under Siege


Two years after Steve Jobs left us, Apple now wears Tim Cook’s imprint and, for all the doubt and perpetual doomsaying, seems to wear it well. One even comes to wonder if the Cassandras aren’t in fact doing Apple a vital favor.

Last Friday, Tim Cook issued a somber remembrance to Apple employees:

Team-
Tomorrow marks the second anniversary of Steve’s death. I hope everyone will reflect on what he meant to all of us and to the world. Steve was an amazing human being and left the world a better place. I think of him often and find enormous strength in memories of his friendship, vision and leadership. He left behind a company that only he could have built and his spirit will forever be the foundation of Apple. We will continue to honor his memory by dedicating ourselves to the work he loved so much. There is no higher tribute to his memory. I know that he would be proud of all of you.
Best,
Tim

I am one of the many who are in Steve’s debt and I miss him greatly. I consider him the greatest creator and editor of products this industry has ever known, and am awed by how he managed the most successful transformation of a company — and of himself — we’ve ever seen. I watched his career from its very beginning, I was fortunate to have worked with him, and I thoroughly enjoyed agreeing and disagreeing with him.

I tried to convey this in an October 9th, 2011 Monday Note titled Too Soon. I just re-read it and hope you’ll take the time to do the same. You’ll read words of dissent by Richard Stallman and Hamilton Nolan, but you’ll mostly find praise by Barack Obama, Jon Stewart, Nicholson Baker in the New Yorker, and this elegant image by Jonathan Mak:

[Image: Jonathan Mak’s Steve Jobs silhouette within the Apple logo]

Two years later, we can look at Apple under Tim Cook’s leadership. These haven’t been an easy twenty-four months: Company shares have gone on a wild ride, execs have been shown the door, there was the Maps embarrassment and apology, and there has been a product drought for most of the last fiscal year (ending in September).

All of this has provided fodder for the Fox Newsstands of the Web, for netwalkers seeking pageviews. The main theme is simple and prurient, hence its power: Without Steve, Apple is on the decline. The variations range from the lack of innovation — Where’s the Apple TV? The iWatch? The next Big Thing? — to The Decline of The Brand, Android Is Winning, and Everything Will Be Commoditized.

Scan Philip Elmer-DeWitt’s Apple 2.0 or John Gruber’s Daring Fireball and treat yourself to intelligent repudiations of this incessant “claim chowder” of discredited pontifications. I’ll extract a few morsels from my own Evernote stash:

Apple’s press conference showed a brand unraveling, or so said VentureBeat in March, 2012. Eighteen months later, Apple passed Coca-Cola to become the world’s most valuable brand.

How Tim Cook can save himself (and Apple), subtitled, for good measure: What the confused Apple CEO can do to avoid getting canned and having to slink away with nothing but his $378 million compensation package as comfort. Penned by a communications consultant who “teaches public relations at NYU”, the article features an unexpected gem: Cook should buy a blazer. You know, “to break the deleterious chokehold of the Steve Jobs’ [sic] legacy”.

Apple: The Beginning of a Long Decline? (note the hedging question mark.) This LinkedIn piece, which questions the value of the fingerprint sensor, ends with a lament:

There was no sign of a watch. So those of us in Silicon Valley are left watching, wondering, and feeling a little empty inside… Jobs is gone. It looks like Apple’s magic is slowly seeping away now too.

Shortly thereafter, Samsung’s iWatch killer came out…and got panned by most reviewers.

Last: Apple’s Board of Directors is concerned about Apple’s pace of innovation, says Fox Business News’ Charlie Gasparino, who claims to have “reliable sources”.

Considering how secretive the company is, can anyone imagine a member of Apple’s Board blabbing to a Fox Business News irrespondent?

Despite the braying of the visionary sheep, Tim Cook never lost his preternatural calm; he never took the kommentariat’s bait. Nor have his customers: They keep buying, enjoying, and recommending Apple’s products. And they do so in such numbers — 9 million new iPhones sold in the launch weekend — that Apple had to file a Form 8-K with the Securities and Exchange Commission (SEC) to “warn” shareholders that revenue and profits would exceed the guidance they had provided just two months ago when management reviewed the results of the previous quarter.

In Daniel Eran Dilger’s words, Data bites dogma: Apple’s iOS ate up Android, Blackberry U.S. market share losses this summer:

Apple’s increase accounted for 1.5 of the 1.6 percentage points that Android and Blackberry collectively lost. This occurred a full month before the launch of Apple’s iPhone 5s and 5c and the deployment of iOS 7.

Regarding the “Apple no longer innovates” myth, Jay Yarow tells us why Apple Can’t Just ‘Innovate’ A New Product Every Other Year. His explanation draws from a substantial New York Times Magazine article in which Fred Vogelstein describes the convergence of company-wide risk-taking and engineering feats that resulted in the iPhone:

It’s hard to overstate the gamble Jobs took when he decided to unveil the iPhone back in January 2007. Not only was he introducing a new kind of phone — something Apple had never made before — he was doing so with a prototype that barely worked. Even though the iPhone wouldn’t go on sale for another six months, he wanted the world to want one right then. In truth, the list of things that still needed to be done was enormous. 

It’s a great read. But even Vogelstein can’t resist the temptation of inserting a word of caution: “And yet Apple today is under siege…” 

This is something I heard 33 years ago when I signed up to start Apple France in 1980, and I’ve heard it constantly since then. I’ll again quote Horace Dediu, who best summarizes the concern:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

I recently experienced a small epiphany: I think the never-ending worry about Apple’s future is a good thing for the company. Look at what happened to those who were on top and became comfortable with their place under the sun: Palm, Blackberry, Nokia…

In ancient Rome, victorious generals marched in triumph to the Capitol. Lest the occasion go to the army commander’s head, a slave would march behind the victor, murmuring in his ear, memento mori, “remember you’re mortal”.

With that in mind, one can almost appreciate the doomsayers — well, some of them. They might very well save Apple from becoming inebriated with its own prestige and, instead, force the company to remember, two years later and counting, how it won that prestige.

JLG@mondaynote.com


64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months


Apple’s A7 processor, the new iOS 7 and “house” apps are all industry firsts: genuine, shipping 64-bit mobile hardware and software. As we’ve seen before with the iPhone or the iPad, this new volley of Apple products is first met with the customary bursts of premature evaluations and counterfactual dismissals.

On September 10th, Apple revealed that the new iPhone 5S would be powered by its new 64-bit A7 processor. The initial reactions were less than enthusiastic. We were treated to exhumations of lazy bromides…

“I don’t drink Kool-Aid. Never liked the stuff and I think we owe it to ourselves to collectively question whether or not Apple’s ‘reality distortion field’ is in effect when we consider how revolutionary the iPhone 5S is and if Apple’s 64-bit A7 processor under its shiny casing will be all its [sic] cracked up to be when the device hits the market in volume.” [Forbes]

…and equally lazy “markitecture” accusations…

“With current mobile devices and mobile apps, there really is no advantage [to 64 bits] other than marketing — the ability to say you’re the first to have it…” [InfoWorld]

…and breezy brush-offs, such as this tweet from an industry expert:

“We’ll see just how good Apple’s marketing team is trying to leverage 64-bit. 64-bit add more memory and maybe registers. Period.” [Twitter]

Rather than wonder what these commenters were drinking, let’s turn to AnandTech, widely regarded as one of the best online hardware magazines.

Founded by Anand Lal Shimpi when he was all of 14 years old, AnandTech is known for its exhaustive (and sometimes exhausting) product reviews. The 14-section September 17th iPhone 5S review doesn’t disappoint. Among other things, it provides detailed iPhone 5S vs. iPhone 5 performance comparisons such as this:

[Image: AnandTech Geekbench comparison of the iPhone 5S and iPhone 5]

There are many other charts, comparisons, and considerations of the new 64-bit ARMv8 instruction set, the move from 16 to 32 floating-point NEON 128-bit registers, the hardware acceleration of cryptography operations… It’s a very long read, but not a boring one (at least not for interested geeks).
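A concrete way to see what the wider registers buy: on ARMv8, multiplying two 64-bit integers is a single instruction, while a 32-bit CPU must synthesize the result from 32-bit pieces. A sketch of that emulation in Swift (schoolbook method, low 64 bits only; the exercise and the names are mine, not AnandTech’s):

    // What 32-bit hardware must do for one 64-bit multiply:
    // three 32x32 multiplies plus shifts and adds (mod 2^64).
    func mul64on32(_ a: UInt64, _ b: UInt64) -> UInt64 {
        let (aHi, aLo) = (a >> 32, a & 0xFFFF_FFFF)
        let (bHi, bLo) = (b >> 32, b & 0xFFFF_FFFF)
        let cross = (aHi &* bLo &+ aLo &* bHi) << 32  // high cross-terms wrap away
        return aLo &* bLo &+ cross
    }

    let a: UInt64 = 0x0123_4567_89AB_CDEF
    let b: UInt64 = 0x0000_0000_DEAD_BEEF
    assert(mul64on32(a, b) == a &* b)  // one instruction on the A7

Multiply that small difference across pointer arithmetic, cryptography, and media code and the benchmark gains stop looking like mere marketing.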

The bottom line is plain: The A7 processor is a substantial improvement that’s well supported by the 64-bit iOS7. (And I’d like to meet the author and bow to his encyclopedic knowledge.)

Was it because of AnandTech’s cool analysis that the doubters have changed their tune?

“As I predicted, Apple A7 benchmarks well due to CPU arch (for IPC), new GPU, ARM v8”

Now that the A7 has become a Benchmarking Beast, the author of the previous week’s brush-off tweet (“more memory and maybe registers. Period”) has revised his position [emphasis mine]:

“The improvements Apple made with the A7 are truly incredible, and they really went against the grain in their choices. With an industry obsessed with more cores, they went with fewer, larger and efficient cores. With people expecting v8 and 64-bit ARM in late 2014, Apple brings it out in 2013 with full Xcode support and many performance optimizations.” [...] “Apple has done it again, but this time in unexpected fashion.”

That all-purpose defense, unexpected, provides a key to the wrong-footing of many “experts”.

When Apple entered the microprocessor field a mere five years ago with its acquisition of Palo Alto Semiconductor, the move was panned: Apple had no future competing with established industry leaders such as Intel, Qualcomm, Nvidia, and Samsung.

But with the successive, increasing refinement of the A4, A5, and A6, the designs were ultimately viewed as good, very good, roughly on par with the rest of the industry. What these processors lacked in raw power was more than made up for by the way they were integrated into Apple’s notion of a purposeful, usable mobile device: Enhanced UI responsiveness, reduced power consumption, obeisance to the unique requirements of media and communications.

The expectation was that Apple would either fail, or produce a “competent” (meaning not particularly interesting) iteration of previous A4-5-6 designs. No one expected that the processor would actually work, with all in-house apps running in 64-bit mode from day one.

But let’s back up and rewrite a bit of history, ourselves:

On September 10th, Samsung announced its flagship 64-bit Exynos processor, supported by Android 5.0, the 64-bit version of Google’s market-leading mobile OS. The new Galaxy S64 smartphone, which will ship on September 20th, features both 64-bit hardware and software components. Samsung and Google receive high praise:

“Supercomputer-class processor… Industry-leading performance… Tightly integrated 64-bit software and hardware open a new era of super high-performance applications previously impossible on mobile devices…”

And Apple gets its just deserts:

“Once again, Apple gets out-innovated…This confirms the trend we’ve seen since Tim Cook took over… iPhones have become second-class devices… The beginning of a long decline…”

Apple can be thankful this is fantasy: The real world would never treat it like this (right?).

My fantasy isn’t without basis: Within 24 hours of Apple’s September announcement, Samsung’s mobile business chief Shin Jong-kyun said his company will have its own 64-bit Exynos processor:

“Not in the shortest time. But yes, our next smartphones will have 64-bit processing functionality…” [The Korea Times]

As for Android support, no problem: 64-bit versions of the underlying Linux kernel already exist. Of course, the system software layer that resides on top of the Linux kernel — the layer that is Android — will also need to be converted to take advantage of the 64-bit processor, as will the Software Development Kit (SDK) that third-party developers use to create apps. It’s a sizable challenge, but one that’s well within the Android team’s skills and resources; the process has certainly been under way for a while already.

The real trouble starts outside of Google. Which 64-bit processor? Intel’s (the company says it will add 64-bit “capabilities” to Android)? Samsung’s? Qualcomm’s?

Who writes and supports device drivers for custom SoC modules? This sounds a lot like Windows device driver complications, but the complexity is multiplied by Google’s significantly weaker control over hardware variants.

Apple’s inherent control over all of the components in its platform will pay dividends in the speed and quality of the transition. There will be glitches — there will always be new, factory-fresh bugs — but the new 64-bit hardware is designed to run existing 32-bit apps, and it seems to actually do so in practice.

Now let’s go beyond the iPhone 5S. In his September 10th presentation, Phil Schiller, Apple’s Marketing Supremo, called the A7’s performance “desktop class”. These words were carefully calibrated, rehearsed, and approved. This isn’t a “Can’t innovate anymore, my ass” outburst, blurted while seized by religious fervor at last Spring’s Apple Developers Conference.

Does “desktop class” imply that Apple could use future versions of its 64-bit processor to replace Intel chips in its Mac devices?

In the AnandTech post quoted above, several benchmarks compare Apple’s A7 to a new x86 chip, Intel’s Bay Trail, with interesting results:

[Image: AnandTech benchmark comparison of Apple’s A7 and Intel’s Bay Trail]

So, yes, in theory, a future Apple 64-bit processor could be fast enough to power a Mac.

But let’s consider a 3GHz iMac running a high-end media creation application such as Photoshop or an Autodesk package. The processor doesn’t want to be constrained by power consumption requirements; it’s optimized for performance (and this even ignores the upcoming Mac Pro and its thermal management prowess).

Can we see a split in the Mac product line? The lower, more mobile end would use Apple’s processors, and the high-end, no-holds-barred, always-plugged-into-the-wall desktop devices would still use x86 chips. With two code bases to maintain and OS X applications to port? Probably not.

Apple could continue to cannibalize its (and others’) PC business by producing “desktop-class” tablets. Such speculation throws us back to a well-known problem: How do you compose a complex document without a windowing system and a mouse or trackpad pointer?

We’ve seen the trouble with Microsoft’s hybrid PC/tablet, its dual Windows 8 UI which is considered to be “confusing and difficult to learn (especially when used with a keyboard and mouse instead of a touchscreen).”

The best suggestion I’ve seen so far comes from “a veteran design and management surgeon” who calls himself Kontra and proposes An interim solution for iOS ‘multitasking’ based on a multi-slot clipboard.

If Apple provides a real way to compose complex documents on a future iPad, a solution that normal humans will embrace, then it will capture desktop-class uses and users.

Until such time, Macs and iPads are likely to keep using different processors and different interaction models.

JLG@mondaynote.com


Apple Market Share: Facts and Psychology


Remember netbooks? When Apple was too greedy and stupid to make a truly low-cost Macintosh? Here we go again: Apple refuses to make a genuinely affordable iPhone. There will be consequences — similar to what happened when the Mac refused to join netbooks circling the drain.

My first moments with the iPad back in April 2010 were mistaken attempts to use it as a Mac. Last year, it took a long overdue upgrade to my eyeglasses before I warmed to the nimbler iPad mini, never to go back to its older sibling.

With that in mind, I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions.

While I wait to put my mitts on the new device, I’ll address the conventional hand-wringing over the 5C’s $549 price tag (“It’s Too Damned High!” cry the masses).

[Image: iPhone 5C]

Henry Blodget, who pronounced the iPhone Dead In The Water in April 2011, is back sounding the alarm: Apple Is Being Shortsighted — And This Could Clobber The Company. His argument, which is echoed by a number of pundits and analysts, boils down to a deceptively simple equation:

Network Effect + Commoditization = Failure

The Network Effect posits that the value of a platform grows much faster than linearly with the number of users (Metcalfe’s law pegs it at roughly the square of their number). Android, with 80% of the smartphone market, will (clearly) crush iOS by sucking all resources into its gravitational well.

Commoditization means that given an army of active, resourceful, thriving competitors, all smartphones will ultimately look and feel the same. Apple will quickly lose any qualitative advantage it now enjoys, and by having to compete on price it could easily fall behind.

Hence the preordained failure.

As a proof-of-concept, the nay-sayers point to the personal computer battle back in the pre-mobile dark ages: Didn’t we see the same thing when the PC crushed the Mac? Microsoft owned the personal computer market; PC commoditization drove prices into the bargain basement…

Interpret history how you will, the facts show something different. Yes, the Redmond Death Star claimed 90% of the PC market, but it failed to capture all the resources in the ecosystem. There was more than enough room for the Mac to survive despite its small market share.

And, certainly, commoditization has been a great equalizer and price suppressant — within the PC clone market. Microsoft kept most of the money with the de facto monopoly enjoyed by its Windows + Office combo, while it let hardware manufacturers race to the bottom (netbooks come to mind). Last quarter, this left HP, the (still) largest PC maker, with a measly 3% operating profit for its Personal Systems Group. By contrast, Apple’s share of the PC market may only be 10% or less, but the Mac owns 90% of the $1000+ segment in the US and enjoys a 25% to 35% margin.

After surviving a difficult birth, a ruthlessly enforced Windows + Office platform, and competition from PC makers large and small, the Mac has ended up with a viable, profitable business. Why not look at iDevices in the same light and see a small but profitable market share in its future?

Or, better yet, why not look at more than one historical model for comparison? For example, how is it that BMW has remained so popular and profitable with its One Sausage, Three Lengths product line strategy? Aren’t all cars made of steel, aluminium (for Sir Jony), plastic, glass, and rubber? When the Bavarian company remade the Mini, were they simply in a race to the bottom with Tata’s Nano, or were they confidently addressing the logical and emotional needs of a more affluent — and lasting — clientèle?

Back to the colorful but “expensive” 5C, Philip Elmer-DeWitt puts its price into perspective: For most iPhone owners, trading up to the 5C is ‘free’ due to Apple’s Reuse and Recycle program. We’ll have to see if The Mere Matter of Implementation supports the theory, and where these recycled iPhones end up. If the numbers work, these reborn iPhones could help Apple gain a modest foothold in currently underserved price segments.

Still thinking about prices, I just took a look at the T-Mobile site where, surprise, the 5C is “free”: no money down and 24 months at $22 ($528 over the life of the plan), plus a $10 “SIM Kit” (read the small print). You can guess what AT&T offers: 24 months at $22/month (again, whip out your reading glasses). Verizon is more opaque, with a terrible website. Sprint also offers a no-money-down iPhone 5C, although with more expensive voice/data plans.

This is an interesting development: Less than a week ago, Apple introduced the iPhone 5C with a “posted price” of $99 — “free” a few days later.

After much complaining to the media about “excessive” iPhone subsidies, carriers now appear to agree with Horace Dediu who sees the iPhone as a great “salesman” for carriers, because it generates higher revenue per user (ARPU). As a result, the cell philanthropists offer lower prices to attract and keep users — and pay Apple more for the iPhone sales engine.

Of course, none of this will dispel the anticipation of the Cupertino company’s death. We could simply dismiss the Apple doomsayers as our industry’s nattering nabobs of negativism, but let’s take a closer look at what lurks beneath the surface. Put another way, what are the emotions that cause people to reason against established facts, to feel that the small market share that allowed the Mac to prosper at the higher end will inevitably spell failure for iDevices?

I had a distinct recollection that Asymco’s Horace Dediu had offered a sharp insight into the Apple-is-doomed mantra. Three searches later, first into my Evernote catchall, then to Google, then to The Guardian, I found a Juliette Garside article where Horace crisply states the problem [the passage quoted here is from a longer version that's no longer publicly available; emphasis and elision mine]:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

This feels right, a legitimate analysis of the analysts’ fearmongering: Some folks can’t get past the “fact” that Apple needs hit products to survive because — unlike Amazon, as an example — it doesn’t own a lasting franchise.

In the meantime, we can expect to see more hoses attached to Apple’s money pump.

Next week, I plan to look at iOS and 64-bit processing.

JLG@mondaynote.com

Apple’s Wearables Future


Wearable technologies have a huge future. For Apple, they’ll create a new product category with an iPhone-like revenue stream! Not so fast. Smartwatches and other wearable consumer products lack key attributes for breaking out of the novelty prison.

“I think the wrist is interesting.” Thus spake Tim Cook on the opening night of last May’s D11 conference.

When pressed to discuss his company’s position on wearable technologies, Cook was unusually forthcoming: Instead of pleading Apple’s Fifth, Cook launched into a substantial discussion of opportunities for his company to enter the field, calling wearables “a very key branch of the tree”.

But when asked about the heavily publicized Google Glass he parried the question by suggesting that people who don’t otherwise wear glasses might be reluctant to don such an accoutrement.

I don’t find Tim Cook’s dismissal of eyewear very insightful: Just go to a shopping center and count the eyewear stores. Many belong to the same rich Italian conglomerate, Luxottica, a company with about ten house brands such as Oakley, Persol, and Ray-Ban, and a supplier to more than twenty designer labels ranging from Armani to Versace. (As the perturbing Sixty Minutes exposé on Luxottica pointed out, the company nicely rounds out its vertical dominance of the sector through its ownership of EyeMed, a vision insurance business.)

Eyewear, necessary or not, is a pervasive, fashionable, rich product category, a fact that hasn’t escaped Google’s eye for numbers. The company is making an effort to transmute its geeky spectacles into fashion accessories. Courtesy of Counternotions, I offer this picture of Sergey Brin and fashionista Diane von Furstenberg proudly donning the futuristic eyewear at NY Fashion Week:

[Image: Sergey Brin and Diane von Furstenberg wearing Google Glass]

On a grander scale, we have a Vogue article, Google Glass and a Futuristic Vision of Fashion:

[Image: Google Glass featured in Vogue]

The company’s efforts to make Google Glass fashionable might be panned today for pushing the envelope a little too far but, in a not-too-distant future, they stand a chance of being viewed as truly visionary.

If eyewear doesn’t excite Tim Cook, what does? To him, the wrist feels more natural, more socially acceptable. We all wear one or more objects around our wrist(s).

The wristwear genre isn’t new (recall Microsoft’s 2004 SPOT watch). Ask Google to show you pictures of smartwatches and you get 23M results and screen after screen like this one:

[Image: Google image-search results for smartwatches]

The genre seems to be stuck in the novelty stage. Newer entries such as Samsung’s Galaxy Gear have gotten mixed reviews. Others contend that a 2010 iPod nano with a wristband makes a much nicer smartwatch.

Regardless, by comparison, pre-iPod MP3 players and pre-iPhone smartphones were getting better press – and more customers. Considering the putative iWatch, the excitement about Apple getting into this class of devices appears to be excessive.

The litmus test for the potential of a device is the combination of pervasiveness and frequency of use. Smartphones are a good example, they’re always with us, we look at their screens often (too often, say critics who pretend to ignore the relationship between human nature and the Off button).

The iWatch concept makes two assumptions: a) we’ll wear one and, b) we’ll only wear that one.

Checking around, we see young adults who no longer wear watches — they have a smartphone — and middle-agers who use watches as jewelry, owning more than one. This defeats both the pervasiveness and the frequency-of-use requirements.

Then there’s the biometry question: How much useful information can a wearable device extract from its wearer?

To get a better idea of what’s actually available (as opposed to fantasized), I bought a Jawbone UP wristband a little over a month ago. With its accelerometers and embedded microprocessors, UP purports to tell you how many steps you took and how long you’ve been inactive during your days; it logs your stretches of light and deep sleep, and even “makes it fun and easy to keep track of what you eat”. Once or twice a day, you plug it into your smartphone and it syncs with an app that displays your activity in graphic form and tells you how well you’re doing versus various goals and averages. It also suggests that you log your mood in order to “discover connections that affect how you feel.”
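How does a bracelet turn accelerometer wiggles into a step count? Jawbone’s firmware is proprietary, but the underlying technique is simple threshold-crossing detection on the acceleration magnitude. A minimal sketch in Swift, with every name and parameter invented for illustration:

    // Naive pedometry: count upward crossings of a magnitude threshold,
    // with a refractory period so jitter isn't double-counted.
    func countSteps(magnitudes: [Double],     // acceleration samples, in g's
                    sampleRate: Double = 50,  // Hz (assumed)
                    threshold: Double = 1.2,  // g's (assumed)
                    minStepInterval: Double = 0.3) -> Int {
        guard magnitudes.count > 1 else { return 0 }
        let minGap = Int(minStepInterval * sampleRate)
        var steps = 0
        var lastStep = -minGap
        for i in 1..<magnitudes.count {
            let rose = magnitudes[i - 1] < threshold && magnitudes[i] >= threshold
            if rose && i - lastStep >= minGap {
                steps += 1
                lastStep = i
            }
        }
        return steps
    }

A scheme this crude also explains the false positive I describe below: any rhythmic wrist motion that clears the threshold looks like walking.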

At first, I found the device physically grating. I couldn’t accept it the way I’m oblivious to my watch, and I even found it on the floor next to my bed a couple of mornings. But I stuck with it. The battery life is as promised (10 days) and I’ve experienced none of the first version’s troubles. I traveled, hiked, and showered with it without a hitch, other than the cap covering the connecting pin getting a bit out of alignment.

Will I keep using it? Probably not.

Beyond the physical discomfort, I haven’t found the device to be very useful, or even accurate. It’s not that difficult to acquire a useful approximation of hours slept and distance walked during the day — you don’t need a device for these things.

As for accuracy, the other day it declared that I had exhibited a substantial level of physical activity… while I was having breakfast. (I may be French, but I no longer move my hands all that much as I speak.)

The app’s suggestion that I log my food consumption falls into the magical thinking domain of dieting. A Monday morning step on a scale tells us what we know already: Moderation is hard, mysterious, out of the reach of gadgets and incantations.

For a product to start a worthy new species for a company as large as Apple, the currency unit to consider is $10B. Below that level, it’s either an accessory or a member of the ecosystem’s supporting cast. The Airport devices are neat accessories; the more visible Apple TV supports the big money makers — Macs, iPads and iPhones — by enhancing their everyday use.

With this in mind, will “wearables” move the needle, will they cross the $10B revenue line in their second or third year, or does their nature direct them to the supporting cast or accessory bins?

Two elements appear to be missing for wearable technologies to have the economic impact that companies such as Apple would enjoy:

  • The device needs to be easily, naturally worn all the time, even more permanently than the watch we tend to take off at night.
  • It needs to capture more information than devices such as the Jawbone do.

A smartwatch that’s wirelessly linked to my smartphone and shows a subset of the screen in my pocket… I’m not sure this will break out of the novelty category where such devices have been confined thus far.

Going back to Tim Cook’s oracular pronouncement on wearables being “a very key branch of the tree”, I wonder: Was he having fun misdirecting his competition?

JLG@mondaynote.com

—————————————–

PS: After two July Monday Notes on the company, I’ll wait for the Microsoft centipede to drop one or two more shoes before I write about the Why, When, How and Now What of Ballmer’s latest unnatural acts. There is an Analyst Day coming September 19th — and the press has been disinvited.

PPS: In coming days, to keep your sanity when trying to drink from the Apple kommentariat fire hydrant, you can safely direct your steps to three sites/blogs:

  • Apple 2.0, where Philip Elmer-DeWitt provides rational news and commentary, skewers idiots and links to other valuable fodder.
  • Asymco, where Horace Dediu provides the absolute best numbers, graphs and insights into the greatest upheaval the tech industry has ever seen. Comments following his articles are lively but thoughtful and civilized.
  • Apple Insider. You might want to focus on learned, detailed editorials by Daniel Eran Dilger such as this one where he discusses Microsoft and Google (partially) shifting to an Apple-like business model. Daniel can be opinionated, animated even, but his articles come with tons of well-organized data.

Blackberry’s Future


by Jean-Louis Gassée

Once the king of smartphones for business use, BlackBerry got a new CEO, a new operating system, and new devices, with and without the traditional keyboard. In spite of these changes, the company’s latest numbers don’t paint a picture of revival.

Thorsten Heins doesn’t suffer from a lack of enthusiasm. During the run-up to the March release of its BlackBerry 10 operating system, RIM’s CEO painted an optimistic picture of a company on the rebound, a company that would correct the mistakes of the industry leaders:

“It’s still the same,” Heins said of the iPhone. “It is a sequential way to work and that’s not what people want today anymore. They want multitasking.”

With RIM rechristened as BlackBerry, Heins told us that the company would energize the developer community and spawn devices too exciting to describe:

“There’s one new product I’m really excited about, but I can’t really share it,” Heins told CNET in an interview today.

Last week, the company released its latest quarterly numbers and they are exciting, although not in the sense that Heins would like. The forecast was $3.4B in revenue and $0.07 in earnings per share; the reality was $3.1B in sales and, more important, a loss of $0.13 per share.

The numbers “excited” traders so much that BBRY shares lost 28% of their value in a single trading session, putting them back to their one-year-ago level.

The earnings release was followed by the customary conference call where the CEO and CFO review the numbers and answer questions from Wall Street analysts. Courtesy of Seeking Alpha, the call transcript is here; it contains the obligatory pablum, including an abuse of the F-word, focus (22 occurrences):

Embracing our heritage of mobility first is very important as we build our culture and go through this transition. We don’t have to be all things to all people and all markets, and embracing this focus allows us to drive efficiency, be flexible and agile, and to ultimately drive best-in-class innovations. [...] We’re continuing to focus on improving all areas of the business…

Curiously, there’s no breakdown of BlackBerry device sales. How much revenue was “energized” by BB10? Without actual numbers, we’re left in a cloud of doubt about how well the new platform is actually doing.

The disquietude continues: There are no subscriber numbers, and no guidance other than an expectation of more losses next quarter. The glowing comments about cash flow from operations ($630M, a nice number) are undercut by the disclosure of a substantial tax refund, without which the company would have eaten through $400M to $500M of cash.

As for tablets, the BlackBerry PlayBook is no more, says the CEO. He’s unhappy with the device’s performance and is determined to focus on the company’s “core hardware portfolio”. (The company’s website no longer describes the product and only offers a software update for existing customers.)

Inevitably, the How Many Moves Remain? question comes up. BlackBerry professes to do more than just devices: It claims to offer strong enterprise services and says it will propagate its BBM (BlackBerry Messenger) service to other platforms, including Android and iOS. It also promotes a form of (limited) compatibility for (some) Android apps on its newer smartphones. But is anyone buying, and in numbers that can save the company?

More to the point: Who wants to buy Blackberry (the company), for what reasons, and at what price?

Let’s back up. Last week, we heard that Microsoft had once again given up on its perennial hunt to capture a handset maker. This time, the prey was Nokia, Microsoft’s “special” Windows Phone licensee.

The official explanation for the Nokia blowup was that the price tag was too high, but price clearly wasn’t an issue. Nokia’s $14B market capitalization weighs in at about 5% of Microsoft’s $288B. Even when you tack on a 25% acquisition premium, the purchase should have been a reasonably easy sell, especially given Microsoft’s desire to take the handset business into its own hands, if only to counter (or mimic) the strategy established by Google and Motorola.
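
For the curious, here’s that arithmetic in a quick Python sketch. The market caps and the 25% premium come from the paragraph above; everything else is illustrative:

    # Back-of-the-envelope: what would Nokia have cost Microsoft?
    # Figures come from the text; the 25% premium is the customary
    # rule of thumb, not a number from either company.
    nokia_cap = 14e9        # Nokia's market capitalization: $14B
    microsoft_cap = 288e9   # Microsoft's market capitalization: $288B
    premium = 0.25          # typical acquisition premium

    price = nokia_cap * (1 + premium)                        # $17.5B
    print(f"Price with premium: ${price / 1e9:.1f}B")
    print(f"Share of Microsoft's cap: {price / microsoft_cap:.1%}")  # ~6.1%

At roughly 6% of Microsoft’s market value, price alone was indeed an unlikely deal-breaker.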

There’s really only one explanation, as I speculated last week. The engagement was dissolved because of Microsoft’s bleak view of Nokia’s business: The Finnish company no longer has the technological acumen and brand loyalty that Microsoft needs to make Windows Phone a legitimate competitor to Android and iOS.

BlackBerry’s market capitalization now stands at about $6B. That’s less than half of Nokia’s. If Nokia, supported by Microsoft, can’t gain ground on Google and Apple devices, what gives us confidence that BlackBerry isn’t sliding into insignificance?

The BlackBerry name, as a brand, is strong. But a brand only exists as the carrier of a promise. A brand writes checks that the product cashes. Without a successful product, the brand dies (go ask Kodak).

While Nokia could be acquired by someone interested in the Windows Phone business, one is hard-pressed to form a similar thought for BlackBerry. The Windows Phone ecosystem may be struggling, but it exists, handset makers included. There is no such thing around BlackBerry: Developers aren’t writing apps for BB10 in ecosystem-making numbers, carriers have taken a wait-and-see posture, and even the core group of dedicated users (I used to be one of them) appears to be losing faith.

This isn’t a brightly optimistic picture. Today, BlackBerry finds itself caught between Samsung and Apple at the high end and a rabidly fermenting crowd of Android clones (official or not) at the low end.

So, why not consider heresy, or apostasy: Ditch the newer BlackBerry OS that too few developers believe in, and bet on Android devices to support BlackBerry’s enterprise services.

The answer is probably the same as it is for Nokia: It’s too late.

JLG@mondaynote.com

Microsoft and Nokia won’t beget a Googorola clone

 

by Jean-Louis Gassée

Microsoft, after its highly visible 2011 bet on Nokia, could have decided to go one step further and buy the company, becoming a fully integrated smartphone maker. That it didn’t happen doesn’t portend a great future for Windows Phone.

Last week, the Wall Street Journal outed Microsoft’s unsuccessful attempt to acquire Nokia:

Microsoft recently held advanced talks with Nokia about buying its handset business, people familiar with the matter said, as laggards in the fast-moving mobile market struggle to gain ground.

Many saw an acquisition as an inevitable next step: By acquiring the Finnish handset maker, Microsoft could “finish the job” it started when it licensed a special Windows Phone to Nokia. It would be a blessed union of two vigilant, watchful companies: Microsoft had watched as Android and iOS made its own OS a distant also-ran; Nokia, once the world’s largest cell phone maker, couldn’t help but notice that Google and Apple had killed its handset business from both the high and low ends.

But, according to the WSJ, the parley came to a negative and apparently definitive end:

The discussions faltered over price and worries about Nokia’s slumping market position, among other issues, these people said. One of the people said talks took place as recently as this month but aren’t likely to be revived.

To call Nokia’s fall a “slump” is more than polite. The company saw its market share fall from 39% in 2009 — more than 100 million handsets per quarter — to an estimated (and angrily debated) 3% by the end of 2012.

Microsoft hasn’t done much better with its mobile software. In 2008, Windows Mobile held an 11% market share, even as the underlying Windows CE engine was getting long in the tooth, particularly when compared to the Unix-ish Android and iOS engines. With a modern NT kernel, Microsoft’s mobile OS was reborn as Windows Phone 8 and scored a modest 3.2% market share in Q1 2013. This number comes from IDC, the “research” group that has assured us that come 2016, Microsoft will be the number 2 mobile OS provider with a 19.2% share:

[IDC table: smartphone OS market share forecast through 2016]

Behold the vision and precision of IDC’s psychics: Back in June 2012, they could see four years into the future and predict that Windows Phone would edge out iOS… by two tenths of a percent!

We’ve heard the Microsoft-is-buying-a-handset-maker rumors before. They started in 2007 and recurred year after year: Microsoft was said to be eyeing RIM/BlackBerry. For some, yours truly included (in January 2012), the RIM story was compellingly straightforward: RIM’s clientèle of loyal, hardcore BlackBerry users in businesses and governments made it an ideal fit for the Redmond giant.

Microsoft’s defenders will argue that RIM ’07 was too expensive. Priced at $200 a share (they’re trading at about $14 today), RIM would have cost more than $100B before any acquisition premium. At the time, Microsoft was valued at approximately $250B (similar to today’s $277B). Ideal or not, the match didn’t make sense for Microsoft shareholders. Then, when RIM’s price began to slide, the BlackBerry was seen as having lost too much of its shine, too much of its market momentum. The company was damaged goods. (Or, as we might have forgotten, the two co-CEOs, Mike Lazaridis and Jim Balsillie, the ones who spoke in tongues, may have proved too difficult for even Steve Ballmer to deal with.)

Someday, Microsoft’s inability to grab RIM might be seen as a signal failure, a key episode in the company’s slide into irrelevance in the smartphone market. I doubt anyone will see Nokia in a similar light, as the “one who got away”.

The “MicroNokia” relationship has been challenging from the start. In February 2011, Nokia committed itself to a special partnership with Microsoft. It would ditch its operating systems (Symbian, MeeGo, Qt) and become a beacon and standard-bearer for Windows Phone 7. Money changed hands: $250M of “platform support” per quarter was sent from Redmond to Espoo to offset the unspecified Windows Phone licensing payments that flowed in the opposite direction.

This messy, technologically and culturally unsound arrangement only got worse when Stephen Elop, the former Microsoft exec now running Nokia, announced the switch to Windows Phone ten months before the company would end up shipping devices that ran the new (and problematic) OS. Unsurprisingly, Nokia’s revenue evaporated, leaving it with losses and a minuscule 5% market share (including Symbian-based smartphones).

Why Elop would make an announcement that effectively Osborned the business still mystifies and enrages Nokia supporters such as Tomi Ahonen who keeps calling for Elop’s head in long, irate blog posts. (In industry lore, to “Osborne” is to prematurely announce a product that so clearly obsoletes your current offering that it kills revenue. The suicidal maneuver is named in loving memory of portable computer pioneer Adam Osborne who destroyed his business by bragging that his next product would be so much better than the current one.)

I’m also mystified, but for another reason. I can’t fathom why Nokia picked Windows Phone instead of Android, whose explosive success was obvious even as early as 2010 when the company ditched its CEO. (I’m a little biased here as, in June 2010, I wrote a tongue-in-cheek piece titled Science Fiction: Nokia goes Android.)

Nokia’s excuses for not adopting Android were vague, ranging from “we don’t want to lose control of our destiny” to Microsoft being a “stronger partner” (read: They paid us). The potential-loss-of-destiny rhetoric falls flat, especially when you look at Android’s licensing terms and see the freedom Samsung and others enjoy with their interpretations of the platform. (We’ve heard that Nokia and Google once talked, but we don’t yet know why they never became highly visible partners.)

Today, investors say Nokia is worth about $15B, a tenth of its 2007 peak (I’m excluding the 2000 Internet Bubble number from the comparison). Even with a “25% acquisition premium”, a Nokia acquisition would cost Microsoft less than 10% of its capitalization. So, contrary to the charitable explanation offered to the WSJ by “persons familiar with the matter”, price couldn’t have been an obstacle. That leaves us with Nokia’s “slump”: Microsoft thinks Nokia would be unable to carry Windows Phone to an influential, sustainable market position.

Now, what?

Nokia’s revenue keeps sliding and, after a brief incursion into the black, the company keeps losing money. Is there anything in sight that will reverse the trend? It’s doubtful that the company can try for the high end by offering better hardware than Samsung, nor can it squeeze into a low end inhabited by official and unofficial Android clones that are swiftly killing off feature phones. This leaves Nokia’s future as an independent company in doubt and logically gives rise to more acquisition speculation.

And what will happen to Windows Phone? We now hear that Microsoft is paying developers as much as $100,000 to write or port an application to the platform. This is a rational move on Microsoft’s part, an attempt to create the critical mass that doesn’t seem able to build naturally. But it can also be seen as desperation, an admission that Windows Phone is having trouble gaining momentum, with developers and customers locked in a downward spiral.

One can’t imagine that Ballmer will call it a day and cede the field to Google and Apple. Personally, I admire his never-give-up attitude, always talking up the future, unfazed by past bold pronouncements gone wrong, but enthusiasm isn’t a strategy. And in the smartphone market, Microsoft doesn’t have many moves left. Regardless of the technical merits of its new mobile OS, momentum seems elusive; market forces that once worked against Windows competitors in the PC field now seem to confine Windows Phone to an insignificant market share against the two dominant players and their complementary business models.

We don’t know yet how Google’s acquisition of Motorola will fare, but the Android platform is healthy enough without it. The same can’t be said of Windows Phone without Nokia, which leads one to believe there will be a forced marriage between the once proud Finnish handset maker and an ambitious player, probably Chinese — with Microsoft providing a substantial dowry once again.

In the meantime, we can count on IDC to provide fresh numbers… for 2017.

JLG@mondaynote.com

Android vs. Apple. Market Share vs. Profit Share, Part 255

 

Conventional wisdom and badly reconstructed history can lead to seemingly comfortable but in reality fragile conclusions. Prepare to be confused. 

Ever since the Android platform emerged as the only real competitor to Apple’s iOS devices, we’ve been treated to a debate which I’ll oversimplify: If Apple makes all the money but Android gets all the volume, who will win? A cursory survey of tech journals and blogs would lead one to believe that the case is closed: Market Share trumps Profit Share. It always does.

So Apple should call it a day? I’m skeptical. Not about the conclusion — Market Share isn’t exactly a dark horse — but about the arguments that are trotted out. False memories of Apple’s past have become a template for its future. For example, a recent Wall Street Journal article ends thus [and, sorry, you need a subscription to see the entire article]:

“Unfortunately, Apple has seen this movie before. A generation ago, it also had a top product whose market share was undercut by cheap, inferior rivals. It hopes the iPhone’s story isn’t a sequel to the Mac’s.”

(I emailed the WSJ writer asking three simple, clarifying questions. No answer, but that’s standard practice, as witheringly described by Philip Elmer-DeWitt at the end of this post.)

I was there “a generation ago”. In 1981, when IBM introduced the IBM PC, I was starting Apple France. Big Blue had made startling changes to its old ways, boldly calling its new machine The Personal Computer (we thought the “The” was ours). In an even bolder move, IBM loosened its tie and its dress code, and tried (successfully) to speak to the “common man” by using a Charlie Chaplin imitator as a mascot.

An interesting choice, particularly when juxtaposed with the real Chaplin’s cine-commentary on “labor-saving devices”.

The original PC from IBM's Boca Raton group was a faithful homage to the Apple ][, right down to the cassette interface. But it wasn't a cheap imitation. There was one important difference: Where the Apple ][ used an 8-bit 6502 processor, IBM splurged on the much more powerful 16-bit Intel 8088.

Almost overnight, the pages of InfoWorld, previously replete with salivating reviews of Apple products, were filled with IBM PC articles. The new machine got a major boost with the launch of Lotus 1-2-3, a multi-function spreadsheet that became the gold standard for office applications, especially on desktops that sported hard disks and large color screens. Against the Apple ][, the IBM PC was a superior product -- and deftly marketed.

For the next few years, the Apple ][ family stumbled. The Apple ///, beset by early hardware failures, didn't answer the 16-bit question; it wasn't the modernization of the Apple ][ that the company had promised. The Apple IIGS was even worse: not compatible enough with the Apple ][ and not powerful enough to attract developers, particularly Bill Gates, who saw no potential for Microsoft applications.

That brings us to 1984. The Macintosh changed the game, right?

Hardly. At its coming out party, the Mac was two years behind schedule. I recall the "Mac's Last Slip" jibes at company meetings. No one would deny the obvious potential, the elegance, the innovative user interface, the clean square pixels on the bit-mapped screen, the fonts, the LaserWriter connection... But the Mac didn't support external hard drives until 1986, and it would be another year before internal disks, additional modularity, and a great Trinitron color monitor were added.

By that time, IBM had had the market to itself for half a decade, and its PC creation had morphed into the Wintel clone industry.

Contrary to the revisionist WSJ story, the "generation ago" Mac never had a market share to undercut. Apple's flagship product -- innovative, elegant, a generation ahead -- was a dreamer's machine. Down-to-earth market wisdom said the Mac was perfect for Stanford undergrads, but not serious enough for real business use. The common view was that application developers wouldn't be able to afford the investment in time and hardware. Starved of competitive software, the Macintosh was doomed to irrelevance and, ultimately, failure.

It almost happened, especially after Apple's desperate attempt to prop up platform share numbers by licensing Mac clones, a move that resulted in a brutal drop in Apple's margins. Market share vs. Profit Share...

The Mac was saved by Gil Amelio's unintentionally self-sacrificing decision to hand the Apple reins back to Steve Jobs. What followed was the most amazing turnaround our industry has ever seen, and it started with two controversial moves: Jobs rescinded the Mac OS license, and he made a deal with the Microsoft Devil. He convinced Gates' company to "invest" $150M in non-voting Apple shares and develop new Mac versions of the Internet Explorer browser and Office apps (although, in reality, the agreement was part of a settlement of an older IP dispute).

We know the rest of the story, including a meme-adverse fact: For close to seven years, the Mac has consistently gained market share at the expense of PC clones.

Since the advent of another flagship product, the iPhone this time, the riches-to-rags Mac meme has led to predictions of a similar fate: Death by drowning in a sea of "cheap" Android clones. Apple's high price ($650 per iPhone on average) gives too much low-end room for competitors. The price will be undercut, there will be a decline in unit share that, in turn, will lead to lower profits, lower developer interest, lower ability to invest in future products. The road to irrelevance is paved with high margins and low market share.

Never mind two differences. First, the iPhone has never lacked apps: 750,000 of them at last count. Second, it is immensely profitable, so much so that Apple is embarrassingly flush, with more cash than all its high-tech colleagues combined. The pundits won't accept evidence as an answer. Market Share will trump Profit Share. Why let facts cloud a good argument?

One is tempted to point to the race to the bottom that PC clone makers have experienced over the past decade. HP enjoys the largest Market Share of all PC makers, but it also "enjoys" less than 4% operating profit for its efforts. Meanwhile, Apple's margin is in the 25% range for its Mac line. That may not be as enjoyable as the 60% margin for the iPhone, but it's a solid business, particularly when you consider that the clone makers, HP and Dell foremost, are angling to get out of the business altogether. (See an earlier MN: Post-PC: Wall Street Likes the View.)
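
To see what those percentages mean in dollars, here's a minimal, illustrative Python sketch. The margin figures come from this paragraph; the $10B of revenue is an arbitrary common yardstick, not either company's actual sales:

    # Illustrative only: operating profit on the same $10B of revenue.
    # Margins are the approximate figures quoted above; revenue is made up.
    revenue = 10e9

    margins = {
        "HP PCs": 0.04,        # "less than 4% operating profit"
        "Apple Macs": 0.25,    # "in the 25% range"
        "Apple iPhone": 0.60,  # "60% margin"
    }

    for product, margin in margins.items():
        profit = revenue * margin
        print(f"{product}: ${profit / 1e9:.1f}B operating profit")

Same revenue, an order of magnitude apart in profit ($0.4B for HP's PCs vs. $6B for the iPhone): volume leadership and profit leadership are very different prizes.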

Returning to the iOS vs Android debate, I will state an opinion -- not to be confused with a prediction, let alone The Truth: I think the vertical simplicity of Apple's business will tilt the field in its favor as the complicated Android world devolves into anarchy. Apple vs Google isn't Apple vs Microsoft/Intel/IBM.

Let's back up a bit. Google's 2005 acquisition of Android was a visionary move. (Some say Google's vision was sharpened by Eric Schmidt's presence on Apple's Board as the company worked on the future iPhone. Jobs was furious about Google's decision and summarily asked Schmidt to leave.) Android's unprecedented growth -- more than 50% share of the smartphone market in the US, and even more worldwide -- is a testament to the "open" approach. Google gives away the Open Source Android OS; processors are another kind of "open", custom-designed under ARM licenses open to all payers.

But Android is a "cushion shot", an indirect way for Google to make money. Android is a Trojan horse that infects smartphones so it can install services that collect the user data that feeds Google's true business: advertising.

Now, Google faces several problems. Android's openness leads to incompatibilities between devices, a problem for developers that didn't happen under Microsoft's rule in the PC era. Worse (for Google), the many diverging versions of Android (a.k.a. forks) -- especially those created in China -- carry no Google services. They harvest no data and so they bring no advertising revenue potential back to Google.

This is clearly a concern for Google, so much so that the company now offers "pure" Android smartphones by Samsung (for $650) and HTC (for $599) on its Google Play site.

On the other hand, Android 2013 is a mature, stable OS. It isn't Windows 95, which was nothing more than a shell bolted on top of DOS. While the Mac's system software wasn't fully developed when it first came out, many saw it as superior -- or potentially superior -- to Microsoft's OS. Android is a tougher competitor than Windows was at the same age.

Then there is Google's subsidiary Motorola Mobility and the relationship with Samsung, the most powerful Android handset maker. As discussed last week, Motorola's stated intention is to push Android phone prices well below the $650 (unsubsidized) level. Is Samsung in a position to wag the Android dog? And if so, how will it react to Motorola's moves?

Let's not forget "the small matter of execution", one that might prove more important than lofty "strategic" considerations. And, to further complicate predictions, we have the herd's tendency to assume Company X will make all the mistakes while its competitors will play a perfect game.

Confused? Then I have accomplished one of my goals, to show how unhelpful the old bromides are when trying to guess what will happen next.

JLG@mondaynote.com

PS: I'd be remiss if I didn't direct you to recently discovered articles by John Kirk, who calls himself a recovering attorney and indeed writes tightly reasoned posts on Techpinions. I'll whet your appetite with two quotes. One is from Does The Rise Of Android's Market Share Mean The End of Apple's Profits? [emphasis mine]:

Steve Jobs wanted, and Apple wants, market share. But they want the RIGHT market share. Apple wants customers who are willing to pay for their products. And Apple wants customers who are good for their platform. In other words, Apple wants market share in their target demographic. Based on the fact that Apple is taking in 72% of the mobile phone profits with only 8% or 9% of the market share, it sure sounds like they’ve aquired the right market share to me.

Does the rise of Android’s market share mean the end of Apple’s profits? Hardly. You can argue as loudly as you like that developers and profit share must necessarily follow market share. But the facts will shout you down.

The other is from 4 Mobile Business Models, 4 Ways To Keep Score, where he concludes:

And if you’re going to prophesy that market share alone gives Google data that will someday, somehow, be worth something to someone, then you need to go back and re-read how the “razor-and-blades” business model is scored.

What we desperately need in analyzing mobile computing is far more attention paid to profits and far less attention paid to prophets.

—————————————–