About Jean-Louis Gassée


Posts by Jean-Louis Gassée:

Macintel: The End Is Nigh

When Apple announced its 64-bit A7 processor, I dismissed the speculation that this could lead to a switch away from Intel chips for the Macintosh line to a homegrown “desktop-class” chip. I might have been wrong.

“I don’t know exactly when, but sooner or later, Macs will run on Apple-designed ARM chips.” Thus spake Matt Richman in a 2011 blog post titled “Apple and ARM, Sitting in a Tree”. Richman explained why, after a complicated but ultimately successful switch from PowerPC chips to Intel processors in 2005, Apple would make a similar switch, this time to ARM-based descendants of the A4 chip designed by Apple and manufactured by Samsung.

Cost is the first reason invoked for the move to an An processor:

“Intel charges $378 for the i7 chip in the new high-end 15 inch MacBook Pro. They don’t say how much they charge for the i7 chip in the low-end 15 inch MacBook Pro, but it’s probably around $300. …When Apple puts ARM-based SoC’s in Macs, their costs will go down dramatically. ”

We all know why Intel has been able to command such high prices. Given two microprocessors with the same manufacturing cost, power dissipation, and computing power, but where one runs Windows and the other doesn’t, which chip will achieve the higher market price in the PC market? Thus, Intel runs the table: It tells clone makers which new x86 chips they’ll receive, when they’ll receive them, and, most important, how much they’ll cost. Intel’s margins depend on it.

ARM-based processors, on the other hand, are inherently simpler and therefore cost less to make. Prices are driven even lower because of the fierce competition in the world of mobile devices, where the Wintel monopoly doesn’t apply.

[Image: Apple’s A7 chip]

Cost is the foremost consideration, but power dissipation runs a close second. The aging x86 architecture is beset by layers of architectural silt accreted from a succession of additions to the instruction set. Emerging media formats demand new extensions, while obsolete constructs must be maintained for the sake of Microsoft’s backward compatibility religion. (I’ll hasten to say this has been admirably successful for more than three decades. The x86 nickname used to designate Wintel chips originates from the 8086 processor introduced in 1978 – itself designed for backward compatibility with the earlier 8080…)
Because of this excess baggage, an x86 chip needs more transistors than its ARM-based equivalent, and thus it consumes more power and must dissipate more heat.

Last but not least, Richman quotes Steve Jobs:

“I’ve always wanted to own and control the primary technology in everything we do.”

Apple’s leader has often been criticized for being too independent and controlling, for ignoring hard-earned industry wisdom. Recall how Apple’s decision to design its own processors was met with howls of protest, accusations of arrogance, and the usual predictions of doom.

Since then, interest in another Grand Processor Switch has been alive and well. Googling “Mac running on ARM” gets you close to 10M results. (When you Bing the same query, you get 220M hits — 22x Google’s results. SEO experts are welcome to comment.)

Back to the future…

In September 2013, almost a year ago already, Apple introduced the 64-bit A7 processor that powers new iPhones and iPads. The usual suspects pooh-poohed Apple’s new homegrown CPU, and I indulged in a little fun skewering the microprocessor truthers: 64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months. Towards the end of the article, unfortunately, I dismissed the speculation that Apple An processors would someday power the Mac. I cited iMacs and Mac Pros — the high end of the product line — as examples of what descendants of the A7 couldn’t power.

A friend set me straight.

In the first place, Apple’s drive to own “all layers of the stack” continues unabated years after Steve’s passing. As a recent example, Apple created its own Swift programming language that complements its Xcode IDE and Clang/LLVM compiler infrastructure. (For kremlinology’s sake I’ll point out that there is an official Apple Swift blog, a first in Apple 2.0 history if you exclude the Hot News section of the apple.com site. Imagine what would happen if there were an App Store blog… But I digress.)

Secondly, the Mac line is suspended, literally, by the late delivery of Intel’s Broadwell x86 processors. (The delay stems from an ambitious move to a bleeding edge fabrication technology that shrinks the basic building block of a chip to 14 nanometers, down from 22 nanometers in today’s Haswell chips.) Of course, Apple and its An semiconductor vendor could encounter similar problems – but the company would have more visibility, more control of its own destiny.

Furthermore, it looks like I misspoke when I said an An chip couldn’t power a high-end Mac. True, the A7 is optimized for mobile devices: battery savings, a small memory footprint, graphics for a screen smaller than an iMac’s or a Retina MacBook Pro’s. But having shown its muscle in designing a processor for the tight constraints of mobile devices, why would we think that the team that created the most advanced smartphone/tablet processor couldn’t now design a 3GHz A10 machine optimized for “desktop-class” (a term used by Apple’s Phil Schiller when introducing the A7) applications?

If we follow this line of reasoning, the advantages of ARM-based processors vs. x86 devices become even more compelling: lower cost, better power dissipation, natural integration with the rest of the machine. For years, Intel has argued that its superior semiconductor design and manufacturing technology would eventually overcome the complexity downsides of the x86 architecture. But that “eventually” is getting a bit stale. Other than a few showcase design wins that have never amounted to much in the real world, x86 devices continue to lose to ARM-derived SoC (System On a Chip) designs.

The Mac business is “only” $20B a year, while iPhones and iPads generate more than 5 times that. Still, $20B isn’t chump change (HP’s Personal Systems Group generates about $30B in revenue), and unit sales are up 18% in last June’s numbers vs. a year ago. Actually, Mac revenue ($5.5B) approaches the iPad’s flagging sales ($5.9B). Today, an 11” MacBook Air costs $899 while a 128GB iPad Air goes for $799. What would happen to the cost, battery life, and size of an A10-powered MacBook Air? And so on for the rest of the Mac line.

By moving to ARM, Apple could continue to increase its PC market share and scoop much of the profits – it currently rakes in about half of the money made by PC makers. And it could do this while catering to its customers in the Affordable Luxury segment who like owning both an iPad and a Mac.

While this is entirely speculative, I wonder what Intel’s leadership thinks when contemplating a future where its most profitable PC maker goes native.

JLG@mondaynote.com

———-

Postscript: The masthead on Matt Richman’s blog tells us that he’s now an intern at Intel. After reading several of his posts questioning the company’s future, I can’t help but salute Intel management’s open mind and interest in tightly reasoned external viewpoints.

And if it surprises you that Richman is a “mere” intern, be aware that he was all of 16 years old when he wrote the Apple and ARM post. Since then, his blog has treated us to an admirable series of articles on Intel, Samsung, Blackberry, Apple, Washington nonsense – and a nice Thank You to his parents.

 

Microsoft’s New CEO Needs An Editor

 

Satya Nadella’s latest message to the troops – and to the world – is disquieting. It lacks focus and specifics; if not soon sharpened, his words will worry employees, developers, customers, and even shareholders.

As I puzzled over the public email Microsoft’s new CEO sent to his troops, Nicolas Boileau’s immortal dictum came to mind:

Whatever is well conceived is clearly said,
And the words to say it flow with ease.

Clarity and ease are sorely missing from Satya Nadella’s 3,100 plodding words, which were supposed to paint a clear, motivating future for 127,000 Microsoftians anxious to know where the new boss is leading them.

[Photo: Satya Nadella at LeWeb Paris 2013]

Nadella is a repeat befuddler. His first email to employees, sent just after he assumed the CEO mantle earlier this year, was filled with bombastic and false platitudes:

“We are the only ones who can harness the power of software and deliver it through devices and services that truly empower every individual and every organization. We are the only company with history and continued focus in building platforms and ecosystems that create broad opportunity.”

(More in the February 9th, 2014 Monday Note)

In his latest message, Nadella treats us to more toothless generalities:

“We have clarity in purpose to empower every individual and organization to do more and achieve more. We have the right capabilities to reinvent productivity and platforms for the mobile-first and cloud-first world. Now, we must build the right culture to take advantage of our huge opportunity. And culture change starts with one individual at a time.”

Rather than ceding to the temptation of quoting more gems, let’s turn to a few simple rules of exposition.

First, the hierarchy of ideas:

[Diagram: the four layers, from Identity/Culture down to Goals, Strategies, and Plan]

This admittedly simplistic diagram breaks down an enterprise into four layers and can help diagnose thinking malfunctions.

The top layer deals with the Identity or Culture — I use the two terms interchangeably as one determines the other. One level down, we have Goals, where the group is going. Then come the Strategies or the paths to those goals. Finally, we have the Plan, the deployment of troops, time, and money.

The arrow on the left is a diagnostic tool. It reminds us that as we traverse the diagram from Identity to Plan, the number of words that we need to describe each layer increases. It should only take a few words to limn a company’s identity (Schlumberger, oil services; Disney, family entertainment); describing the company’s goals will be just a tad more verbose (“in 5 years’ time we’ll achieve $X EPS, Y% revenue growth and Z% market share”), and so on.

The arrow also tells us that the “rate of change” — the frequency at which a description changes — follows the same trajectory. Identity should change only very slowly, if ever. At the other end, the plan will need constant adjustment as the company responds to rapidly shifting circumstances, the economy, the competition.

Using the old Microsoft as an example:
— Identity: We’re the emperor of PC software
— Goals: A PC on every desk and home – running our software
— Strategy: Couple the Windows + Office licenses to help OEMs see the light; Embrace and Extend Office competitors.
— Plan: Changes every week.

Returning to Nadella’s prose, can we mine it for words to fill the top three layers? Definitely not.

Second broken rule: Can I disagree? Any text that relies on platitudes doesn’t say much at all; in a message-to-the-troops that’s supposed to give direction, irrefutable statements are deadly. Some randomly selected examples in an unfortunately overabundant field:

“[…] we will strike the right balance between using data to create intelligent, personal experiences, while maintaining security and privacy.”

or…

“Together we have the opportunity to create technology that impacts the planet.”

 or…

“Obsessing over our customers is everybody’s job.”

If I’m presented with statements I cannot realistically disagree with – We Will Behave With Utmost Integrity – I feel there’s something wrong. If it’s all pro and no con, it’s a con.

There are other violations, but to avoid committing the tl;dr infraction myself, I’ll stop with one last reproach for Nadella: Never make a general statement without immediately following it with the sacramental “For Example”.

For example:

“[…] we will modernize our engineering processes to be customer-obsessed, data-driven, speed-oriented and quality-focused.”

… would be more believable if followed by:

“Specifically, we’ll ask each software engineer to spend two days every month visiting customers on even months, and third party developers on odd ones. They will also spend one day per quarter seconding Customer Service Representatives over our phone banks.”

Satya Nadella is an unusually intelligent man, a Mensa-caliber intellect, and well-read: He quotes Nietzsche, Oscar Wilde, and Rainer Maria Rilke. Why, then, does he repeatedly break basic storytelling rules?

Two possible explanations come to mind.

First, because he’s intelligent and literate, he forgot to use an unforgiving editor. ‘Chief, you really want to email that?’ Or, if he used an editor, he was victimized by a sycophantic one. ‘Satya, you nailed it!’

Second, and more likely, Nadella speaks in code. He’s making cryptic statements that are meant to prepare the troops for painful changes. Seemingly bland, obligatory statements about the future will decrypt into wrenching decisions:

“Organizations will change. Mergers and acquisitions will occur. Job responsibilities will evolve. New partnerships will be formed. Tired traditions will be questioned. Our priorities will be adjusted. New skills will be built. New ideas will be heard. New hires will be made. Processes will be simplified. And if you want to thrive at Microsoft and make a world impact, you and your team must add numerous more changes to this list that you will be enthusiastic about driving.”

In plainer English: Shape up or ship out.

Tortured statements from CEOs, politicians, coworkers, spouses, or suppliers, in no hierarchical order, mean one thing: I have something to hide, but I want to be able to say I told you the facts.

With all this in mind, let’s see if we can restate Nadella’s message to the troops:

This is the beginning of our new FY 2015 – and of a new era at Microsoft.
I have good news and bad news.
The bad news is the old Devices and Services mantra won’t work.

For example: I’ve determined we’ll never make money in tablets or smartphones.

So, do we continue to pretend we’re “all in” or do we face reality and make the painful decision to pull out so we can use our resources – including our integrity – to fight winnable battles? With the support of the Microsoft Board, I’ve chosen the latter. We’ll do our utmost to minimize the pain that will naturally arise from this change. Specifically, we’ll offer generous transition arrangements in and out of the company to concerned Microsoftians and former Nokians.

The good news is we have immense resources to be a major player in the new world of Cloud services and Native Apps for mobile devices. We let the first innings of that game go by, but the sting energizes us. An example of such commitment is the rapid spread of Office applications – and related Cloud services – on any and all mobile devices. All Microsoft Enterprise and Consumer products/services will follow, including Xbox properties.

I realize this will disrupt the status quo and apologize for the pain to come. We have a choice: change or be changed.

Stay tuned.

Or words (about 200) to that effect.

In parting, Nadella would do well to direct his attention to another literate individual, John Kirk, whose latest essay, Microsoft Is The Very Antithesis Of Strategy, is a devastating analysis that compares the company’s game plan to the advice given by Sun Tzu, Liddell Hart, and Carl von Clausewitz, writers who are more appropriate to the war that Microsoft is in than the authors Microsoft’s CEO seems to favor.

The CEO’s July 10th email promises more developments, probably around the July 22nd Earnings release. Let’s hope he’ll offer sharper and shorter words to describe Microsoft’s entry into the Cloud First – Mobile First era.

JLG@mondaynote.com

The Network Is the Computer: Google Tries Again

 

All you need is a dumb device attached to a smart network. It’s an old idea that refuses to die despite repeated failures. Now it’s Google’s turn.

In the late 1980s, Sun Microsystems used a simple, potent war cry to promote its servers: The Network Is The Computer. Entrust all of your business intelligence, computing power, and storage to Sun’s networked SPARC systems and you can replace your expensive workstation with a dumb, low cost machine. PCs are doomed.

Nothing of the sort happened, of course. Sun’s venture was disrupted by inexpensive servers assembled from the PC organ bank and running Open Source software.

PCs prospered, but that didn’t dampen the spirits of those who would rid us of them.

Fast-forward to the mid-1990s and the thought re-emerges in a new guise: The Browser Will Be The Operating System (a statement that’s widely misattributed to Marc Andreessen, who holds a more nuanced view on the matter). The browser will serve as a way to access networked services that will process your data. The actual OS on your device, what sort of apps it can run — or even if it can run any (other than a browser) — these questions will fade into insignificance.

Soon after, Oracle took a swing at the Network is the Computer piñata by defining the Network Computer Reference Profile (or NCRP), a specification that focused on network connectivity and deemphasized local storage and processing. It was understood, if not explicitly stated, that an NCRP device must be diskless. A number of manufacturers offered NCRP implementations, including Sun (which would ultimately be acquired by Oracle) with its JavaStation. But despite Larry Ellison’s strongly expressed belief that Network Computers would rid the industry of the evil Microsoft, the effort went nowhere.

Today, The Network Is The Computer lives on under the name Cloud Computing, the purest example of which is a Google Chromebook running on Chrome OS. (And thus, in a sense, Sun’s idea lives on: Google’s first investor was Sun co-founder Andy Bechtolsheim.)

So far, Chromebooks have shown only modest penetration (a topic for musings in a future Monday Note), but despite the slow adoption, Google has become one of the largest and most important Cloud Computing companies on the planet. Combine this with the Android operating system that powers more than a billion active devices, and the question arises: Could Google bring us to the point where The Network Really Is The Computer?

It’s a complicated question, partly because the comparison with the previous generation of devices, traditional PCs, can (excuse me) cloud the view.

Unlike PCs, smartphones rely on an expensive wireless infrastructure. One can blame the oligopolistic nature of the wireless carrier industry (in English: too few companies to have a really competitive market), but that doesn’t change the simple fact that wireless bandwidth isn’t cheap. The dumber the device, the more it has to rely on the Cloud to process and store data, and the more bandwidth it will consume.

Let’s visit Marc Andreessen’s actual words regarding Network-As-Computer, from a 2012 Wired interview [emphasis mine]:

“[I]f you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model.”

If we interject, on Andreessen’s behalf, that wireless connectivity must be as inexpensive as it is ubiquitous, then we begin to see the problem. The “data hunger” of media intensive apps, from photo processing to games, shows no sign of slowing down. And when you consider the wireless bandwidth scarcity that comes from the rapid expansion of smartphone use, it seems that conditions are, yet again, conspiring against the “dumb device” model.

The situation is further confounded when we consider that Google’s business depends on delivering users to advertisers. Cloud computing will help drive down the cost of Android handsets and thus offer an even wider audience to advertisers…but these advertisers want a pleasant and memorable UI, they want the best canvas for their ads. When you dumb down the phone, you dumb down the ad playback experience.

In a recent blog post titled The next phase of smartphones, Benedict Evans neatly delineates the two leading “cloud views” by contrasting Apple and Google [emphasis mine]:

“Apple’s approach is about a dumb cloud enabling rich apps while Google’s is about devices as dumb glass that are endpoints of cloud services…”

But Google’s “dumb glass” can’t be too dumb.  For its mobile advertising business, Google needs to “see” everything we do on our smartphones, just like it does on our PCs. Evans intimates as much:

“…it seems that Google is trying to make ‘app versus web’ an irrelevant discussion – all content will act like part of the web, searchable and linkable by Google.”

Native apps running on a “really smart” device are inimical to Google’s business model. To keep the advertisers happy, Google would have to “instrument” native apps, insert deep links that will feed its data collection activities.

This is where the Apple vs. Google contrast is particularly significant: iOS apps are not allowed to let advertisers know what we are doing – unless explicitly authorized. Apple’s business model doesn’t rely on peddling our profile to advertisers.

In the end, I wonder if Google really believes in the “dumb glass” approach to smartphones. Perhaps, at least for now, The Computer will remain The Computer.

JLG@mondaynote.com

 

iWatch Thoughts

 

Unlike the almost forgotten Apple TV set, the iWatch might be a real product. But as rumors about the device intensify, the scuttlebutt conveniently skirts key questions about the product’s role.

As reverberations of Apple’s Developer Conference begin to die down, the ever-dependable iWatch has offered itself as the focus of another salvo of rumors and speculation. Actually, there’s just one rumor — a Reuters “report” that Quanta Computer will begin manufacturing the iWatch in July — but it was enough to launch a quick-fire series of echoes that bounced around the blogosphere. Not to be outdone, the Wall Street Journal added its own tidbits:

“Apple is planning multiple versions of a smartwatch…[that] will include more than 10 sensors to track and monitor health and fitness data, these people said.”

(“These people” are, of course, the all-knowing “people familiar with the matter”.)

The iWatch hubbub could be nothing more than a sort of seasonal virus, but this time there’s a difference.

At the WWDC three weeks ago, Apple previewed HealthKit, a toolkit iOS developers can use to build health and fitness related applications. HealthKit is a component of the iOS 8 release that Apple plans to ship this fall in conjunction with the newest iDevices. As an example of what developers will be able to do with HealthKit, Apple previewed Health, an application that gives you “an easy-to-read dashboard of your health and fitness data.”
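To make the HealthKit opening concrete, here is a minimal sketch, in Apple’s new Swift, of how a third-party fitness app might ask the user’s permission to read and write health data. The quantity types and the authorization call are HealthKit’s published API; the surrounding scenario is my own illustration, not anything Apple has demonstrated.

    import HealthKit

    let store = HKHealthStore()

    // Data this hypothetical app wants to write (steps) and read (steps, heart rate).
    let stepCount = HKObjectType.quantityType(forIdentifier: .stepCount)!
    let heartRate = HKObjectType.quantityType(forIdentifier: .heartRate)!

    // HealthKit never exposes data without the user's explicit consent.
    store.requestAuthorization(toShare: [stepCount], read: [stepCount, heartRate]) { granted, error in
        guard granted else { return }
        // From here the app can save HKQuantitySample objects and run queries;
        // the user's Health dashboard will reflect whatever gets recorded.
    }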

The rumor that Quanta will soon begin “mass production” of the iWatch — the perfect vehicle for health-and-fitness apps — just became a bit more tantalizing… but there are still a number of questions that are left unanswered.

Foremost is iWatch “independence”. How useful will it be when it’s running on its own, unconnected to a smartphone, tablet, or conventional PC? My own guess: Not very useful. Unless Apple plans to build a monstrosity of a device (not likely), the form factor of our putative iWatch will dictate a small battery, which means the processor will have to be power-conserving and thus unable to run iPhone-caliber apps. Power conservation is particularly important if Apple wants to avoid jibes of the ‘My iWatch ran out of battery at the end of the day’ type. Such occurrences, already annoying with a smartphone, could be bad publicity for a “health and fitness” watch.

So, let’s settle for a “mostly dependent” device that relies on a more robust sibling for storage, analysis, and broad overview.

That raises another question: Will the iWatch be part of Apple’s ecosystem only, or will it play nice with Windows PCs or even Android smartphones? If we take Apple’s continued tolerance of the Android version of Beats Music (at least so far) as an example, the notion of an Apple device communicating with a member of the Android tribe is less heretical than it once was. Again, my own guess: Initially, the iWatch will be restricted to the Apple ecosystem. We’ll see what happens if the device catches on and there’s a demand for a “non-denominational” connection.

As for what role the iWatch will play in the ecosystem, those of us ancient enough might recall the example set by the Smart Personal Objects Technology (SPOT) that Microsoft launched a decade ago. No need to repeat that bit of doomed history by targeting too many platforms, by trying to make “Smart Objects” omniscient. Instead, Apple is likely, as it insisted at its early June WWDC, to tout its Continuity ethos: Let each device do what it does best, but don’t impede the flow of information and activities between devices. In plainer English: Hybrid devices are inferior.

So, besides telling time (perhaps in Yosemite’s new system font, a derivative of Helvetica Neue) what exactly will the iWatch do? The first part of the answer is easy: It will use its sensors to collect data of interest. We’ve already seen what the M7 motion processor and related apps can do in an iPhone 5S; now imagine data that has much finer granularity, and sensors that can measure additional dimensions, such as altitude.
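For a sense of what “collecting data of interest” already looks like on an M7-equipped iPhone, here is a small Swift sketch built on Core Motion’s pedometer interface; a wrist-worn device with finer-grained sensors would presumably feed richer versions of the same stream. The API calls are Core Motion’s; the usage is my own illustration.

    import CoreMotion

    let pedometer = CMPedometer()

    // The M7 motion coprocessor keeps counting steps even while the app sleeps.
    if CMPedometer.isStepCountingAvailable() {
        pedometer.startUpdates(from: Date()) { data, error in
            guard let data = data, error == nil else { return }
            print("Steps so far: \(data.numberOfSteps)")
            if let distance = data.distance {
                print("Estimated distance: \(distance.doubleValue) meters")
            }
        }
    }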

Things quickly get more complicated when we turn to the “other side of the skin”. Heart rhythm and blood pressure measurements look banal, but they shouldn’t be taken for granted, especially if one wants medically reliable data. Oximetry, the measurement of your oxygen saturation, looks simple — you just slide a cap onto your fingertip — but that cap is actually transmitting light waves through your finger. A smartwatch can’t help the nearly 18 million US citizens who suffer from Type II Diabetes (a.k.a. Adult Onset Diabetes) because there are no non-invasive methods for measuring blood sugar. And even as the technical complications of collecting health data are surmounted, device makers can find themselves skirting privacy issues and running afoul of HIPAA rules.

The iWatch will also act as a receiver of data from a smartphone, tablet, or PC. This poses far fewer problems, both technical and ethical, than health monitoring, but it also offers few opportunities. Message notifications and calendar alerts are nice but they don’t create a new category, and they certainly haven’t “moved the needle” for existing smartwatches. In a related vein, one can imagine bringing the iWatch close to one’s face and speaking to Siri, asking to set up a calendar event, or sending a text message… but, as with the trend towards larger smartphone screens, one must exercise care when fantasizing about iWatch use cases.

Then we have the question of developers and applications — where’s the support for iWatch app creators? When the iOS App Store opened in 2008, the iPhone became an app phone and solidified the now universal genre. What iWatch rumors fail to address is the presence or absence of an iWatch SDK, of iWatch apps, and of a dedicated App Store section.

Meanwhile, Google has already announced its Android Wear platform and has opened a “Developer Preview” program. Conventional wisdom has it that the Google I/O convention next week will focus on wearables. Samsung has been actively fine-tuning and updating the software for its line of Galaxy Gear smart watches (the watches originally ran on an Android derivative but now use Tizen – until next week).

Finally, we have the question of whether an iWatch will sell in numbers that make the endeavor worthwhile. As the previously-mentioned WSJ story underlines, the smartwatch genre has had a difficult start:

“[...] it isn’t clear how much consumers want the devices. Those on the market so far haven’t sold well, because most wearable devices only offer a limited set of features already found on a smartphone.”

The most ambitious rumors project 50 million iWatches sold in the first 12 months. I think that’s an unrealistic estimate, but if a $300 iWatch can sell at these numbers, that’s $15B for the year. This seems like a huge number until you compare it to a conservative estimate for the iPhone:  50 million iPhones at $650 generates $32B per quarter.

Taking a more hopeful view, let’s recall the history of the iPad. It was a late entrant in the tablet field but it coalesced and redefined the genre. Perhaps the iWatch will establish itself as The Smartwatch Done Right. But even if it succeeds in this category-defining role, it won’t have the power and flexibility or the huge number of apps of a true trouser pocket computer. As a result, the iWatch will be part of the supporting cast, not a first order product like the iPhone. There’s nothing wrong with that — it might help make high-margin iPhones even more attractive — but it won’t sell in numbers, dollar volume, or profit comparable to the iPhone or iPad. The iWatch, if and when announced, might be The Next Big Thing – for the few weeks of a gargantuan media feast. But it won’t redefine an industry the way PCs, smartphones and tablets did.

JLG@mondaynote.com

 

WWDC: iOS 2.0, the End of Silos

 

Apple tears down the walls between iOS applications, developers rejoice, and Tim Cook delivers a swift kick to Yukari Iwatani Kane’s derrière – more on that at the end.

In this year’s installment of the World Wide Developers Conference, Apple announced a deluge of improvements to their development platforms and tools, including new SDKs (CloudKit, HomeKit, HealthKit); iCloud Drive, the long awaited response to Dropbox; and Swift, an easy-to-learn, leak-free programming language that could spawn a new generation of Apple developers who regard Objective-C as esoteric and burdensome.

If this sounds overly geeky, let’s remind ourselves that WWDC isn’t intended for buyers of Apple products. It’s a sanctuary for people who write OS X and iOS applications. This explains Phil Schiller’s absence from the stage: Techies don’t trust marketing people. (Unfortunately, the conference’s ground rules seem to have been lost on some of the kommentariat.)

The opening keynote is a few breaths short of 2 hours. If you’d rather not drink from the proverbial fire hydrant, you can turn to summaries from Federico Viticci in MacStories, Andrew Cunningham in Ars Technica (“Huge for developers. Massive for everyone else.”), or you can look for reviews, videos, and commentary through Apple’s new favorite search engine, DuckDuckGo, “The search engine that doesn’t track you”.

For today, I’ll focus on the most important WWDC announcement: iOS applications have been freed from the rigid silos, the walls that have prevented them from talking to each other. Apple developers can now write extensions to their apps and avail themselves of the interprocess facilities that they expect from a 21st century OS.

A bit of history will help.

When the first iPhone is shipped in late June, 2007, iOS is incomplete in many respects. There’s no cut and paste, no accented characters, and, most important, there are no native apps. Developers must obey Steve Jobs’ dictate to extend the iPhone through slow and limited Web 2.0 apps. In my unofficial version numbering, I call this iOS 0.8.

The Web 2.0 religion doesn’t last long. An iOS Software Development Kit (SDK) is announced in the fall and released in February, 2008. When the iTunes-powered App Store opens its doors in July, the virtual shelves are (thinly) stocked with native apps. This is iOS 1.0.

Apple developers enthusiastically embrace the platform and the App Store starts its dizzying climb from an initial 500 apps in 2008 to today’s 1.2 million apps and 75B cumulative downloads.

However, developers’ affections don’t extend to Apple’s “security state”, the limits imposed on their apps in the name of security and simplicity. To be sold in the App Store, an app must agree to stay confined in its own little sandbox, with no way to communicate with other apps.

According to Apple dogma, this limitation is a good thing because it prevents the viruses and other malware that have plagued older operating systems and overly-trusting apps. One wrong click and your device is visited by rogue code that wreaks havoc on your data, yields control to remote computers, or, worst of all, sits silently and unnoticed while it spies on your keystrokes. No such thing on iOS devices. The prohibition against inter-application exchange vastly reduces the malware risk.

This protection comes with a cost. For example, when you use a word processor or presentation tool on a personal computer, you can grab text and images of any provenance and drop them into your project. On the iOS version of Pages, you can only see other Pages documents — everything else is out of sight and out of reach.

The situation becomes even more galling when developers notice that some of Apple’s in-house apps — iMessage, Maps, Calendar with Contacts — are allowed to talk among themselves. To put it a little too simply, Apple engineers can write code that’s forbidden to third party developers.

Apple’s rules for app development and look-and-feel are famously (and frustratingly) rigid, but the company is occasionally willing to shed its dogma. In 2013, for example, skeuomorphism was abandoned…do any of us miss the simulated leather and torn bits of paper on the calendar?

With last week’s unveiling of the new version of iOS, a much more important dogma has been tossed into the dustbin: An app can now reach beyond its sandbox. Apps can interconnect, workflows are simplified, previously unthinkable feats are made possible.

This is the real iOS 2.0. For developers, after the momentous 2008 opening of the App Store that redefined the smartphone, this is the second major release.

With the new iOS, a third-party word processor developer can release his app from its sandbox by simply incorporating the Document Picker:

“The document picker feature lets users select documents from outside your app’s sandbox. This includes documents stored in another app’s iCloud container or documents provided by a third-party extension.”

Users of the word processor will be able to see and incorporate all files, regardless of how they were created or where they’re stored (within the obvious physical limits). This is a welcome change from today’s frustratingly constricted situation.
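Here is roughly what that looks like from the developer’s side: a minimal Swift sketch of a word processor presenting the Document Picker and receiving a file that may live in another app’s iCloud container. The picker and delegate calls are UIKit’s; the class name and the plain-text file type are illustrative choices of mine.

    import UIKit
    import MobileCoreServices

    class EditorViewController: UIViewController, UIDocumentPickerDelegate {

        // Present a picker that can see documents outside this app's sandbox.
        func importDocument() {
            let picker = UIDocumentPickerViewController(documentTypes: [kUTTypeText as String], in: .import)
            picker.delegate = self
            present(picker, animated: true)
        }

        // The system hands back a URL to the chosen file, wherever it is stored.
        func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
            guard let url = urls.first, let text = try? String(contentsOf: url) else { return }
            print("Imported \(text.count) characters from \(url.lastPathComponent)")
        }
    }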

iOS Extensions, a feature that lets applications offer their own services to other apps, played well when demonstrated by Craig Federighi, Senior VP of Apple Software:

“Federighi was able to easily modify Safari by adding a sharing option for Pinterest and a translation tool courtesy of Bing. Users will also be able to apply photo filters from third-party apps and use document providers like Box or OneDrive…”
Business Insider, Why You Should Be Excited for Extensions in iOS 8 

Prominent among the beneficiaries of iOS Extensions are third-party keyboard designers. Today, I watch with envy as my Droid compatriots Swype a quick text message. The keyboard layouts and input methods on my iPhone are limited to the choices Apple gives me — and they don’t include Swype. Tomorrow, developers will be able to augment Apple’s offerings, including keyboards that are designed for specific apps.
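Under the hood, a custom keyboard is just an app extension whose principal class subclasses UIInputViewController; its only bridge into the host app’s text field is the textDocumentProxy object. A deliberately skeletal Swift sketch (the one-button “layout” is a toy of my own, not a real keyboard):

    import UIKit

    class SimpleKeyboardViewController: UIInputViewController {

        override func viewDidLoad() {
            super.viewDidLoad()
            // A real keyboard would lay out a full set of keys; one button keeps the sketch short.
            let key = UIButton(type: .system)
            key.setTitle("hello", for: .normal)
            key.frame = CGRect(x: 16, y: 16, width: 120, height: 44)
            key.addTarget(self, action: #selector(insertWord), for: .touchUpInside)
            view.addSubview(key)
        }

        // The system swaps this keyboard out for secure (password) fields, per Apple's rules.
        @objc private func insertWord() {
            textDocumentProxy.insertText("hello ")
        }
    }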

As expected, developers have reacted enthusiastically to the end of silo hell. Phil Libin, Evernote’s CEO, sums up developer sentiment in the Ars Technica review:

“We’re most excited about extensions, widgets, TouchID APIs and interactive notifications. We’re all over all of that…This is a huge update for us. It feels like we got four out of our top five most wanted requests!”

Now, for the mandatory “To Be Sure” paragraph…

None of this is free. I don’t mean in the financial sense, but in terms of complexity, restrictions, adapting to new ways of doing old things as well as to entirely fresh approaches. While the relaxation of Apple’s “security state” strictures opens many avenues, it also heightens malware risk, something Apple is keenly aware of. In some cases the company will put the onus on the user, asking us to explicitly authorize the use of an extension. In other situations, as Charles Arthur points out in his WWDC article for The Guardian, Apple will put security restrictions on custom keyboards. Quoting Apple’s prerelease documentation:

“There are certain text input objects that your custom keyboard is not eligible to type into. First is any secure text input object [which is] distinguished by presenting typed characters as dots.
When a user taps in a secure text input object, the system temporarily replaces your custom keyboard with the system keyboard. When the user then taps in a nonsecure text input object, your keyboard automatically resumes.”

In part, the price to pay for the new freedoms will depend on Apple’s skills in building safeguards inside the operating system — that’s what all OSes strive for. Developers will also have to navigate a new labyrinth of guidelines to avoid triggering the App Store security tripwire.

That said, there is little doubt that the fall 2014 edition of iOS will be well received for both existing and new iDevices. Considering what Apple iOS developers were able to accomplish while adhering to the old dogma, we can expect more than simply more of the same when the new version of iOS is released.

Which brings us to Tim Cook and the stamp he’s put on Apple. Critics who moan that Apple won’t be the same now that Steve Jobs is gone forget the great man’s parting gift: “Don’t try to guess what I would have done. Do what you think is best.” With the Maps fiasco, we saw Cook take the message to heart. In a break with the past, Cook apologized for an Apple product without resorting to lawyerly caveats and justifications. In a real break with the past, he even recommended competing products.

We’ve also seen Cook do what he thinks is best in his changes to the executive team that he inherited from Jobs. Craig Federighi replaces 20-year NeXT/Apple veteran Scott Forstall; Angela Ahrendts is the new head of Retail; there’s a new CFO, Luca Maestri, and a new head of US Sales, Doug Beck. The transitions haven’t always been smooth — both Ahrendts’ and Beck’s immediate predecessors were Cook appointees who didn’t work out and were quickly dismissed. (Beck was preceded by Zane Rowe, former CFO at United Airlines…a CFO in a Sales job?)

Inside the company, Cook is liked and respected. He’s seen as calmly demanding yet fair; he guides and is well supported by his Leadership Team. This isn’t what the PR office says, it’s what I hear from French friends who work there. More than just French, they’re hard-to-please Parisians…

[Image: “I Love Rien, I’m Parisien”]

…but they like Cook, the way he runs the show. (True to their nature, they save a few barbs for the egregious idiots in their midst.)

With this overall picture of corporate cultural health and WWDC success in mind, let’s turn to Yukari Iwatani Kane, the author of Haunted Empire, Apple After Steve Jobs.

On her Web page, Kane insists her book, exemplar of the doomed-without-Jobs attitude, is “hard-hitting yet fair”. That isn’t what most reviewers have to say. The Guardian’s Charles Arthur called it “great title, shame about the contents”; Time’s Harry McCracken saw it as “A Bad Book About Apple After Steve Jobs”; Jason Snell’s detailed review in Macworld neatly addresses the shortcoming that ultimately diminishes the book’s value:

“Apple after the death of Steve Jobs would be a fascinating topic for a book. This isn’t the book. Haunted Empire can’t get out of the way of its own Apple-is-doomed narrative to tell that story.”

Having read the book, I can respect the research and legwork this professional writer, previously at the Wall Street Journal, has put into her opus, but it’s impossible to avoid the feeling that Kane started with a thesis and then built an edifice on that foundation despite the incompatible facts. Even now she churlishly sticks to her negative narrative: Where last week’s successful WWDC felt like a confederation of engineers and application developers happily working together, Kane sees them as caretakers holding a vigil:

[Image: Yukari Iwatani Kane’s tweet]

The reaction to Kane’s tweet was “hard-hitting yet fair”:

[Image: responses to Kane’s tweet]

Almost three years after Tim Cook took the helm, the company looks hale, not haunted.

I’ll give Cook the last word. His assessment of Kane’s book: “nonsense”.

JLG@mondaynote.com

 

The Beats Music Rorschach Blot

 

Apple has a long track record of small, cautious, unheralded acquisitions. Has the company gone off course with the hugely risky purchase of Beats Music and Electronics, loudly announced at an industry conference?

As Benedict Evans’ felicitous tweet put it, Apple’s $3B acquisition of Beats, the headphone maker and music streaming company, is a veritable Rorschach blot:

[Image: Benedict Evans’ tweet]

The usual and expected interpretations of Anything Apple – with the implied or explicit views of the company’s future – were on full display at last week’s Code Conference after the Beats acquisition was officially announced during the second day of the event. Two of the conference’s high-profile invitees, Apple’s SVP Craig Federighi and Beats’ co-founder, Dr. Dre (né André Young), quickly exited the program so all attention could be focused on the two key players: Eddy Cue, Apple’s Sr. VP of Internet Software and Services; and Jimmy Iovine, Beats’ other co-founder and freshly minted Apple employee. They were interviewed on stage by Walt Mossberg and Kara Swisher, the conference creators (59-minute video here).

Walt and Kara had booked Cue and Iovine weeks before Tim Bradshaw scooped the Apple/Beats story on May 8th in the Financial Times (the original FT article sits behind a paywall; TechCrunch version here). Was the booking a sign of prescience? smart luck? a parting gift from Katie Cotton as she retires as head of Apple PR? (And was Swisher’s warmly worded valentine to Cotton for her 18 years of service a quid pro quo acknowledgment?)

After the official announcement and the evening fireside chat, the Rorschach analysis began. Amidst the epigrams, which were mostly facile and predictable, one stood out with its understated questioning of culture compatibility:

‘Iovine: Ahrendts or Browett?‘ 

The “Browett”, here, is John Browett, the British executive who ran Dixons and Tesco, two notoriously middle-brow retail chains. Apple hired him in April 2012 to succeed Ron Johnson as the head of Apple Retail… and showed him the door seven months later, removed for a clear case of cultural incompatibility. When Browett tried to apply his estimable cost-cutting knowledge and experience to the Italian marble Apple Store, things didn’t work out — and the critics were quick to blame those who hired him.

Nothing of the sort can be said of Dame Angela Ahrendts. Now head of Apple’s physical and on-line stores, Ahrendts was lured from Burberry, a culturally compatible and Apple-friendly affordable luxury enterprise.

Will Iovine be a Browett or an Ahrendts?

In a previous Monday Note, I expressed concern for the cultural integration challenges involved in making the Beats acquisition work. What I learned from the on-stage interview is that Jimmy Iovine and Eddy Cue have known and worked with each other for more than ten years. Iovine says he’ll be coming to Cupertino ‘about once a month’, so my initial skepticism may have been overstated; Apple isn’t acquiring a company of strangers.

But are they acquiring a company that creates quality products? While many see Beats Music’s content curation as an important differentiator in the streaming business, one that would give a new life to its flagging music sales, others are not so sure. They find Beats Music’s musical choices uninspiring. I’m afraid I have to agree. I downloaded the Beats Music app, defined a profile, and listened for several hours while walking around Palo Alto or sitting at my computer. Perhaps it’s me, my age, or my degenerate tastes, but none of the playlists that Beats crafted for me delivered either the frisson of discovery or the pleasure of listening to a long-forgotten old favorite. And my iPhone became quite hot after using the app for only an hour or so.

Regarding the headphones: They’re popular and sell quite well in spite of what The Guardian calls “lacklustre sound”. I tried Beats Electronic’s stylish Studio headphones for a while, but have since returned to the nondescript noise-canceling Bose QC 20i, a preference that was shared (exactly or approximately) by many at the conference.

There was no doubt, at the conference, that Apple understands there are problems with Beats, but there’s also a feeling that the company sees these problems as opportunities. An overheard hallway discussion about the miserable state of the iTunes application (too strongly worded to repeat here verbatim) neatly summed up the opportunity: ‘Keeping Beats as a separate group affords Cook and Cue an opening for independently developing an alternative to iTunes instead of trying to fix the unfixable.’ It’s worth noting that the Beats Music app is available on mobile devices only, and it appears there’s no plan to create a desktop version. This underlines the diminished role of desktops, and points out the possibility of a real mobile successor to the aging iTunes application.

Continuing with the blot-reading exercise, many members of the audience found it necessary to defend the $3B price tag. Some point out that since Apple’s valuation is about 3X its revenue, Beats’ purported $1.5B hardware revenue easily “justifies” the $3B number. (Having consorted with investment bankers at various moments of my business life, as an entrepreneur, a company director, and a venture investor, I know they can be trusted to explain a wide range of valuations. Apparently, Apple is paying $500M for the streaming business and $2.5B for the hardware part.)

My own reading is that the acquisition price won’t matter: If the acquisition succeeds, the price will be easily forgotten; if it fails, Apple will have bigger worries.

Ultimately, the Apple-Beats products and services we haven’t yet seen will do the talking.

–JLG@mondaynote.com

Peak PC. Intel Fork.

 

Propelled by Moore’s Law and the Internet, PCs have enjoyed four decades of strong growth, defying many doomsday prophecies along the way. But, with microprocessor performance flattening out, the go-go years have come to an end. Intel, the emperor of PC processors and a nobody in mobile devices, needs to react.

I’m suspicious of Peak <Anything> predictions. Some of us became aware of the notion of a resource zenith during the 1973 OPEC oil embargo, with its shocking images of cars lined up at gas stations (in America!):

[Photo: gas lines during the 1973 oil embargo]

This was Peak Oil, and it spelled doom to the auto industry.

We know what happened next: Cars improved in design and performance, manufacturers became more numerous. Looking at this bit of history through my geek glasses, I see three explanations for the rebound: computers, computers, and computers. Computer Assisted Design (CAD) made it easier to design new car models as variations on a platform; Volkswagen’s MQB is a good example. Massive computer systems were used to automate the assembly line and manage the supply chain. It didn’t take long for computers to work their way into the cars themselves, from the ECU under the hood to the processors that monitor the health of the vehicle and control the entertainment and navigation systems.

Since then, we’ve had repeated predictions of Peak Oil, only to be surprised by the news that the US will soon become a net oil exporter and, as Richard Muller points out in his must-read Physics for Future Presidents, we have more than a century of coal reserves. (Unfortunately, the book, by a bona fide, middle-of-the-road physicist, can’t promise us that physics will eventually push politics aside when considering the rise of CO2 in the atmosphere…)

I’ve heard similar End of The Go-Go Days predictions about personal computers since 1968 when my love affair with these machines started at HP France (I was lucky enough to be hired to launch their first desktop machine).

I heard the cry again in 1985 when I landed in Cupertino in time for the marked slowdown in Apple ][ sales. The never-before-seen round of layoffs at Apple prompted young MBAs, freshly imported from Playtex and Pepsi, to intone the It’s All Commodities Now dirge. I interpreted the cry (undiplomatically -- I hadn’t yet learned to speak Californian) as a self-serving It’s All Marketing Now ploy. In the meantime, engineers ignored the hand-wringing, went back to work, and, once again, proved that the technology “mines” were far from exhausted.

In 1988, a Sun Microsystems executive charitably warned me: “PCs are driving towards the Grand Canyon at 100 mph!”.  A subscriber to Sun’s The Network Is The Computer gospel, the gent opined that heavy-duty computing tasks would be performed by muscular computers somewhere (anywhere) on the network. Desktop devices (he confusingly called them “servers” because they were to “serve” a windowing protocol, X11) would become commodities no more sophisticated or costly than a telephone. He had no answer for multimedia applications that require local processing of music, video, and graphics, nor could he account for current and imminent mobile devices. His view wasn’t entirely new. In 1965, Herb Grosch gave us his Law, which told us that bigger computers provide better economics; smaller machines are uneconomical.

And yet, personal computers flourished.

I have vivid memories of the joy of very early adopters, yours truly included. Personal computers are liberating in many ways.

First, they don’t belong to the institution; there’s no need for the intercession of a technopriest; I can lift my PC with my arms, my brains, and my credit card.

Second, and more deeply, the PC is a response to a frustration, to a sense of something amiss. One of mankind’s most important creations is the symbol, a sign without a pre-existing meaning: X as opposed to a drawing of a deer on a cave wall. Strung together, these symbols show formidable power. The expressive and manipulative power of symbol strings runs through the Song of Songs, Rumi’s incandescent poetry, Wall Street greed, and quantum physics.

But our central nervous system hasn’t kept up with our invention. We don’t memorize strings well, we struggle with long division, let alone extracting cube roots in our heads.

The PC comes to the rescue, with its indefatigable ability to remember and combine symbol strings. Hence the partnership with an object that extends the reach of our minds and bodies.

Around 1994, the Internet came out of the university closet, gave the PC access to millions of servers around the world (thus fulfilling a necessary part of the Sun exec’s prophecy), and extended our grasp.

It’s been great and profitable fun.

But today, we once again hear Peak PC stories. Sales have gone flat, never to return:

[Chart: PC shipments forecast, 2014–2018]

This time, I’m inclined to agree.

Why?

Most evenings, my home-builder spouse and I take a walk around Palo Alto. Right now, this smallish university town is going through a building boom. Offices and three-layer retail + office + residence are going up all around University Avenue. Remodels and raze-and-build projects can be found in the more residential parts of town. No block is left unmolested.

I can’t help but marvel. None of this activity, none of Silicon Valley would exist without Moore’s Law, the promise made in 1965 that semiconductor performance would double every 18 months. And, for the better part of 40 years, it did - and rained money on the tech ecosystem, companies and people. PCs, servers, embedded electronics, giant network routers, cars...they’ve all been propelled because Moore’s Law has been upheld...until recently.

The 1977 Apple ][ had a 1MHz 8-bit processor. Today’s PCs and Macs reach 3.7GHz, but that number hasn’t changed in more than three years. This isn’t to say that Intel processors aren’t still improving, but the days when each new chip brought substantial increases in clock speed seem to be over.

One should never say never, but Moore’s Law is now bumping into the Laws of Physics. The energy needed to vibrate matter (electrons in our case) increases with frequency. The higher the clock frequency, the higher the power dissipation and the greater the heat that’s generated…and a PC can withstand only so much heat. Consider the cooling contraptions used by PC gamers when they push the performance envelope of their “rigs”:

[Image: a liquid-cooling block for PC gaming rigs]

To work around the physical limits, Intel and others resort to stratagems such as “multiple cores”, more processors on the same chip. But if too many computations need the result of the previous step before moving forward, it doesn’t matter how many cores you have. Markitects have an answer to that as well: “speculative branch execution”, the use of several processors to execute possible next steps. When the needed outcome appears, the “bad” branches are pruned and the process goes forward on the already-computed good branch. It makes for interesting technical papers, but it’s no substitute for an 8GHz clock speed.
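To see why extra cores hit a wall, consider a toy Swift loop (my own illustration, not anyone’s benchmark): each iteration needs the result of the one before it, so no amount of parallel hardware can spread the work out; only a faster clock shortens the wait.

    // Inherently serial: step i cannot start until step i-1 has produced x.
    func serialChain(iterations: Int) -> Double {
        var x = 1.0
        for _ in 0..<iterations {
            x = (x * 1.000001).squareRoot()
        }
        return x
    }

    // Eight cores or sixteen, it makes no difference to this dependency chain.
    print(serialChain(iterations: 10_000_000))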

If we need confirmation of the flattening out of microprocessor progress, we can turn to Intel and the delays in implementing its Broadwell chips. The move to a 14 nanometer “geometry” — the term here denotes the size of a basic circuit building block — is proving more difficult than expected. And the design isn’t meant to yield faster processors, just less power-hungry ones (plus other goodies such as better multi-media processing).

One possible reaction to this state of affairs is to look at tablets as a new engine of growth. This is what Microsoft seems to be doing by promoting its Intel-inside Surface Pro 3 as a laptop replacement. But even if Microsoft tablets turn out to be every bit as good as Microsoft says they are, they aren’t immune to the flattening out of Intel processor performance. (I don’t have an opinion yet on the product — I tried to buy one but was told to wait till June 20th.)

Does this broaden the opening for ARM-based devices? Among their advantages is a cleaner architecture, one devoid of the layers of backwards compatibility silt x86 devices need. ARM derivatives need less circuitry for the same computing task and, as a result, dissipate less power. This is one of the key reasons for their dominance in the battery-powered world of mobile devices. (The other is the customization and integration flexibility provided by the ARM ecosystem.) But today’s ARM derivatives run at lower speeds (a little above 1GHz for some) than Intel chips. The challenge will be to run at higher speeds without hurting battery life or adding the fan that Microsoft’s tablets need.

With no room to grow, PC players exit the game. Sony just did. Dell took itself private and is going through the surgery and financial bleeding a company can’t withstand in public. Hewlett-Packard, once the leading PC maker, now trails Lenovo. With no sign of turning its PC business around, HP will soon find itself in an untenable position.

Intel doesn’t have the luxury of leaving its game — it only has one. But I can’t imagine that Brian Krzanich, Intel’s new CEO, will look at Peak PC and be content with the prospect of increasingly difficult x86 iterations. There have been many discussions of Intel finally taking the plunge and becoming a “foundry” for someone else’s ARM-based SoC (System On a Chip) designs instead of owning x86 design and manufacturing decisions. Peak PC will force the Intel CEO’s hand.

JLG@mondaynote.com

Misunderstanding Apple

 

We’ve come to expect analysts and pundits to misunderstand Apple. More puzzling is when Apple misunderstands itself.

My three-week Road Trip of a Lifetime, driving all the way from Key West, FL to Palo Alto, was interrupted by a bout of pneumonia, low blood oxygen, paroxysmal cough and, most alarming, a loss of appetite. Thankfully, all indicators are looking good and I’m back walking Palo Alto’s leafy streets.

The succession of wheel time and downtime gave me an opportunity to contemplate two recent controversies: Fred Wilson’s prediction of Apple’s imminent fall, and rumors of Apple’s purchase of Beats Electronics. These are both manifestations of what I’ll call, for lack of a better term, Misunderstanding Apple.

First, Fred Wilson. At the recent TechCrunch Disrupt conference, the successful and articulate venture investor predicted that by 2020 Apple will no longer hold the #1 position in the tech world. They won’t even be in the top three. According to Wilson, Apple “doesn’t think about things the way they need to think about things”. Specifically, the company is “too rooted in hardware…[which] is increasingly becoming a commodity” and “Their stuff in the cloud is largely not good. I don’t think they think about data and the cloud.”

I’d be surprised by Wilson’s facile, insight-free truisms, except this isn’t the first time he’s shown a blind spot when considering Apple. Wilson is famous for dumping his Apple shares at $91 in January 2009; AAPL is now at $590 or so. (He also sold Google, which closed at $528 on Friday, for a split-adjusted $160. Perhaps there’s a difference between being a venture investor, an insider who watches and influences a young company, and being an outsider subjected to forces and emotions outside of one’s control.)

Calling Apple “too rooted in hardware” misunderstands the company. From its inception, Apple has been in one and only one business: personal computers (which, today, includes smartphones and tablets). Indeed, Apple’s quarterly numbers show that the sale of personal computers makes up 87% of its revenue. Everything else that Apple does, from iTunes to the Apple Store, exists to make its smartphones, tablets, laptops, and desktops more useful, more pleasant. And this “everything else” includes the lovingly machined hardware of the MacBook Air and iPhone 5. If the supporting cast does its job well, the main acts will sell in larger numbers and at higher prices.

Customers don’t buy Apple “hardware” in the same way a weekend carpenter buys nails at the friendly neighborhood hardware store. What Fred Wilson seems to miss is that hardware is more than an inert “thing” for Apple: It’s a conduit to an entire ecosystem, and it can yield an enormous strategic advantage. One such example is the 64-bit A7 processor that took everyone by surprise: 64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months.

When the subject of commoditization comes up, I invite people to look at the cars they see in the street. Are the likes of Audi, BMW, and Mercedes being commoditized? Do their owners only care that the wheels are black and round? Serendipitously, someone called “SubstrateUndertow” answers the question in a comment on Wilson’s blog:

“…when I look around at all the cars out there, from the high end models to the most utilitarian models, almost no one buys the base stripped versions. Key devices that are central to people’s lives, comfort and utility have enough emotional SubstrateUndertow to sustain premium pricing.”

The 30-year-old Mac business and its healthy margins (about 25% versus HP’s sub-5% for its PCs) show that Apple has successfully avoided the commoditized race to the bottom that has plagued Wintel devices and is likely to accelerate for smartphones.

Wilson’s criticism of Apple’s “stuff in the cloud”, on the other hand, carries some sting. As a user of Apple’s products and services, I’m often disappointed with Apple’s Cloud offerings. I find iMessage’s quirks irritating, I see a lack of proper synchronization between iBooks on Macs and iDevices, and I’m still waiting for the Cloud version of iWork to mature. But let’s turn to Horace Dediu for a crisp summary of Apple’s place in the Cloud:

“Not getting the cloud” means that in the last 12 months Apple obtained:
• 800 million iTunes users and
• an estimated 450 million iCloud users spending
• $3 billion/yr for end-user services plus
• $4.7 billion/yr for licensing and other income which includes
• more than $1 billion/yr paid by Google for traffic through Apple devices and
• $13 billion/yr in app transactions of which
• $9 billion/yr was paid to developers and
• $3.9 billion/yr was retained as operating budget and profit for the App Store.

In addition,
• more than $1 billion/yr in Apple TV (aka Apple’s Kindle) and video sales and
• $2.7 billion/yr in music download sales and
• $1 billion/yr in eBooks sold

In summary, iTunes, Software and Services has been growing between 30% and 40% for four years and is on its way to $30 billion/yr in transactions and sales for 2014.

Horace is right; Fred Wilson clearly hasn’t done the numbers. Add up the line items above and you get roughly $25 billion a year in transactions and sales, well on the way to the $30 billion Dediu projects for 2014.

———————

I was still on the road when I read about the rumored $3.2B acquisition of Beats Electronics, the company that began in the headphone business and then spawned a streaming music service.

I’m puzzled. If the rumors prove true, Apple may be guilty of misunderstanding itself.

The hardware side, headphones, is immaterial: The products may look good, but their audio quality is regularly panned. And the revenue, about $500M, doesn’t move the needle.

The current wisdom is that Apple is mostly interested in Beats Music, the subscription streaming service. But that business isn’t big, either; it has only attracted about 110K subscribers.

Maybe Apple is interested in Beats Music’s technology and its vision for the future of streaming and music curation. I took the time to watch Walt Mossberg’s interview of Jimmy Iovine in which the Beats co-founder gives hints about his plans. Iovine’s AI-with-a-human-touch solution for delivering “what comes next” is technically vague — and vaguely dystopian (“we’ll scrape your hard drive… we’ll know where you are tomorrow”). I’m not convinced.

We also have rumors that Iovine and Dr. Dre, Beats’ other co-founder, might become some kind of senior advisers to Apple management. Given what I’ve read about Dre’s troubles with the Law, including a battery charge that landed him in jail, and an assault on a female that was settled out of court, I’m troubled. How will this play inside and outside Apple?

I don’t see how such an acquisition would enhance Apple’s business model or reputation.

That said, I hope I’m as wrong as I was when I thought the iPod was doomed to fail against commoditized, yes, that word, MP3 players. I hadn’t seen iTunes behind the hardware, the Cloud storage, the distribution and micro-payments infrastructure that would one day make the iPhone an App Phone.

I also see people whose intellect and motives I respect come out strongly in support of the rumored acquisition. Preeminent among them is Ben Thompson who, in his Stratechery blog, explores Why Apple Is Buying Beats. There, after positing that personal computers might have reached their peak, Ben asks whether Apple is in fact reinventing itself as a kind of fashion house [emphasis mine]:

“Or are we witnessing a reinvention, into the sort of company that seeks to transcend computing, demoting technology to an essential ingredient of an aspirational brand that identifies its users as the truly with it? Is Apple becoming a fashion house? Think about it: you have Jony Ive as all-up head of design, the equivalent of a Tom Ford or Donatella Versace. There is the hire of Angela Ahrendts – why would she leave the CEO position of Burberry for a Senior VP role? You have an iPhone framed as an experience, not a product. And now you acquire an accessory maker differentiated almost completely by its brand, not its inherent technical quality.”

And he ponders the Damned If You Do, Damned If You Don’t of such cultural change:

“Still, I can imagine the very thought of Apple positioning itself as a fashionable luxury brand is somewhat nauseating for many of my readers. It’s an understandable reaction, and one I somewhat share. I worry that Apple is losing what makes Apple, Apple, especially that desire to make the power of computing accessible for normal people. But I also know that stasis means stagnation, and over the long-run, death.”

To be continued…

JLG@mondaynote.com

Science Fiction: Mining My Own Exhaust


A few thoughts on Big Data, self-knowledge, and my hopes for the emergence of a new genre of services.

I’m about to fulfill an old fantasy — the Great American Road Trip. Over the next three weeks, we’ll be driving all the way from Key West, FL to Palo Alto. In that spirit, today I’ll luxuriate in another, more distant reverie: Mining my own data exhaust.

I’m spurred to this indulgence by the words of Satya Nadella, Microsoft’s new CEO, at an April 15th event in San Francisco [emphasis mine]:

“The core evolution of silicon, software and hardware is putting computing everywhere humans are present,” Nadella said. “And it’s generating a massive data exhaust of server logs, sensor data and unstructured social stream information. We can use that exhaust to create ambient intelligence for our users.”

Nadella’s right. I leave a cloud of data exhaust with my Web browsing, credit card purchases, cell phone use, monthly blood tests, pharmacy purchases, and airline trips. Server logs detail the length and breadth of my social interactions on Facebook and Google+… And I don’t have to be on a computer to add to the cloud: I’m tracked by toll passes and license plate snapshots as I drive my car. The car itself monitors my driving habits with its black box recording of my speed and direction. This list, far from exhaustive [no pun intended], is sobering – or exciting, full of possibilities.

Today, we’ll skip the Orwellian paranoia and fantasize about an alternate universe where I can “turn the gratis around”, where I can buy my data back.

Google, Facebook, and the like provide their services for free to induce us to lead them to the mother lode: Our cache of product preferences, search history, and web habits. They forge magic ingots from our personal data, sell the bullion to advertisers, and thus fuel the server farms that mine even more data. I’m not here to throw a monkey wrench into this business model; au contraire, I offer a modest source of additional revenues: I’d like to buy my data back. And I’ll extend that offer to any and all entities that mine my activities: For you, at a special price today, I’m buying my data.

(We all understand that this fantasy must take place in an alternate universe. If our legislators and regulators were beholden to us and not to Google, Verizon, and “Concast” [a willful typo from Twitter wags], they would have long ago made it mandatory that companies provide us with our own data exhaust.)

Pursuing this train of thought, one can conceive of brokers scouring the world for my exhausts — after having secured the right permissions from me, of course. Once this becomes an established activity, no particular feat of imagination is required to see the emergence of Big Data processing companies capable of merging and massaging the disparate flumes obtained from cell carriers, e-merchants, search engines, financial services and other service providers.
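
To make the idea a touch more concrete, here is a toy sketch of what such a merge might look like; everything in it — the providers, the record formats, the events — is invented purely for illustration, not a description of any real service:

    # Toy illustration only: a hypothetical "exhaust broker" merging permissioned
    # logs from three invented providers into one chronological personal timeline.
    from datetime import datetime
    from itertools import chain

    carrier_log = [("2014-04-20 08:12", "cell carrier", "call, 4 minutes, Palo Alto")]
    card_log    = [("2014-04-20 09:05", "credit card", "$4.50, University Ave cafe")]
    search_log  = [("2014-04-20 09:30", "search engine", "query: 'Key West to Palo Alto route'")]

    def merged_timeline(*sources):
        """Merge per-provider logs and sort them into a single exhaust stream."""
        return sorted(chain(*sources),
                      key=lambda record: datetime.strptime(record[0], "%Y-%m-%d %H:%M"))

    for timestamp, source, event in merged_timeline(carrier_log, card_log, search_log):
        print(timestamp, "|", source, "|", event)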

So far, especially because it lacks numbers and other annoying implementation details, the theory sounds nice. But to what end?

The impulse can be viewed as a version of the old Delphic injunction: Know Thyself, now updated as Know Thine Quantified Self: Quantify what I do with my body, time, money, relationships, traveling, reading, corresponding, driving, eating… From there, many derivations come to mind, such as probabilistic diagnoses about my health, financial situation, career, and marriage. Or I could put my data in turnaround, mandate a broker to shop facets of my refined profile to the top agencies.

Even if we set aside mounds of unresolved implementation details, objections arise. A key member of my family pointedly asks: How much do we really want to know about ourselves?

This reminds me of a conversation I once had with a politely cynical Parisian tailor. I ventured that he could help his customers choose a suit by snapping a picture and displaying it on an 80” flat-screen TV in portrait mode. My idea was that the large-scale digital picture would offer a much more realistic, more objective image than a look in the mirror does. The customer would be able to see himself as others see him, to gauge the effect the new suit would produce – which, after all, is the point of new duds.

“No way,” said the Parisian fashionista, “are you nuts? My customers, you included,” he tartly added, “really don’t want the cruel truth about their aging bodies…”

Still, I’m curious. And not just about the shape and color of the data exhaust that I leave in my wake, about the truths — pleasant or embarrassing — that might be revealed. I’m curious about the types of companies, services, and business models that would emerge from this arrangement. Even more fascinating: How would the ability to mine and sell our own data affect our cultural vocabulary and social genetics?

JLG@mondaynote.com

PS: As offered here, I recently downloaded my Facebook data set. The data doesn’t appear to be very revealing, but that could be the result of my low Facebook involvement — I’m not a very active user. I’d be curious to see the size and detail level associated with a more involved participant.

PPS: I’ll be on the road, literally, for the next three weeks and may or may not be able to post during that time.

The iPad Is a Tease


As Apple is about to release its latest quarterly numbers, new questions arise about the iPad’s “anemic” growth. The answer is simple – but the remedies are not.

The iPad isn’t growing anymore. What happened? 

In anticipation of Apple’s latest quarterly numbers – they’ll be announced on April 23rd – the usual prerelease estimates swirl around the Web. You can find Yahoo’s summary of analysts’ estimates here; Paul Leitao’s Posts At Eventide provides a detailed and tightly reasoned history and forecast for the March 2014 quarter.

The consensus is that for the company as a whole, there won’t be any surprises: Apple will meet the guidance stated in its January 27th earnings call. Revenue will be down, as befits the quarter following the Christmas shopping frenzy, but earnings per share (EPS) will be up a bit.

Boring. With one glaring exception:

[Chart: iPad unit growth by quarter (source: The Braeburn Group)]

In the same quarter of 2013, the iPad’s year-on-year growth was about 55%. Some of this phenomenal growth was due to a rebound from earlier iPad mini supply constraints, but that doesn’t explain the precipitous drop from 2013 to this year.

Are the iPad’s go-go years over?

As Philip Elmer-DeWitt reports on his Apple 2.0 site, this gloomy prediction appears to be the majority opinion among analysts. Elmer-DeWitt acknowledges that there are outliers — Horace Dediu comes in at the high end with an estimate of 21.8M units (and positive growth) — but “the consensus estimate of 19.3 million, would represent a 0.7% decline”.

It’s one thing for a product to increase in unit volume sales but still grow less than the overall market — that’s simply a loss of market share. And we know how fallacious share numbers can be in the absence of an honest disclosure of sales volumes. No, assuming the estimates are right, what we have here isn’t market share dilution, it isn’t a post-Christmas lull, it’s a year-to-year decline in absolute unit numbers.

Why?

I’ll offer an opinion: The iPad is a tease. Its meteoric debut raised expectations that it can’t currently meet.

To explain, let’s go back four years.

Steve Jobs’ last creation took us by surprise, price included, and was initially panned by many in the kommentariat, from Eric Schmidt to Dan Lyons (who subsequently recanted). But normal humans joyously took to the iPad. In 1984, one of Apple’s tag lines for the Mac was “Macintosh – the computer for the rest of us.” Decades later, the iPad was quickly perceived as a sort of second coming. As Macworld put it in June 2011: Now Apple’s really “for the rest of us”.

Indeed, the iPad wasn’t targeted at a particular type — or generation — of user. David Hockney has produced exquisite iPad “paintings”. Daniel Borel, Logitech’s co-founder, told me that his two-year-old grandson immediately “got” the iPad (even if it was just to play games, but…he’s two). Coming out of our breakfast meeting, I crossed paths with a couple of seniors — octogenarians, probably — who proudly told me that they were going to an iPad training session at the Palo Alto Apple Store.

The iPad rose and rose. It won legions of admirers because of its simplicity: No windows (no pun), no file system, no cursor keys (memories of the first Mac). Liberated from these old-style personal computer ways, the iPad cannibalized PC sales and came to be perceived as the exemplar Post-PC device.

But that truly blissful simplicity exacts a high price. I recall my first-day disappointment when I went home and tried to write a Monday Note on my new iPad. It’s difficult — impossible, really — to create a real-life composite document, one that combines graphics, spreadsheet data, rich text from several sources and hyperlinks. For such tasks, the Rest of Us have to go back to our PCs and Macs.

I realize there are iPad users who happily perform “productivity tasks” on their iPads. Most of them use a stand and keyboard sold in a number of guises. The number of different offerings is a testament to a real need. (We’ll note that Apple doesn’t seem eager to address this issue directly. They don’t offer an “iPad-sized” keyboard — the Bluetooth keyboard I use is fine for my iMac, but feels gargantuan when I pair it with my iPad. And Apple’s iPad Dock hasn’t been updated to work with the “Lightning” connector on the newer iPads.)

The iPad’s limitations extend beyond classic office productivity tasks. I just tried to build an itinerary for a long-postponed road trip, driving all the way from Key West, FL to Palo Alto. On a Mac, you can easily “print to PDF” to produce a map for each leg of the trip. Then you use the wonderful Preview app (I salute its author and dedicated maintainer) to excise unneeded pages, drag and drop, combine and rearrange the PDF files into a single document. Don’t try this on an iPad: How would you “print to PDF” a map page, let alone combine such pages?
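
For illustration only, here is a minimal sketch of that desktop-side merge in Python, assuming the third-party pypdf library is installed; the file names are invented. The point isn’t the script itself — Preview does this with drag and drop — it’s that no equivalent, graphical or scripted, exists on the iPad:

    # Minimal sketch, for illustration only: stitching per-leg map PDFs into one
    # itinerary on a Mac. Assumes the third-party pypdf library (pip install pypdf);
    # the file names are hypothetical.
    from pypdf import PdfWriter

    legs = [
        "key_west_to_atlanta.pdf",
        "atlanta_to_dallas.pdf",
        "dallas_to_palo_alto.pdf",
    ]

    writer = PdfWriter()
    for leg in legs:
        writer.append(leg)  # appends every page of each per-leg map, in order
    writer.write("road_trip_itinerary.pdf")  # one combined, shareable document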

Despite the inspiring ads, Apple’s hopes for the iPad overshot what the product can actually deliver. Although there’s a large number of iPad-only users, there’s also a substantial population of dual-use customers for whom both tablets and conventional PCs are now part of daily life.

I see the lull in iPad sales as a coming down to reality after unrealistic expectations, a realization that iPads aren’t as ready to replace PCs as many initially hoped.

In his introduction of the iPad in January 2010, Jobs himself seemed a bit tentative when positioning his latest creation. Sitting in the Le Corbusier chair, Jobs stated that his new tablet would have to “find its place between the iPhone and the Mac”.

This “in-between place” is still elusive.

Microsoft tried to find that “in-between place”, and we know how well that worked. For the Redmond company, the iPad’s limitations were an opportunity: Simply emulate the charm and intuitiveness of the market-leading tablet and cater to the needs of the “professional” user. With its touch interface and keyboard, the Surface device sounded like the solution that had eluded Microsoft’s earlier Tablet PCs. In the field, customers didn’t like the dueling interfaces, nor the introduction of layers of complexity where simplicity had been promised. Surface tablets didn’t move the revenue needle and cost Microsoft a $900M write-down.

The iPad represents about 20% of Apple’s revenue; allowing iPad numbers to plummet isn’t acceptable. So far, Apple’s bet has been to keep the iPad simple, rigidly so perhaps, rather than creating a neither-nor product: No longer charmingly simple, but not powerful enough for real productivity tasks. But if the iPad wants to cannibalize more of the PC market, it will have to remove a few walls.

Specifically, the iPad is a computer; it has a file system, directories, and the like — why hide these “details” from users? Why prevent us from hunting around for the bits and bobs we need to assemble a brochure or a trip itinerary?

None of this is news to Apple execs, but they also know that success doesn’t depend on What, on a simple feature list. The next step in iPad growth will depend on How new features are integrated into the user experience. It’s a tricky game of the Best of Both Worlds…and it tripped up Microsoft.

When will we know? I have no idea. Perhaps at the WWDC this coming June.

JLG@mondaynote.com