About Jean-Louis Gassée


Posts by Jean-Louis Gassée:

WWDC: iOS 2.0, the End of Silos

 

Apple tears down the walls between iOS applications, developers rejoice, and Tim Cook delivers a swift kick to Yukari Iwatani Kane’s derrière – more on that at the end.

In this year’s installment of the Worldwide Developers Conference, Apple announced a deluge of improvements to its development platforms and tools, including new SDKs (CloudKit, HomeKit, HealthKit); iCloud Drive, the long-awaited response to Dropbox; and Swift, an easy-to-learn, leak-free programming language that could spawn a new generation of Apple developers who regard Objective-C as esoteric and burdensome.

If this sounds overly geeky, let’s remind ourselves that WWDC isn’t intended for buyers of Apple products. It’s a sanctuary for people who write OS X and iOS applications. This explains Phil Schiller’s absence from the stage: Techies don’t trust marketing people. (Unfortunately, the conference’s ground rules seem to have been lost on some of the kommentariat.)

The opening keynote is a few breaths short of 2 hours. If you’d rather not drink from the proverbial fire hydrant, you can turn to summaries from Federico Viticci in MacStories, Andrew Cunningham in Ars Technica (“Huge for developers. Massive for everyone else.”), or you can look for reviews, videos, and commentary through Apple’s new favorite search engine, DuckDuckGo, “The search engine that doesn’t track you”.

For today, I’ll focus on the most important WWDC announcement: iOS applications have been freed from the rigid silos, the walls that have prevented them from talking to each other. Apple developers can now write extensions to their apps and avail themselves of the interprocess facilities that they expect from a 21st century OS.

A bit of history will help.

When the first iPhone is shipped in late June, 2007, iOS is incomplete in many respects. There’s no cut and paste, no accented characters, and, most important, there are no native apps. Developers must obey Steve Jobs’ dictate to extend the iPhone through slow and limited Web 2.0 apps. In my unofficial version numbering, I call this iOS 0.8.

The Web 2.0 religion doesn’t last long. An iOS Software Development Kit (SDK) is announced in the fall and released in February, 2008. When the iTunes-powered App Store opens its doors in July, the virtual shelves are (thinly) stocked with native apps. This is iOS 1.0.

Apple developers enthusiastically embrace the platform and the App Store starts its dizzying climb from an initial 500 apps in 2008 to today’s 1.2 million apps and 75B cumulative downloads.

However, developers’ affections don’t extend to Apple’s “security state”, the limits imposed on their apps in the name of security and simplicity. To be sold in the App Store, an app must agree to stay confined in its own little sandbox, with no way to communicate with other apps.

According to Apple dogma, this limitation is a good thing because it prevents the viruses and other malware that have plagued older operating systems and overly-trusting apps. One wrong click and your device is visited by rogue code that wreaks havoc on your data, yields control to remote computers, or, worst of all, sits silently and unnoticed while it spies on your keystrokes. No such thing on iOS devices. The prohibition against inter-application exchange vastly reduces the malware risk.

This protection comes at a cost. For example, when you use a word processor or presentation tool on a personal computer, you can grab text and images of any provenance and drop them into your project. On the iOS version of Pages, you can only see other Pages documents — everything else is out of sight and out of reach.

The situation becomes even more galling when developers notice that some of Apple’s in-house apps — iMessage, Maps, Calendar with Contacts — are allowed to talk among themselves. To put it a little too simply, Apple engineers can write code that’s forbidden to third party developers.

Apple’s rules for app development and look-and-feel are famously (and frustratingly) rigid, but the company is occasionally willing to shed its dogma. In 2013, for example, skeuomorphism was abandoned…do any of us miss the simulated leather and torn bits of paper on the calendar?

With last week’s unveiling of the new version of iOS, a much more important dogma has been tossed into the dustbin: An app can now reach beyond its sandbox. Apps can interconnect, workflows are simplified, previously unthinkable feats are made possible.

This is the real iOS 2.0. For developers, after the momentous 2008 opening of the App Store that redefined the smartphone, this is the second major release.

With the new iOS, a third-party word processor developer can release his app from its sandbox by simply incorporating the Document Picker:

“The document picker feature lets users select documents from outside your app’s sandbox. This includes documents stored in another app’s iCloud container or documents provided by a third-party extension.”

Users of the word processor will be able to see and incorporate all files, regardless of how they were created or where they’re stored (within the obvious physical limits). This is a welcome change from today’s frustratingly constricted situation.
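As a concrete illustration, here is a minimal sketch of how a word processor might adopt the Document Picker, written in the newly announced Swift and assuming the UIDocumentPickerViewController API Apple documents for iOS 8; the class name, the UTI strings, and the exact delegate signature (which has varied across iOS releases) are illustrative, not lifted from Apple’s sample code:

```swift
import UIKit

// A view controller in a hypothetical third-party word processor.
// The Document Picker lets the user pull in files from outside the
// app's sandbox: other apps' iCloud containers, third-party providers, etc.
class EditorViewController: UIViewController, UIDocumentPickerDelegate {

    // Called from, say, an "Insert Document…" toolbar button.
    func insertExternalDocument() {
        // Ask for text and images; the UTI strings are placeholders.
        let picker = UIDocumentPickerViewController(
            documentTypes: ["public.text", "public.image"],
            in: .open)
        picker.delegate = self
        present(picker, animated: true, completion: nil)
    }

    // The user picked one or more documents from outside our sandbox.
    func documentPicker(_ controller: UIDocumentPickerViewController,
                        didPickDocumentsAt urls: [URL]) {
        for url in urls {
            // Files we don't own require security-scoped access.
            guard url.startAccessingSecurityScopedResource() else { continue }
            defer { url.stopAccessingSecurityScopedResource() }
            // Incorporate the document into the current project…
            print("Imported \(url.lastPathComponent)")
        }
    }
}
```

The point isn’t the few lines of code; it’s that the sandbox wall is now crossed through a system-provided, user-mediated picker rather than through a private, Apple-only channel.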

iOS Extensions, a feature that lets applications offer their own services to other apps, played well when demonstrated by Craig Federighi, Senior VP of Apple Software:

“Federighi was able to easily modify Safari by adding a sharing option for Pinterest and a translation tool courtesy of Bing. Users will also be able to apply photo filters from third-party apps and use document providers like Box or OneDrive…”
Business Insider, Why You Should Be Excited for Extensions in iOS 8 

Prominent among the beneficiaries of iOS Extensions are third-party keyboard designers. Today, I watch with envy as my Droid compatriots Swype a quick text message. The keyboard layouts and input methods on my iPhone are limited to the choices Apple gives me — and they don’t include Swype. Tomorrow, developers will be able to augment Apple’s offerings, including keyboards that are designed for specific apps.
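To make the keyboard opening concrete, here is a minimal sketch of a custom keyboard extension built on UIInputViewController, the class iOS 8 provides for this purpose; the single button and the text it inserts are placeholders, not Swype’s technique, and a shipping keyboard would obviously lay out a full key set:

```swift
import UIKit

// Skeleton of a custom keyboard extension. The system hosts this
// UIInputViewController and routes its output to whatever app is
// active via textDocumentProxy — the keyboard never reads the host
// app's data directly.
class SimpleKeyboardViewController: UIInputViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // One illustrative key; a real keyboard would build a full layout.
        let key = UIButton(type: .system)
        key.setTitle("hello", for: .normal)
        key.addTarget(self, action: #selector(keyTapped), for: .touchUpInside)
        key.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(key)
        NSLayoutConstraint.activate([
            key.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            key.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        ])
    }

    @objc func keyTapped() {
        // Insert text into the host app's current text field.
        textDocumentProxy.insertText("hello ")
    }

    override func textDidChange(_ textInput: UITextInput?) {
        // React to context changes, e.g. adapt to a dark keyboard appearance.
    }
}
```

As noted below, Apple polices where such a keyboard may type: secure password fields fall back to the system keyboard.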

As expected, developers have reacted enthusiastically to the end of silo hell. Phil Libin, Evernote’s CEO, sums up developer sentiment in the Ars Technica review:

“We’re most excited about extensions, widgets, TouchID APIs and interactive notifications. We’re all over all of that…This is a huge update for us. It feels like we got four out of our top five most wanted requests!”

Now, for the mandatory “To Be Sure” paragraph…

None of this is free. I don’t mean in the financial sense, but in terms of complexity, restrictions, adapting to new ways of doing old things as well as to entirely fresh approaches. While the relaxation of Apple’s “security state” strictures opens many avenues, it also heightens malware risk, something Apple is keenly aware of. In some cases the company will put the onus on the user, asking us to explicitly authorize the use of an extension. In other situations, as Charles Arthur points out in his WWDC article for The Guardian, Apple will put security restrictions on custom keyboards. Quoting Apple’s prerelease documentation:

“There are certain text input objects that your custom keyboard is not eligible to type into. First is any secure text input object [which is] distinguished by presenting typed characters as dots.
When a user taps in a secure text input object, the system temporarily replaces your custom keyboard with the system keyboard. When the user then taps in a nonsecure text input object, your keyboard automatically resumes.”

In part, the price to pay for the new freedoms will depend on Apple’s skills in building safeguards inside the operating system — that’s what all OSes strive for. Developers will also have to navigate a new labyrinth of guidelines to avoid triggering the App Store security tripwire.

That said, there is little doubt that the fall 2014 edition of iOS will be well received for both existing and new iDevices. Considering what Apple iOS developers were able to accomplish while adhering to the old dogma, we can expect more than simply more of the same when the new version of iOS is released.

Which brings us to Tim Cook and the stamp he’s put on Apple. Critics who moan that Apple won’t be the same now that Steve Jobs is gone forget the great man’s parting gift: “Don’t try to guess what I would have done. Do what you think is best.” With the Maps fiasco, we saw Cook take the message to heart. In a break with the past, Cook apologized for an Apple product without resorting to lawyerly caveats and justifications. In a real break with the past, he even recommended competing products.

We’ve also seen Cook do what he thinks is best in his changes to the executive team that he inherited from Jobs. Craig Federighi replaces 20-year NeXT/Apple veteran Scott Forstall; Angela Ahrendts is the new head of Retail; there’s a new CFO, Luca Maestri, and a new head of US Sales, Doug Beck. The transitions haven’t always been smooth — both Ahrendts’ and Beck’s immediate predecessors were Cook appointees who didn’t work out and were quickly dismissed. (Beck was preceded by Zane Browe, former CFO at United Airlines…a CFO in a Sales job?)

Inside the company, Cook is liked and respected. He’s seen as calmly demanding yet fair; he guides and is well supported by his Leadership Team. This isn’t what the PR office says, it’s what I hear from French friends who work there. More than just French, they’re hard-to-please Parisians…

[Video: “I Love Rien, I’m Parisien”]

…but they like Cook, the way he runs the show. (True to their nature, they save a few barbs for the egregious idiots in their midst.)

With this overall picture of corporate cultural health and WWDC success in mind, let’s turn to Yukari Iwatani Kane, the author of Haunted Empire: Apple After Steve Jobs.

On her Web page, Kane insists her book, exemplar of the doomed-without-Jobs attitude, is “hard-hitting yet fair”. That isn’t what most reviewers have to say. The Guardian’s Charles Arthur called it “great title, shame about the contents”; Time’s Harry McCracken saw it as “A Bad Book About Apple After Steve Jobs”; Jason Snell’s detailed review in Macworld neatly addresses the shortcoming that ultimately diminishes the book’s value:

“Apple after the death of Steve Jobs would be a fascinating topic for a book. This isn’t the book. Haunted Empire can’t get out of the way of its own Apple-is-doomed narrative to tell that story.”

Having read the book, I can respect the research and legwork this professional writer, previously at the Wall Street Journal, has put into her opus, but it’s impossible to avoid the feeling that Kane started with a thesis and then built an edifice on that foundation despite the incompatible facts. Even now she churlishly sticks to her negative narrative: Where last week’s successful WWDC felt like a confederation of engineers and application developers happily working together, Kane sees them as caretakers holding a vigil:

[Image: Kane’s tweet]

The reaction to Kane’s tweet was “hard-hitting yet fair”:

[Image: responses to Kane’s tweet]

Almost three years after Tim Cook took the helm, the company looks hale, not haunted.

I’ll give Cook the last word. His assessment of Kane’s book: “nonsense”.

JLG@mondaynote.com

 

The Beats Music Rorschach Blot

 

Apple has a long track record of small, cautious, unheralded acquisitions. Has the company gone off course with the hugely risky purchase of Beats Music and Electronics, loudly announced at an industry conference?

As Benedict Evans’ felicitous tweet put it, Apple’s $3B acquisition of Beats, the headphone maker and music streaming company, is a veritable Rorschach blot:

[Image: Benedict Evans’ tweet]

The usual and expected interpretations of Anything Apple – with the implied or explicit views of the company’s future – were on full display at last week’s Code Conference after the Beats acquisition was officially announced during the second day of the event. Two of the conference’s high-profile invitees, Apple’s SVP Craig Federighi and Beats’ co-founder, Dr. Dre (né André Young), quickly exited the program so all attention could be focused on the two key players: Eddy Cue, Apple’s Sr. VP of Internet Software and Services; and Jimmy Iovine, Beats’ other co-founder and freshly minted Apple employee. They were interviewed on stage by Walt Mossberg and Kara Swisher, the conference creators (59-minute video here).

Walt and Kara had booked Cue and Iovine weeks before Tim Bradshaw scooped the Apple/Beats story on May 8th in the Financial Times (the original FT article sits behind a paywall; TechCrunch version here). Was the booking a sign of prescience? smart luck? a parting gift from Katie Cotton as she retires as head of Apple PR? (And was Swisher’s warmly worded valentine to Cotton for her 18 years of service a quid pro quo acknowledgment?)

After the official announcement and the evening fireside chat, the Rorschach analysis began. Amidst the epigrams, which were mostly facile and predictable, one stood out with its understated questioning of culture compatibility:

‘Iovine: Ahrendts or Browett?‘ 

The “Browett”, here, is John Browett, the British executive who ran Dixons and Tesco, two notoriously middle-brow retail chains. Apple hired him in April 2012 to succeed Ron Johnson as the head of Apple Retail… and showed him the door seven months later, removed for a clear case of cultural incompatibility. When Browett tried to apply his estimable cost-cutting knowledge and experience to the Italian marble Apple Store, things didn’t work out — and the critics were quick to blame those who hired him.

Nothing of the sort can be said of Dame Angela Ahrendts. Now head of Apple’s physical and on-line stores, Ahrendts was lured from Burberry, a culturally compatible and Apple-friendly affordable luxury enterprise.

Will Iovine be a Browett or an Ahrendts?

In a previous Monday Note, I expressed concern for the cultural integration challenges involved in making the Beats acquisition work. What I learned from the on-stage interview is that Jimmy Iovine and Eddy Cue have known and worked with each other for more than ten years. Iovine says he’ll be coming to Cupertino ‘about once a month’, so my initial skepticism may have been overstated; Apple isn’t acquiring a company of strangers.

But are they acquiring a company that creates quality products? While many see Beats Music’s content curation as an important differentiator in the streaming business, one that would give a new life to its flagging music sales, others are not so sure. They find Beats Music’s musical choices uninspiring. I’m afraid I have to agree. I downloaded the Beats Music app, defined a profile, and listened for several hours while walking around Palo Alto or sitting at my computer. Perhaps it’s me, my age, or my degenerate tastes, but none of the playlists that Beats crafted for me delivered either the frisson of discovery or the pleasure of listening to an old favorite long forgotten. And my iPhone became quite hot after using the app for only an hour or so.

Regarding the headphones: They’re popular and sell quite well in spite of what The Guardian calls “lacklustre sound”. I tried Beats Electronics’ stylish Studio headphones for a while, but have since returned to the nondescript noise-canceling Bose QC 20i, a preference that was shared (exactly or approximately) by many at the conference.

There was no doubt, at the conference, that Apple understands there are problems with Beats, but there’s also a feeling that the company sees these problems as opportunities. An overheard hallway discussion about the miserable state of the iTunes application (too strongly worded to repeat here verbatim) neatly summed up the opportunity: ‘Keeping Beats as a separate group affords Cook and Cue an opening for independently developing an alternative to iTunes instead of trying to fix the unfixable.’ It’s worth noting that the Beats Music app is available on mobile devices only, and it appears there’s no plan to create a desktop version. This underlines the diminished role of desktops, and points out the possibility of a real mobile successor to the aging iTunes application.

Continuing with the blot-reading exercise, many members of the audience found it necessary to defend the $3B price tag. Some point out that since Apple’s valuation is about 3X its revenue, Beats’ purported $1.5B hardware revenue easily “justifies” the $3B number. (Having consorted with investment bankers at various moments of my business life, as an entrepreneur, a company director, and a venture investor, I know they can be trusted to explain a wide range of valuations. Apparently, Apple is paying $500M for the streaming business and $2.5B for the hardware part.)

My own reading is that the acquisition price won’t matter: If the acquisition succeeds, the price will be easily forgotten; if it fails, Apple will have bigger worries.

Ultimately, the Apple-Beats products and services we haven’t yet seen will do the talking.

–JLG@mondaynote.com

Peak PC. Intel Fork.

 

Propelled by Moore’s Law and the Internet, PCs have enjoyed four decades of strong growth, defying many doomsday prophecies along the way. But, with microprocessor performance flattening out, the go-go years have come to an end. Intel, the emperor of PC processors and a nobody in mobile devices, needs to react.

I’m suspicious of Peak <Anything> predictions. Some of us became aware of the notion of a resource zenith during the 1973 OPEC oil embargo, with its shocking images of cars lined up at gas stations (in America!):

[Photo: gas lines during the 1973 oil embargo]

This was Peak Oil, and it spelled doom to the auto industry.

We know what happened next: Cars improved in design and performance, manufacturers became more numerous. Looking at this bit of history through my geek glasses, I see three explanations for the rebound: computers, computers, and computers. Computer Assisted Design (CAD) made it easier to design new car models as variations on a platform; Volkswagen’s MQB is a good example. Massive computer systems were used to automate the assembly line and manage the supply chain. It didn’t take long for computers to work their way into the cars themselves, from the ECU under the hood to the processors that monitor the health of the vehicle and control the entertainment and navigation systems.

Since then, we’ve had repeated predictions of Peak Oil, only to be surprised by the news that the US will soon become a net oil exporter and, as Richard Muller points out in his must-read Physics for Future Presidents, we have more than a century of coal reserves. (Unfortunately, the book, by a bona fide, middle-of-the-road physicist, can’t promise us that physics will eventually push politics aside when considering the rise of CO2 in the atmosphere…)

I’ve heard similar End of The Go-Go Days predictions about personal computers since 1968 when my love affair with these machines started at HP France (I was lucky enough to be hired to launch their first desktop machine).

I heard the cry again in 1985 when I landed in Cupertino in time for the marked slowdown in Apple ][ sales. The never-before-seen round of layoffs at Apple prompted young MBAs, freshly imported from Playtex and Pepsi, to intone the It’s All Commodities Now dirge. I interpreted the cry (undiplomatically -- I hadn’t yet learned to speak Californian) as a self-serving It’s All Marketing Now ploy. In the meantime, engineers ignored the hand-wringing, went back to work, and, once again, proved that the technology “mines” were far from exhausted.

In 1988, a Sun Microsystems executive charitably warned me: “PCs are driving towards the Grand Canyon at 100 mph!” A subscriber to Sun’s The Network Is The Computer gospel, the gent opined that heavy-duty computing tasks would be performed by muscular computers somewhere (anywhere) on the network. Desktop devices (he confusingly called them “servers” because they were to “serve” a windowing protocol, X11) would become commodities no more sophisticated or costly than a telephone. He had no answer for multimedia applications that require local processing of music, video, and graphics, nor could he account for current and imminent mobile devices. His view wasn’t entirely new. In 1965, Herb Grosch gave us his Law, which told us that bigger computers provide better economics; smaller machines are uneconomical.

And yet, personal computers flourished.

I have vivid memories of the joy of very early adopters, yours truly included. Personal computers are liberating in many ways.

First, they don’t belong to the institution; there’s no need for the intercession of a technopriest; I can lift my PC with my arms, my brains, and my credit card.

Second, and more deeply, the PC is a response to a frustration, to a sense of something amiss. One of mankind’s most important creations is the symbol, a sign without a pre-existing meaning: X as opposed to a drawing of a deer on a cave wall. Strung together, these symbols show formidable power. The expressive and manipulative power of symbol strings runs through the Song of Songs, Rumi’s incandescent poetry, Wall Street greed, and quantum physics.

But our central nervous system hasn’t kept up with our invention. We don’t memorize strings well; we struggle with long division, let alone extracting cube roots in our heads.

The PC comes to the rescue, with its indefatigable ability to remember and combine symbol strings. Hence the partnership with an object that extends the reach of our minds and bodies.

Around 1994, the Internet came out of the university closet, gave the PC access to millions of servers around the world (thus fulfilling a necessary part of the Sun exec’s prophecy), and extended our grasp.

It’s been great and profitable fun.

But today, we once again hear Peak PC stories. Sales have gone flat, never to return:

[Chart: PC shipment forecast, 2014-2018]

This time, I’m inclined to agree.

Why?

Most evenings, my home-builder spouse and I take a walk around Palo Alto. Right now, this smallish university town is going through a building boom. Offices and three-layer retail + office + residence are going up all around University Avenue. Remodels and raze-and-build projects can be found in the more residential parts of town. No block is left unmolested.

I can’t help but marvel. None of this activity, none of Silicon Valley would exist without Moore’s Law, the promise made in 1965 that semiconductor density, and with it performance, would double roughly every 18 months. And, for the better part of 40 years, it did, and rained money on the tech ecosystem, companies and people. PCs, servers, embedded electronics, giant network routers, cars...they’ve all been propelled by Moore’s Law being upheld...until recently.

The 1977 Apple ][ had a 1MHz 8-bit processor. Today’s PCs and Macs reach 3.7GHz, a number that hasn’t changed in more than three years. This isn’t to say that Intel processors aren’t still improving, but the days when each new chip brought substantial increases in clock speed seem to be over.

One should never say never, but Moore’s Law is now bumping into the Laws of Physics. The energy needed to vibrate matter (electrons in our case) increases with frequency. The higher the clock frequency, the higher the power dissipation and the greater the heat that’s generated…and a PC can withstand only so much heat. Consider the cooling contraptions used by PC gamers when they push the performance envelope of their “rigs”:

[Photo: a water-cooling block for gaming PCs]

To work around the physical limits, Intel and others resort to stratagems such as “multiple cores”, more processors on the same chip. But if too many computations need the result of the previous step before moving forward, it doesn’t matter how many cores you have. Markitects have an answer to that as well: “speculative branch execution”, the use of several processors to execute possible next steps. When the needed outcome appears, the “bad” branches are pruned and the process goes forward on the already-computed good branch. It makes for interesting technical papers, but it’s no substitute for an 8GHz clock speed.
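The constraint described above is usually captured by Amdahl’s Law: if part of a workload is inherently serial, extra cores only speed up the remainder. A back-of-the-envelope sketch (Swift is used here simply to keep one language across this page; the 30% serial share is an arbitrary assumption):

```swift
// Amdahl's Law: speedup = 1 / (serial + (1 - serial) / cores)
func amdahlSpeedup(serialFraction: Double, cores: Double) -> Double {
    return 1.0 / (serialFraction + (1.0 - serialFraction) / cores)
}

// Suppose 30% of a workload must run one step after another.
for cores in [2.0, 4.0, 8.0, 64.0] {
    let speedup = (amdahlSpeedup(serialFraction: 0.3, cores: cores) * 100).rounded() / 100
    print("\(Int(cores)) cores -> \(speedup)x")
}
// Even with 64 cores the speedup stalls near 1/0.3, about 3.3x:
// more silicon, but no substitute for a faster clock.
```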

If we need confirmation of the flattening out of microprocessor progress, we can turn to Intel and the delays in implementing its Broadwell chips. The move to a 14-nanometer “geometry” — the term here denotes the size of a basic circuit building block — is proving more difficult than expected. And the design isn’t meant to yield faster processors, just less power-hungry ones (plus other goodies such as better multi-media processing).

One possible reaction to this state of affairs is to look at tablets as a new engine of growth. This is what Microsoft seems to be doing by promoting its Intel-inside Surface Pro 3 as a laptop replacement. But even if Microsoft tablets turn out to be every bit as good as Microsoft says they are, they aren’t immune to the flattening out of Intel processor performance. (I don’t have an opinion yet on the product — I tried to buy one but was told to wait till June 20th.)

Does this broaden the opening for ARM-based devices? Among their advantages is a cleaner architecture, one devoid of the layers of backward-compatibility silt that x86 devices need. ARM derivatives need less circuitry for the same computing task and, as a result, dissipate less power. This is one of the key reasons for their dominance in the battery-powered world of mobile devices. (The other is the customization and integration flexibility provided by the ARM ecosystem.) But today’s ARM derivatives run at lower speeds (a little above 1GHz for some) than Intel chips. Running at higher speeds will challenge them to do so without hurting battery life or adding the fan that Microsoft tablets need.

With no room to grow, PC players exit the game. Sony just did. Dell took itself private and is going through the surgery and financial bleeding a company can’t withstand in public. Hewlett-Packard, once the leading PC maker, now trails Lenovo. With no sign of turning its PC business around, HP will soon find itself in an untenable position.

Intel doesn’t have the luxury of leaving their game — they only have one. But I can’t imagine that Brian Krzanich, Intel’s new CEO, will look at Peak PC and be content with the prospect of increasingly difficult x86 iterations. There have been many discussions of Intel finally taking the plunge and becoming a “foundry” for someone else’s ARM-based SoC (System On a Chip) designs instead of owning x86 design and manufacturing decisions. Peak PC will force the Intel CEO’s hand.

JLG@mondaynote.com

Misunderstanding Apple

 

We’ve come to expect analysts and pundits to misunderstand Apple. More puzzling is when Apple misunderstands itself.

My three-week Road Trip of a Lifetime, driving all the way from Key West, FL to Palo Alto, was interrupted by a bout of pneumonia, low blood oxygen, paroxysmal cough and, most alarming, a loss of appetite. Thankfully, all indicators are looking good and I’m back walking Palo Alto’s leafy streets.

The succession of wheel time and downtime gave me an opportunity to contemplate two recent controversies: Fred Wilson’s prediction of Apple’s imminent fall, and rumors of Apple’s purchase of Beats Electronics. These are both manifestations of what I’ll call, for lack of a better term, Misunderstanding Apple.

First, Fred Wilson. At the recent TechCrunch Disrupt conference, the successful and articulate venture investor predicted that by 2020 Apple will no longer hold the #1 position in the tech world. They won’t even be in the top three. According to Wilson, Apple “doesn’t think about things the way they need to think about things”. Specifically, the company is “too rooted in hardware…[which] is increasingly becoming a commodity” and “Their stuff in the cloud is largely not good. I don’t think they think about data and the cloud.”

I’d be surprised by Wilson’s facile, insight-free truisms, except this isn’t the first time he’s shown a blind spot when considering Apple. Wilson is famous for dumping his Apple shares at $91 in January 2009; AAPL is now at $590 or so. (He also sold Google, which closed at $528 on Friday, for a split-adjusted $160. Perhaps there’s a difference between being a venture investor, an insider who watches and influences a young company, and an outsider subjected to forces and emotions outside of one’s control.)

Calling Apple “too rooted in hardware” misunderstands the company. From its inception, Apple has been in one and only one business: personal computers (which, today, includes smartphones and tablets). Indeed, Apple’s quarterly numbers show that the sale of personal computers makes up 87% of its revenue. Everything else that Apple does, from iTunes to the Apple Store, exists to make its smartphones, tablets, laptops, and desktops more useful, more pleasant. And this “everything else” includes the lovingly machined hardware of the MacBook Air and iPhone 5. If the supporting cast does its job well, the main acts will sell in larger numbers and at higher prices.

Customers don’t buy Apple “hardware” in the same way a weekend carpenter buys nails at the friendly neighborhood hardware store. What Fred Wilson seems to miss is that hardware is more than an inert “thing” for Apple: It’s a conduit to an entire ecosystem, and it can yield an enormous strategic advantage. One such example is the 64-bit A7 processor that took everyone by surprise: 64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months.

When the subject of commoditization comes up, I invite people to look at the cars they see in the street. Are the likes of Audi, BMW, and Mercedes being commoditized? Do their owners only care that the wheels are black and round? Serendipitously, someone called “SubstrateUnderflow” answers the question in a comment on Wilson’s blog:

“…when I look around at all the cars out there, from the high end models to the most utilitarian models, almost no one buys the base stripped versions. Key devices that are central to people’s lives, comfort and utility have enough emotional SubstrateUndertow to sustain premium pricing.”

The 30-year-old Mac business and its healthy margins (about 25% versus HP’s sub-5% for its PCs) show that Apple has successfully avoided the commoditized race to the bottom that has plagued Wintel devices and is likely to accelerate for smartphones.

Wilson’s criticism of Apple’s “stuff in the cloud”, on the other hand, carries some sting. As a user of Apple’s products and services, I’m often disappointed with Apple’s Cloud offerings. I find iMessage’s quirks irritating, I see a lack of proper synchronization between iBooks on Macs and iDevices, and I’m still waiting for the Cloud version of iWork to mature. But let’s turn to Horace Dediu for a crisp summary of Apple’s place in the Cloud:

“Not getting the cloud” means that in the last 12 months Apple obtained:
• 800 million iTunes users and
• an estimated 450 million iCloud users spending
• $3 billion/yr for end-user services plus
• $4.7 billion/yr for licensing and other income which includes
• more than $1 billion/yr paid by Google for traffic through Apple devices and
• $13 billion/yr in app transactions of which
• $9 billion/yr was paid to developers and
• $3.9 billion/yr was retained as operating budget and profit for the App Store.

In addition,
• more than $1 billion/yr in Apple TV (aka Apple’s Kindle) and video sales and
• $2.7 billion/yr in music download sales and
• $1 billion/yr in eBooks sold

In summary, iTunes, Software and Services has been growing between 30% and 40% for four years and is on its way to $30 billion/yr in transactions and sales for 2014.

Horace is right; Fred Wilson clearly hasn’t done the numbers.

———————

I was still on the road when I read about the rumored $3.2B acquisition of Beats Electronics, the company that began in the headphone business and then spawned a streaming music service.

I’m puzzled. If the rumors prove true, Apple may be guilty of misunderstanding itself.

The hardware side, headphones, is immaterial: The products may look good, but their audio quality is regularly panned. And the revenue, about $500M, doesn’t move the needle.

The current wisdom is that Apple is mostly interested in Beats Music, the subscription streaming service. But that business isn’t big, either; it has only attracted about 110K subscribers.

Maybe Apple is interested in Beats Music’s technology and its vision for the future of streaming and music curation. I took the time to watch Walt Mossberg’s interview of Jimmy Iovine in which the Beats co-founder gives hints about his plans. Iovine’s AI-with-a-human-touch solution for delivering “what comes next” is technically vague — and vaguely dystopian (“we’ll scrape your hard drive… we’ll know where you are tomorrow”). I’m not convinced.

We also have rumors that Iovine and Dr. Dre, Beats’ other co-founder, might become some kind of senior advisers to Apple management. Given what I’ve read about Dre’s troubles with the Law, including a battery charge that landed him in jail, and an assault on a female that was settled out of court, I’m troubled. How will this play inside and outside Apple?

I don’t see how such an acquisition would enhance Apple’s business model or reputation.

That said, I hope I’m as wrong as I was when I thought the iPod was doomed to fail against commoditized, yes, that word, MP3 players. I hadn’t seen iTunes behind the hardware, Cloud storage, the distribution and micro-payments infrastructure that would one day make the iPhone an App Phone.

I also see people whose intellect and motives I respect strongly supporting the rumored acquisition. Preeminent among them is Ben Thompson who, in his Stratechery blog, explores Why Apple Is Buying Beats. There, after positing that personal computers might have reached their peak, Ben asks whether Apple is in fact reinventing itself as a kind of fashion house [emphasis mine]:

“Or are we witnessing a reinvention, into the sort of company that seeks to transcend computing, demoting technology to an essential ingredient of an aspirational brand that identifies its users as the truly with it? Is Apple becoming a fashion house? Think about it: you have Jony Ive as all-up head of design, the equivalent of a Tom Ford or Donatella Versace. There is the hire of Angela Ahrendts – why would she leave the CEO position of Burberry for a Senior VP role? You have an iPhone framed as an experience, not a product. And now you acquire an accessory maker differentiated almost completely by its brand, not its inherent technical quality.”

And he ponders the Damned If You Do, Damned If You Don’t of such cultural change:

“Still, I can imagine the very thought of Apple positioning itself as a fashionable luxury brand is somewhat nauseating for many of my readers. It’s an understandable reaction, and one I somewhat share. I worry that Apple is losing what makes Apple, Apple, especially that desire to make the power of computing accessible for normal people. But I also know that stasis means stagnation, and over the long-run, death.”

To be continued…

JLG@mondaynote.com

Science Fiction: Mining My Own Exhaust

 

A few thoughts on Big Data, self-knowledge, and my hopes for the emergence of a new genre of services.

I’m about to fulfill an old fantasy — the Great American Road Trip. Over the next three weeks, we’ll be driving all the way from Key West, FL to Palo Alto. In that spirit, today I’ll luxuriate in another, more distant reverie: Mining my own data exhaust.

I’m spurred to this indulgence by the words of Satya Nadella, Microsoft’s new CEO, at an April 15th event in San Francisco [emphasis mine]:

“The core evolution of silicon, software and hardware is putting computing everywhere humans are present,” Nadella said. “And it’s generating a massive data exhaust of server logs, sensor data and unstructured social stream information. We can use that exhaust to create ambient intelligence for our users.”

Nadella’s right. I leave a cloud of data exhaust with my Web browsing, credit card purchases, cell phone use, monthly blood tests, pharmacy purchases, and airline trips. Server logs detail the length and breadth of my social interactions on Facebook and Google+… And I don’t have to be on a computer to add to the cloud: I’m tracked by toll passes and license plate snapshots as I drive my car. The car itself monitors my driving habits with its black box recording of my speed and direction. This list, far from exhaustive [no pun intended], is sobering – or exciting, full of possibilities.

Today, we’ll skip the Orwellian paranoia and fantasize about an alternate universe where I can “turn the gratis around”, where I can buy my data back.

Google, Facebook, and the like provide their services for free to induce us to lead them to the mother lode: Our cache of product preferences, search history, and web habits. They forge magic ingots from our personal data, sell the bullion to advertisers, and thus fuel the server farms that mine even more data. I’m not here to throw a monkey wrench into this business model; au contraire, I offer a modest source of additional revenues: I’d like to buy my data back. And I’ll extend that offer to any and all entities that mine my activities: For you, at a special price today, I’m buying my data.

(We all understand that this fantasy must take place in an alternate universe. If our legislators and regulators were beholden to us and not to Google, Verizon, and “Concast” [a willful typo from Twitter wags], they would have long ago made it mandatory that companies provide us with our own data exhaust.)

Pursuing this train of thought, one can conceive of brokers scouring the world for my exhausts — after having secured the right permissions from me, of course. Once this becomes an established activity, no particular feat of imagination is required to see the emergence of Big Data processing companies capable of merging and massaging the disparate flumes obtained from cell carriers, e-merchants, search engines, financial services and other service providers.

So far, especially because it lacks numbers and other annoying implementation details, the theory sounds nice. But to what end?

The impulse can be viewed as a version of the old Delphic injunction: Know Thyself, now updated as Know Thine Quantified Self: Quantify what I do with my body, time, money, relationships, traveling, reading, corresponding, driving, eating… From there, many derivations come to mind, such as probabilistic diagnoses about my health, financial situation, career, and marriage. Or I could put my data in turnaround, mandate a broker to shop facets of my refined profile to the top agencies.

Even if we set aside mounds of unresolved implementation details, objections arise. A key member of my family pointedly asks: How much do we really want to know about ourselves?

This reminds me of a conversation I once had with a politely cynical Parisian tailor. I ventured that he could help his customers choose a suit by snapping a picture and displaying it on an 80” flat-screen TV in portrait mode. My idea was that the large-scale digital picture would offer a much more realistic, more objective image than a look in the mirror. The customer would be able to see himself as others see him, what effect the new suit would produce – which, after all, is the point of new duds.

“No way,” said the Parisian fashionista, “are you nuts? My customers, you included,” he tartly added, “really don’t want the cruel truth about their aging bodies…”

Still, I’m curious. And not just about the shape and color of the data exhaust that I leave in my wake, about the truths — pleasant or embarrassing — that might be revealed. I’m curious about the types of companies, services, and business models that would emerge from this arrangement. Even more fascinating: How would the ability to mine and sell our own data affect our cultural vocabulary and social genetics?

JLG@mondaynote.com

PS: As offered here, I recently downloaded my Facebook data set. The data doesn’t appear to be very revealing, but that could be the result of my low Facebook involvement — I’m not a very active user. I’d be curious to see the size and detail level associated with a more involved participant.

PPS: I’ll be on the road, literally, for the next three weeks and may or may not be able to post during that time.

The iPad Is a Tease

 

As Apple is about to release its latest quarterly numbers, new questions arise about the iPad’s “anemic” growth. The answer is simple – but the remedies are not.

The iPad isn’t growing anymore. What happened? 

In anticipation of Apple’s latest quarterly numbers – they’ll be announced on April 23rd – the usual prerelease estimates swirl around the Web. You can find Yahoo’s summary of analysts’ estimates here; Paul Leitao’s Posts At Eventide provides a detailed and tightly reasoned history and forecast for the March 2014 quarter.

The consensus is that for the company as a whole, there won’t be any surprises: Apple will meet the guidance stated in its January 27th earnings call. Revenue will be down, as befits the quarter following the Christmas shopping frenzy, but profit per share (EPS) will be up a bit.

Boring. With one glaring exception:

[Chart: iPad year-on-year unit growth estimates]
(Source: The Braeburn Group)

In the same quarter of 2013, the iPad’s year-on-year growth was about 55%. Some of this phenomenal growth was due to a rebound from earlier iPad mini supply constraints, but that doesn’t explain the precipitous drop from 2013 to this year.

Are the iPad’s go-go years over?

As Philip Elmer-DeWitt reports on his Apple 2.0 site, this gloomy prediction appears to be the majority opinion among analysts. Elmer-DeWitt acknowledges that there are outliers — Horace Dediu comes in at the high end with an estimate of 21.8M units (and positive growth) — but “the consensus estimate of 19.3 million, would represent a 0.7% decline”.

It’s one thing for a product to increase in unit volume sales but still grow less than the overall market — that’s simply a loss of market share. And we know how fallacious share numbers can be in the absence of an honest disclosure of sales volumes. No, assuming the estimates are right, what we have here isn’t market share dilution, it isn’t a post-Christmas lull, it’s a year-to-year decline in absolute unit numbers.

Why?

I’ll offer an opinion: The iPad is a tease. Its meteoric debut raised expectations that it can’t currently meet.

To explain, let’s go back four years.

Steve Jobs’ last creation took us by surprise, price included, and was initially panned by many in the kommentariat, from Eric Schmidt to Dan Lyons (who subsequently recanted). But normal humans joyously took to the iPad. In 1984, one of Apple’s tag lines for the Mac was “Macintosh – the computer for the rest of us.” Decades later, the iPad was quickly perceived as a sort of second coming. As Macworld put it in June 2011: Now Apple’s really “for the rest of us”.

Indeed, the iPad wasn’t targeted at a particular type — or generation — of user. David Hockney has produced exquisite iPad “paintings”. Daniel Borel, Logitech’s co-founder, told me that his two-year-old grandson immediately “got” the iPad (even if it was just to play games, but…he’s two). Coming out of our breakfast meeting, I crossed paths with a couple of seniors — octogenarians, probably — who proudly told me that they were going to an iPad training session at the Palo Alto Apple Store.

The iPad rose and rose. It won legions of admirers because of its simplicity: No windows (no pun), no file system, no cursor keys (memories of the first Mac). Liberated from these old-style personal computer ways, the iPad cannibalized PC sales and came to be perceived as the exemplar Post-PC device.

But that truly blissful simplicity exacts a high price. I recall my first-day disappointment when I went home and tried to write a Monday Note on my new iPad. It’s difficult — impossible, really — to create a real-life composite document, one that combines graphics, spreadsheet data, rich text from several sources and hyperlinks. For such tasks, the Rest of Us have to go back to our PCs and Macs.

I realize there are iPad users who happily perform “productivity tasks” on their iPads. Most of them use a stand and keyboard sold in a number of guises. The number of different offerings is a testament to a real need. (We’ll note that Apple doesn’t seem eager to address this issue directly. They don’t offer an “iPad-sized” keyboard — the Bluetooth keyboard I use is fine for my iMac, but feels gargantuan when I pair it with my iPad. And Apple’s iPad Dock hasn’t been updated to work with the “Lightning” connector on the newer iPads.)

The iPad’s limitations extend beyond classic office productivity tasks. I just tried to build an itinerary for a long-postponed road trip, driving all the way from Key West, FL to Palo Alto. On a Mac, you can easily “print to PDF” to produce a map for each leg of the trip. Then you use the wonderful Preview app (I salute its author and dedicated maintainer) to remove unneeded pages, drag and drop, combine and rearrange the PDF files into a single document. Don’t try this on an iPad: How would you “print-to-PDF” a map page, let alone combine such pages?

Despite the inspiring ads, Apple’s hopes for the iPad overshot what the product can actually deliver. Although there’s a large number of iPad-only users, there’s also a substantial population of dual-use customers for whom both tablets and conventional PCs are now part of daily life.

I see the lull in iPad sales as a coming down to reality after unrealistic expectations, a realization that iPads aren’t as ready to replace PCs as many initially hoped.

In his introduction of the iPad in January, 2010, Jobs himself seemed a bit tentative when positioning his latest creation. Sitting in the Le Corbusier chair, Jobs stated that his new tablet would have to “find its place between the iPhone and the Mac”.

This “in-between place” is still elusive.

Microsoft tried to find that “in-between place”, and we know how well that worked. For the Redmond company, the iPad’s limitations were an opportunity: Simply emulate the charm and intuitiveness of the market-leading tablet and cater to the needs of the “professional” user. With its touch interface and keyboard, the Surface device sounded like the solution that had eluded Microsoft’s earlier Tablet PCs. In the field, customers didn’t like the dueling interfaces, nor the introduction of layers of complexity where simplicity had been promised. Surface tablets didn’t move the revenue needle and cost Microsoft a $900M write-down.

The iPad represents about 20% of Apple’s revenue; allowing iPad numbers to plummet isn’t acceptable. So far, Apple’s bet has been to keep the iPad simple, rigidly so perhaps, rather than creating a neither-nor product: No longer charmingly simple, but not powerful enough for real productivity tasks. But if the iPad wants to cannibalize more of the PC market, it will have to remove a few walls.

Specifically, the iPad is a computer, it has a file system, directories, and the like — why hide these “details” from users? Why prevent us from hunting around for the bits and bobs we need to assemble a brochure or a trip itinerary?

None of this is news to Apple execs, but they also know that success doesn’t depend on What, on a simple feature list. The next step in iPad growth will depend on How new features are integrated into the user experience. It’s a tricky game of the Best of Both Worlds…and it tripped up Microsoft.

When will we know? I have no idea. Perhaps at the WWDC this coming June.

JLG@mondaynote.com

The Browser Is The OS: 19 Years Later

 

So it was declared in the early days: Web apps will win over native apps. Why let the facts cloud an appealing theory?

Marc Andreessen, the Netscape co-founder, is credited with many bold, visionary claims such as “Everyone Will Have the Web” (ca. 1992), “Web Businesses Will Live in the Cloud” (1999), “Everything Will Be Social” (2004, four years before joining Facebook’s Board), and “Software Will Eat the World” (2009).

But not all of Andreessen’s predictions are as ringing and relevant. His 1995 proclamation that “The Browser Will Be the Operating System” still reverberates around the Web, despite the elusiveness of the concept.

The idea is that we can rid our computing devices of their bulky, buggy operating systems by running apps in the Cloud and presenting the results in a Web browser. The heavy lifting is performed by muscular servers while our lightweight devices do nothing more than host simple input/output operations. As a result, our devices will become more agile and reliable, they’ll be less expensive to buy and maintain, we’ll never again have to update their software.

The fly in the ointment is the word connected. As Marc Andreessen himself noted in a 2012 Wired interview [emphasis mine]:

[I]f you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model.

So what do we do until we have ubiquitous, high-speed wireless connectivity?

We must build off-line capabilities into our devices, local programs that provide the ability to format and edit text documents, spreadsheets, and presentations in the absence of a connection to the big App Engines in the Cloud. Easy enough, all you have to do is provide a storage mechanism (a.k.a. a file system), local copies of your Cloud apps, a runtime environment that can host the apps, a local Web server that your Browser can talk to… The inventory of software modules that are needed to run the “Browser OS” in the absence of a connection looks a lot like a conventional operating system… but without a real OS’s expressive power and efficiency.

For expressive power, think of media intensive applications. Photoshop is a good example: It could never work with a browser as the front end, it requires too much bandwidth, the fidelity of the image is too closely tied to the specifics of the display.

With regard to efficiency, consider the constant low-level optimizations required to conserve battery power and provide agile user interaction, none of which can be achieved in a browser plug-in.

Certainly, there are laudable arguments in support of The Browser Is The OS theory. For example: Unified cross-platform development. True, developing an app that will run on a standardized platform decreases development costs, but, let’s think again, do we really want to go for the lowest common denominator? A single standard sounds comfy and economical but it throttles creativity, it discourages the development of apps that take advantage of a device’s specialized hardware.

Similarly, a world without having to update your device because the Cloud always has the latest software is a comforting thought… but, again, what about when you’re off-line? Also, a growing number of today’s computing devices automatically update themselves.

In any case, the discussion may be moot: The people who pay our salaries — customers — blithely ignore our debates. A recent Flurry Analytics report shows that “Six years into the Mobile Revolution” apps continue to dominate the mobile Web. We spend 86% of our time using apps on our mobile devices and only 14% in our browsers:

[Chart: 86% of mobile time spent in apps, 14% in the browser]

…and app use is on the rise, according to the Flurry Analytics forecast for 2014:

[Chart: Flurry Analytics, apps vs. browser time, 2013 vs. 2014]

So how did Andreessen get it so wrong, why was his prediction so wide of the mark? It turns out he wasn’t wrong… because he never said “The Browser Will Be the Operating System”. Although it has been chiseled into the tech history tablets, the quote is apocryphal.

While doing a little bit of research for this Monday Note, I found a 1995 HotWired article, by Chip Bayers, strangely titled “Why Bill Gates Wants to Be the Next Marc Andreessen”. (Given Microsoft’s subsequent misses and Marc Andreessen’s ascendancy, perhaps we ought to look for other Chip Bayers prophecies…) The HotWired piece gives us a clear “asked and answered” Andreessen quote [emphasis mine]:

“Does the Web browser become something like an operating system?

No, it becomes a new type of platform. It doesn’t try to do the things an operating system does. Instead of trying to deal with keyboards, mouses, memory, CPUs, and disk drives, it deals with databases and files that people want to secure – transactions and things like that. We’re going to make it possible for people to plug in anything they want.”

Nearly two decades later, we still see stories that sonorously expound “The Browser Is The OS” theory. Just google the phrase and you’ll be rewarded with 275M results such as “10 reasons the browser is becoming the universal OS” or “The Browser Is The New Operating System”. We also see stories that present Google’s Chrome and Chromebooks as the ultimate verification that the prediction has come true.

The Browser Is The OS is a tech meme, an idea that scratches an itch. The nonquote was repeated, gained momentum, and, ultimately, became “Truth”. We’ll be polite and say that the theory is “asymptotically correct”… while we spend more energy figuring out new ways to curate today’s app stores.

JLG@mondaynote.com

TV Done Right: Still A Dream

 

As the strong reactions to even the slightest Apple TV rumor demonstrate, there’s a vigorous appetite for a simple, modern Internet TV experience. The technology is ready but carriers aren’t.

Last week started with Big Apple TV News in an authoritative-sounding Wall Street Journal article:

“Apple Inc. is in talks with Comcast Corp. about teaming up for a streaming-television service that would use an Apple set-top box and get special treatment on Comcast’s cables to ensure it bypasses congestion on the Web, people familiar with the matter say.”

Search for “Comcast” in a news aggregator such as Feedly (there are many other good choices), and you’ll see a wide range of reactions to the Apple-Comcast rumor. Given the heat the article generated, it’s odd that there has been zero follow-up from the main players — nothing from Apple and Comcast, no additional information in the WSJ or any other journal. When a deal of such importance is in the works, “people familiar with the matter” have a strong incentive to keep talking, to add color, to spin their side of the story. Of course, no one expects Apple to do much leaking, but the radio silence from Comcast spinmeisters is another matter entirely.

Philip Elmer-DeWitt offers the most likely explanation: The Wall Street Journal got played by someone intent on throwing a wrench into Comcast’s plan to acquire Time Warner’s cable operations. (This wouldn’t be the first time: Cellphone carriers have repeatedly used the WSJ to air their perennial Poor Me complaints about excessive smartphone subsidies.)

Echoes of the WSJ non-story ricocheted around the blogosphere. Some, such as this BBC article, make painful points about the abuse that US consumers undergo at the hands of broadband carriers:

[Chart: broadband cost comparison]

As a sharp-witted Be engineer liked to remark: “It costs more… But it does less.”

Carriers take too much money for a user-hostile experience simply because they can. In most locations, cable companies have little or no competition, so there’s no reason for them to do anything more than milk the most profit from a cheap infrastructure. As Apple Insider’s Neil Hughes reminds us, the user experience isn’t a priority for cable providers. Indeed, as I write this from Paris, I have to juggle set-top box restarts and malfunctioning secondary content subscriptions only reluctantly allowed by the main provider.

It doesn’t have to be that way. No miracle is required to make our Cable TV experience easy and gratifying.

Consider today’s cable arrangement, simplified for our discussion. A coax cable is strung from the street into your basement or crawl space. You plug the coax into a signal splitter, connect one output to your cable modem for Internet access, while the others feed the TVs in your household.

Next, you run an Ethernet cable from your modem to your WiFi access point and maybe you also run a wire from the access point to your “most trusted” computer. Upstairs, we see a set-top box, an Internet TV streaming device (Roku, Apple TV, Boxee, or other), and, if your TV is of a certain age, a digital adaptor.

That’s four or five devices that you have to connect and, when things go wrong, disconnect, power down, and restart in the “proper” order.

It’s only too easy to imagine how a next-generation Apple TV could collapse this maze of impenetrable interfaces into one box: Coax in, WiFi and HDMI out and, miracle, one and only one remote! This is something that Apple seems to have the taste and resources to do well.

There are no technical obstacles, no new technology is required, no new software platform, just a careful integration job. I realize I’m veering dangerously close to the “mere matter of implementation” deception, but regardless of the amount of work it would take to integrate the various technologies, the benefit to the user would make the engineering effort worth it.

And there are many benefits: We can throw away our DVRs as content becomes an app that we can stream whenever we want — the 60 Minutes iPad app is an elegant, flexible exemplar of the type. Rather than paying for a “package” of channels that are selected by the cable provider, we’ll be able to buy a la carte shows, series, and channels through iTunes or a similar content vendor. We’ll be able to watch the free-with-ads version of a show, or we can pay for the ad-free edition.

Some day, the status quo will break, perhaps as the result of a patient encirclement and infrastructure buildup — a better, vertically integrated Content Delivery Network, both very much compatible with Apple’s playbook. As the reactions to the (possibly planted) Apple-Comcast rumor amply demonstrate, users are becoming increasingly aware of the disconnect between the experience that the cable companies offer and TV Done Right.

JLG@mondaynote.com

Wearables Fever

 

While Google, Motorola, and Samsung seem eager to jump into the wearables market, Apple characteristically keeps its counsel – and wisely so: Smartwatches and other wearables produce more pageviews than profits.

Wearables are a danger to your health – your mental health, that is. Smartwatches and sensor-laden bracelets aren’t so new anymore — see Microsoft’s 2004 SPOT Watch — but the vernal equinox seems to have triggered a bout of Wearables Fever the likes of which we haven’t seen since the Tablet Fever of January, 2011, when 76 tablets were announced at the Consumer Electronics Show in Las Vegas. As so often happens with pandemics, there was a smaller outbreak, called the Dawn of the Tablet PC, days before the January 2010 iPad launch.

In this year’s derangement, we are witnessing the birth of another epoch-making class of product — the Wearable. As Wired sees it, for example, Jawbone Is Now the Startup Apple Should Fear Most.

In one respect, Jawbone’s devices are a lot like Apple’s. The company admires minimalism…[b]ut Apple’s minimalism is cold — all brushed metal and glass — while Jawbone’s is warm, squishy, and textured… There’s a chance Apple has designed itself into a corner. But for Jawbone, the future is full of possibility.

Then there’s this analysis, quoted and mocked by John Gruber [emphasis mine]:

Cadie Thompson, writing for CNBC, “Time Is Ticking for Apple to Announce an iWatch, Say Analysts”. Apple needs an iWatch sooner rather than later, or the company will risk losing its innovative edge to rivals, analysts say.

“They only have 60 days left to either come up with something or they will disappear,” said Trip Chowdhry, managing director at Global Equities Research. “It will take years for Apple’s $130 billion in cash to vanish, but it will become an irrelevant company… it will become a zombie, if they don’t come up with an iWatch.”

I’m guessing the ellipsis denotes when he paused for another line of coke.

Parenthetically, it would be wrong to imply that Mr. Chowdhry might be “incentivized” to shout from the rooftops by rewards more satisfying than pageviews — no allegations of stock manipulation complicity here — but I wonder about the games that he and other anal-ists play. As Philip Elmer-DeWitt pointedly noted in a CNN Money column last year, Mr. Chowdhry urged his clients to unload Apple stock for eight months and then blamed the CEO and CFO “for destroying Apple’s shareholder value”.

If you’re curious enough to look at Mr. Chowdhry’s spartan Global Equities Research site, you’ll see he claims to have Commission Sharing Agreements with Goldman Sachs, Merrill Lynch, Barclays, Jefferies, Morgan Stanley and JP Morgan. As the Wikipedia article points out, such agreements “ask that broker to allocate a portion of the commission directly to an independent research provider.” Here, one wonders what the word independent really means…

Back to Wearables: The announcements pile on.

Samsung tells us they’re moving their smartwatches away from Android to a version of Tizen, itself based on a version of the ubiquitous Linux.

Google announces Android Wear, a version of Android for smartwatches.

Motorola, soon to be a Lenovo brand, announces that its moto 360 smartwatch is “Coming Summer 2014 in a selection of styles” and provides these artful renderings:

[Renderings: “Moto Wrist” and “Moto Modern”]

(I write renderings because, as the Android Wear intro video indicates, these are simulated pictures. This doesn’t mean that the final product won’t be better looking – but we’re clearly not there yet.)

Why the haste? Did Tim Cook succeed in misdirecting Apple’s competition when he pronounced wearables a “very key branch of the tree”? Or is there a giant business to be had?

We have many unanswered questions.

First, paraphrasing Horace Dediu, there are the twin questions of For What and By Whom: For what job is a smartwatch “hired”, and by whom? If we look at phones as a model, we see two “employers”: Carriers hire smartphones to increase their ARPU; normal consumers use them as small, ubiquitous, always-connected personal computers.

Will this model work for smartwatches? We can almost certainly eliminate carriers from the equation: Subsidies are out of the question because a watch is unlikely to generate carrier revenue.

For us users, a smartwatch collects sensor data, connects to our smartphone, displays alerts, responds to touch and voice commands… and even tells us the time. These are all worthwhile functions that make for neat promo videos, but to keep users interested after the novelty wears off, smartwatches will have to do more than log the miles we’ve run, give us weather updates, and show us the name of the person who’s ringing the smartphone in our pocket. Put another way: We’re willing to pay a premium for our smartphones (whether directly or by contract) because of the huge range of features they provide and the enormous number of apps in the app stores. Will we be as durably aroused – and willing to part with substantial amounts of money – by (yet another) pulse rate app?

Another batch of questions: Since we no longer need a dedicated timepiece to tell us the time — our smartphone does that — Who wears a (dumb) watch these days, How, When, and Why?

Simplifying a bit, younger people don’t wear watches at all and older generations use them as jewelry — and gender-specific jewelry, at that. Furthermore, how many veteran watch-wearers wear the same watch all the time? Many of us own more than one watch, and select the appropriate timepiece (or two — or none at all) for the occasion. These aren’t trivial issues; they’re uncharted territory for mobile device makers and marketers.

Next question: How will smartwatch makers handle the delicate equilibrium between computing power and battery power? As smartwatches evolve and offer more features, a better display, and a more responsive user interface, they’ll need more computing power — and more computing power means a quicker battery drain. Will we put up with watches that run out of power at the end of the day? Will designers retard functionality in order to extend battery life to 24 hours and beyond… or make a smartwatch so big it’ll look like a miniature phone?

The power equilibrium question is why Samsung moved to a dedicated (and pared down) version of Tizen, and why Google did the same for Android Wear. All without giving much information about battery life.

Finally: Is there a business there? Here in the Valley, Pebble CEO Eric Migicovsky claims to have sold 400,000 watches since January, 2013. At around $150 each, that’s $60M in revenue — a real tribute to Eric’s long-standing belief in wearables (he’s been working at it for six years).

But even if you multiplied this number by 10, it would barely nudge the needle for large companies such as Samsung, Motorola/Lenovo, or Apple, which means these devices will be confined to the role of smartphone companion. They’ll help make money by enhancing the main product; they’re not going to be a $10B business in themselves.

As Charles Arthur writes in The Guardian, there are fewer than half a million smartwatches in use in the UK: “Wearable computing faces an uphill battle breaking through to the mainstream…”. Similarly, the Register doesn’t see any good, large-scale answers to the question. It calls Google wearables “A solution looking for a rich nerd”.

These challenges might explain why Apple doesn’t seem to have caught this Spring’s Wearables Fever. Smartwatches are destined to be ecosystem extensions, not The Next Big Thing.

JLG@mondaynote.com

One last thought before we close: Not all Ecosystem Extensions are equal. The no-longer-a-hobby Apple TV now brings substantial revenue and growth:

“Sales of the Apple TV are estimated to have grown by 80 percent in 2013, reaching around 10 million units for the calendar year, or some $1 billion worth of set-top boxes sold to end users.”

Horace Dediu puts a “Fortune 130” label on iTunes. By itself, with yearly gross revenue of $23.5B and growing 34%, iTunes is large enough to rank #130 in the Fortune list of the 500 largest US companies:

“On a yearly basis iTunes/Software/Services is nearly half of Google’s core business and growing slightly faster.”

While music sales are on the wane, apps and video (mostly Apple TV) show healthy growth. Compared to an Apple TV, how much would an iWatch add to the iTunes business? Apps? Content?

Apple seems wise to stay out of the game until it can make something more lasting than a novelty.

CarPlay Thoughts

 

Who wouldn’t want an iPhone- or Android-like experience in their car instead of today’s misbegotten navigation and entertainment systems? CarPlay’s answer looks nice – until one looks at the details.

Apple’s CarPlay has an air of inevitability. Previously dubbed “iOS in the Car”, CarPlay brings the iPhone’s aesthetics, ease of use, consistency, and universe of apps to the ugly and dumbfounding world of car navigation and entertainment systems.

Seven years after the iPhone launched the Smartphone 2.0 wave, Apple kickstarts another mobile revolution…

It’s an enticing, simple vision. Instead of today’s disjointed systems — which often cost in the $1,000 range, plus $249 for a DVD of updated maps — you get a screen the size of a small tablet running iOS apps with voice and touch control (on-screen and armrest), over-the-air map updates, open-ended flexibility… We have arrived.

I’ve struggled with dashboard electronics from German, Japanese, and French car makers (no electronics on the old family Chevrolets), and I’ve seen what happened to Ford when it tried to use Microsoft software for its Sync system. Replacing these hairballs with an iOS system only makes sense.

But sense and reality are still living apart.

[Image: CarPlay]

To start, the “iOS in the Car” phrase is misleading. The iOS device “in your car” is the iPhone or iPad that you’ve brought with you — Apple isn’t about to license iOS to automakers (which may be part of the reason why Apple changed the name to “CarPlay”).

And Apple isn’t going to try to take the place of suppliers such as Delphi, VDO, and Aisin by making subsystems for carmakers — it’s not in Apple’s DNA. Not that it would matter if they tried: Automakers have made an art of pinching fractions of cents from their suppliers’ prices; they’d never tolerate Apple’s margins.

CarPlay replicates your iDevice’s screen as H.264 video spewed through an intelligent Lightning cable connected to your car’s USB port. The video format is widely accepted, so the in-car device either understands it already, or can be updated to do so.

So far, so good. As many observers have pointed out, the idea is a wired echo of Apple’s AirPlay, the technology that connects your iDevices (and other compliant products) to your television via the Apple TV black puck. Complications may arise when you consider the various in-dash screen sizes and resolutions, actual uses of USB connections (my car’s USB connector is useless for anything other than charging my smartphone), and other mysterious incompatibilities that are beyond Apple’s control. Still, in general, screen replication demands little from the car maker. As with AirPlay and a dumb TV set, the intelligence stays inside the smartphone.
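
For the technically curious, here is a minimal sketch, in Swift, of the screen-replication idea: the phone compresses its frames to H.264 and hands the encoded samples to whatever transport carries them to the dashboard. To be clear, this is not Apple’s CarPlay protocol, which is private; it simply uses the public VideoToolbox framework to illustrate how readily an iDevice can produce the H.264 stream a head unit would decode. The ScreenEncoder class and its “send” hook are hypothetical names invented for this sketch.

import CoreMedia
import CoreVideo
import VideoToolbox

// Illustration only: H.264 encoding of captured frames, the way a
// screen-mirroring scheme (AirPlay-like, or CarPlay-like) would need it.
// The transport that ships the encoded samples to the car is out of scope.
final class ScreenEncoder {
    private let session: VTCompressionSession

    init?(width: Int32, height: Int32) {
        var newSession: VTCompressionSession?
        let status = VTCompressionSessionCreate(
            allocator: kCFAllocatorDefault,
            width: width,
            height: height,
            codecType: kCMVideoCodecType_H264,
            encoderSpecification: nil,
            imageBufferAttributes: nil,
            compressedDataAllocator: nil,
            outputCallback: nil,          // we use the per-frame output handler instead
            refcon: nil,
            compressionSessionOut: &newSession
        )
        guard status == noErr, let created = newSession else { return nil }
        session = created
        // A screen mirror must encode in real time, not for best offline quality.
        VTSessionSetProperty(session, key: kVTCompressionPropertyKey_RealTime, value: kCFBooleanTrue)
        VTCompressionSessionPrepareToEncodeFrames(session)
    }

    // Encode one captured frame; `send` receives the H.264 sample buffer that
    // a transport layer would ship to the in-dash display.
    func encode(_ frame: CVPixelBuffer, at time: CMTime, send: @escaping (CMSampleBuffer) -> Void) {
        VTCompressionSessionEncodeFrame(
            session,
            imageBuffer: frame,
            presentationTimeStamp: time,
            duration: .invalid,
            frameProperties: nil,
            infoFlagsOut: nil
        ) { _, _, sampleBuffer in
            if let sampleBuffer = sampleBuffer { send(sampleBuffer) }
        }
    }

    deinit {
        VTCompressionSessionInvalidate(session)
    }
}

The point of the sketch is the division of labor: the expensive work (rendering and compressing the interface) stays on the iDevice, while the head unit only has to decode a format it very likely supports already.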

The CarPlay proposal is much more limited than the Open Automotive Alliance, a Google initiative that implants a customized version of Android into a car’s electronics. (“Audi connect” is available today; we can expect similar collaborations with Honda, GM and Hyundai.) But if the in-car system runs Android (or QNX, as is often the case today), so much the better, from the carmaker’s point of view: Let Google or one of its partners do all the work to create an Android-based all-in-one car system and let Apple hitch a ride after the work is done. Serving both Android and iOS users is a no-brainer.

It sounds good… but I can’t help but harbor uneasy feelings about this whole “scene”.

To begin with, we have a clash of cultures. To be sure, Eddy Cue, Apple’s Senior VP of Internet Software and Services, is a dealmaking expert and, as a member of the Board of Ferrari, he has serious automotive industry connections. But the spirit that drives Apple is far from that which motivates automakers.

Automakers expect to be in control of everything that gets into their cars. The coup that Apple pulled off with the iPhone and AT&T — taking full control of the content, no crapware, iTunes only for media — isn’t going to happen with Mercedes-Benz, or BMW, or even Hyundai. Cars aren’t phones. We’re not going to see aftermarket Toyota CarPlay kits (let alone entire cars) in Apple Stores. Apple won’t get what it always strives for: Controlled Distribution.

Then there’s the F-word: Fragmentation. In-car electronics are a mess, a new culture grafted onto an old one, Silicon Valley and Detroit in a loveless marriage. Actually, that’s unfair: Under the hood, embedded electronics do wonders to improve the reliability, safety, and economy of our cars. But where the union breaks down is in the User Experience domain. Competent combustion management engineers and the accountants watching over their shoulders have no empathy for smartphone-loving drivers.

The meanderings get more twisted when we consider a key difference between Google and Apple. Google could tell Audi that they’ll pay, in some form, for the user data collected by Audi connect — but Audi already makes a lot of money; they don’t want to open that can of worms. As they say in their privacy agreement:

“We will not share information about you or your Audi vehicle that is connected with your use of Audi connect’s in-car features with third parties for their own purposes without your consent.”

But what would a legally-troubled, profit-starved automaker such as GM say in response to Google’s offer to subsidize the in-car system?

Apple hasn’t played that game.

An all-in-one navigation/communications/entertainment system is a pleasant dream; it feels “right”. But the technical, business model, and cultural obstacles could make for a long, arduous march.

CarPlay could be a very smart way to hitch a ride on many in-car systems without having to struggle with their design and cost challenges, yet another ecosystem extension play.

JLG@mondaynote.com