The Sweet Spot On Apple’s Racket

 

iPad sales are falling – but the sky is not. We’re merely dealing with a healthy case of expectations adjustment.

The tablet computer has always felt inevitable. The desire to harness the power of a computer in the comfortable form of a letter-size tablet with a keyboard, or perhaps a stylus for more natural interaction — or why not both? — has been with us for a very long time. Here we see Alan Kay holding a prototype of his 1972 Dynabook (the photo is from 2008):

[Photo: Alan Kay holding a Dynabook prototype]

(credit: http://en.wikipedia.org/wiki/Dynabook)

Alan prophetically called his invention “a personal computer for children of all ages”.

For more than 40 years, visionaries, entrepreneurs, and captains of industry have whetted our appetite for such tablets. Before it was recast as a PDA, a Personal Digital Assistant, Steve Sakoman’s Newton was a pen-based letter-size tablet. Over time, we saw the GRiDPad, Jerry Kaplan’s and Robert Carr’s Go, and the related EO Personal Communicator. And, true to its Embrace and Extend strategy, Microsoft rushed a Windows for Pen Computing extension into Windows 3.1.

These pioneering efforts didn’t succeed, but the hope persisted: ‘Someone, someday will get it right’. Indeed, the tablet dream got a big boost from no less than Bill Gates when, during his State of The Industry keynote speech at Comdex 2001 (Fall edition), Microsoft’s chairman declared that tablets were just around the corner [emphasis mine]:

“The Tablet takes cutting-edge PC technology and makes it available wherever you want it, which is why I’m already using a Tablet as my everyday computer. It’s a PC that is virtually without limits — and within five years I predict it will be the most popular form of PC sold in America.”

Unfortunately, the first Tablet PCs, especially those made by Toshiba (I owned two), are competent but unwieldy. All the required ingredients are present, but the sauce refuses to take.

Skip ahead to April 2010. The iPad ships and proves Alan Kay right: The first experience with Apple’s tablet elicits, more often than not, a child-like joy in children of all ages. This time, the tablet mayonnaise took and the “repressed demand” finally found an outlet. As a result, tablets grew even faster than PCs ever did:

[Chart: tablet adoption, the fastest ramp ever for a computing device]

(Source: Mary Meeker’s regular Internet Trends 2014 presentation, always long, never boring)

In her 2013 report, Meeker showed iPads topping the iPhone’s phenomenal growth, climbing three times faster than its more pocketable sibling:

[Chart: iPad unit growth, three times the iPhone’s]

(Source: Mary Meeker Internet Trends 2013)

There were, however, two unfortunate aspects to this rosy picture.

First, there was the Post-PC noise. The enthusiasm for Android and iOS tablets, combined with the end of the go-go years for PC sales, led many to decree that we had finally entered the “Post-PC” era.

Understandably, the Post-PC tag, with its implication that the PC is no longer necessary or wanted, didn’t please Microsoft. As early as 2011, the company was ready with its own narrative which was delivered by Frank Shaw, the company’s VP of Corporate Communications: Where the PC is headed: Plus is the New “Post”. In Microsoft’s cosmos, the PC remains at the center of the user’s universe while smartphones and tablets become “companion devices”. Reports of the PC’s death are greatly exaggerated, or, as Shaw puts it, with a smile, “the 30-year-old PC isn’t even middle aged yet, and about to take up snowboarding”.

(Actually, the current debate is but a new eruption of an old rash. “Post-PC” seems to have been coined by MIT’s David Clark around 1999, causing Bill Gates to pen a May 31st, 1999 Newsweek op-ed titled: Why the PC Will Not Die…)

Both Bill and Frank are right – mostly. Today’s PC, the descendant of the Altair 8800 for which Gates programmed Microsoft’s first Basic interpreter, is alive and, yes, it’s irreplaceable for many important tasks. But classical PCs — desktops and laptops — are no longer at the center of the personal computing world. They’ve been replaced by smaller (and smallest) PCs — in other words, by tablets and smartphones. The PC isn’t dead or passé, but it is shape-shifting.

There was a second adverse consequence of the iPad’s galloping growth: Expectations ran ahead of reality. Oversold or overbought, it doesn’t matter: the iPad and its competitors promised more than they could deliver. Our very personal computers — our tablets and smartphones — have assumed many of the roles that previously belonged to the classical PC, but there are some things they simply can’t do.

For example, in an interview with the Wall Street Journal, Tim Cook confides that “he does 80% of the work of running the world’s most valuable company on an iPad.” Which is to say Tim Cook needs a Mac for the remaining 20%… but the WSJ quote doesn’t tell us how important that remaining 20% is.

We now come to the downward trend in iPad unit sales: -2.29% for the first quarter of calendar year 2014, compared to the same quarter a year earlier. Even more alarming, unit sales are down 9% for the quarter ending in June. Actually, this seems to be an industry-wide problem rather than an Apple-specific trend. In an exclusive Re/code interview, Best Buy CEO Hubert Joly says tablet sales are “crashing”, and sees hope for PCs.

Many explanations have been offered for this phenomenon, the most common of which is that tablets have a longer replacement cycle than smartphones. But according to some skeptics, such as Peter Bright in an Ars Technica op-ed, there’s a much bigger problem [emphasis mine]:

“It turns out that tablets aren’t the new smartphone…[t]hey’re the new PC; if you’ve already got one, there’s not much reason to buy a new one. Their makers are all out of ideas and they can’t make them better. They can only make them cheaper.”

Bright then concludes:

“[T]he smartphone is essential in a way that the tablet isn’t. A large screen smartphone can do…all the things a tablet can do… Who needs tablets?”

Hmmm…

There is a simpler – and much less portentous – explanation. We’re going through an “expectations adjustment” period in which we’ve come to realize that tablets are not PC replacements. Each personal computer genre carries its own specifics; each instills unique habits of the body, mind, and heart; none of them is simply a “differently sized” version of the other two.

The realization of these different identities manifests itself in Apple’s steadfast refusal to hybridize, to make a “best of both worlds” tablet/laptop product.

Microsoft thinks otherwise and no less steadfastly (and expensively) produces Surface Pro hybrids. I bought the first generation two years ago, skipped the second, and recently bought a Surface Pro 3 (“The tablet that can replace your laptop”). After using it daily for a month, I can only echo what most reviewers have said, including Joanna Stern in the WSJ:

“On its third attempt, Microsoft has leapt forward in bringing the tablet and laptop together—and bringing the laptop into the future. But the Pro 3 also suffers from the Surface curse: You still make considerable compromises for getting everything in one package.”

Trying to offer the best of tablets and laptops in one product ends up compromising both functions. In my experience, too many legacy Windows applications work poorly with my fingers on the touch screen. And the $129 Type Cover is a so-so keyboard and a poor trackpad. Opinions will differ, of course, but I prefer using Windows 8.1 on my Mac. We’ll see how the upcoming Windows 9, code-named Threshold, will cure the ills of what Mary Jo Foley, a well-connected Microsoft observer, calls Vista 2.0.

If we consider that Mac unit sales grew 18% last quarter (year-to-year), the company’s game becomes clear: The sweet spot on Apple’s racket is the set of customers who, like Tim Cook, use MacBooks and iPads. It’s by no means the broadest segment, just the most profitable one. Naysayers will continue to contend that the prices of competing tablets are preordained to crash and will bring ruin to Apple’s Affordable Luxury product strategy…just as they predicted netbooks would inflict damage on MacBooks.

As for Peter Bright’s contention that “[tablet] makers are all out of ideas and they can’t make them better”, one can easily see ways in which Google, Lenovo, Microsoft, Apple, and others could make improvements in weight, speed, input methods, system software, and other factors I can’t think of. After we get over the expectations adjustment period, the tablet genre will continue to be innovative, productive, and fun – for children of all ages.

JLG@mondaynote.com

App Store Curation: An Open Letter To Tim Cook

 

With one million titles and no human guides, the Apple App Store has become incomprehensible for mere mortals. A simple solution exists: curation by humans instead of algorithms.

Dear Tim,

You know the numbers better than anyone — I don’t need to quote them to you — but we all know that the iOS App Store is a veritable gold mine. Unfortunately, the App Store isn’t being mined in the best interests of Apple’s customers and developers, nor, in the end, in the interests of the company itself.

The App Store may be a gold mine, but it’s buried in an impenetrable jungle.

Instead of continuing with this complaint, I’ll offer a suggestion: Let humans curate the App Store.

Instead of using algorithms to sort and promote the apps that you permit on your shelves, why not assign a small group of adepts to create and shepherd an App Store Guide, with sections such as Productivity, Photography, Education, and so on? Within each section, this team of respected but unnamed (and so “ungiftable”) critics will review the best-in-class apps. Moreover, they’ll offer seasoned opinions on must-have features, UI aesthetics, and tips and tricks. A weekly newsletter will identify notable new titles, respond to counter-opinions, perhaps present a developer profile, footnote the occasional errata and mea culpa…

The result will be a more intelligible App Store that makes iOS users happier.

If I’m so convinced, why don’t I drive it myself? You might recall that I offered to do so — for free — in a brief lobby conversation at the All Things D conference a couple of years ago. The ever-hovering Katie Cotton gave me the evil eye and that was the end of the exchange.

I look back on my years at Apple with a certain affection, and would be happy to repay the company for what it did for me, so, yes, I would do it for free… but I can’t bankroll a half dozen tech writers, nor can I underwrite the infrastructure costs. And it won’t pay for itself: As an independent publication (or, more likely, an app), an App Store Guide isn’t financially viable. We know it’s next to impossible to entice people to pay for information and, as the Monday Note proves, I have no appetite for becoming a nano-pennies-per-pageview netwalker.

So, the App Store Guide must be an Apple publication, a part of its ecosystem.

Best,

JLG

PS: We both understand that ideas are just ideas, they’re not actual products. As Apple has shown time and again — and most vividly with the 30-year-old tablet idea vs. the actual iPad — it’s the product that counts. If you see the wisdom of a human-curated Apple App Guide, and I hope you do, I will not seek credit.

——————————-

Regular Monday Note readers will remember I already tilted at the App Store curation windmill: Why Apple Should Follow Michelin and the tongue-in-cheek Google’s Red Guide to the Android App Store. Who knows, the third time might be the charm.

To play devil’s advocate, let’s consider a developer’s bad reaction to an Apple App Guide review. Let’s say my MyNewApp gets a thumbs down in the Productivity section of the Guide. I’m furious; I write Tim or Eddy Cue an angry letter, I huff and puff, threaten to take my business elsewhere — to Windows Phone, for example. I exhort my friends, family, and satisfied customers to contribute to a letter-writing campaign…

Why risk this sort of backlash? Particularly when today’s formula of “featuring” apps seems to be working:

[Screenshot: the App Store’s “featured” apps section]

But…does it really work all that well? Today’s way of choosing this app over that one already upsets the non-chosen. Further, the stars used to “measure” user feedback are known to be less than reliable. A thoughtful, detailed, well-reasoned review would serve customers and developers alike.

This leads us to the Guide’s most important contribution to the app universe: Trust. An Apple-sponsored App Guide can be trusted for a simple reason: The company’s one and only motive is to advance its users’ interests by making the App Store more trustworthy, more navigable. As for developers, they can rely on a fair and balanced (seriously) treatment of their work. The best ones will be happier and the “almost best” others will see an opportunity to get their improved work noticed in a future review cycle.

There is also the temptation to shrug the suggestion off with the customary ‘Don’t fix it, it’s not broken.’ Sorry, no, it is broken. See what Marco Arment, a successful Apple developer, says on his blog [emphasis mine]:

“Apple’s App Store design is a big part of the problem. The dominance and prominence of “top lists” stratifies the top 0.02% so far above everyone else that the entire ecosystem is encouraged to design for a theoretical top-list placement that, by definition, won’t happen to 99.98% of them. Top lists reward apps that get people to download them, regardless of quality or long-term use, so that’s what most developers optimize for. Profits at the top are so massive that the promise alone attracts vast floods of spam, sleaziness, clones, and ripoffs.”

and…

“Quality, sustainability, and updates are almost irrelevant to App Store success and usually aren’t rewarded as much as we think they should be, and that’s mostly the fault of Apple’s lazy reliance on top lists instead of more editorial selections and better search.

The best thing Apple could do to increase the quality of apps is remove every top list from the App Store.”

We can now turn to my own biases.

Why do I care? Good question. I’m now 70 and could just sit in zazen and enjoy the show. And there’s a lot of show to enjoy: The tech industry is more exciting now than when I was a rookie at HP France in 1968. But in today’s app stores, the excitement fades — and I’m not just talking about Apple, Android’s Google Play is every bit as frustrating. I see poorly exploited gold mines where quantity obscures quality and the lack of human curation ruins the Joy of Apps. There are caves full of riches but, most of the time, I can’t find a path to the mother lode.

Is it a lack of courage in anticipation of imagined protests? Hunger sated by too much success too soon? An addiction to solving all problems by algorithm instead of by human judgment?

I hope it’s none of these, and that we’ll soon see a newsletter/blog and a reasoned, regularly enriched guide that leads us to the better App Store titles.

—JLG

 

Macintel: The End Is Nigh

When Apple announced its 64-bit A7 processor, I dismissed the speculation that this could lead to a switch away from Intel chips for the Macintosh line, to a homegrown “desktop-class” chip. I might have been wrong.

“I don’t know exactly when, but sooner or later, Macs will run on Apple-designed ARM chips.” Thus spake Matt Richman in a 2011 blog post titled “Apple and ARM, Sitting in a Tree”. Richman explained why, after a complicated but ultimately successful switch from PowerPC chips to Intel processors in 2005, Apple would make a similar switch, this time to ARM-based descendants of the A4 chip designed by Apple and manufactured by Samsung.

Cost is the first reason invoked for the move to an An processor:

“Intel charges $378 for the i7 chip in the new high-end 15 inch MacBook Pro. They don’t say how much they charge for the i7 chip in the low-end 15 inch MacBook Pro, but it’s probably around $300. …When Apple puts ARM-based SoC’s in Macs, their costs will go down dramatically. ”

We all know why Intel has been able to command such high prices. Given two microprocessors with the same manufacturing cost, power dissipation, and computing power, but where one runs Windows and the other doesn’t, which chip will achieve the higher market price in the PC market? Thus, Intel runs the table: it tells clone makers which new x86 chips they’ll receive, when they’ll receive them, and, most important, how much they’ll cost. Intel’s margins depend on it.

ARM-based processors, on the other hand, are inherently simpler and therefore cost less to make. Prices are driven even lower because of the fierce competition in the world of mobile devices, where the Wintel monopoly doesn’t apply.

[Photo: Apple’s A7 chip]

Cost is the foremost consideration, but power dissipation runs a close second. The aging x86 architecture is beset by layers of architectural silt accreted from a succession of additions to the instruction set. Emerging media formats demand new extensions, while obsolete constructs must be maintained for the sake of Microsoft’s backward compatibility religion. (I’ll hasten to say this has been admirably successful for more than three decades. The x86 nickname used to designate Wintel chips originates from the 8086 processor introduced in 1978 – itself a backward-compatible extension of the 8080…)
Because of this excess baggage, an x86 chip needs more transistors than its ARM-based equivalent, and thus it consumes more power and must dissipate more heat.

Last but not least, Richman quotes Steve Jobs:

“I’ve always wanted to own and control the primary technology in everything we do.”

Apple’s leader has often been criticized for being too independent and controlling, for ignoring hard-earned industry wisdom. Recall how Apple’s decision to design its own processors was met with howls of protest, accusations of arrogance, and the usual predictions of doom.

Since then, interest in another Grand Processor Switch has been alive and well. Googling “Mac running on ARM” gets you close to 10M results. (When you Bing the same query, you get 220M hits — 22x Google’s results. SEO experts are welcome to comment.)

Back to the future…

In September 2013, almost a year ago already, Apple introduced the 64-bit A7 processor that powers new iPhones and iPads. The usual suspects pooh-poohed Apple’s new homegrown CPU, and I indulged in a little fun skewering the microprocessor truthers: 64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months. Towards the end of the article, unfortunately, I dismissed the speculation that Apple An processors would someday power the Mac. I cited iMacs and Mac Pros — the high end of the product line — as examples of what descendants of the A7 couldn’t power.

A friend set me straight.

In the first place, Apple’s drive to own “all layers of the stack” continues unabated years after Steve’s passing. As a recent example, Apple created its own Swift programming language that complements its Xcode IDE and Clang/LLVM compiler infrastructure. (For kremlinology’s sake I’ll point out that there is an official Apple Swift blog, a first in Apple 2.0 history if you exclude the Hot News section of the apple.com site. Imagine what would happen if there was an App Store blog… But I digress.)

Secondly, the Mac line is left hanging by the late delivery of Intel’s Broadwell x86 processors. (The delay stems from an ambitious move to a bleeding-edge fabrication technology that shrinks the basic building block of a chip to 14 nanometers, down from 22 nanometers in today’s Haswell chips.) Of course, Apple and its An semiconductor vendor could encounter similar problems – but the company would have more visibility, more control of its own destiny.

Furthermore, it looks like I misspoke when I said an An chip couldn’t power a high-end Mac. True, the A7 is optimized for mobile devices: battery life, a small memory footprint, graphics for screens smaller than an iMac’s or a Retina MacBook Pro’s. But having shown its muscle in designing a processor for the tight constraints of mobile devices, why would we think that the team that created the most advanced smartphone/tablet processor couldn’t now design a 3GHz A10 machine optimized for “desktop-class” (a term used by Apple’s Phil Schiller when introducing the A7) applications?

If we follow this line of reasoning, the advantages of ARM-based processors vs. x86 devices become even more compelling: lower cost, better power dissipation, natural integration with the rest of the machine. For years, Intel has argued that its superior semiconductor design and manufacturing technology would eventually overcome the complexity downsides of the x86 architecture. But that “eventually” is getting a bit stale. Other than a few showcase design wins that have never amounted to much in the real world, x86 devices continue to lose to ARM-derived SoC (System On a Chip) designs.

The Mac business is “only” $20B a year, while iPhones and iPads generate more than 5 times that. Still, $20B isn’t chump change (HP’s Personal Systems Group generates about $30B in revenue), and unit sales are up 18% in last June’s numbers vs. a year ago. Actually, Mac revenue ($5.5B) approaches the iPad’s flagging sales ($5.9B). Today, an 11” MacBook Air costs $899 while a 128GB iPad Air goes for $799. What would happen to the cost, battery life, and size of an A10-powered MacBook Air? And so on for the rest of the Mac line.

By moving to ARM, Apple could continue to increase its PC market share and scoop much of the profits – it currently rakes in about half of the money made by PC makers. And it could do this while catering to its customers in the Affordable Luxury segment who like owning both an iPad and a Mac.

While this is entirely speculative, I wonder what Intel’s leadership thinks when contemplating a future where their most profitable PC maker goes native.

JLG@mondaynote.com

———-

Postscript: The masthead on Matt Richman’s blog tells us that he’s now an intern at Intel. After reading several of his posts questioning the company’s future, I can’t help but salute Intel management’s open mind and interest in tightly reasoned external viewpoints.

And if it surprises you that Richman is a “mere” intern, be aware that he was all of 16 years old when he wrote the Apple and ARM post. Since then, his blog has treated us to an admirable series of articles on Intel, Samsung, Blackberry, Apple, Washington nonsense – and a nice Thank You to his parents.

 

News on mobile: better to be a Danish publisher than a Japanese one

 

This is the second part of our Mobile Facts to Keep in Mind series (see last week’s Monday Note – or here on Quartz). Today, a few more basic trends and a closer look at healthy markets for digital news.

Last week, we spoke about the preeminence of mobile applications. Not all readers agree, of course, but I found more data to support the finding; among many sources, the remarkable Reuters Institute Digital News Report (PDF here) is worth reading:

47% of smartphone users say they use mainly apps for news

According to the report, this figure has risen by 6 percentage points in just one year. By contrast, 38% of news consumption happens via a browser — a share that is losing ground: down 4 points in a year.

The trend is likely to accelerate when demographics are taken into account: On smartphones, the most active groups are the 18-24s and the 35-44s; on tablets, the most active group is the 45-54 segment.

Platform usage varies with local market share, but when it comes to paying for news, Apple leads the game:

iOS users are 1.5x more likely to pay for news in the US
and 2x more likely in the UK than Android or other users

Here is the bad part, though. Again based on the Reuters report, the use of smartphones does narrow the range of news sources. More than ever, the battle for the first screen is crucial.

Across the ten countries surveyed,
37% of mobile users rely on a single news source
vs. 30% of PC users

In the UK, the trend is even stronger, with 55% of mobile users relying on a single news source. This goes along with good news for those who still defend original news production: mobile news consumption is quite focused on legacy media. The BBC app crushes the competition, with 67% of respondents saying they used it the previous week vs. 25% for Sky; MSN and Yahoo trail with 2% and 7% respectively.

If you want to survey a healthy digital news market, go to Denmark


A Viking logo (from the TV Series) as viewed by the Brand New blog;
note the ancient reference to technology…

Not only does Denmark rank among the best countries in which to live and develop a business, but when it comes to digital news, it leads the pack in several ways:

– Despite the digital tsunami, Denmark retains many strong media brands. As a result, legacy media are the prime way of accessing digital news. And since Danish media did well embracing new platforms, they enjoyed similar success on social networks, funneling readers to their properties.
The opposite holds for France and Germany, where the transition is much slower; in those countries digital users rely much more on search to reach news brands. Two side effects ensue: News readers are more accidental and therefore generate a much lower ARPU; and the greater reliance on Google is problematic (hence the call to arms in France and Germany against the search engine giant).

– Because of the strength of its traditional media brands, the Danish news market has left very little oxygen for pure players: They account for only 10% of weekly digital news, vs. 39% in the US and 46% in Japan, where legacy media have been severely hit.

– Danes are the heaviest users of both smartphones and tablets to access news.

– They use mobile apps more than anywhere else: 19%, vs. 15% for the US and 12% for Germany.

– They are mostly Apple users: 58% said they used an iOS device to access news in the last week (vs. 28% in Germany), hence a better ARPU for mobile publishers.

– Danish news consumers also overlap their devices more than in any other country: 79% use a PC, 61% a smartphone, and 39% a tablet; only 24% use a PC alone for news. In Japan, by contrast, 58% admit to using only a PC for their news diet; there, smartphone and tablet use for news is respectively one half and one third of Denmark’s.

– In Danish public transportation, smartphones have overtaken print as the main news vector, 69% vs. 21% of usage.

We all know where to seek inspiration for our digital news strategies.

frederic.filloux@mondaynote.com

Microsoft’s New CEO Needs An Editor

 

Satya Nadella’s latest message to the troops – and to the world – is disquieting. It lacks focus, specifics, and, if not soon sharpened, his words will worry employees, developers, customers, and even shareholders.

As I puzzled over the public email Microsoft’s new CEO sent to his troops, Nicolas Boileau’s immortal dictum came to mind:

Whatever is well conceived is clearly said,
And the words to say it flow with ease.

Clarity and ease are sorely missing from Satya Nadella’s 3,100 plodding words, which were supposed to paint a clear, motivating future for 127,000 Microsoftians anxious to know where the new boss is leading them.

[Photo: Satya Nadella at LeWeb Paris 2013]

Nadella is a repeat befuddler. His first email to employees, sent just after he assumed the CEO mantle earlier this year, was filled with bombastic and false platitudes:

“We are the only ones who can harness the power of software and deliver it through devices and services that truly empower every individual and every organization. We are the only company with history and continued focus in building platforms and ecosystems that create broad opportunity.”

(More in the February 9th, 2014 Monday Note)

In his latest message, Nadella treats us to more toothless generalities:

“We have clarity in purpose to empower every individual and organization to do more and achieve more. We have the right capabilities to reinvent productivity and platforms for the mobile-first and cloud-first world. Now, we must build the right culture to take advantage of our huge opportunity. And culture change starts with one individual at a time.”

Rather than ceding to the temptation of quoting more gems, let’s turn to a few simple rules of exposition.

First, the hierarchy of ideas:

[Diagram: four layers, from top to bottom: Identity/Culture, Goals, Strategies, Plan]

This admittedly simplistic diagram breaks down an enterprise into four layers and can help diagnose thinking malfunctions.

The top layer deals with the Identity or Culture — I use the two terms interchangeably as one determines the other. One level down, we have Goals, where the group is going. Then come the Strategies or the paths to those goals. Finally, we have the Plan, the deployment of troops, time, and money.

The arrow on the left is a diagnostic tool. It reminds us that as we traverse the diagram from Identity to Plan, the number of words that we need to describe each layer increases. It should only take a few words to limn a company’s identity (Schlumberger, oil services; Disney, family entertainment); describing the company’s goals will be just a tad more verbose (“in 5 years’ time we’ll achieve $X EPS, Y% revenue growth and Z% market share”); and so on.

The arrow also tells us that the “rate of change” — the frequency at which a description changes — follows the same trajectory. Identity should change only very slowly, if ever. At the other end, the plan will need constant adjustment as the company responds to rapidly shifting circumstances, the economy, the competition.

Using the old Microsoft as an example:
– Identity: We’re the emperor of PC software
– Goals: A PC on every desk and home – running our software
– Strategy: Couple the Windows + Office licenses to help OEMs see the light; Embrace and Extend Office competitors.
– Plan: Changes every week.

Returning to Nadella’s prose, can we mine it for words to fill the top three layers? Definitely not.

Second broken rule: Can I disagree? Any text that relies on platitudes doesn’t say much at all; in a message-to-the-troops that’s supposed to give direction, irrefutable statements are deadly. Some randomly selected examples from an unfortunately overabundant field:

“[…] we will strike the right balance between using data to create intelligent, personal experiences, while maintaining security and privacy.”

or…

“Together we have the opportunity to create technology that impacts the planet.”

 or…

“Obsessing over our customers is everybody’s job.”

If I’m presented with statements I cannot realistically disagree with – We Will Behave With Utmost Integrity – I feel there’s something wrong. If it’s all pro and no con, it’s a con.

There are other violations, but I’ll stop in order to avoid committing the tl;dr infraction myself. One last reproach, though: Never make a general statement without immediately following it with the sacramental “For Example”.

For example:

“[…] we will modernize our engineering processes to be customer-obsessed, data-driven, speed-oriented and quality-focused.”

… would be more believable if followed by:

“Specifically, we’ll ask each software engineer to spend two days every month visiting customers on even months, and third-party developers on odd ones. They will also spend one day per quarter seconding Customer Service Representatives over our phone banks.”

Satya Nadella is an unusually intelligent man, a Mensa-caliber intellect, well-read; he quotes Nietzsche, Oscar Wilde, and Rainer Maria Rilke. Why, then, does he repeatedly break basic storytelling rules?

Two possible explanations come to mind.

First, because he’s intelligent and literate, he forgot to use an unforgiving editor. ‘Chief, you really want to email that?’ Or, if he used an editor, he was victimized by a sycophantic one. ‘Satya, you nailed it!’

Second, and more likely, Nadella speaks in code. He’s making cryptic statements that are meant to prepare the troops for painful changes. Seemingly bland, obligatory statements about the future will decrypt into wrenching decisions:

“Organizations will change. Mergers and acquisitions will occur. Job responsibilities will evolve. New partnerships will be formed. Tired traditions will be questioned. Our priorities will be adjusted. New skills will be built. New ideas will be heard. New hires will be made. Processes will be simplified. And if you want to thrive at Microsoft and make a world impact, you and your team must add numerous more changes to this list that you will be enthusiastic about driving.”

In plainer English: Shape up or ship out.

Tortured statements from CEOs, politicians, coworkers, spouses, or suppliers, in no hierarchical order, mean one thing: I have something to hide, but I want to be able to say I told you the facts.

With all this in mind, let’s see if we can restate Nadella’s message to the troops:

This is the beginning of our new FY 2015 – and of a new era at Microsoft.
I have good news and bad news.
The bad news is the old Devices and Services mantra won’t work.

For example: I’ve determined we’ll never make money in tablets or smartphones.

So, do we continue to pretend we’re “all in” or do we face reality and make the painful decision to pull out so we can use our resources – including our integrity – to fight winnable battles? With the support of the Microsoft Board, I’ve chosen the latter. We’ll do our utmost to minimize the pain that will naturally arise from this change. Specifically, we’ll offer generous transition arrangements in and out of the company to concerned Microsoftians and former Nokians.

The good news is we have immense resources to be a major player in the new world of Cloud services and Native Apps for mobile devices. We let the first innings of that game go by, but the sting energizes us. An example of such commitment is the rapid spread of Office applications – and related Cloud services – on any and all mobile devices. All Microsoft Enterprise and Consumer products/services will follow, including Xbox properties.

I realize this will disrupt the status quo and apologize for the pain to come. We have a choice: change or be changed.

Stay tuned.

Or words (about 200) to that effect.

In parting, Nadella would do well to direct his attention to another literate individual, John Kirk, whose latest essay, Microsoft Is The Very Antithesis Of Strategy, is a devastating analysis that compares the company’s game plan to the advice given by Sun Tzu, Liddell Hart, and Carl von Clausewitz, writers who are more appropriate to the war that Microsoft is in than the authors Microsoft’s CEO seems to favor.

The CEO’s July 10th email promises more developments, probably around the July 22nd Earnings release. Let’s hope he’ll offer sharper and shorter words to describe Microsoft’s entry into the Cloud First – Mobile First era.

JLG@mondaynote.com

Mobile Facts To Keep In Mind – Part 1

 

By the end of 2014, many news media will collect around 50% of their page views via mobile devices. Here are trends to remember before devising a mobile strategy. (First of a two-part series.)

In the news business, mobile investments are on the rise. That’s the pragmatic response to a major trend: Users shift from web to mobile. Already, all major media outlets are bracing for a momentous threshold: 50% of their viewership coming from mobile devices (smartphones and tablets). Unfortunately, the revenue stream is not likely to follow anytime soon: making users pay for mobile content has proven much more difficult than hoped for. As for advertising, the code has yet to be cracked for (a) finding formats that won’t trigger massive user rejection, and (b) monetizing in ways comparable to the web (i.e. within the context of a controlled deflation). Let’s dive into a few facts:

Apps vs. WebApps or Mobile sites. A couple of years ago, I was among those who defended web apps (i.e. encapsulated HTML5 code, not tied to a specific OS platform) against native apps (for iOS, Android, Windows Phone). The idea was to give publishers more freedom and to avoid the 30% app store levy. Also, every publisher had in mind the success enjoyed by FT.com when it managed to put all its eggs in its web app and so retain complete control over the relationship with its customers.

[Image: a vintage mobile phone. Credit: Vintage Mobile / Popular Mechanics]

All of the above remains true but, from the users’ perspective, facts speak loudly: According to Flurry Analytics, apps now account for 86% of the time spent by mobile users vs. 14% for mobile sites (including web apps). A year ago, the balance was 80% for apps and 20% for the mobile web.

Trend #1: Native apps lead the game
at the expense of web apps and mobile sites 

One remark, though: the result must take into account the weight of games and Facebook apps, which account for 50% of the time spent on mobile. News-related usage leans more toward the mobile web, as there is not (yet) demand for the complex rendering of a gaming app. But as far as news applications are concerned, we haven’t seen major breakthroughs in the mobile web or web apps over the last few months, and development seems to be stalling.

News vs. the rest of the app world. Of the daily total of 2hrs 50min spent by mobile users (source: eMarketer), 2% to 5% (roughly 3.5 to 8.5 minutes) is spent on news. Once you turn to growth, the small percentage starts to look better: The news segment is growing faster (+64% Y/Y) than messaging and social (+28%) or gaming and entertainment (+9% each); the fastest-growing segment is productivity apps (+119%), due to the transfer of professional uses from the desktop to mobile.

Trend #2: On mobile, news is growing faster
than games or social

…And it will grow stronger as publishers deploy their best efforts to adjust content and features to small screens and on-the-go usage, and as mobile competitors multiply.

iOS vs. Android: the monetization issue. Should publishers go for volume or focus on ARPU (average revenue per user)? If that’s the reasoning, the picture is pretty clear: an iOS customer brings in, on average, five times more money than an Android user, and the gap is not about to close. Android has about one billion users vs. 470m for iOS, but most Android users are in low-income countries, where phones can cost as little as $80 and prices are falling fast. By contrast, an iPhone costs around $600 (without a carrier contract), and the not-so-successful “cheap” iPhone 5C shows that the iPhone is likely to remain a premium product.

Trend #3: There is more money to be made on iOS
than on Android, and that’s not likely to change

Besides, we must take into account two sub-trends: iOS will gain in sophistication with the arrival of iOS 8 (see Jean-Louis’ recent column about iOS 8 being the real version 2.0 of iOS) and a new breed of applications based on the new Swift programming language. Put differently: Advanced functionality in Swift/iOS 8-based apps will raise the level of user expectations, and publishers will be forced to respond accordingly: since apps reside side by side on the same mobile screen, news apps will be required to display the same level of sophistication as, say, a gaming app — that’s also why I’m less bullish on web apps. Behind the iOS/Android gap lies another question: Should publishers offer the same app (content, features, revenue model) across all platforms – or must they tailor their product to each platform’s “moneygraphics”? That’s an open question.

I’ll stop here for today. Next week, I’ll explore trends and options for business models and marketing tactics, why it could be interesting to link a news app to the smartphone’s accelerometer, and why news media should tap game developers for certain skills.

–frederic.filloux@mondaynote.com

The Network Is the Computer: Google Tries Again

 

All you need is a dumb device attached to a smart network. It’s an old idea that refuses to die despite repeated failures. Now it’s Google’s turn.

In the late 1980s, Sun Microsystems used a simple, potent war cry to promote its servers: The Network Is The Computer. Entrust all of your business intelligence, computing power, and storage to Sun’s networked SPARC systems and you can replace your expensive workstation with a dumb, low cost machine. PCs are doomed.

Nothing of the sort happened, of course. Sun’s venture was disrupted by inexpensive servers assembled from the PC organ bank and running Open Source software.

PCs prospered, but that didn’t dampen the spirits of those who would rid us of them.

Fast-forward to the mid-1990s and the thought re-emerges in a new guise: The Browser Will Be The Operating System (a statement that’s widely misattributed to Marc Andreessen, who holds a more nuanced view on the matter). The browser will serve as a way to access networked services that will process your data. The actual OS on your device, what sort of apps it can run — or even if it can run any (other than a browser) — these questions will fade into insignificance.

Soon after, Oracle took a swing at the Network is the Computer piñata by defining the Network Computer Reference Profile (or NCRP), a specification that focused on network connectivity and deemphasized local storage and processing. It was understood, if not explicitly stated, that an NCRP device must be diskless. A number of manufacturers offered NCRP implementations, including Sun (which would ultimately be acquired by Oracle) with its JavaStation. But despite Larry Ellison’s strongly expressed belief that Network Computers would rid the industry of the evil Microsoft, the effort went nowhere.

Today, The Network Is The Computer lives on under the name Cloud Computing, the purest example of which is a Google Chromebook running on Chrome OS. (And thus, in a sense, Sun’s idea lives on: Google’s first investor was Sun co-founder Andy Bechtolsheim.)

So far, Chromebooks have shown only modest penetration (a topic for musings in a future Monday Note), but despite the slow adoption, Google has become one of the largest and most important Cloud Computing companies on the planet. Combine this with the Android operating system that powers more than a billion active devices: could Google bring us to the point where The Network Really Is The Computer?

It’s a complicated question, partly because the comparison with the previous generation of devices, traditional PCs, can (excuse me) cloud the view.

Unlike PCs, smartphones rely on an expensive wireless infrastructure. One can blame the oligopolistic nature of the wireless carrier industry (in English: too few companies to have a really competitive market), but that doesn’t change the simple fact that wireless bandwidth isn’t cheap. The dumber the device, the more it has to rely on the Cloud to process and store data, and the more bandwidth it will consume.

Let’s visit Marc Andreessen’s actual words regarding Network-As-Computer, from a 2012 Wired interview [emphasis mine]:

“[I]f you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model.”

If we interject, on Andreessen’s behalf, that wireless connectivity must be as inexpensive as it is ubiquitous, then we begin to see the problem. The “data hunger” of media intensive apps, from photo processing to games, shows no sign of slowing down. And when you consider the wireless bandwidth scarcity that comes from the rapid expansion of smartphone use, it seems that conditions are, yet again, conspiring against the “dumb device” model.

The situation is further confounded when we consider that Google’s business depends on delivering users to advertisers. Cloud computing will help drive down the cost of Android handsets and thus offer an even wider audience to advertisers…but these advertisers want a pleasant and memorable UI, they want the best canvas for their ads. When you dumb down the phone, you dumb down the ad playback experience.

In a recent blog post titled The next phase of smartphones, Benedict Evans neatly delineates the two leading “cloud views” by contrasting Apple and Google [emphasis mine]:

“Apple’s approach is about a dumb cloud enabling rich apps while Google’s is about devices as dumb glass that are endpoints of cloud services…”

But Google’s “dumb glass” can’t be too dumb. For its mobile advertising business, Google needs to “see” everything we do on our smartphones, just like it does on our PCs. Evans intimates as much:

“…it seems that Google is trying to make ‘app versus web’ an irrelevant discussion – all content will act like part of the web, searchable and linkable by Google.”

Native apps running on a “really smart” device are inimical to Google’s business model. To keep the advertisers happy, Google would have to “instrument” native apps, insert deep links that will feed its data collection activities.

This is where the Apple vs. Google contrast is particularly significant: iOS apps are not allowed to let advertisers know what we are doing – unless explicitly authorized. Apple’s business model doesn’t rely on peddling our profile to advertisers.

In the end, I wonder if Google really believes in the “dumb glass” approach to smartphones. Perhaps, at least for now, The Computer will remain The Computer.

JLG@mondaynote.com

 

Google might not be a monopoly, after all

 

Despite its dominance, Google doesn’t fit the definition of a monopoly. Still, the Search giant’s growing disconnect from society could lead to serious missteps and, over time, to a weakened position. 

In last week’s column, I opined about the Open Internet Project’s anti-trust lawsuit against Google. Reactions showed divided views of the search engine’s position. Granted, Google is an extremely aggressive company, obsessed with growth, scalability, optimization — and also with its own vulnerability.

But is it really a monopoly in the traditional and historical sense? Probably not. Here is why, in four points:

1. The consent to dependency. It is always dangerous to be too dependent on a supplier one doesn’t control. This is the case in the (illegal) drug business: price and supply fluctuate at the whim of unpredictable people. This is what happens to those who build highly Google-dependent businesses, such as e-commerce sites and content farms that provide large quantities of cheap fodder in order to milk ad revenue from Google search-friendly tactics.

In the end, everything is a matter of trust (“Jaws”, courtesy of Louis Goldman)

Many news media brands have sealed their own fate by structuring their output so that 30% to 40% of their traffic is at the mercy of Google algorithms. I’m fascinated by the breadth and depth of the consensual ecosystem that is now built around the Google traffic pipeline: consulting firms helping media rank better in Google Search and Google News; software that rephrases headlines to make it more likely they’ll hit the top ranks; A/B testing on-the-fly that shows what the search engine might like best, etc.

For the media industry, what should have remained a marginal audience extension has turned into a vital stream of page views and revenue. I personally think this is dangerous in two ways. One, we replace the notion of relevance, of reader interest, with a purely quantitative/algorithmic construct (listicles vs. depth, BuzzFeed vs. ProPublica, for instance). Such mechanistic practices further fuel the value deflation of original content. Two, the eagerness to please the algorithms distracts newsrooms, journalists, and editors from their job of finding, developing, and building intelligent news packages that will lift brand perception and elevate the reader’s mind (BuzzFeed and plenty of others are the quintessence of cheapening alienation).

2. Choice and Competition. In 1904, Standard Oil controlled 91% of American oil production and refining, and 85% of sales. This practically inescapable monopoly was able to dictate prices and supply structure. As for Google, it indeed controls 90% of the search market in some regions (Europe especially, where fragmented markets, poor access to capital, and other cultural factors prevented the emergence of tech giants). Google combines its services (search, mail, maps, Android) to produce one of the most potent data gathering systems ever created. Note the emphasis: Google (a) didn’t invent the high tech data X-ray business, nor (b) is it the largest entity to collect gargantuan amounts of data. Read the Quartz article The nine companies that know more about you than Google or Facebook and see how corporations such as Acxiom, CoreLogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf, and Recorded Future collect data on a gigantic scale, including court and public records information, or your gambling habits. Did they make you sign a consent form?

You want to escape Google? Use Bing, Yahoo, DuckDuckGo or Exalead for your web searches, or go here to find a list of 40 alternatives. You don’t want your site to be indexed by Google? Insert a robot exclusion directive in your pages and the hated crawler won’t see your content (a minimal sketch follows below). You’re sick of AdWords in your pages or in Gmail? Use the AdBlock plug-in; it’s even available for the Google Chrome browser. The same applies to storing your data, getting a digital map, or web mail services. You’re “creeped out” by Google’s ability to reconstruct every move around your block or from one city to another by injecting data from your Android phone into Maps? You’re right! Google Maps Location History is frightening; to kill it, you can turn off your device’s geolocation, or use a Windows Phone or an iPhone (just be aware that they do exactly the same thing, they simply don’t advertise it). Unlike public utilities, Google can be escaped. Its services are simply more convenient, perform well and… are better integrated, which brings us to the third point below.
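The robot exclusion promised above is a one-file affair. Here is a minimal sketch using the standard Robots Exclusion Protocol; there is nothing Google-specific about it, and the user-agent and paths should be adjusted to your own situation:

    # robots.txt, served from the root of your site:
    # bar Google's crawler from the entire site
    User-agent: Googlebot
    Disallow: /

    <!-- or, page by page, a meta tag inside <head> that lets
         the crawler fetch the page but keeps it out of the index: -->
    <meta name="googlebot" content="noindex">

The robots.txt route blocks crawling wholesale; the meta tag works page by page.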

3. Transparent strategy. To Google’s credit, for the most part, its strategy is pretty transparent. What some see as a monopoly in the making is a deliberate — and open — strategy of systematic (and systemic) integration. Here is the chart I made a few months ago:

[Chart: how Google’s services feed its data collection]

We could include several recent additions, such as trip habits from Uber (don’t like it? Try Lyft or, better, a good old Parisian taxi – they don’t even take credit cards), or temperature-setting patterns soon coming from Nest thermostats (if you choose to trust Tony Fadell’s promises)… Even Google X, the company’s moonshot factory (story in Fast Company), offers glimpses of Google’s future reach with the development of autonomous cars and projects to bring the internet to remote countries using balloons (see Project Loon) or other airborne platforms.

4. Innovation. Monopolies are known to kill innovation. That was the case with oil companies, with cartels of car makers that discouraged alternate transportation systems, and even with Microsoft, which made our lives miserable thanks to a pipeline of operating systems without real competition. By contrast, Google is obsessed with innovative projects, seen as an absolute necessity for its survival. Some are good, others are bad, or remain in beta for years.

However, Google is already sowing the seeds of its own erosion. The company is terribly disconnected from the real world. This shows everywhere, from the minutest details of its employees’ daily lives, pampered in an overabundance of comfort and amenities that keep them inside a cosy bubble, to its own vital statistics (published by the company itself). Google is mostly white (61%), male (70%), and recruits from major universities (in that order: Stanford, UC Berkeley, MIT, Carnegie Mellon, UCLA), with very little “blood” from fields other than the scientific or technical. For a company that says it wants to connect its business to a myriad of sectors, such cultural blinders are a serious issue. Combined with the certainty of its own excellence, the result is a distorted view of the world in which the distinction between right and wrong can easily blur. A business practice internally considered virtuous because it supports the perpetuation of the company’s evangelistic vision of a better world can be seen as predatory in the “real” world. Hence a growing rift between the tech giant and its partners and customers, and the nations who host them.

frederic.filloux@mondaynote.com

Google and the European media: Back to the Ice Age

 

Prominent members of the European press are joining a major EU-induced antitrust lawsuit against Google. The move is short on rationale and long on ideology. 

A couple of weeks ago, Axelle Lemaire, France’s deputy minister for digital affairs, was quoted contending that Google’s size and market power effectively prevented the emergence of a “French Google”. A rather surprising statement from an official whose background stands in sharp contrast to the customary high civil service profile: As an MP, Mrs Lemaire represents French citizens living overseas and holds dual French and Canadian citizenship; she got a Ph.D. in International Law at London’s King’s College as well as a Law degree at the Sorbonne. Ms. Lemaire then practiced Law in the UK and served as a parliamentary aide in the British House of Commons. Still, her distinguished and unusually “open” background didn’t help: She’s dead wrong about why there is no French Google.

The reasons for France’s “failure” to give birth to a Google-class search engine are simply summarized: education and money. Google is a pure product of what France misses the most: a strong and diversified engineering pipeline supported by a business-oriented education system, and access to abundant capital. Take the famous (though controversial) Shanghai higher education ranking in computer science: France ranks in the 76 to 100 group with the University of Bordeaux; 101 to 150 for the highly regarded Ecole Normale Supérieure; and the much celebrated Ecole Polytechnique sits deep in the 150-200 group – with performance slowly degrading over the last ten years and a minuscule faculty of… 7 CS professors and assistant professors. That’s the reality of computer science education in the most prestigious engineering school in France. As for access to capital, two numbers say it all: According to its own trade association, the French venture capital sector is 1/33rd the size of its US counterpart, while the GDP ratio is only 1 to 6. That’s for 2013; in 2012, the ratio was 1/46th, so things are improving.

The structural weakness of French tech clearly isn’t Google’s fault. Which reveals the ideological facts-be-damned nature of the blame, an attitude broadly shared by other European countries.

A few weeks ago, a surreal event took place at the Cité Internationale Universitaire de Paris (which wants to look like a Cambridge replica). There, the Open Internet Project unveiled the next European antitrust action against Google. On stage was a disparate crew: media executives from German and French companies; former antitrust litigator Gary Reback, known for his fight against Microsoft in the Nineties – and now said to be helping Microsoft in its fight against Google; Laurent Alexandre, a strange surgeon/entrepreneur and self-proclaimed visionary living in Brussels, where his company DNA Vision is headquartered, who almost got a standing ovation by explaining how Google intends to connect our brains to its gigantic neuronal network by around 2040; all of the above wrapped up with a speech from French Economy Minister Arnaud Montebourg, who never misses an opportunity to apply his government’s seal to anti-imperialist initiatives.

The lawsuit alleges market distortion practices, discrimination in several guises, anticompetitive conduct, preference for its own vertical services at the expense of fairness in its search results, illegal use of data, etc. (The summary of EU allegations is here). The complaint paves the way for painstaking litigation that will drag on for years.

Among the eleven corporations and trade groups funding the lawsuit, we find seven media entities, including the giant German Axel Springer Group and Lagardère Active, whose boss invoked a “moral obligation” to fight Google. There is also CCM Benchmark Group, a large diversified digital player whose boss, Benoît Sillard, had his own epiphany while speaking with Nikesh Arora in Mountain View a while ago. There and then, Mr. Sillard saw the search giant’s grand plan to dominate the digital world. (I paid a couple of visits to Google’s headquarters but was never granted such a religious experience – I will try again, I promise.)

Despite the media industry’s weight, the lawsuit fails to expose Google practices that directly affect the P&L of news providers. Indeed, some media companies have developed businesses that compete with Google verticals. That’s the case of Lagardère’s shopping site LeGuide.com but, again, the group’s CEO, Denis Olivennes, was long on whining and short on relevant facts. (The only fun element he mentioned was outside the scope of the OIP’s legal action: with only €50m in revenue, LeGuide.com paid the same amount of taxes as Google, whose French operation generates $1.6bn in revenue.)

Needless to say, that doesn’t mean that Google couldn’t be using its power in questionable ways at the expense of scores of e-retailers. But as far as the media sector is concerned, gains largely outweigh losses as most web sites enjoy a boost in their traffic thanks to Google Search and Google News. (The value of Google-generated clicks is extremely difficult to assess — a subject for a future Monday Note.)

One fact remains obvious: In this legal action, media groups are being played to defend interests… that are not theirs.

In this whole affair, the French news media industry is putting itself in an awkward position. In February 2013, Google and the French government hammered out a deal in which the tech giant committed €60m ($81m) over a three-year period to fund digital projects run by the French press. (In 2013, according to the fund’s report, 23 projects were started, totaling €16m in funding.) The agreement stipulates that, for the duration of the deal, the French press will refrain from suing Google on copyright grounds – such as over the use of snippets in search results. But those who signed the deal found themselves dragged into the OIP lawsuit through the GESTE, a legacy trade association – more talkative than effective – that dates back to the Minitel era and supports the OIP lawsuit on antitrust rather than copyright grounds. (Those who signed the Google Fund agreement issued a convoluted communiqué to distance themselves from the OIP initiative.)

In Mountain View, many are upset by French media that, on one hand, collect hefty subsidies and, on the other, file an anti-Google suit before the European Court of Justice. “Back home, the [Google] Fund always had its opponents”, a Google exec told me, “and now they have reasons to speak louder…” Will they be heard? It is unlikely that Google will pull the plug on the Fund, I’m told. But the people I talked to also said that any renewal, under any form, now looks unlikely. So does the extension of a similar innovation funding scheme to Germany — or elsewhere. “Google is at a loss when trying to develop peaceful relations with the French”, another Google insider told me… “We put our big EMEA [Europe, Middle East, and Africa] headquarters in Paris, we created a nicely funded Cultural Institute, we fueled the innovation fund for the press, and now we are bitten by the very people who take our subsidies…”

Regardless of its merits, the European press’ involvement in this antitrust case is ill-advised. It might throw the relationship with Google back to the Ice Age. As another Google exec said to me: “News media should not forget that we don’t need them to thrive…”

–frederic.filloux@mondaynote.com

 

iWatch Thoughts

 

Unlike the almost forgotten Apple TV set, the iWatch might turn out to be a real product. But as rumors about the device intensify, the scuttlebutt conveniently skirts key questions about the product’s role.

As reverberations of Apple’s Developer Conference begin to die down, the ever-dependable iWatch has offered itself as the focus of another salvo of rumors and speculation. Actually, there’s just one rumor — a Reuters “report” that Quanta Computer will begin manufacturing the iWatch in July — but it was enough to launch a quick-fire series of echoes that bounced around the blogosphere. Not to be outdone, the Wall Street Journal added its own tidbits:

“Apple is planning multiple versions of a smartwatch…[that] will include more than 10 sensors to track and monitor health and fitness data, these people said.”

(“These people” are, of course, the all-knowing “people familiar with the matter”.)

The iWatch hubbub could be nothing more than a sort of seasonal virus, but this time there’s a difference.

At the WWDC three weeks ago, Apple previewed HealthKit, a toolkit iOS developers can use to build health- and fitness-related applications. HealthKit is a component of the iOS 8 release that Apple plans to ship this fall in conjunction with the newest iDevices. As an example of what developers will be able to do with HealthKit, Apple previewed Health, an application that gives you “an easy-to-read dashboard of your health and fitness data.”
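
To make the model concrete, here is a minimal Swift sketch built on documented HealthKit calls (the two metrics queried are merely illustrative): an app never reads a sensor directly, it asks HealthKit’s permission-gated store for data.

```swift
import HealthKit

let store = HKHealthStore()

// Two illustrative metrics; HealthKit defines dozens of quantity types.
let heartRate = HKQuantityType.quantityType(forIdentifier: .heartRate)!
let steps = HKQuantityType.quantityType(forIdentifier: .stepCount)!

// Every app must ask the user for permission, per data type.
store.requestAuthorization(toShare: nil, read: [heartRate, steps]) { granted, _ in
    guard granted else { return }
    // Query the shared store for the five most recent heart-rate samples.
    let query = HKSampleQuery(sampleType: heartRate, predicate: nil,
                              limit: 5, sortDescriptors: nil) { _, samples, _ in
        let bpm = HKUnit.count().unitDivided(by: .minute())
        for case let sample as HKQuantitySample in samples ?? [] {
            print("Heart rate: \(sample.quantity.doubleValue(for: bpm)) bpm")
        }
    }
    store.execute(query)
}
```

The design choice matters: by funneling everything through a user-controlled store, Apple lets a watch, a phone, and third-party apps share measurements without sharing sensors.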

The rumor that Quanta will soon begin “mass production” of the iWatch — the perfect vehicle for health-and-fitness apps — just became a bit more tantalizing… but a number of questions remain unanswered.

Foremost is iWatch “independence”. How useful will it be when it’s running on its own, unconnected to a smartphone, tablet, or conventional PC? My own guess: Not very useful. Unless Apple plans to build a monstrosity of a device (not likely), the form factor of our putative iWatch will dictate a small battery, which means the processor will have to be power-conserving and thus unable to run iPhone-caliber apps. Power conservation is particularly important if Apple wants to avoid jibes of the ‘My iWatch ran out of battery at the end of the day’ type. Such occurrences, already annoying with a smartphone, could be bad publicity for a “health and fitness” watch.

So, let’s settle for a “mostly dependent” device that relies on a more robust sibling for storage, analysis, and broad overview.

That raises another question: Will the iWatch be part of Apple’s ecosystem only, or will it play nice with Windows PCs or even Android smartphones? If we take Apple’s continued tolerance of the Android version of Beats Music (at least so far) as an example, the notion of an Apple device communicating with a member of the Android tribe is less heretical than it once was. Again, my own guess: Initially, the iWatch will be restricted to the Apple ecosystem. We’ll see what happens if the device catches on and there’s demand for a “non-denominational” connection.

As for what role the iWatch will play in the ecosystem, those of us ancient enough might recall the example set by the Smart Personal Objects Technology (SPOT) that Microsoft launched a decade ago. No need to repeat that bit of doomed history by targeting too many platforms, by trying to make “Smart Objects” omniscient. Instead, Apple is likely, as it insisted at its early June WWDC, to tout its Continuity ethos: Let each device do what it does best, but don’t impede the flow of information and activities between devices. In plainer English: Hybrid devices are inferior.

So, besides telling time (perhaps in Yosemite’s new system font, a derivative of Helvetica Neue), what exactly will the iWatch do? The first part of the answer is easy: It will use its sensors to collect data of interest. We’ve already seen what the M7 motion processor and related apps can do in an iPhone 5S; now imagine data with much finer granularity, and sensors that measure additional dimensions, such as altitude.
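
On today’s iPhone, that kind of collection already runs through Core Motion. A brief Swift sketch, assuming the CMPedometer interface to the M7 coprocessor and the barometer-based CMAltimeter that Apple documents for iOS 8:

```swift
import CoreMotion

let pedometer = CMPedometer()
let altimeter = CMAltimeter()

if CMPedometer.isStepCountingAvailable() {
    // Step counts accumulated by the low-power motion coprocessor.
    pedometer.startUpdates(from: Date()) { data, _ in
        if let stepCount = data?.numberOfSteps { print("Steps: \(stepCount)") }
    }
}
if CMAltimeter.isRelativeAltitudeAvailable() {
    // Barometric pressure turned into relative altitude, the
    // "additional dimension" mentioned above.
    altimeter.startRelativeAltitudeUpdates(to: .main) { data, _ in
        if let meters = data?.relativeAltitude { print("Altitude change: \(meters) m") }
    }
}
```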

Things quickly get more complicated when we turn to the “other side of the skin”. Heart rhythm and blood pressure measurements look banal, but they shouldn’t be taken for granted, especially if one wants medically reliable data. Oximetry, the measurement of your blood’s oxygen saturation, looks simple — you just slide a cap onto your fingertip — but that cap is actually transmitting light waves through your finger. A smartwatch can’t help the nearly 18 million US citizens who suffer from Type II Diabetes (a.k.a. Adult Onset Diabetes) because there are no non-invasive methods for measuring blood sugar. And even as the technical complications of collecting health data are surmounted, device makers can find themselves skirting privacy issues and running afoul of HIPAA regulations.
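
To see why the fingertip cap isn’t as simple as it looks, consider the classic “ratio of ratios” computation behind pulse oximetry, sketched below in deliberately simplified Swift. The calibration constants are textbook first-order approximations, not anything Apple has disclosed.

```swift
// Toy pulse-oximetry estimate. Real oximeters calibrate against clinical
// data; the linear fit below (SpO2 ~ 110 - 25R) is only a rough textbook
// approximation, shown here to illustrate the technique.
func estimatedSpO2(redAC: Double, redDC: Double, irAC: Double, irDC: Double) -> Double {
    // Normalize each wavelength's pulsatile (AC) signal by its baseline (DC),
    // then compare red to infrared absorbance.
    let r = (redAC / redDC) / (irAC / irDC)
    return min(100, 110 - 25 * r)
}
```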

The iWatch will also act as a receiver of data from a smartphone, tablet, or PC. This poses far fewer problems, both technical and ethical, than health monitoring, but it also offers fewer opportunities. Message notifications and calendar alerts are nice, but they don’t create a new category, and they certainly haven’t “moved the needle” for existing smartwatches. In a related vein, one can imagine bringing the iWatch close to one’s face and speaking to Siri, asking it to set up a calendar event or send a text message… but, as with the trend towards larger smartphone screens, one must exercise care when fantasizing about iWatch use cases.

Then we have the question of developers and applications — where’s the support for iWatch app creators? When the iOS App Store opened in 2008, the iPhone became an app phone and solidified the now universal genre. What iWatch rumors fail to address is the presence or absence of an iWatch SDK, of iWatch apps, and of a dedicated App Store section.

Meanwhile, Google has already announced its Android Wear platform and has opened a “Developer Preview” program. Conventional wisdom has it that the Google I/O conference next week will focus on wearables. Samsung has been actively fine-tuning and updating the software for its line of Galaxy Gear smartwatches (the watches originally ran on an Android derivative but now use Tizen – until next week).

Finally, we have the question of whether an iWatch will sell in numbers that make the endeavor worthwhile. As the previously-mentioned WSJ story underlines, the smartwatch genre has had a difficult start:

“[...] it isn’t clear how much consumers want the devices. Those on the market so far haven’t sold well, because most wearable devices only offer a limited set of features already found on a smartphone.”

The most ambitious rumors project 50 million iWatches sold in the first 12 months. I think that’s an unrealistic estimate, but if a $300 iWatch can sell in those numbers, that’s $15B for the year. This seems like a huge number until you compare it to a conservative estimate for the iPhone: 50 million iPhones at $650 generates $32.5B per quarter.

Taking a more hopeful view, let’s recall the history of the iPad. It was a late entrant in the tablet field, but it coalesced and redefined the genre. Perhaps the iWatch will establish itself as The Smartwatch Done Right. But even if it succeeds in this category-defining role, it won’t have the power, flexibility, or huge number of apps of a true trouser pocket computer. As a result, the iWatch will be part of the supporting cast, not a first-order product like the iPhone. There’s nothing wrong with that — it might help make high-margin iPhones even more attractive — but it won’t sell in numbers, dollar volume, or profit comparable to the iPhone or iPad. The iWatch, if and when announced, might be The Next Big Thing – for the few weeks of a gargantuan media feast. But it won’t redefine an industry the way PCs, smartphones, and tablets did.

JLG@mondaynote.com