Macintel: The End Is Nigh

When Apple announced its 64-bit A7 processor, I dismissed the speculation that this could lead to a switch away from Intel chips for the Macintosh line in favor of a homegrown “desktop-class” chip. I might have been wrong.

“I don’t know exactly when, but sooner or later, Macs will run on Apple-designed ARM chips.” Thus spake Matt Richman in a 2011 blog post titled “Apple and ARM, Sitting in a Tree”. Richman explained why, after a complicated but ultimately successful switch from PowerPC chips to Intel processors in 2005, Apple would make a similar switch, this time to ARM-based descendants of the A4 chip designed by Apple and manufactured by Samsung.

Cost is the first reason invoked for the move to an An processor:

“Intel charges $378 for the i7 chip in the new high-end 15 inch MacBook Pro. They don’t say how much they charge for the i7 chip in the low-end 15 inch MacBook Pro, but it’s probably around $300. …When Apple puts ARM-based SoC’s in Macs, their costs will go down dramatically.”

We all know why Intel has been able to command such high prices. Given two microprocessors with the same manufacturing cost, power dissipation, and computing power, but where one runs Windows and the other doesn’t, which chip will achieve the higher market price in the PC market? Thus, Intel runs the table: it tells clone makers which new x86 chips they’ll receive, when they’ll receive them, and, most important, how much they’ll cost. Intel’s margins depend on it.

ARM-based processors, on the other hand, are inherently simpler and therefore cost less to make. Prices are driven even lower because of the fierce competition in the world of mobile devices, where the Wintel monopoly doesn’t apply.

[Image: Apple’s A7 chip]

Cost is the foremost consideration, but power dissipation runs a close second. The aging x86 architecture is beset by layers of architectural silt accreted from a succession of additions to the instruction set. Emerging media formats demand new extensions, while obsolete constructs must be maintained for the sake of Microsoft’s backward compatibility religion. (I’ll hasten to say this has been admirably successful for more than three decades. The x86 nickname used to designate Wintel chips originates from the 8086 processor introduced in 1978 – itself a backward-compatible extension of the 8080…)
Because of this excess baggage, an x86 chip needs more transistors than its ARM-based equivalent, and thus it consumes more power and must dissipate more heat.

Last but not least, Richman quotes Steve Jobs:

“I’ve always wanted to own and control the primary technology in everything we do.”

Apple’s leader has often been criticized for being too independent and controlling, for ignoring hard-earned industry wisdom. Recall how Apple’s decision to design its own processors was met with howls of protest, accusations of arrogance, and the usual predictions of doom.

Since then, interest in another Grand Processor Switch has been alive and well. Googling “Mac running on ARM” gets you close to 10M results. (When you Bing the same query, you get 220M hits — 22x Google’s results. SEO experts are welcome to comment.)

Back to the future…

In September 2013, almost a year ago already, Apple introduced the 64-bit A7 processor that powers new iPhones and iPads. The usual suspects pooh-poohed Apple’s new homegrown CPU, and I indulged in a little fun skewering the microprocessor truthers: 64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months. Towards the end of the article, unfortunately, I dismissed the speculation that Apple An processors would someday power the Mac. I cited iMacs and Mac Pros — the high end of the product line — as examples of what descendants of the A7 couldn’t power.

A friend set me straight.

In the first place, Apple’s drive to own “all layers of the stack” continues unabated years after Steve’s passing. As a recent example, Apple created its own Swift programming language that complements its Xcode IDE and Clang/LLVM compiler infrastructure. (For kremlinology’s sake I’ll point out that there is an official Apple Swift blog, a first in Apple 2.0 history if you exclude the Hot News section of the apple.com site. Imagine what would happen if there were an App Store blog… But I digress.)

Secondly, the Mac line is suspended, literally, by the late delivery of Intel’s Broadwell x86 processors. (The delay stems from an ambitious move to a bleeding edge fabrication technology that shrinks the basic building block of a chip to 14 nanometers, down from 22 nanometers in today’s Haswell chips.) Of course, Apple and its An semiconductor vendor could encounter similar problems – but the company would have more visibility, more control of its own destiny.

Furthermore, it looks like I misspoke when I said an An chip couldn’t power a high-end Mac. True, the A7 is optimized for mobile devices: battery conservation, a small memory footprint, graphics sized for screens smaller than an iMac’s or a Retina MacBook Pro’s. But having shown its muscle in designing a processor for the tight constraints of mobile devices, why would we think that the team that created the most advanced smartphone/tablet processor couldn’t now design a 3GHz A10 machine optimized for “desktop-class” (a term used by Apple’s Phil Schiller when introducing the A7) applications?

If we follow this line of reasoning, the advantages of ARM-based processors vs. x86 devices become even more compelling: lower cost, better power dissipation, natural integration with the rest of the machine. For years, Intel has argued that its superior semiconductor design and manufacturing technology would eventually overcome the complexity downsides of the x86 architecture. But that “eventually” is getting a bit stale. Other than a few showcase design wins that have never amounted to much in the real world, x86 devices continue to lose to ARM-derived SoC (System On a Chip) designs.

The Mac business is “only” $20B a year, while iPhones and iPads generate more than 5 times that. Still, $20B isn’t chump change (HP’s Personal Systems Group generates about $30B in revenue), and unit sales are up 18% in last June’s numbers vs. a year ago. Actually, Mac revenue ($5.5B) approaches the iPad’s flagging sales ($5.9B). Today, an 11” MacBook Air costs $899 while a 128GB iPad Air goes for $799. What would happen to the cost, battery life, and size of an A10-powered MacBook Air? And so on for the rest of the Mac line.

By moving to ARM, Apple could continue to increase its PC market share and scoop much of the profits – it currently rakes in about half of the money made by PC makers. And it could do this while catering to its customers in the Affordable Luxury segment who like owning both an iPad and a Mac.

While this is entirely speculative, I wonder what Intel’s leadership thinks when contemplating a future where their most profitable PC maker goes native.

JLG@mondaynote.com

———-

Postscript: The masthead on Matt Richman’s blog tells us that he’s now an intern at Intel. After reading several of his posts questioning the company’s future, I can’t help but salute Intel management’s open mind and interest in tightly reasoned external viewpoints.

And if it surprises you that Richman is a “mere” intern, be aware that he was all of 16 years old when he wrote the Apple and ARM post. Since then, his blog has treated us to an admirable series of articles on Intel, Samsung, Blackberry, Apple, Washington nonsense – and a nice Thank You to his parents.

 

News on mobile: better to be a Danish publisher than a Japanese one

 

This is the second part of our Mobile Facts To Keep In Mind series (see last week’s Monday Note – or here on Quartz). Today, a few more basic trends and a closer look at healthy markets for digital news.

Last week, we spoke about the preeminence of mobile applications. Not all readers agree, of course, but I found more data to support the finding; among many sources, the remarkable Reuters Institute Digital News Report (PDF here) is worth reading:

47% of smartphone users say they use mainly apps for news

According to the report, this figure has risen by 6 percentage points in just one year. By contrast, 38% of news consumption happens in a browser, a channel that is losing ground: down 4 percentage points in just a year.

The trend is likely to accelerate when taking demography into account: On smartphones, the most active groups are the 18-24s and the 35-44s; on tablets, the most active group is the 45-54 segment.

Platform usage varies with local market share, but when it comes to paying for news, Apple leads the game:

iOS users are 1.5x more likely to pay for news in the US
and 2x more likely in the UK than Android or other users

Here is the bad part, though. Again based on the Reuters report, the use of smartphones does narrow the range of news sources. More than ever, the battle for the first screen is crucial.

Across the ten countries surveyed,
37% of mobile users rely on a single news source
vs. 30% for PC users

In the UK, the trend is even stronger, with 55% of mobile users relying on a single news source. This goes along with good news for those who still defend original news production: mobile news consumption is quite focused on legacy media. The BBC app crushes the competition, with 67% of respondents saying they used it the previous week, vs. 25% for Sky; MSN and Yahoo trail at 2% and 7% respectively.

If you want to survey a healthy digital news market, go to Denmark


A Viking logo (from the TV Series) as viewed by the Brand New blog;
note the ancient reference to technology…

Not only does Denmark rank among the best countries to live and develop a business in, but when it comes to digital news, it leads the pack in several ways:

— Despite the digital tsunami, Denmark retains many strong media brands. As a result, legacy media are the prime way of accessing digital news. And since Danish media did well embracing new platforms, they enjoyed similar success on social networks, funneling readers to their properties.
The opposite holds for France and Germany, where the transition is much slower; in those countries, digital users rely much more on search to reach news brands. Two side effects ensue: News readers are more accidental and therefore generate a much lower ARPU; and the greater reliance on Google is problematic (hence the calls to arms in France and Germany against the search engine giant.)

— Because of the strength of its traditional media brands, the Danish news market has left very little oxygen to pure players: They weigh only 10% of weekly digital news, vs. 39% in the US and 46% in Japan, where legacy media have been severely hit.

— Danes are the heaviest users of both smartphones and tablets to access news.

— They use mobile apps more than anywhere else: 19%, vs. 15% for the US and 12% for Germany.

— They are mostly Apple users: 58% say they used an iOS device to access news in the last week (vs. 28% in Germany), hence a better ARPU for mobile publishers.

— Danish news consumers overlap their devices more than in any other country: 79% use a PC, 61% a smartphone, and 39% a tablet; only 24% use a PC alone for news. In Japan, by contrast, 58% admit to using only a PC for their news diet; smartphone and tablet use for news there runs at one half and one third of Denmark’s, respectively.

— In Danish public transportation, smartphones have overtaken print as the main news vector, with 69% vs. 21% of usage.

We all know where to seek inspiration for our digital news strategies.

frederic.filloux@mondaynote.com

Microsoft’s New CEO Needs An Editor

 

Satya Nadella’s latest message to the troops – and to the world – is disquieting. It lacks focus and specifics; if his words are not soon sharpened, they will worry employees, developers, customers, and even shareholders.

As I puzzled over the public email Microsoft’s new CEO sent to his troops, Nicolas Boileau’s immortal dictum came to mind:

Whatever is well conceived is clearly said,
And the words to say it flow with ease.

Clarity and ease are sorely missing from Satya Nadella’s 3,100 plodding words, which were supposed to paint a clear, motivating future for 127,000 Microsoftians anxious to know where the new boss is leading them.

[Photo: Satya Nadella at LeWeb Paris 2013]

Nadella is a repeat befuddler. His first email to employees, sent just after he assumed the CEO mantle earlier this year, was filled with bombastic platitudes:

“We are the only ones who can harness the power of software and deliver it through devices and services that truly empower every individual and every organization. We are the only company with history and continued focus in building platforms and ecosystems that create broad opportunity.”

(More in the February 9th, 2014 Monday Note)

In his latest message, Nadella treats us to more toothless generalities:

“We have clarity in purpose to empower every individual and organization to do more and achieve more. We have the right capabilities to reinvent productivity and platforms for the mobile-first and cloud-first world. Now, we must build the right culture to take advantage of our huge opportunity. And culture change starts with one individual at a time.”

Rather than yielding to the temptation of quoting more gems, let’s turn to a few simple rules of exposition.

First, the hierarchy of ideas:

[Diagram: the four layers of an enterprise — Identity, Goals, Strategies, Plan]

This admittedly simplistic diagram breaks down an enterprise into four layers and can help diagnose thinking malfunctions.

The top layer deals with the Identity or Culture — I use the two terms interchangeably as one determines the other. One level down, we have Goals, where the group is going. Then come the Strategies or the paths to those goals. Finally, we have the Plan, the deployment of troops, time, and money.

The arrow on the left is a diagnostic tool. It reminds us that as we traverse the diagram from Identity to Plan, the number of words that we need to describe each layer increases. It should only take a few words to limn a company’s identity (Schlumberger, oil services; Disney, family entertainment); describing the company’s goals will be just a tad more verbose (“in 5 years’ time we’ll achieve $X EPS, Y% revenue growth and Z% market share”), and so on.

The arrow also tells us that the “rate of change” — the frequency at which a description changes — follows the same trajectory. Identity should change only very slowly, if ever. At the other end, the plan will need constant adjustment as the company responds to rapidly shifting circumstances, the economy, the competition.

Using the old Microsoft as an example:
— Identity: We’re the emperor of PC software
— Goals: A PC on every desk and home – running our software
— Strategy: Couple the Windows + Office licenses to help OEMs see the light; Embrace and Extend Office competitors.
— Plan: Changes every week.

Returning to Nadella’s prose, can we mine it for words to fill the top three layers? Definitely not.

Second broken rule: Can I disagree? Any text that relies on platitudes doesn’t say much at all; in a message-to-the-troops that’s supposed to give direction, irrefutable statements are deadly. Some randomly selected examples from an unfortunately overabundant field:

“[…] we will strike the right balance between using data to create intelligent, personal experiences, while maintaining security and privacy.”

or…

“Together we have the opportunity to create technology that impacts the planet.”

 or…

“Obsessing over our customers is everybody’s job.”

If I’m presented with statements I cannot realistically disagree with – We Will Behave With Utmost Integrity – I feel there’s something wrong. If it’s all pro and no con, it’s a con.

There are other violations, but I’ll stop in order to avoid the tl;dr infraction I reproach Nadella for. One last rule: Never make a general statement without immediately following it with the sacramental “For Example”.

For example:

“[…] we will modernize our engineering processes to be customer-obsessed, data-driven, speed-oriented and quality-focused.”

… would be more believable if followed by:

“Specifically, we’ll ask each software engineer to spend two days every month visiting customers on even months, and third party developers on odd ones. They will also spend one day per quarter seconding Customer Service Representatives over our phone banks.”

Satya Nadella is an unusually intelligent man, a Mensa-caliber intellect, and well-read: he quotes Nietzsche, Oscar Wilde, and Rainer Maria Rilke. Why, then, does he repeatedly break basic storytelling rules?

Two possible explanations come to mind.

First, because he’s intelligent and literate, he forgot to use an unforgiving editor. ‘Chief, you really want to email that?’ Or, if he used an editor, he was victimized by a sycophantic one. ‘Satya, you nailed it!’

Second, and more likely, Nadella speaks in code. He’s making cryptic statements that are meant to prepare the troops for painful changes. Seemingly bland, obligatory statements about the future will decrypt into wrenching decisions:

“Organizations will change. Mergers and acquisitions will occur. Job responsibilities will evolve. New partnerships will be formed. Tired traditions will be questioned. Our priorities will be adjusted. New skills will be built. New ideas will be heard. New hires will be made. Processes will be simplified. And if you want to thrive at Microsoft and make a world impact, you and your team must add numerous more changes to this list that you will be enthusiastic about driving.”

In plainer English: Shape up or ship out.

Tortured statements from CEOs, politicians, coworkers, spouses, or suppliers, in no hierarchical order, mean one thing: I have something to hide, but I want to be able to say I told you the facts.

With all this in mind, let’s see if we can restate Nadella’s message to the troops:

This is the beginning of our new FY 2015 – and of a new era at Microsoft.
I have good news and bad news.
The bad news is the old Devices and Services mantra won’t work.

For example: I’ve determined we’ll never make money in tablets or smartphones.

So, do we continue to pretend we’re “all in” or do we face reality and make the painful decision to pull out so we can use our resources – including our integrity – to fight winnable battles? With the support of the Microsoft Board, I’ve chosen the latter. We’ll do our utmost to minimize the pain that will naturally arise from this change. Specifically, we’ll offer generous transition arrangements in and out of the company to concerned Microsoftians and former Nokians.

The good news is we have immense resources to be a major player in the new world of Cloud services and Native Apps for mobile devices. We let the first innings of that game go by, but the sting energizes us. An example of such commitment is the rapid spread of Office applications – and related Cloud services – on any and all mobile devices. All Microsoft Enterprise and Consumer products/services will follow, including Xbox properties.

I realize this will disrupt the status quo and apologize for the pain to come. We have a choice: change or be changed.

Stay tuned.

Or words (about 200) to that effect.

In parting, Nadella would do well to direct his attention to another literate individual, John Kirk, whose latest essay, Microsoft Is The Very Antithesis Of Strategy, is a devastating analysis that compares the company’s game plan to the advice given by Sun Tzu, Liddell Hart, and Carl von Clausewitz, writers who are more appropriate to the war that Microsoft is in than the authors Microsoft’s CEO seems to favor.

The CEO’s July 10th email promises more developments, probably around the July 22nd Earnings release. Let’s hope he’ll offer sharper and shorter words to describe Microsoft’s entry into the Cloud First – Mobile First era.

JLG@mondaynote.com

Mobile Facts To Keep In Mind – Part 1

 

By the end of 2014, many news media will collect around 50% of their page views via mobile devices. Here are trends to remember before devising a mobile strategy. (First of a two-part series.)

In the news business, mobile investments are on the rise. That’s the pragmatic response to a major trend: Users shift from web to mobile. Already, all major media outlets are bracing for a momentous threshold: 50% of their viewership coming from mobile devices (smartphones and tablets). Unfortunately, the revenue stream is not likely to follow anytime soon: making users pay for mobile content has proven much more difficult than hoped for. As for advertising, the code has yet to be cracked for (a) finding formats that won’t trigger massive user rejection, and (b) monetizing in ways comparable to the web (i.e. within the context of a controlled deflation). Let’s dive into a few facts:

Apps vs. WebApps or Mobile sites. A couple of years ago, I was among those who defended web apps (i.e. encapsulated HTML5 coding, not tied to a specific OS platform) vs. native apps (for iOS, Android, Windows Phone). The idea was to give publishers more freedom and to avoid the 30% app store levy. Also, every publisher had in mind the success enjoyed by FT.com when it managed to put all its eggs in its web app and so retain complete control over the relationship with its customers.

Credit: Vintage Mobile / Popular Mechanics

All of the above remains true but, from the users’ perspective, facts speak loudly: According to Flurry Analytics, apps now account for 86% of the time spent by mobile users vs. 14% for mobile sites (including web apps). A year ago, the balance was 80% for apps and 20% for mobile web.

Trend #1: Native apps lead the game
at the expense of web apps and mobile sites 

One remark, though: the result must take into account the weight of games and Facebook apps, which account for 50% of the time spent on mobile. News-related usage leans more toward the mobile web, as there is not (yet) a demand for the complex rendering of a gaming app. But as far as news applications are concerned, we haven’t seen major breakthroughs in mobile web or web apps over the last months, and development seems to be stalling.

News vs. the rest of the app world. Of a daily total of 2hrs 50mn spent by mobile users (source: eMarketer), 2% to 5% is spent on news. Once you turn to growth, the small percentage starts to look better: The news segment is growing faster (+64% Y/Y) than messaging and social (+28%) or gaming and entertainment (+9% each); the fastest-growing segment is productivity apps (+119%), due to the transfer of professional uses from the desktop to mobile.

Trend #2: On mobile, news is growing faster
than games or social

…And it will grow stronger as publishers deploy their best efforts to adjust content and features to small screens and on-the-go usage, and as mobile competitors multiply.

iOS vs. Android: the monetization issue. Should publishers go for volume or focus on ARPU (average revenue per user)? If that’s the reasoning, the picture is pretty clear: an iOS customer brings in, on average, five times more money than an Android user. And the gap is not about to close. Android has about one billion users vs. 470m for iOS, but most Android users are in low-income countries, where phones can cost as little as $80 and prices are falling fast. By contrast, an iPhone costs around $600 (without a carrier contract), and the not-so-successful “cheap” iPhone 5C shows that the iPhone is likely to remain a premium product.
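A back-of-the-envelope illustration of the volume-vs-ARPU tradeoff (the $1 base figure is invented for the sake of the example; only the 5x ratio and the user counts come from the paragraph above):

    1,000,000,000 Android users × $1 per user = $1.00bn
      470,000,000 iOS users     × $5 per user = $2.35bn

Even with less than half the installed base, the iOS side would yield more than twice the revenue.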

Trend #3: There is more money to be made on iOS
than on Android, and that’s not likely to change

Besides, we must take into account two sub-trends: iOS will gain in sophistication with the arrival of iOS 8 (see Jean-Louis’ recent column about iOS 8 being the real version 2.0 of iOS) and a new breed of applications based on the new Swift programming language. Put differently: Advanced functionalities in Swift/iOS 8-based apps will raise the level of user expectations, and publishers will be forced to respond accordingly: as apps reside side by side on the same mobile screen, news apps will be required to display the same level of sophistication as, say, a gaming app — that’s also why I’m less bullish on web apps. Behind the iOS/Android gap lies another question: Should publishers have the same app (content, features, revenue model) across all platforms – or must they tailor their product to platform “moneygraphics”? That’s an open question.

I’ll stop here for today. Next week, I’ll explore trends and options for business models and marketing tactics, why it could be interesting to link a news app to the smartphone accelerometer, and why news media should tap game developers for certain skills.

–frederic.filloux@mondaynote.com

The Network Is the Computer: Google Tries Again

 

All you need is a dumb device attached to a smart network. It’s an old idea that refuses to die despite repeated failures. Now it’s Google’s turn.

In the late 1980s, Sun Microsystems used a simple, potent war cry to promote its servers: The Network Is The Computer. Entrust all of your business intelligence, computing power, and storage to Sun’s networked SPARC systems and you can replace your expensive workstation with a dumb, low cost machine. PCs are doomed.

Nothing of the sort happened, of course. Sun’s venture was disrupted by inexpensive servers assembled from the PC organ bank and running Open Source software.

PCs prospered, but that didn’t dampen the spirits of those who would rid us of them.

Fast-forward to the mid-1990s and the thought re-emerges in a new guise: The Browser Will Be The Operating System (a statement that’s widely misattributed to Marc Andreessen, who holds a more nuanced view on the matter). The browser will serve as a way to access networked services that will process your data. The actual OS on your device, what sort of apps it can run — or even if it can run any (other than a browser) — these questions will fade into insignificance.

Soon after, Oracle took a swing at the Network is the Computer piñata by defining the Network Computer Reference Profile (or NCRP), a specification that focused on network connectivity and deemphasized local storage and processing. It was understood, if not explicitly stated, that an NCRP device must be diskless. A number of manufacturers offered NCRP implementations, including Sun (which would ultimately be acquired by Oracle) with its JavaStation. But despite Larry Ellison’s strongly expressed belief that Network Computers would rid the industry of the evil Microsoft, the effort went nowhere.

Today, The Network Is The Computer lives on under the name Cloud Computing, the purest example of which is a Google Chromebook running on Chrome OS. (And thus, in a sense, Sun’s idea lives on: Google’s first investor was Sun co-founder Andy Bechtolsheim.)

So far, Chromebooks have shown only modest penetration (a topic for musings in a future Monday Note), but despite the slow adoption, Google has become one of the largest and most important Cloud Computing companies on the planet. Combine this with the Android operating system that powers more than a billion active devices, and you have to wonder: could Google bring us to the point where The Network Really Is The Computer?

It’s a complicated question, partly because the comparison with the previous generation of devices, traditional PCs, can (excuse me) cloud the view.

Unlike PCs, smartphones rely on an expensive wireless infrastructure. One can blame the oligopolistic nature of the wireless carrier industry (in English: too few companies to have a really competitive market), but that doesn’t change the simple fact that wireless bandwidth isn’t cheap. The dumber the device, the more it has to rely on the Cloud to process and store data, and the more bandwidth it will consume.

Let’s visit Marc Andreessen’s actual words regarding Network-As-Computer, from a 2012 Wired interview [emphasis mine]:

“[I]f you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model.”

If we interject, on Andreessen’s behalf, that wireless connectivity must be as inexpensive as it is ubiquitous, then we begin to see the problem. The “data hunger” of media intensive apps, from photo processing to games, shows no sign of slowing down. And when you consider the wireless bandwidth scarcity that comes from the rapid expansion of smartphone use, it seems that conditions are, yet again, conspiring against the “dumb device” model.

The situation is further confounded when we consider that Google’s business depends on delivering users to advertisers. Cloud computing will help drive down the cost of Android handsets and thus offer an even wider audience to advertisers…but these advertisers want a pleasant and memorable UI, they want the best canvas for their ads. When you dumb down the phone, you dumb down the ad playback experience.

In a recent blog post titled The next phase of smartphones, Benedict Evans neatly delineates the two leading “cloud views” by contrasting Apple and Google [emphasis mine]:

“Apple’s approach is about a dumb cloud enabling rich apps while Google’s is about devices as dumb glass that are endpoints of cloud services…”

But Google’s “dumb glass” can’t be too dumb. For its mobile advertising business, Google needs to “see” everything we do on our smartphones, just like it does on our PCs. Evans intimates as much:

“…it seems that Google is trying to make ‘app versus web’ an irrelevant discussion – all content will act like part of the web, searchable and linkable by Google.”

Native apps running on a “really smart” device are inimical to Google’s business model. To keep the advertisers happy, Google would have to “instrument” native apps, insert deep links that will feed its data collection activities.

This is where the Apple vs. Google contrast is particularly significant: iOS apps are not allowed to let advertisers know what we are doing – unless explicitly authorized. Apple’s business model doesn’t rely on peddling our profile to advertisers.

In the end, I wonder if Google really believes in the “dumb glass” approach to smartphones. Perhaps, at least for now, The Computer will remain The Computer.

JLG@mondaynote.com

 

Google might not be a monopoly, after all

 

Despite its dominance, Google doesn’t fit the definition of a monopoly. Still, the Search giant’s growing disconnect from society could lead to serious missteps and, over time, to a weakened position. 

In last week’s column, I opined about the Open Internet Project’s anti-trust lawsuit against Google. Reactions showed divided views of the search engine’s position. Granted, Google is an extremely aggressive company, obsessed with growth, scalability, optimization — and also with its own vulnerability.

But is it really a monopoly in the traditional and historical sense? Probably not. Here is why, in four points:

1. The consent to dependency. It is always dangerous to be too dependent on a supplier one doesn’t control. This is the case in the (illegal) drug business: price and supply will fluctuate at the whim of unpredictable people. This is what happens to those who build highly Google-dependent businesses such as e-commerce sites and content farms that provide large quantities of cheap fodder in order to milk ad revenue with Google search-friendly tactics.

In the end, everything is a matter of trust (“Jaws”, courtesy of Louis Goldman)

Many news media brands have sealed their own fate by structuring their output so that 30% to 40% of their traffic is at the mercy of Google algorithms. I’m fascinated by the breadth and depth of the consensual ecosystem that is now built around the Google traffic pipeline: consulting firms helping media rank better in Google Search and Google News; software that rephrases headlines to make it more likely they’ll hit the top ranks; A/B testing on-the-fly that shows what the search engine might like best, etc.

For the media industry, what should have remained a marginal audience extension has turned into a vital stream of page views and revenue. I personally think this is dangerous in two ways. One, we replace the notion of relevance (reader interest) with a purely quantitative/algorithmic construct (listicles vs. depth, BuzzFeed vs. ProPublica, for instance). Such mechanistic practices further fuel the value deflation of original content. Two, the eagerness to please the algorithms distracts newsrooms, journalists, and editors from their job of finding, developing, and building intelligent news packages that will lift brand perception and elevate the reader’s mind (BuzzFeed and plenty of others are the quintessence of cheapening alienation).

2. Choice and Competition. In 1904, Standard Oil Inc. controlled 91% of American oil production and refining, and 85% of sales. This practically inescapable monopoly was able to dictate prices and supply structure. As for Google, it indeed controls 90% of the search market in some regions (Europe especially, where fragmented markets, poor access to capital, and other cultural factors prevented the emergence of tech giants.) Google combines its services (search, mail, maps, Android) to produce one of the most potent data gathering systems ever created. Note the emphasis: Google (a) didn’t invent the high tech data X-ray business, nor (b) is it the largest entity to collect gargantuan amounts of data. Read the Quartz article The nine companies that know more about you than Google or Facebook and see how corporations such as Acxiom, Corelogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf, and Recorded Future collect data on a gigantic scale, including court and public records information, or your gambling habits. Did they make you sign a consent form?

You want to escape Google? Use Bing, Yahoo, DuckDuckGo or Exalead for your web search, or go here to find a list of 40 alternatives. You don’t want your site to be indexed by Google? Insert a robot exclusion line in your html pages, and the hated crawler won’t see your content. You’re sick of AdWords in your pages or in Gmail? Use the AdBlock plug-in; it’s even available for the Google Chrome browser. The same applies to storing your data, getting a digital map, or using web mail services. You’re “creeped out” by Google’s ability to reconstruct your every move around the block or from one city to another by injecting data from your Android phone into Maps? You’re right! Google Maps Location History is frightening; to kill it, you can turn off your device’s geolocation, or use a Windows Phone or an iPhone (just be aware that they do exactly the same thing; they simply don’t advertise it). Unlike public utilities, Google can be escaped. Its services are simply more convenient, perform well and… are better integrated, which gets us to our third point. But first, a quick look at that robot exclusion line:
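Both standard forms are public and trivial. A minimal sketch, for the curious (the meta tag goes in a page’s HTML head; the robots.txt file sits at the site root):

    <!-- per page: ask crawlers not to index this document -->
    <meta name="robots" content="noindex">

    # robots.txt: keep Google's crawler away from the entire site
    User-agent: Googlebot
    Disallow: /

Googlebot honors both mechanisms; exclusion is a one-line decision.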

3. Transparent strategy. To Google’s credit, for the most part, its strategy is pretty transparent. What some see as a monopoly in the making is a deliberate — and open — strategy of systematic (and systemic) integration. Here is the chart I made a few months ago:

[Chart: the integration of Google’s services]

We could include several recent additions, such as trip habits from Uber (don’t like it? Try Lyft, or better, a good old Parisian taxi – they don’t even take credit cards); or temperature setting patterns soon coming from Nest thermostats (if you choose to trust Tony Fadell’s promises)… Even Google X, the company’s moonshot factory (story in Fast Company), offers glimpses of Google’s future reach with the development of autonomous cars and projects to bring the internet to remote countries using balloons (see Project Loon) or other airborne platforms.

4. Innovation. Monopolies are known to kill innovation. That was the case with oil companies, with cartels of car makers that discouraged alternate transportation systems, and even with Microsoft, which made our lives miserable thanks to a pipeline of operating systems that faced no real competition. By contrast, Google is obsessed with innovative projects, seen as an absolute necessity for its survival. Some are good, others are bad or remain in beta for years.

However, Google is already sowing the seeds of its own erosion. The company is terribly disconnected from the real world. This shows everywhere, from the minutest details of its employees’ daily lives, pampered in an overabundance of comfort and amenities that keeps them inside a cosy bubble, to its own vital statistics (published by the company itself). Google is mostly white (61%), male (70%), and recruits from major universities (in that order: Stanford, UC Berkeley, MIT, Carnegie Mellon, UCLA), with very little “blood” from fields other than scientific or technical ones. For a company that says it wants to connect its business to a myriad of sectors, such cultural blinders are a serious issue. Combined with the certainty of its own excellence, the result is a distorted view of the world in which the distinction between right and wrong can easily blur. A business practice internally considered virtuous because it supports the perpetuation of the company’s evangelistic vision of a better world can be seen as predatory in the “real” world. Hence a growing rift between the tech giant and its partners and customers, and the nations who host them.

frederic.filloux@mondaynote.com

Google and the European media: Back to the Ice Age

 

Prominent members of the European press are joining a major EU-induced antitrust lawsuit against Google. The move is short on rationale and long on ideology. 

A couple of weeks ago, Axelle Lemaire, France’s deputy minister for digital affairs, was quoted contending that Google’s size and market power effectively prevented the emergence of a “French Google”. A rather surprising statement from a public official whose background stands in sharp contrast to the customary high civil service profile. As an MP, Mrs Lemaire represents French citizens living overseas and holds dual French and Canadian citizenship; she earned a Ph.D. in International Law at London’s King’s College as well as a Law degree at the Sorbonne. Ms. Lemaire then practiced Law in the UK and served as a parliamentary aide in the British House of Commons. Still, her distinguished and unusually “open” background didn’t help: She’s dead wrong about why there is no French Google.

The reasons for France’s “failure” to give birth to a Google-class search engine are simply summarized: education and money. Google is a pure product of what France misses the most: a strong and diversified engineering pipeline supported by a business-oriented education system, and access to abundant capital. Take the famous (though controversial) Shanghai higher education ranking in computer science: France ranks in the 76-100 group with the University of Bordeaux; in the 101-150 group for the highly regarded Ecole Normale Supérieure; and the much celebrated Ecole Polytechnique sits deep in the 150-200 group – with performance slowly degrading over the last ten years and a minuscule faculty of… 7 CS professors and assistant professors. That’s the reality of computer science education in the most prestigious engineering school in France. As for access to capital, two numbers say it all: According to its own trade association, the French venture capital sector is 1/33rd the size of the US’s, while the GDP ratio is only 1 to 6. That’s for 2013; in 2012, the ratio was 1/46th; things are improving.

The structural weakness of French tech clearly isn’t Google’s fault. Which reveals the ideological facts-be-damned nature of the blame, an attitude broadly shared by other European countries.

A few weeks ago, a surreal event took place in Paris, at the Cité Universitaire Internationale de Paris (which wants to look like a Cambridge replica). There, the Open Internet Project unveiled the next European antitrust action against Google. On stage was a disparate crew: media executives from German and French companies; the former antitrust litigator Gary Reback, known for his fight against Microsoft in the Nineties – and now said to help Microsoft in its fight against Google; Laurent Alexandre, a strange surgeon/entrepreneur and self-proclaimed visionary living in Belgium, where his company DNA Vision is headquartered, who almost got a standing ovation by explaining how Google intended to connect our brains to its gigantic neuronal network by around 2040; all of the above wrapped up with a speech from French Economy Minister Arnaud Montebourg, who never misses an opportunity to apply his government’s seal on anti-imperialist initiatives.

The lawsuit alleges market distortion practices, discrimination in several guises, anticompetitive conduct, preference for its own vertical services at the expense of fairness in its search results, illegal use of data, etc. (The summary of EU allegations is here). The complaint paves the way for painstaking litigation that will drag on for years.

Among the eleven corporations or trade groups funding the lawsuit, we find seven media entities, including the giant German Axel Springer Group and Lagardère Active, whose boss invoked the “moral obligation” to fight Google. There is also CCM Benchmark Group, a large diversified digital player whose boss, Benoît Sillard, had his own epiphany while speaking with Nikesh Arora in Mountain View a while ago. There and then, Mr. Sillard saw the search giant’s grand plan to dominate the digital world. (I paid a couple of visits to Google’s headquarters but was never granted such a religious experience – I will try again, I promise.)

Despite the media industry’s weight, the lawsuit fails to expose Google practices that directly affect the P&L of news providers. Indeed, some media companies have developed businesses that compete with Google verticals. That’s the case with Lagardère’s shopping site LeGuide.com but, again, the group’s CEO, Denis Olivennes, was long on whining and short on relevant facts. (The only fun element he mentioned was outside the scope of OIP’s legal action: with only €50m in revenue, LeGuide.com paid the same amount of taxes as Google, whose French operation generates $1.6bn in revenue.)

Needless to say, that doesn’t mean that Google couldn’t be using its power in questionable ways at the expense of scores of e-retailers. But as far as the media sector is concerned, gains largely outweigh losses as most web sites enjoy a boost in their traffic thanks to Google Search and Google News. (The value of Google-generated clicks is extremely difficult to assess — a subject for a future Monday Note.)

One fact remains obvious: In this legal action, media groups are being played to defend interests… that are not theirs.

In this whole affair, the French news media industry is putting itself in an awkward position. In February 2013, Google and the French government hammered out a deal in which the tech giant committed €60m ($81m) over a 3-year period to fund digital projects run by the French press. (In 2013, according to the fund’s report, 23 projects were started, totaling €16m in funding.) The agreement between Google and the French press stipulates that, for the duration of the deal, the French will refrain from suing Google on copyright grounds – such as over the use of snippets in search results. But those who signed the deal found themselves dragged into the OIP lawsuit through the GESTE, a legacy trade association – more talkative than effective – going back to the Minitel era, which supports the OIP lawsuit on antitrust rather than copyright grounds. (Those who signed the Google Fund agreement issued a convoluted communiqué to distance themselves from the OIP initiative.)

In Mountain View, many are upset by French media that, on one hand, get hefty subsidies and, on the other, file an anti-Google suit before the European Court of Justice. “Back home, the [Google] Fund always had its opponents”, a Google exec told me, “and now they have reasons to speak louder…” Will they be heard? It is unlikely that Google will pull the plug on the Fund, I’m told. But people I talk to also say that any renewal, under any form, now looks unlikely. So does the extension of a similar innovation funding scheme to Germany — or elsewhere. “Google is at a loss when trying to develop peaceful relations with the French”, another Google insider told me… “We put our big EMEA [Europe, Middle East and Africa] headquarters in Paris, we created a nicely funded Cultural Institute, we fueled the innovation fund for the press, and now we are bitten by the same ones who take our subsidies…”

Regardless of its merits, the European press’ involvement in this antitrust case is ill-advised. It might throw the relationship with Google back to the Ice Age. As another Google exec said to me: “News media should not forget that we don’t need them to thrive…”

–frederic.filloux@mondaynote.com

 

iWatch Thoughts

 

Unlike the almost forgotten Apple TV set, the iWatch might be a real product. But as rumors about the device intensify, the scuttlebutt conveniently skirts key questions about the product’s role.

As reverberations of Apple’s Developer Conference begin to die down, the ever-dependable iWatch has offered itself as the focus of another salvo of rumors and speculation. Actually, there’s just one rumor — a Reuters “report” that Quanta Computer will begin manufacturing the iWatch in July — but it was enough to launch a quick-fire series of echoes that bounced around the blogosphere. Not to be outdone, the Wall Street Journal added its own tidbits:

“Apple is planning multiple versions of a smartwatch…[that] will include more than 10 sensors to track and monitor health and fitness data, these people said.”

(“These people” are, of course, the all-knowing “people familiar with the matter”.)

The iWatch hubbub could be nothing more than a sort of seasonal virus, but this time there’s a difference.

At the WWDC three weeks ago, Apple previewed HealthKit, a toolkit iOS developers can use to build health and fitness related applications. HealthKit is a component of the iOS 8 release that Apple plans to ship this fall in conjunction with the newest iDevices. As an example of what developers will be able to do with HealthKit, Apple previewed Health, an application that gives you “an easy-to-read dashboard of your health and fitness data.”
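To make that less abstract, here is a minimal sketch of a HealthKit read query. A hedge: HKHealthStore and the stepCount quantity type are the announced API names, but the snippet uses present-day Swift syntax rather than 2014’s, and is illustrative only:

    import HealthKit

    // Sketch: request read access to step counts, then fetch a few samples.
    let store = HKHealthStore()
    if let stepType = HKObjectType.quantityType(forIdentifier: .stepCount) {
        store.requestAuthorization(toShare: nil, read: [stepType]) { granted, _ in
            guard granted else { return }
            let query = HKSampleQuery(sampleType: stepType, predicate: nil,
                                      limit: 10, sortDescriptors: nil) { _, samples, _ in
                // Each sample carries a step quantity over a time interval.
                samples?.forEach { print($0) }
            }
            store.execute(query)
        }
    }

The design point is the shared store: any app the user authorizes can read or contribute to the same dashboard Apple previewed with Health.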

The rumor that Quanta will soon begin “mass production” of the iWatch — the perfect vehicle for health-and-fitness apps — just became a bit more tantalizing… but there are still a number of questions that are left unanswered.

Foremost is iWatch “independence”. How useful will it be when it’s running on its own, unconnected to a smartphone, tablet, or conventional PC? My own guess: Not very useful. Unless Apple plans to build a monstrosity of a device (not likely), the form factor of our putative iWatch will dictate a small battery, which means the processor will have to be power-conserving and thus unable to run iPhone-caliber apps. Power conservation is particularly important if Apple wants to avoid jibes of the ‘My iWatch ran out of battery at the end of the day’ type. Such occurrences, already annoying with a smartphone, could be bad publicity for a “health and fitness” watch.

So, let’s settle for a “mostly dependent” device that relies on a more robust sibling for storage, analysis, and broad overview.

That raises another question: Will the iWatch be part of Apple’s ecosystem only, or will it play nice with Windows PCs or even Android smartphones? If we take Apple’s continued tolerance of the Android version of Beats Music (at least so far) as an example, the notion of an Apple device communicating with a member of the Android tribe is less heretical than it once was. Again, my own guess: Initially, the iWatch will be restricted to the Apple ecosystem. We’ll see what happens if the device catches on and there’s a demand for a “non-denominational” connection.

As for what role the iWatch will play in the ecosystem, those of us ancient enough might recall the example set by the Smart Personal Objects Technology (SPOT) that Microsoft launched a decade ago. No need to repeat that bit of doomed history by targeting too many platforms, by trying to make “Smart Objects” omniscient. Instead, Apple is likely, as it insisted at its early June WWDC, to tout its Continuity ethos: Let each device do what it does best, but don’t impede the flow of information and activities between devices. In plainer English: Hybrid devices are inferior.

So, besides telling time (perhaps in Yosemite’s new system font, a derivative of Helvetica Neue) what exactly will the iWatch do? The first part of the answer is easy: It will use its sensors to collect data of interest. We’ve already seen what the M7 motion processor and related apps can do in an iPhone 5S; now imagine data that has much finer granularity, and sensors that can measure additional dimensions, such as altitude.
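For a concrete flavor of that M7-style data, here is a hedged sketch using CoreMotion’s CMPedometer (an iOS 8 API that reads the motion coprocessor’s recorded history; current Swift syntax, illustrative only):

    import CoreMotion

    // Sketch: ask the motion coprocessor for today's step history.
    let pedometer = CMPedometer()
    if CMPedometer.isStepCountingAvailable() {
        let midnight = Calendar.current.startOfDay(for: Date())
        pedometer.queryPedometerData(from: midnight, to: Date()) { data, _ in
            if let data = data {
                print("Steps since midnight: \(data.numberOfSteps)")
            }
        }
    }

A watch-borne sensor suite would feed the same kind of store, only with finer granularity and more dimensions.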

Things quickly get more complicated when we turn to the “other side of the skin”. Heart rhythm and blood pressure measurements look banal, but they shouldn’t be taken for granted, especially if one wants medically reliable data. Oximetry, the measurement of your oxygen saturation, looks simple — you just slide a cap onto your fingertip — but that cap is actually transmitting lightwaves through your finger. A smartwatch can’t help the nearly 18 million US citizens who suffer from Type II Diabetes (a.k.a. Adult Onset Diabetes) because there is no non-invasive method for measuring blood sugar. And even if the technical complications of collecting health data are surmounted, device makers can find themselves skirting privacy issues and running afoul of HIPAA regulations.

The iWatch will also act as a receiver of data from a smartphone, tablet, or PC. This poses far fewer problems, both technical and ethical, than health monitoring, but it also offers fewer opportunities. Message notifications and calendar alerts are nice, but they don’t create a new category, and they certainly haven’t “moved the needle” for existing smartwatches. In a related vein, one can imagine bringing the iWatch close to one’s face and speaking to Siri, asking it to set up a calendar event or send a text message… but, as with the trend towards larger smartphone screens, one must exercise care when fantasizing about iWatch use cases.

Then we have the question of developers and applications — where’s the support for iWatch app creators? When the iOS App Store opened in 2008, the iPhone became an app phone and solidified the now universal genre. What iWatch rumors fail to address is the presence or absence of an iWatch SDK, of iWatch apps, and of a dedicated App Store section.

Meanwhile, Google has already announced its Android Wear platform and has opened a “Developer Preview” program. Conventional wisdom has it that the Google I/O convention next week will focus on wearables. Samsung has been actively fine-tuning and updating the software for its line of Galaxy Gear smartwatches (the watches originally ran on an Android derivative but now use Tizen – until next week).

Finally, we have the question of whether an iWatch will sell in numbers that make the endeavor worthwhile. As the previously-mentioned WSJ story underlines, the smartwatch genre has had a difficult start:

“[...] it isn’t clear how much consumers want the devices. Those on the market so far haven’t sold well, because most wearable devices only offer a limited set of features already found on a smartphone.”

The most ambitious rumors project 50 million iWatches sold in the first 12 months. I think that’s an unrealistic estimate, but if a $300 iWatch can sell at these numbers, that’s $15B for the year. This seems like a huge number until you compare it to a conservative estimate for the iPhone: 50 million iPhones at $650 generate $32B per quarter.

Taking a more hopeful view, let’s recall the history of the iPad. It was a late entrant in the tablet field but it coalesced and redefined the genre. Perhaps the iWatch will establish itself as The Smartwatch Done Right. But even if it succeeds in this category-defining role, it won’t have the power and flexibility or the huge number of apps of a true trouser pocket computer. As a result, the iWatch will be part of the supporting cast, not a first order product like the iPhone. There’s nothing wrong with that — it might help make high-margin iPhones even more attractive — but it won’t sell in numbers, dollar volume, or profit comparable to the iPhone or iPad. The iWatch, if and when announced, might be The Next Big Thing – for the few weeks of a gargantuan media feast. But it won’t redefine an industry the way PCs, smartphones and tablets did.

JLG@mondaynote.com

 

Legacy Media: The Missing Gene

 

Legacy media are at great risk of losing against tech culture because incumbents lack a key driver: an obsession with their own mortality. This missing paranoia gene negatively impacts every aspect of their business.

At the last Code conference (the tech gathering hosted by Walter Mossberg and Kara Swisher), Google co-founder Sergey Brin made a surprising statement (at least to me). Asked by Swisher how Google sees itself, Brin responded in his usual terse manner: “There is the external and the internal view. For the outside, we are Goliath and the rest are Davids. From the inside, we are the Davids”. From someone who co-founded a $378bn market cap company that commands more than 80% of global internet search, this is indeed an unexpected acknowledgement.

Sergey Brin’s statement echoes Bill Gates’ own view when, about fifteen years ago, he was asked about his biggest concern: Was it a decisive move or product by another big tech company? No, said Gates, it was the fact that somewhere, somehow, a small group of people was inventing something that would change everything… With the rise of Google and Facebook, his fears came true on a scale he couldn’t even imagine. Roughly at the same time, Andy Grove, then CEO of Intel, published a book with a straightforward title: “Only the Paranoid Survive”. Among my favorite Grove quotes:

“Business success contains the seeds of its own destruction. The more successful you are, the more people want a chunk of your business and then another chunk and then another until there is nothing.”

Still, Intel wasn’t paranoid enough and completely missed the mobile revolution, leaving to ARM licensees the entire market of microprocessors for smartphones and tablets.

This deep-rooted sense of fragility is a potent engine of modern tech culture. It spurs companies to grow as fast as they can by raising lots of capital in the shortest possible time. It also drives them to capture market share by all means necessary (including the worst ones), and to develop a culture of excellence: hiring the best people at any cost, trimming the workforce as needed, and obsessively maintaining a culture of agility in order to quickly learn from mistakes and adapt to market conditions. Lastly, the ever-present sense of mortality drives rising tech companies to quickly erect barriers to entry and to generate the network effects needed to keep competitors at bay.

For a large part, these drives stem from these companies’ early history and culture. Most started by combining a great idea with clever execution – as opposed to being born within an expensive infrastructure. Take Uber or AirBnB. Both started with a simple concept: harness digital tools to achieve swift and friction-free connections between customers and service providers. Gigantic infrastructure or utterly complicated applications weren’t required. Instead, the future of these companies was secured by a combination of flawless execution and fast growth (read this New York Times story about the Uber network effect challenge). Hence the rapid-fire rounds of financing that boosted Uber’s valuation to $17bn, allowing it to accelerate its worldwide expansion – and also to weather a possible price war, as stated by its founder himself at the aforementioned Code Conference.

Unfortunately, paranoia-driven growth sometimes comes with ugly business practices. Examples abound: Amazon’s retaliation against publishers who fight its pricing conditions; Uber’s bullying tactics against its rival – followed by an apology; Google offering for free what others used to sell, or distorting search results, etc.

Such behaviors leave the analog world completely flummoxed. Historical players had experienced nothing but a cosy, gentlemanly competitive environment with a well-defined map of players. This left incumbents without the genes, the culture, required to fight digital barbarians. Whether they are media dealing with Google, publishers negotiating with Amazon, hotels fighting Booking.com or AirBnB, or taxis confronting Uber, legacy players look like the proverbial deer caught in the headlights. In some instances, they created their own dependency on new, powerful distributors (like websites whose traffic relies largely on Google) before realizing that it was time to sue the dope dealer. (This is exactly what the European press is doing by hauling Google before the European Court of Justice, invoking antitrust violations — a subject for a future Monday Note). The appeal to legislators underlines the growing feeling of impotence vis-à-vis the take-no-prisoners approach of new digital players: Unable to respond on the business side, the old guard turns to political power to develop a legal (but short-lasting) containment strategy.

In the media industry, historic players never developed a sense of urgency. The situation varies from one market to another but, in many instances, “too important to fail” was the dominant belief. It always amazed me: As I witnessed the rise of the digital sector – its obsession with fast growth, and its inevitable collision course with legacy media – incumbents remained frozen in the quiet certitude that their role in society was irreplaceable and that under no circumstances would they be left to succumb to a distasteful Darwinian rule. This deep-rooted complacency is, in large part, responsible for the current state of the media industry.

Back in 1997, Andy Grove’s book explained how to deal with change:

“The implication was that either the people in the room needed to change their areas of knowledge and expertise or people themselves needed to be changed” 

Instead, our industry made too few changes, too late. Since the first digital tremors hit business models ten years ago, we have been through one or two generations of managers at traditional media companies, and it is amazing to see the same DNA replicated over and over. Some layers are moving faster than others, though: the higher you go in the food chain, the more people are imbued with a sense of vital urgency. But the rank-and-file and middle management are holding back, unable to exit their comfort zone.

Earlier this year, the French newspaper Liberation chose the outdated slogan “We are a Newspaper” in reaction to its new owners’ ideas (read this story in the NYT). Last week, Liberation opted to appoint as its editor-in-chief one of the strongest opponents of digital media (he has just left the weekly Le Nouvel Observateur, which he gently led into a quiet nursing home, leaving it worth next to nothing).

The gap between the managers of pure digital players and those who still lead legacy media has never been greater. Keenly aware of their own mortality, the former rely more than ever on brutal street-fight tactics, while the incumbents evolve at a different pace, still hoping that the old models will hold out longer than feared. For old media, it is time for a radical genetic alteration, one performed down to every layer of the industry.

frederic.filloux@mondaynote.com


WWDC: iOS 2.0, the End of Silos


Apple tears down the walls between iOS applications, developers rejoice, and Tim Cook delivers a swift kick to Yukari Iwatani Kane’s derrière – more on that at the end.

In this year’s installment of the World Wide Developers Conference, Apple announced a deluge of improvements to their development platforms and tools, including new SDKs (CloudKit, HomeKit, HealthKit); iCloud Drive, the long-awaited response to Dropbox; and Swift, an easy-to-learn, leak-free programming language that could spawn a new generation of Apple developers who regard Objective-C as esoteric and burdensome.
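For a taste of why Swift could win over the Objective-C-wary, here is a trivial, purely illustrative snippet (in present-day Swift spelling; the names are invented):

```swift
// Type inference, first-class closures, and string interpolation,
// with no header files, semicolons, or manual memory management.
let announcements = ["CloudKit", "HomeKit", "HealthKit", "Swift"]
let headlines = announcements.map { "WWDC 2014 gave us \($0)" }

// Optionals make "no value" an explicit, compiler-checked case
// instead of a nil-pointer surprise at runtime.
if let first = headlines.first {
    print(first)
}
```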

If this sounds overly geeky, let’s remind ourselves that WWDC isn’t intended for buyers of Apple products. It’s a sanctuary for people who write OS X and iOS applications. This explains Phil Schiller’s absence from the stage: Techies don’t trust marketing people. (Unfortunately, the conference’s ground rules seem to have been lost on some of the kommentariat.)

The opening keynote is a few breaths short of 2 hours. If you’d rather not drink from the proverbial fire hydrant, you can turn to summaries from Federico Viticci in MacStories, Andrew Cunningham in Ars Technica (“Huge for developers. Massive for everyone else.”), or you can look for reviews, videos, and commentary through Apple’s new favorite search engine, DuckDuckGo, “The search engine that doesn’t track you”.

For today, I’ll focus on the most important WWDC announcement: iOS applications have been freed from the rigid silos, the walls that have prevented them from talking to each other. Apple developers can now write extensions to their apps and avail themselves of the interprocess facilities that they expect from a 21st century OS.

A bit of history will help.

When the first iPhone is shipped in late June, 2007, iOS is incomplete in many respects. There’s no cut and paste, no accented characters, and, most important, there are no native apps. Developers must obey Steve Jobs’ dictate to extend the iPhone through slow and limited Web 2.0 apps. In my unofficial version numbering, I call this iOS 0.8.

The Web 2.0 religion doesn’t last long. An iOS Software Development Kit (SDK) is announced in the fall and released in February, 2008. When the iTunes-powered App Store opens its doors in July, the virtual shelves are (thinly) stocked with native apps. This is iOS 1.0.

Apple developers enthusiastically embrace the platform and the App Store starts its dizzying climb from an initial 500 apps in 2008 to today’s 1.2 million apps and 75B cumulative downloads.

However, developers’ affections don’t extend to Apple’s “security state”, the limits imposed on their apps in the name of security and simplicity. To be sold in the App Store, an app must agree to stay confined in its own little sandbox, with no way to communicate with other apps.

According to Apple dogma, this limitation is a good thing because it prevents the viruses and other malware that have plagued older operating systems and overly-trusting apps. One wrong click and your device is visited by rogue code that wreaks havoc on your data, yields control to remote computers, or, worst of all, sits silently and unnoticed while it spies on your keystrokes. No such thing on iOS devices. The prohibition against inter-application exchange vastly reduces the malware risk.

This protection comes with a cost. For example, when you use a word processor or presentation tool on a personal computer, you can grab text and images of any provenance and drop them into your project. On the iOS version of Pages, you can only see other Pages documents — everything else is out of sight and out of reach.

The situation becomes even more galling when developers notice that some of Apple’s in-house apps — iMessage, Maps, Calendar with Contacts — are allowed to talk among themselves. To put it a little too simply, Apple engineers can write code that’s forbidden to third party developers.

Apple’s rules for app development and look-and-feel are famously (and frustratingly) rigid, but the company is occasionally willing to shed its dogma. In 2013, for example, skeuomorphism was abandoned…do any of us miss the simulated leather and torn bits of paper on the calendar?

With last week’s unveiling of the new version of iOS, a much more important dogma has been tossed into the dustbin: An app can now reach beyond its sandbox. Apps can interconnect, workflows are simplified, previously unthinkable feats are made possible.

This is the real iOS 2.0. For developers, after the momentous 2008 opening of the App Store that redefined the smartphone, this is the platform’s second major release.

With the new iOS, a third-party word processor developer can release his app from its sandbox by simply incorporating the Document Picker:

“The document picker feature lets users select documents from outside your app’s sandbox. This includes documents stored in another app’s iCloud container or documents provided by a third-party extension.”

Users of the word processor will be able to see and incorporate all files, regardless of how they were created or where they’re stored (within the obvious physical limits). This is a welcome change from today’s frustratingly constricted situation.
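As a concrete sketch, here is roughly what adopting the document picker looks like from a hypothetical third-party editor. The picker class, delegate protocol, and security-scoped access calls are real UIKit API (shown in its current Swift spelling); the surrounding view controller is invented for illustration:

```swift
import UIKit
import UniformTypeIdentifiers

class EditorViewController: UIViewController, UIDocumentPickerDelegate {

    // Present the system document picker. It can reach documents outside
    // this app's sandbox: other apps' iCloud containers, third-party
    // document providers, and so on.
    func openExternalDocument() {
        let picker = UIDocumentPickerViewController(forOpeningContentTypes: [.plainText])
        picker.delegate = self
        present(picker, animated: true)
    }

    // The delegate callback delivers security-scoped URLs; access to a
    // file outside the sandbox must be explicitly started and stopped.
    func documentPicker(_ controller: UIDocumentPickerViewController,
                        didPickDocumentsAt urls: [URL]) {
        guard let url = urls.first, url.startAccessingSecurityScopedResource() else { return }
        defer { url.stopAccessingSecurityScopedResource() }
        let text = (try? String(contentsOf: url, encoding: .utf8)) ?? ""
        print("Opened \(url.lastPathComponent), \(text.count) characters")
    }
}
```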

iOS Extensions, a feature that lets applications offer their own services to other apps, played well when demonstrated by Craig Federighi, Apple’s Senior Vice President of Software Engineering:

“Federighi was able to easily modify Safari by adding a sharing option for Pinterest and a translation tool courtesy of Bing. Users will also be able to apply photo filters from third-party apps and use document providers like Box or OneDrive…”
Business Insider, Why You Should Be Excited for Extensions in iOS 8 
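The Share extension Federighi demoed (Pinterest inside Safari) follows a template so thin it fits in a few lines. A minimal sketch, with the posting logic left hypothetical; SLComposeServiceViewController and the overridden hooks are the real base class and methods Xcode generates:

```swift
import Social

// Minimal Share extension sketch. The host app (Safari, Photos, ...)
// runs this view controller in a separate process.
class ShareViewController: SLComposeServiceViewController {

    // Enable the Post button only when the user has typed something.
    override func isContentValid() -> Bool {
        return !contentText.isEmpty
    }

    // Called when the user taps Post: hand the content off, then tell
    // the host app we are done so it can resume.
    override func didSelectPost() {
        // (Send contentText to your service here.)
        extensionContext?.completeRequest(returningItems: [], completionHandler: nil)
    }
}
```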

Prominent among the beneficiaries of iOS Extensions are third-party keyboard designers. Today, I watch with envy as my Droid compatriots Swype a quick text message. The keyboard layouts and input methods on my iPhone are limited to the choices Apple gives me — and they don’t include Swype. Tomorrow, developers will be able to augment Apple’s offerings, including with keyboards designed for specific apps.
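A custom keyboard is just another extension: a view controller whose only channel to the host app is a text document proxy. A minimal sketch follows; the single hard-coded key is invented for illustration, while UIInputViewController and textDocumentProxy are the real extension points:

```swift
import UIKit

class KeyboardViewController: UIInputViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // One illustrative key. A real keyboard must also provide a
        // "next keyboard" (globe) key that calls advanceToNextInputMode().
        let key = UIButton(type: .system)
        key.setTitle("insert", for: .normal)
        key.frame = CGRect(x: 16, y: 16, width: 100, height: 40)
        key.addTarget(self, action: #selector(keyTapped), for: .touchUpInside)
        view.addSubview(key)
    }

    // All typing flows through the proxy; the keyboard never sees the
    // host app's other content, which is the basis of the sandboxing.
    @objc private func keyTapped() {
        textDocumentProxy.insertText("typed by a custom keyboard")
    }
}
```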

As expected, developers have reacted enthusiastically to the end of silo hell. Phil Libin, Evernote’s CEO, sums up developer sentiment in the Ars Technica review:

“We’re most excited about extensions, widgets, TouchID APIs and interactive notifications. We’re all over all of that…This is a huge update for us. It feels like we got four out of our top five most wanted requests!”

Now, for the mandatory “To Be Sure” paragraph…

None of this is free. I don’t mean in the financial sense, but in terms of complexity, restrictions, adapting to new ways of doing old things as well as to entirely fresh approaches. While the relaxation of Apple’s “security state” strictures opens many avenues, it also heightens malware risk, something Apple is keenly aware of. In some cases the company will put the onus on the user, asking us to explicitly authorize the use of an extension. In other situations, as Charles Arthur points out in his WWDC article for The Guardian, Apple will put security restrictions on custom keyboards. Quoting Apple’s prerelease documentation:

“There are certain text input objects that your custom keyboard is not eligible to type into. First is any secure text input object [which is] distinguished by presenting typed characters as dots.
When a user taps in a secure text input object, the system temporarily replaces your custom keyboard with the system keyboard. When the user then taps in a nonsecure text input object, your keyboard automatically resumes.”

In part, the price to pay for the new freedoms will depend on Apple’s skill in building safeguards inside the operating system; that’s what all operating systems strive for. Developers will also have to navigate a new labyrinth of guidelines to avoid triggering the App Store security tripwire.

That said, there is little doubt that the fall 2014 edition of iOS will be well received on both existing and new iDevices. Considering what Apple iOS developers were able to accomplish while adhering to the old dogma, we can expect more than simply more of the same when the new version of iOS is released.

Which brings us to Tim Cook and the stamp he’s put on Apple. Critics who moan that Apple won’t be the same now that Steve Jobs is gone forget the great man’s parting gift: “Don’t try to guess what I would have done. Do what you think is best.” With the Maps fiasco, we saw Cook take the message to heart. In a break with the past, Cook apologized for an Apple product without resorting to lawyerly caveats and justifications. In a real break with the past, he even recommended competing products.

We’ve also seen Cook do what he thinks is best in his changes to the executive team that he inherited from Jobs. Craig Federighi replaces 20-year NeXT/Apple veteran Scott Forstall; Angela Ahrendts is the new head of Retail; there’s a new CFO, Luca Maestri, and a new head of US Sales, Doug Beck. The transitions haven’t always been smooth — both Ahrendts’ and Beck’s immediate predecessors were Cook appointees who didn’t work out and were quickly dismissed. (Beck was preceded by Zane Rowe, former CFO at United Airlines…a CFO in a Sales job?)

Inside the company, Cook is liked and respected. He’s seen as calmly demanding yet fair; he guides and is well supported by his Leadership Team. This isn’t what the PR office says, it’s what I hear from French friends who work there. More than just French, they’re hard-to-please Parisians…

[Image: “I Love Rien, I’m Parisien”]

…but they like Cook, the way he runs the show. (True to their nature, they save a few barbs for the egregious idiots in their midst.)

With this overall picture of corporate cultural health and WWDC success in mind, let’s turn to Yukari Iwatani Kane, the author of Haunted Empire, Apple After Steve Jobs.

On her Web page, Kane insists her book, an exemplar of the doomed-without-Jobs attitude, is “hard-hitting yet fair”. That isn’t what most reviewers have to say. The Guardian’s Charles Arthur called it “great title, shame about the contents”; Time’s Harry McCracken saw it as “A Bad Book About Apple After Steve Jobs”; Jason Snell’s detailed review in Macworld neatly addresses the shortcoming that ultimately diminishes the book’s value:

“Apple after the death of Steve Jobs would be a fascinating topic for a book. This isn’t the book. Haunted Empire can’t get out of the way of its own Apple-is-doomed narrative to tell that story.”

Having read the book, I can respect the research and legwork this professional writer, previously at the Wall Street Journal, has put into her opus, but it’s impossible to avoid the feeling that Kane started with a thesis and then built an edifice on that foundation despite the incompatible facts. Even now she churlishly sticks to her negative narrative: Where last week’s successful WWDC felt like a confederation of engineers and application developers happily working together, Kane sees them as caretakers holding a vigil:

[Image: Kane’s tweet]

The reaction to Kane’s tweet was “hard-hitting yet fair”:

[Image: responses to Kane’s tweet]

Almost three years after Tim Cook took the helm, the company looks hale, not haunted.

I’ll give Cook the last word. His assessment of Kane’s book: “nonsense”.

JLG@mondaynote.com