About Jean-Louis Gassée

Posts by Jean-Louis Gassée:

Security Shouldn’t Trump Privacy – But I’m Afraid It Will

 

The NSA and security agencies from other countries are shooting for total surveillance, for complete protection against terrorism and other crimes. This creates the potential for too much knowledge falling, one day, into the wrong hands.

An NSA contractor, Edward Snowden, takes it upon himself to gather a mountain of secret internal documents that describe our surveillance methods and targets, and shares them with journalist Glenn Greenwald. Since May of this year, Greenwald has provided us with a trickle of Snowden’s revelations… and our elected officials, both here and abroad, treat us to their indignation.

What have we learned? We Spy On Everyone.

We spy on enemies known or suspected. We spy on friends, love interests, heads of state, and ourselves. We spy in a dizzying number of ways, both ingenious and disingenuous.

(Before I continue, a word on the word “we”. I don’t believe it’s honest or emotionally healthy to say “The government spies”. Perhaps we should have been paying more attention, or maybe we should have prodded our solons to do the jobs we elected them for… but let’s not distance ourselves from our national culpability.)

You can read Greenwald’s truly epoch-making series On Security and Liberty in The Guardian and pick your own approbations or invectives. You may experience an uneasy sense of wonder when contemplating the depth and breadth of our methods, from cryptographic and social engineering exploits (doubly the right word), to scooping up metadata and address books and using them to construct a security-oriented social graph.

We manipulate technology and take advantage of human foibles; we twist the law and sometimes break it, aided by a secret court without opposing counsel; we outsource our spying by asking our friends to suck petabytes of data from submarine fiber cables, data that’s immediately combed for keywords and then stored in case we need to “walk back the cat”.

[Image: Sunday’s home page of the German site Die Welt, on the NSA and Chancellor Merkel’s phone]

The reason for this panopticon is simple: Terrorists, drugs, and “dirty” money can slip through the tiniest crack in the wall. We can’t let a single communication evade us. We need to know everything. No job too small, no surveillance too broad.

As history shows, absolute anything leads to terrible consequences. In a New York Review of Books article, James Bamford, the author of noted books on the NSA, quotes Senator Frank Church who, way back in 1975, was already worried about the dangers of absolute surveillance [emphasis mine]:

“That capability at any time could be turned around on the American people and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know. Such is the capability of this technology…. I don’t want to see this country ever go across the bridge. I know the capacity that is there to make tyranny total in America, and we must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is the abyss from which there is no return.”

From everything we’ve learned in recent months, we’ve fallen into the abyss.

We’ve given absolute knowledge to a group of people who want to keep the knowledge to themselves, who seem to think they know best for reasons they can’t (or simply won’t) divulge, and who have deemed themselves above the law. General Keith Alexander, the head of the NSA, contends that “the courts and the policy-makers” should stop the media from exposing our spying activities. (As Mr. Greenwald witheringly observes in the linked-to article, “Maybe [someone] can tell The General about this thing called ‘the first amendment’.”)

Is the situation hopeless? Are we left with nothing but to pray that we don’t elect bad guys who would use surveillance tools to hurt us?

I’m afraid so.

Some believe that technology will solve the problem, that we’ll find ways to hide our communications. “We have the solution today!” they say: We already have unbreakable cryptography, even without having to wait for quantum improvements. We can hide behind mathematical asymmetry: Computers can easily multiply very large numbers to create a key that encodes a message, but it’s astronomically difficult to reverse the operation.
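
To make that asymmetry concrete, here is a minimal sketch in Python (toy primes of my choosing; real keys use primes hundreds of digits long, where the reverse step is beyond any known classical method):

```python
# A toy sketch of the asymmetry described above: multiplying two primes
# is instantaneous, while recovering them from their product (here by
# naive trial division) takes visibly longer. At real cryptographic
# key sizes, no known classical method finishes at all.
import time

p, q = 999_983, 1_000_003      # two modest primes; toy sizes only
n = p * q                      # the "easy" direction: instant

def factor(n):
    """The 'hard' direction: recover a factor by trial division."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

start = time.perf_counter()
print(factor(n))                                   # (999983, 1000003)
print(f"factored in {time.perf_counter() - start:.4f}s")
```

The multiplication is a single machine operation; even this toy factoring loop grinds through hundreds of thousands of steps, and every additional digit widens the gap.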

Is it because of this astronomic difficulty — but not impossibility — that the NSA is “the largest employer of mathematicians in the country”? And is this why “civilian” mathematicians worry about the ethics of those who are working for the Puzzle Palace?

It might not matter. In a total surveillance society, privacy protection via unbreakable cryptography won’t save you from scrutiny or accusations of suspicious secrecy. Your unreadable communication will be detected. In the name of State Security, the authorities will knock on your door and demand the key.

Even the absence of communication is suspect. Such mutism could be a symptom of covert activities. (Remember that Bin Laden’s compound in Abbottabad was thoroughly unwired: No phones, no internet connection.)

My view is that we need to take another look at what we’re pursuing. Pining for absolute security is delusional, and we know it. We risk our lives every time we step into our cars — or even just walk down the street — but we insist on the freedom to move around. We’re willing to accept a slight infringement on our liberties as we obey the rules of the road, and we trust others will do the same. We’re not troubled by the probability of ending up mangled while driving to work, but the numbers aren’t unknown (and we’re more than happy to let insurance companies make enormous profits by calculating the odds).

Regarding surveillance, we could search for a similar risk/reward balance. We could determine the “amount of terror” we’re willing to accept and then happily surrender just enough of our privacy to ensure our safety. We could accept a well-defined level of surveillance if we thought it were for a good cause (as in keeping us alive).

Unfortunately, this pleasant-sounding theory doesn’t translate into actual numbers, on either side of the equation. We have actuarial tables for health and automotive matters, but none for terrorism; we have no way of evaluating the odds of, say, a repeat of the 9/11 terrorist attack. And how do you dole out measures of privacy? Even if we could calculate the risk and guarantee a minimum of privacy, imagine that you’re the elected official who has to deliver the message:

In return for guaranteed private communication with members of your immediate family (only), we’ll accept an X% risk of a terrorist attack resulting in Y deaths and Z wounded in the next T months.

In the absence of reliable numbers and courageous government executives, we’re left with an all-or-nothing fortress mentality.

Watching the surveillance exposé unfold, I’m reminded of authoritarian regimes that have come and gone (and, in some cases, come back). I can’t help but think that we’ll coat ourselves in the lubricant of social intercourse: hypocrisy. We’ll think one thing, say another, and pretend to ignore that we’re caught in a bad bargain.

JLG@mondaynote.com

 

iPhone 5S surprises

 

“I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions.”

I wrote those words a little over a month ago. I’ve now played customer for the requisite two weeks — I got an iPhone 5S on October 3rd — and I’m prepared to report.

But first, some context.

iPhone launches always generate controversy; there’s always something to complain about: Antennagate for the iPhone 4, the Siri beta for the 4S, the deserved Maps embarrassment last year – with a clean, dignified Tim Cook apology.

(Whether these fracas translate into lost revenue is another matter).

As I sat in the audience during the introduction of the original iPhone, back in January, 2007, I thought the demo was too good, that Steve was (again) having his way with facts. I feared that when the product shipped a few months later, the undistorted reality would break the spell.

We know now that the iPhone that Steve presented on the stage was unfinished, that he trod a careful path through a demo minefield. But the JesusPhone that Apple shipped — unfinished in many ways (no native apps, no cut-and-paste) — was more than a success: It heralded the Smartphone 2.0 era.

[Image: iPhone 5S]

This year, Tim Cook introduced the riskiest hardware/software combination since the original iPhone. The iPhone 5S wants to be more than just “new and improved”, it attempts to jump off the slope with its combination of two discontinuities: a 64-bit processor and a new 64-bit iOS. Will it work, or will it embarrass itself in a noisome backfire?

First surprise: It works.

Let me explain. I have what attorneys call “personal knowledge” of sausage factories: I’ve been accountable for a couple and a fiduciary for several others. I have first-hand experience with the sights, the aromas, the tumult of the factory floor, so I can’t help but wince when I approach a really new product; I worry in sympathy with its progenitors. The 5S isn’t without its “aromas” (we’ll get to those later), but the phone is sleek and attractive, the house apps are (mostly) solid, and the many new Application Programming Interfaces (APIs) promise novel applications. Contrary to some opinions, there are fewer warts than anyone could have expected.

Surprise #2, the UI: I had read the scathing critiques of the spartan excesses, and, indeed, I feel the drive for simplicity occasionally goes too far. The buttons on the built-in timer are too thin, too subdued. When I meditate in the dark I can’t distinguish Start from Cancel without my glasses. But I’m generally happy with the simpler look. Windows and views get out of the way quickly and gracefully, text is neatly rendered, the removal of skeuomorphic artifacts is a relief.

The next surprise is the fingerprint sensor a.k.a. Touch ID. Having seen how attempts to incorporate fingerprint recognition into smartphones and laptops have gone nowhere, I had my doubts. Moreover, Apple had acquired AuthenTec, the company that created the fingerprint sensor, a mere 15 months ago. Who could believe that Apple would be able to produce a fingerprint-protected iPhone so quickly?

But it works. It’s not perfect; I sometimes have to try again, or use another finger (I registered three on my right hand and two on my left). Still, it’s clear that Apple has managed to push Touch ID into the category of “consumer-grade technology”: It works often enough and delivers enough benefit to offset the (small) change in behavior.

A personal favorite surprise is Motion Sensing.

When Apple’s Marketing Supremo Phil Schiller described the M7 motion processor, I didn’t think much of it; I was serving the last days of my two-month sentence wearing the Jawbone UP bracelet mentioned in a previous Monday Note. (A friend suggested I affix it to his dog’s collar to see what the data would look like.)

Furthermore, the whole “lifestyle monitoring” business didn’t seem like virgin territory. The Google/Motorola Moto X smartphone introduced last August uses a co-processor that, among other things, monitors your activities, stays awake even when the main processor is asleep, and adjusts the phone accordingly. A similar co-processing arrangement is present in Moto X’s predecessors, the Droid Maxx, Ultra and Mini.

But then I saw a Twitter exchange about Motion Sensing apps about a week after I had activated my iPhone 5S. One thumb touch later, the free Pedometer++ app asked for my permission to use motion data (granted) and immediately told me how many steps I’d taken over the past seven days.

I went to the chauffeured iPhone on my wife’s desk and installed the app. I did the same on friends’ devices. The conclusion was obvious: The M7 processor continuously generates and stores motion data independent of any application. A bit of googling shows that there are quite a few applications that use the motion data that’s obligingly collected by the M7 processor; I downloaded a number of these apps and the step counts are consistent.

(Best in class is the ambitious MotionX 24/7. Philippe Kahn’s company FullPower Technologies licenses MotionX hardware and software to many motion-sensing providers, including Jawbone and, perhaps, Apple. Wearable technologies aren’t just for our wrists…we carry them in our pockets.)

My wife asked if her iPhone would count steps from within her handbag. Ever the obliging husband, I immediately attended to this legitimate query, grabbed her handbag, and stepped out of the house for an experimental stroll. A conservatively dressed couple walked by, gave me a strange look, and didn’t respond to my evening greeting, but, indeed, the steps were counted.

A question arises: Does Apple silently log my movements? No, my iPhone records my locomotion, but the data stays within the device — unless, of course, I let a specific application export it. One must be aware of the permissions.
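
The pattern is easy to picture: an always-on, low-power logger records continuously, and applications merely query the stored history once the user says yes. Here is a hypothetical sketch of that arrangement in Python (all names and numbers are mine, not Apple’s API):

```python
# Hypothetical sketch of the M7 pattern described above: a low-power
# logger records motion samples continuously; apps can only read the
# history after the user grants permission. Names are illustrative.
from datetime import datetime, timedelta
import random

class MotionLog:
    """Always-on recorder (the coprocessor role)."""
    def __init__(self):
        self.samples = []            # (timestamp, steps) pairs

    def record(self, t, steps):
        self.samples.append((t, steps))

    def query(self, start, end, permission_granted):
        if not permission_granted:
            raise PermissionError("user has not granted motion access")
        return [(t, s) for t, s in self.samples if start <= t <= end]

# Simulate a week of background logging, one sample per day
log = MotionLog()
now = datetime.now()
for d in range(7):
    log.record(now - timedelta(days=d), random.randint(3000, 12000))

# An app installed today can still see last week's steps, but only
# once the user says yes, as Pedometer++ asked above.
history = log.query(now - timedelta(days=7), now, permission_granted=True)
print(sum(steps for _, steps in history), "steps over the past 7 days")
```

This is why a freshly installed app like Pedometer++ can instantly chart a week it never witnessed.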

Other 5S improvements are welcome but not terribly surprising. The camera has been smartly enhanced in several dimensions; search finally works in Mail; and, to please Sen. McCain, apps update themselves automatically.

All of this comes with factory-fresh bugs, of course, a whiff of the sausage-making apparatus. iPhoto crashed on launch the first three or four times I tried it, but has worked without complaint since then. A black Apple logo on a white background appeared and then quickly disappeared — too brief to be a full reboot, too sparse to be part of an app.

I’ve had to reboot the 5S to recover a dropped cellular connection, and have experienced hard-to-repeat, sporadic WiFi trouble that seems to spontaneously cure itself. (“How did you fix it?” asks my wife when her tech chauffeur gets the sullen device to work again. “I don’t know, I poke the patient everywhere until it responds.”)

From my admittedly geeky perspective, I’m not repelled by these glitches: they didn’t lose my data or prevent me from finishing a task. They’re annoying, but they’re to be expected given the major hardware and software changes. And I expect that the marketplace (as opposed to the kommentariat) will shrug them off and await the bug fixes that will take care of business.

So, yes, overall, the “discontinuous” 5S works.

[I'm also using a pre-release of Mavericks, the upcoming 10.9 version of OS X, on two Macs. There, I wonder if I'm not seeing the opposite of the iPhone 5S: less risk, more bugs. I hope things straighten out for the public release. I'll report if and when warranted.]

[I can't resist: The Washington Post's Wonkblog calls the iPhone's third color... Dignified Gold. I wonder: Is it a compliment to Sir Jony's unerring taste? Or a clever, indirect ethnic slur?]

JLG@mondaynote.com

Microsoft Mission Impossible

 

You’re Microsoft’s new CEO. How do you like staring at the abyss between two mutually exclusive ways of making money? The old business model, Windows and Office licensing, is going away. The Devices and Services future puts you in direct competition against the likes of Google and Apple as well as former licensing vassals such as HP and Dell. Can you take the company to the other side, or will you fall to the bottom of the business model transition canyon?

Life used to be simple and immensely profitable at Microsoft. As its name implies, the company started as a supplier of microcomputer software. Simplifying a bit, it all started with the BASIC interpreter, which found its way into many early personal computers including the Apple ][. After that came DOS, the operating system for IBM’s Personal Computer; and Multiplan, an early foray into desktop productivity. DOS begat Windows, and Multiplan was succeeded in steps by the full Office suite. Through a series of astute business and lawyerly maneuvers, the Windows + Office combo eventually spread to virtually all PC clones.

This made Microsoft the most successful software company the world had ever seen, and its founding CEO, Bill Gates, became the richest man on the planet. In 2000, the company’s market capitalization reached $540B (approximately $800B in today’s dollars). As this Wikinvest graph shows, Microsoft dwarfed all other tech companies:

[Chart: 2000 market capitalizations (Wikinvest): Microsoft dwarfs the other tech companies]

(At the time, the NASDAQ index of mostly tech stocks stood a little above 4,000; it closed at 3,792 this past Friday.)

Back then, Windows + Office licensing was the only money pump that really mattered. Everything else — all other software products and even sales of enterprise servers — either depended on Microsoft’s huge PC installed base, or didn’t move the needle. Hardware and entertainment lines of business were largely immaterial; online activities weren’t yet the money sink we’ve seen in recent years.

According to the company’s 2000 Annual Report, the combination of the “Windows Platforms” and “Productivity Applications” accounted for $19.3B in revenue ($9.3B and $10B, respectively). That’s 84% of the company’s $23B total revenue and, even more important, 98% of Microsoft’s Operating Income!

Moving to Q1 2013, the market capitalization picture has drastically changed:

[Chart: Q1 2013 tech market capitalizations]

Google is in many ways becoming Microsoft 2.0, Oracle has grown nicely, and Apple is now on top.

What happened?

Mobile personal computing happened. Smartphones and tablets are displacing conventional PCs, desktops, and laptops.

To put it even more succinctly: the iPhone did it.

When Steve Jobs stepped onto the stage at Macworld in January, 2007, there were plenty of smartphones on the market: Windows Mobile, Palm Treo, Nokia, Blackberry… But Apple’s iPhone was different. It really was a personal computer with a modern operating system. While the iPhone didn’t initially support third-party apps, a Software Development Kit (SDK) and App Store were soon introduced.

Android quickly followed suit, the Smartphone 2.0 race was on, and the incumbents were left to suffer grievous losses.

Riding on the iPhone’s success and infrastructure, the iPad was introduced, with Android-powered tablets not far behind. These new, mobile personal computers caused customers to Think Different, to re-examine their allegiance to the one-and-only PC.

As these products flooded the market, Microsoft went through its own version of the Stages of Grief, from denial to ultimate acceptance.

First: It’s Nothing. See Steve Ballmer memorably scoffing at the iPhone in 2007. Recall ODM Director Eddie Wu’s 2008 prediction that Windows Mobile would enjoy 40% market share by 2012.

Second: There is no Post-PC… “Plus is the new ‘Post’”. Smartphones and tablets are mere companion devices that will complement our evergreen PCs. The party line was eloquently asserted two years ago by Frank Shaw, Microsoft’s VP of Communications:

“So while it’s fun for the digerati to pronounce things dead, and declare we’re post-PC, we think it’s far more accurate to say that the 30-year-old PC isn’t even middle aged yet, and about to take up snowboarding.”

Next comes Bargaining: Microsoft makes a tablet, but with all the attributes of a PC. Actually, they make two Surface devices, one using an ARM processor, the other a conventional Intel CPU.

Today comes Acceptance: We’re indeed in a Post-PC era. PCs aren’t going to disappear any time soon, but the 30-year epoch of year-after-year double-digit growth is over. We’re now a Devices and Services company!

It’s a crisp motto with a built-in logic: Devices create demand for Microsoft services that, in turn, will fuel the market’s appetite for devices. It’s a great circular synergy.

But behind the slick corpospeak lurks a problem that might seriously maim the company: Microsoft wants to continue to license software to hardware makers while it builds a Devices business that competes with these same licensees. They want it both ways.

Real business model transitions are dangerous. By real transition I don’t mean adding a new line of peripherals or accessories, I mean moving to a new way of making money that negatively impacts the old one. The old money flow might dry up before the new one is able to replace it, causing an earnings trough.

For publicly traded companies, this drought is unacceptable. Rather than attempt the transition and face the ire of Wall Street traders, some companies slowly sink into irrelevance. Others take themselves private to allow the blood-letting to take place out of public view. When the curtain lifts some months later, a smaller, healthier outfit is relaunched on the stock market. Dell is a good example of this: Michael Dell gathered investors, himself included, to buy the company back and adapt its business model to a Post-PC world behind closed doors.

Microsoft can’t abandon its current model entirely: it can’t stop selling software licenses to hardware makers. But the company realizes that it also has to get serious about making its own hardware if it wants to stay in the tablet and smartphone race.

The key reason for Microsoft’s dilemma is Android. Android is inexpensive enough (if not exactly free) that it could kill Redmond’s mobile licensing business. (Microsoft might get a little bit of money from makers of Android-powered hardware thanks to its patent portfolio, but that doesn’t change the game.) This is why Microsoft offered “platform support payments” to Nokia, which essentially made Windows Phone free. And, now we have the belated, under duress acquisition of Nokia’s smartphone business, complete with 32,000 angry Finns.

(Microsoft is rumored to have approached HTC with an offer to dual-boot Windows Phone on HTC’s Android handsets. It’s not a very believable rumor — two competing operating systems on the same smartphone? But it has a satisfying irony: In an earlier incarnation I saw Microsoft play legal hardball against anyone who tried to sell PCs with both Windows and another OS installed at the factory…)

Another example of trying to keep one foot on each side of the abyss is the Surface tablet. Microsoft tried to create a hybrid “best-of-both-worlds” PC/tablet, complete with two different UIs. I bought one and found what many experienced: It doesn’t have the simplicity and agility of a genuine tablet, nor does it offer the classic workflow found on Windows 7. We’ll have to see how helpful the upcoming Windows 8.1 is in that regard.

So… What about our new CEO?

  • S/he finds a company that’s in the middle of a complicated structural and cultural reorganization.
  • The legacy PC business is slowing down, cannibalized by mobile personal computers.
  • Old OEM partners aren’t pleased with the company’s new direction(1). They have to be kept inside the tent while the Surface tablets experiment plays out. Success will let Microsoft discard Legacy PC makers. Failure will lead Redmond to warmly re-embrace its old vassals.
  • The Windows Phone licensing business lost its clients as a result of the Nokia acquisition.
  • Integrating Nokia will be difficult, if not a slow-moving disaster.
  • The Windows Phone OS needs work, including a tablet version that has to compete with straight tablets from Android licensees and from Apple.
  • Employees have to be kept on board.
  • So do shareholders.

How would you like the job?

JLG@mondaynote.com

(1) HP’s Meg Whitman now sees Microsoft as a competitor — and introduces a Google-powered Chromebook. What we think this will do for HP’s Personal Systems Group revenue and profit is best left unsaid.

Apple Under Siege

 

Two years after Steve Jobs left us, Apple now wears Tim Cook’s imprint and, for all the doubt and perpetual doomsaying, seems to wear it well. One even comes to wonder if the Cassandras aren’t in fact doing Apple a vital favor.

Last Friday, Tim Cook issued a somber remembrance to Apple employees:

Team-
Tomorrow marks the second anniversary of Steve’s death. I hope everyone will reflect on what he meant to all of us and to the world. Steve was an amazing human being and left the world a better place. I think of him often and find enormous strength in memories of his friendship, vision and leadership. He left behind a company that only he could have built and his spirit will forever be the foundation of Apple. We will continue to honor his memory by dedicating ourselves to the work he loved so much. There is no higher tribute to his memory. I know that he would be proud of all of you.
Best,
Tim

I am one of the many who are in Steve’s debt and I miss him greatly. I consider him the greatest creator and editor of products this industry has ever known, and am awed by how he managed the most successful transformation of a company — and of himself — we’ve ever seen. I watched his career from its very beginning, I was fortunate to have worked with him, and I thoroughly enjoyed agreeing and disagreeing with him.

I tried to convey this in an October 9th, 2011 Monday Note titled Too Soon. I just re-read it and hope you’ll take the time to do the same. You’ll read words of dissent by Richard Stallman and Hamilton Nolan, but you’ll mostly find praise by Barack Obama, Jon Stewart, Nicholson Baker in the New Yorker, and this elegant image by Jonathan Mak:

[Image: Jonathan Mak’s Steve Jobs silhouette tribute]

Two years later, we can look at Apple under Tim Cook’s leadership. These haven’t been an easy twenty-four months: Company shares have gone on a wild ride, execs have been shown the door, there was the Maps embarrassment and apology, and there has been a product drought for most of the last fiscal year (ending in September).

All of this has provided fodder for the Fox Newsstands of the Web, for netwalkers seeking pageviews. The main theme is simple and prurient, hence its power: Without Steve, Apple is on the decline. The variations range from the lack of innovation — Where’s the Apple TV?, the iWatch?, the next Big Thing? — to The Decline of The Brand, Android Is Winning, and Everything Will Be Commoditized.

Scan Philip Elmer-DeWitt’s Apple 2.0 or John Gruber’s Daring Fireball and treat yourself to intelligent repudiations of this incessant “claim chowder”: discredited pontifications. I’ll extract a few morsels from my own Evernote stash:

Apple’s press conference showed a brand unraveling, or so said VentureBeat in March, 2012. Eighteen months later, Apple passed Coca-Cola to become the world’s most valuable brand.

How Tim Cook can save himself (and Apple), subtitled, for good measure: What the confused Apple CEO can do to avoid getting canned and having to slink away with nothing but his $378 million compensation package as comfort. Penned by a communications consultant who “teaches public relations at NYU”, the article features an unexpected gem: Cook should buy a blazer. You know, “to break the deleterious chokehold of the Steve Jobs’ [sic] legacy”.

Apple: The Beginning of a Long Decline? (note the hedging question mark.) This LinkedIn piece, which questions the value of the fingerprint sensor, ends with a lament:

There was no sign of a watch. So those of us in Silicon Valley are left watching, wondering, and feeling a little empty inside… Jobs is gone. It looks like Apple’s magic is slowly seeping away now too.

Shortly thereafter, Samsung’s iWatch killer came out…and got panned by most reviewers.

Last: Apple’s Board of Directors is concerned about Apple’s pace of innovation, says Fox Business News’ Charlie Gasparino, who claims to have “reliable sources”.

Considering how secretive the company is, can anyone imagine a member of Apple’s Board blabbing to a Fox Business News irrespondent?

Despite the braying of the visionary sheep, Tim Cook never lost his preternatural calm; he never took the kommentariat’s bait. Nor have his customers: They keep buying, enjoying, and recommending Apple’s products. And they do so in such numbers — 9 million new iPhones sold in the launch weekend — that Apple had to file a Form 8-K with the Securities and Exchange Commission (SEC) to “warn” shareholders that revenue and profits would exceed the guidance they had provided just two months ago when management reviewed the results of the previous quarter.

In Daniel Eran Dilger’s words, Data bites dogma: Apple’s iOS ate up Android, Blackberry U.S. market share losses this summer:

Apple’s increase accounted for 1.5 of the 1.6 percentage points that Android and Blackberry collectively lost. This occurred a full month before the launch of Apple’s iPhone 5s and 5c and the deployment of iOS 7.

Regarding the “Apple no longer innovates” myth, Jay Yarow tells us why Apple Can’t Just ‘Innovate’ A New Product Every Other Year. His explanation draws from a substantial New York Times Magazine article in which Fred Vogelstein describes the convergence of company-wide risk-taking and engineering feats that resulted in the iPhone:

It’s hard to overstate the gamble Jobs took when he decided to unveil the iPhone back in January 2007. Not only was he introducing a new kind of phone — something Apple had never made before — he was doing so with a prototype that barely worked. Even though the iPhone wouldn’t go on sale for another six months, he wanted the world to want one right then. In truth, the list of things that still needed to be done was enormous. 

It’s a great read. But even Vogelstein can’t resist the temptation of inserting a word of caution: “And yet Apple today is under siege…” 

This is something I heard 33 years ago when I signed up to start Apple France in 1980, and I’ve heard it constantly since then. I’ll again quote Horace Dediu, who best summarizes the concern:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

I recently experienced a small epiphany: I think the never-ending worry about Apple’s future is a good thing for the company. Look at what happened to those who were on top and became comfortable with their place under the sun: Palm, Blackberry, Nokia…

In ancient Rome, victorious generals marched in triumph to the Capitol. Lest the occasion go to the army commander’s head, a slave would march behind the victor, murmuring in his ear, memento mori, “remember you’re mortal”.

With that in mind, one can almost appreciate the doomsayers — well, some of them. They might very well save Apple from becoming inebriated with its own prestige and, instead, force the company to remember, two years later and counting, how it earned that prestige.

JLG@mondaynote.com

 

Microsoft Directors Have Much Explaining To Do

 

Blaming Steve Ballmer for Microsoft’s string of mistakes won’t do. Why did the Board of Directors keep him on the job for thirteen years, only to let him “retire” in the midst of several dangerous transitions — without naming a successor? What does this say about the Board’s qualifications to pick Microsoft’s next CEO?

For more than a decade, a team of physicians has been ministering to a patient who was once vital and robust, but now no longer thrives. Recurring diagnostic errors, stubborn inattention to symptoms, improper prescriptions haven’t yet killed the object of their care but, lately, the patient’s declining health has become so obvious that the doctors, now embarrassed and desperate, have scheduled a heart transplant.

Now comes the test: Would you entrust the patient’s future to such a confederacy of dunces?

With this metaphor in mind, let’s contemplate the record of Microsoft Directors since Steve Ballmer assumed the mantle 13 years ago, and ask if they’re qualified to appoint a successor.

Consider the Directors’ obdurate passivity while they watched the company miss opportunities, take one wrong turn after another, and fail to execute crucial transitions. Search was conceded to Google; digital music (players and distribution) is dominated by Apple; social networking belongs to Facebook, Twitter, and LinkedIn; the smartphone market is handed over to Google’s Android and Apple’s iPhone; tablets from the same duo are now bleeding the Windows + Office Golden Goose; Windows Vista and now Windows 8; Surface tablets… Even the once mighty Internet Explorer browser has been displaced by Google’s Chrome running on all desktop and mobile platforms.

Blaming (and forgiving) the CEO for one or two mistakes is reasonable. But if these missteps were entirely Ballmer’s fault, why did the Directors keep him at the helm? This raises the question: How much of the company’s value did the Directors themselves let Google, Apple, and others run away with? Is Microsoft’s Board a danger to the company?

The latter question comes into sharper relief when looking at the timing and manner of Ballmer’s exit.

[Photo: Steve Ballmer]

On July 11th, Ballmer announces a major company reorganization. More than just the usual medley of beheadings and redistribution of spoils, Microsoft was to restructure itself away from its old divisional arrangement and move towards the type of functional organization used by companies such as Apple. In addition, the new company motto became Devices and Services, evoking a virtuous circle: Best-of-class branded devices would sell more great Microsoft services, while the latter would give a boost to Microsoft devices.

A week later, on July 18th, Microsoft releases pedestrian quarterly numbers, the lowlight of which is a $900M write-off attributed to very poor sales of Surface PC/tablets.

On August 23rd, Ballmer announces his sooner-than-planned retirement — sometime in the following 12 months. No word of a successor.

And, to top everything off, on September 3rd, with Ballmer on his way out, the Board approves the emergency acquisition of Nokia’s handset business, complete with 32,000 angry Finns. (We’ll discuss their misdirected anger in a future Monday Note.)

A drastic company reorganization makes sense. Instead of one more turn of the optimizing crank, Microsoft acknowledges that it needs to Think Different.

Writing off unsold inventory is the sensible recognition of a problem; it removes an impediment by facilitating a fire sale.

There was a clear and present danger for Nokia’s handset business to fail, or to become the walking dead. Microsoft bought it to avoid the possible collapse of the Windows Phone platform. In theory (i.e., ignoring cultural realities), the acquisition gives Microsoft more control over its smartphone future.

All rational moves.

But letting Ballmer go right in the middle of two huge and complicated transitions — and without immediately appointing a successor? On its face, the timing and manner of Ballmer’s exit defies common business sense. It also raises questions about the Board’s failure to adequately plan for Ballmer’s succession. Supposedly, Succession Planning is a key component of good Corporate Governance. In plain language, a Board of Directors is obligated to identify and groom successors for key positions, starting with the CEO.

Which raises a few more questions.

Microsoft undertakes two risky, company-redefining moves: a profound structural and strategic reorganization, followed by its most foreign, most people-intensive acquisition ever. What was the overwhelming need to announce Ballmer’s departure – without naming a successor – right in the middle of such instability?

Considering its résumé, what makes Microsoft’s Board qualified to pick a new CEO?

And what are the parameters of the search for Mr. or Ms. Right? Assuming Microsoft hires an industry heavyweight, will this individual be given the space and power to be her or his own person, that is, to reshuffle the Board? And what about the freedom from deferring to the company’s Founder?

And what must the mood be like at Microsoft? “When you receive an order, do absolutely nothing and wait for the countermanding directive.” This ancient Army saying must now be popular in Redmond. It’s not that people working there don’t care, but they just don’t know what the next CEO will want, and they certainly don’t know when. How can one not expect damaging paralysis and politicking when the CEO is let go without a successor?

All interesting questions.

JLG@mondaynote.com

————————-

[I'll leave alone rumors such as Ford's CEO Alan Mulally replacing Ballmer. Notwithstanding the obligatory congratulations, there would be much giggling in Mountain View and Cupertino. Competent management is a necessary but not sufficient condition...see Ballmer.]

64 bits. It’s Nothing. You Don’t Need It. And We’ll Have It In 6 Months

 

Apple’s A7 processor, the new iOS 7 and “house” apps are all industry firsts: genuine, shipping 64-bit mobile hardware and software. As we’ve seen before with the iPhone or the iPad, this new volley of Apple products is first met with the customary bursts of premature evaluations and counterfactual dismissals.

On September 10th, Apple revealed that the new iPhone 5s would be powered by its new 64-bit A7 processor. The initial reactions were less than enthused. We were treated to exhumations of lazy bromides…

“I don’t drink Kool-Aid. Never liked the stuff and I think we owe it to ourselves to collectively question whether or not Apple’s ‘reality distortion field’ is in effect when we consider how revolutionary the iPhone 5S is and if Apple’s 64-bit A7 processor under its shiny casing will be all its [sic] cracked up to be when the device hits the market in volume.” [Forbes]

…and equally lazy “markitecture” accusations…

“With current mobile devices and mobile apps, there really is no advantage [to 64 bits] other than marketing — the ability to say you’re the first to have it…” [InfoWorld]

…and breezy brush-offs, such as this tweet from an industry expert:

“We’ll see just how good Apple’s marketing team is trying to leverage 64-bit. 64-bit add more memory and maybe registers. Period.” [Twitter]

Rather than wonder what these commenters were drinking, let’s turn to AnandTech, widely regarded as one of the best online hardware magazines.

Founded by Anand Lal Shimpi when he was all of 14 years old, AnandTech is known for its exhaustive (and sometimes exhausting) product reviews. The 14-section September 17th iPhone 5S review doesn’t disappoint. Among other things, it provides detailed iPhone 5S vs. iPhone 5 performance comparisons such as this:

[Chart: iPhone 5S vs. iPhone 5 Geekbench results (AnandTech)]

There are many other charts, comparisons, and considerations of the new 64-bit ARMv8 instruction set, the move from 16 to 32 floating-point NEON 128-bit registers, the hardware acceleration of cryptography operations… It’s a very long read, but not a boring one (at least not for interested geeks).

The bottom line is plain: The A7 processor is a substantial improvement that’s well supported by the 64-bit iOS7. (And I’d like to meet the author and bow to his encyclopedic knowledge.)

Was it because of AnandTech’s cool analysis that the doubters have changed their tune?

“As I predicted, Apple A7 benchmarks well due to CPU arch (for IPC), new GPU, ARM v8.”

Now that the A7 had become a Benchmarking Beast, the author of the previous week’s brush-off tweet (“more memory and maybe registers. Period”) has revised his position [emphasis mine]:

“The improvements Apple made with the A7 are truly incredible, and they really went against the grain in their choices. With an industry obsessed with more cores, they went with fewer, larger and efficient cores. With people expecting v8 and 64-bit ARM in late 2014, Apple brings it out in 2013 with full Xcode support and many performance optimizations.” [...] “Apple has done it again, but this time in unexpected fashion.”

That all-purpose defense, unexpected, provides a key to the wrong-footing of many “experts”.

When Apple entered the microprocessor field a mere five years ago with its acquisition of Palo Alto Semiconductor, the move was panned: Apple had no future competing with established industry leaders such as Intel, Qualcomm, Nvidia, and Samsung.

But with the successive, increasing refinement of the A4, A5, and A6, the designs were ultimately viewed as good, very good, roughly on par with the rest of the industry. What these processors lacked in raw power was more than made up for by the way they were integrated into Apple’s notion of a purposeful, usable mobile device: Enhanced UI responsiveness, reduced power consumption, obeisance to the unique requirements of media and communications.

The expectation was that Apple would either fail, or produce a “competent” (meaning not particularly interesting) iteration of previous A4-5-6 designs. No one expected that the processor would actually work, with all in-house apps running in 64-bit mode from day one.

But let’s back up and rewrite a bit of history, ourselves:

On September 10th, Samsung announced its flagship 64-bit Exynos processor, supported by Android 5.0, the 64-bit version of Google’s market-leading mobile OS. The new Galaxy S64 smartphone, which will ship on September 20th, features both 64-bit hardware and software components. Samsung and Google receive high praise:

“Supercomputer-class processor… Industry-leading performance… Tightly integrated 64-bit software and hardware open a new era of super high-performance applications previously impossible on mobile devices…”

And Apple gets its just deserts:

“Once again, Apple gets out-innovated…This confirms the trend we’ve seen since Tim Cook took over… iPhones have become second-class devices… The beginning of a long decline…”

Apple can be thankful this is fantasy: The real world would never treat it like this (right?).

My fantasy isn’t without basis: Within 24 hours of Apple’s September announcement, Samsung’s mobile business chief Shin Jong-kyun said his company will have its own 64-bit Exynos processor:

“Not in the shortest time. But yes, our next smartphones will have 64-bit processing functionality…” [The Korea Times]

As for Android support, no problem: 64-bit versions of the underlying Linux kernel already exist. Of course, the system software layer that resides on top of the Linux kernel — the layer that is Android — will also need to be converted to take advantage of the 64-bit processor, as will the Software Development Kit (SDK) that third-party developers use to create apps. It’s a sizable challenge, but one that’s well within the Android team’s skills and resources; the process has certainly been under way for a while already.

The real trouble starts outside of Google. Which 64-bit processor? Intel’s (the company says it will add 64-bit “capabilities” to Android)? Samsung’s? Qualcomm’s?

Who writes and supports device drivers for custom SoC modules? This sounds a lot like Windows device driver complications, but the complexity is multiplied by Google’s significantly weaker control over hardware variants.

Apple’s inherent control over all of the components in its platform will pay dividends in the speed and quality of the transition. There will be glitches — there will always be new, factory-fresh bugs — but the new 64-bit hardware is designed to run existing 32-bit apps, and it seems to actually do so in practice.

Now let’s go beyond the iPhone 5S. In his September 10th presentation, Phil Schiller, Apple’s Marketing Supremo, called the A7’s performance “desktop class”. These words were carefully calibrated, rehearsed, and approved. This isn’t a “Can’t innovate anymore? My ass” saeta, blurted while seized by religious fervor at last Spring’s Apple Developers Conference.

Does “desktop class” imply that Apple could use future versions of its 64-bit processor to replace Intel chips in its Mac devices?

In the AnandTech post quoted above, several benchmarks compare Apple’s A7 to a new x86 chip, Intel’s Bay Trail, with interesting results:

[Chart: Apple A7 vs. Intel Bay Trail benchmark comparison (AnandTech)]

So, yes, in theory, a future Apple 64-bit processor could be fast enough to power a Mac.

But let’s consider a 3GHz iMac running a high-end media creation application such as Photoshop or an Autodesk tool. The processor doesn’t want to be constrained by power consumption requirements; it’s optimized for performance (and this even ignores the upcoming Mac Pro and its thermal management prowess).

Can we see a split in the Mac product line? The lower, more mobile end would use Apple’s processors, while the high-end, no-holds-barred, always-plugged-to-the-wall desktop devices would still use x86 chips. With two code bases to maintain and OS X applications to port? Probably not.

Apple could continue to cannibalize its (and others’) PC business by producing “desktop-class” tablets. Such speculation throws us back to a well-known problem: How do you compose a complex document without a windowing system and a mouse or trackpad pointer?

We’ve seen the trouble with Microsoft’s hybrid PC/tablet, its dual Windows 8 UI which is considered to be “confusing and difficult to learn (especially when used with a keyboard and mouse instead of a touchscreen).”

The best suggestion I’ve seen so far comes from “a veteran design and management surgeon” who calls himself Kontra and proposes An interim solution for iOS “multitasking” based on a multi-slot clipboard.

If Apple provides a real way to compose complex documents on a future iPad, a solution that normal humans will embrace, then it will capture desktop-class uses and users.

Until such time, Macs and iPads are likely to keep using different processors and different interaction models.

JLG@mondaynote.com

 

Apple Market Share: Facts and Psychology

 

Remember netbooks? When Apple was too greedy and stupid to make a truly low-cost Macintosh? Here we go again, Apple refuses to make a genuinely affordable iPhone. There will be consequences — similar to what happened when the Mac refused to join netbooks circling the drain. 

My first moments with the iPad back in April 2010 were mistaken attempts to use it as a Mac. Last year, it took a long overdue upgrade to my eyeglasses before I warmed to the nimbler iPad mini, never to go back to its older sibling.

With that in mind, I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions.

While I wait to put my mitts on the new device, I’ll address the conventional hand-wringing over the 5C’s $549 price tag (“It’s Too Damned High!” cry the masses).

[Image: iPhone 5C]

Henry Blodget, who pronounced the iPhone Dead In The Water in April 2011, is back sounding the alarm: Apple Is Being Shortsighted — And This Could Clobber The Company. His argument, which is echoed by a number of pundits and analysts, boils down to a deceptively simple equation:

Network Effect + Commoditization = Failure

The Network Effect posits that the power of a platform grows much faster than its user count (Metcalfe’s law puts a network’s value at roughly the square of the number of users). Android, with 80% of the smartphone market, will (clearly) crush iOS by sucking all resources into its gravitational well.
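
To see why the nay-sayers find that equation so compelling, here is a toy sketch (my arithmetic, not Blodget’s) of what an 80/20 split implies if platform value grows as the square of the user base:

```python
# Toy illustration of the Network Effect argument above, assuming
# Metcalfe-style value (proportional to the square of the user count).
# The share figures come from the article; the rest is illustrative.
android_share, ios_share = 0.80, 0.20

user_ratio = android_share / ios_share            # Android has 4x the users...
value_ratio = android_share**2 / ios_share**2     # ...but 16x the "platform value"

print(f"user ratio:  {user_ratio:.0f}x")   # 4x
print(f"value ratio: {value_ratio:.0f}x")  # 16x
```

Four times the users becomes sixteen times the presumed gravitational pull. Whether platform value really behaves this way is precisely what the Mac’s history, discussed below, calls into question.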

Commoditization means that given an army of active, resourceful, thriving competitors, all smartphones will ultimately look and feel the same. Apple will quickly lose any qualitative advantage it now enjoys, and by having to compete on price it could easily fall behind.

Hence the preordained failure.

As a proof-of-concept, the nay-sayers point to the personal computer battle back in the pre-mobile dark ages: Didn’t we see the same thing when the PC crushed the Mac? Microsoft owned the personal computer market; PC commoditization drove prices into the bargain basement…

Interpret history how you will, the facts show something different. Yes, the Redmond Death Star claimed 90% of the PC market, but it failed to capture all the resources in the ecosystem. There was more than enough room for the Mac to survive despite its small market share.

And, certainly, commoditization has been a great equalizer and price suppressant — within the PC clone market. Microsoft kept most of the money with the de facto monopoly enjoyed by its Windows + Office combo, while it let hardware manufacturers race to the bottom (netbooks come to mind). Last quarter, this left HP, the (still) largest PC maker, with a measly 3% operating profit for its Personal Systems Group. By contrast, Apple’s share of the PC market may only be 10% or less, but the Mac owns 90% of the $1000+ segment in the US and enjoys a 25% to 35% margin.

After surviving a difficult birth, a ruthlessly enforced Windows + Office platform, and competition from PC makers large and small, the Mac has ended up with a viable, profitable business. Why not look at iDevices in the same light and see a small but profitable market share in its future?

Or, better yet, why not look at more than one historical model for comparison? For example, how is it that BMW has remained so popular and profitable with its One Sausage, Three Lengths product line strategy? Aren’t all cars made of steel, aluminium (for Sir Jony), plastic, glass, and rubber? When the Bavarian company remade the Mini, were they simply in a race to the bottom with Tata’s Nano, or were they confidently addressing the logical and emotional needs of a more affluent — and lasting — clientèle?

Back to the colorful but “expensive” 5C, Philip Elmer-DeWitt puts its price into perspective: For most iPhone owners, trading up to the 5C is “free” due to Apple’s Reuse and Recycle program. We’ll have to see if The Mere Matter of Implementation supports the theory, and where these recycled iPhones end up. If the numbers work, these reborn iPhones could help Apple gain a modest foothold in currently underserved price segments.

Still thinking about prices, I just took a look at the T-Mobile site where, surprise, the 5C is “free”, that is, no money down and 24 months at $22 — plus a $10 “SIM Kit” (read the small print). You can guess what AT&T offers: 24 months at $22/month (again, whip out your reading glasses). Verizon is more opaque, with a terrible website. Sprint also offers a no-money-down iPhone 5C, although with more expensive voice/data plans.
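
A quick back-of-the-envelope check (a sketch built only from the figures above) shows what “free” actually amounts to:

```python
# Back-of-the-envelope arithmetic for the "free" iPhone 5C offers above:
# no money down, but the monthly installments roughly add up to the
# phone's $549 unsubsidized price tag.
monthly, months, sim_kit = 22, 24, 10   # T-Mobile's posted figures

total = monthly * months + sim_kit
print(f"total paid over 24 months: ${total}")   # $538, vs. $549 up front
```

In other words, “free” is an installment plan that quietly recoups the unsubsidized price, not a gift.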

This is an interesting development: Less than a week ago, Apple introduced the iPhone 5C with a “posted price” of $99 — “free” a few days later.

After much complaining to the media about “excessive” iPhone subsidies, carriers now appear to agree with Horace Dediu who sees the iPhone as a great “salesman” for carriers, because it generates higher revenue per user (ARPU). As a result, the cell philanthropists offer lower prices to attract and keep users — and pay Apple more for the iPhone sales engine.

Of course, none of this will dispel the anticipation of the Cupertino company’s death. We could simply dismiss the Apple doomsayers as our industry’s nattering nabobs of negativism, but let’s take a closer look at what lurks under the surface. Put another way, what are the emotions that cause people to reason against established facts, to feel that the small market share that allowed the Mac to prosper at the higher end will inevitably spell failure for iDevices?

I had a distinct recollection that Asymco’s Horace Dediu had offered a sharp insight into the Apple-is-doomed mantra. Three searches later, first into my Evernote catchall, then to Google, then to The Guardian, I found a Juliette Garside article where Horace crisply states the problem [the passage quoted here is from a longer version that's no longer publicly available; emphasis and elision mine]:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

This feels right, a legitimate analysis of the analysts’ fearmongering: Some folks can’t get past the “fact” that Apple needs hit products to survive because — unlike Amazon, as an example — it doesn’t own a lasting franchise.

In the meantime, we can expect to see more hoses attached to Apple’s money pump.

Next week, I plan to look at iOS and 64-bit processing.

JLG@mondaynote.com

Apple’s Wearables Future

 

Wearable technologies have a huge future. For Apple, they’ll create a new product category with an iPhone-like revenue stream! Not so fast: Smartwatches and other wearable consumer products lack key attributes for breaking out of the novelty prison.

“I Think the Wrist Is Interesting.” Thus spake Tim Cook on the opening night of last May’s D11 conference.

When pressed to discuss his company’s position on wearable technologies, Cook was unusually forthcoming: Instead of pleading Apple’s Fifth, Cook launched into a substantial discussion of opportunities for his company to enter the field, calling wearables “a very key branch of the tree”.

But when asked about the heavily publicized Google Glass he parried the question by suggesting that people who don’t otherwise wear glasses might be reluctant to don such an accoutrement.

I don’t find Tim Cook’s dismissal of eyewear very insightful: Just go to a shopping center and count the eyewear stores. Many belong to the same rich Italian conglomerate, Luxottica, a company with about ten house brands such as Oakley, Persol, and Ray-Ban, and a supplier to more than twenty designer labels ranging from Armani to Versace. (As the perturbing 60 Minutes exposé on Luxottica pointed out, the company nicely rounds out its vertical dominance of the sector through its ownership of EyeMed, a vision insurance business.)

Eyewear, necessary or not, is a pervasive, fashionable, rich product category, a fact that hasn’t escaped Google’s eye for numbers. The company is making an effort to transmute their geeky spectacles into fashion accessories. Courtesy of Counternotions I offer this picture of Sergey Brin and fashionista Diane von Furstenberg proudly donning the futuristic eyewear at the NY Fashion Week:

[Photo: Sergey Brin and Diane von Furstenberg wearing Google Glass at NY Fashion Week]

On a grander scale, we have a Vogue article, Google Glass and a Futuristic Vision of Fashion:

[Image: Google Glass in Vogue]

The company’s efforts to make Google Glass fashionable might be panned today for pushing the envelope a little too far but, in a not-too-distant future, they stand a chance of being viewed as truly visionary.

If eyewear doesn’t excite Tim Cook, what does? To him, the wrist feels more natural, more socially acceptable. We all wear one or more objects around our wrist(s).

The wristwear genre isn’t new (recall Microsoft’s 2004 SPOT). Ask Google to show you pictures of smartwatches and you get 23M results, screen after screen like this one:

[Image: a page of Google image-search results for smartwatches]

The genre seems to be stuck in the novelty state. Newer entries such as Samsung’s Gear have gotten mixed reviews. Others contend a 2010 iPod nano with a wristband makes a much nicer smartwatch.

Regardless, by comparison, pre-iPod MP3 players and pre-iPhone smartphones were getting better press – and more customers. Considering the putative iWatch, the excitement about Apple getting into this class of devices appears to be excessive.

The litmus test for the potential of a device is the combination of pervasiveness and frequency of use. Smartphones are a good example: they’re always with us, and we look at their screens often (too often, say critics who pretend to ignore the relationship between human nature and the Off button).

The iWatch concept makes two assumptions: a) we’ll wear one and, b) we’ll only wear that one.

Checking around, we see young adults who no longer wear watches (they have a smartphone) and middle-agers who use watches as jewelry, owning more than one. This defeats both the pervasiveness and frequency-of-use requirements.

Then there’s the biometry question: How much useful information can a wearable device extract from its wearer?

To get a better idea of what’s actually available (as opposed to fantasized), I bought a Jawbone UP wristband a little over a month ago. With its accelerometers and embedded microprocessors, UP purports to tell you how many steps you took and how long you’ve been inactive during the day; it logs your stretches of light and deep sleep, and even “makes it fun and easy to keep track of what you eat”. Once or twice a day, you plug it into your smartphone and it syncs with an app that displays your activity in graphic form and tells you how well you’re doing versus various goals and averages. It also suggests that you log your mood in order to “discover connections that affect how you feel.”
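
As an aside, the technology underneath is more modest than the marketing suggests. Step counting from a wrist accelerometer boils down to elementary signal processing: compute the magnitude of acceleration and count the peaks. Here is a deliberately naive Python sketch of the general idea (my assumption of how such devices work in broad strokes, not Jawbone’s actual algorithm; the sampling rate and thresholds are invented):

```python
# A naive step counter (illustrative sketch only, not Jawbone's algorithm).
# `samples` is a list of (x, y, z) accelerometer readings, in g, taken at
# a fixed rate; a "step" is a spike in total acceleration magnitude.
from math import sqrt

def count_steps(samples, rate_hz=50, threshold=1.2, refractory_s=0.3):
    min_gap = int(refractory_s * rate_hz)  # samples to wait between steps
    steps, last_step = 0, -min_gap
    for i, (x, y, z) in enumerate(samples):
        magnitude = sqrt(x * x + y * y + z * z)  # total acceleration, in g
        if magnitude > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

# One second at rest (gravity only), one simulated jolt, one second at rest:
data = [(0.0, 0.0, 1.0)] * 50 + [(0.3, 0.2, 1.5)] + [(0.0, 0.0, 1.0)] * 50
print(count_steps(data))  # -> 1
```

Note that a simple magnitude threshold can’t tell a brisk walk from animated hand-waving, a limitation I would soon bump into.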

At first, I found the device physically grating. I couldn’t accept it the way I’m oblivious to my watch, and I even found it on the floor next to my bed a couple of mornings. But I stuck with it. The battery life is as promised (10 days) and I’ve experienced none of the first version’s troubles. I traveled, hiked, and showered with it without a hitch, other than the cap covering the connecting pin getting a bit out of alignment.

Will I keep using it? Probably not.

Beyond the physical discomfort, I haven’t found the device to be very useful, or even accurate. It’s not that difficult to acquire a useful approximation of hours slept and distance walked during the day — you don’t need a device for these things.

As for accuracy, the other day it declared that I had exhibited a substantial level of physical activity… while I was having breakfast. (I may be French, but I no longer move my hands all that much as I speak.)

The app’s suggestion that I log my food consumption falls into the magical thinking domain of dieting. A Monday morning step on a scale tells us what we know already: Moderation is hard, mysterious, out of the reach of gadgets and incantations.

For a product to start a worthy new species for a company as large as Apple, the currency unit to consider is $10B. Below that level, it’s either an accessory or exists as a member of the ecosystem’s supporting cast. The Airport devices are neat accessories; the more visible Apple TV supports the big money makers — Macs, iPads and iPhones — by enhancing their everyday use.

With this in mind, will “wearables” move the needle, will they cross the $10B revenue line in their second or third year, or does their nature direct them to the supporting cast or accessory bins?

Two elements appear to be missing for wearable technologies to have the economic impact that companies such as Apple would enjoy:

  • The device needs to be easily, naturally worn all the time, even more permanently than the watch we tend to take off at night.
  • It needs to capture more information than devices such as the Jawbone do.

A smartwatch that’s wirelessly linked to my smartphone and shows a subset of the screen in my pocket…I’m not sure this will break out of the novelty category where the devices have been confined thus far.

Going back to Tim Cook’s oracular pronouncement on wearables being “a very key branch of the tree”, I wonder: Was he having fun misdirecting his competition?

JLG@mondaynote.com

—————————————–

PS: After two July Monday Notes on the company, I’ll wait for the Microsoft centipede to drop one or two more shoes before I write about the Why, When, How and Now What of Ballmer’s latest unnatural acts. There is an Analyst Day coming September 19th — and the press has been disinvited.

PPS: In coming days, to keep your sanity when trying to drink from the Apple kommentariat fire hydrant, you can safely direct your steps to three sites/blogs:

  • Apple 2.0, where Philip Elmer-DeWitt provides rational news and commentary, skewers idiots, and links to other valuable fodder.
  • Asymco, where Horace Dediu provides the absolute best numbers, graphs and insights into the greatest upheaval the tech industry has ever seen. Comments following his articles are lively but thoughtful and civilized.
  • Apple Insider. You might want to focus on learned, detailed editorials by Daniel Eran Dilger such as this one where he discusses Microsoft and Google (partially) shifting to an Apple-like business model. Daniel can be opinionated, animated even, but his articles come with tons of well-organized data.

Culture War: Jeff Bezos and The Washington Post


by Jean-Louis Gassée

After predicting the death of newspapers (that was last year), Jeff Bezos, the Amazon founder, now buys himself The Washington Post. Necrophilia, or the beginning of another spectacular transformation of an old genre?

A successful businessman reaches the dangerous age of 50, looks at his fortune, and makes a decision: He’s going to plough a few of his millions into a restaurant. In the past 25 years, he’s been to many of the best dining places around the world. Power lunches, closing dinners, gastronomy road trips with the family, he’s done it all.

He knows restaurants.

But he keeps failing. He fires the chef, changes suppliers, hires a new dining room manager, looks for a classier sommelier, fights city inspectors, calls on his acquaintances and asks them to bring their celebrity friends… nothing works.

He was blinded by his command of his true calling: being a customer. He saw the show from a comfortable box seat and only went backstage when invited by a knowing proprietor eager to glad-hand a moneyed patron. Our gastronome failed to see he knew very little about being a restaurateur, the intricacies, the people challenges (theft, drugs and sex), the politics that are involved in running a real restaurant.

(During my psychosocial moratorium, before I joined the high-tech industry in 1968, I worked in a bar, a food-serving strip-joint, and a restaurant. I thought these places were deranged. Decades later, I read Anthony Bourdain’s Kitchen Confidential and realized the “people challenges” I witnessed aren’t so unusual after all. Enjoy the book and think about the goings-on back there next time a maitre d’ looks down his aquiline nose at you.)

Failed restaurants are common in Silicon Valley, with its crowd of affluent and well-traveled business people who think they can master the trade. A few of them subsidize the great dinners we get to enjoy — for a while. They have our fleeting gratitude and end up with a painfully depleted bank account.

Is this a valid parable for Jeff Bezos plowing $250M (so far) into The Washington Post? To start, the price paid for the DC “paper of record” amounts to less than 1% of the Amazon founder’s fortune. Even if he has to double or triple his initial investment while he turns the paper around, it won’t trouble Bezos’ pocketbook much — he can eminently afford the bet.

And, unlike our failed restaurateur, I don’t think Bezos’ purchase was made in a mid-life fit of vanity. (Although see this delicious piece of Internet satire that contends he bought the paper as a result of a mistaken click.) Read Bezos’ Wikipedia bio, or his letters to shareholders… you’ll see he’s a deep-thinking geek (now a term of respect. The Urban Dictionary updates the meaning: people you pick on in high school and wind up working for as an adult). He’s justifiably famous for taking the very long view, and he’s quotably willing to be “misunderstood” for a long time.

But can he win?

Personally, I hope so. I used to love newspapers, I remember how much I enjoyed breakfast with two local and two national papers, all delivered to my doorstep, an unimaginable luxury in France.

Once upon a time, newspapers enjoyed an advertising oligopoly: with only three or four dailies in each market, ad prices held firm. And we, the readers, certainly didn’t mind that advertisers paid 75% of the cost of our daily fix.

Then the Internet that Bezos has ridden so well intervened, and newspapers lost the news race. The Internet won on velocity and, too often, on relevance. In a Fortune Tech piece offering “5 hacks for Jeff Bezos“, Ryan Holmes, CEO of the social media management company HootSuite, points to the speed and tone of social media as sources of fixes for the Post:

Perhaps the greatest criticism of newspapers today is that they have lost relevance to their own readers. Writing on the decline of the Post, New York Times media columnist David Carr points out that “[the] days when people snapped open the daily paper to find out the things they should care about were long past …” Big newspapers, in particular, have proven startlingly inept at delivering timely, relevant news to the people they serve. So, naturally, readers have gone elsewhere, to myriad online sources that better cater to their interests.

Since the Net offered a seemingly unconstrained amount of billboard space, the price that newspapers could charge for ads was quickly cut by a factor of ten and, more recently, sixteen.

But it wasn’t just the emergence of the Internet as a news medium that dealt newspapers a near fatal blow. They also lost the race because of internal, cultural circumstances.

In another case of the Incumbent’s Curse, newspapers looked down on the Internet and those annoying high-tech people and things. Kara Swisher, co-head of AllThingsD (a Wall Street Journal enterprise), recounts her trouble with the old, arrogant culture at the Post in her letter, Dear Jeff Bezos, Here’s What I Saw as an Analog Nobody in the Mailroom of the Washington Post:

“It happened every day — other reporters playfully mocking me for using email so much or for borrowing the Post’s few suitcase cellphones, or major editors telling me that the Internet was like the CB-radio fad, or sales people insisting that the good times would never end for newspapers as long as there were local businesses that needed to reach consumers. (In truth, they still do, but that’s another letter.)”

Sadly, the Post’s cultural reluctance isn’t unique. In another country, two prominent dailies I know exhibit very similar symptoms: print journalists who actively despise, or even obstruct, the Internet side of their house.

Much has been written about Jeff Bezos’ personal (not Amazon’s) purchase of the Post. For example: Good Luck With That – Pew Research Graphs Bezos’ Stunning Challenge, where Tom Foremski steps us through the Post’s business challenges, starting with the inexorable decline in Print revenues:

[Chart: the inexorable decline of The Washington Post’s print revenue (Pew Research)]

Another comment well worth reading, Stop the Presses: A New Media Baron Appears, comes to us courtesy of Michael Moritz, a.k.a. Sir Michael, a journalist who went over to the Dark Side and is now Chairman of Sequoia Capital, a leading venture firm. The article reminds us of Bezos’ foremost preoccupation with customers [emphasis mine]:

“It won’t come as a surprise that Bezos explains that pleasing, if not thrilling, customers is Amazon’s most important task. In his 2009 letter he provided a peek into the internals of Amazon explaining that of the company’s 452 detailed goals for the ensuing year 360 had an impact on the customer, the word ‘revenue’ was used just eight times, ‘free cash flow’ only four times and ‘net income’, ‘gross profit’, ‘margin’ and operating profit were not mentioned. Even though there is no line item on any financial statement for the intangible value associated with the trust of customers this is, by far and away, Amazon’s most important asset.

Elsewhere, Moritz reminds us of another source of Amazon’s prosperity, Free Cash-Flow, a frequent topic in Bezos’ letters to shareholders:

“Since inception Amazon has generated $20.2 billion from operations almost half of which ($8.6 B), has been used for capital expenditures such as new distribution centers, which improve life for the customer.”

With this and more in mind, we now turn to the letter Bezos wrote to employees at the newspaper. While he professes no desire to “be leading The Washington Post day-to-day”, he nonetheless makes no mystery of his goal to be an agent of change, of modernization, of adapting to the Internet Age:

“There will, of course, be change at The Post over the coming years. That’s essential and would have happened with or without new ownership. The Internet is transforming almost every element of the news business: shortening news cycles, eroding long-reliable revenue sources, and enabling new kinds of competition, some of which bear little or no news-gathering costs. There is no map, and charting a path ahead will not be easy. We will need to invent, which means we will need to experiment. Our touchstone will be readers, understanding what they care about – government, local leaders, restaurant openings, scout troops, businesses, charities, governors, sports – and working backwards from there. I’m excited and optimistic about the opportunity for invention.”

This comes from a man who, last year, said ‘People Won’t Pay For News On The Web, Print Will Be Dead In 20 Years‘.

Changing business models as a publicly traded company is, in practice, impossible: The old model dies faster than the new one kicks in, and Wall Street runs away from the transition’s “earnings trough”. By buying The Washington Post, Bezos is afforded a privacy that the old public ownership structure doesn’t permit. (That’s exactly why Michael Dell wants to take his own company private: so he can perform surgery behind the curtains.)

Which leaves the new owner with his biggest challenge: Understanding and changing the culture at the old “paper” — which sounds harder and more expensive than a gastronome trying to become a restaurateur.

There will be blood.

This is no reflection on Bezos’ truly amazing diversity and depth of skills, but a sincere concern born of Culture’s ability to devour anything that stands in its way, sometimes silently, until it’s too late. As the saying goes, Culture Eats Strategy for Breakfast.

Of course, we have examples of people performing seemingly impossible feats. Steve Jobs’ Apple 2.0 comes to mind, a turnaround of monumental proportions to which Bezos’ Amazon achievements could be fairly compared. So, why couldn’t Bezos build a WaPo 2.0?

As Aaron Levie, the founding CEO of Box, tweeted last week:

“Industries are transformed by outsiders who think anything is possible, not insiders who think they already know what is impossible.”

One more thing, a thought I can’t suppress: Unlike Steve Jobs, who gained insight from his tribulations and then spread the benefits on the largest of scales, Bezos hasn’t been burned and tempered by failure.

JLG@mondaynote.com


Surveillance: The Enemy of Innovation


by Jean-Louis Gassée

When we think of government surveillance, we worry about our liberties, about losing a private space where no one knows what we do, say, think. But there is more. Total Surveillance is the enemy of innovation, of anything that threatens public or private incumbents.

No apocryphal levity this week. Instead, a somber look into an almost-present future. For once, Tim Cook isn’t holding his cards close to his chest; he makes no secret of Apple’s interest in wearable technologies. Among the avenues for notable growth (in multiples of $10B), I think wearable devices are a better fit for Apple than the likable but still-a-hobby TV, and certainly better than the cloudy automotive domain where Google Maps could prove a hard obstacle.

Apple isn’t alone, every tech company seems to be developing smart watches, smart glasses, and other health and life-style monitoring devices. (Well, almost every tech company…we haven’t heard from Michael Dell, but perhaps he’s too busy keeping his almost-private company out of Carl Icahn’s clutches.)

To gather more facts for a future Monday Note on wearable devices, I took my usual Play Customer route and followed the example of friends who sport activity-monitoring bracelets such as the popular Jawbone UP wristband (see Frédéric’s experience in a recent MN). I look up on-line reviews and find more than a few negative comments, but I choose to ignore them and listen, instead, to users who say the bugs have been squashed.

At the nearby Palo Alto Apple Store, a sales gent performs the fitting and the cashectomy with equal competence. Five minutes later, I download the required smartphone app, read a few instructions, and complete a first sync. I’m ready to monitor both my daytime activities and nighttime sleep patterns.

That was three days ago. It’s too early to say much about the product experience, which has been uneventful so far, but a dark, nagging thought comes to fill the void. Here is yet another part of my life that’s monitored, logged, accessible. The somber ruminations of a recent Privacy: You Have Nothing To Fear Monday Note are rekindled. At the time, I wondered if perhaps I was being paranoid. That was before the flow of Edward Snowden’s revelations to The Guardian’s Glenn Greenwald.

This is what we think we know so far: The State, whatever that means these days, monitors and records everything everywhere. We’re assured that this is done with good intentions and with our best interests in mind: Restless vigilance is needed in the war on terror, drug trafficking, money-laundering. Laws that get in the way — such as the one that, on the surface, forbids the US to spy on its own citizens — are bent in ingenious ways, such as outsourcing the surveillance to a friendly or needy ally.

If this sounds outlandish, see The Guardian’s revelations about XKeyscore, the NSA tool that collects “nearly everything a user does on the internet”. Or read about the relationship between the NSA and the UK’s GCHQ:

…the Guardian has discovered GCHQ receives tens of millions of pounds from the NSA every year…In turn, the US expects a service, and, potentially, access to a range of programmes, such as Tempora [GCHQ's data storage system].

Those campaigners and academics who fear the agencies are too close, and suspect they do each other’s “dirty work”, will probably be alarmed by the explicit nature of the quid-pro-quo arrangements.

Every day there’s another story. Today, the WSJ tells us that the FBI has mastered the hacking tools required to remotely turn on microphones and cameras on smartphones and laptops:

Earlier this year, a federal warrant application in a Texas identity-theft case sought to use software to extract files and covertly take photos using a computer’s camera, according to court documents. 

The surveillance and snooping isn’t just about computers. We have license plate readers and federally mandated black boxes in cars. And now we hear about yet another form of metadata collection: It seems that the US Post Office scans every envelope it processes. Not e-mail, “sneaker mail”. Reading someone else’s mail is, of course, a federal offense. No problem, we’ll just scan the envelopes so we know who’s writing to whom, when, how often…
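
Why does “just the envelope” matter? Because metadata aggregates, almost effortlessly, into a social graph. Here is a hypothetical sketch (all names and records invented for illustration) of how little code it takes to turn envelope scans into a who-writes-to-whom map:

```python
# Hypothetical illustration: a who-writes-to-whom graph built from envelope
# scans (sender, recipient, date). No letter is ever opened; all records
# below are invented.
from collections import Counter

envelopes = [
    ("alice", "bob",   "2013-07-01"),
    ("alice", "bob",   "2013-07-08"),
    ("bob",   "carol", "2013-07-09"),
    ("alice", "carol", "2013-07-15"),
]

# Count how often each pair corresponds; each pair is an edge in the graph.
edges = Counter((sender, recipient) for sender, recipient, _ in envelopes)

for (sender, recipient), count in edges.most_common():
    print(f"{sender} -> {recipient}: {count} letters")
# alice -> bob: 2
# bob -> carol: 1
# alice -> carol: 1
```

Multiply this by every envelope, phone record, and email header, and the “we don’t read the content” reassurance loses much of its comfort.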

To this litany we must add private companies that record everything we do. Not just our posts, emails, and purchases, but the websites we visit, the buttons we click, even the way we move the mouse… everything is recorded in a log file, and it’s made available to the “authorities” as well as to buyers and sellers of profiling information. It’s all part of the Grand Bargain known as If the Product Is Free Then You Are the Product Being Sold.

When asked why Google doesn’t encrypt the user data that it stores, Vinton Cerf, the revered Internet Pioneer turned Google’s PR person, sorry, VP and Chief Evangelist, serenely admits that doing so would conflict with Google’s business model and disrupt user features.

At public events, Vint Cerf, a Google employee who was an early architect of the Internet, has said that encrypting information while it is stored would prevent Google from showing the right online advertisements to users.

I’m not singling out Google: Facebook and many others would have to make the same statement.

We’re now closer to trouble with innovation. In an almost-present future, we’ll have zero privacy. Many will know what we do, what we say, where we are, at all times. This will cast a Stasi shadow over our lives, our minds, our emotions. (See The Lives Of Others, Florian Henckel von Donnersmarck’s dramatization of state-sponsored surveillance in East Berlin.)

Let’s not dwell on the discredited You Have Nothing To Fear retort and turn to what happens to All Things New under a total surveillance regime.

Personal freedoms, civil rights, new ways of doing, thinking, speaking, dressing or undressing, science and philosophy, religion, fashion or cooking or smoking… Anything really new breaks existing canons, the rules, laws, habits, and understandings of the established order.

Total surveillance protects everything, starting with the status quo. With everything open to scrutiny by our Benevolent Guardians, there’s no safe place to discuss ideas that may seem disturbing at first, but that, given time and privacy, can evolve into new standards, behaviors, and technologies.

Anything that sticks out gets pounded.

Take the past 100 years. Behold all the disruptive liberties and inventions that upended public and private incumbents. Now, imagine how many would have been killed in the womb under a total State and private surveillance blanket.

But, you’ll say, we have a democratic system; if we don’t want our privacy invaded, surely we can voice our objection through our votes. After all, we elect and fire our representatives, the ones who make the laws and who hire and fire government executives for us.

Not really, or not anymore.

Two thousand years ago, Juvenal condemned Roman politicians who tried to buy votes with food and entertainment. It was a panem et circenses culture in which society “restrains itself and anxiously hopes for just two things: bread and circuses.”

The politicking in our current demagocracy is just as unsavory. To get elected, one must promise to provide more with fewer taxes — or whatever bread and circuses the latest Big Data says we crave. Instead of shedding light, the campaigning makes for sick entertainment.

Once in office, our solons need money to be reelected so they promptly sell “our” votes to lobbyists who are eager to finance the reelection campaigns. Even worse, these same lobbyists provide the platoons of lawyers and consultants who inject the “appropriate” loopholes into inscrutable laws.

All of this makes (most) business feel like an oasis of sense and good will. Many otherwise capable people turn up their noses at the cesspool of politics and stick to their cleaner fun.

Is the situation hopeless?

I pray not. But I can’t help but see our laws — the tax code is the prime example — as old operating systems that are patched together, that have accumulated layer upon layer of silt. No one can comprehend these rules anymore, they’re too big and complicated to fit in one’s head…they’re seemingly unfixable.

This could leave us pining for a messiah, an Arthurian pure heart who pulls the sword from the stone and leads a revolution. We know what happens next in this narrative: the Okhrana becomes the NKVD.

Or perhaps technology itself will come to the rescue. Just as terrorism is viewed as an asymmetric threat in which a small, agile, and stealthy enemy can inflict damage on a giant, perhaps technology will provide us with an asymmetric advantage against surveillance and recreate a modicum of private space for us.

What I don’t see is The State simply renouncing its surveillance: it’s so convenient. Nor do I see us paying for truly anonymous Gmail, Google Maps, or Facebook.

JLG@mondaynote.com