
The Apple Tesla Connection: Fun and Reason With Numbers


Apple acquiring Tesla would make for juicy headlines but would also be very dangerous. There are more sensible ways for the two companies to make money together.

Apple has never suffered from a lack of advice. For as long as I can remember — 33 years in my case — words of wisdom have rained down upon the company, and yet it stubbornly insists on following its own compass instead of treading a suggested path.

(Actually, that’s not entirely true. A rudderless, mid-nineties Apple yielded to the pressure of pundit opinion and licensed its Mac OS to Power Computing and Motorola… and promptly lost its profits to Mac clones. When Steve Jobs returned in 1997, he immediately canceled the licenses. This “harsh” decision was met with a howl of protest, but it stanched the bleeding and made room for the other maneuvers that saved the company.)

Now that Jobs is no longer with us and Apple’s growth has slowed, the advice rain is more intense than ever… and the pageview chasers are begging for traffic. Suggestions range from the deranged (Tim Cook needs to buy a blazer), to joining the race to the bottom (Apple needs a low-cost iPhone! Have we forgotten what happened to netbooks?), to the catchall “New Categories” such as Wearables, TV, and Payment Systems, and on to free-floating hysteria: “For Apple’s sake, DO SOMETHING, ANYTHING!”

The visionary sheep point to the titanic deals that other tech giants can’t seem to resist: Google buys Nest for $3.3B; Facebook acquires WhatsApp for $16B (or $19B, depending on what and how you count). Why doesn’t Apple use its $160B of cash and make a big acquisition that will solidify its position and rekindle growth?

Lately, we’ve been hearing suggestions that Apple ought to buy Tesla. The company is eminently affordable: Even after TSLA’s recent run-up to a $30B market cap, it is well within Apple’s means. (That Wall Street seems to be telling us Tesla is worth about half of GM or Ford is another story entirely.)

Indeed, Adrian Perica, Apple’s head of acquisitions, met Tesla CEO Elon Musk last spring. Musk, who later confirmed the meeting, called a deal “very unlikely”, but fans of both companies think it’s an ideal match: Tesla is the first Silicon Valley car company, great at design, robotics, and software. Like Apple, Tesla isn’t afraid to break with traditional distribution models. And, to top it off, Musk is a Steve Jobs-grade leader, innovator, and industry contrarian. I can see the headline: “Tesla, the Apple of the auto industry…”

But we can also picture the clash of cultures…to say nothing of egos.

I have vivid recollections of the clash of cultures after Exxon’s acquisition of high-tech companies to form Exxon Office Systems in the 1970s.

Still reeling from the OPEC oil crisis, Exxon’s management was hypnotized by the BCG (Boston Consulting Group), which insisted that “Information Is the Oil of the 21st Century.” The BCG was right: Tech has its Robber Barons, and Apple, Oracle, Google, Facebook, Intel, and Microsoft all weigh more than Shell, Exxon, or BP.

But the BCG was also wrong: Exxon’s culture had no ability to understand what made the computer industry tick, and the tech folks thoroughly despised the Exxon people. The deal went nowhere and cost Exxon about $4B – a lot of money more than 30 years ago.

This history lesson isn’t lost on Apple. So: If Apple isn’t interested in buying Tesla, why is it interested in talking to Tesla?

Could it be batteries?

A look at battery numbers for the two companies brings up an interesting parallel. Tesla plans to make 35,000 Tesla S cars this year. According to Car and Driver magazine, the battery weighs 1323 pounds (600 kilograms — we’ll stick to metric weights moving forward):

[Image: Tesla S battery specifications, 1,323 lbs]

That’s 21,000 (metric) tons of batteries.

For Apple devices the computation is more complicated — and more speculative — because the company publicizes battery capacity (in watt-hours) rather than weight. But after some digging around, I found the weight information for an iPhone 4S on the iFixit site: 26 grams. From there, I estimated that the weight of the larger iPhone 5S battery is 30 grams.

I reasoned that the weight/capacity ratio is probably the same for all Apple batteries, so if a 26g iPhone battery provides 5.25 watt-hrs, the iPad Air battery that yields 32.4 watt-hrs must weigh approximately 160g. Sparing you the details of the mix of iPad minis and the approximations for the various Macs, we end up with these numbers for 2014 (I’m deliberately omitting the iPod):

100M iPads @ 130g = 13,000 tons
200M iPhones @ 30g = 6,000 tons
20M Macs @ 250g = 5,000 tons
Total Apple batteries = 24,000 metric tons

It’s a rough estimate, but close enough for today’s purpose: Apple and Tesla need about the same tonnage of batteries this year.
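For readers who like to check the arithmetic, the whole estimate fits in a few lines of throwaway Python (the per-unit battery weights are this note’s rough assumptions, not official specs):

```python
# Back-of-the-envelope check of the battery tonnage estimates above.
GRAMS_PER_TON = 1_000_000  # grams per metric ton

# Scale the 26 g / 5.25 Wh iPhone 4S battery to the iPad Air's 32.4 Wh:
ipad_air_g = 26 * 32.4 / 5.25                    # ~160 g, as estimated above

tesla_tons = 35_000 * 600_000 / GRAMS_PER_TON    # 600 kg of battery per car

apple_tons = (100e6 * 130    # iPads, blended Air/mini mix
              + 200e6 * 30   # iPhones
              + 20e6 * 250   # Macs
              ) / GRAMS_PER_TON

print(round(ipad_air_g), round(tesla_tons), round(apple_tons))
# -> 160 21000 24000
```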

Now consider that Tesla just announced it will build a battery mill it calls the Gigafactory:

[Image: Tesla Gigafactory planned production process]

According to Tesla, the plant’s capacity in 2020 will be higher than what the entire world produced in 2013.

A more likely explanation for Apple’s conversation with Tesla might be something Apple does all the time: Sit with a potential supplier and discuss payment in advance as a way to secure supply of a critical component.

Of course, neither Tesla nor Apple will comment. Why should they? But a partnership born of their comparable needs for battery volumes makes a lot more sense than for the two companies to become one.

–JLG@mondaynote.com


Postscript: Certainly, Apple is no stranger to acquisitions — the company recently disclosed that it purchased 23 small companies in the last 16 months — but the operative word here is “small”. Apple has made two large purchases in twenty years: the late-1996 acquisition of NeXT for $429M (and 1.5M Apple shares), and the $356M purchase of AuthenTec in July 2012. Other than that, Apple’s acquisitions have focused on talent and technologies rather than sheer size.

This seems wise. In small acquisitions, everyone knows who’s in charge, it’s the acquirer’s way or the highway, and failure rarely makes headlines. Bigger deals always involve explicit or implicit power sharing and, more important, a melding of cultures, of habits of the heart and mind, of “the ways we do things here”.

——–

Comcast and Us


Comcast tells us how much better our lives will be after they acquire Time Warner. Great, thanks! Perhaps this is an opportunity to look at other ways that we can “acquire” Cable TV and Internet access.

Comcast CEO Brian Roberts thinks we’re powerless idiots. This is what his company’s website says about the planned Time Warner acquisition:

“Transaction Creates Multiple Pro-Consumer and Pro-Competitive Benefits…”

Don’t read the full legal verbiage that purports to explain the maneuver. A more productive use of your time is Counternotions’ pointed summary, Obfuscation by disclosure: a lawyerly design pattern:

(tl;dr: According to Comcast, the merger is “pro-sumer” if you “get past some of the hysteria”; it’s “approvable” by the regulators and won’t “reduce consumer choice at all”. Will it raise prices? Comcast is “not promising that they will go down or even that they will increase less rapidly.” Given the industry’s historical record, it’s Comedy Central material.)

Let’s not loiter around Comcast’s lobbying operations, either — the $18.8M spent in 2013, the pictures of Mr. Roberts golfing with our President, the well-oiled revolving door between the FCC and the businesses it regulates. Feelings of powerlessness and anger may ensue, as trenchantly expressed in this lament from a former FCC Commissioner.

Instead, let’s use our agitation as an opportunity to rethink what we really want from Cable carriers. The wish list is long: TV à la carte instead of today’s stupid bundles, real cable competition vs. de facto local monopolies, metered Internet access in exchange for neutrality and lower prices for lighter usage, decent set-top boxes, 21st century cable modems, and, of course, lower prices.

These are all valid desires, but if there were just one thing that we could change about the carrier business, what would it be? What would really make a big, meaningful difference to our daily use of TV and the Internet?

Do you remember the Carterfone Decision? For a century (telephone service started in the US in 1877), AT&T reigned supreme in telecommunications networking. (I should say the former AT&T, not today’s company rebuilt from old body parts.) The company owned everything along its path, all the way down to your telephone handset — only Ma Bell’s could be used.

Then, in the late fifties, a company called Carterfone began to sell two-way radios that could be hooked up to a telephone. The device was invented by a Texan named Thomas Carter as a clumsy but clever way to allow oil field owners and managers sitting in their offices in Dallas to reach their workers out at the pumps.

AT&T was not amused.

“[AT&T] advised their subscribers that the Carterfone, when used in conjunction with the subscriber’s telephone, is a prohibited interconnecting device, the use of which would subject the user to the penalties provided in the tariff…”

Carterfone brought an antitrust suit against AT&T… and won. With its 1968 decision in favor of Thomas Carter’s company, the Federal Communications Commission ushered in a new era in which any device that met the appropriate technical standards could connect to the phone network.

“…we hold, as did the examiner, that application of the tariff to bar the Carterfone in the future would be unreasonable and unduly discriminatory.”

The regulator — an impartial representative, in an ideal world — decides what can connect to the network. It’s not a decision that’s left to the phone company.

Back in the 21st century, we need a Carterfone Decision for cable boxes and modems. We need a set of rules that would allow Microsoft, Google, Roku, Samsung, Amazon, Apple — and companies that are yet to be founded — to provide true alternatives to Comcast’s set-top boxes.

Today, you have a cable modem that’s so dumb it forces you to restart everything in a particular sequence after a power outage. You have a WiFi base station stashed in among the wires. Your set-top box looks like it was made in the former Soviet Union (a fortuitous product introduction days before the merger announcement doesn’t improve things much). You have to find your TV’s remote to switch between broadcast TV, your game console, and your Roku/AppleTV/Chromecast… and you have to reach into your basket of remotes just to change channels.

Imagine what would happen if a real tech company were allowed to compete on equal terms with the cable providers.

Microsoft, for example, could offer an integrated Xbox that would provide Internet access, TV channels with a guide designed by Microsoft, WiFi, an optional telephone, games of course, and other apps as desired. One box, three connectors: power, coax from the street, and HDMI to the TV set. There would be dancing in the streets.

But, you’ll object, what about the technical challenges? Cable systems are antiquated and poorly standardized. The cables themselves carry all sorts of noisy signals. What tech giant would want to deal with this mess?

To which one can reply: Look at the smartphone. It’s the most complicated consumer device we’ve ever known. It contains radios (WiFi, Bluetooth, multi-band cellular), accelerometers/gyroscopes, displays, loudspeakers, cameras, batteries… And yet, smartphones are made in huge quantities and function across a wide range of network standards. There’s no dearth of engineering talent (and money) to overcome the challenges, especially when they’re tackled outside of the cable companies and their cost-before-everything cultures.

Skeptics are more likely to be correct about the regulatory environment or, to be more precise, regulatory capture, a phrase that…captures the way regulators now work for the industries they were supposed to control. Can we imagine the FCC telling Comcast: “Go ahead and buy Time Warner…just one little condition, make sure any and all of your connection protocols and services APIs are open to any and all that pass the technical tests listed in Appendix FU at the end of this ruling.”

That’s not going to happen. We must prepare ourselves for a sorry display of bad faith and financial muscle. Who knows, in the end, Comcast might give up, as AT&T did after telling us how pro-consumer the merger with T-Mobile would be.

JLG@mondaynote.com

@gassee

Apple Numbers For Normals: It’s The 5C, Stupid!


Today’s unscientific and friendly castigation of Apple’s costly iPhone 5C stumble: misdirected differentiation without enough regard for actual customer aspirations.

Here’s a quick snapshot of Apple’s numbers for the quarter ending December 2013, with percentage changes over the same quarter a year ago:

[Table: Apple’s results for the quarter ending December 2013, with year-over-year changes]

We can disregard the iPod’s “alarming” decrease. The iPod, which has become more of an iPhone ingredient, is no longer expected to be the star revenue maker that it was back in 2006 when it eclipsed the Mac ($7.6B vs. $7.4B for the full year).

For iPhones, iPads, and overall revenue, on the other hand, these are record numbers… and yet Apple shares promptly lost 8% of their value.

Why?

It couldn’t have been that the market was surprised. The numbers exactly match the guidance (a prophylactic legalese substitute for forecast) that was given to us by CFO Peter Oppenheimer last October:

“We expect revenue to be between $55 billion and $58 billion compared to $54.5 billion in the year ago quarter. We expect gross margins to be between 36.5% and 37.5%.”

(Non-normals can feast their eyes on Apple’s 10-Q filing and its lovingly detailed MD&A section. I’m sincere about the “lovingly” part — it’s great reading if you’re into it.)

Apple guidance be damned, Wall Street traders expected higher iPhone numbers. As Philip Elmer-DeWitt summarizes in an Apple 2.0 post, professional analysts expected about 55M iPhones, 4M more than the company actually sold. At $640 per iPhone, that’s about $2.5B in lost revenue and, assuming 60% margin, $1.5B in profit. The traders promptly dumped the shares they had bought on the hopes of higher revenues.
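The back-of-the-envelope math behind those numbers, using the ASP and margin assumptions just stated:

```python
# Rough cost of the 4M-unit iPhone shortfall (assumed ASP and margin).
shortfall_units = 4_000_000   # analysts' ~55M vs. what Apple actually sold
asp = 640                     # assumed average selling price, dollars
gross_margin = 0.60           # assumed iPhone gross margin

lost_revenue = shortfall_units * asp         # $2.56B
lost_profit = lost_revenue * gross_margin    # ~$1.5B
print(f"${lost_revenue / 1e9:.2f}B revenue, ${lost_profit / 1e9:.2f}B profit")
# -> $2.56B revenue, $1.54B profit
```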

In Apple’s choreographed, one-hour Earnings Call last Monday (transcript here), company execs offered a number of explanations for the shortfall (one might say they offered a few too many). Discussing the mix of iPhone 5S vs. iPhone 5C sales, here’s what Tim Cook had to say [emphasis mine]:

“Our North American business contracted somewhat year over year. And if you look at the reason for this, one was that as we entered the quarter, and forecasted our iPhone sales, where we achieved what we thought, we actually sold more iPhone 5Ss than we projected.

And so the mix was stronger to the 5S, and it took us some amount of time in order to build the mix that customers were demanding. And as a result, we lost some sort of units for part of the quarter in North America and relative to the world, it took us the bulk of the quarter, almost all the quarter, to get the iPhone 5S into proper supply.

[…]

It was the first time we’d ever run that particular play before, and demand percentage turned out to be different than we thought.”

In plainer English:

“Customers preferred the 5S to the 5C. We were caught short, we didn’t have enough 5Ss to meet the demand and so we missed out on at least 4 million iPhone sales.”

Or, reading between the lines:

“Customers failed to see the crystalline purity of the innovative 5C design and flocked instead to the more derivative — but flattering — 5S.”

Later, Cook concludes the 5S/5C discussion and offers rote congratulations all around:

“I think last quarter we did a tremendous job, particularly given the mix was something very different than we thought.”

… which means:

“Floggings will begin behind closed doors.”

How can a company that’s so precisely managed — and so tuned-in to its customers’ desires — make such a puzzling forecast error? This isn’t like the shortcoming in the December 2012 quarter when Apple couldn’t deliver the iMacs it had announced in October. This is a different kind of mistake, a bad marketing call, a deviation from the Apple game plan.

With previous iPhone releases, Apple stuck to a simple price ladder with $100 intervals. For example, when Apple launched the iPhone 5 in September 2012, US carriers offered the new device for $200 (with a two-year contract), the 2011 iPhone 4S was discounted to $100, and the 2010 iPhone 4 was “free”.

But when the iPhone 5S was unveiled last September, Apple didn’t deploy the 2012 iPhone 5 for $100 less than the new flagship device. Instead, Apple “market engineered” the plastic-clad 5C to take its place. Mostly built of iPhone 5 innards, the colorful 5C was meant to provide differentiation… and it did, but not in ways that helped Apple’s revenue — or their customers’ self-image.

Picture two iPhone users. One has a spanking new iPhone 5S, the other has an iPhone 5 that he bought last year. What do you see? Two smartphone users of equally discerning taste who, at different times, bought the top-of-the-line product. The iPhone 5 user isn’t déclassé, he’s just waiting for the upgrade window to open.

Now, replace the iPhone 5 with an iPhone 5C. We see two iPhones bought at the same time… but the 5C owner went for the cheaper, plastic model.

We might not like to hear psychologists say we build parts of our identity with objects we surround ourselves with, but they’re largely right. From cars, to Burberry garments and accessories, to smartphones, the objects we choose mean something about who we are — or who we want to appear to be.

I often hear people claim they’re not interested in cars, that they just buy “transportation”, but when I look at an office or shopping center parking lot, I don’t see cars that people bought simply because the wheels were round and black. When you’re parking your two-year-old Audi S5 coupe (a vehicle once favored by a very senior Apple exec) next to the new and improved 2014 model, do you feel you’re of a lesser social station? Of course not. You both bought into what experts call the Affordable Luxury category. But your self-assessment would be different if you drove up in a Volkswagen Jetta. It’s made by the same German conglomerate, but now you’re in a different class. (This isn’t to say brand image trumps function. To the contrary, function can kill image; ask Nokia or Detroit.)

The misbegotten iPhone 5C is the Jetta next to the Audi S5 coupé. Both are fine cars, and the 5C is a good smartphone – but customers, in numbers large enough to disrupt Apple’s forecast, didn’t like what the 5C would do to their image.

As always, it’ll be interesting to observe how the company steers out of this marketing mistake.

There is much more to watch in coming months: How Apple and its competitors adapt to a new era of slower growth; how carriers change their behavior (pricing and the all-important subsidies) in the new growth mode; and, of course, if and how “new categories” change Apple’s business. On this, one must be cautious and refrain from expecting another iPhone or iPad explosion, with new products yielding tens of billions of dollars in revenue. Fodder for future Monday Notes.

JLG@mondaynote.com

@gassee


Mac Pro: Seymour Cray Would Have Approved


As we celebrate 30 years of Macintosh struggles and triumphs, let’s start with a semiserious, unscientific comparison between the original 128K Mac and its dark, polished, brooding descendant, the Mac Pro.

[Images: the original 128K Mac and the new Mac Pro]

The original 128K Mac was 13.6” high, 9.6” wide, 10.9” deep (34.5 x 24.4 x 27.7 cm) and 16.5 lb (7.5 kg). Today’s Mac Pro is 9.9” tall and 6.6” in diameter (25 by 17 cm) and weighs 11 lb (5 kg) — smaller, shorter, and lighter than its ancient progenitor. Open your hand and stretch your fingers wide: The distance from the tip of your pinky to the tip of your thumb is in the 9-to-10-inch range (for most males). This gives you an idea of how astonishingly small the Mac Pro is.

At 7 teraflops, the new Pro’s performance specs are impressive… but what’s even more impressive is how all that computing power is stuffed into such a small package without everything melting down. Look inside the new Mac Pro and you’ll find a Xeon processor, twin AMD FirePro graphics engines, main memory, and a solid-state “drive”, all fed by at most 450W of electric power… and cooled by a single fan. The previous Mac Pro version, at only 2 teraflops, needed eight blowers to keep its GPU happy.

The Mac Pro achieves a level of “computing energy density” that Seymour Cray — the master of finding ways to cool high-performance, tightly packaged systems, and a Mac user himself — would have approved of.

(I’ve long been an admirer of Seymour Cray, ever since the introduction of his company’s first commercial supercomputer, the CDC 6600. In the early nineties, I was a Board member and investor at Cray Inc.  My memories of Seymour would fill an entire Monday Note. If you’re familiar with the name but not the supercomputer genius himself, I can recommend the Wikipedia article; it’s quite well-written.)

During Cray’s era of supercomputing — the 1960s to the early ’90s — processors were discrete, built from separate components. All of these building blocks had to be kept as close to each other as possible in order to stay in sync, to stay within the same “time horizon”. (Grace Hopper’s famous “one nanosecond equals a foot of wire” illustration comes to mind.) However, the faster the electronic module is, the more heat it generates, and when components are packed tightly together, it becomes increasingly difficult to pump out enough heat to avoid a meltdown.
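Hopper’s illustration is just the speed of light doing the arithmetic:

$$c \times 1\,\text{ns} = 3 \times 10^{8}\,\text{m/s} \times 10^{-9}\,\text{s} = 0.3\,\text{m} \approx 1\ \text{foot}$$

Signals in real conductors travel somewhat slower than light in a vacuum, so the actual timing budget is tighter still.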

That’s where Cray’s genius expressed itself. Not only could he plot impossibly tight circuit paths to guarantee the same propagation time for all logic signals, he designed these paths in ways that allowed adequate cooling. He sometimes referred to himself, half-seriously, as a good plumber.

(Seymour once told me he could fold a suit, a change of shirt, and underwear into his small Delsey briefcase, and thus speed through airports on the way to a fundraising meeting while his investment bankers struggled with their unwieldy Hartmann garment bags…)

I finally met Seymour in December 1985, while I was head of Apple’s Product Development. The Mac Plus project was essentially done and the Mac II and Mac SE projects were also on their way (they would launch in 1987). Having attended to the most urgent tasks, we were looking at a more distant horizon, at ways to leap ahead of everyone else in the personal computer field. We concluded we had to design our own CPU chip, a quad-processor (today we’d call it a “four-core chip”). To do this, we needed a computer that could run the design and simulation software for such an ambitious project, a computer of commensurate capabilities, hence our choice of a Cray X-MP, and the visit to Seymour Cray.

For the design of the chip, the plan was to work with AT&T Microelectronics — not the AT&T we know now, but the home of Bell Labs, the birthplace of the transistor, Unix, the C language, cellular telephony and many other inventions. Our decision to create our own CPU wasn’t universally well-received. The harshest critics cast Apple as a “toy company” that had no business designing its own CPU chip. Others understood the idea but felt we vastly underestimated the technical challenges. Unfortunately, they turned out to be right. AT&T Microelectronics ultimately bailed out of the microprocessor business altogether.

(Given this history, I couldn’t help but be amused when critics scoffed at Apple’s decision to acquire P.A. Semi in 2008 and, once again, attempt to design its own microprocessors. Even if the chip could be built, Apple could never compete against the well-established experts in the field… and it would cost Apple a billion dollars, either way. The number was wildly off the mark – and, given Apple’s financials, it wouldn’t have mattered anyway. We know what happened: The 64-bit A7 device took the industry by surprise.)

Thirty years after the introduction of the original Mac, the Mac Pro is both different and consistent. It’s not a machine for everyone: If you mostly just use ordinary office productivity apps, an iMac will provide more bang for less buck (which means that, sadly, I don’t qualify as a Mac Pro user). But like the 128K Mac, the Mac Pro is dedicated to our creative side; it serves the folks who produce audio and video content, who run graphics-intensive simulations. As Steve put it so well, the Mac Pro is at the crossroad of technology and liberal arts:

[Image: “Technology and Liberal Arts” crossroads slide]

Still, thirty years later, I find the Mac, Pro or “normal”, every bit as seductive, promising – and occasionally frustrating – as its now enshrined progenitor.

As a finishing touch, the Mac Pro, like its ancestor, is designed and assembled in the US.

JLG@mondaynote.com

————————–

Postscript. At the risk of spoiling the fun in the impressive Making the all-new Mac Pro video, I wonder about the contrast between the powerful manufacturing operation depicted in the video and the delivery constipation. When I ordered my iMac in early October 2013, I was promised delivery in 5-7 business days, a strange echo of the December 2012 quarter iMac shipments shortfall. The machine arrived five weeks later without explanation or updated forecast. Let’s hope this was due to higher-than-expected demand, and that Apple’s claim that Mac Pro orders will ship “in March” won’t leave media pros wanting.

The Hybrid Tablet Temptation


In no small part, the iPad’s success comes from its uncompromising Do Less To Do More philosophy. Now a reasonably mature product, can the iPad expand its uses without falling into the hybrid PC/tablet trap?

When the iPad came out, almost four years ago, it was immediately misunderstood by industry insiders – and joyously embraced by normal humans. Just Google iPad naysayer for a few nuggets of iPad negativism. Even Google’s CEO, Eric Schmidt, couldn’t avoid the derivative trap: He saw the new object as a mere evolution of an existing one and shrugged off the iPad as a bigger phone. Schmidt should have known better; he had been an Apple director in the days when Jobs believed the two companies were “natural allies”.

I was no wiser. I got my first iPad on launch day and was immediately disappointed. My new tablet wouldn’t let me do what I did on my MacBook Air – or my tiny EeePC running Windows XP (not Vista!). For example, writing a Monday Note on an iPad was a practical impossibility – and still is.

I fully accept the personal nature of this view and, further, I don’t buy the media consumption vs. productivity dichotomy Microsoft and its shills (Gartner et al.) tried to foist on us. If by productivity we mean work, work product, earning one’s living, tablets in general and the iPad in particular have more than made the case for their being productivity tools as well as education and entertainment devices.

Still, preparing a mixed media document, even a moderately complex one, irresistibly throws most users back to a conventional PC or laptop. With multiple windows and folders, the PC lets us accumulate text, web pages, spreadsheets and graphics to be distilled, cut and pasted into the intended document.

Microsoft now comes to the rescue. Their hybrid Surface PC/Tablet lets you “consume” media and play games in purely tablet mode – and switch to the comfortable laptop facilities offered by Windows 8. The iPad confines you to ersatz folders, preventing you from putting your document’s building blocks in one place? No problem: The Surface device features a conventional desktop user interface, familiar folders, and comfy Office apps, as well as a “modern” tile-based Touch UI. The best of both worlds, skillfully promoted in TV ads promising work and fun rolled into one device.

What’s not to like?

John Kirk, a self-described “recovering attorney”, whose tightly argued and fun columns are always worth reading, has answers. In a post on Tablets Metaphysics – unfortunately behind a paywall – he focuses on the Aristotelian differences between tablets and laptops. Having paid my due$$ to the Techpinions site, I will quote Kirk’s summation [emphasis mine]:

Touch is ACCIDENTAL to a Notebook computer. It’s plastic surgery. It may enhance the usefulness of a Notebook but it doesn’t change the essence of what a Notebook computer is. A keyboard is ACCIDENTAL to a Tablet. It’s plastic surgery. It may enhance the usefulness of a Tablet, but it doesn’t change the essence of what a Tablet is. Further — and this is key — a touch input metaphor and a pixel input metaphor must be wholly different and wholly incompatible with one another. It’s not just that they do not comfortably co-exist within one form factor. It’s also that they do not comfortably co-exist within our minds eye.

In plain words, it’s no accident that tablets and notebooks are distinctly different from one another. On the contrary, their differences — their incompatibilities — are the essence of what makes them what they are.

Microsoft, deeply set in the culture of backwards compatibility that served it so well for so long, did the usual thing: It added a tablet layer on top of Windows 7. The result didn’t take the market by storm and appears to have caused the exit of Steve Sinofsky, the Windows czar now happily ensconced at Harvard Business School and a Board Partner with the Andreessen Horowitz venture firm. Many think the $900M Surface RT write-off also contributed to Ballmer’s August 2013 resignation.

Now equipped with hindsight, Apple’s decision to stick to a “pure” tablet looks more inspired than lucky. If we remember that a tablet project preceded the iPhone, only to be set aside for a while, Apple’s “stubborn minimalism”, its refusal to hybridize the iPad might be seen as the result of long experimentation – with more than a dash of Steve Jobs (and Scott Forstall) inflexibility.

Apple’s bet can be summed up thus: MacBooks and iPads have their respective best use cases, they both reap high customer satisfaction scores. Why ruin a good game?

Critics might add: Why sell one device when we can sell two? Apple would rather “force” us to buy two devices in order to maximize revenue. On this, Tim Cook often reminds Wall Street of Apple’s preference for self-cannibalization, for letting its new and less expensive products displace existing ones. Indeed, the iPad keeps cannibalizing laptops, PCs and Macs alike.

All this leaves one question unanswered: Is that it? Will the iPad fundamentals stay the way they have been from day one? Are we going to be thrown back to our notebooks when composing the moderately complex mixed-media documents I earlier referred to? Or will the iPad hardware/software combination become more adept at such uses?

To start, we can eliminate a mixed-mode iOS/Mac device: Flip a switch, it’s an iPad; flip it again, add a keyboard/touchpad, and you have a Mac. No such contraption allowed. We know where to turn for that.

Next, imagine a new iOS version that allows multiple windows on the iPad screen, with folders no longer attached to each app as they are today, letting us store documents from multiple apps in one place. Add a blinking cursor for text and you have… a Mac, or something too close to a Mac but still different. Precisely the reason why that won’t work.

(This might pose the question of an A7 or A8 processor replacing the Intel chip inside a MacBook Air. It can be done – a “mere matter of software” – but how much would it cut from the manufacturing cost? $30 to $50 perhaps. Nice but not game-changing, a question for another Monday Note.)

More modest, evolutionary changes might still be welcome. Earlier this year, Counternotions proposed a slotted clipboard as An interim solution for iOS ’multitasking‘:

[...] until Apple has a more general solution to multitasking and inter-app navigation, the four-slot clipboard with a visible UI should be announced at WWDC. I believe it would buy Ive another year for a more comprehensive architectural solution, as he’ll likely need it.

This year’s WWDC came and went with the strongest iOS update so far, but neither a general nor an interim solution to the multitasking and inter-app navigation discussed in the post. (Besides the Counternotions blog, this erudite and enigmatic author also edits counternotions.tumblr.com and can be followed on Twitter as @Kontra.)

A version of the above suggestion could be conceptualized as a floating dropbox, invoked when needed and hovering above the document being worked on. This would not require recreating a PC-like windows-and-desktop UI. Needed components could be extracted from the floating store, then dragged and dropped onto the work in progress.

We’ll have to wait and see if and how Apple evolves the iPad without falling into the hybrid trap.

On even more speculative ground, a recent iPad Air intro video offered a quick glimpse of the Pencil stylus by FiftyThree, the creators of the well-regarded Paper iPad app. So far, styli haven’t done well on the iPad. Apple only stocks child-oriented devices from Disney and Marvel. Nothing else, in spite of the abundance of such devices offered on Amazon. Perhaps we’ll someday see Apple grant Bill Gates his wish, as recounted by Jobs’ biographer Walter Isaacson:

“I’ve been predicting a tablet with a stylus for many years,” he told me. “I will eventually turn out to be right or be dead.”

Someday, we might see an iPad, larger or not, Pro or not, featuring a screen with more degrees of pressure sensitivity. After seeing David Hockney’s work on iPads at San Francisco’s de Young museum, my hopes are high.

JLG@mondaynote.com

@gassee

Sound Holiday Thoughts


Nothing too serious this week. No Microsoft CEO succession, no Samsung $14B marketing budget exceeding Iceland’s GDP, no Apple Doom. Just Holiday – or Cyber Monday – audio talk.

I used to listen to sound. Now I enjoy music. It started with explosives. I was lucky to be born at a time and place (an arch-communist suburb of post-war Paris) where a 9-year-old kid could hopscotch to the drugstore around the corner and buy nitric, sulfuric, or hydrochloric acid, sulfur, potassium chlorate, hydrogen peroxide… and other fascinating wares – among them a flogger with short leather lashes I was also acquainted with. Imagine this in today’s California…

After a minor eye-burn incident, I was firmly redirected towards electronics and started building crystal radios, rudimentary AM sets using a galena (lead sulfide) crystal.

My good fortune continued. In 1955, my parents decided to send their increasingly restive child to a Roman Catholic boarding school in Brittany. What awaited me there, besides a solid classical education, was a geeky Prefect of Discipline who had a passion for hobby electronics. After hours, I would go to his study to read Radio Plans and Le Haut-Parleur — the French equivalents of Nuts and Volts — and salivate over the first OC71 transistor that had just landed on his desk (amazingly, the transistor is still available). This was exciting: Fragile, noisy, power hungry vacuum tubes that required both high and low voltages were going to be replaced by transistors. Numerous, randomly successful projects followed: radios, mono and stereo amplifiers, hacked surplus walkie-talkies.

Years later, in June 1968, I landed a dream job launching HP’s first desktop computer, the 9100A, on the French market. I distinctly recall the exultant feeling: After years of the psycho-social moratorium referred to in an earlier Monday Note, I had entered the industry I love to this day.

With more money, I was able to afford better turntables, tape decks, receivers, amplifiers and, above all, speakers. For a while I started to listen more to the sound they produced than to the music itself. The Lacanians have a phrase for the disease: Regressive Fixation On Partial Objects…

HP had installed an über-geek, Barney Oliver, as head of its Research organization, HP Labs. Adored for his giant intellect and free spirit, Oliver decided stereo amplifiers of the day (early ’70s) were either expensive frauds or noisy trash. Or both. So he raided the HP parts bin and built us a real stereo amplifier. (The manual and schematics are lovingly preserved here.) Four hundred were built. I bought two, because you never know. This was a vastly “overbuilt” device that used high-precision gas chromatograph attenuators with 0.1 dB steps as volume controls. (Most of us have trouble perceiving a 1 dB difference.) The power supply had such enormous capacitors that the amplifier would keep “playing” for 25 seconds after it was turned off.

HP, the old, real HP, truly was technophile heaven.

As years passed, I became uncomfortable with the audio arms race, the amps that pushed out hundreds or even thousands of watts, the claims of ever-vanishing 0.1%, nay, 0.01% distortion levels, the speakers that cost tens of thousands of dollars. (The Rolls-Royce of audio equipment of the time was… McIntosh.)

A chance encounter with The Audio Critic helped me on the road to recovery. Peter Aczel, the magazine’s publisher and main author, is a determined Objectivist Audiophile, a camp that believes that “audio components and systems must pass rigorously conducted double-blind tests and meet specified performance requirements in order to validate the claims made by their proponents”. Committed to debunking Subjectivists‘ claims of “philosophic absolutes” and ethereal nuance, Aczel has attracted the ire of high-end equipment makers who hate it when he proves that their oxygen-free copper cables with carefully aligned grains are no better than 12-gauge zip wire at 30 cents per foot.

(A helpful insight from Aczel: In an A/B audio comparison, the louder gear inevitably wins, so loudness needs to be carefully equalized. This “sounds” like the reason why, over the last two or three decades, wines have increased their alcohol concentration to 14% or more: In tastings, the stronger wine is almost always preferred.)

The real turning point from sound fetishism to music appreciation came in early 2002 when I bought an iMac G4 that came with two small but surprisingly good external loudspeakers:

[Image: iMac G4 with its two external speakers]

They won’t fill a concert hall and they can’t compete with my old JBL speakers, but coupled with iTunes, the iMac became a pleasant stereo. (Due, of course, to the improvements in magnetic alloys such as neodymium compounds, more efficient Class D amplifiers, and… but I’ll stop before I relapse.)

A decade later — and skipping the politically incorrect jokes about married men experiencing premature hearing impairment in the high-frequency region of the spectrum — I’m now able to focus on music and expect the reproduction equipment to stay out of the way, in both practical and auditory terms.

Today’s “disk drives” are solid state and store hundreds of gigabytes; CDs and DVDs have all but disappeared; iPods, after a few years in the sun, have been absorbed into phones and tablets. (And we watch iTunes on the road to becoming Apple’s Windows Vista.)

After years of experiment, I’ve come to a happy set of arrangements for enjoying music at home, at work, and on the go. Perhaps these will help your own entertainment. (Needless to say, I bought all the following – and many others – with my own money, and the Monday Note doesn’t receive compensation of any kind.)

At home, I use a Bose Companion V desktop set-up. It consists of two pods, one on each side of the screen, plus a bass module anywhere under the desk. Bose’s idea is to take your PC’s output from a USB port and process it to add an illusion of depth/breadth when sitting at your desk. For me, it works. And the output is strong enough for a family/kitchen/dining room.

That said, I’m not fond of all Bose products. I find the smaller Companion units too bass-heavy, and I didn’t like (and returned) their AirPlay speaker. As for the company’s design sensibility, Amar Bose gave me the evil eye more than 15 years ago when I dared suggest that the industrial design of his Wave System could use updating (I was visiting his Framingham Mountain office with a “noted Silicon Valley electrics retailer”). The design hasn’t changed and is selling well.

At the office, I followed advice from my old friends at Logitech and bought two Ultimate Ears Bluetooth speakers. With a (recently improved) smartphone app, they provide very good stereo sound. At $360/pair, the UE system costs about the same as the Companion V; what UE lacks in the Bose’s power, it makes up for in portability. The only knock is that the mini-USB charging port is on the speaker’s bottom — you have to turn it on its head to charge it.

Speaking of portability, Bose’s Soundlink Mini, another testament to modern speaker and amplifier technology, fits in a bag or roll-aboard and shocks unprepared listeners with its clean, powerful sound and clean design. No discounts on Amazon, which we can attribute to Bose’s unwavering price control and to the system’s desirability.

I kept the best for last: Noise-reducing earphones. The premise is simple: A microphone captures ambient sound, embedded circuitry flips the waveform and adds it to the signal, thus canceling the background noise and allowing us to enjoy our music undisturbed. This is a consumer application of Bose’s first noise-canceling headphones, built for aviation and still considered that domain’s standard. A “pro” set costs about $1,000. Consumer versions are $300 or less.
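In idealized form, the trick is a one-liner. Real earphones do it with carefully matched, low-latency analog/DSP circuitry; the toy sketch below (all names invented for illustration) ignores latency, gain matching, and leakage:

```python
import numpy as np

# Toy active noise cancellation: invert the measured ambient waveform
# and mix it with the program material.
rate = 48_000                   # sample rate, Hz
t = np.arange(rate) / rate      # one second of audio

music = 0.5 * np.sin(2 * np.pi * 440 * t)    # what we want to hear
ambient = 0.3 * np.sin(2 * np.pi * 100 * t)  # low-frequency street rumble

anti_noise = -ambient                        # "flip the waveform"
at_eardrum = music + ambient + anti_noise    # leaked noise + correction

assert np.allclose(at_eardrum, music)        # perfect cancellation (ideal case)
```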

To my ears, early models were disappointing: they introduced small levels of parasitic noise and featured indifferent music reproduction. Nonetheless, sales were strong.

Later models, from Bose and others, improved both music playback and noise cancelation, but still felt big, unwieldy. Again, a matter of personal preference.

Yielding to the friendly bedside manner of an Apple Store gent, I recently bought a pair of Bose QC 20i “noiseless” earphones (about $300). The earbuds are comfortable and so “skin-friendly” that you forget you’re wearing them (I mention this because comfort will always trump quality). They’re also more secure, less prone to falling out of your ears than are Apple’s own devices.

Now, as I take my evening walk in the streets of Palo Alto enjoying the Bach Partitas, the street noise is barely a whisper; cars seem to glide by as if they were all Teslas. For civility and safety, there’s a button to defeat noise reduction, and the mandatory Pause for phone or street conversations. There are other nice details, such as a spring-loaded clip for your shirt or lapel, and a dead-battery mode that still lets music — and noise — come through.

Next week, we’ll return to more cosmic concerns.

JLG@mondaynote.com

The Internet of Things: Look, It Must Work


For twenty-five years, we’ve been promised a thoroughly connected world in which our “things” become smarter, safer and save energy. But progress doesn’t seem to match the glowing predictions.

The presentation is straightforward and enticing:

Picture this: A 25¢ smart chip inside a light-bulb socket. Networked through the 110V wires, it provides centralized on-off control and monitors the bulb’s “health” by constantly measuring electrical resistance. Imagine the benefits in a large office, with thousands, or even tens of thousands of fixtures. Energy is saved as lighting is now under central, constantly adaptable control. Maintenance is easier, pinpointed, less expensive: Bulbs are changed at precisely the right time, just before the filament burns out.
Now, add this magic chip to any and all appliances and visualize the enormity of the economic and ease-of-use benefits. This is no dream. . . we’re already working on agreements in energy-conscious Scandinavia.

When did this take place?

There is a one-word giveaway to this otherwise timeless pitch: filament. Incandescent lights have been regulated out of existence, replaced first by CFLs (compact fluorescent lamps — expensive and not so pleasant) and then by LEDs (still expensive, but much nicer).

The pitch, reproduced with a bit of license, took place in 1986. It’s from the business plan of a company called Echelon, the brainchild of Mike Markkula, Apple’s original angel investor and second CEO.

The idea seemed obvious, inevitable: The relentless physics of Moore’s Law would make chips smaller, more powerful, and less expensive. Connected to a central household brain, these chips would control everything from lightbulbs and door locks to furnaces and stoves. Our lives would be safer and easier. . . and we’d all conserve energy.

The idea expresses itself in variations of the core Home Automation concept, the breadth of which you can visualize by googling “home automation images”:

[Image: collage of home automation diagrams]

In 1992, Vint Cerf, our beloved Internet pioneer, posed with his famous IP On Everything t-shirt:

[Photo: Vint Cerf in his “IP On Everything” t-shirt]

This was a modern, ringing restatement of Echelon’s vision: The objects in our homes and offices will have sensors and actuators. . . and a two-way connection to the Internet, to a world of data, applications, people (and, inevitably, marketing trolls).

It’s been a quarter century since Echelon started, more than two decades since Vint Cerf’s pithy yet profound prophecy. We now speak of the Internet Of Things and make bold predictions of billions of interconnected devices.

Earlier this year, Cisco invited us to “capture our share” of the $14.4T (yes, T as in trillion) business opportunity that The Internet of Everything (IoE) will create in the coming decade. Dave Evans, Cisco’s chief futurist, tells us that within ten years we’ll see “50 billion connected things in the world, with trillions of connections among them”.

Maybe. . . but that’s a lot of “things”.

As Network World points out, “[m]ore than 99 percent of physical objects are not now connected to the Internet”. The exact percentage matters less than the existential truth that the obvious, attractive, inevitable idea of a universe of interconnected objects is taking a long, long time to materialize.

Does the concept need a Steve Jobs to coalesce the disparate components into a coherent, vibrant genre? Are important pieces still missing? Or, like Artificial Intelligence (rebranded as Machine Learning in an attempt to soothe the pain of repeated disappointments), are we looking at an ever-receding horizon?

Echelon’s current state (the company went public in 1998) serves as a poster child for the gulf between the $14.4T vision and today’s reality.

First, some context: Mike Markkula, who is still Vice Chairman of Echelon, has assembled a strong Board of Valley veterans who have relevant experience (I know several of them well — these aren’t just “decorative directors”). The company’s Investor Overview offers an impressive Corporate Profile [emphasis mine]:

“Echelon Corporation is an energy control networking company, with the world’s most widely deployed proven, open standard, multi-application platform, selling complete systems and embedded sub-systems for smart grid, smart city and smart building applications. Our platform is embedded in more than 100 million devices, 35 million homes, and 300,000 buildings and powers energy savings applications for smart grids, smart cities and smart buildings. We help our customers reduce operational costs, enhance satisfaction and safety, grow revenues and prepare for a dynamic future.”

But the latest Earnings Call Presentation paints a different picture:

[Chart: Echelon Q3 FY2013 earnings highlights]

The Gross Margin is good (58.5%), as is the company’s cash position ($56.7M). . . but Echelon’s business is a tiny $18M — about a millionth of Cisco’s predicted motherlode. That’s a decrease of 38% compared to the same quarter last year.

So, we have a company that’s in the hands of competent technologists who have deep knowledge of the domain; a company with real, proven products that have been deployed in millions of homes and offices — but with little revenue to show for its technology and experience.

This seems to be the case for the Everything Connected industry in general. There’s no General Electric, no Microsoft, no Google (the latter abandoned its PowerMeter initiative in 2011).

Why not? The answer might lie in the Echelon presentation already mentioned:

[Slide: Echelon’s view of the Internet of Things]

After more than 25 years of developing devices and platforms, Echelon concludes that the Internet of Things isn’t going to be felt as a direct, personal experience. Instead, it will be mostly invisible: components and subsystems in factories, warehouses, fleets of trucks and buses, office buildings. . .

Consumers certainly don’t have to be sold on the benefits of connected devices. We can’t function without our smartphones, tablets, and PCs. But once we stray outside the really personal computer domain, the desirability of connected devices drops dramatically.

The dream of giving sensors, actuators, and an Internet connection to everyday objects feels good, until one looks at matters of practical and commercial implementation. Will the software in my smart toaster be subject to a licensing agreement? Will it stop toasting if I don’t renew my subscription? (This isn’t just a dystopian strawman; one electric car manufacturer says it can remotely disable the battery if you don’t pay up.)

And then there are the (very real) security and privacy concerns. Could our appliances be hacked? Could my toaster spy on me, collect more data to be used to peddle related goods?

Home automation and security systems seem like a natural fit for the Internet of Things, but they’re still expensive, complicated, and fragile – if not hopelessly primitive. Some connected thermostats, such as the Nest (with its smoke and carbon monoxide detector), work well, but most of them are stubbornly user-hostile.

When we wander into the realm of connected appliances, what we see are novelties, fit only for hobbyists and technofetishists (do we really need a toaster that sends a tweet when it’s done?). This is nothing like the smartphone wave, for a simple reason: Appliances are just that, appliances. It’s the word we use as an insult to describe a boring car.

JLG@mondaynote.com


Intel Is Under New Management – And It Shows


Intel rode the PC wave with Microsoft and built a seemingly insurmountable lead in the field of “conventional” (PC and laptop) microprocessors. But, after his predecessor missed the opportunity to supply the CPU chip for Apple’s iPhone, Intel’s new CEO must now find a way to gain relevance in the smartphone world.

In last May’s The Atlantic magazine, Intel’s then-CEO Paul Otellini confessed to a mistake of historic proportions. Apple had given Intel the chance to be part of the smartphone era, to supply the processor for the first iPhone… and Otellini said no [emphasis and light editing mine]:

“The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do… At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”
“…while we like to speak with data around here, so many times in my career I’ve ended up making decisions with my gut, and I should have followed my gut. [...] My gut told me to say yes.”

That Otellini found the inner calm to publicly admit his mistake — in an article that would be published on his last day as CEO, no less — is a testament to his character. More important, Otellini’s admission unburdened his successor, Brian Krzanich, freeing him to steer the company in a new direction.

And Krzanich is doing just that.

First: House cleaning. Back in March 2012, the Wall Street Journal heralded Intel as The New Cable Guy. The idea was to combine an Intel-powered box with content in order to serve up a quality experience not found elsewhere (read Apple, Netflix, Roku, Microsoft…). To head the project, which was eventually dubbed OnCue, Intel hired Erik Huggers, a senior industry executive and former head of BBC Online.

At the All Things D conference in February, Huggers announced that the TV service would be available later that year. The Intel TV chief revealed no details about how the OnCue service would differ from existing competitors, or how much the thing would cost… but he assured us that the content would be impressive (“We are working with the entire industry”), and the device’s capabilities would be comprehensive (“This is not a cherry-pick… this is literally everything”).

Intel seemed to be serious. We found out that more than 1,000 Intel employees in Oregon had been engaged in testing the product/service.

Then Krzanich stepped in, and applied a dose of reality:

Intel continues to look at the business model…. we are not experts in the content industry and we’re being careful.” [AllThingsD: New Intel CEO Says Intel TV Sounds Great in Theory. But …]

Indeed, to those of us who have followed the uneasy dance between Apple and content providers since the first Apple TV shipped in 2007, the Intel project sounded bold, to say the least.

In late September, the project was put on hold and, last week, the news came that OnCue had been cancelled and allegedly offered to Verizon, whose V Cast media distribution feats come to mind…

Even before OnCue’s cancellation was made official, the well-traveled Erik Huggers appeared to show an interest in the Hulu CEO job. (If Mr. Huggers happens to be reading this: I’d be more than happy to relieve you of the PowerPoints you used to pitch the project to Intel’s top brass, not to mention the updates on the tortuous negotiations for content, and the reports from the user testing in Oregon. These slides must be a fascinating exercise in corpospeak logic.)

Krzanich quickly moved from doubt to certainty. He saw that OnCue would neither make money by itself, nor stimulate sales or margins for its main act, x86 processors. OnCue would never be an Apple TV “black puck”, a supporting character whose only mission is to make the main personal computers (small, medium and large; smartphones, tablets and conventional PCs) more useful and pleasant.

So he put an end to the impossible-to-justify adventure.

That was easy.

Tackling Intel’s failure to gain a significant role in the (no longer) new world of smartphones is a much more complicated matter.

With its x86 processors, Intel worked itself into a more-than-comfortable position as part of the Wintel ecosystem. The dominant position achieved by the Microsoft-Intel duopoly over two decades yielded correspondingly high margins for both.

But smartphones changed the game. ARM processors proved themselves better than x86 at the two things that matter most in personal, portable devices: low power consumption and customization. The ARM architecture didn’t have to wait for the iPhone and Android handsets to dominate the cell phone business. Just as Windows licensing spawned a large number of PC makers, ARM licensing contributed to the creation of a wide range of processor design and manufacturing companies. The ARM site claims 80 licensees for its newer Cortex family and more than 500 for its older Classic ARM processors. No monopoly means lower margins.

Intel saw the unattractive margins offered by ARM processors and didn’t want to commit the billions of dollars required by a fab (a chip manufacturing plant) for a product that would yield profits that were well below Wall Street expectations.

The prospect of bargain basement margins undoubtedly figured in Otellini’s decision to say no to the iPhone. In 2006, no one could have predicted that it could have been made up in volume, that there would be a billion smartphone sales in 2014. (I’m basing the 1B number for the entire industry on Horace Dediu’s estimate of 250 million iOS devices for 2014.)

Even if the Santa Clara company had had the foresight to accept lower margins in order to ensure their future in the smartphone market, there would still have been the problem of customization.

Intel knows how to design and manufacture processors that are used “as is” by PC makers. No customization, no problems.

This isn’t how the ARM world works. Licensees design processors that are customized for their specific device, and they send the design to a manufacturer. Were Intel to enter this world, they would no longer design processors, just manufacture them, an activity with less potential for profit.

This explains why Intel, which held an ARM license and made XScale processors, sold that business to Marvell in 2006 – a fateful date when looking back on the Apple discussions.

But is Intel’s new CEO rethinking the “x86 and only x86” strategy? Last week, a specialty semiconductor company called Altera announced that Intel would fabricate some of its chips containing a 64-bit ARM processor. The company’s business consists of offering faster development times through “programmable logic” circuits. Instead of a “hard circuit” that must be designed, manufactured, tested, debugged, modified, and sent back to the manufacturing plant in lengthy and costly cycles, you buy a “soft circuit” from Altera and similar companies (Xilinx comes to mind). This more expensive device can be reprogrammed on the spot to assume a different function, or to correct the logic in the previous iteration. Pay more and get functioning hardware sooner, without slow and costly turns through a manufacturing process.
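To make the “soft circuit” idea concrete, here is a toy analogy in Swift (purely illustrative, not real FPGA tooling): an FPGA composes its logic from small lookup tables, and “reprogramming” the part simply rewrites their stored bits.

```swift
// Toy analogy of programmable logic, not real FPGA tooling:
// a 2-input lookup table (LUT) is just four stored output bits, and
// "reprogramming" the chip means rewriting those bits; no new silicon,
// no trip back to the fab.
struct LUT2 {
    var table: [Bool]  // outputs for input pairs 00, 01, 10, 11

    func eval(_ a: Bool, _ b: Bool) -> Bool {
        table[(a ? 2 : 0) + (b ? 1 : 0)]
    }
}

var gate = LUT2(table: [false, false, false, true])  // configured as AND
print(gate.eval(true, false))                        // false

gate.table = [false, true, true, false]              // rewritten: now XOR
print(gate.eval(true, false))                        // true
```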

With this in mind, what Intel will someday manufacture for Altera isn’t the standalone 64-bit ARM processor that excited some observers: “Intel Makes 14nm ARM for Altera”. The Stratix 10 circuits Altera contracts to Intel manufacturing are complicated and expensive ($500 and up) FPGA (Field Programmable Gate Array) devices in which the embedded ARM processor plays a supporting, not central, role. This isn’t the $20-or-less price arena in which Intel has so far declined to compete.

Manufacturing chips for Altera might simply be work-for-hire, a quick buck for Intel, but I doubt it. Altera’s yearly revenue is just shy of $2B; Intel is a $50B company. The newly announced device, just one in Altera’s product lines, will not “move the needle” for Intel — not in 2014 (the ship date isn’t specified), or ever.

Instead, I take this as a signal, a rehearsal: 250M ARM SoCs at $20 each would yield $5B in revenue, 10% of Intel’s current total…

This might be what Krzanich had in mind when he inked the “small” manufacturing agreement with Altera; perhaps he was weighing the smaller margins of ARM processors against the risk of slowing PC sales.

His predecessor graciously freed him from the past; it’s hard to see how Intel’s new CEO won’t take the plunge and use the company’s superb manufacturing technology to finally make ARM processors.

JLG@mondaynote.com

 

iPhone 5S surprises

 

“I will withhold judgment on the new iPhone until I have a chance to play customer, buy the product (my better half seems to like the 5C while I pine for a 5S), and use it for about two weeks — the time required to go beyond my first and often wrong impressions.”

I wrote those words a little over a month ago. I’ve now played customer for the requisite two weeks — I got an iPhone 5S on October 3rd — and I’m prepared to report.

But first, some context.

iPhone launches always generate controversy; there’s always something to complain about: Antennagate for the iPhone 4, the Siri beta for the 4S, the deserved Maps embarrassment last year – with a clean, dignified Tim Cook apology.

(Whether these fracases translate into lost revenue is another matter.)

As I sat in the audience during the introduction of the original iPhone, back in January 2007, I thought the demo was too good, that Steve was (again) having his way with facts. I feared that when the product shipped a few months later, the undistorted reality would break the spell.

We know now that the iPhone that Steve presented on the stage was unfinished, that he trod a careful path through a demo minefield. But the JesusPhone that Apple shipped — unfinished in many ways (no native apps, no cut-and-paste) — was more than a success: It heralded the Smartphone 2.0 era.


This year, Tim Cook introduced the riskiest hardware/software combination since the original iPhone. The iPhone 5S wants to be more than just “new and improved”: it attempts to jump off the slope with its combination of two discontinuities, a 64-bit processor and a new 64-bit iOS. Will it work, or will it embarrass itself in a noisome backfire?

First surprise: It works.

Let me explain. I have what attorneys call “personal knowledge” of sausage factories: I’ve been accountable for a couple and a fiduciary for several others. I have first-hand experience with the sights, the aromas, the tumult of the factory floor, so I can’t help but wince when I approach a really new product; I worry in sympathy with its progenitors. The 5S isn’t without its “aromas” (we’ll get to those later), but the phone is sleek and attractive, the house apps are (mostly) solid, and the many new Application Programming Interfaces (APIs) promise novel applications. Contrary to some opinions, there are fewer warts than anyone could have expected.

Surprise #2, the UI: I had read the scathing critiques of the spartan excesses, and, indeed, I feel the drive for simplicity occasionally goes too far. The buttons on the built-in timer are too thin, too subdued. When I meditate in the dark I can’t distinguish Start from Cancel without my glasses. But I’m generally happy with the simpler look. Windows and views get out of the way quickly and gracefully, text is neatly rendered, the removal of skeuomorphic artifacts is a relief.

The next surprise is the fingerprint sensor, a.k.a. Touch ID. Having seen how attempts to incorporate fingerprint recognition into smartphones and laptops have gone nowhere, I had my doubts. Moreover, Apple had acquired AuthenTec, the company that created the fingerprint sensor, a mere 15 months ago. Who could believe that Apple would be able to produce a fingerprint-protected iPhone so quickly?

But it works. It’s not perfect, I sometimes have to try again, or use another finger (I registered three on my right hand and two on my left), but it’s clear that Apple has managed to push Touch ID into the category of “consumer-grade technology”: It works often enough and delivers enough benefit to offset the (small) change in behavior.
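A note for developers: at the 5S launch, Touch ID is reserved for unlocking the phone and approving iTunes purchases, but Apple later opened the sensor to third-party apps through the LocalAuthentication framework (iOS 8). A minimal sketch of that flow; the OS does the matching, and apps never see fingerprint data:

```swift
import LocalAuthentication

// Minimal sketch, assuming the iOS 8+ LocalAuthentication framework.
// The system does the fingerprint matching; the app only gets a yes/no.
// The reason string below is a hypothetical example.
let context = LAContext()
var error: NSError?

if context.canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: &error) {
    context.evaluatePolicy(.deviceOwnerAuthenticationWithBiometrics,
                           localizedReason: "Unlock your notes") { success, _ in
        print(success ? "Recognized" : "Try again, or use another finger")
    }
}
```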

A personal favorite surprise is Motion Sensing.

When Apple’s Marketing Supremo Phil Schiller described the M7 motion processor, I didn’t think much of it; I was serving the last days of my two-month sentence wearing the Jawbone UP bracelet mentioned in a previous Monday Note. (A friend suggested I affix it to his dog’s collar to see what the data would look like.)

Furthermore, the whole “lifestyle monitoring” business didn’t seem like virgin territory. The Google/Motorola Moto X smartphone introduced last August uses a co-processor that, among other things, monitors your activities, stays awake even when the main processor is asleep, and adjusts the phone accordingly. A similar co-processing arrangement is present in Moto X’s predecessors, the Droid Maxx, Ultra and Mini.

But then I saw a Twitter exchange about Motion Sensing apps about a week after I had activated my iPhone 5S. One thumb touch later, the free Pedometer++ app asked for my permission to use motion data (granted) and immediately told me how many steps I’d taken over the past seven days.

I went to the chauffeured iPhone on my wife’s desk and installed the app. I did the same on friends’ devices. The conclusion was obvious: The M7 processor continuously generates and stores motion data independent of any application. A bit of googling shows that there are quite a few applications that use the motion data that’s obligingly collected by the M7 processor; I downloaded a number of these apps and the step counts are consistent.
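For readers who want to poke at the M7’s archive themselves, here is a minimal sketch of the kind of query these apps perform. I’m showing the later CMPedometer API as an illustration; the class available at the 5S launch was CMStepCounter:

```swift
import CoreMotion

// Minimal sketch of a Pedometer++-style query (illustrative only).
// CMPedometer is the later API; the iOS 7-era class was CMStepCounter.
let pedometer = CMPedometer()

if CMPedometer.isStepCountingAvailable() {  // true only on M7-class hardware
    let now = Date()
    let weekAgo = Calendar.current.date(byAdding: .day, value: -7, to: now)!

    // The first query triggers the permission prompt mentioned above;
    // the motion data stays on the device unless an app exports it.
    pedometer.queryPedometerData(from: weekAgo, to: now) { data, error in
        if let steps = data?.numberOfSteps {
            print("Steps over the past seven days: \(steps)")
        }
    }
}
```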

(Best in class is the ambitious MotionX 24/7. Philippe Kahn’s company FullPower Technologies licenses MotionX hardware and software to many motion-sensing providers, including Jawbone and, perhaps, Apple. Wearable technologies aren’t just for our wrists…we carry them in our pockets.)

My wife asked if her iPhone would count steps from within her handbag. Ever the obliging husband, I immediately attended to this legitimate query, grabbed her handbag, and stepped out of the house for an experimental stroll. A conservatively dressed couple walked by, gave me a strange look, and didn’t respond to my evening greeting, but, indeed, the steps were counted.

A question arises: Does Apple silently log my movements? No, my iPhone records my locomotion, but the data stays within the device — unless, of course, I let a specific application export them. One must be aware of the permissions.

Other 5S improvements are welcome but not terribly surprising. The camera has been smartly enhanced in several dimensions; search finally works in Mail; and, to please Sen. McCain, apps update themselves automatically.

All of this comes with factory-fresh bugs, of course, a whiff of the sausage-making apparatus. iPhoto crashed on launch the first three or four times I tried it, but has worked without complaint since then.  A black Apple logo on a white background appeared and then quickly disappeared — too brief to be a full reboot, too sparse to be part of an app.

I’ve had to reboot the 5S to recover a dropped cellular connection, and have experienced hard-to-repeat, sporadic WiFi trouble that seems to spontaneously cure itself. (“How did you fix it?” asks my wife when her tech chauffeur gets the sullen device to work again. “I don’t know, I poke the patient everywhere until it responds.”)

From my admittedly geeky perspective, I’m not repelled by these glitches; they didn’t lose my data or prevent me from finishing a task. They’re annoying, but they’re to be expected given the major hardware and software changes. And I expect that the marketplace (as opposed to the kommentariat) will shrug them off and await the bug fixes that will take care of business.

So, yes, overall, the “discontinuous” 5S works.

[I’m also using a pre-release of Mavericks, the upcoming 10.9 version of OS X, on two Macs. There, I wonder if I’m not seeing the opposite of the iPhone 5S: less risk, more bugs. I hope things straighten out for the public release. I’ll report if and when warranted.]

[I can’t resist: The Washington Post’s Wonkblog calls the iPhone’s third color… Dignified Gold. I wonder: Is it a compliment to Sir Jony’s unerring taste? Or a clever, indirect ethnic slur?]

JLG@mondaynote.com

Apple Under Siege

 

Two years after Steve Jobs left us, Apple now wears Tim Cook’s imprint and, for all the doubt and perpetual doomsaying, seems to wear it well. One even comes to wonder if the Cassandras aren’t in fact doing Apple a vital favor.

Last Friday, Tim Cook issued a somber remembrance to Apple employees:

Team-
Tomorrow marks the second anniversary of Steve’s death. I hope everyone will reflect on what he meant to all of us and to the world. Steve was an amazing human being and left the world a better place. I think of him often and find enormous strength in memories of his friendship, vision and leadership. He left behind a company that only he could have built and his spirit will forever be the foundation of Apple. We will continue to honor his memory by dedicating ourselves to the work he loved so much. There is no higher tribute to his memory. I know that he would be proud of all of you.
Best,
Tim

I am one of the many who are in Steve’s debt and I miss him greatly. I consider him the greatest creator and editor of products this industry has ever known, and am awed by how he managed the most successful transformation of a company — and of himself — we’ve ever seen. I watched his career from its very beginning, I was fortunate to have worked with him, and I thoroughly enjoyed agreeing and disagreeing with him.

I tried to convey this in an October 9th, 2011 Monday Note titled Too Soon. I just re-read it and hope you’ll take the time to do the same. You’ll read words of dissent by Richard Stallman and Hamilton Nolan, but you’ll mostly find praise by Barack Obama, Jon Stewart, Nicholson Baker in The New Yorker, and this elegant image by Jonathan Mak:

[Image: Jonathan Mak’s Steve Jobs silhouette]

Two years later, we can look at Apple under Tim Cook’s leadership. These haven’t been an easy twenty-four months: Company shares have gone on a wild ride, execs have been shown the door, there was the Maps embarrassment and apology, and there has been a product drought for most of the last fiscal year (ending in September).

All of this has provided fodder for the Fox Newsstands of the Web, for netwalkers seeking pageviews. The main theme is simple and prurient, hence its power: Without Steve, Apple is on the decline. The variations range from the lack of innovation — Where’s the Apple TV?, the iWatch?, the next Big Thing? — to The Decline of The Brand, Android Is Winning, and Everything Will Be Commoditized.

Scan Philip Elmer-DeWitt’s Apple 2.0 or John Gruber’s Daring Fireball and treat yourself to intelligent repudiations of this incessant “claim chowder”, these discredited pontifications. I’ll extract a few morsels from my own Evernote stash:

Apple’s press conference showed a brand unraveling, or so said VentureBeat in March 2012. Eighteen months later, Apple passed Coca-Cola to become the world’s most valuable brand.

How Tim Cook can save himself (and Apple), subtitled, for good measure: What the confused Apple CEO can do to avoid getting canned and having to slink away with nothing but his $378 million compensation package as comfort. Penned by a communications consultant who “teaches public relations at NYU”, the article features an unexpected gem: Cook should buy a blazer. You know, “to break the deleterious chokehold of the Steve Jobs’ [sic] legacy”.

Apple: The Beginning of a Long Decline? (note the hedging question mark). This LinkedIn piece, which questions the value of the fingerprint sensor, ends with a lament:

There was no sign of a watch. So those of us in Silicon Valley are left watching, wondering, and feeling a little empty inside… Jobs is gone. It looks like Apple’s magic is slowly seeping away now too.

Shortly thereafter, Samsung’s iWatch killer came out…and got panned by most reviewers.

Last: Apple’s Board of Directors is concerned about Apple’s pace of innovation, says Fox Business News’ Charlie Gasparino, who claims to have “reliable sources”.

Considering how secretive the company is, can anyone imagine a member of Apple’s Board blabbing to a Fox Business News irrespondent?

Despite the braying of the visionary sheep, Tim Cook never lost his preternatural calm; he never took the kommentariat’s bait. Nor have his customers: They keep buying, enjoying, and recommending Apple’s products. And they do so in such numbers — 9 million new iPhones sold in the launch weekend — that Apple had to file a Form 8-K with the Securities and Exchange Commission (SEC) to “warn” shareholders that revenue and profits would exceed the guidance they had provided just two months earlier, when management reviewed the results of the previous quarter.

In Daniel Eran Dilger’s words, Data bites dogma: Apple’s iOS ate up Android, Blackberry U.S. market share losses this summer:

Apple’s increase accounted for 1.5 of the 1.6 percentage points that Android and Blackberry collectively lost. This occurred a full month before the launch of Apple’s iPhone 5s and 5c and the deployment of iOS 7.

Regarding the “Apple no longer innovates” myth, Jay Yarow tells us why Apple Can’t Just ‘Innovate’ A New Product Every Other Year. His explanation draws from a substantial New York Times Magazine article in which Fred Vogelstein describes the convergence of company-wide risk-taking and engineering feats that resulted in the iPhone:

It’s hard to overstate the gamble Jobs took when he decided to unveil the iPhone back in January 2007. Not only was he introducing a new kind of phone — something Apple had never made before — he was doing so with a prototype that barely worked. Even though the iPhone wouldn’t go on sale for another six months, he wanted the world to want one right then. In truth, the list of things that still needed to be done was enormous. 

It’s a great read. But even Vogelstein can’t resist the temptation of inserting a word of caution: “And yet Apple today is under siege…” 

This is something I heard 33 years ago when I signed up to start Apple France in 1980, and I’ve heard it constantly since then. I’ll again quote Horace Dediu, who best summarizes the concern:

“[There's a] perception that Apple is not going to survive as a going concern. At this point of time, as at all other points of time in the past, no activity by Apple has been seen as sufficient for its survival. Apple has always been priced as a company that is in a perpetual state of free-fall. It’s a consequence of being dependent on breakthrough products for its survival. No matter how many breakthroughs it makes, the assumption is (and has always been) that there will never be another. When Apple was the Apple II company, its end was imminent because the Apple II had an easily foreseen demise. When Apple was a Mac company its end was imminent because the Mac was predictably going to decline. Repeat for iPod, iPhone and iPad. It’s a wonder that the company is worth anything at all.”

I recently experienced a small epiphany: I think the never-ending worry about Apple’s future is a good thing for the company. Look at what happened to those who were on top and became comfortable with their place under the sun: Palm, Blackberry, Nokia…

In ancient Rome, victorious generals marched in triumph to the Capitol. Lest the occasion go to the army commander’s head, a slave would march behind the victor, murmuring in his ear, memento mori, “remember you’re mortal”.

With that in mind, one can almost appreciate the doomsayers — well, some of them. They might very well save Apple from becoming inebriated with its own prestige and, instead, force the company to remember, two years later and counting, how it earned that prestige in the first place.

JLG@mondaynote.com