The Rise of AdBlock Reveals A Serious Problem in the Advertising Ecosystem


By Frédéric Filloux

Seeing a threat to their ecosystem, French publishers follow their German colleagues and prepare to sue startup Eyeo GmbH, the creator of anti-advertising software AdBlock Plus. But they cannot ignore that, by using ABP, millions of users actively protest against the worst forms of advertising. 

On grounds that it represents a major economic threat to their business, two groups of French publishers are considering a lawsuit against AdBlock Plus creator Eyeo GmbH. (Les Echos broke the news in this story, in French.)
Plaintiffs are said to be the GESTE and the French Internet Advertising Bureau. The first is known for its aggressive stance against Google via its contribution to the Open Internet Project. (To be clear, GESTE says it is at a "legal consulting stage"; no formal complaint has been filed yet.) By its actions, the second plaintiff, the French branch of the Internet Advertising Bureau, is in fact acknowledging its failure to tame the excesses of the digital advertising market.

Regardless of its validity, the legal action misses a critical point. By downloading the AdBlock Plus (ABP) plug-in on a massive scale, users are voting with their mice against the growing invasiveness of digital advertising. Suing Eyeo, the company that maintains ABP, is therefore like using aspirin to fight cancer. A different approach is required, but very few seem ready to face that fact.

I use AdBlock Plus on a daily basis. I'm not especially proud of this, nor do I support anti-advertising activism; I use the ad-blocker for practical, not ideological, reasons. On too many sites, the invasion of pop-up windows and heavily animated ad "creations" has become an annoyance, a visual and a technical one. When a page loads, the HTML code "calls" all sorts of modules, sometimes 10 or 15. Each sends a request to an ad server, and sometimes, for the richest content, the ad elements trigger a third-party plug-in like Adobe's Shockwave, which works hard to render the animated ads. Most of the time, these ads are poorly optimized because creative agencies don't waste their precious time on such a trivial task as providing clean, efficient code to their clients. As a consequence, the computer's CPU is heavily taxed; it overheats, and the fans buzz loudly. Suddenly, you feel like your MacBook Pro is about to take off. That's why, with a couple of clicks, I installed AdBlock Plus. ABP has spared me several thousand ad exposures. My surfing is now faster and crash-free, and web pages look better.
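
To give a sense of the mechanics, here is a toy Python sketch (not Eyeo's actual engine) of how a filter-list blocker decides which of those ad-server requests to drop. The rules and URLs are invented for illustration; real lists such as EasyList run to tens of thousands of entries:

```python
import re

# Invented rules in a simplified AdBlock Plus filter syntax.
FILTERS = ["||doubleclick.net^", "||ads.", "/ad_banner/"]

def filter_to_regex(flt):
    # "||" anchors the rule at a domain boundary; "^" stands for a separator.
    rx = re.escape(flt)
    rx = rx.replace(re.escape("||"), r"^https?://([^/]*\.)?")
    rx = rx.replace(re.escape("^"), r"([/?:]|$)")
    return re.compile(rx)

BLOCKLIST = [filter_to_regex(f) for f in FILTERS]

def should_block(url):
    """True if any filter matches: the request never leaves the browser."""
    return any(rx.search(url) for rx in BLOCKLIST)

print(should_block("https://ad.doubleclick.net/pixel?id=42"))  # True
print(should_block("https://ads.example.net/banner.js"))       # True
print(should_block("https://example.com/article.html"))        # False
```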

I asked around and I couldn’t find a friend or a colleague not using the magic plug-in. Everyone seems to enjoy ad-free surfing. If this spreads, it could threaten the very existence of a vast majority of websites that rely on advertising.

First, a reality check: How big and dangerous is the phenomenon? PageFair, a startup based in Dublin, Ireland, comes up with some facts. Here are key elements drawn from a 17-page PDF document available here.

[Charts: key ad-blocking adoption figures from the PageFair report]

Put another way, if your site or your apps are saturated with pop-up windows and screaming videos that are impossible to mute or skip, you are encouraging the adoption of AdBlock Plus — and once it's installed on a browser, do not expect any turning back. Here is an example of an unwitting ABP advocate:

[Screenshot: an ad-saturated page of the kind that drives ABP adoption]

Eyeo's AdBlock Plus takes the rejection of advertising into its own hands — but those hands are greedy and dirty. Far from being the work of a selfless white knight, Eyeo's business model borders on racketeering. In its Acceptable Ads Manifesto, Eyeo states the virtues of what the company deems tolerable formats:

1. Acceptable Ads are not annoying.
2. Acceptable Ads do not disrupt or distort the page content we’re trying to read.
3. Acceptable Ads are transparent with us about being an ad.
4. Acceptable Ads are effective without shouting at us.
5. Acceptable Ads are appropriate to the site that we are on.

Who could disagree? But such blandishments go with a ruthless business model that attests to the merits of straight talk:

We are being paid by some larger properties that serve non-intrusive advertisements that want to participate in the Acceptable Ads initiative.
Whitelisting is free for all small and medium-sized websites and blogs. However, managing this list requires significant effort on our side and this task cannot be completely taken over by volunteers as it happens with common filter lists.
Note that we will never whitelist any ads that don’t meet these criteria. There is no way to buy a spot in the whitelist. Also note that whitelisting is free for small- and medium-sized websites.
In addition, we received startup capital from our investors, like Tim Schumacher, who believe in Acceptable Ads and want to see the concept succeed.

Of course, there is no public rate card. Nor does Eyeo provide any measure of what defines "small and medium-sized websites": a site with 5 million monthly uniques can be small in the English-speaking market but huge in Finland. And the number of "larger properties", along with the amount they had to pay to be whitelisted, remains a closely guarded secret. According to some German websites, Eyeo is said to have snatched $30m from big internet players; not bad for an operation of fewer than 30 people (depending on the recurrence of this "compliance fee" — for lack of a better term).

There are several issues here.

One, a single private entity cannot decide what is acceptable or not for an entire sector. Especially in such an opaque fashion.

Two, we must admit that Eyeo GmbH is filling a vacuum created by the incompetence and sloppiness of the advertising community, namely creative agencies, media buyers, and the organizations that are supposed to coordinate the whole ecosystem (such as the Internet Advertising Bureau).

Three, the rise of ad blockers is the offspring of two major trends: the continual deflation of digital ad economics, and the growing reliance on ad exchanges and Real-Time Bidding, both of which push prices further down.
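
To make the second trend concrete, ad exchanges typically clear each impression through a second-price auction: the highest bidder wins but pays only a notch above the runner-up's bid, one of the mechanics that keeps clearing prices low. A minimal sketch with made-up numbers:

```python
def second_price_auction(bids, floor=0.10, increment=0.01):
    """Winner pays the runner-up's bid plus a small increment.

    bids maps bidder -> CPM bid in dollars; floor and increment are
    illustrative values, not any real exchange's parameters.
    """
    if not bids:
        return None, floor
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    runner_up = ranked[1][1] if len(ranked) > 1 else floor
    return winner, min(top_bid, runner_up + increment)

# Even with a $4.00 top bid, the impression clears at $1.51.
print(second_price_auction({"buyerA": 4.00, "buyerB": 1.50, "buyerC": 0.80}))
```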

Even Google is beginning to realize that the explosion of questionable advertising formats has become a problem. Witness its recent Contributor program, which proposes ad-free navigation in exchange for a fee ranging from $1 to $3 per month (read this story on NiemanLab, and more in a future Monday Note).

The growing rejection of advertising upon which AdBlock Plus is built is indeed a threat to the ecosystem, and it needs to be addressed decisively: for example, by bringing publishers and advertisers to the same table to design ways to clean up the ad mess. But the entity and the leaders who can do the job have yet to be found.

frederic.filloux@mondaynote.com

Apple Watch: Hard Questions, Facile Predictions


by Jean-Louis Gassée

Few Apple products have agitated forecasters and competitors as much as the company’s upcoming watch. The result is an escalation of silly numbers – and one profound observation from a timepiece industry insider.

Apple Watch 2015 sales predictions are upon us: 10 million, 20 million, 24 million, 30 million, even 40 million! Try googling "xx million apple watch"; you won't be disappointed. Microsoft's Bing doesn't put a damper on the enthusiasm either: it finds a prediction for first-year sales of 60 million Apple Watches!

These are scientific, irony-free numbers, based on "carefully weighed percentages of iPhone users", complemented by investigations into "supplier orders" and backed up by interviews with "potential buyers". Such predictions reaffirm our notion that the gyrations and divinations of certain anal-ists and researchers are best appreciated as black comedy — cue Piper Jaffray's Gene Munster with his long-running Apple TV Set gag.

Fortunately, others are more thoughtful. They consider how the product will actually be experienced by real people and how the new Apple product will impact the watch industry.

As you'll recall from the September 14th "Apple Watch Is And Isn't", Jean-Claude Biver, the LVMH executive in charge of luxury watch brands such as Hublot and TAG Heuer, offered his frank opinion of the "too feminine" Apple Watch:

“To be totally honest, it looks like it was designed by a student in their first trimester.” 

At the time, it sounded like You Don't Need This sour grapes from a disconcerted competitor. But recently, Biver has given us deeper, more meaningful thoughts:

“A smartwatch is very difficult for us because it is contradictory,” said Mr. Biver. “Luxury is supposed to be eternal … How do you justify a $2,000 smart watch whose technology will become obsolete in two years?” he added, waving his iPhone 6. 

Beautiful. All the words count. Luxury and Eternity vs. Moore’s Law.

To help us think about the dilemma that preoccupies the LVMH exec, let’s take a detour through another class of treasured objects: Single Lens Reflex cameras.

[Photo: a Nikon F with Photomic FTn finder]

Unless you were a photojournalist or fashion photographer taking hundreds of pictures a day, these cameras lasted forever. A decade of use would come and go without impact on the quality of your pictures or the solid feel of the product. People treasured their Hasselblads, Leicas (not an SLR), Canons, and more obscure marques such as the Swiss Alpa. (I'm a bit partial here: I bought a Nikon exactly like the one pictured above back in 1970.)

These were purely mechanical marvels. No battery: the light sensor was powered by… light.

Then, in the mid-nineties, digital electronics began to sneak in. Sensor chips replaced silver-halide film; microcomputers automated more and more of the picture-taking process.

The most obvious victim was Eastman Kodak, a company that had dominated the photographic film industry for more than a century – and filed for bankruptcy in 2012. (A brief moment of contemplation: Kodak owned many digital photography patents and even developed the first digital camera in 1975, but “…the product was dropped for fear it would threaten Kodak’s photographic film business.” [Wikipedia].)

The first digital cameras weren’t so great. Conventional film users rightly criticized the lack of resolution, the chromatic aberrations, and other defects of early implementations. But better sensors, more powerful microprocessors, and clever software won the day. A particular bit of cleverness that has saved a number of dinner party snapshots was introduced in the late-nineties: A digital SLR sends a short burst of flash to evaluate the scene, and then uses the measurements to automatically balance shutter speed and aperture, thus correcting the classical mistake of flooding the subject in the foreground while leaving the background in shadows.
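
As a toy model of that pre-flash trick (made-up units and constants, nothing like real camera firmware, which meters through the lens):

```python
def slow_sync_exposure(ambient_level, preflash_return, test_power=0.05):
    """Toy model of pre-flash exposure balancing.

    ambient_level: metered background brightness, normalized so that 1.0
        is a correct exposure at a 1-second shutter speed.
    preflash_return: light bounced back by the subject during a test
        burst fired at test_power (a fraction of full flash output).
    All values are illustrative, not real photographic units.
    """
    TARGET = 1.0
    # Shutter slow enough for ambient light to expose the background.
    shutter_s = TARGET / ambient_level
    # Scale the main flash so the foreground subject also hits the target.
    flash_power = min(1.0, test_power * TARGET / preflash_return)
    return shutter_s, flash_power

# Dim room, subject returning little of the test burst:
print(slow_sync_exposure(ambient_level=8.0, preflash_return=0.02))
# (0.125, 1.0) -> 1/8 s shutter, full flash power
```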

Digital cameras have become so good we now have nostalgia “film packs” that recreate the defects — sorry, the ambiance — of analog film stock such as Ektachrome or Fuji Provia.

But Moore’s Law exacts a heavy price. At the high end, the marvelous digital cameras from Nikon, Canon, and Sony are quickly displaced year after year by new models that have better sensors, faster microprocessors, and improved software. Pros and prosumers can move their lenses — the most expensive pieces of their equipment — from last year’s model to this one’s, but the camera body is obsolete. In this regard, the most prolific iterator seems to be Sony, today’s king of sensor chips; the company introduces new SLR models once or twice a year.

At the medium to low end, the impact of Moore’s law was nearly lethal. Smartphone cameras have become both so good and so convenient (see Chase Jarvis’ The Best Camera is the One That’s With You) that they have displaced almost all other consumer picture taking devices.

What does the history of cameras tell us about watches?

At the high end, a watch is a piece of jewelry. Like a vintage Leica or Canon mechanical camera, a Patek watch works for decades: it doesn't use batteries, and it doesn't run on software. Mechanical watches have even gained a retro chic among under-forty urbanites who have never had to wind a stem. (A favorite of techies seems to be the Officine Panerai.)

So far, electronic watches haven’t upended the watch industry. They’ve mostly replaced a spring with a battery and have added a few functions and indicator displays – with terrible user interfaces. This is about to change. Better/faster/cheaper organs are poised to invade watches: sensors, microprocessors + software, wireless links…

Jean-Claude Biver is right to wonder how the onslaught of ever-improving technology will affect the "eternity" of the high-end, fashion-conscious watch industry… and he'll soon find out: he's planning a (yet-to-be-announced) TAG Heuer smartwatch.

With this in mind, Apple's approach is intriguing: the company plays the technology angle, of course, and has loaded its watch with an amazing — some might say disquieting — amount of hardware and software, but it also plays the fashion and luxury game. The company invited fashion writers to the launch; it hosted a celebrity event at Colette in Paris with the likes of Karl Lagerfeld and Anna Wintour in attendance. The design of the watch, the choice of materials for the case and bands/bracelets… Apple obviously intends to offer customers a differentiated combination of traditional fashion statement and high-tech functions.

But we’re left with a few questions…

Battery life is one question — we don't know what it will be. The Apple Watch user interface is another.

The product seems to be loaded with features and apps… will users “get” the UI, or will they abandon hard-to-use functions, as we’ve seen in many of today’s complicated watches?

But the biggest question is, of course, Moore’s Law. Smartphone users have no problem upgrading every two years to new models that offer enticing improvements, but part of that ease is afforded by carrier subsidies (and the carriers play the subsidy game well, despite their disingenuous whining).

There's no carrier subsidy for the Apple Watch. That could be a problem when Moore's Law makes the $5K high-end model obsolete. (Expert Apple observer John Gruber has wondered whether Apple could simply update the watch's processor or offer a trade-in — that would be novel.)

We'll see how all of this plays out with regard to sales. I'll venture that the first million or so Apple Watches will sell easily. I'll certainly buy one: the entry-level Sport model with the anodized aluminum case and elastomer band. If I like it, I'll even consider the more expensive version with a steel case and the ingenious Marc Newson link bracelet — reselling my original purchase should be easy enough.

Regardless of the actual sales, first-week numbers won’t matter. It’s what happens after that that matters.

Post-purchase Word of Mouth is still the most potent marketing device. Advertising might create awareness, but user buzz is what makes or breaks products such as a watch or a phone (as opposed to cigarettes and soft drinks). It will take a couple of months after the Apple Watches arrive on the shelves before we can judge whether or not the product will thrive.

Only then can we have a sensible discussion about how the luxury segment of the line might plan to deal with the eternity vs. Moore’s Law question.

JLG@mondaynote.com

Hard Comparison: Legacy Media vs. Digital Native


by Frédéric Filloux

From valuations to management cultures, the gap between legacy media companies and digital native ones seems to widen. The chart below maps the issues and shows where efforts should focus.

At conferences and workshops in Estonia, Spain, or the US, most discussions I've had recently ended up zeroing in on the cultural divide between legacy media and internet natives. About fifteen years into the digital wave, the tectonic plates seem to drift further apart than ever. On one side, most media brands — the surviving ones — are still struggling with an endless transition. On the other, digital native companies, all with deeply embedded technology, expand at an incredible pace. Hence the central questions: can legacy media catch up? What are the most critical levers to pull in order to accelerate change?

Once again, it's not a matter of a caricatural opposition between fossilized media brands and agile, creative media startups. The reality is far more complex. I come from a world in which information had a price and a cost; facts were verified; seasoned editors called the shots; readers were demanding and loyal — and journalists occasionally autistic. I come from the culture of great stories, intense competition (now gone), and the certitude that great journalism plays an important role in society.

That said, I simply had the luck to be in the right place at the right time to embrace the new culture: small companies, starting from a blank slate, with the unbreakable faith and systemic understanding that combine into a vision of growth and success, all wrapped up in the virtues of risk-taking. I always wanted to believe that the two cultures could be compatible — in fact, I hoped the old world would be able to morph swiftly and efficiently enough to catch the wave, to deal with new kinds of readers, a wider set of technologies, and a protean competition. I still want to believe this.

In the following chart, I list the most critical issues and pinpoint the areas of transformation that are both the most urgent and the easiest to address.

[Table: legacy media vs. digital natives: critical issues, urgency, and feasibility of change]

[Footnotes]

1. Funding: The main reason why newcomers are able to quickly leave the incumbents in the dust. When venture firms compete to provide $160m to Flipboard, $61m to Vox Media, or $96m to BuzzFeed, the consequences are not just staggering valuations: abundant funds translate into the ability to hire more, and better-qualified, people. Just one example: Netflix's recommendation system — critical to ensure both viewer engagement and retention — can count on a $150m yearly budget, far more than the entire revenue of many mid-sized media companies. Fact is, old media companies in transition will never be able to attract such levels of funding, due to inherent scalability limitations (it is extremely rare to see a legacy media corporation suddenly jump out of its ancestral business).

2. Resource allocation. Typically, the management team of a legacy media company will assign just enough resources to launch a product or service and hope for the best. This deliberate scarcity has several consequences. From the start, the project team will be in fight/survival mode internally (vs. other projects or "historical" operations); and in the (likely) case of a failure, it will be difficult to find the cause: was the product or service inherently flawed? Or did it fail to achieve "ignition" because the approach was too cautious? The half-baked, half-supported legacy product might stagnate forever, making neither enough money to be seen as a success nor losses significant enough to justify a termination. By contrast, a digital native corporation will go at full throttle from day one, with scores of managers, engineers, and marketers, and sufficient development time for tests, market research, promotion, etc. The idea is to succeed — or to fail, but fast and clearly.

3. Approach to timing. The tragedy for the vast majority of legacy media is that they no longer have the luxury of long-term thinking. Shareholder pressure and weak P&Ls impose quick results. By contrast, most digital companies are built for the long term: their management is asked to grow, conquer, and secure market positions, and only then to monetize. It can take years, as seen in many instances, from Flipboard to Amazon (which might have pushed the envelope a bit too far).

4. Scalability vs. sustainability. Many reasons — readership structure, structurally constrained markets — explain the difficulty legacy media have scaling up. At the polar opposite, disrupters like Uber or Airbnb, and super-optimizers such as BuzzFeed or The Huffington Post, are designed and built to scale — globally.

5. Customer relations. On this front, the digital world has reset the standard. All of a sudden, legacy media companies appear outdated when it comes to customer satisfaction, from poor subscription handling to the virtuous circle of customer acquisition, engagement, and retention.

In the chart above, my allocation of purple dots (feasibility) illustrates the height of the hurdles facing large, established media brands. Many components remain extremely hard to move — I experience this personally on a daily basis. But there is no excuse not to take better care of customers, reward the risk-taking of committed staffers, assign resources in a decisive manner, or instill a better sense of competition.

frederic.filloux@mondaynote.com

Clayton Christensen Becomes His Own Devil’s Advocate


by Jean-Louis Gassée

Every generation has its high tech storytellers, pundits who ‘understand’ why products and companies succeed and why they fail. And each next generation tosses out the stories of their elders. Perhaps it’s time to dispense with “Disruption”.

“I’m never wrong.”

Thus spake an East Coast academic who, in the mid- to late-eighties, parlayed his position into a consulting money pump. He advised — terrorized, actually — big-company CEOs with vivid descriptions of their impending failure, and then offered them salvation if they followed his advice. His fee was about $200K per year, per company; he saw no ethical problem in consulting for competing organizations.

The guru and I got into a heated argument while walking around the pool at one of Apple’s regular off-sites. When I disagreed with one of his wild fantasies, his retort never varied: I’m never wrong.

Had I been back in France, I would have told him, in unambiguous and colorful words, what I really thought, but I had acclimated myself to the polite, passive-aggressive California culture and used therapy-speak to “share my feelings of discomfort and puzzlement” at his Never Wrong posture. “I’ve always been proved right… sometimes it simply takes longer than expected”, was his comeback. The integrity of his vision wasn’t to be questioned, even if reality occasionally missed its deadline.

When I had entered the tech business a decade and a half earlier, I marveled at the prophets who could part the sea of facts and reveal the True Way. Then came my brief adventures with the BCG-advised diversification of Exxon into the computer industry.

Preying on the fear of The End of Oil in the late-seventies, consultants from the prestigious Boston company hypnotized company executives with their chant: Information Is The Oil of The 21st Century. Four billion dollars later (a lot of money at the time), Exxon finally recognized the cultural mismatch of the venture and returned to the well-oiled habits of its hearts and minds.

It was simply a matter of time, but the BCG was ultimately proved right — we now have our new Robber Barons of zeroes and ones. But they were wrong about something even more fundamental but slippery, something they couldn’t divine from their acetate foils: culture.

A little later, we had In Search of Excellence, the 1982 best-seller that turned into a cult. Tom Peters, the more exuberant of the book’s two authors, was a constant on pledge-drive public TV. As I watched him one Sunday morning with the sound off, his sweaty fervor and cutting gestures reminded me of the Bible-thumping preacher, Jimmy “I Sinned Against You” Swaggart. (These were my early days in California; I flipped through a lot of TV channels before Sunday breakfast, dazzled by the excess.)

Within a couple of years, several of the book’s exemplary companies — NCR, Wang, Xerox — weren’t doing so well. Peters’ visibility led to noisy accusations and equally loud denials of faking the data, or at least of carefully picking particulars.

These false prophets commit abuses under the color of authority. They want us to respect their craft as a form of science, when what they’re really doing is what Neil Postman, one of my favorite curmudgeons, views as simple storytelling: They felicitously arrange the facts in order to soothe anxiety in the face of a confusing if not revolting reality. (Two enjoyable and enlightening Postman books: Conscientious Objections, a series of accessible essays, and Amusing Ourselves To Death, heavier, very serious fare.)

A more recent and widely celebrated case of storytelling in a scientist’s lab coat is Clayton Christensen’s theory of disruptive innovation. In order to succeed these days — and, especially, to pique an investor’s interest — a new venture must be disruptive, with extra credit if the disrupter has attended the Disrupt conference and bears a Renommierschmiss from the Startup Battlefield.

[Photo: Clayton Christensen]
(Credit: www.claytonchristensen.com)

Christensen's body of work is (mostly) complex, sober, and nuanced storytelling that's ill-served by the overly simple and bellicose Disruption! battle cry. Nonetheless, I'll do my share and provide my own tech-world simplification: The incumbency of your established company is forever threatened by lower-cost versions of the products and services you provide. To avoid impending doom, you must enrich your offering and engorge your price tag. As you abandon the low end, the interloper gains business, muscles up, and chases you farther up the price ladder. Some day — and it's simply a matter of time — the disrupter will displace you.

According to Christensen, real examples abound. The archetypes, in the tech world, are the evolution of the disk drive, and the disruptive ascension from mainframe to minicomputer to PC – and today’s SDN (Software Defined Networking) entrants.

But recently, skeptical voices have disrupted the Disruption business.

Ben Thompson (@monkbent) wrote a learned piece that explains What Clayton Christensen Got Wrong. In essence, Ben says, disruption theory is an elegant explanation of situations where the customer is a business focused on cost. If the customer is a consumer, price is often trumped by ineffable values (ease-of-use, primarily) that can only be experienced, that can't be described in a dry bullet list of features.

More broadly, Christensen came under attack from Jill Lepore, the New Yorker staff writer who, like Christensen, is a Harvard academic. In a piece titled The Disruption Machine: What the gospel of innovation gets wrong, Lepore asserts her credentials as a techie and then proceeds to point out numerous examples where Christensen's vaunted storytelling is at odds with the facts [emphasis and edits mine]:

"In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, 'more than all of its U.S. competitors combined,' according to an industry report. In 1997, the year Christensen published 'The Innovator's Dilemma,' Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars. Last year, Seagate shipped its two-billionth disk drive. Most of the entrant firms celebrated by Christensen as triumphant disrupters, on the other hand, no longer exist.

Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the ‘Herculean managerial effort’ of its C.E.O., Stuart Mahon. But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed. 

MiniScribe, founded in 1980, started out selling 5.25-inch drives and saw quick success. ‘That was MiniScribe’s hour of glory,’ the company’s founder later said. ‘We had our hour of infamy shortly after that.’ In 1989, MiniScribe was investigated for fraud and soon collapsed; a report charged that the company’s practices included fabricated financial reports and ‘shipping bricks and scrap parts disguised as disk drives.’”

Echoes of the companies that Tom Peters celebrated when he went searching for excellence.

Christensen is admired for his towering intellect and also for his courage in facing health challenges — one of my children has witnessed both and can vouch for the scholar's inspiring presence. Unfortunately, his reaction to Lepore's criticism was less admirable. In a BusinessWeek interview, Christensen sounds miffed and entitled:

“I hope you can understand why I am mad that a woman of her stature could perform such a criminal act of dishonesty—at Harvard, of all places.”

At Harvard, of all places. Hmmm…

In another attempt to disprove Jill Lepore's disproof, a San Francisco-based investment banker wrote a scholarly rearrangement of Disruption epicycles. In his TechCrunch post, the gentleman glows with confidence in his use of the theory to predict venture investment successes and failures:

“Adding all survival and failure predictions together, the total gross accuracy was 84 percent.”

and…

“In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch.”

Why the venture industry hasn't embraced the model, and why the individual hasn't become richer than Warren Buffett as a result of this unflinching accuracy, remains a story to be told.

Back to the Disruption sage: he didn't help his case when, as soon as the iPhone came out, he predicted that Apple's new device was vulnerable to disruption:

“The iPhone is a sustaining technology relative to Nokia. In other words, Apple is leaping ahead on the sustaining curve [by building a better phone]. But the prediction of the theory would be that Apple won’t succeed with the iPhone. They’ve launched an innovation that the existing players in the industry are heavily motivated to beat: It’s not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited.”

Not truly disruptive? Five years later, in 2012, Christensen had an opportunity to let “disruptive facts” enter his thinking. But no, he stuck to his contention that Modularity always defeats integration:

“I worry that modularity will do its work on Apple.”

In 2013, Ben Thompson, in his already quoted piece, called Christensen out for sticking to his theory:

“[…] the theory of low-end disruption is fundamentally flawed. And Christensen is going to go 0 for 3.”

Perhaps, like our poolside guru, Christensen believes he’s always right…but, on rare occasions, he’s simply wrong on the timing.

Apple will, of course, eventually meet its maker, whether through some far-off, prolonged mediocrity or by a swift, regrettable decision. But such predictions are useless; they're storytelling – and a bad, facile kind at that. What would be really interesting and courageous would be a detailed scenario of Apple's failure, complete with a calendar of the main steps toward the preordained ending. No more Wrong on the Timing excuses.

A more interesting turn for a man of Christensen's intellect and reach inside academia would be to become his own Devil's Advocate. Good lawyers pride themselves on researching their cases so well that they could plead either side. Perhaps Clayton Christensen could explain, with his usual authority, how the iPhone defines a new theory of innovation. Or why the Macintosh has prospered and ended up disrupting the PC business by sucking up half of the segment's profits. He could then draw comparisons to other premium goods that are happily chosen by consumers, from cars to clothes and… watches.

JLG@mondaynote.com

Cultural Adventures In Payment Systems – Part I


by Jean-Louis Gassée

Payment systems and user behaviors have evolved over the past three decades. In this first of a two-part Monday Note, I offer a look at the obstacles and developments that preceded the Apple Pay launch.

When I landed in Cupertino in 1985, I was shocked, shocked to find that so much gambling was going on in here. But it wasn’t the Rick’s Café Américain kind of gambling, it was the just-as-chancy use of plastic: Colleagues would heedlessly offer their credit card numbers to merchants over the phone; serious, disciplined executives would hand their AmEx Platinums to their assistants without a second thought.

This insouciant way of doing business was unheard of in my Gallic homeland. The French (and most Europeans) think that trust is something that must be earned, that it has a value that is debased when it’s handed out too freely. They think an American’s trusting optimism is naïve, even infantile.

After I got over my shock, I came to see that my new countrymates weren’t such greenhorns. They understood that if you want to lubricate the wheels of commerce, you have to risk an occasional loss, that the rare, easily-remedied abuses are more than compensated for by a vibrant business. It wasn’t long before I, too, was asking my assistant to run to the store with my Visa to make last-minute purchases before a trip.

(On the importance of Trust and its contribution to The Wealth of Nations — or their poverty — see Alain Peyrefitte’s La Société de Confiance [The Society of Trust]. Unfortunately the work hasn’t been translated into English, unlike two of Peyrefitte’s other books, The Trouble with France and the prophetic 1972 best-seller The Immobile Empire. The title of the latter is a deplorable translation of Quand la Chine s’éveillera… Le monde tremblera, “When China Awakes, The World Will Shake”, a foreboding attributed to Napoleon.)

The respective attitudes towards trust point to a profound cultural difference between my two countries. But I also noticed other differences that made my new environment feel a little antiquated.

For example, direct deposit and direct deduction weren’t nearly as prevalent in America as in France. In Cupertino, I received a direct deposit paycheck, but checks to cover expenses were still “cut”, and I had to write checks for utilities and taxes and drop them in the mailbox.

Back in Paris, everything had been directly wired into and out of my bank account. Utilities were automatically deducted ten days after the bill was sent, as mandated by law (the delay allowed for protests and stop-payments if warranted). Paying taxes was ingeniously simple: Every month through October, a tenth of last year’s total tax was deducted from your bank account. In November and December, you got a reprieve for Holiday spending fun (or, if your income had gone up, additional tax payments to Uncle François — Mitterrand at the time, not Hollande).

Like a true Frenchman, I once mocked these "primitive" American ways in a conversation with a Bank of America exec in California. A true Californian, she smiled, treated me to a well-rehearsed Feel-Felt-Found comeback, and then, dropping the professional mask, told me that the distrust of electronic commerce that so astonished me here in Silicon Valley (of all places) was nothing compared to Florida, where it's common for retirees to cash their Social Security checks at the bank, count the physical banknotes and coins, and then deposit the money into their accounts.

Perhaps this was the heart of the “Trust Gap” between Europe and the US: Europeans have no problem trusting electronic commerce as long as it doesn’t involve people; Americans trust people, not machines.

My fascination with electronic payment modes preceded my new life in Silicon Valley. In 1981, shortly after starting Apple France, I met Roland Moreno, the colorful Apple ][ hardware and software developer who invented the carte à puce (literally “chip card”, but better known as a “smart card”) that’s found in a growing number of credit cards, and in mobile phones where it’s used as a Subscriber Identity Module (SIM).


The key to Moreno’s device was that it could securely store a small amount of information, hence its applicability to payment cards and mobile phones.

I carried memories of my conversations with Moreno with me to Cupertino. In 1986, we briefly considered adding a smart card reader to the new ADB Mac keyboard, but nothing came of it. A decade later, Apple made a feeble effort to promote the smart card for medical applications such as a patient ID, but nothing came of that, either.

The results of the credit card industry's foray into smart card technology were just as tepid. In 2002, American Express introduced its Blue smart card in the US, with little success:

“But even if you have Blue (and Blue accounts for nearly 10% of AmEx’s 50 million cards), you may still have a question: What the hell does that chip (and smart cards in general) do?

The answer: Mostly, nothing. So few stores have smart-card readers that Blue relies on its magnetic strip for routine charges.”

In the meantime, the secure smart chip found its way into a number of payment cards in Europe, thus broadening the Trust Gap between the Old and New Worlds, and heightening Roland’s virtuous and vehement indignation.

(Moreno, who passed away in 2012, was a true polymath; he was an author, gourmand, inventor of curious musical instruments, and, I add without judgment, an ardent connoisseur of a wide range of earthly delights).

Next came the "Chip and PIN" model. Despite its better security — the customer had to enter a PIN after the smart card was recognized — Chip and PIN never made it to the US: not only were there no terminals into which customers could type their PINs (let alone terminals that could read the smart cards in the first place) but, just as important, the credit card companies were reluctant to disturb ingrained customer behavior.

It appeared that smart cards in the US were destined to butt up against two insurmountable obstacles: the need for a new infrastructure of payment terminals, and skepticism that American customers would change their ingrained behavior and accept them.

In 2003, I made a bad investment in the payment systems field on behalf of the venture company I had just joined. The entrepreneur who came to us had extensive "domain knowledge" and proposed an elegant way to jump over both the infrastructure and the customer behavior obstacles by forgoing the smart card altogether. Instead, he would secure the credit card's magnetic stripe.


Tim Cook Free At Last


by Jean-Louis Gassée

Trading one’s privacy for the benefit of others isn’t an easy decision. Tim Cook just made such a swap, and the reverberations are beginning to be heard.

I’m happy and relieved that Tim Cook decided to “come out”, to renounce his cherished privacy and speak of his sexual orientation in plain terms rather than veiled, contorted misdirections. The unsaid is toxic.

If you haven’t done so already, please take the time to read Tim’s I’m Proud to Be Gay Businessweek editorial. Soberly written and discreetly moving, the piece concludes with:

“…I’m doing my part, however small, to help others. We pave the sunlit path toward justice together, brick by brick. This is my brick.”

It's an admirable cause… but why should I care? Why does this 70-year-old French-born American, a happily married-up father of three adult and inexplicably civilized children, care that Cook's sexuality is now part of the public record?

[Photo: Tim Cook]

First, I like and respect Cook for what he does, how he does it, and the way he handles his critics. For the past three years he’s been bombarded by questions about Apple’s slowing growth and the absent Next Big Thing, he’s been criticized for both hastening and impeding the inevitable commoditization of All Things Apple, he’s been called a liar by the NYT. Above all, he’s had to suffer the hidden — and occasionally blatant — accusation: You’re no Steve Jobs.

Throughout it all, Cook has displayed a preternatural calm in refusing to take the bait. In a previous Monday Note, I attributed his ability to deflect the cruel jibes to his having grown up “different” in Alabama. In his editorial, Cook confirms as much:

“It’s been tough and uncomfortable at times… [but] it’s also given me the skin of a rhinoceros, which comes in handy when you’re the CEO of Apple.”

Second, I've seen the ravages of homophobia at close range. A salient and personal example is the young gay architect of our first Palo Alto house. He quickly sensed he could be open with us, and would tease my wife Brigitte by showing her pictures of a glorious group of young bucks on vacation in Greece, adding, "What a loss for females". But he also told us of his shame when he became aware of his desires in adolescence, how he knelt down every night to pray that his god would have mercy and make him "normal". His parents rejected him and refused to keep in touch, even after the HIV virus made him perilously sick.

One morning when we were driving to his place in San Francisco to deliver a painting Brigitte had made for him, his partner called and told us not to come. Our friend had just passed away, still unaccepted by his parents.

Another personal example. A local therapist, a gay Buddhist, told me he couldn’t work as an M.D. in his native Caracas because the oppressive culture wouldn’t allow a gay man to so much as touch another man — even as a doctor. When he decided to tell his parents he was gay, he had to take them to a California mountain and mellow them with a certain herb before they would hear him out, and even then they didn’t entirely embrace his “choice” of sexuality.

Years of conversation with this fellow — who's exactly my age — in a setting that facilitates honesty have brought empathy and insights that aren't prevalent or even encouraged in the Parisian culture I come from, even on the supposedly liberated Left Bank that has been home to lionized gay men such as Yves Saint Laurent and Karl Lagerfeld. (I recommend Alicia Drake's The Beautiful Fall: Lagerfeld, Saint Laurent, and Glorious Excess in 1970s Paris, a well-documented and beautifully written parallel history.)

This leads me to my third point, brought up by my wife: gays have always been accepted in creative milieus. In many fields — fashion, certainly, but even in high tech — it's almost expected that a "designer" is homosexual. Despite counterexamples such as Christian Lacroix, or our own Sir Jony, the stereotype endures.

According to the stereotype, it’s okay for “artistes” (I’ve learned the proper dismissive pronunciation, an elongated ‘eee’ after the first ’t’) to be unconventional, but serious business people must be straight. When I landed in Cupertino in 1985, I became acquainted with the creative <=> gay knee jerk. True-blue business people who didn’t like Apple took to calling us “fags” because of our “creative excesses” and disregard of the establishment.

What Brigitte likes most about Cook's coming out is that it portends a liberation of the Creative Ghetto. Cook isn't just outing himself as a gay executive; he's declaring that being gay — or "creatively excessive", or unconventional — is fully appropriate at the very top of American business. It helps, she concludes, that Apple's CEO has made his statement from a position of strength, at a time when the company's fortunes have reached a new peak and his leadership is more fully recognized than ever.

The ripples now start. Perhaps they'll bring retroactive comfort to execs such as former BP CEO John Browne who, in 2007, left his job in fear of a revelation about his lifestyle – and an affirmation to the myriad "different" people at the bottom of the pyramid.

Tim Cook brings hope of a more accepting world – both inside and outside of business. For this he must be happy, and so am I.

And, while I’m at it, Happy Birthday.

JLG@mondaynote.com

Science Fiction: Apple Makes A Toaster Fridge…


…a supremely elegant one, naturally.

Plummeting iPad sales rekindle fantasies of a hybrid device, a version that adopts PC attributes, something like a better execution of the Microsoft Surface Pro concept. Or not.

For a company that has gained a well-deserved reputation for its genre-shifting — even genre-creating — devices, it might seem odd that these devices evolve relatively slowly, almost reluctantly, after they’ve been introduced.

It took five years for the iPhone screen to evolve: from the original 3.5" display in 2007, to a doubled 326 ppi resolution on the same screen size for the June 2010 iPhone 4, to a 4" screen for the 2012 iPhone 5.

In the meantime, Samsung’s 5.3” Galaxy Note, released in 2011, was quickly followed by a 5.5” phablet version. Not to be outdone, Sony’s 2013 Xperia Z Ultra reached 6.4” (160 mm). And nothing could match the growth spurt of the long-forgotten (and discontinued) Dell Streak: from 5” in 2010 to 7” a year later.

Moreover, Apple's leadership has a reputation — again, well-deserved — for being dismissive of the notion that their inspired creations need to evolve. While dealing with the iPhone 4 antenna fracas at a specially convened press event in 2010, a feisty Steve Jobs took the opportunity to ridicule Apple's Brobdingnagian smartphone rivals, calling them "Hummers" and predicting that no one would buy a phone so big "you can't get your hand around it".

A smaller iPad? Nah, you’d have to shave your fingertips. Quoting the Grand Master in October 2010 [emphasis mine]:

“While one could increase the resolution to make up some of the difference, it is meaningless unless your tablet also includes sandpaper, so that the user can sand down their fingers to around one-quarter of their present size. Apple has done expensive user testing on touch interfaces over many years, and we really understand this stuff.

There are clear limits of how close you can place physical elements on a touch screen, before users cannot reliably tap, flick or pinch them. This is one of the key reasons we think the 10-inch screen size is the minimum size required to create great tablet apps."

For his part, Tim Cook has repeatedly used the “toaster-fridge” metaphor to dismiss the idea that the iPad needs a keyboard… and to diss hybrid tablet-PC devices such as Microsoft’s Surface Pro, starting with an April 2012 Earnings Call [emphasis and stitching mine]:

“You can converge a toaster and a refrigerator, but those aren’t going to be pleasing to the user. […] We are not going to that party, but others might from a defensive point of view.”

Recently, however, Apple management has adopted a more nuanced position. In a May 2013 AllThingsD interview, Tim Cook cautiously danced around the iPhone screen size topic — although he didn't waste the opportunity to throw a barb at Samsung [insert and emphasis mine]:

“We haven’t [done a bigger screen] so far, that doesn’t shut off the future. It takes a lot of really detailed work to do a phone right when you do the hardware, the software and services around it. We’ve chosen to put our energy in getting those right and have made the choices in order to do that and we haven’t become defocused working multiple lines.”

Sixteen months later, Apple's Fall 2014 smartphone line-up sports three screen sizes: the 4" iPhone 5C and 5S, the new 4.7" iPhone 6, and the 5.5" iPhone 6 Plus phablet.

Is this apostasy? Fecklessness?

Remarking on Jobs’ quotable but not-always-lasting pronouncements, Cook gives us this:

“[Jobs] would flip on something so fast that you would forget that he was the one taking the 180 degree polar [opposite] position the day before. I saw it daily. This is a gift, because things do change, and it takes courage to change. It takes courage to say, ‘I was wrong.’ I think he had that.”

That brings us to the future of the iPad. In the same 2012 interview, Cook expressed high hopes for Apple's tablet:

“The tablet market is going to be huge… As the ecosystem gets better and better and we continue to double down on making great products, I think the limit here is nowhere in sight.”

Less than two years after the sky-is-the-limit pronouncement, iPad unit sales started to head south and have now plummeted for three quarters in a row (-2.3%, -9%, and -13% for the latest period). This isn't to say that the iPad is losing ground to its competitors, unless you include $50 models. Microsoft just claimed $903M in Surface Pro revenue for the quarter ended last September, which, at $1K per hybrid, would be 0.9M units, or double that number if the company only sold its $499 year-old model. For reference, 12.3M iPads were sold in the same period (I don't know of any company, other than Apple, that discloses its tablet unit volume).
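
The back-of-the-envelope arithmetic behind those unit estimates, using only the figures quoted above:

```python
surface_revenue = 903e6                      # Microsoft's claimed quarterly revenue
units_if_all_pro = surface_revenue / 1000    # ~0.9M units at ~$1K per hybrid
units_if_all_cheap = surface_revenue / 499   # ~1.8M units, "double that number"

ipad_units = 12.3e6
print(f"{units_if_all_pro / 1e6:.1f}M to {units_if_all_cheap / 1e6:.1f}M "
      f"Surfaces vs. {ipad_units / 1e6:.1f}M iPads")
# 0.9M to 1.8M Surfaces vs. 12.3M iPads
```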

As Andreessen Horowitz's Benedict Evans felicitously tweeted: "There're 2 tablet markets: next-gen computing vision, where Apple has 80%, and, bigger but quite separate, the cheap TV/casual games device."

Still, the concern remains. Does the iPad own 80% of a shrinking market, or can the Cupertino team reboot sales and fulfill Tim Cook’s The Limit Is Nowhere In Sight promise?

What’s missing?

A hint might lie in plain sight at the coffee shop next door. We see laptops, a Kindle reader or two, and iPads – many with an attached keyboard. Toaster-fridges!

But here’s Craig Federighi, Apple’s Sr. VP of Software Engineering, who is fond of dismissing talk of touch-screen Macs:

“We don’t think it’s the right interface, honestly.”

I find Federighi’s remark a bit facile. Yes, touching the screen makes much more ergonomic sense for a tablet than for a laptop, but in view of the turnabouts discussed above, I don’t quite know what to make of the honestly part.

Federighi may be entombed in the OS X and iOS software caves, but can he honestly ignore the beautiful Apple Wireless Keyboard proposed as an iPad accessory, or the many Logitech, Incase, and Belkin keyboards offered in the company's online store? (Amazon ranks such keyboards between #20 and #30 in its bestseller lists.) Is he suborning others to commit the crime of toaster-fridging?

In any case, the iPad + keyboard combo is an incomplete solution. It's not that the device suffers from a lack of apps: despite the App Store's poor curation, its 675,000 iPad apps offer productivity, entertainment, education, graphic composition and editing, music creation, storytelling, and many other tools. As Father Horace (Dediu) likes to put it, the iPad can be "hired to do interesting jobs".

No, what's missing is that the iOS user interface building blocks are not keyboard-friendly. And when you start to list what needs to be done, such as adding a cursor, the iPad hybrid looks more and more like a Mac… but a Mac with smaller margins. The 128GB iPad plus an Apple Keyboard rings up at $131 less than an 11", 128GB MacBook Air. (As an added benefit, perhaps the Apple toaster-fridge would come bundled with Gene Munster's repeatedly predicted TV set.)
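
For the record, here is how that $131 gap works out, assuming the then-current US list prices (my recollection, worth double-checking):

```python
ipad_128gb = 699              # assumed iPad Air 2 128GB Wi-Fi list price
apple_wireless_keyboard = 69  # assumed accessory price
macbook_air_11_128gb = 899    # assumed 11" MacBook Air 128GB list price

combo = ipad_128gb + apple_wireless_keyboard
print(macbook_air_11_128gb - combo)  # 131
```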

On to better science fiction.

Let's imagine what might happen next quarter when Intel finally ships the long-promised Broadwell processors. The new chips' primary selling point is reduced power consumption. Broadwell probably won't dislodge ARM SoCs from smartphones, but a reduced appetite for electricity could enable a smaller, slimmer, lighter MacBook Air 2, with or without a double (linear) density Retina display.

Now consider last quarter’s iPad and Mac numbers, compared to the previous year:

[Table: last quarter's iPad and Mac unit sales vs. the year-ago quarter]

Mac units grew 25% year-on-year, while iPads experienced a 7% decrease.

You're in Apple's driver's seat: do you try to make the iPad feel more like a Mac, despite the risks on many levels (internal engineering, app developers, UI issues)? Or do you let nature take its course and let the segment of more demanding users gravitate to the Mac, cannibalizing iPad sales as a result? Put another way, are you willing to risk the satisfaction of users who enjoy "pure tablet" simplicity in order to win over customers who will naturally choose a nimbler Mac?

JLG@mondaynote.com

PS: John Kirk just published a column titled The Apple Mac Takes Its Place In The Post-PC World, in which he digs up a prophetic Gates quote and explains the rise of the Mac as the weapon of choice for power users.

The two things that could hurt Google 


Google’s recent Search Box feature is but one example of the internet giant’s propensity to use weird ideas to inflict damage upon itself. This sheds light on two serious dangers for Google: Its growing disconnection from the real world and its communication shortcomings. 

At first, the improved Google search box discreetly introduced on September 5 sounded like a terrific idea: you enter the name of a retailer — say Target or Amazon — and, within Google's search results page, a second, dedicated search box shows up in which you can search the retailer's inventory. Weirdly enough, this new feature was not announced in a press release, just in a casual Google Webmaster Central Blog post aimed at the tech in-crowd.

Evidently, it was also supposed to be a serious commercial enhancer for the search engine. Here is what it looked like as recently as yesterday:

[Screenshots: the dedicated retailer search box on Google.fr, surrounded by Amazon ad modules]

Google wins on both ends: it keeps users on its own site (a good way to bypass the Amazon gravity well) while, in passing, cashing in on ad modules purchased, in this case, both by Amazon.fr itself, bidding for the keyword "perceuse" (drill) on Google.fr, and by Amazon's competitors offering the same appliance (and whose bids were lower).

In fairness, the Google Webmaster Blog explains how to bypass the second stage and make a search land directly on the site, Amazon.fr in our example. Many US e-commerce sites did so. Why Amazon didn't is still unclear.
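
For reference, the mechanism the Webmaster Central post describes relies on schema.org SearchAction markup placed on a site's home page. A minimal sketch follows, with example.com standing in for the retailer's domain (the exact shape of the markup is worth verifying against the post itself):

```python
import json

# "Sitelinks search box" markup: tells Google where to send the query
# typed into the box so results land on the site's own search page.
markup = {
    "@context": "https://schema.org",
    "@type": "WebSite",
    "url": "https://www.example.com/",
    "potentialAction": {
        "@type": "SearchAction",
        "target": "https://www.example.com/search?q={search_term_string}",
        "query-input": "required name=search_term_string",
    },
}

# Embed the output on the home page in a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```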

Needless to say, this new feature triggered outrage from many e-commerce sites, especially in Europe. (I captured these screenshots on Google.fr because no ads showed up for US retailers, most likely because I'm browsing from Paris.)

For Google's opponents, it was welcome ammunition. The Open Internet Project immediately summoned a press conference (last Thursday, Oct. 23), inviting journalists seen as supportive of its cause. In a previous Monday Note (see Google and the European media: Back to the Ice Age), I told the story of this advocacy group, mostly controlled by the German publishing giant Axel Springer AG and the French media group Lagardère Active. The latter's CEO, Denis Olivennes, is well-known for his deft political maneuvers, much less so for his business acumen, as he missed scores of digital trains in his long career in retail (he headed the French retailer Fnac) and in the media business.

Realizing its mistake, Google quickly pulled back, removing the search box from several retailers' sites and announcing (though unofficially) that it was working on an opt-out system.

This incident is the perfect illustration of two major Google liabilities.

One: Google's disconnect from the outside world keeps growing. More than ever, it looks like an insulated community, nurturing its own vision of the digital world, with less and less concern for its users, who also happen to be its customers. Google seems to live in its own space-time (which is not entirely a figure of speech, since the company maintains its own set of atomic clocks to synchronize its data centers across the world, independently of official time sources).

You can actually feel it when hanging around Google's vast campus, where large luxury buses coming from San Francisco pour out scores of young people, mostly male (70%), mostly white (61%), produced by the same set of top universities (in that order: Stanford, UC Berkeley, Carnegie Mellon, MIT, UCLA…). They are pampered in the best possible way, with free food, on-site dental care, etc. They see the world through the mirrored glass of their offices, their computer screens, and the reams of data that constitute their daily reality.

Google is a brainy but also messy company in which the left hemisphere ignores what the right one does. Since the right one (the engineers) is particularly creative and productive, the left brain suffers a lot. In this recent case, a group of techies working in the huge search division (several thousand people) came up with the idea of an improved search box. Higher up, near the top, someone green-lighted the idea, which went live in early September. Many people from the left hemisphere — communication, legal, public affairs — might have been kept in the dark, not even willfully, by the engineering team, but simply out of natural cockiness (or naiveté). However, I also suspect the business side of the company was in the loop ("Google" and "candor" make a solid oxymoron).

Two: Google has a chronic communication problem. The digital ecosystem is known for quickly testing and learning (as opposed to legacy media, which are more into staying and sinking). In practical terms, these companies fire first and reflect afterwards. And sometimes retract. In the search box incident, the right attitude would have been to put out a communiqué saying, basically: "Our genuine priority was to improve the user experience [the mandatory BS], but we found out that many e-retailers strongly disliked this new feature. As a result, we took the following steps, blablabla." Instead, Google did nothing of the sort, only getting its engineering staff to quietly remove the offending search box.

There is a pattern to Google’s inability to communicate properly. You almost discover by accident that these people are doing stunning things in many fields. When the company is questioned, it almost never responds by providing solid data to make its point — simply unbelievable from a company so obsessed with hard facts. Recall Google’s internal adoption of W. Edwards Deming’s motto: In God we trust; all others bring data.

In parallel, the company practices access journalism, picking the writer of its choosing and giving him or her a heads-up on a specific subject, hoping for a good story. Here are two examples, from Wired and The Atlantic.

341_AcceessJ_1

341_accessJour_2

These long-read, timely “exclusives” were reported on location from New Zealand and Australia, respectively. They are actually great, balanced pieces: both Wired’s Steven Levy and The Atlantic’s Alexis Madrigal are fine journalists.

While it never misses an opportunity to mention its vulnerability, Google is better than anyone else at nurturing it. As Mikhail Gorbachev used to say about the crumbling USSR: “The steering is not connected to the wheels.” We all know what happened.

frederic.filloux@mondaynote.com

The iPad’s Future

 

The new iPad Air 2 is more than a mere iteration, but the real revolution in the iPad line may be heralded by the introduction of the iPhone 6 Plus.

The new iPad Air 2 looks and feels terrific. Hold an iPad mini in one hand and an iPad Air 2 in the other — they seem to weigh about the same. This is an illusion: The 341-gram mini is lighter than the 444-gram Air 2 (.75 vs. .98 pounds; both with cellular equipment), but the Air 2 is almost impossibly thin. At 6.1 mm, the Air 2 makes the mini’s 7.4 mm feel bulky.

340_aplSlim

The iPad Air 2 also has an improved screen, a better camera, enhanced motion capture, faster processing, and, perhaps most important, Touch ID, Apple’s fingerprint recognition system. This is a bigger deal than initially reported. For businesses with increasingly stringent security requirements, Touch ID is a welcome replacement for annoying password entry and will help sell iPads in “compliance-burdened” enterprises. (On this, and the rest of Apple’s announcements, see Charles Arthur’s column in The Guardian, IMHO the best overview.)

And liberation from the password or, more important, from lazy security, isn’t limited to IT-controlled environments. I hear from normal humans that they love the Apple Pay + Touch ID combination for their online shopping, an activity that was previously more convenient on a conventional PC.

If a MacBook Air showed up with a comparable pile of improvements, there would be oohs and aahs all over the Kommentariat. Instead, the slimmed-down, sped-up iPad Air 2 has been met with either tepid, supercilious praise (“If the iPad has never appealed to you as a product, the Air 2 probably won’t change your mind”; CNET) or borderline dismissal on the grounds that it won’t fix the iPad’s slowing sales (“But it is not clear that making the iPad Air 2 the Twiggy of tablet devices will be enough to reinvigorate Apple’s iPad sales”; NYT).

Indeed, after growing faster than anything in tech history, tablets have stalled. For the past three quarters, unit sales have fallen: iPad sales declined by 2.29% in the first (calendar) quarter of 2014 versus the same quarter in 2013, and they fell by 9% in Q2:

340_appl_tabl

(A thank-you to Apple for providing actual unit and revenue numbers for their product lines — does any other company do that?)

When Apple releases its fiscal Q4 numbers this coming Monday, we’ll find out how “poorly” the iPad did in the July-to-September period. We don’t expect the numbers to show a turnaround, not for the quarter and certainly not for the entire fiscal year.

Some folks look at these numbers and question the device’s future (Apple iPad Fad is Over). But technological viability and short-term sales effects are two different topics.

In The iPad Is a Tease last April and The Sweet Spot On Apple’s Racket in August, I tried to separate the merits of the tablet genre, which I see as established and durable, from the unreasonable expectations that arose from the sense of liberation from PC obfuscation. If you see the tablet as a one-for-one replacement for a PC, you’ll be disappointed, and the falling iPad sales will look like an inevitable skid into obsolescence. I flirted with membership in that camp when I accused the iPad of being unsympathetic to “ambitious” users (iPad and File Systems: Failure of Empathy; in my defense, that was in early 2013 — eons ago in tech time).

I’ve since recanted. Instead of a hybrid product as promoted by Microsoft, the sweet spot in Apple’s business model seems to be a tablet and a laptop, each one used for what it does best, unencumbered by hybridization.

As Tim Cook noted last week, Mac sales (laptops, mostly) grew 18% in the last reported quarter. This time, contrary to earlier expectations, it looks like the Mac is cannibalizing the iPad… not a bad “problem” to have. And it’s nothing like the evisceration of iPod sales after the iPhone was introduced. With the advent of the iPhone, the music player became an ingredient; it was no longer a standalone genre.

The new Air 2 won’t put iPad sales back on their previous growth curve… and I don’t think Apple is troubled by this. Making the iPad Air nimbler and more useful, a stand-out in a crowd of tablets, that’s Apple’s strategy, and it’s more than good enough — for the time being.

Talk of Apple’s game plan brings us to the iPhone 6 Plus. (These lengthening product names bring back bad memories from the auto industry, but what can Apple do?) Does the new, larger iPhone say something about the future of the iPad mini?

I once thought the mini was the “real” iPad because I could carry it everywhere in a jacket pocket. But about two weeks ago I bought an iPhone 6 Plus, and I haven’t touched my mini since. (As punishment for my sin, I found 52 apps awaiting an update when I finally turned on the mini this morning…) Now I have an “iPad micro” in my (front) jeans pocket…and it makes phone calls.

With the introduction of the iPhone 6 Plus, the iDevices playing field has changed: A broader range of iPhones could “chase” the iPad upwards, creating opportunity for a beefier “iPad Pro”. Or perhaps Apple will use its now-proven microprocessor design muscle to make a lighter, nimbler MacBook Air.

Whatever Apple does next, the iPhone 6 Plus might prove to be a turning point.

JLG@mondaynote.com

HP’s Old Curses

 

Finally! HP did what everyone but its CEO and Board thought inevitable: It spun off the commoditized PC and printing businesses. This is an opportunity to look deeper into HP’s culture for the roots of today’s probably unsolvable problems.

The visionary sheep of Corporate America are making a sharp 180º turn in remarkable lockstep. Conglomerates and diversification strategies are out. Focus, focus, focus is now the path to growth and earnings purity.

As reported in last week’s Monday Note, eBay’s John Donahoe no longer believes that eBay and PayPal “make sense together”; splitting the companies “gives the kind of strategic focus and flexibility that we think will be necessary in the coming period”. This week, Symantec announced that it will spin off its storage division (née Veritas) so that “the businesses would be able to focus better on growth opportunities including M&A”.

And now Meg Whitman tells us that HP will be “a lot more nimble, a lot more focused” as two independent companies: HP Inc. for PCs and printers, Hewlett Packard Enterprise for everything else.

Spinning off the PC and printer business made sense three years ago when Léo Apotheker lost his CEO job for suggesting it, and it still makes sense today, but this doesn’t mean that an independent HP PC company will stay forever independent. In a declining PC market that it once dominated, HP has fallen behind Lenovo, the company that acquired IBM’s PC business and made the iconic ThinkPad even more ubiquitous. HP Inc. will also face a newly energized Dell, as well as determined Asian makers such as Acer and Asus. That Acer is losing money and Asus’ profits have fallen by 24% will make the PC market even more prone to price wars and consolidation. It doesn’t take much imagination to foresee HP Inc. shareholders agitating for a sale.

Many think that Hewlett Packard Enterprise’s future isn’t so bright, either. The company’s latest numbers show that the enterprise business, which competes with the likes of IBM, Oracle, and SAP, isn’t growing. As with the PC business, such an unexciting state of affairs leads to talk of consolidation, of the proverbial “merger of equals”.

Such unhappy prospects for what was once a pillar of Silicon Valley lead to bitter criticism of a succession of executives and of an almost surreal procession of bad Board decisions. Three years ago, I partook in such criticism in a Monday Note titled How Bad Boards Kill Companies: HP. That came after an even older column, The Incumbent’s Curse: HP, in which I wistfully contemplated the company’s rise and fall.

I’m fascinated by the enduring power, both negative and positive, of corporate cultures, of the under-the-surface system of emotions and permissions. After thinking about it, I feel HP’s current problems are rooted more deeply and started far earlier than the Board’s decisions and the sorry parade of executives over the past 15 years.

Founded in 1939, HP spent a quarter century following one instinct: Make products for the guy at the next bench. HP engineers could identify with their customers because their customers were people just like them… it was nerd heaven.

HP’s line of pocket calculators is the exemplar of a company following its instincts. They worked well because they appealed to techies. The amazingly successful HP-80 was a staple of the financial world; its successor, the HP-12C, is still sold today.

But HP’s initial success bred a strain of Because We Can that led the company into markets for which its culture had no natural feeling. I’m not just referring to the bizarre attempt in 1977 to sell the HP-01 “smartwatch” through jewelry stores…

Hewlett_Packard_Digital_Watch_Modell_1_1977

No, I’m referring to computers. Not the technical/scientific desktop kind, but computers that were marketed to corporate IT departments. In the late ’60s, HP embarked on the overly ambitious Omega project, a 32-bit, real-time computer that was cancelled in 1970. The Because We Can impulse of HP engineers wasn’t supported by a reliable internal representation of the customer’s ways, wants, and emotions. (A related but much more modest project, the 16-bit Alpha, ultimately led to the successful HP 3000 — but even the HP 3000 had a difficult birth.)

Similarly, when 8-bit microprocessors emerged in 1974, HP engineers had no insights into the desires of the unwashed hobbyist. They couldn’t understand why anyone would embrace designs that were clearly inferior to their pristine 16-bit 9800 series of desktop machines.

By the late ’70s, the company was bisected into engineers who stuck to the “guy at the next bench” approach and engineers who targeted the IT workers they mistakenly thought they understood. Later, in 1999, the instrument engineers and products — the “real” HP to many of us — were split off into Agilent, a relatively small business that’s not very profitable. Agilent’s less than $7B in revenue is nothing compared to the more than $100B in yearly revenue of the pre-split HP.

In all industries, some companies manage to stick to their story, while others drift from the script. I’m thinking of Volkswagen and its 40-year-old Golf (not the misbegotten Phaeton) versus Honda’s sprightly 1972 Civic hatchback, which later lost its soul and turned into today’s banal little sedan. (To be fair, I see the Civic as alive and well in the Honda Fit.)

In the tech world, Oracle has kept to the plot – no doubt because the founder, Larry Ellison, is still at the helm after 37 years. Others, like Cisco, make bizarre acquisitions: Flip, a consumer camera company that it quickly shut down, and the home networking company Linksys (purchased at a time when CEO John Chambers called The Home his company’s next frontier). And now Cisco is going after the $19T (trillion!) Internet of Things.

The now-dysfunctional Wintel duo lost the plot by letting the PC-centric intuitions that had worked so well for so long blind them to the fact that mobile devices aren’t “PCs – only smaller”.

I feel a personal melancholy when I see that the once-mighty HP has drifted from its instincts. The company hired me in June 1968 to launch its first desktop computer on the French market. After years in the weeds, this was the chance of a lifetime for this geeky college dropout. At the time I joined, HP’s vision was concentrated. It rarely acquired other companies… why buy what you can build yourself? That all changed, and in a big way, in the ’90s.

To this day, I’m grateful for the kindness and patience of the HP that took me in. It was the company that David Packard describes in The HP Way, not today’s tired conglomeration.

JLG@mondaynote.com