
The NYTimes could be worth $19bn instead of $2bn  

 

by Frédéric Filloux

Some legacy media assets are vastly undervalued. A few clues in four charts.

Recent annual reports and estimates for the calendar year 2014 suggest interesting comparisons between the financial performance of media (either legacy or digital) and Internet giants.

In the charts below, I look at seven companies, each in a class by itself:

[Table: revenue, audience, membership, and valuation for the seven companies]
A few explanations are required.

For two companies, in order to make comparisons relevant, I broke down “digital revenues” as they appear in financial statements: $351m for the New York Times ($182m in digital advertising + $169m in digital subscriptions) and $106m for The Guardian (the equivalent of the £69.5m reported in the Guardian Media Group annual report, PDF here).

Audience numbers come from ComScore (December 2014 report) to provide a common reference. Note that traffic data vary when looking at other sources – which underscores the urgent need for an industry-wide measurement standard.

The “Members” column seemed necessary because traffic as measured by monthly uniques differs from actual membership. Such a distinction doesn’t apply to the news media in this group (NYT, Guardian, BuzzFeed).

For valuations, stock data provide precise market cap figures, but I didn’t venture to put a number on the Guardian’s value. For BuzzFeed, the $850m figure is based on its latest round of investment. I selected BuzzFeed because it might be one of the most interesting properties to watch this year: It built a huge audience of 77m UVs (some say the number could be over 100m), mostly by milking endless stacks of listicles, with clever marketing and an abundance of native ads. At the same time, BuzzFeed is poaching a number of first-class editors and writers, including, recently, from the Guardian and ProPublica; it will be interesting to see how BuzzFeed uses this talent pool. (For the record: If founder Jonah Peretti and editor-in-chief Ben Smith pull this off, I will gladly revise my harsh opinion of BuzzFeed.)

The New York Times is an obvious choice: It belongs to the tiny guild of legacy media that did almost everything right in their conversion to digital. The $169m in revenue coming from its 910,000 digital subscribers didn’t exist at all seven years ago, and digital advertising is now picking up thanks to a decisive shift to native formats. Amazingly enough, the New York Times sales team is said to now feature a one-to-one ratio between hardcore salespeople and creative people who engineer bespoke operations for advertisers. Altogether, last year’s $351m in digital revenue far surpasses newsroom costs (about $200m).

A “normal” board of directors would certainly ask management why it does not consider a drastic downsizing of newspaper operations, keeping only the fat weekend edition. (I believe the Times will eventually go there.)

The Guardian also deserves to be in this group: It became a global and digital powerhouse that never yielded to the click-bait temptation. From its journalistic breadth and depth to the design of its web site and applications, it is the gold standard of the profession – but regrettably not for its financial performance (read Henry Mance’s piece in the FT).

Coming back to our analysis, Google unsurprisingly crushes all competitors when it comes to its financial performance measured against its audience (counted in monthly unique visitors):

[Chart: annual revenue per monthly unique visitor]
Google monetizes its UVs almost five times better than its arch-rival Facebook, and 46 times better than The New York Times Digital. BuzzFeed generates a tiny $1.30 per unique visitor per year.
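The metric behind this chart is simply annual revenue divided by monthly unique visitors. Below is a minimal sketch of that calculation, using only figures quoted in this piece; the UV counts for the other companies come from the ComScore data shown in the first table and are not hard-coded here.

```python
def revenue_per_uv(annual_revenue_usd, monthly_uniques):
    """Annual revenue divided by monthly unique visitors (the chart's metric)."""
    return annual_revenue_usd / monthly_uniques

# BuzzFeed, using only figures quoted in this article:
# roughly $100m in revenue (implied by the 8.5x multiple on an $850m valuation)
# spread over 77m monthly unique visitors.
print(round(revenue_per_uv(100e6, 77e6), 2))   # ~1.3 dollars per UV per year

# For NYT Digital ($351m in digital revenue) or Google, plug in their ComScore
# UV counts from the first table to reproduce the ratios quoted above.
```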

When measured in terms of membership — which doesn’t apply to the news media — the gap between the search engine and the rest of the pack is even greater:

[Chart: annual revenue per member]

The valuation approach reveals an apparent break in financial logic. While being a giant in every respect (revenue, profit, market share, R&D spending, staffing, etc.), Google appears strangely undervalued: when you divide its market capitalization by its actual revenue, the multiple is not even 6 times revenue. By comparison, BuzzFeed trades at a multiple of 8.5 times its presumed revenue (the multiple could fall below 6 if its audience remains the same and its projected revenue increases by 50% this year, as management suggests). Conversely, when using this market cap/revenue metric, the top three (Twitter, Facebook, and even LinkedIn) show strong signs of overvaluation:

[Chart: market capitalization as a multiple of revenue]
Through this lens, if Wall Street could assign to The New York Times the ratio Silicon Valley grants BuzzFeed (8.5 instead of a paltry 1.4), the Times would be worth about $19bn instead of the current $2.2bn.
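For readers who want to redo the back-of-the-envelope math, the metric is market capitalization divided by annual revenue, and the thought experiment simply applies one company's multiple to another's revenue base. A minimal sketch, using the figures quoted above; the exact NYT result depends on which revenue base the chart uses.

```python
def revenue_multiple(market_cap, annual_revenue):
    """Market capitalization expressed as a multiple of annual revenue."""
    return market_cap / annual_revenue

def implied_market_cap(target_multiple, annual_revenue):
    """Re-price a company by applying someone else's multiple to its revenue."""
    return target_multiple * annual_revenue

# BuzzFeed, per the article: an $850m valuation on a presumed ~$100m in revenue.
print(round(revenue_multiple(850e6, 100e6), 1))   # ~8.5x

# The NYT thought experiment: apply BuzzFeed's 8.5x to the revenue base behind
# the Times' current ~$2.2bn market cap (the base itself is read off the chart).
# implied_market_cap(8.5, nyt_revenue_base)
```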

Again, there is no doubt that Wall Street would respond enthusiastically to a major shrinkage of NYTCo’s print operations; but regardless of the drag caused by the newspaper itself, the valuation gap is absurdly wide when considering that 75% of BuzzFeed traffic is actually controlled by Facebook, certainly not the most reliably unselfish partner.

As if the above wasn’t enough, a final look confirms the oddity of market valuations. Riding the unabated trust of its investors, BuzzFeed brings in one-third as much money per employee as The New York Times does (all sources of revenue included this time):

[Chart: annual revenue per employee]
I leave it to the reader to decide whether this is a bubble that rewards hype and clever marketing, or if the NYT is an unsung investment opportunity.

frederic.filloux@mondaynote.com

2015 Digital Media: A Call For a Big Business Model Cleanup 

 

by Frédéric Filloux

Digital media are stuck with bad economics resulting in relentless deflation. It’s time to wake up and make 2015 the year of radical — and concerted — solutions.

Trends in digital advertising feel like an endless agony to me. To sum up: there is no sign of improvement on the performance side; a growing percentage of ads are sold in bulk; click-fraud and user rejection are on the rise, all resulting in ceaseless deflation. Call it the J-Curve of digital advertising, as it will get worse before it gets better (it must – and it will).

Here is a quick summary of issues and possible solutions.

First, the rise of ad-blocking systems, the subject of a December 8th, 2014 Monday Note. That column was our most viewed and shared ever, which reflects growing concern about the matter. Last week, AdBlockPlus proudly announced a large-scale deployment solution: with a few clicks, system administrators can now install AdBlockPlus on an entire network of machines. This is yet another clue that the problem won’t go away.

There are basically three approaches to the issue.

The most obvious one is to use the court system against Eyeo GmbH, the company operating AdBlockPlus. After all, the Acceptable Ads mechanism, in which publishers pay to pass unimpeded through ABP filters, is a form of blackmail. I don’t see how Eyeo will avoid collective action by publishers. Lawyers — especially in Europe — are loading their guns.

The second approach is to dissuade users from installing ABP on their browsers. It is up to browser makers (Google, Microsoft, Apple) to disable ABP’s extensions, but they don’t necessarily have much of an incentive to do so. Browser technology is about the quality of the user experience when surfing the web or executing transactions. Performance relies on sophisticated techniques such as developing the best “virtual machines” (for a glimpse of VM technology, the 2009 FT Magazine piece The Genius behind Google’s browser is a must-read). Therefore, if the advertising community, in its shortsighted greed, ends up saturating the internet with sloppy ads that users massively reject, and if such excesses lead a third-party developer to create a piece of software that eliminates the annoyance, it should be no surprise to see the three browser providers tempted to allow ad-blocking technologies.

Google is in a peculiar position here because it also operates the ad-serving system DFP (DoubleClick for Publishers). Financially speaking, Google doesn’t necessarily care whether a banner is actually viewed because DFP collects its cut when the ad is served. But, taking the long view, as Google people usually do, we can be sure they will address the issue in coming months.

The best way to address the growing rejection of ads is to attack it at the root: It’s up to the advertising sector to wake up and work on better ads that everybody will be happy with.

But reversing this trend will take time. The perversity of ad blocking is that everyone ends up affected by the bad practices of a minority: once a user installs ABP on her computer after repeated visits to a site where ads are badly implemented, the chances that she will deliberately disable ABP on sites that carefully manage their ads are next to zero.

As if the AdBlock challenge wasn’t enough, the commercial internet has to deal with growing “Bot Fraud”. Ads viewed by robots generating fake — but billable — impressions have become a plague, with the rate of bogus clicks said to be around 36% (see this piece in MIT’s Technology Review). This is another serious problem for the industry when advertisers are potentially defrauded on such a scale: as an example, last year, FT.com revealed that up to 57% of the viewers of a Mercedes-Benz campaign were actually robots.

In the digital advertising sector, the places to find some relief remain branded content and native ads. Depending on how deals are structured, prices are still high and such ad forms can evade blocking. Still, to durably avoid user rejection, publishers should be selective and demanding about the quality of the branded content they carry.

Another ingredient of the cleanup involves Internet usage metrics — fixed and mobile. More than ever, our industry calls for reliable, credible and, above all, standardized measurement systems. The usual ‘Unique Visitor’ and page view counts can’t remain the de rigueur metrics as both are too easily faked. The ad market and publishers need more granular metrics that reflect actual reader engagement (a more critical measure when reading in-depth content vs. devouring listicles dotted with cheap ads). Could it be time spent on a piece of content, or shares on social networks? One sure thing, though: the user needs to be counted across the platforms she uses. It is essential to reconcile the single individual who is behind a variety of devices: PC, smartphone or tablet. To understand her attention level, and to infer its monetary value, we need to know when, for how long, and in which situations she uses her devices. Whether it is done anonymously or based on a real ID, retrieving actual customer data is critical.
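As an illustration of what a more granular, cross-device metric could look like (a hypothetical sketch, not an existing industry standard), one could reconcile a reader's sessions across devices under a single ID and score engagement on time spent and shares rather than raw page views:

```python
from collections import defaultdict

# Hypothetical session records: one entry per visit, on any device.
# 'reader_id' is the reconciled identity (anonymous or based on a real ID).
sessions = [
    {"reader_id": "r1", "device": "smartphone", "seconds": 240, "shares": 1},
    {"reader_id": "r1", "device": "pc",         "seconds": 600, "shares": 0},
    {"reader_id": "r2", "device": "tablet",     "seconds": 45,  "shares": 0},
]

def engagement_by_reader(sessions, share_weight=120):
    """Aggregate attention per reconciled reader: seconds spent plus a bonus
    per share (arbitrarily worth two minutes of attention here)."""
    scores = defaultdict(float)
    for s in sessions:
        scores[s["reader_id"]] += s["seconds"] + share_weight * s["shares"]
    return dict(scores)

print(engagement_by_reader(sessions))
# {'r1': 960.0, 'r2': 45.0} -- the same person on two devices is counted once.
```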

The answer is complicated, but one thing is sure: to lift its depleted economics, the industry needs to agree on something solid and long-lasting.

The media industry’s solutions to the problems we just discussed will have a significant impact on digital information. As long as the advertising market remains in today’s mess, everybody loses: Advertisers express their dissatisfaction with more pressure on the prices they’re willing to pay; intermediaries — media buying agencies — come under more scrutiny; and, in the end, publishers’ P&Ls suffer. The two ‘mega-gatekeepers’ of the digital world — Facebook and Google — could play a critical role in such a normalization. Unfortunately, their interests diverge. Not a month goes by without competition between them increasing, on topics ranging from user attention to mobile in emerging markets, internet in the sky, and artificial intelligence… At this stage, the result of this multi-front war is hard to predict.

frederic.filloux@mondaynote.com

The iPhone’s 8th Anniversary

 

by Jean-Louis Gassée

Smartphones existed before Steve Jobs introduced the iPhone on January 9th, 2007. But by upending existing technology platforms, application distribution, and carrier business models, he kickstarted a new era of computing whose impact is yet to be fully understood.

I knew one of the victims of the Charlie Hebdo massacre: Bernard Maris. We weren’t friends, just pleasantly casual acquaintances through the in-law side of my family. Typical Parisian dinner conversations “rearranging the world” led to a Palo Alto visit and an interview for a small Charlie Hebdo piece, complete with the requisite risqué drawing.

[Photo: Bernard Maris]

After several false starts writing about the events in Paris, I’ve come to the conclusion that I’m too angry at too many targets, starting with certain cowards in the media who don’t understand that the fear of antagonizing oppressors perpetuates their power, that no good culture can exist without a dose of bad taste, that the demand to never be offended is inhumane. As Cardinal André Vingt-Trois, archbishop of Paris, puts it: ‘A caricature, even in bad taste, criticism, even extremely unfair, cannot be put on the same plane as murder.’

(Lovers of ironic detail will note that Cardinal Vingt-Trois was once the titular bishop of Thibilis, Algeria. In partibus infidelium.)

Instead, I will turn to a more positive train of thought: The beginning of the Smartphone 2.0 era.

Eight years ago, Steve Jobs walked onto the stage at MacWorld San Francisco and gave a masterful performance. His presentation is worth revisiting from time to time, a benchmark against which to evaluate a PowerPoint-addled CEO pitch or a product intro cum dance number.

In his talk, Jobs tells us that the iPhone is one of those products that, like the Mac and the iPod before it, “changes everything”. He was right, of course, but one wonders… even with his enormous ambition, did Jobs envision that the iPhone would not only transform Apple and an entire industry, but that it would affect the world well beyond the boundaries of the tech ecosystem?

If the last sentence sounds a bit grand, let’s look at the transformation of the smartphone industry, starting with Apple.

In 2006, the year before the iPhone, Apple revenue was $19B (for the Fiscal Year ending in September). That year, iPod revenue exceeded the Mac’s, $7.7B to $7.3B… but no one claimed that Apple had become an iPod company.

In 2007, revenue climbed to $24B, a nice 26% progression. Mac sales retook the lead ($10.3B vs. $8.3B for the iPod), and iPhone sales didn’t register ($123M) as shipments started late in the Fiscal Year and accounting’s treatment of revenue blurred the picture.

In 2008, revenue increased to $32.5B, up 35%. iPhone revenue began to weigh in at $1.8B, far behind $9B for the iPod and $14.3B for the Mac (a nice 39% uptick).

In 2009, revenue rose by a more modest 12%, to $36.5B — this was the financial crisis. iPod declined to $8B (- 11%) as its functionality was increasingly absorbed by the iPhone, and the Mac declined a bit to $13.8B (- 3%). But these shortfalls were more than compensated for by iPhone revenue of $6.8B (+ 266%), allowing the company to post a $4B increase for the year. This was just the beginning. (And even the beginning was bigger than originally thought: Due to a change in revenue recognition esoterica, 2009 iPhone revenue would be recalculated at $13.3B.)

In 2010, iPhone revenue shot up to $25B, pushing Apple’s overall revenue up by a phenomenal 52% to $65B. The iPhone now represented more than 1/3rd of total revenue.

In 2011, growth accelerated: revenue reached $108B (+ 66%), more than five times the pre-iPhone 2006 number. The iPhone reached $47B (+ 87%), now almost half of the company’s total.

For 2012, sales shot up to $156.5B (+ 45%), and the iPhone reached $80.5B (+ 71%). At such massive absolute numbers, 45% and 71% growth look almost unnatural, as they appear to violate the Law of Large Numbers. As this happened, the iPhone crossed the 50% of total revenue threshold, and accounted for probably 2/3rds of Apple’s total profit.

Apple’s growth slowed in 2013 to a modest + 9%, with $171B in overall revenue. The iPhone, weighing in at $91.3B (+ 16%), provided most ($12.6B) of the modest ($14B) overall revenue increase and 53% of total sales.

Last year, growth slowed just a bit more: $182.8B (+ 7%), with the iPhone reaching $102B (+ 12%). Once again, the iPhone contributed most of the total revenue growth ($10.7B of $11.9B) and fetched 56% of the company’s sales. Notably, the iPad showed a 5% decrease and, at $2.3B, the iPod is becoming less and less relevant. (Although, how many companies would kill for $2.3B in music player revenue?)

The excellent Statista portal gives us a picture of the iPhone’s emergence as Apple’s key product:

[Chart: the iPhone’s share of Apple revenue over time (Statista)]

While the company is about ten times larger than it was before the iPhone came out, the smartphone industry has become a nearly trillion-dollar business. Depending on how we count units and dollars, if we peg Apple at a 12% market share, worldwide smartphone industry revenue reaches about $800B. If we grant Apple just a 10% share, we have our $1T number.
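The arithmetic behind those figures is simply Apple's iPhone revenue divided by an assumed market share; a minimal sketch (the 12% and 10% shares are the rough assumptions used above, not measured values):

```python
def industry_revenue(apple_iphone_revenue, apple_share):
    """Estimate worldwide smartphone revenue from Apple's revenue and share."""
    return apple_iphone_revenue / apple_share

iphone_fy2014_revenue = 102e9   # from the FY2014 figures above

print(industry_revenue(iphone_fy2014_revenue, 0.12) / 1e9)  # 850.0  -> "about $800B"
print(industry_revenue(iphone_fy2014_revenue, 0.10) / 1e9)  # 1020.0 -> "our $1T number"
```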

For reference, still according to Statista, the two largest auto companies, Toyota and the Volkswagen Group, accounted for $485B in revenue in 2013:

[Chart: 2013 revenue of Toyota and the Volkswagen Group (Statista)]

However we calculate its size, whether we place it at $800B or $1T, what we mustn’t do is think that the smartphone industry merely grew to this number. Today’s smartphone business has little in common with what it was in 2006.

Consider that Motorola “invented” the cell phone. Now Motorola is (essentially) gone: Acquired by Google, pawned off to Lenovo, likely to do well in its new owner’s Chinese line.

Nokia: The Finnish company stole the crown from Motorola when cell phones became digital and once shipped more than 100M phones per quarter. Since then, Nokia was Osborned by its new CEO, Stephen Elop, an ex-Microsoft exec, and is now owned by Elop’s former employer. With 5% or less market share, Nokia is a waste of Microsoft resources and credibility… unless they switch to making Android phones as a vehicle for the company’s “Mobile First, Cloud First” apps.

Palm, a company that made a credible smartphone by building on their PDA expertise, was sold to HP and destroyed by it. They’re worse than dead, with a necrophiliac owner (TCL), and LG humping other parts of the corpse for their WebOS TVs and a WebOS smartwatch.

And then there’s the BlackBerry. Once the most capable of all the smartphones, with a Personal Information Manager that was ahead of its time, it was rightly nicknamed CrackBerry by its devoted users. Now BlackBerry Limited is worth less than 1/100th of Apple, and is trying to find a niche – or a buyer for its body parts.

The change in the industry is, of course, far from being solely Apple’s “fault”. In many ways, Google destroyed more incumbents than Apple. Google acquired Android in 2005, well before the iPhone appeared. According to the always assertive Tomi Ahonen, China now sports more than 2000 (!) phone brands, all based on some Android derivative. And let’s not forget the voraciousness of Apple’s giant Korean frenemy Samsung, which acts as both a supplier of key iPhone components and a competitor.

But is the industry now settled? Are any of the current incumbents, including Apple, unassailable? Market-leading Samsung appears to be challenged by both Apple at the high end and Xiaomi from below, and has recently announced more troubles. Our friend Tomi argues that Xiaomi isn’t the new Apple, but that Lenovo and Huawei are the ones to watch. And, of course, Apple is seen as a “hits” company, a business that lives and dies by its next box-office numbers — and while the numbers for the new iPhone 6 aren’t in yet, they’re likely to be very strong.

Regardless of any individual company’s business case, the overall impact of the smartphone on the world is what counts the most. In a blog post titled Tech’s Most Disruptive Impact Over the Next Five Years, Tim Bajarin argues that the real Next Big Thing isn’t the Internet of Things, Virtual Reality, or Bitcoin. These are all important advances, but nothing compared to the impact of smartphones [emphasis mine]:

“Another way to think of this is that smart phones or pocket computers connecting the next two billion people to the internet is similar to what the Gutenberg Press and the Bible were to the masses in the Middle Ages.”

As Horace Dediu notes, we’re on track to 75% US smartphone penetration by the end of 2014. The big impact to come will be getting the entire world to reach and exceed this degree of connectivity, especially in areas where there’s little or no wired connectivity.

This is what Steve Jobs started eight years ago by upending established players and carrier relationships.

JLG@mondaynote.com

My Best Reads This Year

by Frédéric Filloux

For this year’s last Monday Note, I chose to share a few interesting topics I followed in 2014. I expect many of them will stay high in next year’s news cycle. Here are my picks, in about 40 links.

The Great Mobile Takeover…

Next year, the vast majority of media will see more than 50% of their traffic coming from mobile devices (Facebook is way ahead with 65%). We might see a new breed of mobile-only quality media, but the ecosystem still has to come up with ad formats that don’t irritate audiences, and adjusting revenue streams won’t be easy. Last October, Andreessen Horowitz’s Benedict Evans came up with his Mobile is Eating the World stack of data. It points in the same direction as Mary Meeker’s bi-annual State of the Internet slide deck, reinterpreted by The Atlantic in Mobile Is Eating Global Attention: 10 Graphs on the State of the Internet.

… And How it Will Impact “The Next Billion”

Quartz coined the “Next Billion” phrase and went on to build a cluster of conferences around it (the next is May 19 in London). While 85% of the world’s population lives within range of a cell tower (if only with 2G connectivity), 4.3 billion people are still not connected to the web. They will get connected by acquiring a smartphone. According to the GSMA trade group, the number of smartphones will increase by 3 billion by 2020 as the infrastructure is built and handset prices keep falling (some currently cost less than $75).

More in this series of links from Quartz:

Internet cafes in the developing world find out what happens when everyone gets a smartphone
How to map wealth in Africa using nothing but mobile-phone minutes
How to sell gigabytes to people who’ve never heard of them
This mobile operator wants to charge $2.50 a year for access to Facebook
Kenya’s merchants are warming up to a payment system born in a Seattle basement

Last fall, BusinessWeek ran a special edition about tech outside Silicon Valley. I singled out these two pieces:
China’s Xiaomi, the World’s Fastest-Growing Phone Maker
Ten Days in Kenya With No Cash, Only a Phone in Nairobi.

Thanks to fancy technologies, 2015 will see all Internet titans competing for these billions of potential customers. In 2013, Wired came up with The Untold Story of Google’s Quest to Bring the Internet Everywhere—By Balloon, followed by this recent update, Google’s Balloon Internet Experiment, One Year Later.

Time Magazine broke all limits of “access journalism” (lots of space in exchange for an exclusive) with this cover story:

[Image: Time Magazine’s Facebook cover story]

It’s a nine-page, quasi-stenographic account of a press junket arranged by Facebook in India. In it, Lev Grossman “soberly” sums things up:

Over the past decade, humanity hasn’t just adopted Facebook; we’ve fallen on it like starving people who have been waiting for it our entire lives, as if it were the last missing piece of our social infrastructure as a species.

Since it is behind a paywall I’m not providing a link for this de facto press kit (I assume you can live without it.)

The social doubters

Not everyone has been touched by grace as Lev Grossman was. Among the skeptics, Alexis Madrigal from The Atlantic is one of my favorites. Last month, he wrote The Fall of Facebook, a contrarian piece in which he states that “the social network’s future dominance is far from assured”. He is not the only one to cast such doubt. Bloomberg, for instance, notes that Facebook’s Popularity Among Teens Dips Again, while its columnist Leonid Bershidsky, in his trademark stern way, contends Google Deserves Its Valuation, Facebook Doesn’t. On the social phenomenon, the NYT Op-Ed The Flight From Conversation by MIT professor Sherry Turkle is a must-read.

Journalism

2014 has been quite a year for journalism, with endless reverberations of the Snowden affair and the subsequent release of Citizenfour. A must-read on the documentary’s background is this NYT story:

[Image: The New York Times story on Laura Poitras]

The Snowden affair is sure to give a boost to investigative reporting.

I bet 2015 will see the rise of Pierre Omidyar’s media venture First Look Media. The project has been mocked for its stumbling debut (read Mathew Ingram’s piece First Look Media has forgotten the number one rule of startups). A few weeks ago, I spoke with Pierre and John Temple, First Look’s chief, at a conference in Phoenix, Arizona. Our discussion fell under the Chatham House Rule, meaning I’m not saying who exactly said what. To me, both men have the vision (and the funding) to build a media organization that could rattle the right cages. (A good read: The Pierre Omidyar Insurgency — New York Magazine.) I simply hope Pierre and John will look beyond the United States; there are plenty of stories in Europe as well. Still on journalism, don’t miss Dan Gillmor’s piece about The New Editors of the Internet (The Atlantic); it raises interesting questions about who controls what we see and don’t see on the Web.

Ebola was — and remains — one of the big stories of the year.

I have two friends — two American doctors — who have been on the front line in Sierra Leone and Liberia for months. There is not a single day when I don’t think about their commitment and the risks they take to help the victims of this terrible disease.

Just to grasp the gruesomeness of the situation, watch this video from Time Magazine in which photojournalist John Moore explains his coverage of the epidemic.

Mashable also published Eyewitness to Hell: Life in Ebola-Ravaged Liberia, a horrifying photo essay. Also among the must-reads: Inside the Ebola Wars and In the Ebola Ward, both by The New Yorker’s Richard Preston, an expert on the matter and author of the famous book The Hot Zone. On the economics side, Business Week came up with this cover story: How the U.S. Screwed Up in the Fight Against Ebola

[Image: Businessweek’s Ebola cover story]

The rise of the Islamic State was the other big story of the year

Here are my picks from the abundant coverage. First, Vice News’ subjective but extremely effective four-part video series was a revelation. For the first time, a reporter was embedded (sort of) with ISIS. (He had to obey the Rules for Journalists in Deir Ezzor compiled by Syria Deeply.)

More classical, but definitely a must-read, is the Guardian’s Isis: the inside story by Martin Chulov, probably the best account so far. As backgrounders, read ISIS’ Harsh Brand of Islam Is Rooted in Austere Saudi Creed (NYT), The Ancestors of ISIS (NYT), How ISIS Works (NYT) and How the US Created the Islamic State (Vice).

[miscellaneous]

Let’s conclude with subjects such as the Sony hack. First, to get an idea of the relentlessness of the cyberattacks the US permanently faces, have a look at this real-time map:

[Image: a real-time map of cyberattacks against the US]

As far as Sony is concerned, the studio’s apparent cowardice shouldn’t have surprised anyone. Still, was the stolen information legitimate news fodder? Certainly not, yells Aaron Sorkin in a New York Times Op-Ed: The Press Shouldn’t Help the Sony Hackers. Of course it is, retorts Los Angeles Times business columnist Michael Hiltzik: Why the press must report those Sony hacks.

In the Sharing Economy, Workers Find Both Freedom and Uncertainty (NYT), or the reality of being an Uber/Lyft driver. Uber will remain a big story in 2015 as its ruthlessness keeps feeding the news cycle (read Uber C.E.O. Travis Kalanick’s Warpath in Vanity Fair).

The Military’s Rough Justice on Sexual Assault (NYT) by Natasha Singer, who did extraordinary journalistic work on the women who dare to fight the institution.

And finally, another Vanity Fair feature, How Marine Salvage Master Nick Sloane Refloated Costa Concordia, and a moving reportage from The New Yorker, Weather Man: Life at a Remote Russian Weather Station, served by the work of a fabulous young photographer, Evgenia Arbugaeva, herself born in the Russian Arctic town of Tiksi.

Happy holiday reading. See you next year.

frederic.filloux@mondaynote.com

MSFT Hardware Futures

 

(Strangely, the WordPress software gives me a “Bad Gateway 502” error message when I fully spell out the name of the Redmond company.)

by Jean-Louis Gassée

Microsoft’s hardware has long been a source of minor profit and major pain. In this last 2014 Monday Note, we’ll look at the roles Microsoft’s hardware devices will play — or not —  in the company’s future.

Excluding keyboards and the occasional Philippe Starck mouse, Microsoft makes three kinds of hardware: Game consoles, PC-tablet hybrids, and smartphones. We’ll start with the oldest and least problematic category: Game consoles.

Building on the success of DOS and its suite of business applications, Microsoft brought forth the MSX reference platform in 1983. This was a Bill Gates-directed strategic move: he didn’t want to leave the low end of the market “unguarded”. Marketed as “home computers”, meaning less capable than a “serious” PC, MSX-branded machines were manufactured by the likes of Sony and Yamaha, but the platform’s only serious impact was in gaming. As the Wikipedia article says, “MSX was the platform for which major Japanese game studios, such as Konami and Hudson Soft, produced video game titles.”

For the next two decades, gaming remained a hobby for Microsoft. This changed in 2001 when the company took the matter into its own hands and built the Xbox. Again, the company wanted to guard against “home invasions”.

With its Intel processors and customized version of Windows, the first iteration of the Xbox was little more than a repackaged PC. The 2005 Xbox 360 was a heartier offering: It featured an IBM-designed PowerPC-derivative processor and what some call a “second-order derivative” of Windows 2000 ported to the new CPU.

Now we have the Xbox One. Launched in 2013, the platform is supported by a full-fledged ecosystem of apps, media store, and controllers such as the remarkable Kinect motion sensor.

Success hasn’t been easy. The first Xbox sold in modest numbers, 24 million units in about five years. Sales of the second generation Xbox 360 were better — almost 80 million through 2013 — but it was plagued with hardware problems, colloquially known as the Red Ring of Death. Estimates of the number of consoles that were afflicted range from 23% to more than 54%. Predictably, poor reliability translated into heavy financial losses, as much as $2B annually. Today’s Xbox One fares a little better: It lost only $800M for the first eight months of its life, selling 11.7M units in the process.

Microsoft’s latest numbers bundle Xbox game consoles and Surface tablet-PCs into a single Computing & Gaming category that makes up $9.7B of the company’s $87B in revenue for the 2014 Fiscal Year. This means Xbox consoles contribute less than 10% of total sales, which is probably why Satya Nadella, Microsoft’s new CEO, has carefully positioned the Xbox business as less than central to the company’s business:

“I want us to be comfortable to be proud of Xbox, to give it the air cover of Microsoft, but at the same time not confuse it with our core.”

In other words, the Xbox business can continue… or it could disappear. Either way, it won’t have much effect on Microsoft’s bottom line or its future.

For the moment, and with the assistance of a holiday price cut, Xbox One sales are topping those of the Sony PS4, but that shouldn’t take our attention away from a more important trend: The rise of mobile gaming. Smartphones are gaining in raw computing power, connectivity, display resolution, and, as a result, support from game developers on both Android and iOS platforms. Larger, more capable game consoles aren’t going away, but their growth is likely to slow down.

The history of Xbox problems, Nadella’s lukewarm embrace of the series, the ascendancy of mobile gaming… by comparison, the Surface tablet should look pretty good.

It doesn’t.

When Steve Ballmer introduced the Surface device in June, 2012, he justified Microsoft’s decision to compete with its own Windows licensees by the need to create a “design point”, a reference for a new type of device that would complement the “re-imagined” Windows 8.

[Image: a Surface commercial]

Two and a half years later, we know two things: Surface tablet sales have been modest (about $2B in the 2014 Fiscal Year ended June 30th), and Windows 8 frustrated so many users that Microsoft decided to re-re-imagine it and will re-introduce it as Windows 10, scheduled to be released in mid-2015.

Microsoft believes its Surface combines the best of the PC with the best of a tablet. While the hybrid form has given rise to some interesting explorations by PC makers, such as the Yoga 3 Pro by Lenovo, many critics — and not just Apple — condemn the hybrid as a compromise, as a neither-nor device that sub-optimizes both its tablet and its PC functions (see the tepid welcome given to the HP Envy).

What would happen if Microsoft stopped making Surface Pro tablets? Not much… perhaps a modest improvement in the company’s profit picture. While the latest quarter of Surface Pro 3 sales appears to have brought a small positive gross margin, Surface devices have cost Microsoft about $1.7B over the past two years. Mission accomplished for the “design point”.

We now turn to smartphones.

Under the Ballmer regime, Microsoft acquired Nokia rather than let its one and only real Windows Phone licensee collapse. It was a strategic move: Microsoft was desperate to achieve any sort of significance in the smartphone world after seeing its older Windows Mobile platform trounced by Google’s Android and Apple’s iOS.

In the latest reported quarter (ended September 30th 2014), Windows Phone hardware revenue was $2.6B. For perspective, iPhone revenue for the same period was $23.7B. Assuming that Apple enjoys about 12% of the world smartphone market, quarterly worldwide revenue for the sector works out to about $200B… of which Microsoft gets 1.3%. Perhaps worse, a recent study says that Microsoft’s share of the all-important China smartphone market is “almost non-existent at 0.4 percent”. (China now has more than twice as many smartphone users, 700M, as the US has people, 319M.)
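The same back-of-the-envelope arithmetic as in the iPhone piece above, as a sketch; the 12% Apple share is the stated assumption, and the inputs are the quarterly figures just quoted:

```python
iphone_quarterly_revenue = 23.7e9        # Apple, quarter ended September 2014
windows_phone_quarterly_revenue = 2.6e9  # Microsoft, same quarter

sector = iphone_quarterly_revenue / 0.12                         # assumed 12% share
print(round(sector / 1e9))                                       # ~198 -> "about $200B"
print(round(100 * windows_phone_quarterly_revenue / sector, 1))  # ~1.3 percent
```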

Hardware development costs are roughly independent of volume, as is the cost of running an OS development organization. But hardware production costs are unfavorably impacted by low volumes. Windows Phones sell in smaller numbers and cost more to make, putting Microsoft’s smartphone business in a dangerous downward spiral. As Horace Dediu once remarked, the phone market doesn’t forgive failure: once a phone maker falls into the red, it’s nearly impossible to climb back into the black.
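To make the downward spiral concrete, here is a toy model with purely illustrative numbers (neither the fixed cost nor the marginal cost is Microsoft's actual figure): fixed development and OS costs are spread over however many units ship, so unit cost rises as volume falls.

```python
def cost_per_unit(fixed_costs, marginal_cost, units_shipped):
    """Toy model: fixed R&D and OS costs spread over volume, plus per-unit cost."""
    return fixed_costs / units_shipped + marginal_cost

# Purely illustrative: $2B of fixed costs, $200 of marginal cost per phone.
for units in (10e6, 50e6, 200e6):
    print(f"{units / 1e6:.0f}M units -> ${cost_per_unit(2e9, 200, units):.0f} per unit")
# 10M units -> $400, 50M -> $240, 200M -> $210: low volume carries the burden.
```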

What does all this mean for Microsoft?

Satya Nadella, the company’s new CEO, uses the phrase “Mobile First, Cloud First” to express his top-level strategy. It’s a clear and relevant clarion call for the entire organization, and Microsoft seems to do well in the Cloud. But how does the Windows Phone death spiral impact the Mobile First part?

In keeping with its stated strategy, the company came up with Office apps on iOS and Android, causing bewilderment and frustration among Windows Phone loyalists who feel they’ve been left behind. Versions of Office on the two leading mobile platforms ensure Microsoft’s presence on most smartphones, so why bother making Windows Phones?

Four and a half years ago, in a Monday Note titled Science Fiction: Nokia Goes Android, I fantasized that Nokia ought to drop its many versions of Symbian and adopt Android instead. Nokia insiders objected that embracing a “foreign OS” would cause them to lose control of their destiny. But that’s exactly what happened to them anyway when they jumped into bed with Stephen Elop and, a bit later, with Windows Phone. This started a process that severely damaged phone sales, ending with Microsoft’s acquisition of what was already a captive licensee.

Now the Android question rises again.

Should Microsoft pursue what looks like a manly but losing Windows Phone hardware strategy or switch to making and selling Android phones? Or should it drop an expensive smartphone design, manufacturing, and distribution effort altogether, and stay focused on what it does already, Mobile First, Cloud First applications?

The Rise of AdBlock Reveals A Serious Problem in the Advertising Ecosystem

 

By Frédéric Filloux

Seeing a threat to their ecosystem, French publishers follow their German colleagues and prepare to sue startup Eyeo GmbH, the creator of anti-advertising software AdBlock Plus. But they cannot ignore that, by using ABP, millions of users actively protest against the worst forms of advertising. 

On grounds that it represents a major economic threat to their business, two groups of French publishers are considering a lawsuit against AdBlockPlus creator Eyeo GmbH (Les Echos broke the news in this story, in French).
The plaintiffs are said to be the GESTE and the French Internet Advertising Bureau. The first is known for its aggressive stance against Google via its contribution to the Open Internet Project. (To be clear, GESTE says it is at a “legal consulting stage”; no formal complaint has been filed yet.) By its actions, the second plaintiff, the French branch of the Internet Advertising Bureau, is in fact acknowledging its failure to tame the excesses of the digital advertising market.

Regardless of its validity, the legal action misses a critical point. By downloading the plug-in AdBlock Plus (ABP) on a massive scale, users do vote with their mice against the growing invasiveness of digital advertising. Therefore, suing Eyeo, the company that maintains ABP, is like using Aspirin to fight cancer. A different approach is required but very few seem ready to face that fact.

I use AdBlock Plus on a daily basis. I’m not especially proud of this, nor do I support anti-advertising activism; I use the ad blocker for practical, not ideological, reasons. On too many sites, the invasion of pop-up windows and heavily animated ad “creations” has become an annoyance. A visual and a technical one. When a page loads, the HTML code “calls” all sorts of modules, sometimes 10 or 15. Each sends a request to an ad server and, sometimes, for the richest content, the ad elements trigger the activation of a third-party plug-in like Adobe’s Shockwave, which works hard to render the animated ads. Most of the time, these ads are poorly optimized because creative agencies don’t waste their precious time on such trivial tasks as providing clean, efficient code to their clients. As a consequence, the computer’s CPU is heavily taxed and overheats, making the fans buzz loudly. Suddenly, you feel like your MacBook Pro is about to take off. That’s why, with a couple of clicks, I installed AdBlock Plus. My ABP has spared me several thousand ad exposures. My surfing is now faster and crash-free, and web pages look better.

I asked around and I couldn’t find a friend or a colleague not using the magic plug-in. Everyone seems to enjoy ad-free surfing. If this spreads, it could threaten the very existence of a vast majority of websites that rely on advertising.

First, a reality check. How big and dangerous is the phenomenon? PageFair, a startup based in Dublin, Ireland, provides some facts. Here are key elements drawn from a 17-page PDF document available here.

[Charts: key findings from PageFair’s ad-blocking report]

Put another way, if your site or your apps are saturated with pop-up windows and screaming videos impossible to mute or skip, you are encouraging the adoption of AdBlock Plus — and once it’s installed in a browser, do not expect any turning back. As an example of an unwitting ABP advocate:

[Image: an ad-saturated page, an unwitting ABP advocate]

Eyeo’s AdBlock Plus takes the rejection of advertising into its own hands — but those hands are greedy and dirty. Far from being the work of a selfless white knight, Eyeo’s business model borders on racketeering. In its Acceptable Ads Manifesto, Eyeo states the virtues of what the company deems tolerable formats:

1. Acceptable Ads are not annoying.
2. Acceptable Ads do not disrupt or distort the page content we’re trying to read.
3. Acceptable Ads are transparent with us about being an ad.
4. Acceptable Ads are effective without shouting at us.
5. Acceptable Ads are appropriate to the site that we are on.

Who could disagree? But such blandishments go with a ruthless business model that attests to the merits of straight talk:

We are being paid by some larger properties that serve non-intrusive advertisements that want to participate in the Acceptable Ads initiative.
Whitelisting is free for all small and medium-sized websites and blogs. However, managing this list requires significant effort on our side and this task cannot be completely taken over by volunteers as it happens with common filter lists.
Note that we will never whitelist any ads that don’t meet these criteria. There is no way to buy a spot in the whitelist. Also note that whitelisting is free for small- and medium-sized websites.
In addition, we received startup capital from our investors, like Tim Schumacher, who believe in Acceptable Ads and want to see the concept succeed.

Of course, there is no public rate card. Nor does Eyeo provide any measure of what defines “small and medium-sized websites”: a site with 5 million monthly uniques can be small in the English-speaking market but huge in Finland. And the number of “larger properties”, and the amount they had to pay to be whitelisted, remain a closely guarded secret. According to some German websites, Eyeo is said to have snatched $30m from big internet players; not bad for an operation of fewer than 30 people (depending on the recurrence of this “compliance fee”, for lack of a better term).

There are several issues here.

One, a single private entity cannot decide what is acceptable or not for an entire sector. Especially in such an opaque fashion.

Two, we must admit that Eyeo GmbH is filling a vacuum created by the incompetence and sloppiness of the advertising community, namely creative agencies, media buyers, and the organizations that are supposed to coordinate the whole ecosystem (such as the Internet Advertising Bureau).

Three, the rise of ad blockers is the offspring of two major trends: the continual deflation of digital advertising economics, and the growing reliance on ad exchanges and Real Time Bidding, both of which push prices further down.

Even Google is beginning to realize that the explosion of questionable advertising formats has become a problem. Proof is its recent Contributor program, which proposes ad-free navigation in exchange for a fee ranging from $1 to $3 per month (read this story on NiemanLab, and more in a future Monday Note).

The growing rejection of advertising that AdBlock Plus is built upon is indeed a threat to the ecosystem, and it needs to be addressed decisively. For example, by bringing publishers and advertisers to the same table to design ways to clean up the ad mess. But the entity and the leaders who can do the job have yet to be found.

frederic.filloux@mondaynote.com

Apple Watch: Hard Questions, Facile Predictions

 

by Jean-Louis Gassée

Few Apple products have agitated forecasters and competitors as much as the company’s upcoming watch. The result is an escalation of silly numbers – and one profound observation from a timepiece industry insider.

Apple Watch 2015 sales predictions are upon us: 10 million, 20 million, 24 million, 30 million, even 40 million! Try googling “xx million apple watch”; you won’t be disappointed. Microsoft’s Bing doesn’t put a damper on the enthusiasm either: It finds a prediction of first-year sales of 60 million Apple Watches!

These are scientific, irony-free numbers, based on “carefully weighed percentages of iPhone users”, complemented by investigations into “supplier orders”, and backed up by interviews with “potential buyers”. Such predictions reaffirm our notion that the gyrations and divinations of certain anal-ists and researchers are best appreciated as black comedy — cue PiperJaffray’s Gene Munster with his long-running Apple TV Set gag.

Fortunately, others are more thoughtful. They consider how the product will actually be experienced by real people and how the new Apple product will impact the watch industry.

As you’ll recall from the September 14th “Apple Watch Is And Isn’t”, Jean-Claude Biver, the LVMH executive in charge of luxury watch brands such as Hublot and TAG Heuer, offered his frank opinion of the “too feminine” AppleWatch:

“To be totally honest, it looks like it was designed by a student in their first trimester.” 

At the time, it sounded like You Don’t Need This sour grapes from a disconcerted competitor. But recently, Biver has also given us deeper, more meaningful thoughts:

“A smartwatch is very difficult for us because it is contradictory,” said Mr. Biver. “Luxury is supposed to be eternal … How do you justify a $2,000 smart watch whose technology will become obsolete in two years?” he added, waving his iPhone 6. 

Beautiful. All the words count. Luxury and Eternity vs. Moore’s Law.

To help us think about the dilemma that preoccupies the LVMH exec, let’s take a detour through another class of treasured objects: Single Lens Reflex cameras.

[Photo: a Nikon F Photomic FTn]

Unless you were a photojournalist or a fashion photographer taking hundreds of pictures a day, these cameras lasted forever. A decade of use would come and go without impact on the quality of your pictures or the solid feel of the product. People treasured their Hasselblads, Leicas (not an SLR), Canons, and more obscure marques such as the Swiss Alpa. (I’m a bit partial here: I bought a Nikon exactly like the one pictured above back in 1970.)

These were purely mechanical marvels. No battery; the light sensor was powered by… light.

Then, in the mid-nineties, digital electronics began to sneak in. Sensor chips replaced silver-halide film; microcomputers automated more and more of the picture-taking process.

The most obvious victim was Eastman Kodak, a company that had dominated the photographic film industry for more than a century – and filed for bankruptcy in 2012. (A brief moment of contemplation: Kodak owned many digital photography patents and even developed the first digital camera in 1975, but “…the product was dropped for fear it would threaten Kodak’s photographic film business.” [Wikipedia].)

The first digital cameras weren’t so great. Conventional film users rightly criticized the lack of resolution, the chromatic aberrations, and other defects of early implementations. But better sensors, more powerful microprocessors, and clever software won the day. A particular bit of cleverness that has saved a number of dinner party snapshots was introduced in the late-nineties: A digital SLR sends a short burst of flash to evaluate the scene, and then uses the measurements to automatically balance shutter speed and aperture, thus correcting the classical mistake of flooding the subject in the foreground while leaving the background in shadows.

Digital cameras have become so good we now have nostalgia “film packs” that recreate the defects — sorry, the ambiance — of analog film stock such as Ektachrome or Fuji Provia.

But Moore’s Law exacts a heavy price. At the high end, the marvelous digital cameras from Nikon, Canon, and Sony are quickly displaced year after year by new models that have better sensors, faster microprocessors, and improved software. Pros and prosumers can move their lenses — the most expensive pieces of their equipment — from last year’s model to this one’s, but the camera body is obsolete. In this regard, the most prolific iterator seems to be Sony, today’s king of sensor chips; the company introduces new SLR models once or twice a year.

At the medium to low end, the impact of Moore’s law was nearly lethal. Smartphone cameras have become both so good and so convenient (see Chase Jarvis’ The Best Camera is the One That’s With You) that they have displaced almost all other consumer picture taking devices.

What does the history of cameras say for watches?

At the high-end, a watch is a piece of jewelry. Like a vintage Leica or Canon mechanical camera, a Patek watch works for decades, it doesn’t use batteries, and it doesn’t run on software. Mechanical watches have even gained a retro chic among under-forty urbanites who have never had to wind a stem. (A favorite of techies seems to be the Officine Panerai.)

So far, electronic watches haven’t upended the watch industry. They’ve mostly replaced a spring with a battery and have added a few functions and indicator displays – with terrible user interfaces. This is about to change. Better/faster/cheaper organs are poised to invade watches: sensors, microprocessors + software, wireless links…

Jean-Claude Biver is right to wonder how the onslaught of ever-improving technology will affect the “eternity” of the high-end, fashion-conscious watch industry… and he’ll soon find out: He’s planning a (yet-to-be-announced) TAG Heuer smartwatch.

With this in mind, Apple’s approach is intriguing: The company plays the technology angle, of course, and has loaded their watch with an amazing — some might say disquieting — amount of hardware and software, but they also play the fashion and luxury game. The company invited fashion writers to the launch; it hosted a celebrity event at Colette in Paris with the likes of Karl Lagerfeld and Anna Wintour in attendance. The design of the watch, the choice of materials for the case and bands/bracelets… Apple obviously intends to offer customers a differentiated combination of traditional fashion statement and high-tech functions.

But we’re left with a few questions…

Battery life is one question — we don’t know what it will be. The AppleWatch user interface is another.

The product seems to be loaded with features and apps… will users “get” the UI, or will they abandon hard-to-use functions, as we’ve seen in many of today’s complicated watches?

But the biggest question is, of course, Moore’s Law. Smartphone users have no problem upgrading every two years to new models that offer enticing improvements, but part of that ease is afforded by carrier subsidies (and the carriers play the subsidy game well, despite their disingenuous whining).

There’s no carrier subsidy for the AppleWatch. That could be a problem when Moore’s Law makes the $5K high-end model obsolete. (Expert Apple observer John Gruber has wondered if Apple could just update the watch processor or offer a trade-in — that would be novel.)

We’ll see how all of this plays out with regard to sales. I’ll venture that the first million or so AppleWatches will sell easily. I’ll certainly buy one, the entry-level Sports model with the anodized aluminum case and elastomer band. If I like it, I’ll even consider the more expensive version with a steel case and ingenious Marc Newson link bracelet — reselling my original purchase should be easy enough.

Regardless of the actual sales, first-week numbers won’t matter. It’s what happens after that that matters.

Post-purchase Word of Mouth is still the most potent marketing device. Advertising might create awareness, but user buzz is what makes or breaks products such as a watch or a phone (as opposed to cigarettes and soft drinks). It will take a couple of months after AppleWatches arrive on the shelves before we can judge whether or not the product will thrive.

Only then can we have a sensible discussion about how the luxury segment of the line might plan to deal with the eternity vs. Moore’s Law question.

JLG@mondaynote.com

Hard Comparison: Legacy Media vs. Digital Native

 

by Frédéric Filloux

From valuations to management cultures, the gap between legacy media companies and digital native ones seems to widen. The chart below maps the issues and shows where efforts should focus.

At conferences and workshops in Estonia, Spain and the US, most of the discussions I’ve had recently ended up zeroing in on the cultural divide between legacy media and internet natives. About fifteen years into the digital wave, the tectonic plates seem to drift further apart than ever. On one side, most media brands — the surviving ones — are still struggling with an endless transition. On the other, digital native companies, all with deeply embedded technology, expand at an incredible pace. Hence the central question: can legacy media catch up? What are the most critical levers to pull in order to accelerate change?

Once again, it’s not a matter of a caricatural opposition between fossilized media brands and agile, creative media startups. The reality is far more complex. I come from a world in which information had a price and a cost; facts were verified; seasoned editors called the shots; readers were demanding and loyal — and journalists occasionally autistic. I come from the culture of great stories, intense competition (now gone) and the certitude of the important role of great journalism in society.

That said, I simply had the luck to be in the right place at the right time to embrace the new culture: Small companies, starting on a blank slate with the unbreakable faith and systemic understanding that combine into a vision of growth and success, all wrapped up in the virtues of risk-taking. I always wanted to believe that the two cultures could be compatible — in fact, I hoped the old world would be able to morph swiftly and efficiently enough to catch the wave, to deal with new kinds of readers, with a wider set of technologies and a proteiform competition. I still want to believe this.

In the following chart, I list the most critical issues and pinpoint the areas of transformation that are both the most urgent and the easiest to address.

[Chart: legacy media vs. digital natives, critical issues and feasibility of change]

[Footnotes]

1. Funding: The main reason newcomers are able to quickly leave the incumbents in the dust. When venture firms compete to provide $160m to Flipboard, $61m to Vox Media, or $96m to BuzzFeed, the consequences are not just staggering valuations. Abundant funds translate into the ability to hire more and better qualified people. Just one example: Netflix’s recommendation system — critical to ensure both viewer engagement and retention — can count on a $150m yearly budget, far more than the entire revenue of many mid-sized media companies. The fact is, old media companies in transition will never be able to attract such levels of funding due to inherent scalability limitations (it is extremely rare to see a legacy media corporation suddenly jump out of its ancestral business).

2. Resource Allocation. Typically, the management team of a legacy media company will assign just enough resources to launch a product or service and hope for the best. This deliberate scarcity has several consequences. From the start, the project team will be in fight/survival mode internally (vs. other projects or “historical” operations); and in the (likely) case of a failure, it will be difficult to find the cause: Was the product or service inherently flawed? Or did it fail to achieve “ignition” because the approach was too cautious? The half-baked, half-supported legacy product might stagnate forever, neither making enough money to be seen as a success nor losing enough to justify a termination. By contrast, a digital native corporation will go at full throttle from day one, with scores of managers, engineers, marketers and sufficient development time for tests, market research, promotion, etc. The idea is to succeed — or to fail, but fast and clearly.

3. Approach to timing. The tragedy for the vast majority of legacy media is that they no longer have the luxury of long-term thinking. Shareholder pressure and weak P&Ls impose quick results. By contrast, most digital companies are built for the long term: Their management is asked to grow, conquer, secure market positions and then monetize. This can take years, as seen in many instances, from Flipboard to Amazon (which might have pushed the envelope a bit too far).

4. Scalability vs. sustainability. Many reasons — readership structure, structurally constrained markets — explain the difficulty for legacy media to scale up. At the polar opposite, disrupters like Uber or AirBnB, or super-optimizers such as BuzzFeed or The Huffington Post are designed and built to scale — globally.

5. Customer relations. On this front, the digital world has reset the standard. Legacy media companies suddenly look outdated on everything related to customer satisfaction, from clumsy subscription handling to the virtuous circle of acquisition, engagement, and retention.

In the chart above, my allocation of purple dots (feasibility) illustrates the height of the hurdles facing large, established media brands. Many components remain extremely hard to move; I experience that personally on a daily basis. But there is no excuse not to take better care of customers, reward the risk-taking of committed staffers, assign resources decisively, or instill a keener sense of competition.

frederic.filloux@mondaynote.com

Clayton Christensen Becomes His Own Devil’s Advocate

 

by Jean-Louis Gassée

Every generation has its high-tech storytellers, pundits who ‘understand’ why products and companies succeed and why they fail. And each new generation tosses out the stories of its elders. Perhaps it’s time to dispense with “Disruption”.

“I’m never wrong.”

Thus spake an East Coast academic, who, in the mid- to late-eighties, parlayed his position into a consulting money pump. He advised — terrorized, actually — big company CEOs with vivid descriptions of their impending failure, and then offered them salvation if they followed his advice. His fee was about $200K per year, per company; he saw no ethical problem in consulting for competing organizations.

The guru and I got into a heated argument while walking around the pool at one of Apple’s regular off-sites. When I disagreed with one of his wild fantasies, his retort never varied: I’m never wrong.

Had I been back in France, I would have told him, in unambiguous and colorful words, what I really thought, but I had acclimated myself to the polite, passive-aggressive California culture and used therapy-speak to “share my feelings of discomfort and puzzlement” at his Never Wrong posture. “I’ve always been proved right… sometimes it simply takes longer than expected”, was his comeback. The integrity of his vision wasn’t to be questioned, even if reality occasionally missed its deadline.

When I entered the tech business a decade and a half earlier, I had marveled at the prophets who could part the sea of facts and reveal the True Way. Then came my brief adventure with the BCG-advised diversification of Exxon into the computer industry.

Preying on the fear of The End of Oil in the late seventies, consultants from the prestigious Boston firm hypnotized Exxon executives with their chant: Information Is The Oil of The 21st Century. Four billion dollars later (a lot of money at the time), Exxon finally recognized the cultural mismatch of the venture and returned to the well-oiled habits of its heart and mind.

It was simply a matter of time, but BCG was ultimately proved right: we now have our new Robber Barons of zeroes and ones. Still, the consultants were wrong about something more fundamental and more slippery, something they couldn’t divine from their acetate foils: culture.

A little later, we had In Search of Excellence, the 1982 best-seller that turned into a cult. Tom Peters, the more exuberant of the book’s two authors, was a constant on pledge-drive public TV. As I watched him one Sunday morning with the sound off, his sweaty fervor and cutting gestures reminded me of the Bible-thumping preacher, Jimmy “I Sinned Against You” Swaggart. (These were my early days in California; I flipped through a lot of TV channels before Sunday breakfast, dazzled by the excess.)

Within a couple of years, several of the book’s exemplary companies — NCR, Wang, Xerox — weren’t doing so well. Peters’ visibility led to noisy accusations and equally loud denials of faking the data, or at least of carefully picking particulars.

These false prophets commit abuses under the color of authority. They want us to respect their craft as a form of science, when what they’re really doing is what Neil Postman, one of my favorite curmudgeons, views as simple storytelling: They felicitously arrange the facts in order to soothe anxiety in the face of a confusing if not revolting reality. (Two enjoyable and enlightening Postman books: Conscientious Objections, a series of accessible essays, and Amusing Ourselves To Death, heavier, very serious fare.)

A more recent and widely celebrated case of storytelling in a scientist’s lab coat is Clayton Christensen’s theory of disruptive innovation. In order to succeed these days — and, especially, to pique an investor’s interest — a new venture must be disruptive, with extra credit if the disrupter has attended the Disrupt conference and bears a Renommierschmiss from the Startup Battlefield.

345_christensen__
(Credit: www.claytonchristensen.com)

Christensen’s body of work is (mostly) complex, sober, and nuanced storytelling that’s ill-served by the overly simple and bellicose Disruption! battle cry. Nonetheless, I’ll do my share and offer my own tech-world simplification: The incumbency of your established company is forever threatened by lower-cost versions of the products and services you provide. To avoid impending doom, you enrich your offering and engorge your price tag. As you abandon the low end, the interloper gains business, muscles up, and chases you farther up the price ladder. Some day, and it’s simply a matter of time, the disrupter will displace you.

According to Christensen, real examples abound. The tech-world archetypes are the evolution of the disk drive, the disruptive ascension from mainframe to minicomputer to PC, and today’s SDN (Software Defined Networking) entrants.

But recently, skeptical voices have disrupted the Disruption business.

Ben Thompson (@monkbent) wrote a learned paper that explains What Clayton Christensen Got Wrong. In essence, Ben says, disruption theory is an elegant explanation of situations where the customer is a business focused on cost. When the customer is a consumer, price is often trumped by ineffable values (ease of use, primarily) that can only be experienced and can’t be described in a dry bullet list of features.

More broadly, Christensen came under attack from Jill Lepore, the New Yorker staff writer who, like Christensen, is a Harvard academic. In a piece titled The Disruption Machine: What the gospel of innovation gets wrong, Lepore asserts her credentials as a techie and then proceeds to point out numerous examples where Christensen’s vaunted storytelling is at odds with the facts [emphasis and edits mine]:

“In fact, Seagate Technology was not felled by disruption. Between 1989 and 1990, its sales doubled, reaching $2.4 billion, “more than all of its U.S. competitors combined,” according to an industry report. In 1997, the year Christensen published ‘The Innovator’s Dilemma,’ Seagate was the largest company in the disk-drive industry, reporting revenues of nine billion dollars. Last year, Seagate shipped its two-billionth disk drive. Most of the entrant firms celebrated by Christensen as triumphant disrupters, on the other hand, no longer exist.

Between 1982 and 1984, Micropolis made the disruptive leap from eight-inch to 5.25-inch drives through what Christensen credits as the ‘Herculean managerial effort’ of its C.E.O., Stuart Mahon. But, shortly thereafter, Micropolis, unable to compete with companies like Seagate, failed. 

MiniScribe, founded in 1980, started out selling 5.25-inch drives and saw quick success. ‘That was MiniScribe’s hour of glory,’ the company’s founder later said. ‘We had our hour of infamy shortly after that.’ In 1989, MiniScribe was investigated for fraud and soon collapsed; a report charged that the company’s practices included fabricated financial reports and ‘shipping bricks and scrap parts disguised as disk drives.’”

Echoes of the companies that Tom Peters celebrated when he went searching for excellence.

Christensen is admired for his towering intellect and also for his courage in facing health challenges; one of my children has witnessed both and can vouch for the scholar’s inspiring presence. Unfortunately, his reaction to Lepore’s criticism was less admirable. In a BusinessWeek interview, Christensen sounds miffed and entitled:

“I hope you can understand why I am mad that a woman of her stature could perform such a criminal act of dishonesty—at Harvard, of all places.”

At Harvard, of all places. Hmmm…

In another attempt to disprove Jill Lepore’s disproof, a San Francisco-based investment banker wrote a scholarly rearrangement of Disruption epicycles. In his TechCrunch post, the gentleman glows with confidence in his use of the theory to predict venture investment successes and failures:

“Adding all survival and failure predictions together, the total gross accuracy was 84 percent.”

and…

“In each case, the predictions have sustained 99 percent levels of statistical confidence without a flinch.”

Why the venture industry hasn’t embraced the model, and why the individual hasn’t become richer than Warren Buffett as a result of this unflinching accuracy, remains a story to be told.

Back to the Disruption sage: he didn’t help his case when, as soon as the iPhone came out, he predicted that Apple’s new device was vulnerable to disruption:

“The iPhone is a sustaining technology relative to Nokia. In other words, Apple is leaping ahead on the sustaining curve [by building a better phone]. But the prediction of the theory would be that Apple won’t succeed with the iPhone. They’ve launched an innovation that the existing players in the industry are heavily motivated to beat: It’s not [truly] disruptive. History speaks pretty loudly on that, that the probability of success is going to be limited.”

Not truly disruptive? Five years later, in 2012, Christensen had an opportunity to let “disruptive facts” enter his thinking. But no, he stuck to his contention that modularity always defeats integration:

“I worry that modularity will do its work on Apple.”

In 2013, Ben Thompson, in his already quoted piece, called Christensen out for sticking to his theory:

“[…] the theory of low-end disruption is fundamentally flawed. And Christensen is going to go 0 for 3.”

Perhaps, like our poolside guru, Christensen believes he’s always right…but, on rare occasions, he’s simply wrong on the timing.

Apple will, of course, eventually meet its maker, whether through some far-off, prolonged mediocrity or by a swift, regrettable decision. But such predictions are useless; they’re storytelling, and a bad, facile kind at that. What would be really interesting and courageous would be a detailed scenario of Apple’s failure, complete with a calendar of the main steps towards the preordained ending. No more Wrong on the Timing excuses.

A more interesting turn for a man of Christensen’s intellect and reach inside academia would be to become his own Devil’s Advocate. Good lawyers pride themselves on researching their cases so well that they could plead either side. Perhaps Clayton Christensen could explain, with his usual authority, how the iPhone defines a new theory of innovation. Or why the Macintosh has prospered and ended up disrupting the PC business by sucking up half of the segment’s profits. He could then draw comparisons to other premium goods that consumers happily choose, from cars to clothes and…watches.

JLG@mondaynote.com

Cultural Adventures In Payment Systems – Part I

 

by Jean-Louis Gassée

Payment systems and user behaviors have evolved over the past three decades. In this first of a two-part Monday Note, I offer a look at the obstacles and developments that preceded the Apple Pay launch.

When I landed in Cupertino in 1985, I was shocked, shocked to find that so much gambling was going on in here. But it wasn’t the Rick’s Café Américain kind of gambling, it was the just-as-chancy use of plastic: Colleagues would heedlessly offer their credit card numbers to merchants over the phone; serious, disciplined executives would hand their AmEx Platinums to their assistants without a second thought.

This insouciant way of doing business was unheard of in my Gallic homeland. The French (and most Europeans) think that trust is something that must be earned, that it has a value that is debased when it’s handed out too freely. They think an American’s trusting optimism is naïve, even infantile.

After I got over my shock, I came to see that my new countrymen weren’t such greenhorns. They understood that if you want to lubricate the wheels of commerce, you have to risk an occasional loss, and that the rare, easily remedied abuses are more than compensated for by a vibrant business. It wasn’t long before I, too, was asking my assistant to run to the store with my Visa to make last-minute purchases before a trip.

(On the importance of Trust and its contribution to The Wealth of Nations — or their poverty — see Alain Peyrefitte’s La Société de Confiance [The Society of Trust]. Unfortunately the work hasn’t been translated into English, unlike two of Peyrefitte’s other books, The Trouble with France and the prophetic 1972 best-seller The Immobile Empire. The title of the latter is a deplorable translation of Quand la Chine s’éveillera… Le monde tremblera, “When China Awakes, The World Will Shake”, a foreboding attributed to Napoleon.)

These respective attitudes towards trust point to a profound cultural difference between my two countries. But I also noticed other differences that made my new environment feel a little antiquated.

For example, direct deposit and direct deduction weren’t nearly as prevalent in America as in France. In Cupertino, I received a direct deposit paycheck, but checks to cover expenses were still “cut”, and I had to write checks for utilities and taxes and drop them in the mailbox.

Back in Paris, everything had been directly wired into and out of my bank account. Utilities were automatically deducted ten days after the bill was sent, as mandated by law (the delay allowed for protests and stop-payments if warranted). Paying taxes was ingeniously simple: Every month through October, a tenth of last year’s total tax was deducted from your bank account. In November and December, you got a reprieve for Holiday spending fun (or, if your income had gone up, additional tax payments to Uncle François — Mitterrand at the time, not Hollande).

Like a true Frenchman, I once mocked these “primitive” American ways in a conversation with a Bank of America exec in California. A true Californian, she smiled, treated me to a well-rehearsed Feel-Felt-Found comeback, and then, dropping the professional mask, told me that the distrust of electronic commerce that so astonished me in Silicon Valley (of all places) was nothing compared to Florida, where it’s common for retirees to cash their Social Security checks at the bank, count the physical banknotes and coins, and then deposit the money into their accounts.

Perhaps this was the heart of the “Trust Gap” between Europe and the US: Europeans have no problem trusting electronic commerce as long as it doesn’t involve people; Americans trust people, not machines.

My fascination with electronic payment modes preceded my new life in Silicon Valley. In 1981, shortly after starting Apple France, I met Roland Moreno, the colorful Apple ][ hardware and software developer who invented the carte à puce (literally “chip card”, but better known as a “smart card”) that’s found in a growing number of credit cards, and in mobile phones where it’s used as a Subscriber Identity Module (SIM).

343_jlg

The key to Moreno’s device was that it could securely store a small amount of information, hence its applicability to payment cards and mobile phones.

I carried memories of my conversations with Moreno with me to Cupertino. In 1986, we briefly considered adding a smart card reader to the new ADB Mac keyboard, but nothing came of it. A decade later, Apple made a feeble effort to promote the smart card for medical applications such as a patient ID, but nothing came of that, either.

The results of the credit card industry’s foray into smart card technology were just as tepid. In 2002, American Express introduced its Blue smart card in the US with little success:

“But even if you have Blue (and Blue accounts for nearly 10% of AmEx’s 50 million cards), you may still have a question: What the hell does that chip (and smart cards in general) do?

The answer: Mostly, nothing. So few stores have smart-card readers that Blue relies on its magnetic strip for routine charges.”

In the meantime, the secure smart chip found its way into a number of payment cards in Europe, thus broadening the Trust Gap between the Old and New Worlds, and heightening Roland’s virtuous and vehement indignation.

(Moreno, who passed away in 2012, was a true polymath; he was an author, gourmand, inventor of curious musical instruments, and, I add without judgment, an ardent connoisseur of a wide range of earthly delights).

Next came the “Chip and PIN” model. Despite its better security (the customer had to enter a PIN after the smart card was recognized), Chip and PIN never made it to the US, not only because there were no terminals that could read the smart cards in the first place, let alone accept a typed PIN, but also, just as important, because credit card companies were reluctant to disturb ingrained customer behavior.
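
For readers who never stood in front of one of these terminals, the flow is conceptually a two-step check: the terminal first authenticates the chip, then asks for a PIN that the chip itself verifies. The toy Python sketch below illustrates only that idea; every name, field, and check in it is invented for illustration and bears no resemblance to a real EMV implementation.

```python
# Toy illustration of the two-step "Chip and PIN" idea.
# Purely conceptual: not real EMV, no cryptography, all names invented.

from dataclasses import dataclass

@dataclass
class ChipCard:
    card_number: str
    pin: str                  # a real chip never exposes the PIN; it verifies it internally
    failed_attempts: int = 0

    def verify_pin(self, entered_pin: str) -> bool:
        """The chip, not the terminal, checks the PIN and locks itself after 3 failures."""
        if self.failed_attempts >= 3:
            return False      # card blocked
        if entered_pin == self.pin:
            self.failed_attempts = 0
            return True
        self.failed_attempts += 1
        return False

def terminal_transaction(card: ChipCard, entered_pin: str, amount: float) -> str:
    # Step 1: the terminal reads and recognizes the chip (stubbed as a simple presence check).
    if not card.card_number:
        return "DECLINED: unreadable card"
    # Step 2: the cardholder types a PIN, which the chip verifies on its own.
    if not card.verify_pin(entered_pin):
        return "DECLINED: wrong PIN"
    return f"APPROVED: {amount:.2f}"

if __name__ == "__main__":
    card = ChipCard(card_number="4970-1234-5678-9010", pin="1234")
    print(terminal_transaction(card, "0000", 25.00))   # DECLINED: wrong PIN
    print(terminal_transaction(card, "1234", 25.00))   # APPROVED: 25.00
```

The point of the sketch is simply that both steps require hardware the US didn’t have: a terminal able to talk to the chip, and a keypad for the PIN.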

It appeared that smart cards in the US were destined to butt up against two insurmountable obstacles: the need for a new infrastructure of payment terminals, and skepticism that American customers would change their ingrained behavior to accept them.

In 2003, I made a bad investment in the payment system field on behalf of the venture company I had just joined. The entrepreneur who came to us had extensive “domain knowledge” and proposed an elegant way to jump over both the infrastructure and customer-behavior obstacles by forgoing the smart card altogether. Instead, he would secure the credit card’s magnetic stripe.
