The Network Is the Computer: Google Tries Again

 

All you need is a dumb device attached to a smart network. It’s an old idea that refuses to die despite repeated failures. Now it’s Google’s turn.

In the late 1980s, Sun Microsystems used a simple, potent war cry to promote its servers: The Network Is The Computer. Entrust all of your business intelligence, computing power, and storage to Sun’s networked SPARC systems and you can replace your expensive workstation with a dumb, low cost machine. PCs are doomed.

Nothing of the sort happened, of course. Sun’s venture was disrupted by inexpensive servers assembled from the PC organ bank and running Open Source software.

PCs prospered, but that didn’t dampen the spirits of those who would rid us of them.

Fast-forward to the mid-1990s and the idea re-emerges in a new guise: The Browser Will Be The Operating System (a statement that’s widely misattributed to Marc Andreessen, who holds a more nuanced view on the matter). The browser will serve as a way to access networked services that will process your data. The actual OS on your device, what sort of apps it can run — or even if it can run any (other than a browser) — these questions will fade into insignificance.

Soon after, Oracle took a swing at the Network is the Computer piñata by defining the Network Computer Reference Profile (or NCRP), a specification that focused on network connectivity and deemphasized local storage and processing. It was understood, if not explicitly stated, that an NCRP device must be diskless. A number of manufacturers offered NCRP implementations, including Sun (which would ultimately be acquired by Oracle) with its JavaStation. But despite Larry Ellison’s strongly expressed belief that Network Computers would rid the industry of the evil Microsoft, the effort went nowhere.

Today, The Network Is The Computer lives on under the name Cloud Computing, the purest example of which is a Google Chromebook running on Chrome OS. (And thus, in a sense, Sun’s idea lives on: Google’s first investor was Sun co-founder Andy Bechtolsheim.)

So far, Chromebooks have shown only modest penetration (a topic for musings in a future Monday Note), but despite the slow adoption, Google has become one of the largest and most important Cloud Computing companies on the planet. Combine this with the Android operating system that powers more than a billion active devices: Could Google bring us to the point where The Network Really Is The Computer?

It’s a complicated question, partly because the comparison with the previous generation of devices, traditional PCs, can (excuse me) cloud the view.

Unlike PCs, smartphones rely on an expensive wireless infrastructure. One can blame the oligopolistic nature of the wireless carrier industry (in English: too few companies to have a really competitive market), but that doesn’t change the simple fact that wireless bandwidth isn’t cheap. The dumber the device, the more it has to rely on the Cloud to process and store data, and the more bandwidth it will consume.

Let’s visit Marc Andreessen’s actual words regarding Network-As-Computer, from a 2012 Wired interview [emphasis mine]:

“[I]f you grant me the very big assumption that at some point we will have ubiquitous, high-speed wireless connectivity, then in time everything will end up back in the web model.”

If we interject, on Andreessen’s behalf, that wireless connectivity must be as inexpensive as it is ubiquitous, then we begin to see the problem. The “data hunger” of media intensive apps, from photo processing to games, shows no sign of slowing down. And when you consider the wireless bandwidth scarcity that comes from the rapid expansion of smartphone use, it seems that conditions are, yet again, conspiring against the “dumb device” model.

The situation is further confounded when we consider that Google’s business depends on delivering users to advertisers. Cloud computing will help drive down the cost of Android handsets and thus offer an even wider audience to advertisers…but these advertisers want a pleasant and memorable UI, they want the best canvas for their ads. When you dumb down the phone, you dumb down the ad playback experience.

In a recent blog post titled The next phase of smartphones, Benedict Evans neatly delineates the two leading “cloud views” by contrasting Apple and Google [emphasis mine]:

“Apple’s approach is about a dumb cloud enabling rich apps while Google’s is about devices as dumb glass that are endpoints of cloud services…”

But Google’s “dumb glass” can’t be too dumb.  For its mobile advertising business, Google needs to “see” everything we do on our smartphones, just like it does on our PCs. Evans intimates as much:

“…it seems that Google is trying to make ‘app versus web’ an irrelevant discussion – all content will act like part of the web, searchable and linkable by Google.”

Native apps running on a “really smart” device are inimical to Google’s business model. To keep the advertisers happy, Google would have to “instrument” native apps, insert deep links that will feed its data collection activities.

This is where the Apple vs. Google contrast is particularly significant: iOS apps are not allowed to let advertisers know what we are doing – unless explicitly authorized. Apple’s business model doesn’t rely on peddling our profile to advertisers.

In the end, I wonder if Google really believes in the “dumb glass” approach to smartphones. Perhaps, at least for now, The Computer will remain The Computer.

JLG@mondaynote.com

 

Google might not be a monopoly, after all

 

Despite its dominance, Google doesn’t fit the definition of a monopoly. Still, the Search giant’s growing disconnect from society could lead to serious missteps and, over time, to a weakened position. 

In last week’s column, I opined about the Open Internet Project’s anti-trust lawsuit against Google. Reactions showed divided views of the search engine’s position. Granted, Google is an extremely aggressive company, obsessed with growth, scalability, optimization — and also with its own vulnerability.

But is it really a monopoly in the traditional and historical sense? Probably not. Here is why, in four points:

1. The consent to dependency. It is always dangerous to be too dependent on a supplier one doesn’t control. This is the case in the (illegal) drug business. Price and supply will fluctuate at the whim of unpredictable people. This is what happens to those who build highly Google-dependent businesses such as e-commerce sites and content-farms that provide large quantities of cheap fodder in order to milk ad revenue from Google search-friendly tactics.

In the end, everything is a matter of trust (“Jaws”, courtesy of Louis Goldman)

Many news media brands have sealed their own fate by structuring their output so that 30% to 40% of their traffic is at the mercy of Google algorithms. I’m fascinated by the breadth and depth of the consensual ecosystem that is now built around the Google traffic pipeline: consulting firms helping media rank better in Google Search and Google News; software that rephrases headlines to make it more likely they’ll hit the top ranks; A/B testing on-the-fly that shows what the search engine might like best, etc.

For the media industry, what should have remained a marginal audience extension has turned into a vital stream of page views and revenue. I personally think this is dangerous in two ways. One, we replace the notion of relevance (reader interest) with a purely quantitative/algorithmic construct (listicles vs. depth, BuzzFeed vs. ProPublica, for instance). Such mechanistic practices further fuel the value deflation of original content. Two, the eagerness to please the algorithms distracts newsrooms, journalists, and editors from their job: finding, developing, and building intelligent news packages that will lift brand perception and elevate the reader’s mind (BuzzFeed and plenty of others are the quintessence of cheapening alienation).

2. Choice and Competition. In 1904, Standard Oil controlled 91% of American oil production and refining, and 85% of sales. This practically inescapable monopoly was able to dictate prices and supply structure. As for Google, it indeed controls 90% of the search market in some regions (Europe especially, where fragmented markets, poor access to capital and other cultural factors prevented the emergence of tech giants.) Google combines its services (search, mail, maps, Android) to produce one of the most potent data gathering systems ever created. Note the emphasis: Google (a) didn’t invent the high tech data X-ray business, nor (b) is it the largest entity to collect gargantuan amounts of data. Read this Quartz article, The nine companies that know more about you than Google or Facebook, and see how corporations such as Acxiom, Corelogic, Datalogix, eBureau, ID Analytics, Intelius, PeekYou, Rapleaf, and Recorded Future collect data on a gigantic scale, including court and public records information, or your gambling habits. Did they make you sign a consent form?

You want to escape Google? Use Bing, Yahoo, DuckDuckGo or Exalead for your web search, or go here to find a list of 40 alternatives. You don’t want your site to be indexed by Google? Insert a robot exclusion line in your HTML pages, and the hated crawler won’t see your content. You’re sick of AdWords in your pages or in Gmail? Use the AdBlock plug-in; it’s even available for the Google Chrome browser. The same applies to storing your data, getting a digital map, or web mail services. You’re “creeped out” by Google’s ability to reconstruct every move around your block or from one city to another by injecting data from your Android phone into Maps? You’re right! Google Maps Location History is frightening; to kill it, you can turn off your device’s geolocation, or use a Windows Phone or an iPhone (just be aware that they do exactly the same thing; they simply don’t advertise it). Unlike public utilities, you can escape Google. It’s simply that its services are more convenient, perform well and… are better integrated, which gets us to our third point:
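To give an idea of how low that escape barrier is, here is the standard robots exclusion rule, a couple of lines in a robots.txt file at the root of a site (a “noindex” meta tag inside a page achieves the same result for that page):

```
# robots.txt at the root of your site: keep Google's crawler out entirely
User-agent: Googlebot
Disallow: /
```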

3. Transparent strategy. To Google’s credit, for the most part, its strategy is pretty transparent. What some see as a monopoly in the making is a deliberate — and open — strategy of systematic (and systemic) integration. Here is the chart I made a few months ago:

[Chart: Google’s integration strategy]

We could include several recent additions such as trip habits from Uber (don’t like it? Try Lyft, or better, a good old Parisian taxi – they don’t even take credit cards); or temperature setting patterns soon coming from Nest thermostats (if you choose to trust Tony Fadell’s promises)… Even Google X, the company’s moonshot factory (story in Fast Company), offers glimpses of Google’s future reach with the development of autonomous cars, projects to bring the internet to remote countries using balloons (see Project Loon) or other airborne platforms.

4. Innovation. Monopolies are known to kill innovation. That was the case with oil companies, cartels of car makers that discouraged alternative transportation systems, or even Microsoft, which made our lives miserable thanks to a pipeline of operating systems that faced no real competition. By contrast, Google is obsessed with innovative projects, seen as an absolute necessity for its survival. Some are good, others are bad, or remain in beta for years.

However, Google is already sowing the seeds of its own erosion. This company is terribly disconnected from the real world. This shows everywhere, from the minutest details of its employees’ daily lives, pampered with an overabundance of comfort and amenities that keep them inside a cosy bubble, to its own vital statistics (published by the company itself). Google is mostly white (61%), male (70%), and recruits from major universities (in that order: Stanford, UC Berkeley, MIT, Carnegie Mellon, UCLA), with very little “blood” from fields other than scientific or technical. For a company that says it wants to connect its business to a myriad of sectors, such cultural blinders are a serious issue. Combined with the certainty of its own excellence, the result is a distorted view of the world in which the distinction between right and wrong can easily blur. A business practice internally considered virtuous because it supports the perpetuation of the company’s evangelistic vision of a better world can be seen as predatory in the “real” world. Hence a growing rift between the tech giant and its partners and customers, and the nations that host them.

frederic.filloux@mondaynote.com

Google and the European media: Back to the Ice Age

 

Prominent members of the European press are joining a major EU-induced antitrust lawsuit against Google. The move is short on rationale and long on ideology. 

A couple of weeks ago, Axelle Lemaire, France’s deputy minister for digital affairs, was quoted as contending that Google’s size and market power effectively prevented the emergence of a “French Google”. A rather surprising statement from a public official whose background stands in sharp contrast to the customary high civil service mold. As an MP, Mrs Lemaire represents French citizens living overseas and holds dual French and Canadian citizenship; she earned a Ph.D. in International Law at London’s King’s College as well as a Law degree at the Sorbonne. She then practiced Law in the UK and served as a parliamentary aide in the British House of Commons. Still, her distinguished and unusually “open” background didn’t help: She’s dead wrong about why there is no French Google.

The reasons for France’s “failure” to give birth to a Google-class search engine are simply summarized: education and money. Google is a pure product of what France lacks most: a strong and diversified engineering pipeline supported by a business-oriented education system, and access to abundant capital. Take the famous (though controversial) Shanghai higher education ranking in computer science: France ranks in the 76-100 group with the University of Bordeaux; 101-150 for the highly regarded Ecole Normale Supérieure; and the much celebrated Ecole Polytechnique sits deep in the 150-200 group – with performance slowly degrading over the last ten years and a minuscule faculty of… 7 CS professors and assistant professors. That’s the reality of computer science education in the most prestigious engineering school in France. As for access to capital, two numbers say it all: according to its own trade association, the French venture capital sector is 1/33rd the size of its US counterpart, while the GDP ratio is only 1 to 6. That’s for 2013; in 2012, the ratio was 1/46th, so things are improving.

The structural weakness of French tech clearly isn’t Google’s fault. Which reveals the ideological facts-be-damned nature of the blame, an attitude broadly shared by other European countries.

A few weeks ago, a surreal event took place in Paris, at the Cité Universitaire Internationale de Paris (which wants to look like a Cambridge replica). There, the Open Internet Project unveiled the next European antitrust action against Google. On stage was a disparate crew: media executives from German and French companies; the antitrust litigator Gary Reback, known for his fight against Microsoft in the Nineties – and now said to be helping Microsoft in its fight against Google; Laurent Alexandre, a strange surgeon/entrepreneur and self-proclaimed visionary living in Luxembourg (his company DNA Vision is headquartered in Brussels), who almost got a standing ovation by explaining how Google intended to connect our brains to its gigantic neuronal network by around 2040; all of the above wrapped up with a speech from French Economy Minister Arnaud Montebourg, who never misses an opportunity to apply his government’s seal to anti-imperialist initiatives.

The lawsuit alleges market distortion practices, discrimination in several guises, anticompetitive conduct, preference for its own vertical services at the expense of fairness in its search results, illegal use of data, etc. (The summary of EU allegations is here). The complaint paves the way for painstaking litigation that will drag on for years.

Among the eleven corporations or trade groups funding the lawsuit we find seven media entities, including the giant German Axel Springer Group and Lagardère Active, whose boss invoked the “moral obligation” to fight Google. There is also CCM Benchmark Group, a large diversified digital player whose boss, Benoît Sillard, had his own epiphany while speaking with Nikesh Arora in Mountain View a while ago. There and then, Mr. Sillard saw the search giant’s grand plan to dominate the digital world. (I paid a couple of visits to Google’s headquarters but was never granted such a religious experience – I will try again, I promise.)

Despite the media industry’s weight, the lawsuit fails to expose Google practices directly affecting the P&L of news providers. Indeed, some media companies have developed businesses that compete with Google verticals. That’s the case for Lagardère’s shopping site LeGuide.com but, again, the group’s CEO, Denis Olivennes, was long on whining and short on relevant facts. (The only entertaining element he mentioned was outside the scope of OIP’s legal action: with only €50m in revenue, LeGuide.com paid the same amount of taxes as Google, whose French operation generates $1.6bn in revenue.)

Needless to say, that doesn’t mean that Google couldn’t be using its power in questionable ways at the expense of scores of e-retailers. But as far as the media sector is concerned, gains largely outweigh losses as most web sites enjoy a boost in their traffic thanks to Google Search and Google News. (The value of Google-generated clicks is extremely difficult to assess — a subject for a future Monday Note.)

One fact remains obvious: In this legal action, media groups are being played to defend interests… that are not theirs.

In this whole affair, the French news media industry is putting itself in an awkward position. In February 2013, Google and the French government hammered out a deal in which the tech giant committed €60m ($81m) over a 3-year period to fund digital projects run by the French press. (In 2013, according to the fund’s report, 23 projects were started, totaling €16m in funding.) The agreement between Google and the French press stipulates that, for the duration of the deal, the French will refrain from suing Google on copyright grounds – such as the use of snippets in search results. But those who signed the deal found themselves dragged into the OIP lawsuit through GESTE, a legacy trade association – more talkative than effective – dating back to the Minitel era, which supports the OIP lawsuit on antitrust rather than copyright grounds. (Those who signed the Google Fund agreement issued a convoluted communiqué to distance themselves from the OIP initiative.)

In Mountain View, many are upset by French media that, on one hand, get hefty subsidies and, on the other, file an anti-Google suit before the European Court of Justice. “Back home, the [Google] Fund always had its opponents”, a Google exec told me, “and now they have reasons to speak louder…” Will they be heard? It is unlikely that Google will pull the plug on the Fund, I’m told. But people I talk to also said that any renewal, under any form, now looks unlikely. The same goes for extending such an innovation funding scheme to Germany — or elsewhere. “Google is at a loss when trying to develop peaceful relations with the French”, another Google insider told me… “We put our big EMEA [Europe, Middle East and Africa] headquarters in Paris, we created a nicely funded Cultural Institute, we fueled the innovation fund for the press, and now we are bitten by the same ones who take our subsidies…”

Regardless of its merits, the European press’ involvement in this antitrust case is ill-advised. It might throw the relationship with Google back to the Ice Age. As another Google exec said to me: “News media should not forget that we don’t need them to thrive…”

–frederic.filloux@mondaynote.com

 

iWatch Thoughts

 

Unlike the almost forgotten Apple TV set, there might be a real product in the iWatch. But as rumors about the device intensify, the scuttlebutt conveniently skirts key questions about the product’s role.

As reverberations of Apple’s Developer Conference begin to die down, the ever-dependable iWatch has offered itself as the focus of another salvo of rumors and speculation. Actually, there’s just one rumor — a Reuters “report” that Quanta Computer will begin manufacturing the iWatch in July — but it was enough to launch a quick-fire series of echoes that bounced around the blogosphere. Not to be outdone, the Wall Street Journal added its own tidbits:

“Apple is planning multiple versions of a smartwatch…[that] will include more than 10 sensors to track and monitor health and fitness data, these people said.”

(“These people” are, of course, the all-knowing “people familiar with the matter”.)

The iWatch hubbub could be nothing more than a sort of seasonal virus, but this time there’s a difference.

At the WWDC three weeks ago, Apple previewed HealthKit, a toolkit iOS developers can use to build health and fitness related applications. HealthKit is a component of the iOS 8 release that Apple plans to ship this fall in conjunction with the newest iDevices. As an example of what developers will be able to do with HealthKit, Apple previewed Health, an application that gives you “an easy-to-read dashboard of your health and fitness data.”
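To make this concrete, here’s a minimal sketch — hypothetical code, not Apple’s sample — of an app asking HealthKit for permission to read the kind of metrics a wrist-worn device could feed it:

```swift
import HealthKit

// Minimal sketch: ask HealthKit for read access to two metrics a wearable could supply.
let store = HKHealthStore()

if HKHealthStore.isHealthDataAvailable() {
    let readTypes: Set<HKObjectType> = [
        HKObjectType.quantityType(forIdentifier: .heartRate)!,
        HKObjectType.quantityType(forIdentifier: .stepCount)!
    ]
    store.requestAuthorization(toShare: nil, read: readTypes) { granted, _ in
        // The user, not the developer, decides which data types the app may see.
        print("HealthKit read access granted: \(granted)")
    }
}
```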

The rumor that Quanta will soon begin “mass production” of the iWatch — the perfect vehicle for health-and-fitness apps — just became a bit more tantalizing… but there are still a number of questions that are left unanswered.

Foremost is iWatch “independence”. How useful will it be when it’s running on its own, unconnected to a smartphone, tablet, or conventional PC? My own guess: Not very useful. Unless Apple plans to build a monstrosity of a device (not likely), the form factor of our putative iWatch will dictate a small battery, which means the processor will have to be power-conserving and thus unable to run iPhone-caliber apps. Power conservation is particularly important if Apple wants to avoid jibes of the ‘My iWatch ran out of battery at the end of the day’ type. Such occurrences, already annoying with a smartphone, could be bad publicity for a “health and fitness” watch.

So, let’s settle for a “mostly dependent” device that relies on a more robust sibling for storage, analysis, and broad overview.

That raises another question: Will the iWatch be part of Apple’s ecosystem only, or will it play nice with Windows PCs or even Android smartphones? If we take Apple’s continued tolerance of the Android version of Beats Music (at least so far) as an example, the notion of an Apple device communicating with a member of the Android tribe is less heretical than it once was. Again, my own guess: Initially, the iWatch will be restricted to the Apple ecosystem. We’ll see what happens if the device catches on and there’s a demand for a “non-denominational” connection.

As for what role the iWatch will play in the ecosystem, those of us ancient enough might recall the example set by the Smart Personal Objects Technology (SPOT) that Microsoft launched a decade ago. No need to repeat that bit of doomed history by targeting too many platforms, by trying to make “Smart Objects” omniscient. Instead, Apple is likely, as it insisted at its early June WWDC, to tout its Continuity ethos: Let each device do what it does best, but don’t impede the flow of information and activities between devices. In plainer English: Hybrid devices are inferior.

So, besides telling time (perhaps in Yosemite’s new system font, a derivative of Helvetica Neue) what exactly will the iWatch do? The first part of the answer is easy: It will use its sensors to collect data of interest. We’ve already seen what the M7 motion processor and related apps can do in an iPhone 5S; now imagine data that has much finer granularity, and sensors that can measure additional dimensions, such as altitude.

Things quickly get more complicated when we turn to the “other side of the skin”. Heart rhythm and blood pressure measurements look banal, but they shouldn’t be taken for granted, especially if one wants medically reliable data. Oximetry, the measurement of your oxygen saturation, looks simple — you just slide a cap onto your fingertip — but that cap is actually transmitting light waves through your finger. A smartwatch can’t help the nearly 18 million US citizens who suffer from Type II Diabetes (a.k.a. Adult Onset Diabetes) because there are no non-invasive methods for measuring blood sugar. And even as the technical complications of collecting health data are surmounted, device makers can find themselves skirting privacy issues and running afoul of HIPAA rules.

The iWatch will also act as a receiver of data from a smartphone, tablet, or PC. This poses far fewer problems, both technical and ethical, than health monitoring, but it also offers few opportunities. Message notifications and calendar alerts are nice but they don’t create a new category, and they certainly haven’t “moved the needle” for existing smartwatches. In a related vein, one can imagine bringing the iWatch close to one’s face and speaking to Siri, asking it to set up a calendar event or send a text message… but, as with the trend towards larger smartphone screens, one must exercise care when fantasizing about iWatch use cases.

Then we have the question of developers and applications — where’s the support for iWatch app creators? When the iOS App Store opened in 2008, the iPhone became an app phone and solidified the now universal genre. What iWatch rumors fail to address is the presence or absence of an iWatch SDK, of iWatch apps, and of a dedicated App Store section.

Meanwhile, Google has already announced its Android Wear platform and has opened a “Developer Preview” program. Conventional wisdom has it that the Google I/O convention next week will focus on wearables. Samsung has been actively fine-tuning and updating the software for its line of Galaxy Gear smart watches (the watches originally ran on an Android derivative but now use Tizen – until next week).

Finally, we have the question of whether an iWatch will sell in numbers that make the endeavor worthwhile. As the previously-mentioned WSJ story underlines, the smartwatch genre has had a difficult start:

“[...] it isn’t clear how much consumers want the devices. Those on the market so far haven’t sold well, because most wearable devices only offer a limited set of features already found on a smartphone.”

The most ambitious rumors project 50 million iWatches sold in the first 12 months. I think that’s an unrealistic estimate, but if a $300 iWatch can sell at these numbers, that’s $15B for the year. This seems like a huge number until you compare it to a conservative estimate for the iPhone: 50 million iPhones at $650 generate $32B per quarter.

Taking a more hopeful view, let’s recall the history of the iPad. It was a late entrant in the tablet field but it coalesced and redefined the genre. Perhaps the iWatch will establish itself as The Smartwatch Done Right. But even if it succeeds in this category-defining role, it won’t have the power and flexibility or the huge number of apps of a true trouser pocket computer. As a result, the iWatch will be part of the supporting cast, not a first order product like the iPhone. There’s nothing wrong with that — it might help make high-margin iPhones even more attractive — but it won’t sell in numbers, dollar volume, or profit comparable to the iPhone or iPad. The iWatch, if and when announced, might be The Next Big Thing – for the few weeks of a gargantuan media feast. But it won’t redefine an industry the way PCs, smartphones and tablets did.

JLG@mondaynote.com

 

Legacy Media: The Missing Gene

 

Legacy media is at great risk of losing against tech culture. This is because incumbents lack a key driver: an obsession with their own mortality. This missing paranoia gene negatively impacts every aspect of their business.

At the last Code conference (the tech gathering hosted by Walter Mossberg and Kara Swisher), Google co-founder Sergey Brin made a surprising statement (at least to me): Asked by Swisher how Google sees itself, Brin responded in his usual terse manner: “There is the external and the internal view. For the outside, we are Goliath and the rest are Davids. From the inside, we are the Davids”. From someone who co-founded a $378bn market cap company that commands more than 80% of global internet search, this is indeed an unexpected acknowledgement.

Sergey Brin’s statement echoes Bill Gates’ own view when, about fifteen years ago, he was asked about his biggest concern: Was it a decisive move or product by another big tech company? No, said Gates, it was the fact that somewhere, somehow, a small group of people was inventing something that would change everything… With the rise of Google and Facebook, his fears came true on a scale he couldn’t even imagine. Roughly at the same time, Andy Grove, then CEO of Intel, published a book with a straightforward title: “Only the Paranoid Survive“. Among my favorite Grove quotes:

“Business success contains the seeds of its own destruction. The more successful you are, the more people want a chunk of your business and then another chunk and then another until there is nothing.”

Still, Intel wasn’t paranoid enough and completely missed the mobile revolution, leaving the entire market for smartphone and tablet microprocessors to ARM licensees.

This deep-rooted sense of fragility is a potent engine of modern tech culture. It spurs companies to grow as fast as they can by raising lots of capital in the shortest possible time. It also drives them to capture market share by all means necessary (including the worst ones), and to develop a culture of excellence by hiring the best people at any cost and trimming the workforce as needed, all while obsessively maintaining a culture of agility to quickly learn from mistakes and adapt to market conditions. Lastly, the ever-present sense of mortality drives rising tech companies to quickly erect barriers-to-entry and to generate the network effects needed to keep incumbents at bay.

For a large part, these drives stem from these companies’ early history and culture. Most started by combining a great idea with clever execution – as opposed to being born within an expensive infrastructure. Take Uber or AirBnB. Both started with a simple concept: harness digital tools to achieve swift and friction-free connections between customers and service providers. Gigantic infrastructure or utterly complicated applications weren’t required. Instead, the future of these companies was secured by a combination of flawless execution and fast growth (read this New York Times story about the Uber network effect challenge). Hence the rapid-fire rounds of financing that will boost Uber’s valuation to $17bn, allowing it to accelerate its worldwide expansion – and also to fight a possible price war, as stated by its founder himself at the aforementioned Code Conference.

Unfortunately, paranoia-driven growth sometimes comes with ugly business practices. Examples abound: Amazon’s retaliation against publishers who fight its pricing conditions; Uber’s bullying tactics against its rivals – followed by an apology; Google offering for free what others used to sell, or distorting search results, etc.

Such behaviors leave the analog world completely flummoxed. Historical players had experienced nothing but a cosy, gentlemanly competitive environment, with a well-defined map of players. This left incumbents without the genes, the culture required to fight digital barbarians. Whether they are media dealing with Google, publishers negotiating with Amazon, hotels fighting Booking.com or AirBnB, or taxis confronting Uber, legacy players look like the proverbial deer caught in the headlights. In some instances, they created their own dependency on powerful new distributors (like websites whose traffic relies largely on Google) before realizing that it was time to sue the dope dealer. (This is exactly what the European press is doing by suing Google before the European Court of Justice for antitrust violations — a subject for a future Monday Note). The appeal to legislators underlines the growing feeling of impotence vis-a-vis the take-no-prisoners approach of new digital players: Unable to respond on the business side, the old guard turns to political power to develop a legal (but short-lasting) containment strategy.

In the media industry, historic players never developed a sense of urgency. The situation varies from one market to another but, in many instances, “too important to fail” was the dominant belief. It always amazed me: As I witnessed the rise of the digital sector – its obsession with fast growth, and its inevitable collision course with legacy media – incumbents were frozen in the quiet certitude that their role in society was in fact irreplaceable, and that under no circumstances would they be left to succumb to a distasteful Darwinian rule. This deep-rooted complacency is, for a large part, responsible for the current state of the media industry.

Back in 1997, Andy Grove’s book explained how to deal with change:

“The implication was that either the people in the room needed to change their areas of knowledge and expertise or people themselves needed to be changed” 

Instead, our industry made too few changes, too late. Since the first digital tremors hit business models ten years ago, we have been through one or two generations of managers in traditional media companies. It is amazing to see how the same DNA is being replicated over and over. Some layers are moving faster than others, though. The higher you go in the food chain, the more people are imbued with a sense of vital urgency. But the rank-and-file and middle management are holding back, unable to exit their comfort zone.

Earlier this year, the French newspaper Liberation chose the outdated slogan “We are a Newspaper” in reaction to its new owners’ ideas (read this story in the NYT). Last week, Liberation opted to appoint as its editor-in-chief one of the strongest opponents of digital media (he just came from the weekly Le Nouvel Observateur, which he gently led into a quiet nursing home, leaving it worth next to nothing).

The gap between the managers of pure digital players and those who still lead legacy media has never been greater. Keenly aware of their own mortality, the former rely more than ever on brutal street-fight tactics, while the incumbents evolve at a different pace, still hoping that older models will resist longer than feared. For old media, it is time for a radical genetic alteration — one performed down to every layer of the media industry.

frederic.filloux@mondaynote.com

 

WWDC: iOS 2.0, the End of Silos

 

Apple tears down the walls between iOS applications, developers rejoice, and Tim Cook delivers a swift kick to Yukari Iwatani Kane’s derrière – more on that at the end.

In this year’s installment of the World Wide Developers Conference, Apple announced a deluge of improvements to their development platforms and tools, including new SDKs (CloudKit, HomeKit, HealthKit); iCloud Drive, the long awaited response to Dropbox; and Swift, an easy-to-learn, leak-free programming language that could spawn a new generation of Apple developers who regard Objective-C as esoteric and burdensome.

If this sounds overly geeky, let’s remind ourselves that WWDC isn’t intended for buyers of Apple products. It’s a sanctuary for people who write OS X and iOS applications. This explains Phil Schiller’s absence from the stage: Techies don’t trust marketing people. (Unfortunately, the conference’s ground rules seem to have been lost on some of the kommentariat.)

The opening keynote is a few breaths short of 2 hours. If you’d rather not drink from the proverbial fire hydrant, you can turn to summaries from Federico Viticci in MacStories, Andrew Cunningham in Ars Technica (“Huge for developers. Massive for everyone else.”), or you can look for reviews, videos, and commentary through Apple’s new favorite search engine, DuckDuckGo, “The search engine that doesn’t track you”.

For today, I’ll focus on the most important WWDC announcement: iOS applications have been freed from the rigid silos, the walls that have prevented them from talking to each other. Apple developers can now write extensions to their apps and avail themselves of the interprocess facilities that they expect from a 21st century OS.

A bit of history will help.

When the first iPhone is shipped in late June, 2007, iOS is incomplete in many respects. There’s no cut and paste, no accented characters, and, most important, there are no native apps. Developers must obey Steve Jobs’ dictate to extend the iPhone through slow and limited Web 2.0 apps. In my unofficial version numbering, I call this iOS 0.8.

The Web 2.0 religion doesn’t last long. An iOS Software Development Kit (SDK) is announced in the fall and released in February, 2008. When the iTunes-powered App Store opens its doors in July, the virtual shelves are (thinly) stocked with native apps. This is iOS 1.0.

Apple developers enthusiastically embrace the platform and the App Store starts its dizzying climb from an initial 500 apps in 2008 to today’s 1.2 million apps and 75B cumulative downloads.

However, developers’ affections don’t extend to Apple’s “security state”, the limits imposed on their apps in the name of security and simplicity. To be sold in the App Store, an app must agree to stay confined in its own little sandbox, with no way to communicate with other apps.

According to Apple dogma, this limitation is a good thing because it prevents the viruses and other malware that have plagued older operating systems and overly-trusting apps. One wrong click and your device is visited by rogue code that wreaks havoc on your data, yields control to remote computers, or, worst of all, sits silently and unnoticed while it spies on your keystrokes. No such thing on iOS devices. The prohibition against inter-application exchange vastly reduces the malware risk.

This protection comes with a cost. For example, when you use a word processor or presentation tool on a personal computer, you can grab text and images of any provenance and drop them into your project. On the iOS version of Pages, you can only see other Pages documents — everything else is out of sight and out of reach.

The situation becomes even more galling when developers notice that some of Apple’s in-house apps — iMessage, Maps, Calendar with Contacts — are allowed to talk among themselves. To put it a little too simply, Apple engineers can write code that’s forbidden to third party developers.

Apple’s rules for app development and look-and-feel are famously (and frustratingly) rigid, but the company is occasionally willing to shed its dogma. In 2013, for example, skeuomorphism was abandoned…do any of us miss the simulated leather and torn bits of paper on the calendar?

With last week’s unveiling of the new version of iOS, a much more important dogma has been tossed into the dustbin: An app can now reach beyond its sandbox. Apps can interconnect, workflows are simplified, previously unthinkable feats are made possible.

This is the real iOS 2.0. For developers, after the 2008 momentous opening of the App Store that redefined the smartphone, this is the second major release.

With the new iOS, a third-party word processor developer can release his app from its sandbox by simply incorporating the Document Picker:

“The document picker feature lets users select documents from outside your app’s sandbox. This includes documents stored in another app’s iCloud container or documents provided by a third-party extension.”

Users of the word processor will be able to see and incorporate all files, regardless of how they were created or where they’re stored (within the obvious physical limits). This is a welcome change from today’s frustratingly constricted situation.
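In code, the change is modest. Here is a minimal sketch, with hypothetical class and method names, of a word processor reaching outside its sandbox through the Document Picker:

```swift
import UIKit

// Hypothetical word-processor view controller; the names are illustrative only.
class EditorViewController: UIViewController, UIDocumentPickerDelegate {

    func openDocumentFromAnywhere() {
        // "public.text" is a UTI; the picker lists matching documents from other
        // apps' iCloud containers and from third-party document providers.
        let picker = UIDocumentPickerViewController(documentTypes: ["public.text"], in: .open)
        picker.delegate = self
        present(picker, animated: true)
    }

    func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentAt url: URL) {
        // The app can now read a file it did not create: the end of the silo.
        print("Picked a document outside our sandbox: \(url)")
    }
}
```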

iOS Extensions, a feature that lets applications offer their own services to other apps, played well when demonstrated by Craig Federighi, Senior VP of Apple Software:

“Federighi was able to easily modify Safari by adding a sharing option for Pinterest and a translation tool courtesy of Bing. Users will also be able to apply photo filters from third-party apps and use document providers like Box or OneDrive…”
Business Insider, Why You Should Be Excited for Extensions in iOS 8 

Prominent among the beneficiaries of iOS Extensions are third-party keyboard designers. Today, I watch with envy as my Droid compatriots Swype a quick text message. The keyboard layouts and input methods on my iPhone are limited to the choices Apple gives me — and they don’t include Swype. Tomorrow, developers will be able to augment Apple’s offerings, including keyboards that are designed for specific apps.
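The shape of such an extension is simple enough. A bare-bones sketch, one hypothetical key and nothing more, built on the UIInputViewController class that hosts custom keyboards:

```swift
import UIKit

// Hypothetical, minimal keyboard extension; a real keyboard is considerably more involved.
class SimpleKeyboardViewController: UIInputViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        // One lonely key, for illustration only.
        let key = UIButton(type: .system)
        key.setTitle("hello", for: .normal)
        key.frame = CGRect(x: 20, y: 20, width: 80, height: 40)
        key.addTarget(self, action: #selector(insertHello), for: .touchUpInside)
        view.addSubview(key)
    }

    @objc func insertHello() {
        // textDocumentProxy is the extension's only channel into the host app's text field.
        textDocumentProxy.insertText("hello")
    }
}
```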

As expected, developers have reacted enthusiastically to the end of silo hell. Phil Libin, Evernote’s CEO, sums up developer sentiment in the Ars Technica review:

“We’re most excited about extensions, widgets, TouchID APIs and interactive notifications. We’re all over all of that…This is a huge update for us. It feels like we got four out of our top five most wanted requests!”

Now, for the mandatory “To Be Sure” paragraph…

None of this is free. I don’t mean in the financial sense, but in terms of complexity, restrictions, adapting to new ways of doing old things as well as to entirely fresh approaches. While the relaxation of Apple’s “security state” strictures opens many avenues, it also heightens malware risk, something Apple is keenly aware of. In some cases the company will put the onus on the user, asking us to explicitly authorize the use of an extension. In other situations, as Charles Arthur points out in his WWDC article for The Guardian, Apple will put security restrictions on custom keyboards. Quoting Apple’s prerelease documentation:

“There are certain text input objects that your custom keyboard is not eligible to type into. First is any secure text input object [which is] distinguished by presenting typed characters as dots.
When a user taps in a secure text input object, the system temporarily replaces your custom keyboard with the system keyboard. When the user then taps in a nonsecure text input object, your keyboard automatically resumes.”

In part, the price to pay for the new freedoms will depend on Apple’s skills in building safeguards inside the operating system — that’s what all OSes strive for. Developers will also have to navigate a new labyrinth of guidelines to avoid triggering the App Store security tripwire.

That said, there is little doubt that the fall 2014 edition of iOS will be well received for both existing and new iDevices. Considering what Apple iOS developers were able to accomplish while adhering to the old dogma, we can expect more than simply more of the same when the new version of iOS is released.

Which brings us to Tim Cook and the stamp he’s put on Apple. Critics who moan that Apple won’t be the same now that Steve Jobs is gone forget the great man’s parting gift: “Don’t try to guess what I would have done. Do what you think is best.” With the Maps fiasco, we saw Cook take the message to heart. In a break with the past, Cook apologized for an Apple product without resorting to lawyerly caveats and justifications. In a real break with the past, he even recommended competing products.

We’ve also seen Cook do what he thinks is best in his changes to the executive team that he inherited from Jobs. Craig Federighi replaces 20-year NeXT/Apple veteran Scott Forstall; Angela Ahrendts is the new head of Retail; there’s a new CFO, Luca Maestri, and a new head of US Sales, Doug Beck. The transitions haven’t always been smooth — both Ahrendts’ and Beck’s immediate predecessors were Cook appointees who didn’t work out and were quickly dismissed. (Beck was preceded by Zane Rowe, former CFO at United Airlines… a CFO in a Sales job?)

Inside the company, Cook is liked and respected. He’s seen as calmly demanding yet fair; he guides and is well supported by his Leadership Team. This isn’t what the PR office says, it’s what I hear from French friends who work there. More than just French, they’re hard-to-please Parisians…

[Image: “I Love Rien, I’m Parisien”]

…but they like Cook, the way he runs the show. (True to their nature, they save a few barbs for the egregious idiots in their midst.)

With this overall picture of corporate cultural health and WWDC success in mind, let’s turn to Yukari Iwatani Kane, the author of Haunted Empire, Apple After Steve Jobs.

On her Web page, Kane insists her book, an exemplar of the doomed-without-Jobs attitude, is “hard-hitting yet fair”. That isn’t what most reviewers have to say. The Guardian’s Charles Arthur called it “great title, shame about the contents”; Time’s Harry McCracken saw it as “A Bad Book About Apple After Steve Jobs”; Jason Snell’s detailed review in Macworld neatly addresses the shortcoming that ultimately diminishes the book’s value:

“Apple after the death of Steve Jobs would be a fascinating topic for a book. This isn’t the book. Haunted Empire can’t get out of the way of its own Apple-is-doomed narrative to tell that story.”

Having read the book, I can respect the research and legwork this professional writer, previously at the Wall Street Journal, has put into her opus, but it’s impossible to avoid the feeling that Kane started with a thesis and then built an edifice on that foundation despite the incompatible facts. Even now she churlishly sticks to her negative narrative: Where last week’s successful WWDC felt like a confederation of engineers and application developers happily working together, Kane sees them as caretakers holding a vigil:

[Kane’s tweet]

The reaction to Kane’s tweet was “hard-hitting yet fair”:

[Responses to Kane’s tweet]

Almost three years after Tim Cook took the helm, the company looks hale, not haunted.

I’ll give Cook the last word. His assessment of Kane’s book: “nonsense”.

JLG@mondaynote.com

 

The Beats Music Rorschach Blot

 

Apple has a long track record of small, cautious, unheralded acquisitions. Has the company gone off course with the hugely risky purchase of Beats Music and Beats Electronics, loudly announced at an industry conference?

As Benedict Evans’ felicitous tweet put it, Apple’s $3B acquisition of Beats, the headphone maker and music streaming company, is a veritable Rorschach blot:

[Benedict Evans’ tweet]

The usual and expected interpretations of Anything Apple – with the implied or explicit views of the company’s future – were on full display at last week’s Code Conference after the Beats acquisition was officially announced during the second day of the event. Two of the conference’s high-profile invitees, Apple’s SVP Craig Federighi and Beats’ co-founder Dr. Dre (né André Young), quickly exited the program so all attention could be focused on the two key players: Eddy Cue, Apple’s Sr. VP of Internet Software and Services; and Jimmy Iovine, Beats’ other co-founder and freshly minted Apple employee. They were interviewed on stage by Walt Mossberg and Kara Swisher, the conference creators (59-minute video here).

Walt and Kara had booked Cue and Iovine weeks before Tim Bradshaw scooped the Apple/Beats story on May 8th in the Financial Times (the original FT article sits behind a paywall; TechCrunch version here). Was the booking a sign of prescience? smart luck? a parting gift from Katie Cotton as she retires as head of Apple PR? (And was Swisher’s warmly worded valentine to Cotton for her 18 years of service a quid pro quo acknowledgment?)

After the official announcement and the evening fireside chat, the Rorschach analysis began. Amidst the epigrams, which were mostly facile and predictable, one stood out with its understated questioning of cultural compatibility:

‘Iovine: Ahrendts or Browett?‘ 

The “Browett”, here, is John Browett, the British executive who ran Dixons and Tesco, two notoriously middle-brow retail chains. Apple hired him in April 2012 to succeed Ron Johnson as the head of Apple Retail… and showed him the door seven months later – a clear case of cultural incompatibility. When Browett tried to apply his estimable cost-cutting knowledge and experience to the Italian marble Apple Store, things didn’t work out — and the critics were quick to blame those who hired him.

Nothing of the sort can be said of Dame Angela Ahrendts. Now head of Apple’s physical and on-line stores, Ahrendts was lured from Burberry, a culturally compatible and Apple-friendly affordable luxury enterprise.

Will Iovine be a Browett or an Ahrendts?

In a previous Monday Note, I expressed concern for the cultural integration challenges involved in making the Beats acquisition work. What I learned from the on-stage interview is that Jimmy Iovine and Eddy Cue have known and worked with each other for more than ten years. Iovine says he’ll be coming to Cupertino ‘about once a month’, so my initial skepticism may have been overstated; Apple isn’t acquiring a company of strangers.

But are they acquiring a company that creates quality products? While many see Beats Music’s content curation as an important differentiator in the streaming business, one that would give a new life to its flagging music sales, others are not so sure. They find Beats Music’s musical choices uninspiring. I’m afraid I have to agree. I downloaded the Beats Music app, defined a profile, and listened for several hours while walking around Palo Alto or sitting at my computer. Perhaps it’s me, my age, or my degenerate tastes, but none of the playlists that Beats crafted for me delivered either the frisson of discovery or the pleasure of listening to a long-forgotten old favorite. And my iPhone became quite hot after using the app for only an hour or so.

Regarding the headphones: They’re popular and sell quite well in spite of what The Guardian calls “lacklustre sound”. I tried Beats Electronics’ stylish Studio headphones for a while, but have since returned to the nondescript noise-canceling Bose QC 20i, a preference that was shared (exactly or approximately) by many at the conference.

There was no doubt, at the conference, that Apple understands there are problems with Beats, but there’s also a feeling that the company sees these problems as opportunities. An overheard hallway discussion about the miserable state of the iTunes application (too strongly worded to repeat here verbatim) neatly summed up the opportunity: ‘Keeping Beats as a separate group affords Cook and Cue an opening for independently developing an alternative to iTunes instead of trying to fix the unfixable.’ It’s worth noting that the Beats Music app is available on mobile devices only, and it appears there’s no plan to create a desktop version. This underlines the diminished role of desktops, and points to the possibility of a real mobile successor to the aging iTunes application.

Continuing with the blot-reading exercise, many members of the audience found it necessary to defend the $3B price tag. Some point out that since Apple’s valuation is about 3X its revenue, Beats’ purported $1.5B hardware revenue easily “justifies” the $3B number. (Having consorted with investment bankers at various moments of my business life, as an entrepreneur, a company director, and a venture investor, I know they can be trusted to explain a wide range of valuations. Apparently, Apple is paying $500M for the streaming business and $2.5B for the hardware part.)

My own reading is that the acquisition price won’t matter: If the acquisition succeeds, the price will be easily forgotten; if it fails, Apple will have bigger worries.

Ultimately, the Apple-Beats products and services we haven’t yet seen will do the talking.

–JLG@mondaynote.com

The New York Times’ KPIs

 

Here are numbers lifted from the NYT’s Innovation report (see last week) and other sources.

Most of The New York Times’ reach comes from its digital audience. Regardless of the metric, viewers on desktops and mobile are crushing print readers.

[Chart 1]

Sources: ComScore for the monthly uniques (US only); internal count for home page views per 24-hour period; and GfK MRI, based on net weekday & Sunday readership, Fall 2013 survey.

[Chart 2]

[Chart 3]

In theory, the Times can get rid of print. Digital revenue far exceeds the cost of running the newsroom, which amounts to $200m a year for 1,300 writers and editors. Even if you add $20m for the 200 technical staff needed to run digital operations, and even 30% more for overhead, sales, marketing, and support staff, the result would still be a substantial profit – but would advertisers come in the same way for a digital-only product?
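The back-of-envelope arithmetic, using only the figures above (a rough tally, not NYT accounting):

```swift
// Rough annual cost of a digital-only New York Times, per the figures quoted above.
let newsroom = 200.0          // $M per year for 1,300 writers and editors
let digitalStaff = 20.0       // $M per year for ~200 technical staff
let overheadFactor = 1.3      // +30% for overhead, sales, marketing, support
let allInCost = (newsroom + digitalStaff) * overheadFactor
print(allInCost)              // ≈ 286 ($M per year)
```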

[Chart 4]

The ad market seems to reward quality journalism over aggregation and listicles: NYTimes.com monetizes itself three times better than Business Insider and nineteen times better than BuzzFeed. For this graph I simply divided annual advertising revenue for each medium by the number of monthly users: 30m UVs for the NYT, 12m UVs for Business Insider according to ComScore figures quoted in this 247wallst story, and revenue estimated at $20m by Reuters. (Had I used a 25m UV assumption, BI’s ARPU would have been only $0.80 per visitor per year.)
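For transparency, the arithmetic is a one-line division; here it is with the Business Insider figures quoted above:

```swift
// ARPU proxy used in the chart: annual ad revenue divided by monthly unique visitors.
let annualAdRevenue = 20_000_000.0   // Reuters estimate for Business Insider
let monthlyUniques = 12_000_000.0    // ComScore figure
print(annualAdRevenue / monthlyUniques)   // ≈ $1.67 per visitor per year
print(annualAdRevenue / 25_000_000.0)     // ≈ $0.80 with the 25m UV assumption
```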

[Chart 5]

The Times is known to have invested a lot in its digital subscription system (760,000 subs to date). It turns out to have been worth every penny. For those who doubt the paid model’s efficiency, The New York Times provides a great blueprint for quality media.

–frederic.filloux@mondaynote.com 

 

Peak PC. Intel Fork.

 

Propelled by Moore’s Law and the Internet, PCs have enjoyed four decades of strong growth, defying many doomsday prophecies along the way. But, with microprocessor performance flattening out, the go-go years have come to an end. Intel, the emperor of PC processors and a nobody in mobile devices, needs to react.

I’m suspicious of Peak <Anything> predictions. Some of us became aware of the notion of a resource zenith during the 1973 OPEC oil embargo, with its shocking images of cars lined up at gas stations (in America!):

[Photo: cars lined up at gas stations during the 1973 oil embargo]

This was Peak Oil, and it spelled doom for the auto industry.

We know what happened next: Cars improved in design and performance, manufacturers became more numerous. Looking at this bit of history through my geek glasses, I see three explanations for the rebound: computers, computers, and computers. Computer Assisted Design (CAD) made it easier to design new car models as variations on a platform; Volkswagen’s MQB is a good example. Massive computer systems were used to automate the assembly line and manage the supply chain. It didn’t take long for computers to work their way into the cars themselves, from the ECU under the hood to the processors that monitor the health of the vehicle and control the entertainment and navigation systems.

Since then, we’ve had repeated predictions of Peak Oil, only to be surprised by the news that the US will soon become a net oil exporter and, as Richard Muller points out in his must-read Physics for Future Presidents, we have more than a century of coal reserves. (Unfortunately, the book, by a bona fide, middle-of-the-road physicist, can’t promise us that physics will eventually push politics aside when considering the rise of CO2 in the atmosphere…)

I’ve heard similar End of The Go-Go Days predictions about personal computers since 1968 when my love affair with these machines started at HP France (I was lucky enough to be hired to launch their first desktop machine).

I heard the cry again in 1985 when I landed in Cupertino in time for the marked slowdown in Apple ][ sales. The never-before-seen round of layoffs at Apple prompted young MBAs, freshly imported from Playtex and Pepsi, to intone the It’s All Commodities Now dirge. I interpreted the cry (undiplomatically – I hadn’t yet learned to speak Californian) as a self-serving It’s All Marketing Now ploy. In the meantime, engineers ignored the hand-wringing, went back to work, and, once again, proved that the technology “mines” were far from exhausted.

In 1988, a Sun Microsystems executive charitably warned me: “PCs are driving towards the Grand Canyon at 100 mph!”.  A subscriber to Sun’s The Network Is The Computer gospel, the gent opined that heavy-duty computing tasks would be performed by muscular computers somewhere (anywhere) on the network. Desktop devices (he confusingly called them “servers” because they were to “serve” a windowing protocol, X11) would become commodities no more sophisticated or costly than a telephone. He had no answer for multimedia applications that require local processing of music, video, and graphics, nor could he account for current and imminent mobile devices. His view wasn’t entirely new. In 1965, Herb Grosch gave us his Law, which told us that bigger computers provide better economics; smaller machines are uneconomical.

And yet, personal computers flourished.

I have vivid memories of the joy of very early adopters, yours truly included. Personal computers are liberating in many ways.

First, they don’t belong to the institution, there’s no need for the intercession of a technopriest, I can lift my PC with my arms, my brains, and my credit card.

Second, and more deeply, the PC is a response to a frustration, to a sense of something amiss. One of mankind’s most important creations is the symbol, a sign without a pre-existing meaning: X as opposed to a drawing of a deer on a cave wall. Strung together, these symbols show formidable power. The expressive and manipulative power of symbol strings runs through the Song of Songs, Rumi’s incandescent poetry, Wall Street greed, and quantum physics.

But our central nervous system hasn’t kept up with our invention. We don’t memorize strings well; we struggle with long division, let alone extracting cube roots in our heads.

The PC comes to the rescue, with its indefatigable ability to remember and combine symbol strings. Hence the partnership with an object that extends the reach of our minds and bodies.

Around 1994, the Internet came out of the university closet, gave the PC access to millions of servers around the world (thus fulfilling a necessary part of the Sun exec’s prophecy), and extended our grasp.

It’s been great and profitable fun.

But today, we once again hear Peak PC stories. Sales have gone flat, never to return:

[Chart: PC shipments, 2014-2018]

This time, I’m inclined to agree.

Why?

Most evenings, my home-builder spouse and I take a walk around Palo Alto. Right now, this smallish university town is going through a building boom. Offices and three-layer retail + office + residence buildings are going up all around University Avenue. Remodels and raze-and-build projects can be found in the more residential parts of town. No block is left unmolested.

I can’t help but marvel. None of this activity, none of Silicon Valley would exist without Moore’s Law, Gordon Moore’s 1965 observation that the number of transistors on a chip (and, in effect, its performance) would double roughly every 18 to 24 months. And, for the better part of 40 years, it did, and rained money on the tech ecosystem, companies and people. PCs, servers, embedded electronics, giant network routers, cars...they’ve all been propelled because Moore’s Law has been upheld...until recently.

The 1977 Apple ][ had a 1MHz 8-bit processor. Today’s PCs and Macs reach 3.7GHz, a number that hasn’t changed in more than three years. This isn’t to say that Intel processors aren’t still improving, but the days when each new chip brought substantial increases in clock speed seem to be over.
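
How starkly the curve has bent is easy to quantify with a back-of-the-envelope calculation, using only the clock speeds quoted above (the 2011 cut-off is my assumption, inferred from the “more than three years” of stagnation; a rough sketch, not a measurement):

    import math

    # Clock speeds quoted in the text, rounded for illustration.
    apple2_mhz, apple2_year = 1.0, 1977      # Apple ][: 1 MHz
    modern_mhz, modern_year = 3700.0, 2011   # ~3.7 GHz, assumed flat since then

    growth = modern_mhz / apple2_mhz                       # ~3,700x
    doublings = math.log2(growth)                          # ~11.9 doublings
    years_per_doubling = (modern_year - apple2_year) / doublings

    print(f"{growth:,.0f}x over {modern_year - apple2_year} years: "
          f"one doubling every {years_per_doubling:.1f} years")
    # Roughly one doubling every 2.9 years up to ~2011 -- and essentially none since.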

One should never say never, but Moore’s Law is now bumping into the Laws of Physics. The energy needed to vibrate matter (electrons in our case) increases with frequency. The higher the clock frequency, the higher the power dissipation and the greater the heat that’s generated…and a PC can withstand only so much heat. Consider the cooling contraptions used by PC gamers when they push the performance envelope of their “rigs”:

[Image: a PC water-cooling block]
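
The physics behind those contraptions can be sketched with the standard first-order model of CMOS dynamic power: dissipation grows linearly with clock frequency and with the square of supply voltage, and pushing the clock higher usually requires pushing the voltage up as well. A minimal illustration, with made-up placeholder numbers rather than real chip data:

    def dynamic_power(c_eff, voltage, freq_hz, activity=1.0):
        """First-order CMOS dynamic power: P ~ activity * C * V^2 * f."""
        return activity * c_eff * voltage ** 2 * freq_hz

    # Hypothetical chip, placeholder values only:
    base = dynamic_power(1e-9, 1.0, 3.7e9)    # baseline clock and voltage
    hot  = dynamic_power(1e-9, 1.2, 7.4e9)    # double the clock, +20% voltage
    print(f"power ratio: {hot / base:.1f}x")  # ~2.9x the heat for 2x the speed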

To work around the physical limits, Intel and others resort to stratagems such as “multiple cores”, more processors on the same chip. But if too many computations need the result of the previous step before moving forward, it doesn’t matter how many cores you have. Markitects have an answer to that as well: “speculative branch execution”, the use of spare execution resources to compute possible next steps before it’s known which one will be needed. When the needed outcome appears, the “bad” branches are pruned and the process goes forward on the already-computed good branch. It makes for interesting technical papers, but it’s no substitute for an 8GHz clock speed.
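
The limit described above is usually stated as Amdahl’s Law: if some fraction of a job is inherently sequential, extra cores only accelerate the rest. A quick sketch (the serial fractions below are arbitrary examples, not measured workloads):

    def amdahl_speedup(serial_fraction, cores):
        """Amdahl's Law: best-case speedup on `cores` processors when
        `serial_fraction` of the work cannot be parallelized."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for serial in (0.05, 0.25, 0.50):
        print(f"serial {serial:.0%}: "
              f"8 cores -> {amdahl_speedup(serial, 8):.1f}x, "
              f"64 cores -> {amdahl_speedup(serial, 64):.1f}x")
    # With half the work serial, even 64 cores deliver less than a 2x speedup.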

If we need confirmation of the flattening out of microprocessor progress, we can turn to Intel and the delays in implementing its Broadwell chips. The move to a 14-nanometer “geometry” — the term here denotes the size of a basic circuit building block — is proving more difficult than expected. And the design isn’t meant to yield faster processors, just less power-hungry ones (plus other goodies such as better multi-media processing).

One possible reaction to this state of affairs is to look at tablets as a new engine of growth. This is what Microsoft seems to be doing by promoting its Intel-inside Surface Pro 3 as a laptop replacement. But even if Microsoft tablets turn out to be every bit as good as Microsoft says they are, they aren’t immune to the flattening out of Intel processor performance. (I don’t have an opinion yet on the product — I tried to buy one but was told to wait till June 20th.)

Does this broaden the opening for ARM-based devices? Among their advantages is a cleaner architecture, one devoid of the layers of backwards-compatibility silt that x86 devices need. ARM derivatives need less circuitry for the same computing task and, as a result, dissipate less power. This is one of the key reasons for their dominance in the battery-powered world of mobile devices. (The other is the customization and integration flexibility provided by the ARM ecosystem.) But today’s ARM derivatives run at lower speeds (a little above 1GHz for some) than Intel chips. Running at higher speeds without hurting battery life, and without adding the fan that Microsoft’s tablets need, will be a challenge.

With no room to grow, PC players exit the game. Sony just did. Dell took itself private and is going through the surgery and financial bleeding a company can’t withstand in public. Hewlett-Packard, once the leading PC maker, now trails Lenovo. With no sign of turning its PC business around, HP will soon find itself in an untenable position.

Intel doesn’t have the luxury of leaving its game — it only has one. But I can’t imagine that Brian Krzanich, Intel’s new CEO, will look at Peak PC and be content with the prospect of increasingly difficult x86 iterations. There have been many discussions of Intel finally taking the plunge and becoming a “foundry” for someone else’s ARM-based SoC (System On a Chip) designs instead of owning x86 design and manufacturing decisions. Peak PC will force the Intel CEO’s hand.

JLG@mondaynote.com

Time to Rethink the Newspaper. Seriously.

 

The newspaper’s lingering preeminence keeps pulling legacy media downward. Their inability to challenge the old sovereign’s status precludes every step of a critically needed modernization. (Part of a series).  

This column was scheduled to appear in the next two or three weeks. Then, on Thursday, the thick Innovation report by an ad hoc New York Times task force came to the fore. Like many media watchers, I downloaded its 97-page PDF, printed it (yes) and carefully annotated it. A lot has been written about it and I’m not going to add my own exegesis on top of numerous others. You can look at the always competent viewpoint from Nieman Lab’s Joshua Benton, who sees The leaked New York Times innovation report as one of the key documents of this media age. (Other good coverage includes Politico and Capital New York — I’m linking to the NYT tag, so you’ll also get all the stories pertaining to Jill Abramson’s brutal firing.)

[Image: the NYT Innovation report]

This report is an important one for two main reasons:

– The New York Times is viewed as one of the few traditional media companies to have successfully morphed into a spectacular digital machine. This backdrop gives a strong resonance to the report because many news organizations haven’t achieved half of what the NYT did, whether the metric is the performance of its digital subscription model or its achievements in high-yield advertising – all while keeping its impregnable ability to collect Pulitzer Prizes.

– We rarely, if ever, see an internal analysis expressed in such bold terms. Usually, to avoid ruffling feathers, such reports are heavily edited – which ends up being the best way to preserve the status quo. What’s more, mastheads tend to distance themselves from endorsing conclusions coming from the “management crowd” – a coldly demeaning phrase. But in the Times’ case, the report was expressly endorsed by the top editors (Abramson and her then second-in-command Dean Baquet, who now leads the shop).

Let’s now get back to this column’s original intent: why the newspaper must be reinvented, quickly and thoroughly.

Until last week, the reference on the matter was an email sent in January 2013 by Lionel Barber, the Financial Times editor (full-text in the Guardian), in which he sets a clear roadmap to shifting resources from print to digital:

I now want to set out in detail how we propose to reshape the FT for the digital age. (…)

[We] are proposing a shift of some resources from night work to day and from print to digital. This requires an FT-wide initiative to train our journalists to operate to the best of their abilities. And it requires decisive leadership. (…)

On unified news desks, we need to become content editors rather than page editors. We must rethink how we publish our content, when and in what form, whether conventional news, blogs, video or social media.

 A year later, key numbers for the FT are impressive:

– A 2013 profit of £55m ($92m, €67m) for the FT Group (which includes the 50% stake Pearson owns in the Economist Group); that’s an increase of 17%, while sales are slightly down by 1% to £449m ($755m, €551m)

– 415,000 digital subscribers (+31% in one year) who now account for two-thirds of the FT’s total audience (652,000 altogether: +8%, including a staggering 60% growth in corporate users at 260,000)

– A rise in digital subscription revenue that offsets the decline in advertising, which now accounts for 32% of FT Group revenue vs. 52% in 2008.

– For the first time, in 2013, FT digital content revenue exceeded print content.

The FT might be up for sale – but its management did quite well.

Echoing Lionel Barber’s view of resource reassignment are the equally strong terms of The New York Times’ Innovation Report:

In the coming years, The New York Times needs to accelerate its transition from a newspaper that also produces a rich and impressive digital report to a digital publication that also produces a rich and impressive newspaper. This is not a matter of semantics. It is a critical, difficult and, at times, painful transformation that will require us to rethink much of what we do every day. [page 81] 

Stories are typically filed late in the day. Our mobile apps are organized by print sections. Desks meticulously lay out their sections but spend little time thinking about social strategies. Traditional reporting skills are the top priority in hiring and promotion. The habits and traditions built over a century and a half of putting out the paper are a powerful, conservative force as we transition to digital — none more so than the gravitational pull of Page One. [It] has become increasingly clear that we are not moving with enough urgency. [page 59]

The newsroom should begin an intensive review of its print traditions and digital needs — and create a road map for the difficult transition ahead. We need to know where we are, where we’re headed and where we want to go. [page 82]

These quotes, from a news organization that never gave up on great journalism, will be helpful to those who desperately struggle to transform newsrooms. The report is also a plea for dumping the obdurate print-first obsession:

– It precludes modernizing the recruiting process: journalists are still too often picked for their writing abilities alone, while many other talents are needed.

– It limits audience development initiatives. In today’s print-oriented newsrooms, most writers and editors consider their jobs done once the story is filed in the CMS (Content Management System). Unfortunately, at fast-growing digital media outlets such as BuzzFeed, The HuffPo, Politico, Quartz, and Vox Media, all now part of the competitive landscape, throwing the story online is actually just the beginning. The ability to make a news item reverberate around the social sphere is now as important as being a good writer.

– As stated in the Times report, convincing the masthead of the mandatory resource-shifting is only part of the journey; most of the transformation’s weight lies on the shoulders of the rank and file in the newsroom.

– At the NYT, as everywhere else, the old guard (regardless of age, actually) is the main obstacle to the necessary rapprochement between the editorial and the business sides. By rejecting the idea that branded content would greatly benefit from the newsroom’s expertise (although everyone agrees that a news writer should never be asked to write advertorial), or that a conference is indeed an editorial initiative aimed at a valuable audience segment, such conservative postures actually shrink the company down to its most fragile component.

– The same goes for the analytics arsenal. I have heard scores of examples in which newsrooms call for more dashboards and indicators, but seldom use them. Editors should be supported by tactical analytics teams (including at the editorial-meeting level) that provide immediate and mid-term trends, as well as editorial decision-making tools.

One of the most difficult parts of the transformation of legacy media is addressed neither in the Times Innovation report nor in the FT’s exposé. It pertains to the future of the physical newspaper itself (the layout of the Times remains terribly out-of-date): How should it evolve? What should its primary goals be in order to address and seduce a readership now overwhelmed by commodity news? What should the main KPIs (Key Performance Indicators) of a modern newspaper be? What about content: types of stories, length, timelessness, value-added? Should it actually remain a daily?

(To be continued…)

frederic.filloux@mondaynote.com