News: Mobile Trends to Keep In Mind

 

For publishers, developing an all-out mobile strategy has become both more necessary and more challenging. Today, we look at key data points and trends for such a task. 

#1 The Global Picture
— 1.7bn mobile phones (feature phones and smartphones) were sold in 2012 alone
— 3.2bn people use a mobile phone worldwide
— Smartphones gain quickly as phones are replaced every 18 to 24 months
— PCs are completely left in the dust as shown in this slide from Benedict Evans’ excellent Mobile is Eating the World presentation:

[Slide: Benedict Evans, “Mobile is Eating the World”]

The yellow line has two main components:
— 1 billion Android smartphones are said to be in operation worldwide (source: Google)
— 700 million iOS devices have been sold over time, with 500 million still in use, which corresponds to the number of iTunes accounts (source: Asymco, one of the best references for the mobile market.)
— 450 million Symbian-based feature phones are in operation (Asymco.)

#2 The Social Picture 

Mobile phone usage for news consumption is increasingly tied to social networks. Here are some key numbers:
— Facebook: about 1.19bn users; we don’t exactly know how many are active
— Twitter: 232 million users
— LinkedIn: 259 million users

When it comes to news consumption in a social environment, these three channels make different contributions. This chart, drawn from a Pew Research report, shows the penetration of different social networks and the proportion of the US population who get their news from each.

[Chart: Pew Research, social network penetration and news consumption]

One of the most notable data points in the Pew Report is the concentration of sources for social news:
— 65% say they get their news from a single social site
— 26% from two sites
— 9% from three or more (such as Google+ or LinkedIn)

But, at the same time, these sources are completely intertwined. Again, based on the Pew survey, Twitter appears to be the best distributor of news.

Among those who get their news from Twitter:
— 71% also get their news on Facebook
— 27% on YouTube
— 14% on Google+
— 7% on LinkedIn

Put another way, Facebook collects more than half of the adult population’s news consumption on social networks.

But a closer look at demographics slightly alters the picture, because not all social networks are equal when it comes to education and income segmentation:

If you want to reach the Bachelor+ segment, you will get:
— 64% of them on LinkedIn
— 40% on Twitter
but…
— only 30% on Facebook
— 26% on G+
— 23% on YouTube

And if you target the highest income segment (more than $75K per year), you will again favor LinkedIn, which collects 63% of news consumers in this slice, more than Facebook (41%).

Coming back to the mobile strategy issue, despite Facebook’s huge adoption, Twitter appears to be the best bet for news content. According to another Pew survey, the Twitter user is more mobile:

Mobile devices are a key point of access for these Twitter news consumers. The vast majority, 85%, get news (of any kind) at least sometimes on mobile devices. That outpaces Facebook news consumers by 20 percentage points; 64% of Facebook news consumers use mobile devices for news. The same is true of 40% of all U.S. adults overall. Twitter news consumers stand out for being younger and more educated than both the population overall and Facebook news consumers.

And, as we saw earlier, Twitter redistributes extremely well on other social platforms. It’s a no-brainer: any mobile site or app should carry a set of hashtags, whether it’s a stream of information produced by the brand or prominent bylines known for their insights.

#3 The Time Spent Picture

Here is why news is so complicated to handle in mobile environments. According to Flurry Analytics, the average American user spends 2 hours and 38 minutes each day on smartphones and tablets. Apps account for 80% of that time, and news represents just 2% of app consumption. The remaining 20% is spent in a browser, where we can assume the share of news to be much higher. But even under the most optimistic hypothesis, news consumption on a mobile device amounts to around 5 to 6% of time spent (a figure corroborated by other sources such as Nielsen). Note that this proportion seems to be decreasing: in May 2011, Flurry Analytics stated that news accounted for 9% of time spent in the app ecosystem.
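For the arithmetic-minded, here is a minimal sketch of that back-of-the-envelope calculation; the figures come from the paragraph above, except the news share of browser time, which is my assumption, chosen to illustrate the “optimistic hypothesis”:

```python
# Back-of-the-envelope sketch of the Flurry Analytics numbers cited above.
total_minutes = 2 * 60 + 38          # 2h38m spent daily on smartphones and tablets
app_share, browser_share = 0.80, 0.20

news_in_apps = 0.02                  # news = 2% of app time (Flurry)
news_in_browser = 0.20               # assumption: news takes a bigger slice of browser time

news_share = app_share * news_in_apps + browser_share * news_in_browser
print(f"News: {news_share:.1%} of mobile time, i.e. {total_minutes * news_share:.0f} min/day")
# -> News: 5.6% of mobile time, i.e. 9 min/day
```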

This view is actually consistent with broader pictures of digital news consumption, such as these two provided by Nielsen, which show that while users spend 50 minutes per month on CNN (thanks to its broad appeal and its video content), they spend only 18 minutes on the NYT and a mere 8 minutes on the Washington Post:

[Chart: Nielsen, monthly minutes spent on news sites]

All of the above compares to 6hrs 42min spent on Facebook, and 2hrs each on YouTube and Yahoo sites.

In actionable terms, this shows the importance of having smartphone apps (or mobile web sites) sharply aimed at providing news in the most compact and digestible way. The “need to know” focus is essential in mobile precisely because catching eyeballs and attention has become increasingly challenging. That’s why The New York Times is expected to launch a compact version of its mobile app (currently dubbed N2K, for Need to Know), aimed at the market’s youngest segment and most likely priced just below $10 a month. (The Times is also doing it because the growth of digital subscriptions aimed at the upper market is slowing down.) At the other end of the spectrum, the NYT is also said to be working on a digital magazine for the iPad, featuring rich multimedia narratives in (very) long form such as the Pulitzer-winning Snow Fall (on that matter, the Nieman analysis is worth a read).

This also explains why the most astute digital publishers go for newsletters designed for mobile that are carefully – and wittily – edited by humans. (One example is the Quartz Daily Brief; it’s anecdotal, but everyone I recommended this newsletter to now reads it on a daily basis.) I personally no longer believe in automated newsletters that repackage web site headlines, regardless of their quality. On smartphones, the fairly sophisticated users (read: educated and affluent) sought by large media demand time-saving services: to-the-point content, neatly organized in an elegant visual package and — that’s a complicated subject — tailored to their needs.

#4 The ARPU View

On mobile devices, the Average Revenue Per User should be a critical component when shaping a mobile strategy. First, let’s settle the tablet market question. Even though the so-called “cheap Android” segment ($100-150 for a plastic device running an older version of Android) thrives in emerging markets, when it comes to extracting significant money from users, the iPad runs the show. It accounts for 80% of tablet web traffic in the US, UK, Germany, France, Japan, and even China (source: Adobe.)

The smartphone market is more complicated. A year ago, many studies by App Annie or Flurry Analytics showed that the iPhone ecosystem brought in four times more revenue than Android. More recently, Flurry Analytics ran a story stating that the average app price on Android was $0.06 vs. $0.19 for the iPhone and $0.50 for the iPad.

The gap is closing as Android terminals attract a growing number of affluent users. Still, compared to iOS, it is notoriously difficult to sell paid-for apps and services in the Android ecosystem, and Android ads remain cheaper. This is likely to remain the case for quite a while, as iOS devices are likely to stay much more expensive than Android ones, and therefore better able to attract high-end demographics and the ads that target them.

How this impacts a smartphone strategy: Publishers might consider different business models for the two main ecosystems. They could go for fairly sophisticated apps in the iOS world, served by a well-oiled payment system allowing many flavors of in-app add-ons. By contrast, the Android environment favors a more “go-for-volume” approach; but things could evolve quickly as the Android share of the high-end audience grows and as the Play Store gains in sophistication and becomes as friction-free as the App Store.

frederic.filloux@mondaynote.com

Sound Holiday Thoughts

 

Nothing too serious this week. No Microsoft CEO succession, no Samsung $14B marketing budget exceeding Iceland’s GDP, no Apple Doom. Just Holiday – or Cyber Monday – audio talk.

I used to listen to sound. Now I enjoy music. It started with explosives. I was lucky to be born at a time and place (an arch-communist suburb of post-war Paris) where a 9-year-old kid could hopscotch to the drugstore around the corner and buy nitric, sulfuric, or hydrochloric acid, sulfur, potassium chlorate, hydrogen peroxide… and other fascinating wares – among which a flogger with short leather lashes I was also acquainted with. Imagine this in today’s California…

After a minor eye-burn incident, I was firmly redirected towards electronics and started building crystal radios, rudimentary AM sets using a galena (lead sulfide) crystal.

My good fortune continued. In 1955, my parents decided to send their increasingly restive child to a Roman Catholic boarding school in Brittany. What awaited me there, besides a solid classical education, was a geeky Prefect of Discipline who had a passion for hobby electronics. After hours, I would go to his study to read Radio Plans and Le Haut-Parleur — the French equivalents of Nuts and Volts — and salivate over the first OC71 transistor that had just landed on his desk (amazingly, the transistor is still available). This was exciting: Fragile, noisy, power hungry vacuum tubes that required both high and low voltages were going to be replaced by transistors. Numerous, randomly successful projects followed: radios, mono and stereo amplifiers, hacked surplus walkie-talkies.

Years later, in June 1968, I landed a dream job launching HP’s first desktop computer, the 9100A, on the French market. I distinctly recall the exultant feeling: After years of the psycho-social moratorium referred to in an earlier Monday Note, I had entered the industry I love to this day.

With more money, I was able to afford better turntables, tape decks, receivers, amplifiers and, above all, speakers. For a while I started to listen more to the sound they produced than to the music itself. The Lacanians have a phrase for the disease: Regressive Fixation On Partial Objects…

HP had installed an über-geek, Barney Oliver, as head of its Research organization, HP Labs. Adored for his giant intellect and free spirit, Oliver decided stereo amplifiers of the day (early 70’s) were either expensive frauds or noisy trash. Or both. So he raided the HP parts bin and built us a real stereo amplifier. (The manual and schematics are lovingly preserved here.) Four hundred were built. I bought two, because you never know. This was a vastly “overbuilt” device that used high-precision gas chromatograph attenuators with .1dB steps as volume controls. (Most of us have trouble perceiving a 1dB difference.) The power supply had such enormous capacitors that the amplifier would keep “playing” for 25 seconds after it was turned off.

HP, the old, real HP, truly was technophile heaven.

As years passed, I became uncomfortable with the audio arms race, the amps that pushed out hundreds or even thousands of watts, the claims of ever-vanishing .1%, nay, .01% distortion levels, the speakers that cost tens of thousands of dollars. (The Rolls-Royce of audio equipment of the time was… McIntosh.)

A chance encounter with The Audio Critic helped me on the road to recovery. Peter Aczel, the magazine’s publisher and main author, is a determined Objectivist Audiophile, a camp that believes that “audio components and systems must pass rigorously conducted double-blind tests and meet specified performance requirements in order to validate the claims made by their proponents”. Committed to debunking Subjectivists’ claims of “philosophic absolutes” and ethereal nuance, Aczel has attracted the ire of high-end equipment makers who hate it when he proves that their oxygen-free copper cables with carefully aligned grains are no better than 12-gauge zip wire at 30 cents per foot.

(A helpful insight from Aczel: In an A/B audio comparison, the louder gear inevitably wins, so loudness needs to be carefully equalized. This “sounds” like the reason why, over the last two or three decades, wines have increased their alcohol concentration to 14% or more: In tastings, the stronger wine is almost always preferred.)

The real turning point from sound fetishism to music appreciation came in early 2002 when I bought an iMac G4 that came with two small but surprisingly good external loudspeakers:

[Photo: iMac G4 with its external speakers]

They won’t fill a concert hall, they can’t compete with my old JBL speakers but coupled with iTunes, the iMac had become a pleasant stereo. (Due, of course, to the improvements in magnetic alloys such as neodymium compounds, more efficient Class D amplifiers, and… but I’ll stop before I relapse.)

A decade later — and skipping the politically incorrect jokes about married men experiencing premature hearing impairment in the high-frequency region of the spectrum — I’m now able to focus on music and expect the reproduction equipment to stay out of the way, in both practical and auditory terms.

Today’s “disk drives” are solid state and store hundreds of gigabytes; CDs and DVDs have all but disappeared; iPods, after a few years in the sun, have been absorbed into phones and tablets. (And we watch iTunes on the road to becoming Apple’s Windows Vista.)

After years of experimentation, I’ve come to a happy set of arrangements for enjoying music at home, at work, and on the go. Perhaps these will help your own entertainment. (Needless to say, I bought all of the following – and many others – with my own money, and the Monday Note doesn’t receive compensation of any kind.)

At home, I use a Bose Companion V desktop set-up. It consists of two pods, one on each side of the screen, plus a bass module anywhere under the desk. Bose’s idea is to take your PC’s output from a USB port and process it to add an illusion of depth/breadth when sitting at your desk. For me, it works. And the output is strong enough for a family/kitchen/dining room.

That said, I’m not fond of all Bose products. I find the smaller Companion units too bass-heavy, and I didn’t like (and returned) their AirPlay speaker. As for the company’s design sensibility, Amar Bose gave me the evil eye more than 15 years ago when I dared suggest that the industrial design of his Wave System could use updating (I was visiting his Framingham Mountain office with a “noted Silicon Valley electrics retailer”). The design hasn’t changed and is selling well.

At the office, I followed advice from my old friends at Logitech and bought two Ultimate Ears Bluetooth speakers. With a (recently improved) smartphone app, they provide very good stereo sound. At $360/pair, the UE system costs about the same as the Companion V; what UE lacks in the Bose’s power, it makes up for in portability. The only knock is that the mini-USB charging port is on the speaker’s bottom — you have to turn it on its head to charge it.

Speaking of portability, Bose’s Soundlink Mini, another testament to modern speaker and amplifier technology, fits in a bag or roll-aboard and shocks unprepared listeners with its clean, powerful sound and clean design. No discounts on Amazon, which we can attribute to Bose’s unwavering price control and to the system’s desirability.

I kept the best for last: Noise-reducing earphones. The premise is simple: A microphone captures ambient sound, embedded circuitry flips the waveform and adds it back into the signal, thus canceling the background noise and allowing us to enjoy our music undisturbed. This is a consumer application of Bose’s first noise-canceling headphones for aviation, still considered the domain’s standard. A “pro” set costs about $1,000. Consumer versions are $300 or less.
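For the curious, here is a toy sketch of the principle, with synthetic sine waves standing in for the music and the cabin rumble; real products do this with microphones, filters and sub-millisecond latency, not NumPy arrays:

```python
import numpy as np

# Destructive interference, the idea behind noise cancelation: flip the
# captured ambient waveform and add it back, leaving only the music.
t = np.linspace(0, 1, 44100)                # one second at CD sample rate
music = np.sin(2 * np.pi * 440 * t)         # the signal we want (A440)
noise = 0.5 * np.sin(2 * np.pi * 90 * t)    # low-frequency cabin rumble

at_ear = music + noise                      # what reaches the ear untreated
anti_noise = -noise                         # inverted copy of the captured noise
print(np.allclose(at_ear + anti_noise, music))  # True: the rumble cancels out
```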

To my ears, early models were disappointing: they introduced small levels of parasitic noise and featured indifferent music reproduction. Nonetheless, sales were strong.

Later models, from Bose and others, improved both music playback and noise cancelation, but still felt big, unwieldy. Again, a matter of personal preference.

Yielding to the friendly bedside manner of an Apple Store gent, I recently bought a pair of Bose QC 20i “noiseless” earphones (about $300). The earbuds are comfortable and so “skin-friendly” that you forget you’re wearing them (I mention this because comfort will always trump quality). They’re also more secure, less prone to falling out of your ears than are Apple’s own devices.

Now, as I take my evening walk in the streets of Palo Alto enjoying the Bach Partitas, the street noise is barely a whisper; cars seem to glide by as if they were all Teslas. For civility and safety, there’s a button to defeat noise reduction, and the mandatory Pause for phone or street conversations. There are other nice details, such as a spring-loaded clip for your shirt or lapel, or a dead-battery mode that still lets music — and noise — come through.

Next week, we’ll return to more cosmic concerns.

JLG@mondaynote.com

The Internet of Things: Look, It Must Work

 

For twenty-five years, we’ve been promised a thoroughly connected world in which our “things” become smarter, safer and save energy. But progress doesn’t seem to match the glowing predictions.

The presentation is straightforward and enticing:

Picture this: A 25¢ smart chip inside a light-bulb socket. Networked through the 110V wires, it provides centralized on-off control and monitors the bulb’s “health” by constantly measuring electrical resistance. Imagine the benefits in a large office, with thousands, or even tens of thousands of fixtures. Energy is saved as lighting is now under central, constantly adaptable control. Maintenance is easier, pinpointed, less expensive: Bulbs are changed at precisely the right time, just before the filament burns out.
Now, add this magic chip to any and all appliances and visualize the enormity of the economic and ease-of-use benefits. This is no dream. . . we’re already working on agreements in energy-conscious Scandinavia.

When did this take place?

There is a one-word giveaway to this otherwise timeless pitch: filament. Incandescent lights have been regulated out of existence, replaced first by CFLs (compact fluorescent lamps — expensive and not so pleasant) and then by LEDs (still expensive, but much nicer).

The pitch, reproduced with a bit of license, took place in 1986. It’s from the business plan of a company called Echelon, the brainchild of Mike Markkula, Apple’s original angel investor and second CEO.

The idea seemed obvious, inevitable: The relentless physics of Moore’s Law would make chips smaller, more powerful, and less expensive. Connected to a central household brain, these chips would control everything from lightbulbs and door locks to furnaces and stoves. Our lives would be safer and easier. . . and we’d all conserve energy.

The idea expresses itself in variations of the core Home Automation concept, the breadth of which you can visualize by googling “home automation images”:

[Images: home automation product montage]

In 1992, Vint Cerf, our beloved Internet pioneer, posed with his famous IP On Everything t-shirt:

[Photo: Vint Cerf wearing his “IP On Everything” t-shirt]

This was a modern, ringing restatement of Echelon’s vision: The objects in our homes and offices will have sensors and actuators. . . and a two-way connection to the Internet, to a world of data, applications, people (and, inevitably, marketing trolls).

It’s been a quarter century since Echelon started, more than two decades since Vint Cerf’s pithy yet profound prophecy. We now speak of the Internet Of Things and make bold predictions of billions of interconnected devices.

Earlier this year, Cisco invited us to “capture our share” of the $14.4T (yes, T as in trillion) business opportunity that The Internet of Everything (IoE) will create in the coming decade. Dave Evans, Cisco’s chief futurist, tells us that within ten years we’ll see “50 billion connected things in the world, with trillions of connections among them“.

Maybe. . . but that’s a lot of “things”.

As Network World points out, “[m]ore than 99 percent of physical objects are not now connected to the Internet”. The exact percentage matters less than the existential truth that the obvious, attractive, inevitable idea of a universe of interconnected objects is taking a long, long time to materialize.

Does the concept need a Steve Jobs to coalesce the disparate components into a coherent, vibrant genre? Are important pieces still missing? Or, like Artificial Intelligence (rebranded as Machine Learning in an attempt to soothe the pain of repeated disappointments), are we looking at an ever-receding horizon?

Echelon’s current state (the company went public in 1998) serves as a poster child for the gulf between the $14.4T vision and today’s reality.

First, some context: Mike Markkula, who is still Vice Chairman of Echelon, has assembled a strong Board of Valley veterans who have relevant experience (I know several of them well — these aren’t just “decorative directors”). The company’s Investor Overview offers an impressive Corporate Profile [emphasis mine]:

“Echelon Corporation is an energy control networking company, with the world’s most widely deployed proven, open standard, multi-application platform, selling complete systems and embedded sub-systems for smart grid, smart city and smart building applications. Our platform is embedded in more than 100 million devices, 35 million homes, and 300,000 buildings and powers energy savings applications for smart grids, smart cities and smart buildings. We help our customers reduce operational costs, enhance satisfaction and safety, grow revenues and prepare for a dynamic future.”

But the latest Earnings Call Presentation paints a different picture:

[Slide: Echelon Q3 FY13 earnings highlights]

The Gross Margin is good (58.5%), as is the company’s cash position ($56.7M). . . but Echelon’s business is a tiny $18M — about a millionth of Cisco’s predicted motherlode. That’s a decrease of 38% compared to the same quarter last year.

So, we have a company that’s in the hands of competent technologists who have deep knowledge of the domain; a company with real, proven products that have been deployed in millions of homes and offices — but with little revenue to show for its technology and experience.

This seems to be the case for the Everything Connected industry in general. There’s no General Electric, no Microsoft, no Google (the latter abandoned its PowerMeter initiative in 2011).

Why not? The answer might lie in the Echelon presentation already mentioned:

[Slide: Echelon on the Internet of Things]

After more than 25 years of developing devices and platforms, Echelon concludes that the Internet of Things isn’t going to be felt as a direct, personal experience. Instead, it will be mostly invisible: components and subsystems in factories, warehouses, fleets of trucks and buses, office buildings. . .

Consumers certainly don’t have to be sold on the benefits of connected devices. We can’t function without our smartphones, tablets, and PCs. But once we stray outside the really personal computer domain, the desirability of connected devices drops dramatically.

The dream of giving sensors, actuators, and an Internet connection to everyday objects feels good, until one looks at matters of practical and commercial implementation. Will the software in my smart toaster be subject to a licensing agreement? Will it stop toasting if I don’t renew my subscription? (This isn’t just a dystopian strawman; one electric car manufacturer says it can remotely disable the battery if you don’t pay up.)

And then there are the (very real) security and privacy concerns. Could our appliances be hacked? Could my toaster spy on me, collect more data to be used to peddle related goods?

Home automation and security systems seem like a natural fit for the Internet of Things, but they’re still expensive, complicated, and fragile – if not hopelessly primitive. Some connected thermostats, such as the Nest (with its smoke and carbon monoxide detector), work well, but most of them are stubbornly user-hostile.

When we wander into the realm of connected appliances, what we see are novelties, fit only for hobbyists and technofetishists (do we really need a toaster that sends a tweet when it’s done?). This is nothing like the smartphone wave, for a simple reason: Appliances are just that, appliances. It’s a word we use as an insult to describe a boring car.

JLG@mondaynote.com

 

What to do with $250m in digital journalism? (II)

 

In a previous Monday Note, we looked at an ideal newsroom, profusely funded by Pierre Omidyar and managed by whistleblowing facilitator Glenn Greenwald, a structure that combines the agility of a tech startup with the highest of journalistic standards. Today, we look at the product and the business model.   

Profit or non-profit? Definitely for-profit! First, because the eBay founder’s track record (see this The New Inquiry article) shows a fierce appetite for profitable ventures. And second, because there is no such thing as a free and independent press without a strong business side: financial vulnerability is journalism’s worst enemy, while profit breeds scalability. How to make money, then, with a narrow niche such as investigative journalism? Can Omidyar’s venture move beyond the cross-subsidy system that powered legacy media for decades? This weekend, in a FT.com interview, Henry Blodget justified the deluge of eye-grabbing headlines spread over Business Insider by saying “The dining and motoring sections pay for the Iraq bureau”. . .

For this, Omidyar can look at a wide set of choices: he could devise click-driven content built on the proven high volume / cheap ads equation. Or he could opt for what I’ll call the Porsche Model, one in which the most visible activity (in this case sports car manufacturing) brings only a marginal contribution to the P&L compared to financial activities: in 2009, Porsche made $1bn in profit from car sales and almost $7bn betting on Volkswagen stock. More realistically, an endowment-like model sounds natural for a deep-pocketed investor like Pierre Omidyar. Most US universities do fine with that model: a large sum of money, the endowment, is invested and produces enough interest to run operations. One sure thing: if Omidyar really wants to go after big corporations and finance, he should shield the venture from pressure by keeping its business model disconnected from its editorial operation.

Investigative journalism is a field in which the subscription model can work. In France, the web site Mediapart offers a credible example. Known, among many other feats, for its investigation of the Budget Minister’s hidden Swiss bank account that led to his resignation, Mediapart maintains a newsroom of seasoned reporters working on hot topics. In five years, it has collected close to 80,000 subscribers paying €9.90 per month; the web site intends to make €6m ($8m) in revenue and a profit of €0.4m ($0.5m) this year. Small amounts indeed, but not so bad for a market one fifth the size of the US. Scaling up to the huge English-speaking market, and assuming it goes for a global scope rather than US-centric coverage, the Omidyar-Greenwald venture could shoot for 500,000 to 800,000 subscribers within a few years, achieving $40m to $60m in yearly revenue.
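A quick sketch of that scaling exercise; the subscriber counts are those quoted above, while the monthly prices are my assumptions (a global venture would likely charge less than Mediapart’s €9.90):

```python
# Scaling Mediapart's subscription model to the English-speaking market.
def yearly_revenue(subscribers, monthly_price_usd):
    return subscribers * monthly_price_usd * 12

for subs in (500_000, 800_000):              # the target range above
    for price in (5.00, 7.50):               # assumed net monthly ARPU in USD
        revenue = yearly_revenue(subs, price)
        print(f"{subs:>7,} subs at ${price:.2f}/mo -> ${revenue / 1e6:.0f}m/yr")
# 500,000 subscribers at $7.50/mo yield $45m; 800,000 at $5.00 yield $48m —
# consistent with the $40m-$60m range above.
```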

On the product side, the motto should be Try Everything – on multiple segments and platforms.

Here is a possible product-line structure:

[Chart: possible product-line structure]

Mobile should primarily be a news updating vector. In a developing story, say hearings on the NSA scandal, readers want quotes, live blogging, snapshots – all easy to grab while on the go. Addiction must be the goal.

Newsletters deserve particular attention. They remain an excellent vector for distributing news and a powerful traffic driver. But this requires two conditions: First, they must be carefully designed and written by human beings, not robots. Second, they must be run like an e-commerce operation: a combination of mass emailing and heavy personalization based on collected navigation data. For an editorial product, this means mapping out granular “semantic profiles” in order to serve users with tailored content. If the Omidyar-Greenwald project lives up to its promise, it will deliver a regular stream of exclusive material. A cleverly engineered email system (both editorially and technically) stands a good chance of becoming a must-read.

User profiling must allow the creation of several verticals. Judging from the first bylines (see this CNet article), the coverage is intended to be broad: from national security to White House politics, sports issues (sure click-bait), civil liberties, military affairs, etc. This justifies working on audience segmentation, as not everyone will be interested in the same subjects. The same goes for social web extensions: the more segmented, the better.

Web TV. If you want to go beyond kittens or Nascar crashes, providing TV content on the web is more difficult than it appears. But “programs” available in Scandinavia show that, for developing stories, Web TV can be a great substitute for conventional TV, as it allows simultaneous coverage of multiple events. Nordic viewers love it.

Fact-checking. Since the Omidyar-Greenwald project is built on trust and transparency, it should consider launching the equivalent of politifact.com, a fact-checking web site operated by the Tampa Bay Times, which landed a Pulitzer Prize in 2009. A vertical fact-checking site on national security, privacy and data protection issues would definitely be a hit.

Other languages. Going after the Chinese market could be hard to resist. According to Internet World Stats, it is by far the largest single market in the world, with 538 million people connected to the web in 2012. For a media venture aimed at lifting the veil on corruption, China offers strong potential in itself. As for evading censorship, it should be an appealing challenge for the squad of hackers hired by Omidyar-Greenwald.

A print version? Yes. It sounds weird, but I strongly believe that a well-designed weekly, in a large format (tabloid or Berliner), distributed in selected, affluent markets, would complete the product line. Print remains a vector of choice for specific, long-form reading, for ambitious news scenographies with high-impact photographs, for an in-depth profile or a public interest story.

Global Thinking. Its potential for worldwide reach is one of this venture’s most interesting factors. It will be of limited interest if it doesn’t embrace a global approach to public interest journalism, in large democracies but also in countries deprived of a free press (a long list). Creating a high-standard, worldwide affiliation system to promote investigative journalism everywhere, regardless of economic and political constraints, should definitely be on the founders’ roadmap.

frederic.filloux@mondaynote.com

Amazon and Apple Business Models

 

Amazon “loses” money, Apple makes tons of it. And yet, Wall Street prefers Jeff Bezos’s losses to Tim Cook’s profits. A look at the two very different cash machines will help dispel the false paradox.

The words above were spoken by an old friend and Amazon veteran, as three French émigrés talked shop at a Palo Alto watering hole. The riposte would fit as the epigraph for The Amazon Money Pump For Dummies, an explanation of Amazon’s ever-ascending stock price while the company keeps “losing money”.

(I don’t like the term Business Model, and Bizmodel even less so. I prefer Money Pump with its lively evocations: attach the hose, adjust the valves, prime the mechanism, and then watch the flow of money from the customer’s pocket to the investor’s purse).

Last quarter, Amazon’s revenue grew by 24% year-on-year, while the company lost about 1% of its net sales of $17B. This strong but profitless revenue growth follows an established pattern:

[Chart: Amazon revenue growth vs. flat profits]
Despite the company’s flat-lined profits, Wall Street loves Amazon and keeps sending its shares to new heights. Since its 1997 IPO, AMZN has gone from $23 to $369/share:

[Chart: AMZN share price since the 1997 IPO]
How come?

[Professional accountants: Avert your eyes; the following simplification could hurt.

Profit isn't cash, it's merely an increase in the value of your assets. Such increase can be illiquid. Profit is an accountant's opinion. Cash is a fact.]

Amazon uses its e-commerce genius to prime the money pump. The company seduces customers through low prices, prompt delivery, an ever-expanding array of services and products, and exemplary customer attention. What keeps the pump going is the lag between the moment they ding my credit card and the time that they pay Samsung for the Galaxy Note tablet I ordered. Last quarter, Amazon’s daily revenue was about $200M ($17B divided by 90 days). If it waits just 24 hours to pay its suppliers, the company has $200M to play with. If it delays payment for a month, that’s $6B it can use to invest in developing the business. Delay an entire quarter…the numbers become dizzying.
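Spelled out, the float arithmetic looks like this (all figures are from the paragraph above):

```python
# Amazon's cash float: customers pay immediately, suppliers get paid later.
quarterly_sales = 17e9
daily_revenue = quarterly_sales / 90         # ~$189M/day, rounded to $200M above

for delay_days in (1, 30, 90):
    float_cash = daily_revenue * delay_days  # cash on hand thanks to the payment lag
    print(f"pay suppliers after {delay_days:>2} days -> ${float_cash / 1e9:.1f}B of float")
# ->  1 day: $0.2B;  30 days: $5.7B (the ~$6B above);  90 days: $17.0B
```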

But, you’ll say, there’s nothing profoundly original there. All businesses play this game, retail chains depend on it. Definitely — but what sets Amazon apart is what it does with that flow of free cash. The company is relentless in building the best services and logistics machine on Earth. Just this week, we read that Amazon has hired the US Post Office to deliver Amazon packages (only) on Sundays.

Amazon uses cash to build a better Amazon that keeps bringing in more cash.

Why do suppliers “loan” Amazon such enormous amounts of cash? Why do they let the company grow on their backs? Because, just like Wall Street, they trust that the company will keep growing and give them ever more business. Amazon might be a hard taskmaster, but it can be trusted to pay its bills (eventually) — the same cannot be said of some other retail organizations.

Amazon doesn’t care that it doesn’t make a “profit” on the sale of a box of Uni-Ball pens that it ships for free. Rather, it focuses on pumping enormous amounts of cash into the virtuous spiral of an ever-expanding business. Wall Street rewards the company with an equally expanding market cap.

How long can Amazon’s expansion last? Will the tree grow to the sky? If we consider a single line of business — books, for example — saturation will inevitably set in. But one of the many facets of Bezos’ genius is that he’s always been able to find new territories. Amazon Web Services is one area where the company is now larger than all of its competitors combined, and shows no sign of slowing down or of approaching saturation.

In the end, we mustn’t be fooled by the simplicity of Amazon’s money pump. Bezos’ genius is in the implementation, in the details. Like a chef who’s not afraid to disclose his recipes, Bezos writes to his shareholders every year — his missives are all here — and he always appends his first 1997 letter, thus reminding everyone that he’s not about to lose the plot.

The other friend in this conversation, an old Apple hand, happily nodded along as our ex-Amazon compatriot told stories from his years in the Seattle trenches. When asked about the Apple money pump and why Wall Street didn’t seem to respect Apple’s huge profits, he started with an epigraph of his own:

The simplest encapsulation of Apple’s business model is the iPod.

To paraphrase: The iPod is the movie star, it brings the audience flocking to the theatre; iTunes is the supporting cast.

iTunes was initially perceived as a money-losing operation, but without it the iPod would have been a good-looking but not terribly useful piece of hardware. iTunes propelled iPod volumes and margins by providing an ecosystem that comprised two innovations: “music by the slice” (vs. albums) and a truly new micro-payment system (99 cents charged to a credit card).

That model is what powers the Apple money pump today. The company’s personal computers — smartphones, tablets, and laptops/desktops — are the movie stars. Everything else exists to make these lead products more useful and pleasant. Operating systems, applications, stores, Apple TV, the putative iWatch…they’re all part of the supporting cast.

Our Apple friend offered another thought: The iPod marked the beginning of the Post-PC era. By 2006 — a year before the introduction of the iPhone — iPod sales had exceeded Mac revenue.

Speaking of cash, Apple doesn’t need to play Amazon’s timing games. Product margins range from 20-25% for desktops and laptops (compared to HP’s 3-5%), to 65% or more for iPhones. With cash reserves reaching $147B at the end of September 2013, Apple has had to buy shares back and pay dividends to bleed off the excess.

Far from needing a “loan” from its suppliers, Apple heads in exactly the opposite direction. On page 37 of the company’s 2013 10-K (annual) filing, you’ll find a note referring to “third-party manufacturing commitments and component purchase commitments of $18.6 billion“. This is a serious cash outlay, an advance to suppliers to secure components and manufacturing capacity that works out to almost $60 for every person in the US…
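For the record, the per-capita arithmetic, assuming a 2013 US population of roughly 316 million:

```python
# Apple's manufacturing and component purchase commitments, per US resident.
commitments = 18.6e9                  # from the 2013 10-K, as quoted above
us_population = 316e6                 # assumed 2013 US population
print(f"${commitments / us_population:.0f} per US resident")  # -> $59
```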

Wall Street’s cautious regard for Apple seems ill-advised given Apple’s ability to generate cash in embarrassing amounts. As the graph below shows, after following a trajectory superficially similar to Amazon’s, Apple apparently “fell from grace” in 2012:

[Chart: AAPL share price and the 2012 “fall from grace”]
I can think of two explanations, the first one local, the other global.

During Fiscal 2012, which ended in September of that year, Apple’s Gross Margin reached an unprecedented high of 43.9%. By all standards, this was extremely unusual for a hardware company and, as it turned out, it was unsustainable. In 2013, Apple’s Gross Margin dropped by more than 6 percentage points (630 basis points in McKinsey-speak), an enormous amount. Wall Street felt the feast was over.

Also, Fiscal 2013 was seen as a drought year. There were no substantial new products beyond the iPhone 5 and the iPad mini announced in September and October 2012, and there was trouble getting the new iMacs into customers’ hands during the Holiday season.

More globally important is the feeling that Apple has become a “hits” business. iPhones now represent 53% of Apple’s revenue, and much more (70%?) of its profits. They sell well, everything looks rosy…until the next season, or the next round of competitive announcements.

This is what makes Wall Street nervous: To some, Apple now looks like a movie studio that’s too dependent on the popularity of its small stable of stars.

We hear that history will repeat itself, that the iPhone/iPad will lose the battle to Android, just as the Mac “lost” to Windows in the last century.

Our ex-Apple friend prefers an automotive analogy. Audi, Tim Cook’s preferred brand, owns a small portion of the luxury car market (about 7.5%), but it constantly posts increasing profits — and shows no sign of slacking off. Similarly, today’s $21B Mac business holds a mere 10% of the PC market, but Apple “uses” that small share to command 45% of market profits. The formula is no secret but, as with Amazon’s logistics and service, the payoff is in the implementation, how the chef combines the ingredients. It’s the “mere matter of implementation” that eluded Steve Ballmer’s comprehension when he called the MacBook an Intel laptop with an Apple logo slapped on it. Why wouldn’t the Mac recipe also work for smartphones and tablets?

JLG@mondaynote.com

 

New iWork: Another Missed Opportunity To Set Expectations

 

With the 5.0 iWork suite we revisit Apple’s propensity to make lofty claims that fall short of reality. The repetition of such easily avoidable mistakes is puzzling and leads us to question what causes Apple executives to squander the company’s well-deserved goodwill.

Once upon a time, our youngest child took it upon herself to sell our old Chevy Tahoe. She thought her father was a little too soft in his negotiations on the sales lot, too inclined to leave money on the table in his rush to end the suffering.

We arrive at the dealership. She hops out, introduces herself to the salesperson, and then this kid — not yet old enough to vote — begins her pitch. She starts out by making it clear that the car has its faults: a couple dents in the rear fender, a stubborn glove compartment door, a cup holder that’s missing a flange. Flaws disclosed, she then shows off the impeccable engine, the spotless interior, the good-as-new finish (in preparation, she’d had the truck detailed inside and out, including the engine compartment).

The dealer was charmed and genuinely complimentary. He says my daughter’s approach is the opposite of the usual posturing. The typical seller touts the car’s low mileage, the documented maintenance, the vows of unimpeachable driver manners. The seller tries to hide the tired tires and nicked rims, the white smoke that pours from the tail pipe, the “organic” aroma that emanates from the seat cushions — as if these flaws would go unnoticed by an experienced, skeptical professional.

‘Give the bad news first,’ said the gent. ‘Don’t let the buyer discover them, it puts you on the defensive. Start the conversation at the bottom and end with a flourish.’ (Music to this old salesman’s ears. My first jobs were in sales after an “unanticipated family event” threw me onto the streets 50 years ago. I’m still fond of the trade, happiest when well executed, sad when not).

The fellow should have a word or two with Apple execs. They did it again, they bragged about their refurbished iWork suite only to let customers discover that the actual product fails to meet expectations.

We’ll get into details in a moment, but a look into past events will help establish the context for what I believe to be a pattern, a cultural problem that starts at the top (and all problems of culture within a company begin at the executive level).

Readers might recall the 2008 MobileMe announcement, incautiously pitched as Exchange For The Rest of Us. When MobileMe crashed, the product team was harshly criticized by the same salesman, Steve Jobs, who touted the product in the first place. We’ll sidestep questions of the efficacy of publicly shaming a product team, and head to more important matters: What were Jobs and the rest of Apple execs doing before announcing MobileMe? Did they try the product? Did they ask real friends — meaning non-sycophantic ones — how they used it, for what, and how they really felt?

Skipping some trivial examples, we land on the Maps embarrassment. To be sure, it was well handled… after the fact. Punishment was meted out and an honest, constructive apology made. The expression of regret was a welcome departure from Apple’s usual, pugnacious stance. But the same questions linger: What did Apple execs know and when did they know it? Who actually tried Apple Maps before the launch? Were the execs who touted the service ignorant and therefore incompetent, or were they dishonest, knowingly misrepresenting its capabilities? Which is worse?

This is a pattern.

Perhaps Apple could benefit from my daughter’s approach: Temper the pitch by confessing the faults…

“Dear customers, as you know, we’re playing the long game. This isn’t a finished product, it’s a work in progress, and we’ll put your critical feedback to good use.”

Bad News First, Calibrate Expectations. One would think that (finally!) the Maps snafu would have seared this simple logic into the minds of the Apple execs.

But, no.

We now have the iWork missteps. Apple calls its new productivity suite “groundbreaking”. Eddy Cue, Apple’s head of Internet Software and Services, is ecstatic:

“This is the biggest day for apps in Apple’s history. These new versions deliver seamless experiences across devices that you can’t find anywhere else and are packed with great features…” 

Ahem… Neither in the written announcement nor during the live presentation will one find a word of caution about iWork’s many unpleasant “features”.

The idea, as best we can discern through the PR fog, is to make iOS and OS X versions of Pages, Numbers, and Keynote “more compatible” with each other (after Apple has told us, for more than two years, how compatible they already are).

To achieve this legitimate, long game goal, the iWork apps weren’t just patched up, they were re-written.

The logic of a fresh, clean start sounds compelling, but history isn’t always on the side of rewriting-from-scratch angels. A well-known, unfortunate example is what happened when Lotus tried a cross-platform rewrite of its historic Lotus 1-2-3 productivity suite. Quoting from a Wikipedia article:

“Lotus suffered technical setbacks in this period. Version 3 of Lotus 1-2-3, fully rewritten from its original macro assembler into the more portable C language, was delayed by more than a year as the totally new 1-2-3 had to be made portable across platforms and fully compatible with existing macro sets and file formats.”

The iWork rewrite fares no better. The result is a messy pile of missing features and outright bugs that elicited many irate comments, such as these observations by Lawrence Lessig, a prominent activist, Harvard Law professor, and angry Apple customer [emphasis and edits mine]:

“So this has been a week from Apple hell. Apple did a major upgrade of its suite of software — from the operating system through applications. Stupidly (really, inexcusably stupid), I upgraded immediately. Every Apple-related product I use has been crippled in important ways.

… in the ‘hybrid economy’ that the Internet is, there is an ethical obligation to treat users decently. ‘Decency’ of course is complex, and multi-faceted. But the single dimension I want to talk about here is this: They must learn to talk to us. In the face of the slew of either bugs or ‘features’ (because as you’ll see, it’s unclear in some cases whether Apple considers the change a problem at all), a decent company would at least acknowledge to the public the problems it identifies as problems, and indicate that they are working to fix it.”

Lessig’s articulate blog post, On the pathological way Apple deals with its customers (well worth your time), enumerates the long litany of iWork offenses.

[Screenshot: strange paste behavior in Pages 5.0]

[About that seemingly errant screenshot, above...keep reading.]

Shortly thereafter, Apple issued a support document restating the reasons for the changes:

“…applications were rewritten from the ground up to be fully 64-bit and to support a unified file format between OS X and iOS 7 versions” 

and promising fixes and further improvements:

“We plan to reintroduce some of these features in the next few releases and will continue to add brand new features on an ongoing basis.”

Which led our Law Professor, who had complained about the “pathologically constipated way in which Apple communicates with its customers”, to write another (shorter) post and thank the company for having at last “found its voice”…

Unfortunately, Lessig’s list of bugs is woefully short of the sum of iWork’s offenses. For example, in the old Pages 4.0 days, when I clicked on a link I was transported to the intended destination. In Pages 5.0, instead of the expected jump, I get this…

[See above.]

Well, I tried…CMD-CTRL-Shift-4, frame the shot, place the cursor, CMD-V… Pages 5.0 insists on pasting it smack in the middle of a previous paragraph [again, see above].

Pages has changed its click-on-a-link behavior; I can get used to that, but… it won’t let me paste at the cursor? That’s pretty bad. Could there be more?

There’s more. I save my work, restart the machine, and the Save function in Pages 5.0 acts up:

[Screenshot: Pages 5.0 autosave bug]

What app has changed my file? Another enigma. I’m not sharing with anyone, just saving my work in my Dropbox, something that has never caused trouble before.

Another unacceptable surprise: Try sending a Pages 5.0 file to a Gmail account. I just checked, it still doesn’t work. Why wasn’t this known in advance – and fixed by now?

I have to stop. I’ll leave comparing the even more crippled iCloud version of iWork to the genuinely functional Web version of Office 365 for another day and conclude.

First. Who knew and should have known about iWork’s bugs and undocumented idiosyncrasies? (I’ll add another: Beware the new autocorrect)

Second. Why brag instead of calmly making a case for the long game and telling loyal customers about the dents they will inevitably discover?

Last and most important, what does this new fiasco say about Apple’s management culture? The new iPhones, iPads and iOS 7 speak well of the company’s justly acclaimed attention to both strategy and implementation. Perhaps there were no cycles, no neurons, no love left for iWork. Perhaps a wise general puts the best troops on the most important battles. Then why not regroup, wait six months, and come up with an April 2014 announcement worthy of Apple’s best work?

JLG@mondaynote.com

———

This hasn’t been a good week using Apple products and services. I’ve had trouble loading my iTunes Music library on an iPad, with Mail and other Mavericks glitches, moving data and apps from one computer to another, a phantom Genius Bar appointment in another city and a stubborn refusal to change my Apple ID. At every turn, Apple support people, in person, on the phone or email, were unfailingly courteous and helpful. I refrained from mentioning iWork to these nice people.

 

What to do with $250m in digital journalism? (I)

 

Pierre Omidyar, eBay’s founder and now a philanthropist, has pledged $250m to a new investigative reporting venture. Starting a project of this magnitude from scratch isn’t an everyday occurrence, leading us to wonder what it could look like. (First of two articles)

For a digital journalism project, 250 million dollars (€185m) is a serious investment. So far, it’s unclear whether this is a one-time investment, merely initial funding (Omidyar’s share in eBay is approx. $8.5bn), or just yearly running costs. To put things in perspective, The New York Times’ 1,300-person newsroom costs around $200m per year, including $70m for international coverage alone, i.e. reporting abroad and maintaining 24 foreign bureaus manned by 50 reporters. But, by most measures, the scope of NYT operations is at the far end of the scale.

A more realistic example is the funding of the non-profit media organization ProPublica (see a previous Monday Note on the subject). According to its 2012 financial statement (PDF here), ProPublica has raised a little more than $10m from philanthropic organizations and spends less than that on a 30-person staff. No one disputes that, journalistically speaking, ProPublica is a remarkable publication; it faithfully follows its “Journalism in the Public Interest” mission statement, collecting two Pulitzer Prizes in so doing.

Great journalism can be done at relatively minimal cost, especially when focused on a narrow segment of the news spectrum. On the other hand, as the New York Times P&L shows, the scope and size of the output directly correlate with the money invested in its production – causing spending to skyrocket.

Since we know little of Pierre Omidyar’s intentions (an interview here in the NYT and a story outlining the project), I’ll spare Monday Note readers my usual back-of-the-envelope calculations and stick to a general outline of what a richly funded news venture could look like.

Staffing structure. Once again, ProPublica shows the way: a relatively small team of young staffers, coached by seasoned reporters and editors. For this, Omidyar has drawn the hottest name in the field, the lawyer-activist-Guardian blogger Glenn Greenwald, who played a prominent role in the Snowden leaks (more about him: his blog on The Guardian; a NYT Magazine profile of Greenwald’s pal Laura Poitras, another key Snowden helper).

[Photo: Glenn Greenwald, The Guardian]

Multi-layer hierarchy is the plague of legacy media. The org chart should be minimalist. A management team of five dedicated, experienced editors is sufficient to lead a 24/365 news structure. Add another layer for production tasks and that’s pretty much it. As for the headcount, it depends on the scope of the news coverage: my guess is that a newsroom of 100-150, including production staff (I’ll come back to that in a moment), can do a terrific job.

No Guild, no unions, no syndicats à la française, please. Behind their “fighting for our people” façade, they cynically protect their cushy prebends and accelerate the industry’s demise. As a result, the field is left open to pure players – content-recycling factories that keep their people in stables.

Beyond that, avoiding any kind of collective bargaining allows management to pay whatever is necessary to hire and retain talent, without resorting to fake titles or bogus hierarchy positions to justify those choices. In addition, above-market salaries should discourage ethically dubious external gigs. Lastly, a strict No-Kolkhoze governance must be enforced from the outset; collaboration and heated intellectual debate are fine as long as they don’t emasculate decisions, development, innovation – and speed.

A Journalism 2.0 Academy. I strongly believe in the training of staffers, journalists or not. Hiring motivated young lawyers, accountants, financial analysts, even scientists, and teaching them the trade of journalism is one of the best ways to raise the competency level in a newsroom. It means having a couple of in-house “teachers” who will compile and document the best internal and external practices, and dispense them on a permanent basis. This is what excellence requires.

A Technology Directorate. I’m deliberately borrowing jargon from the CIA or the FSB. A modern news organization should take inspiration from the intelligence community, with a small staff of top-level engineers, hackers, cryptographers, data miners, and semantic specialists. Together, they will collect data, protect communications for the staff and their sources, provide secured workstations, laptops and servers, and build a mirroring infrastructure as a precaution against governmental intrusion. This is complex and expensive: it means establishing encrypted links between countries, preferably on a dedicated network (take advantage of Google’s anger at the NSA to rent capacity), and putting servers in countries like Iceland — a libertarian country and also one of the most connected in the world. While writing this, I ran a couple of “ping” tests; it turned out that, from Europe, the response time from an Icelandic server is half as long as from the New York Times!
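For the technically inclined, here is a minimal sketch of that kind of latency test; the hostnames are placeholders, not the servers I actually pinged, and the parsing assumes the Unix ping summary line:

```python
import re
import subprocess

# Compare average round-trip times to two hosts using the system ping.
HOSTS = ["example.is", "www.nytimes.com"]    # hypothetical Icelandic host vs. NYT

def avg_rtt_ms(host: str) -> float:
    out = subprocess.run(["ping", "-c", "5", host],
                         capture_output=True, text=True, check=True).stdout
    # Summary line looks like: "rtt min/avg/max/mdev = 20.1/22.4/25.0/1.2 ms"
    return float(re.search(r"= [\d.]+/([\d.]+)/", out).group(1))

for host in HOSTS:
    print(f"{host}: {avg_rtt_ms(host):.1f} ms average round trip")
```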

Besides assisting the newsroom, the tech staff should build a secure, super-fast, easy-to-use Content Management System. Most likely, the best way will turn out to be a hack of WordPress – as Forbes, Quartz, AllThingsD, and plenty of others have done. Whatever the setup ends up being, it must be loaded with a powerful semantic engine, connected to scores of databases that will help enrich stories with metadata (see a previous Monday Note on the subject, The story as gateway to knowledge). By the same token, a v2.0 newsroom should have its own “aggrefilter”, its own Techmeme, that will monitor hundreds of websites, blogs and Twitter feeds and programmatically collect the most relevant stories. This could be a potent tool for a newsroom (we are building one at Les Echos that will primarily benefit our news team.)
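To make the “aggrefilter” idea concrete, here is a toy sketch; the feed URLs and keyword weights are placeholders, and a real system would add deduplication, source weighting and the semantic layer described above:

```python
import feedparser  # pip install feedparser

# A toy "aggrefilter": poll a list of feeds and surface the entries most
# relevant to the newsroom's beats, scored by weighted keywords.
FEEDS = ["https://example.org/feed.xml"]                 # hypothetical sources
KEYWORDS = {"surveillance": 3, "nsa": 3, "privacy": 2}   # beat keywords, weighted

def score(title: str) -> int:
    text = title.lower()
    return sum(weight for kw, weight in KEYWORDS.items() if kw in text)

stories = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        stories.append((score(entry.title), entry.title, entry.link))

for s, title, link in sorted(stories, reverse=True)[:10]:  # top stories first
    print(s, title, link)
```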

Predictive Analysis Tools and Signal-to-Noise Detection. In a more ambitious fashion, an ideal news machine should run analytics aimed at anticipating/predicting spasms in the news cycle. Pierre Omidyar and Glenn Greenwald should acquire or build a unit like the Swedish company Recorded Future (more in this story in Wired UK), which is used by large corporations and by the CIA. Perhaps more realistically, building tools to analyze and decipher the internet’s “noise” in real time, and being able to detect “low-level signals”, could be critical to effectively surfing the news wave.

That’s all for today. Next week, I’ll address two main points: Designing modern news products, and ideas on how to make (some) money with this enthralling venture.

frederic.filloux@mondaynote.com

Intel Is Under New Management – And It Shows

 

Intel rode the PC wave with Microsoft and built a seemingly insurmountable lead in the field of “conventional” (PC and laptop) microprocessors. But, after his predecessor missed the opportunity to supply the CPU chip for Apple’s iPhone, Intel’s new CEO must now find a way to gain relevance in the smartphone world.

In The Atlantic last May, Intel’s then-CEO Paul Otellini confessed to a mistake of historic proportions. Apple had given Intel the chance to be part of the smartphone era, to supply the processor for the first iPhone… and Otellini said no [emphasis and light editing mine]:

“The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do… At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn’t see it. It wasn’t one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought.”
“…while we like to speak with data around here, so many times in my career I’ve ended up making decisions with my gut, and I should have followed my gut. [...] My gut told me to say yes.”

That Otellini found the inner calm to publicly admit his mistake — in an article that would be published on his last day as CEO, no less — is a testament to his character. More important, Otellini’s admission unburdened his successor, Brian Krzanich, freeing him to steer the company in a new direction.

And Krzanich is doing just that.

First: House cleaning. Back in March 2012, the Wall Street Journal heralded Intel as The New Cable Guy. The idea was to combine an Intel-powered box with content in order to serve up a quality experience not found elsewhere (read Apple, Netflix, Roku, Microsoft…). To head the project, which was eventually dubbed OnCue, Intel hired Erik Huggers, a senior industry executive and former head of BBC Online.

At the All Things D conference in February, Huggers announced that the TV service would be available later this year. The Intel TV chief revealed no details about how the OnCue service would differ from existing competitors, or how much the thing would cost… but he assured us that the content would be impressive (“We are working with the entire industry”), and the device’s capabilities would be comprehensive (“This is not a cherry-pick… this is literally everything”).

Intel seemed to be serious. We found out that more than 1,000 Intel employees in Oregon had been engaged in testing the product/service.

Then Krzanich stepped in, and applied a dose of reality:

“Intel continues to look at the business model… We are not experts in the content industry and we’re being careful.” [AllThingsD: New Intel CEO Says Intel TV Sounds Great in Theory. But …]

Indeed, to those of us who have followed the uneasy dance between Apple and content providers since the first Apple TV shipped in 2007, the Intel project sounded bold, to say the least.

In late September, the project was put on hold and, last week, the news came that OnCue had been cancelled and allegedly offered to Verizon, whose V Cast media distribution feats come to mind…

Even before OnCue’s cancellation was made official, the well-traveled Erik Huggers appeared to show an interest in the Hulu CEO job. (If Mr Huggers happens to be reading this: I’d be more than happy to relieve you of the PowerPoints you used to pitch the project to Intel’s top brass, not to mention the updates on the tortuous negotiations for content, and the reports from the user testing in Oregon. These slides must be fascinating specimens of corpospeak logic.)

Krzanich quickly moved from doubt to certainty. He saw that OnCue would neither make money by itself, nor stimulate sales or margins for its main act, x86 processors. OnCue would never be an Apple TV “black puck”, a supporting character whose only mission is to make personal computers of all sizes (smartphones, tablets and conventional PCs) more useful and pleasant.

So he put an end to the impossible-to-justify adventure.

That was easy.

Tackling Intel’s failure to gain a significant role in the (no longer) new world of smartphones is a much more complicated matter.

With its x86 processors, Intel worked itself into a more-than-comfortable position as part of the Wintel ecosystem. The dominant position achieved by the Microsoft-Intel duopoly over two decades yielded correspondingly high margins for both.

But smartphones changed the game. ARM processors proved themselves better than x86 at the two requirements integral to personal, portable devices: low power consumption and customization. The ARM architecture didn’t have to wait for the iPhone and Android handsets to dominate the cell phone business. Just as Windows licensing spawned a large number of PC makers, ARM licensing contributed to the creation of a wide range of processor design and manufacturing companies. The ARM site claims 80 licensees for its newer Cortex family and more than 500 for its older Classic ARM processors. No monopoly means lower margins.

Intel saw the unattractive margins offered by ARM processors and didn’t want to commit the billions of dollars required by a fab (a chip manufacturing plant) for a product that would yield profits that were well below Wall Street expectations.

The prospect of bargain basement margins undoubtedly figured in Otellini’s decision to say no to the iPhone. In 2006, no one could have predicted that the lost margin could be made up in volume, that there would be a billion smartphone sales in 2014. (I’m basing the 1B industry-wide number on Horace Dediu’s estimate of 250 million iOS devices for 2014, which puts iOS at roughly a quarter of total volume.)

Even if the Santa Clara company had had the foresight to accept lower margins in order to ensure its future in the smartphone market, there would still have been the problem of customization.

Intel knows how to design and manufacture processors that are used “as is” by PC makers. No customization, no problems.

This isn’t how the ARM world works. Licensees design processors that are customized for their specific device, and they send the design to a manufacturer. Were Intel to enter this world, it would no longer design processors, just manufacture them, an activity with less potential for profit.

This explains why Intel, which had an ARM license and made XScale processors, sold that business to Marvell in 2006 – a fateful date, looking back on the Apple discussions.

But is Intel’s new CEO rethinking the “x86 and only x86” strategy? Last week, a specialty semiconductor company called Altera announced that Intel would fabricate some of its chips containing a 64-bit ARM processor. The company’s business consists of offering faster development times through “programmable logic” circuits. Instead of a “hard circuit” that must be designed, manufactured, tested, debugged, modified and sent back to the manufacturing plant in lengthy and costly cycles, you buy a “soft circuit” from Altera and similar companies (Xilinx comes to mind). This more expensive device can be reprogrammed on the spot to assume a different function, or to correct the logic of the previous iteration. Pay more and get functioning hardware sooner, without slow and costly turns through a manufacturing process.

With this in mind, what Intel will someday manufacture for Altera isn’t the standalone 64-bit ARM processor that excited some observers (“Intel Makes 14nm ARM for Altera”). The Stratix 10 circuits Altera contracts out to Intel’s plants are complicated and expensive ($500 and up) FPGA (Field Programmable Gate Array) devices in which the embedded ARM processor plays a supporting, not central, role. This isn’t the $20-or-less arena in which Intel has so far declined to compete.

Manufacturing chips for Altera might simply be work-for-hire, a quick buck for Intel, but I doubt it. Altera’s yearly revenue is just shy of $2B; Intel is a $50B company. The newly announced device, just one in Altera’s product lines, will not “move the needle” for Intel — not in 2014 (the ship date isn’t specified), or ever.

Instead, I take this as a signal, a rehearsal.  250M ARM SoCs at $20 each would yield $5B in revenue, 10% of Intel’s current total…

This might be what Krzanich had in mind when he inked the “small” manufacturing agreement with Altera; perhaps he was weighing the smaller margins of ARM processors against the risk of slowing PC sales.

Graciously freed from the past by his predecessor, Intel’s new CEO seems bound to take the plunge and use the company’s superb manufacturing technology to finally

make ARM processors.

JLG@mondaynote.com

 

The Age of the Platform

 

Before deciding what should come “first” in digital, publishers must figure out the right production workflow. Each and every player must plot its own path away from the now aging notion of publication and toward the broader platform model.

Last week, I spent a few days in Berlin at the European INMA conference. Among many interesting moments, there was our visit to the Axel Springer group, the number one print publisher in Germany, which also operates scores of publications in 44 countries. In 2012, Springer had revenue of €3.3bn and an EBITDA of €628m; 40% of its revenue comes from digital, thanks to 160 different online properties and 120 applications. Attaining this level required an aggressive growth strategy: since 2006, Springer has launched or acquired new digital activities at the stunning rate of one every two weeks!

Like most modern news outlets, Springer is obsessed with having everyone in the company work without distinction between digital and print. Its latest initiative involves the definitive transformation of the venerable daily Die Welt into a multimedia news factory. To achieve this, the company is betting on the radical architecture of its brand new newsroom. Of course, Die Welt is not the first to bet on the physical setting of the workplace to accelerate change. Among others, the UK’s Telegraph did the same several years ago (it didn’t go smoothly at first but, in the end, the effort paid off.)

Here is the floor plan of Die Welt’s newsroom, which will enter into operation within a couple of months (I reconstructed it from a picture and briefing notes):

die_welt_newsrm_plan

The open space resembles a sound-proofed cathedral on the ground floor of the Axel Springer building in the center of Berlin. It will operate from 5am to midnight. The star shape reflects the diversity of the news products and their time imperatives: the closer a workstation is to the center (where on-duty management sits), the faster the expected turnaround. Mobile staffers will sit close to the top editors, while the people in charge of building pages for the daily will dwell at the outer edges. This newsroom is mostly a production center; it actually accommodates only half of Die Welt’s 300+ editorial staff, as reporters and some staff writers will be located in a separate room. Note how all individual offices are gone, while the periphery is filled with meeting rooms of various sizes and shapes that staffers use as needed.

Management gurus often say a radical alteration of physical settings is a key instrument of change. I couldn’t agree more. Interestingly enough, a firm like Innovation Media Consulting, which I’ve known since the Nineties as mostly an art direction company, now works with architects and workflow specialists to induce changes in the way newsrooms operate.

But a super-modern floor plan is only part of the equation. In last week’s Monday Note, I addressed the need to make the story the kernel of a cluster of high-value products. Both are merely components of a much deeper change: the creation of a true News Platform.

Anglo-Saxon newsrooms enjoy several advantages over their Southern European counterparts, for instance. From the beginning, their journalism has been built on a clear separation between writers (or reporters) on one side and editors on the other; that is not the custom in a country like France, where most interns see themselves as potential heirs to Joseph Kessel. More seriously, the principle of heavy editing is much less accepted here than in the US, UK or Germany, where the process results in better structured articles and more powerful storytelling for long-form reporting. In addition, in those countries, newsrooms with top editors entirely dedicated to their role as managers are better equipped to address the needs of morphing news organizations. For the most part, these factors explain why, in the Anglo-Saxon world, the News Platform transformation is way ahead of anywhere else.

Axel Springer’s management concedes that this radical news flow structure is the result of a process that started years ago; that’s why it has been smoothly accepted by the staff. Everyone now sees it as the indispensable platform to produce across all the major vectors now used by readers – mobile, tablets, web and print – with greater efficiency and consistent quality.

frederic.filloux@mondaynote.com

Security Shouldn’t Trump Privacy – But I’m Afraid It Will

 

The NSA and security agencies from other countries are shooting for total surveillance, for complete protection against terrorism and other crimes. This creates the potential for too much knowledge one day falling into the wrong hands.

An NSA contractor, Edward Snowden, takes it upon himself to gather a mountain of secret internal documents that describe our surveillance methods and targets, and shares them with journalist Glenn Greenwald. Since May of this year, Greenwald has provided us with a trickle of Snowden’s revelations… and our elected officials, both here and abroad, treat us to their indignation.

What have we learned? We Spy On Everyone.

We spy on enemies known or suspected. We spy on friends, love interests, heads of state, and ourselves. We spy in a dizzying number of ways, both ingenious and disingenuous.

(Before I continue, a word on the word “we”. I don’t believe it’s honest or emotionally healthy to say “The government spies”. Perhaps we should have been paying more attention, or maybe we should have prodded our solons to do the jobs we elected them for… but let’s not distance ourselves from our national culpability.)

You can read Greenwald’s truly epoch-making series On Security and Liberty in The Guardian and pick your own approbations or invectives. You may experience an uneasy sense of wonder when contemplating the depth and breadth of our methods, from cryptographic and social engineering exploits (doubly the right word), to scooping up metadata and address books and using them to construct a security-oriented social graph.

We manipulate technology and take advantage of human foibles; we twist the law and sometimes break it, aided by a secret court without opposing counsel; we outsource our spying by asking our friends to suck petabytes of data from submarine fiber cables, data that’s immediately combed for keywords and then stored in case we need to “walk back the cat”.

NSA-Merkel-Phone

Sunday’s home page of the German site Die Welt

The reason for this panopticon is simple: Terrorists, drugs, and “dirty” money can slip through the tiniest crack in the wall. We can’t let a single communication evade us. We need to know everything. No job too small, no surveillance too broad.

As history shows, absolute anything leads to terrible consequences. In a New York Review of Books article, James Bamford, the author of noted books on the NSA, quotes Senator Frank Church who, way back in 1975, was already worried about the dangers of absolute surveillance [emphasis mine]:

“That capability at any time could be turned around on the American people and no American would have any privacy left, such [is] the capability to monitor everything: telephone conversations, telegrams, it doesn’t matter. There would be no place to hide. If this government ever became a tyranny, if a dictator ever took charge in this country, the technological capacity that the intelligence community has given the government could enable it to impose total tyranny, and there would be no way to fight back, because the most careful effort to combine together in resistance to the government, no matter how privately it was done, is within the reach of the government to know. Such is the capability of this technology…. I don’t want to see this country ever go across the bridge. I know the capacity that is there to make tyranny total in America, and we must see to it that this agency and all agencies that possess this technology operate within the law and under proper supervision, so that we never cross over that abyss. That is the abyss from which there is no return.”

From everything we’ve learned in recent months, we’ve fallen into the abyss.

We’ve given absolute knowledge to a group of people who want to keep the knowledge to themselves, who seem to think they know best for reasons they can’t (or simply won’t) divulge, and who have deemed themselves above the law. General Keith Alexander, the head of the NSA, contends that “the courts and the policy-makers” should stop the media from exposing our spying activities. (As Mr. Greenwald witheringly observes in the linked-to article, “Maybe [someone] can tell The General about this thing called ‘the first amendment’.”)

Is the situation hopeless? Are we left with nothing but to pray that we don’t elect bad guys who would use surveillance tools to hurt us?

I’m afraid so.

Some believe that technology will solve the problem, that we’ll find ways to hide our communications. “We have the solution today!” they say: We already have unbreakable cryptography, even without having to wait for quantum improvements. We can hide behind mathematical asymmetry: Computers can easily multiply very large numbers to create a key that encodes a message, but it’s astronomically difficult to reverse the operation.
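
To make the asymmetry concrete, here is a minimal Python sketch; the primes are tiny and the factoring is naive trial division, purely for illustration (real keys use primes hundreds of digits long, against which every known classical algorithm is hopeless):

```python
# The asymmetry behind public-key cryptography, in miniature:
# multiplying two primes is instant; recovering them from their
# product is laborious, and intractable at real key sizes.
import time

p, q = 999983, 1000003  # two small primes; real keys use ~300-digit primes

start = time.perf_counter()
n = p * q               # the "easy" direction: one multiplication
easy = time.perf_counter() - start

def factor(n):
    """Naive trial division, the 'hard' direction (n is odd, so skip evens)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 2
    return None

start = time.perf_counter()
found = factor(n)
hard = time.perf_counter() - start

print(f"multiply: {easy:.7f}s  factor: {hard:.4f}s  factors: {found}")
# Even at this toy scale, factoring is orders of magnitude slower than
# multiplying; at 600+ digit products it would outlast the universe.
```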

Is it because of this astronomic difficulty — but not impossibility — that the NSA is “the largest employer of mathematicians in the country”? And is this why “civilian” mathematicians worry about the ethics of those who are working for the Puzzle Palace?

It might not matter. In a total surveillance society, privacy protection via unbreakable cryptography won’t save you from scrutiny or accusations of suspicious secrecy. Your unreadable communication will be detected. In the name of State Security, the authorities will knock on your door and demand the key.

Even the absence of communication is suspect. Such mutism could be a symptom of covert activities. (Remember that Bin Laden’s compound in Abbottabad was thoroughly unwired: No phones, no internet connection.)

My view is that we need to take another look at what we’re pursuing. Pining for absolute security is delusional, and we know it. We risk our lives every time we step into our cars — or even just walk down the street — but we insist on the freedom to move around. We’re willing to accept a slight infringement on our liberties as we obey the rules of the road, and we trust others will do the same. We’re not troubled by the probability of ending up mangled while driving to work, even though the numbers are hardly unknown (and we’re more than happy to let insurance companies make enormous profits by calculating the odds).

Regarding surveillance, we could search for a similar risk/reward balance. We could determine the “amount of terror” we’re willing to accept and then happily surrender just enough of our privacy to ensure our safety. We could accept a well-defined level of surveillance if we thought it were for a good cause (as in keeping us alive).

Unfortunately, this pleasant-sounding theory doesn’t translate into actual numbers, on either side of the equation. We have actuarial tables for health and automotive matters, but none for terrorism; we have no way of evaluating the odds of, say, a repeat of the 9/11 terrorist attack. And how do you dole out measures of privacy? Even if we could calculate the risk and guarantee a minimum of privacy, imagine that you’re the elected official who has to deliver the message:

In return for guaranteed private communication with members of your immediate family (only), we’ll accept an X% risk of a terrorist attack resulting in Y deaths and Z wounded in the next T months.

In the absence of reliable numbers and courageous government executives, we’re left with an all-or-nothing fortress mentality.

Watching the surveillance exposé unfold, I’m reminded of authoritarian regimes that have come and gone (and, in some cases, come back). I can’t help but think that we’ll coat ourselves in the lubricant of social intercourse: hypocrisy. We’ll think one thing, say another, and pretend to ignore that we’re caught in a bad bargain.

JLG@mondaynote.com