Comcast and Us

 

Comcast tells us how much better our lives will be after they acquire Time Warner. Great, thanks! Perhaps this is an opportunity to look at other ways that we can “acquire” Cable TV and Internet access.

Comcast CEO Brian Roberts thinks we’re powerless idiots. This is what his company’s website says about the planned Time Warner acquisition:

“Transaction Creates Multiple Pro-Consumer and Pro-Competitive Benefits…”

Don’t read the full legal verbiage that purports to explain the maneuver. Your time will be better spent reading Counternotion’s pointed summary in Obfuscation by disclosure: a lawyerly design pattern:

(tl;dr: According to Comcast, the merger is “pro-sumer” if you “get past some of the hysteria,” it’s “approvable” by the regulators and won’t “reduce consumer choice at all”. Will it raise prices? “not promising that they will go down or even that they will increase less rapidly.” Given the historical record of the industry, it’s Comedy Central material.)

Let’s not loiter around Comcast’s lobbying operations, either — the $18.8M spent in 2013, the pictures of Mr. Roberts golfing with our President, the well-oiled revolving door between the FCC and the businesses they regulate. Feelings of powerlessness and anger may ensue, as trenchantly expressed in this lament from a former FCC Commissioner.

Instead, let’s use our agitation as an opportunity to rethink what we really want from Cable carriers. The wish list is long: TV à la carte instead of today’s stupid bundles, real cable competition vs. de facto local monopolies, metered Internet access in exchange for neutrality and lower prices for lighter usage, decent set-top boxes, 21st century cable modems, and, of course, lower prices.

These are all valid desires, but if there were just one thing that we could change about the carrier business, what would it be? What would really make a big, meaningful difference to our daily use of TV and the Internet?

Do you remember the Carterfone Decision? For a century (telephone service started in the US in 1877), AT&T reigned supreme in telecommunications networking. (I should say the former AT&T, not today’s company rebuilt from old body parts.) The company owned everything along its path, all the way down to your telephone handset — only Ma Bell’s could be used.

Then, in the late fifties, a company called Carterfone began to sell two-way radios that could be hooked up to a telephone. The device was invented by a Texan named Thomas Carter as a clumsy but clever way to allow oil field owners and managers sitting in their offices in Dallas to reach their workers out at the pumps.

AT&T was not amused.

“[AT&T] advised their subscribers that the Carterfone, when used in conjunction with the subscriber’s telephone, is a prohibited interconnecting device, the use of which would subject the user to the penalties provided in the tariff…”

Carterfone brought an antitrust suit against AT&T… and won. With its decision in favor of Thomas Carter’s company, the Federal Communications Commission got us to a new era where any device meeting the appropriate technical standards could connect to the phone network.

“…we hold, as did the examiner, that application of the tariff to bar the Carterfone in the future would be unreasonable and unduly discriminatory.”

The regulator — an impartial representative, in an ideal world — decides what can connect to the network. It’s not a decision that’s left to the phone company.

Back in the 21st century, we need a Carterfone Decision for cable boxes and modems. We need a set of rules that would allow Microsoft, Google, Roku, Samsung, Amazon, Apple — and companies that are yet to be founded — to provide true alternatives to Comcast’s set-top boxes.

Today, you have a cable modem that’s so dumb it forces you to restart everything in a particular sequence after a power outage. You have a WiFi base station stashed among the wires. Your set-top box looks like it was made in the former Soviet Union (a fortuitous product introduction days before the merger announcement doesn’t improve things much). You have to find your TV’s remote in order to switch between broadcast TV, your game console, and your Roku/AppleTV/Chromecast… and you have to reach into your basket of remotes just to change channels.

Imagine what would happen if a real tech company were allowed to compete on equal terms with the cable providers.

Microsoft, for example, could offer an integrated Xbox that would provide Internet access, TV channels with a guide designed by Microsoft, WiFi, an optional telephone, games of course, and other apps as desired. One box, three connectors: power, coax from the street, and HDMI to the TV set. There would be dancing in the streets.

But, you’ll object, what about the technical challenges? Cable systems are antiquated and poorly standardized. The cables themselves carry all sorts of noisy signals. What tech giant would want to deal with this mess?

To which one can reply: Look at the smartphone. It’s the most complicated consumer device we’ve ever known. It contains radios (Wifi, Bluetooth, multi-band cellular), accelerometers/gyroscopes, displays, loudspeakers, cameras, batteries… And yet, smartphones are made in huge quantities and function across a wide range of network standards. There’s no dearth of engineering talent (and money) to overcome the challenges, especially when they’re tackled outside of the cable companies and their cost-before-everything cultures.

Skeptics are more likely to be correct about the regulatory environment or, to be more precise, regulatory capture, a phrase that…captures the way regulators now work for the industries they were supposed to control. Can we imagine the FCC telling Comcast: “Go ahead and buy Time Warner…just one little condition, make sure any and all of your connection protocols and services APIs are open to any and all that pass the technical tests listed in Appendix FU at the end of this ruling.”

That’s not going to happen. We must prepare ourselves for a sorry display of bad faith and financial muscle. Who knows, in the end, Comcast might give up, as AT&T did after telling us how pro-consumer the merger with T-Mobile would be.

JLG@mondaynote.com

@gassee

Building a business news aggrefilter

 

This February 10, Les Echos launches its business news aggrefilter. For the French business media group, this is a way to gain critical working knowledge of the semantic web. Here is how we did it. And why.

The site is called Les Echos 360 and is separate from our flagship site LesEchos.fr, the digital version of the French business daily Les Echos. As the newly coined word aggrefilter indicates, it is an aggregation and filtering system. It is to be the kernel from which many digital products and extensions we have in mind will spring.

My idea to build an aggrefilter goes back to… 2007. That year, in San Francisco, I met Dan Farber, at the time editor-in-chief of CNet (now at CBS Interactive, his blog here) – and the actual father of the aggrefilter term. Dan told me: ‘You should have a look at Techmeme. It’s an “aggrefilter” that collects technology stories and ranks them based on their importance to the news cycle.’ I briefly explored the idea of building such an aggrefilter, but found it too hard to do from scratch; off-the-shelf aggrefilter software didn’t exist yet. The task required someone like Techmeme founder Gabe Rivera – who holds a PhD in computer science. I shelved the idea for a while.

[Screenshot: Les Echos 360]

A year ago, as the head of digital at Les Echos, I reopened the case and pitched the idea to a couple of French computer scientists specializing in text-mining — a field that had vastly improved since I first looked at it. We decided to give the idea a shot. Why?

I believe a great media brand bearing a large set of positive attributes (reliability, scope, depth of coverage) needs to generate an editorial footprint that goes far beyond its own production. It’s a matter of critical mass. In the case of Les Echos, we need to be the very core of business information, both for the general public and for corporations. Readers trust the content we produce, therefore they should trust the reading recommendations we make through our aggregation of relevant web sites. This isn’t an obvious move for journalists who, understandably, aren’t necessarily keen to send traffic to third-party web sites. (Interestingly enough, someone at the New York Times told me that a heated debate flared up within the newsroom a few years ago: To what extent should NYT.com direct readers to its competitors? Apparently, market studies settled the issue by showing that readers of the NYT online actually tended to like it all the more for being a reliable recommender.)

In the business field, my original idea — unlike Google News, which crawls an unlimited trove of sources — was to extract good business stories from both algorithmically and manually selected sources. More importantly, the idea was to bring to the surface, to effectively curate, specialized sources — niche web sites and blogs — usually lost in the noise. Near-real-time information also seemed essential, hence the need for an automated, Techmeme-like gathering process. (Techmeme is now supplemented by Mediagazer, one of my favorite reads.)

Where do we go from here?

Initially, we turned to the newsroom, asking beat reporters for a list of reliable sources they regularly monitored. The idea was to build a qualified corpus based on suggestions from our in-house specialists. Techmeme and Mediagazer call it their “leaderboard” (see theirs for tech and media). Perhaps we didn’t have the right pitch, or we were misunderstood, but all we got was a lukewarm reception. Our partner, the French startup Syllabs, came up with a different solution, based on Twitter analysis.

We used our reporters’ 72 most active Twitter accounts to extract the URLs embedded in their tweets. This first pass yielded about 5,000 URLs, but most turned out to be useless because reporters mostly linked their tweets to their own or their colleagues’ newsroom stories. Then, Syllabs engineers had another idea: they data-mined tweets from the people followed by our staff. This yielded 872,000 URLs. After that, another filtering pass identified the true curators, the people who find original sources around the web. Retweets were also counted, as they indicate a vote of relevance and confidence. After further statistical analysis of tweet components, the 872,000 URLs were boiled down to fewer than 400 original sources that became the basis of Les Echos 360’s Leaderboard (we are now down to 160 sources).
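
To make the process concrete, here is a minimal Python sketch of that kind of source-selection pass. The thresholds, weights and field names are purely illustrative assumptions, not Syllabs’ actual criteria:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical thresholds; the actual selection criteria are not public.
MIN_MENTIONS = 25               # a domain must be cited this often to qualify
RETWEET_WEIGHT = 2              # a retweet counts as an extra vote of confidence
SELF_DOMAINS = {"lesechos.fr"}  # links back to our own newsroom are ignored

def score_sources(tweets):
    """tweets: iterable of dicts like {"urls": [...], "is_retweet": bool}."""
    votes = Counter()
    for tweet in tweets:
        weight = RETWEET_WEIGHT if tweet["is_retweet"] else 1
        for url in tweet["urls"]:
            domain = urlparse(url).netloc.lower().removeprefix("www.")
            if domain and domain not in SELF_DOMAINS:
                votes[domain] += weight
    # Keep only domains cited often enough to look like genuine curation
    return [d for d, n in votes.most_common() if n >= MIN_MENTIONS]
```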

Building a corpus of sources is one thing, but ranking articles with respect to their weight in the news cycle is quite another. Every hour, 1,500 to 2,000 news pieces go through a filtering process that defines their semantic footprint (with its associated taxonomy). Then, they are aggregated into “clusters”. Eventually, clusters are ranked according to a statistical analysis of their “signal” in the general news-flow. Each clustering run (collection + ranking) produces 400-500 clusters, a process that more than occasionally overloads our computers.
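
Schematically, that hourly pass might look like the sketch below, which substitutes off-the-shelf TF-IDF and cosine similarity for Les Echos’ proprietary semantic footprint; the threshold and the “signal” proxy are illustrative assumptions:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SIM_THRESHOLD = 0.35  # illustrative; tuning this is part of the "mixing console"

def cluster_and_rank(articles):
    """articles: list of dicts with 'text' and 'source_quality' (0..1)."""
    texts = [a["text"] for a in articles]
    tfidf = TfidfVectorizer(max_features=5000).fit_transform(texts)
    sim = cosine_similarity(tfidf)

    clusters = []          # each cluster is a list of article indices
    assigned = set()
    for i in range(len(articles)):
        if i in assigned:
            continue
        members = [i] + [j for j in range(i + 1, len(articles))
                         if j not in assigned and sim[i, j] >= SIM_THRESHOLD]
        assigned.update(members)
        clusters.append(members)

    # "Signal" here is a naive proxy: cluster size weighted by source quality.
    def signal(members):
        return len(members) * sum(articles[j]["source_quality"] for j in members)

    return sorted(clusters, key=signal, reverse=True)
```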

Despite continuous revisions to its 19,000 lines of code, the system is far from perfect. As expected. In fact, it needs two sets of tunings. The first is maintaining a wide enough spectrum of sources to properly reflect the diversity of topics we want to cover — with a caveat: profusion doesn’t necessarily create quality, and crawling the long tail of potentially good sources continues to prove difficult. The second is finding the right balance between all the parameters: update frequency, the “quality index” of sources – and many other criteria I won’t disclose here. This I compare to the mixing console inside a recording studio. Finding the right sound is tricky.
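
To make the mixing-console image concrete, picture the ranking parameters as a handful of sliders; the names and values below are invented for illustration and are not the undisclosed criteria:

```python
# Purely illustrative "sliders"; the real criteria are not disclosed.
WEIGHTS = {
    "cluster_size": 1.0,      # how many sources picked up the story
    "source_quality": 0.8,    # the "quality index" of the citing sites
    "freshness": 0.6,         # decay applied to older items
    "update_frequency": 0.4,  # how often the source publishes
}

def mix(features):
    """features: dict with the same keys as WEIGHTS, values normalized to 0..1."""
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
```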

It took years for Techmeme to refine its algorithm. It might take a while for Les Echos 360 — that’s why we are launching the site in beta (a notion not widely shared in the media sector). No surprise: a continuous news-flow is an extremely difficult moving target. As for Techmeme and Mediagazer, despite the refinements in Gabe Rivera’s work, their algorithms are “rectified” by more than a dozen editors (who even rewrite headlines to make them more explicit and punchier). A much lighter crew will monitor Les Echos 360 through a back-office that will allow us to change cluster rankings and to eliminate parasitic items.

For Les Echos’ digital division, this aggrefilter is a proof of concept, a way to learn a set of technologies we consider essential for the company’s future. The digital news business will be increasingly driven by semantic processes; these will allow publishers to extract much more value from news items, whether produced in-house or aggregated and filtered. That is especially true for a business news provider: the more specialized the corpus, the higher the need for advanced processing. Fortunately, it is much easier to fine-tune an aggrefilter for a specific field (logistics, clean-tech, M&A, legal affairs…) than for the wider and muddier streams of general news. This new site is just the tip of the iceberg. We built this engine to address a wide array of vertical, business-to-business needs. It aims to be a source of tangible revenue.

frederic.filloux@mondaynote.com

@filloux 

 

Nadella’s Job One

 

Microsoft has a new CEO – a safe choice, steeped in the old culture, with the Old Guard still on the Board of Directors. This might prevent Nadella from making one tough choice, one vital break with the past.

Once upon a distant time, the new CFO of a colorful personal computer company walks into his first executive staff meeting and proudly shares his thoughts:

“I’ve taken the past few weeks to study the business, and I’d now like to present my top thirty-five priorities…”

This isn’t a fairy tale; I was in the room. I didn’t speak Californian as fluently as I do now, so rather than encourage the fellow with mellifluous platitudes — ‘Interesting’ or, even better, ‘Fascinating, great vision!’ — I spoke my mind, possibly much too clearly:

“This is terrible, disorganized thinking. Claiming to have thirty-five priorities is, in fact, a damning admission: You have none, you don’t even know where to start. Give us your ONE priority and show us how everything else serves that goal…”

The CFO, a sharp, competent businessman, didn’t lose his cool and, after an awkward silence, stepped through his list. Afterwards, with calm poise, he graciously accepted my apologies for having been so abrupt…

Still, you can’t have a litany of priorities.

Turning to Microsoft, will the company’s new CEO, Satya Nadella, focus the company on a true priority, one and only one goal, one absolutely must-win battle? For Nadella, what is Microsoft’s Nothing Else Matters If We Fail?

In his first public pronouncement, the new Eagle of Redmond didn’t do himself any favors by uttering bombastic (and false) platitudes (which were broadly retweeted and ridiculed):

“We are the only ones who can harness the power of software and deliver it through devices and services that truly empower every individual and every organization. We are the only company with history and continued focus in building platforms and ecosystems that create broad opportunity.”

One hesitates. Either Nadella knows this is BS but thinks we’re stupid enough to buy into such pablum. Or he actually believes it and is therefore dangerous for his shareholders and coworkers. Let’s hope it’s the former, that Nadella, steeped in Microsoft’s culture, is simply hewing to his predecessor’s chest-pounding manner. (But let’s also keep in mind the ominous dictum: Culture Eats Strategy For Breakfast.)

Assuming Nadella knows the difference between what he must say and what he must do, what will his true priority be? What battle will he pick that, if lost, will condemn Microsoft to a slow, albeit comfortable, slide into the tribe of has-beens?

It can’t be simply tending the crops. Enterprise software, Windows and Office licenses might not grow as fast as they used to, but they’re not immediately threatened. The Online Services Division has problems but they can be dealt with later — it continues to bleed money but the losses are tolerable (about $1B according to the Annual Report). The Xbox One needs no immediate attention.

What really threatens Microsoft’s future is the ebullient, sui generis world of mobile devices, services, and applications. Here, Microsoft’s culture, its habits of the heart and mind, has led the company to a costly mistake.

Microsoft has succeeded, in the past, by straddling the old and the new: The company is masterful at introducing new features without breaking older software. In Microsoft’s unspoken, subconscious culture, the new can only be defined as an extension of the existing, so when it finally decided it needed a tablet (another one, after the Tablet PC failure), the Redmond company set out to build a better device that would also function as a laptop. The best of both worlds.

We know what happened. Users shunned Microsoft’s neither-nor Windows 8 and Surface hybrids. HP has backed away from Windows 8 and now touts its PCs running Windows 7 “Back By Popular Demand” — this would never have happened when Microsoft lorded over its licensees. And now we hear that the upcoming Windows 8.1 update will boot directly into the conventional Windows 7-like desktop as opposed to the unloved Modern (née Metro) tiles.

Microsoft faces a choice. It can replace the smashed bumper on its truck with a stronger one, drop a new engine into the bay, and take another run at the tablet wall. Or it can change direction. The former — continuing to attempt to bridge the gap between tablets and laptops — will do further damage to the company’s credibility, not to mention its books. The latter requires a radical but simple change: Make an honest tablet using a version of Windows Phone that’s optimized for the things that tablets do well. Leave laptops out of it.

That is a priority, a single, easily stated goal that can be understood by everyone — employees and shareholders, bloggers and customers. To paraphrase a Valley wag, it’s a cri de guerre that’s so simple you can remember it even if you’re tired, drunk, and your spouse has thrown you out in the rain at 3 A.M. in your jockey briefs.

This is an opportunity for the new CEO to make his mark, to show vision as opposed to mere caretaking.

But will he seize it?

Nadella should know the company by now. He’s been with Microsoft for over twenty years, during which time he’s proven himself to be a supremely technical executive. The company is remarkably prosperous — $78B in revenue in 2013; $22B profit; $77B in cash. This prosperity bought the Board some time when deciding on a new CEO, and should give Nadella a cushion if he decides to redirect the company.

Of course, there’s the Old Guard to contend with. Bill Gates has ceded the Chairman role to John Thompson, but he’ll stay on as a “technical advisor” to Nadella, and Ballmer hasn’t budged — he remains on the Board (for the time being). This might not leave a lot of room for bold moves, for undoing the status quo and for potentially embarrassing (or angering) Board members.

I can’t leave the topic without asking another related question.

We’ve just seen how decisive Larry Page can be. He looked at Motorola’s $2B of red ink since it was acquired by Google — no end in sight, no product momentum — and sold the embarrassment to Lenovo. If regulators approve the sale, Motorola will be in competent hands within a company whose leader, Yang Yuanqing, also known as YY, plays for the number one position. (Lenovo is the company that, in 2005, bought IBM’s ailing PC business and has since vaulted over Dell and HP to become the world’s premier PC maker.)

With this in mind, looking at the smartphone space where Apple runs its own premium ecosystem game, where Samsung takes no prisoners, where Huawei keeps rising, and where Lenovo will soon weigh in — to say nothing of the many OEMs that make feature phone replacements based on Android’s open source software stack (AOSP) — is it simply too late for Microsoft? Even if he has the will to make it a priority, can Nadella make Windows Phone a player?

If not, will he be as decisive as Larry Page?

JLG@mondaynote.com
@gassee

Why Twitter needs a design reset

 

Twitter is the archetype of a greatly successful service that complacently iterates itself without much regard for changes in its uses. Such behavior makes the service — and others like it — vulnerable to disruptive newcomers. 

Twitter might be the smartest new media of the decade, but its user interface sucks. None of its heavy users is ready to admit it, for a simple reason: Twitter is fantastic in broadcast mode, but terrible in consumption mode. Herein lies the distortion: most Twitter promoters broadcast tweets as much as they read them. The logical consequence is a broad complacency: Twitter is great because its most intensive broadcasters say so. The ones who rarely tweet but use the service as a permanent, tailored news feed are simply ignored. They suffer in silence — and they are up for grabs by the inevitable disrupter.

Twitter’s integration couldn’t be easier. You can tweet from any content, from your desktop with an app accessible in the toolbar, or from your smartphone. Twitter guarantees instant execution followed by immediate gratification: right after the last keystroke, your tweet is up for global propagation.

But when it comes to managing your timeline, it’s a different story. Unless you spend most of your time on Twitter, you miss many interesting items. Organizing feeds is a cumbersome process. Like everybody else, I have tried many Twitter desktop and mobile apps. None of them really worked for me. Even TweetDeck seems to have been designed by an IBM coder from the former Soviet régime. I looked around my professional environment and was stunned by the number of people who acknowledge going back to the basic Twitter app after unsuccessful tries elsewhere.

Many things are wrong with Twitter’s user interface, and it’s time to admit it. In the real world, where my 4G connection too often falls back to a sluggish EDGE network, watching a Twitter feed in a mobile setting becomes a nightmare. It happens to me every single day.

Here is a short list of nice-to-have features:

Background Auto-refresh. Why do I have to perform a manual refresh in my Twitter app each time I pick up my smartphone (even though the app is running in the background)? My email client does it, as do many apps that push content to my device. Alternatively, I’d be happy with preset refresh intervals rather than having to struggle to catch up with stuff I might have missed…
Speaking of refreshes, I would love to see iOS and Android come up with a super-basic refresh system: as long as my apps are open in the background, I would have a single “Update Now” button telling all my relevant apps (email reader, RSS reader, Twitter, Google Currents, Zite, Flipboard, etc.) to quickly fetch the stuff I usually read while I still have a decent signal.

Save the Tweet feature. Again, when I ride the subway (in Paris, London or NYC), I get a poor connection at best. So why not offer a function — a gentle swipe of my thumb, say — to put aside a tweet that contains an interesting link for later viewing?

Recommendation engine. Usually, I will follow someone I spot within the subscriptions of someone I already follow and appreciate, or from a retweet. Twitter knows exactly what my centers of interest are; it would therefore be perfectly able to match my “semantic footprint” to others’ (see the sketch after this list).

Tag system. Again, Twitter maintains a precise map of many of its users, or at least of those categorized as “influencers”. When I subscribe to someone who already has thousands of followers, why not tie this user to metadata vectors that will categorize my feeds? Over time, I would build a formidable cluster of feeds catering to my obsessions…
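
To illustrate the recommendation idea mentioned above, here is a minimal sketch of “semantic footprint” matching; the footprints are hypothetical interest weights, not anything Twitter actually exposes:

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse interest vectors (dicts)."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def recommend(my_footprint, candidates, already_following, top_n=5):
    """candidates: {handle: footprint}. Returns handles closest to my interests."""
    scored = [(cosine(my_footprint, fp), handle)
              for handle, fp in candidates.items()
              if handle not in already_following]
    return [handle for _, handle in sorted(scored, reverse=True)[:top_n]]

# Hypothetical footprints, built from hashtags/links a user engages with.
me = {"media": 0.9, "startups": 0.6, "design": 0.4}
candidates = {"@aggrefan": {"media": 0.8, "design": 0.5},
              "@quantguy": {"finance": 0.9, "markets": 0.7}}
print(recommend(me, candidates, already_following={"@filloux"}))
```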

I’m puzzled by Twitter’s apparent inability to understand the needs of its basic users. The company is far from unique in this regard; it keeps relying on a self-centered elite of trendy aficionados to maintain the comfy illusion of universal approval – until someone comes up with a radically new approach.

This is the “NASA/SpaceX syndrome”. For decades, NASA kept sending people and craft to space in the same fashion: a huge administrative machine coordinating thousands of contractors. As Jason Pontin wrote in his landmark piece for MIT Technology Review:

In all, NASA spent $24 billion, or about $180 billion in today’s dollars, on Apollo; at its peak in the mid-1960s, the agency enjoyed more than 4 percent of the federal budget. The program employed around 400,000 people and demanded the collaboration of about 20,000 companies, universities, and government agencies. 

Just to update Pontin’s statement, the International Space Station cost $100bn to build over a ten-year period and needs about $3bn per year to operate.

That was until a major disrupter, namely Elon Musk, came up with a different way to build a rocket. His company, SpaceX, has a long way to go, but it is already able to send objects (and soon people) to the ISS at a fraction of NASA’s cost. (Read the excellent story The Shared Genius of Elon Musk and Steve Jobs by Chris Anderson in Fortune.)

In the case of space exploration, Elon Musk the outsider, with his “System-level design thinking powered by extraordinary conviction” (as Anderson puts it), simply broke NASA’s iteration cycle with a completely different approach.

That’s how tech companies become vulnerable: they keep iterating their products instead of inducing disruption within their own ranks. It’s the case for Twitter, Microsoft, and Facebook.

There is one obvious exception – and a debatable one. Apple appears to be the only company able to nurture disruption in its midst. One reason is the obsessive compartmentalization of development projects, wrapped in paranoid secrecy. Apple creates an internal cordon sanitaire that protects new products from outside influences – even within the company itself. People there work on products without the kibitzing of derivative, “more for less” market research.

Google operates differently: it encourages disruption with its notorious 20% of work time that engineers can use to work on new-new things (only Google’s dominant caste is entitled to such a perk). It has also segregated Google X, its “moonshots” division.

To conclude, let me mention one tiny example of a general-user approach that collides with convention. It involves the unsexy world of calendars on smartphones — at first sight, not a fertile field for outstanding innovation. Then came PeekCalendar, a remarkably simple way to manage your schedule (video here) on an iPhone.

[Screenshot: PeekCalendar]

This app was developed by Squaremountains.com, a startup created by an IDEO alumnus and connected to the Estonian company Velvet. PeekCalendar is gently dismissed by techno-pundits as only suitable for not-so-busy people. I tested it and, a few bugs aside, it nicely accommodates my schedule of 25-30 appointments a week.

Showing this app during design sessions with my team at work also made me feel that the media sphere is by no means immune to the criticism I detailed above. Our industry is too shy when it comes to design innovation. Most often, for fear of losing our precious readership, we carefully iterate instead of seeking disruption. Inevitably, a young company with nothing to lose or preserve will come up with something new and eat our lunch. Maybe it’s time to Think Different™.

frederic.filloux@mondaynote.com
@filloux 

 

Apple Numbers For Normals: It’s The 5C, Stupid!

 

Today’s unscientific and friendly castigation of Apple’s costly iPhone 5C stumble: misdirected differentiation without enough regard for actual customer aspirations.

Here’s a quick snapshot of Apple’s numbers for the quarter ending December 2013, with percentage changes over the same quarter a year ago:

[Table: Apple’s December 2013 quarter numbers, with year-over-year percentage changes]

We can disregard the iPod’s “alarming” decrease. The iPod, which has become more of an iPhone ingredient, is no longer expected to be the star revenue maker that it was back in 2006 when it eclipsed the Mac ($7.6B vs. $7.4B for the full year).

For iPhones, iPads, and overall revenue, on the other hand, these are record numbers… and yet Apple shares promptly lost 8% of their value.

Why?

It couldn’t have been that the market was surprised. The numbers exactly match the guidance (a prophylactic legalese substitute for forecast) that was given to us by CFO Peter Oppenheimer last October:

“We expect revenue to be between $55 billion and $58 billion compared to $54.5 billion in the year ago quarter. We expect gross margins to be between 36.5% and 37.5%.”

(Non-normals can feast their eyes on Apple’s 10-Q filing and its lovingly detailed MD&A section. I’m sincere about the “lovingly” part — it’s great reading if you’re into it.)

Apple guidance be damned, Wall Street traders expected higher iPhone numbers. As Philip Elmer-DeWitt summarizes in an Apple 2.0 post, professional analysts expected about 55M iPhones, 4M more than the company actually sold. At $640 per iPhone, that’s about $2.5B in lost revenue and, assuming a 60% margin, $1.5B in profit. The traders promptly dumped the shares they had bought on hopes of higher revenues.
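
For the record, the back-of-the-envelope arithmetic behind those figures (the 60% margin is the assumption stated above):

```python
expected_units = 55e6   # analyst consensus
actual_units = 51e6     # roughly what Apple reported, per the 4M gap cited above
asp = 640               # average selling price per iPhone
gross_margin = 0.60     # the assumption used above

missed_revenue = (expected_units - actual_units) * asp
missed_profit = missed_revenue * gross_margin
print(f"${missed_revenue/1e9:.2f}B revenue, ${missed_profit/1e9:.2f}B profit")
# -> $2.56B revenue, $1.54B profit: the ~$2.5B and ~$1.5B quoted above
```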

In Apple’s choreographed, one-hour Earnings Call last Monday (transcript here), company execs offered a number of explanations for the shortfall (one might say they offered a few too many explanations). Discussing the mix of iPhone 5S vs. iPhone 5C sales, here’s what Tim Cook had to say [emphasis mine]:

“Our North American business contracted somewhat year over year. And if you look at the reason for this, one was that as we entered the quarter, and forecasted our iPhone sales, where we achieved what we thought, we actually sold more iPhone 5Ss than we projected.

And so the mix was stronger to the 5S, and it took us some amount of time in order to build the mix that customers were demanding. And as a result, we lost some sort of units for part of the quarter in North America and relative to the world, it took us the bulk of the quarter, almost all the quarter, to get the iPhone 5S into proper supply.

[…]

It was the first time we’d ever run that particular play before, and demand percentage turned out to be different than we thought.”

In plainer English:

“Customers preferred the 5S to the 5C. We were caught short, we didn’t have enough 5Ss to meet the demand and so we missed out on at least 4 million iPhone sales.”

Or, reading between the lines:

“Customers failed to see the crystalline purity of the innovative 5C design and flocked instead to the more derivative — but flattering — 5S.”

Later, Cook concludes the 5S/5C discussion and offers rote congratulations all around:

“I think last quarter we did a tremendous job, particularly given the mix was something very different than we thought.”

… which means:

“Floggings will begin behind closed doors.”

How can a company that’s so precisely managed — and so tuned-in to its customers’ desires — make such a puzzling forecast error? This isn’t like the shortcoming in the December 2012 quarter when Apple couldn’t deliver the iMacs it had announced in October. This is a different kind of mistake, a bad marketing call, a deviation from the Apple game plan.

With previous iPhone releases, Apple stuck to a simple price ladder with $100 intervals. For example, when Apple launched the iPhone 5 in October 2012, US carriers offered the new device for $200 (with a two-year contract), the 2011 iPhone 4S was discounted to $100, and the 2010 iPhone 4 was “free”.

But when the iPhone 5S was unveiled last September, Apple didn’t deploy the 2012 iPhone 5 for $100 less than the new flagship device. Instead, Apple “market engineered” the plastic-clad 5C to take its place. Mostly built of iPhone 5 innards, the colorful 5C was meant to provide differentiation… and it did, but not in ways that helped Apple’s revenue — or their customers’ self-image.

Picture two iPhone users. One has a spanking new iPhone 5S, the other has an iPhone 5 that he bought last year. What do you see? Two smartphone users of equally discerning taste who, at different times, bought the top-of-the-line product. The iPhone 5 user isn’t déclassé, he’s just waiting for the upgrade window to open.

Now, replace the iPhone 5 with an iPhone 5C. We see two iPhones bought at the same time… but the 5C owner went for the cheaper, plastic model.

We might not like to hear psychologists say we build parts of our identity with objects we surround ourselves with, but they’re largely right. From cars, to Burberry garments and accessories, to smartphones, the objects we choose mean something about who we are — or who we want to appear to be.

I often hear people claim they’re not interested in cars, that they just buy “transportation”, but when I look at an office or shopping center parking lot, I don’t see cars that people bought simply because the wheels were round and black. When you’re parking your two-year-old Audi 5S coupe (a vehicle once favored by a very senior Apple exec) next to the new and improved 2014 model, do you feel you’re of a lesser social station? Of course not. You both bought into what experts call the Affordable Luxury category. But your self-assessment would be different if you drove up in a Volkswagen Jetta. It’s made by the same German conglomerate, but now you’re in a different class. (This isn’t to say brand image trumps function. To the contrary, function can kill image; ask Nokia or Detroit.)

The misbegotten iPhone 5C is the Jetta next to the Audi 5S coupé. Both are fine cars and the 5C is a good smartphone – but customers, in numbers large enough to disrupt Apple’s forecast, didn’t like what the 5C would do to their image.

As always, it’ll be interesting to observe how the company steers out of this marketing mistake.

There is much more to watch in coming months: How Apple and its competitors adapt to a new era of slower growth; how carriers change their behavior (pricing and the all important subsidies) in the new growth mode; and, of course, if and how “new categories” change Apple’s business. On this, one must be cautious and refrain from expecting another iPhone or iPad explosion, with new products yielding tens of billions of dollars in revenue. Fodder for future Monday Notes.

JLG@mondaynote.com

@gassee

 

Mac Pro: Seymour Cray Would Have Approved

 

As we celebrate 30 years of Macintosh struggles and triumphs, let’s start with a semiserious, unscientific comparison between the original 128K Mac and its dark, polished, brooding descendant, the Mac Pro.

[Images: the original Mac 128K and the Mac Pro]

The original 128K Mac was 13.6″ high, 9.6″ wide, 10.9″ deep (35.4 x 24.4 x 26.4 cm) and weighed 16.5 lb (7.5 kg). Today’s Mac Pro is 9.9″ by 6.6″ (25 by 17 cm) and weighs 11 lb (5 kg) — smaller, shorter, and lighter than its ancient progenitor. Open your hand and stretch your fingers wide: The distance from the tip of your pinky to the tip of your thumb is in the 9-to-10-inch range (for most males). This gives you an idea of how astonishingly small the Mac Pro is.

At 7 teraflops, the new Pro’s performance specs are impressive…but what’s even more impressive is how all that computing power is stuffed into such a small package without everything melting down. Look inside the new Mac Pro and you’ll find a Xeon processor, twin AMD FirePro graphics engines, main memory, a solid-state “drive”, driven by 450W of maximum electric power… and all cooled by a single fan. The previous Mac Pro version, at only 2 teraflops, needed eight blowers to keep its GPU happy.

The Mac Pro achieves a level of “computing energy density” that Seymour Cray — the master of finding ways to cool high-performance, tightly packaged systems, and a Mac user himself — would have approved of.

(I’ve long been an admirer of Seymour Cray, ever since the introduction of his company’s first commercial supercomputer, the CDC 6600. In the early nineties, I was a Board member and investor at Cray Inc.  My memories of Seymour would fill an entire Monday Note. If you’re familiar with the name but not the supercomputer genius himself, I can recommend the Wikipedia article; it’s quite well-written.)

During Cray’s era of supercomputing — the 1960s to the early ’90s — processors were discrete, built from separate components. All of these building blocks had to be kept as close to each other as possible in order to stay in sync, to stay within the same “time horizon”. (Grace Hopper’s famous “one nanosecond equals a foot of wire” illustration comes to mind.) However, the faster the electronic module, the more heat it generates, and when components are packed tightly together, it becomes increasingly difficult to pump out enough heat to avoid a meltdown.
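
As a quick check on Hopper’s illustration, light covers roughly a foot in one nanosecond (signals in copper wire travel somewhat slower):

```python
c = 299_792_458        # speed of light in m/s
ns = 1e-9              # one nanosecond, in seconds
distance_m = c * ns
print(f"{distance_m * 100:.1f} cm, or {distance_m / 0.3048:.2f} ft")
# -> 30.0 cm, or 0.98 ft: Hopper's "foot of wire"
```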

That’s where Cray’s genius expressed itself. Not only could he plot impossibly tight circuit paths to guarantee the same propagation time for all logic signals, he designed these paths in ways that allowed adequate cooling. He sometimes referred to himself, half-seriously, as a good plumber.

(Seymour once told me he could fit a suit, a change of shirt, and underwear in his small Delsey briefcase, and thus speed through airports on the way to a fund-raising meeting while his investment bankers struggled with their unwieldy Hartmann garment bags…)

I finally met Seymour in December 1985 while I was head of Apple’s Product Development. The Mac Plus project was essentially done and the Mac II and Mac SE projects were also on their way (they would launch in 1987). Having catered to the most urgent tasks, we were looking at a more distant horizon, at ways to leap ahead of everyone else in the personal computer field. We concluded we had to design our own CPU chip, a quad-processor (today we’d call it a “four-core chip”). To do this, we needed a computer that could run the design and simulation software for such an ambitious project, a computer of commensurate capabilities, hence our choice of a Cray X/MP, and the visit to Seymour Cray.

For the design of the chip, the plan was to work with AT&T Microelectronics — not the AT&T we know now, but the home of Bell Labs, the birthplace of the transistor, Unix, the C language, cellular telephony and many other inventions. Our decision to create our own CPU wasn’t universally well-received. The harshest critics cast Apple as a “toy company” that had no business designing its own CPU chip. Others understood the idea but felt we vastly underestimated the technical challenges. Unfortunately, they turned out to be right. AT&T Microelectronics ultimately bailed out of the microprocessor business altogether.

(Given this history, I couldn’t help but be amused when critics scoffed at Apple’s decision to acquire P.A. Semiconductor in 2008 and, once again, attempt to design its own microprocessors. Even if the chip could be built, Apple could never compete against the well-established experts in the field… and it would cost Apple a billion dollars, either way. The number was wildly off the mark – and, given Apple’s financials, wouldn’t have mattered anyway. We know what happened: The 64-bit A7 device took the industry by surprise.)

Thirty years after the introduction of the original Mac, the Mac Pro is both different and consistent. It’s not a machine for everyone: If you mostly use ordinary office productivity apps, an iMac will provide more bang for less buck (which means that, sadly, I don’t qualify as a Mac Pro user). But like the 128K Mac, the Mac Pro is dedicated to our creative side; it serves the folks who produce audio and video content, who run graphics-intensive simulations. As Steve put it so well, the Mac Pro stands at the crossroads of technology and liberal arts:

[Image: the crossroads of technology and liberal arts slide]

Still, thirty years later, I find the Mac, Pro or “normal”, every bit as seductive, promising – and occasionally frustrating – as its now enshrined progenitor.

As a finishing touch, the Mac Pro, like its ancestor, is designed and assembled in the US.

JLG@mondaynote.com

————————–

Postscript. At the risk of spoiling the fun in the impressive Making the all-new Mac Pro video, I wonder about the contrast between the powerful manufacturing operation depicted in the video and the delivery constipation. When I ordered my iMac in early October 2013, I was promised delivery in 5-7 business days, a strange echo of the December 2012 quarter iMac shipments shortfall. The machine arrived five weeks later without explanation or updated forecast. Let’s hope this was due to higher than expected demand, and that Apple’s claim that Mac Pro orders will ship “in March” won’t leave media pros wanting.

Those media assets that are worth nothing

 

The valuation gap between high tech and media companies has never been wider. The erosion of their revenue model might be the main culprit, but management teams, unions and boards of directors also bear a heavy share of responsibility.

Two weeks ago, with a transaction that reset the value of printed assets to almost nothing, the French market for newsmagazines collapsed for good. Le Monde acquired 65% of the weekly Le Nouvel Observateur for a mere €13.4m ($18m), at a valuation of €20m ($27m). In fact, thanks to convoluted transaction terms, Le Monde will actually disburse less than €10m for its controlling share.

This number is a hard fact; it confirms the downward spiral of French legacy media values. For a while, rumors have been flying about bids for prominent newsmagazines floating around €20m. At the same time, Lagardère Groupe (a €7bn media conglomerate based in Paris) put most of its French magazines on the block, saying it would close them down if no buyer showed up. It turned out to be a “good” way to tip off potential bidders: they can now sit and wait for prices to come down as balance sheets continue to deteriorate. This brilliant strategy is attributable to Arnaud Lagardère, the son of Jean-Luc Lagardère, the swashbuckling group founder. The heir is fond of tennis, top models and embarrassing statements. He once said of himself: “Maybe [he] is incompetent, but not dishonest” — definitely right on the first count. Today, Lagardère Groupe faces a negative value for a large part of its magazine portfolio, meaning it would actually have to pay a buyer willing to take a publication off its hands.

I discussed this situation with financial analysts in Paris and London. They are unforgivingly critical of the causes of this unprecedented value depletion. For a start, newsweeklies paid the price of deteriorating copy sales (roughly -15% for 2013) and of an anemic advertising market. But the real sin, these analysts point out, is the delay in transforming and restructuring these companies. One put it bluntly: “It is clear there won’t be a single euro left for shareholders who didn’t do their job. Today, every acquisition on the French market is first and foremost weighed down by the need for a costly restructuring, which, in addition, will take three or four times longer than in the UK or elsewhere in Europe”.

The case of Le Nouvel Observateur is the perfect example. This iconic magazine of the French social democrats perfectly fits the picture of a nursing home where residents don’t do much while waiting for the inevitable end. A thick layer of journalists there is keen to praise the weekly: “You come in on a Tuesday morning to write your column and by the following Thursday, you’re gone. I don’t complain.” Two insiders told me that one of the events that finally pushed the aging owner of the “Nouvel Obs” to sell was the nixing of a timid management proposal: cutting one week of vacation (out of twelve) to save money. It’s also true that a good third of the staff actually does work hard to produce the magazine week after week. But a digital transformation — comparable, for instance, to what the Atlantic Media Group undertook in the US — is a dream completely out of reach.

From an investor standpoint, buying the Nouvel Observateur means spending €15m to €20m from the outset, just to realign the company with decent working practices. French laws and collective bargaining agreements do not help. In the case of Le Nouvel Observateur, the change in ownership will trigger a “clause of transfer” that entitles every journalist to leave the company with at least one month of salary per year of employment (raised to 120% of the monthly wage beyond 15 years). For the upper layer of the newsroom, whose working habits will be incompatible with a probable productivity realignment, this could be a once-in-a-lifetime opportunity to reward their long and tranquil tenure… at a cost of several million euros for the new owner. The same goes for mandatory buyouts, the customary way to push out people no longer needed. (What is Le Monde buying, you might ask? Basically, a base of 500,000 subscribers, a better bargaining position in the advertising market, plus a dose of vanity…)
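
A rough sketch of what that clause can cost, using entirely hypothetical headcount and salary figures:

```python
def severance(monthly_salary, years_of_service):
    """One month of salary per year of service, 120% of it beyond 15 years (per the clause above)."""
    base_years = min(years_of_service, 15)
    extra_years = max(years_of_service - 15, 0)
    return monthly_salary * (base_years + 1.2 * extra_years)

# Hypothetical newsroom: 100 journalists averaging 6,000 euros/month and 20 years of tenure.
total = 100 * severance(6_000, 20)
print(f"{total / 1e6:.1f}m euros")   # -> 12.6m euros, in the 15m-20m ballpark cited above
```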

Again, from an investor’s perspective, being forced to spend €15m-20m before allocating the first cent to a transformative investment is a severe deterrent. This mechanism also threatens daily newspapers such as Liberation (another icon of the French left, where I spent 12 years of my career). Isolated, stuck with a single product, dealing with a 35% decline in paid circulation last year, a weak advertising base, a discredited management (in a recent internal vote, 90% of the staff expressed mistrust of the bosses), and a negative P&L despite €12m in State subsidies, this company faces certain death unless it radically transforms itself. Its only way to survive might be to forgo the costly daily print edition, move to a well-crafted weekly distributed in selected urban areas, and extend it with real-time digital coverage on web, mobile and tablet. But such a move would mean yet another downsizing, along with heavy costs. No one is willing to be dragged into such a “social Vietnam”, as one of my interlocutors puts it.

Those who advise potential buyers are quick to point out that, if the goal is to take a position in the digital world, the money would be better spent building a pure player from the ground up. With €20m or €40m, you can definitely build something powerful in the journalistic field.

The highly publicized startup culture — some would say “ideology” — with its unparalleled mixture of agility and skyrocketing valuations, contributes to the demise of legacy media. Consider the table below. It shows the gap in per-user valuation between social networks and legacy media:

[Table: per-user valuations of social networks vs. legacy media]

For what it’s worth, this comparison illustrates the tremendous loss in value for legacy media. Several legacy players actually make (slim) profits while digital companies such as Pinterest or Snapchat don’t even have a revenue model. But as unfair as it sounds, investors — venture capital firms, Wall Street, high tech giants — are betting on two factors: the scalability of current user bases (with factors of 10x or 20x being the norm) and the ability of digital players to swiftly adjust to quickly changing environments. Two qualities unfortunately not associated with legacy media.

frederic.filloux@mondaynote.com

 

Puzzling Over Google’s Nest Acquisition

 

Glitter, big names, and big money ($3.2B) aside, a deeper look at Google’s latest move doesn’t yield a good theory. Perhaps because there isn’t one.

Last week’s Monday Note used the “Basket of Remotes” problem as a proxy for the many challenges to the consumer version of the IoT, the Internet of Things. Automatic discovery, two-way communication, multi-vendor integration, user-interface and network management complexity… until our home devices can talk to each other, until they can report their current states, functions, and failure modes, we’re better off with individual remotes than a confusing — and confused — universal controller.

After reading the Comments section, I thought we could put the topic to rest for a while, perhaps until devices powered by Intel’s very low-power Quark processor start shipping.

Well…

A few hours later, Google announced its $3.2B acquisition (in cash) of Nest, the maker of elegant connected thermostats and, more recently, of Nest Protect smoke and CO alarms. Nest founder Tony Fadell, often referred to as “one of the fathers of the iPod”, takes his band of 100 ex-Apple engineers and joins Google; the Mountain View giant pays a hefty premium, about 10 times Nest’s estimated yearly revenue of $300M.

Why?

Tony Fadell mentioned “scaling challenges” as a reason to sell to Google versus going it alone. He could have raised more money — he was actually ready to close a new round, $150M at a $2B valuation, but chose adoption instead.

Let’s decode scaling challenges. First, the company wants to raise money because profits are too slim to finance growth. Then, management looks at the future and doesn’t like the profit picture. Revenue will grow, but profits will not scale up, meaning today’s meager percentage number will not expand. Hard work for low profits.

(Another line of thought would be Supply Chain Management scaling challenges, that is, the difficulties of running manufacturing contractors in China, distributors, and customer support. This doesn’t make sense. Nest’s product line is simple: two products. Running manufacturing contractors isn’t black magic; it is now a well-understood trade. There are even contractors to run contractors; two of my friends do just that for US companies.)

Unsurprisingly, many worry about their privacy. The volume and tone of their comments reveal a growing distrust of Google. Is Nest’s expertise at connecting the devices in our homes simply a way for Google to know more about us? What will they do with my energy and time data? In a blog post, Fadell attempts to reassure:

“Will Nest customer data be shared with Google?
Our privacy policy clearly limits the use of customer information to providing and improving Nest’s products and services. We’ve always taken privacy seriously and this will not change.”

What else could Fadell offer besides this perfunctory reassurance? “[T]his will not change”… until it does. Let’s not forget how so many tech companies change their minds when it suits them. Google is no exception.

This Joy of Tech cartoon neatly summarizes the privacy concern:

[Cartoon: Joy of Tech on Nest and privacy]

The people, the brands, the money provide enough energy to provoke less than thoughtful reactions. A particularly agitated blogger, who can never pass up a rich opportunity to entertain us – and troll for pageviews – starts by arguing that Apple ought to have bought Nest:

“Nest products look like Apple products. Nest products are beloved by people who love Apple products. Nest products are sold in Apple stores.
Nest, in short, looked like a perfect acquisition for Apple, which is struggling to find new product lines to expand into and has a mountain of cash rotting away on its balance sheet with which it could buy things.
[...] Google’s aggressiveness has once again caught Apple snoozing. And now a company that looked to be a perfect future division of Apple is gone for good.”

Let’s slow down. Besides Nest itself, two companies have the best data on Nest’s sales, returns, and customer service problems: Apple and Amazon. Contrary to the “snoozing” allegation, Apple Store activity told Apple exactly the what, the how, and the how much of Nest’s business. According to local VC lore, Nest’s Gross Margins are low and don’t rise much above customer support costs. (You can find a list of Nest’s investors here. Some, like Kleiner Perkins and Google Ventures, have deep links to Google… This reminds many of the YouTube acquisition: several selling VCs were also Google investors, and one sat on Google’s Board. YouTube was bleeding money and Google had to “bridge” it, to loan it money before the transaction closed.)

See also Amazon’s product reviews page; feelings about the Nest thermostat range from enthusiastic to downright negative.

The “Apple ought to have bought Nest because it’s so Apple-like” meme points to an enduring misunderstanding of Apple’s business model. The Cupertino company has one and only one money pump: personal computers, whether in the form of smartphones, tablets, or conventional PCs. Everything else is a supporting player, helping to increase the margins and volume of the main money makers.

A good example is Apple TV: Can it possibly generate real money at $100 a puck? No. But the device expands the ecosystem, and so makes MacBooks, iPads, and iPhones more productive and pleasant. Even the App Store with its billions in revenue counts for little by itself. The Store’s only mission is to make iPhones and iPads more valuable.

With this in mind, what would be the role of an elegant $249 thermostat in Apple’s ecosystem? Would it add more value than an Apple TV does?

We now turn to the $3.2B price tag. The most that Apple has ever paid for an acquisition was $429M (plus 1.5M Apple shares), and that was for… NeXT. An entire operating system that revitalized the Mac. It was a veritable bargain. More recently, in 2012, it acquired AuthenTec for $356M.

With rare exceptions (I can think of one, Quattro Wireless), Apple acquires technologies, not businesses. Even if Apple were in the business of buying businesses, a $300M enterprise such as Nest wouldn’t move the needle. For an Apple whose revenue will approach or exceed $200B this calendar year, Nest would represent about 0.15% of the company’s revenue.
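
The arithmetic behind that 0.15% figure:

```python
nest_revenue = 300e6    # Nest's estimated yearly revenue
apple_revenue = 200e9   # Apple's projected calendar-year revenue, per the above
print(f"{nest_revenue / apple_revenue:.2%}")   # -> 0.15%
```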

Our blogging seer isn’t finished with the Nest thermostat:

“I was seduced by the sexy design, remote app control, and hyperventilating gadget-site reviews of Nest’s thermostat. So I bought one.”

But, ultimately, he never used the device. Bad user feedback turned him off:

“[…] after hearing of all these problems, I have been too frightened to actually install the Nest I bought. So I don’t know whether it will work or not.”

He was afraid to install his Nest… but Apple should have bought the company?

So, then, why Google? We can walk through some possible reasons.

First, the people. Tony Fadell’s team is justly admired for their design skills. They will come in handy if Google finally gets serious about selling hardware, if it wants to generate new revenue in multiples of $10B (its yearly revenue is approximately $56B now). Of course, this means products other than just thermostats and smoke alarms. It means products that can complement Google’s ad business with its 60% Gross Margin.

Which leads us to a possible second reason: Nest might have a patent portfolio that Google wants to add to its own IP arsenal. Fadell and his team surely have filed many patents.

But… $3.2B worth of IP?

This leaves us with the usual questions about Google’s real business model. So far, it’s even simpler than Apple’s: Advertising produces 115% or more of Google’s profits. Everything else brings the number back down to 100%. Advertising is the only money machine; all other activities are cost centers. Google’s hope is that one of these cost centers will turn into a new money machine of a magnitude comparable to its advertising quasi-monopoly.

On this topic, I once again direct you to Horace Dediu’s blog. In a post titled Google’s Three Ps, Horace takes us through the basics of a business: People, Processes, and Purpose:

“This is the trinity which allows for an understanding of a complex system: the physical, the operational and the guiding principle. The what, the how and the why.”

Later, Horace points to Google’s management reluctance to discuss its Three Ps:

“There is a business in Google but it’s a very obscure topic. The ‘business side’ of the organization is only mentioned briefly in analyst conference calls and the conversation is not conducted with the same team that faces the public. Even then, analysts who should investigate the link between the business and its persona seem swept away by utopian dreams and look where the company suggests they should be looking (mainly the future.)
There are almost no discussions of cost structures (e.g. cost of sales, cost of distribution, operations and research), operating models (divisional, functional or otherwise) or of business models. In fact, the company operates only one business model which was an acquisition, reluctantly adopted.”

As usual — or more than usual in the current circumstances — the entire post is worth a meditative read, especially for the questions it asks at the end:

“The trouble lies in that organization also having de-facto control over the online (and hence increasingly offline) lives of more than one billion people. Users, but not customers, of a company whose purpose is undefined. The absence of oversight is one thing, the absence of an understanding of the will of the leadership is quite another. The company becomes an object of faith alone.  Do we believe?”

Looking past the glitter, the elegant product, the smart people, do we believe there is a purpose in the Nest acquisition? Or is Google simply rolling the dice, hoping for an IoT breakthrough?

JLG@mondaynote.com

 

Is Yahoo serious about media?

 

Under Marissa Mayer’s leadership, Yahoo keeps making substantial efforts to become a major news media player. Will a couple of well-known bylines and a shiny mobile app do the job?

A big Silicon Valley player entering the news business has long been the worst nightmare of legacy publishers. Combining an array of high-tech products with the ability to get all the talent money can buy, the Valley giant could be truly disruptive. Ten years ago, the recurring fantasy was a Google or a Yahoo gulping down the New York Times or another such big media property. For many reasons — economic as well as cultural — it didn’t happen. Yahoo once approached NYT columnist Thomas Friedman, offering him a hefty pay raise to become its star writer. But the Times’ globo-pundit quickly backed off when he realized that most of his reputation — debatable as it may be (see the cruel Tom Friedman OpEd Generator) — was tied to his employer. Yahoo and others then put the issue to rest for years, focusing on core challenges: survival for Yahoo, global domination for Google.

Until now.

Last year, we witnessed a first significant move from the tech galaxy: Jeff Bezos acquired the Washington Post. As mentioned in the Monday Note (see the Memos To Jeff series), Amazon’s technical firepower will undoubtedly exert a transformative — rather than merely incremental — impact on the Post. Further, I suspect this will end up being a welcome stimulus for the entire industry, which badly needs a tech kick in its sagging backside.

Then came the Yahoo initiatives. Last fall, Marissa Mayer snatched three visible talents from the New York Times: Megan Liberman, until then the Times’ deputy news editor, was appointed Yahoo News editor-in-chief; Mayer also tapped iconic tech columnist David Pogue; a month later, she picked the Times’ chief political correspondent Matt Bai. Finally, on November 25th, Mayer announced that she had hired former TV host Katie Couric as the portal’s “global anchor”.

Here we are: Expect Yahoo to simultaneously enter three major information segments: general-audience programming with Katie Couric’s show; political and national issues; and tech coverage (in addition to the classic Food site). Logically, Yahoo started with the tech side. Pogue himself introduced Yahoo Tech on stage at CES last week — and didn’t pass up the opportunity to blast its competitors, mocking their nerdy and obscure language. Interface-wise, I found the site pretty clever, with its one-page, endless-scrolling structure — a trend worth noting — and articles showcased in about 120 tiles (approximately 7 tiles x 18 rows), each expanding as needed and keeping its own URL, which is essential for social sharing.

Regardless of David Pogue’s ability to put a human face on technology, Yahoo Tech is entering an increasingly crowded segment. This month, the Wall Street Journal rolled out WSJD, set to take the slot vacated by Walt Mossberg’s and Kara Swisher’s AllThingsD, itself reborn as Re/Code (it’s hard to find a geekier name) and operated by the same duo. The Re/Code money machine will be the already sold-out Code Conference and its offshoots. WSJD features potent editorial firepower, with no fewer than 50 writers on deck.

Marissa Mayer has made no secret of the fact that her editorial initiatives will be directed at Yahoo’s #1 priority, “the company’s commitment to mobile”. When she landed at Yahoo, Mayer was dismayed to discover that everyone received a Blackberry. Now, the company wants to be aboard every relevant ecosystem, starting with iOS and Android.

That’s what Yahoo does with its interesting NewsDigest App for iOS, launched at CES. Like its tech web site, the mobile app rides a series of hot trends. First of all, with its truncated structure, the app borrows a lot from Circa (see a previous Monday Note); it also inherits technology developed by Summly, the startup Yahoo acquired in March last year (merely five months after Summly’s app launched). Summly’s core idea is a news-summarizing algorithm. The NewsDigest iteration actually does much more than condense stories: In a neat interface, it creates context by slicing coverage as follows (a hypothetical data-model sketch appears after the list):
–Image gallery
–Infographics
–Maps
–Stock charts
–Main Twitter feeds
–Video
–Wikipedia
…plus a set of references if you want more.
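For the technically curious, here is how such a faceted story might be modeled. This is a hypothetical sketch built from the list above; the class and field names are my guesses, not Yahoo’s actual schema.

```python
# Hypothetical model of a News Digest story -- field names are invented,
# based only on the facets listed above, not on Yahoo's real data structures.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigestStory:
    headline: str
    summary: str                                           # Summly-style algorithmic condensation
    images: List[str] = field(default_factory=list)        # image gallery URLs
    infographics: List[str] = field(default_factory=list)
    maps: List[str] = field(default_factory=list)
    stock_charts: List[str] = field(default_factory=list)
    tweets: List[str] = field(default_factory=list)         # main Twitter feeds
    videos: List[str] = field(default_factory=list)
    wikipedia_links: List[str] = field(default_factory=list)
    references: List[str] = field(default_factory=list)     # "if you want more"
```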

For a story picked up yesterday, it looks like this:

[Screenshot: a Yahoo News Digest story view]

Evidently, there is room for improvement. Weirdly enough, the app is updated only twice a day and carries fewer than ten stories. Both choices go against the idea of a smartphone app that is supposed to update permanently and provide content in an endless stream. Plus, automated as it is, the prose can’t quite compete for a Pulitzer Prize. But if Yahoo decides to hand the key ingredients over to a competent editorial team, the NewsDigest could become a really good product.

Coming back to this column’s main topic, I believe Yahoo is really up to something in the news sector:
– Yahoo enjoys huge traction in the mobile world: According to Marissa Mayer, among the 800 million people who access Yahoo every month (excluding Tumblr), roughly 400 million reach the portal through their mobile phone. (Despite that number, one irritating thing: Yahoo made its app available in the US App Store only, ignoring the hundreds of millions of English-speaking users on other shores, East and West of Sunnyvale, California.)
– Unlike Google, Yahoo is free from Android’s strategic goals and from a difficult relationship with Apple. It can therefore play the two ecosystems equally, with the option of using one as leverage against the other.
– Even better, with last week’s acquisition of Aviate, a customizable Android interface layer, Yahoo can now create its own branded experience on top of the standard Android interface.
– Assuming it enters the news business for good, Yahoo will act like a tech company, not a legacy media one. In other words, it will first build a sizable audience for its news ecosystem while deliberately ignoring the revenue side as long as needed. Then, it will optimize and datamine this user base to understand in the most granular way what works and what doesn’t. Having successfully gone through those steps, Yahoo will then transform the (hopefully vast) newly acquired audience into a money machine.
This is the way it works nowadays.

frederic.filloux@mondaynote.com

@filloux

 

Internet of Things: The “Basket of Remotes” Problem

 

We count on WiFi and Bluetooth in our homes, but we don’t have appliances that provide self-description or reliable two-way communication. As a result, the Internet of Things for consumers is, in practice, a Basket of Remotes.

Last Friday, I participated in a tweetchat (#ibmceschat) arranged by friends at IBM. We discussed popular CES topics such as Wearables, Personal Data, Cable and Smart TV, and the Internet of Things. (I can’t help but note that Wikipedia’s disambiguation page bravely calls the IoT “a self-configuring wireless network between objects”. As we’ll see, the self-configuring part is still wishful thinking.)

At one point, the combined pressures of high-speed twittering and 140-character brevity spurred me to blurt this:

[Embedded tweet: “Remotes Basket Case”]

A little bit of background before we rummage through the basket.

In practice, there are two Internets of Things: One version for Industry, and another for Consumers.

The Industrial IoT is alive and well. A gas refinery is a good example: Wired and wireless sensors monitor the environment, data is transmitted to control centers, actuators direct the flow of energy and other activities. And the entire system is managed by IT pros who have the skill, training, and culture — not to mention the staff — to oversee the (literal) myriad unseen devices that control complicated and dangerous processes.
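As a toy illustration of that sensors-to-control-center-to-actuators loop, here is a minimal sketch; the sensor names and thresholds are invented for illustration, not taken from any real refinery system.

```python
# Toy sketch of an industrial control loop: sensors feed a control center,
# which issues actuator commands. All names and limits are invented.
SENSOR_READINGS = {"pipeline_pressure_bar": 82.0, "tank_temperature_c": 61.5}
SAFE_LIMITS     = {"pipeline_pressure_bar": 75.0, "tank_temperature_c": 80.0}

def control_center(readings, limits):
    """Compare each sensor reading to its limit and emit actuator commands."""
    commands = []
    for sensor, value in readings.items():
        if value > limits[sensor]:
            commands.append(f"throttle actuator for {sensor}")
    return commands

for command in control_center(SENSOR_READINGS, SAFE_LIMITS):
    print(command)   # -> "throttle actuator for pipeline_pressure_bar"
```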

The management of any large corporation’s energy, environment, and safety requires IT professionals whose raison d’être is the mastery of technology. (In my fantasy, I’d eavesdrop on Google’s hypergalactic control center, the corporate Internet of Things that manages the company’s 10 million servers…)

Things aren’t so rosy in the consumer realm.

For consumers, technology should get out of the way — it’s a means, not an end. Consumers don’t have the mindset or training of IT techies; they don’t have the time or focus to build a mental representation of a network of devices, their interactions and failure modes. For example, when my computer connects to the Net, I don’t have to concern myself with the way routers work, or how the human-friendly mondaynote.com gets translated into the 78.109.84.91 IP address.
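That name-to-address translation is a good example of machinery consumers get to ignore. A minimal sketch using Python’s standard library (the address returned today may differ from 78.109.84.91 as hosting changes):

```python
# The DNS lookup a consumer never has to think about.
import socket

ip_address = socket.gethostbyname("mondaynote.com")
print(ip_address)
```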

Not so with a home network of IoT objects that connects heating and cooling systems, security cameras, CO and fire sensors, the washer, dryer, stove, fridge, entertainment devices, and under-the-mattress sleep-monitoring pads. This may be an exaggerated example, but even with a small group of objects, how does a normal human configure and manage the network?

For an answer, or lack thereof, we now come back to the Basket of Remotes.

I once visited the home of an engineer who managed software development at an illustrious Silicon Valley company. I was shocked, shocked to see a basket of remotes next to the couch in front of his TV. ‘What? You don’t use a programmable remote to subsume this mess into one elegant device and three or four functions: TV, DVR, VoD, MP3 music?’

‘No, it’s too complicated, too unreliable. Each remote does its separate job well, with an easy mental representation. These dumb devices don’t talk back, there’s no way for a unified remote to ask what state they’re in. So I gave up — I have enough mental puzzles at the office!’

Indeed, so-called “smart” TVs are unable to provide a machine-readable description of the commands they understand (an XML file, also readable by a human, would do). We can’t stand in front of a TV with a “fresh” universal remote – or a smartphone app – touch the Learn button and have the TV wirelessly ship the list of commands it understands…and so on to the next appliance, security system or, if you insist, fridge and toaster.
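To make the idea concrete, here is what consuming such a self-description could look like from a universal remote app. The XML schema, device name, and command identifiers are entirely hypothetical, invented for illustration; no manufacturer actually ships this today — which is precisely the point.

```python
# Hypothetical: a TV ships a machine-readable list of the commands it
# understands, and a universal remote app parses it. The XML format below
# is invented for illustration only.
import xml.etree.ElementTree as ET

DEVICE_DESCRIPTION = """
<device name="LivingRoomTV">
  <command id="power_on"/>
  <command id="power_off"/>
  <command id="select_input"><param name="input" values="HDMI1,HDMI2,Antenna"/></command>
  <command id="set_volume"><param name="level" values="0-100"/></command>
</device>
"""

root = ET.fromstring(DEVICE_DESCRIPTION)
commands = [cmd.get("id") for cmd in root.findall("command")]
print(f"{root.get('name')} understands: {', '.join(commands)}")
```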

If an appliance yielded its control and reporting data, an app developer could build a “control center” that would summarize and manage your networked devices. But in the Consumer IoT world, we’re still very far from this desirable state of affairs. A TV can’t even tell a smartphone app if it’s on, what channel it’s tuned to, or which device is feeding it content. For programmable remotes, it’s easy to get lost, as too many TVs don’t even know a command such as Input 2; they only know Next Input. If a human changes the input by walking to the device and pushing a button, the remote is lost. (To say nothing of TVs that don’t have separate On and Off commands, only an On/Off toggle, with the danger of getting out of sync – and no way for the TV to talk back and describe its state…)
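Here is a toy sketch of why a toggle-only, non-reporting device defeats a unified remote. The class and method names are invented for illustration; no real TV exposes this API.

```python
# Toy illustration: a remote that must *guess* a toggle-only TV's state.
class ToggleOnlyTV:
    """A TV with a single On/Off toggle and no way to report its state."""
    def __init__(self):
        self._powered = False          # real state, invisible to the remote
    def press_power_toggle(self):
        self._powered = not self._powered

class UnifiedRemote:
    def __init__(self, tv):
        self.tv = tv
        self.assumed_powered = False   # the remote's best guess
    def turn_on(self):
        if not self.assumed_powered:   # guess says "off", so toggle
            self.tv.press_power_toggle()
        self.assumed_powered = True

tv = ToggleOnlyTV()
remote = UnifiedRemote(tv)
tv.press_power_toggle()   # someone walks to the TV and turns it on by hand
remote.turn_on()          # the remote, believing the TV is off, toggles it...
print(tv._powered)        # ...and turns it OFF: remote and TV are out of sync.
```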

Why don’t Consumer Electronics manufacturers provide machine self-description and two-way communication? One possible answer is that they’re engaged in a cost-cutting race to the bottom and thus have no incentive to build more intelligence into their devices. If so, why do they bother building the unbearably dumb apps bundled into their Smart TVs? (Korean LG Electronics even dug up WebOS for integration into its latest TVs.)

A look at Bang & Olufsen’s Home Integration page might give one hope. The video demo, in B&O’s usual clean luxury style, takes us from dining to sleep to waking up, opening curtains, making coffee, morning news on TV, and opening the garage door. But it only offers a tightly integrated B&O solution that requires one or more IT interventions (and it’s expensive — think above $100K for the featured home).

This leaves middle class homes with an unsolved, mixed-vendor Basket of Remotes, a metaphor for the unanswered management challenges in the Consumer IoT space.

JLG@mondaynote.com

@gassee