The Apple Tesla Connection: Fun and Reason With Numbers

 

Apple acquiring Tesla would make for juicy headlines but would also be very dangerous. There are more sensible ways for the two companies to make money together.

Apple has never suffered from a lack of advice. As long as I can remember — 33 years in my case — words of wisdom have rained down upon the company, and yet the company stubbornly insists on following its own compass instead of treading a suggested path.

(Actually, that’s not entirely true. A rudderless, mid-nineties Apple yielded to the pressure of pundit opinion and licensed its Mac OS to Power Computing and Motorola… and promptly lost its profits to Mac clones. When Steve Jobs returned in 1997, he immediately canceled the licenses. This “harsh” decision was met with a howl of protest, but it stanched the bleeding and made room for the other life-saving maneuvers that saved the company.)

Now that Jobs is no longer with us and Apple’s growth has slowed, the advice rain is more intense than ever… and the pageview netwalkers are begging for traffic. Suggestions range from the deranged (Tim Cook needs to buy a blazer), to – having forgotten what happened to netbooks – joining the race to the bottom (Apple needs a low-cost iPhone!), to the catchall “New Categories” such as Wearables, TV, Payment Systems, and on to free-floating hysteria: “For Apple’s sake, DO SOMETHING, ANYTHING!”

The visionary sheep point to the titanic deals that other tech giants can’t seem to resist: Google buys Nest for $3.3B; Facebook acquires WhatsApp for $16B (or $19B, depending on what and how you count). Why doesn’t Apple use its $160B of cash and make a big acquisition that will solidify its position and rekindle growth?

Lately, we’ve been hearing suggestions that Apple ought to buy Tesla. The company is eminently affordable: Even after TSLA’s recent run-up to $30B, the company is well within Apple’s means. (That Wall Street seems to be telling us that Tesla is worth about half of GM and Ford is another story entirely.)

Indeed, Adrian Perica, Apple’s head of acquisitions, met Tesla’s CEO Elon Musk last Spring. Musk, who later confirmed the meeting, called a deal “very unlikely”, but fans of both companies think it’s an ideal match: Tesla is the first Silicon Valley car company, great at design, robotics, and software. Like Apple, Tesla isn’t afraid to break with traditional distribution models. And, to top it off, Musk is a Steve Jobs-grade leader, innovator, and industry contrarian. I can see the headline: “Tesla, the Apple of the auto industry…”

But we can also picture the clash of cultures…to say nothing of egos.

I have vivid recollections of the clash of cultures after Exxon’s acquisition of high-tech companies to form Exxon Office Systems in the 1970s.

Still reeling from the OPEC oil crisis, Exxon’s management was hypnotized by the Boston Consulting Group (BCG), which insisted that “Information Is the Oil of the 21st Century.” The BCG was right: Tech has its Robber Barons. Apple, Oracle, Google, Facebook, Intel, and Microsoft all weigh more than Shell, Exxon, or BP.

But the BCG was also wrong: Exxon’s culture had no ability to understand what made the computer industry tick, and the tech folks thoroughly despised the Exxon people. The deal went nowhere and cost Exxon about $4B – a lot of money more than 30 years ago.

This history lesson isn’t lost on Apple. So: If Apple isn’t interested in buying Tesla, why is it talking to Tesla?

Could it be batteries?

A look at battery numbers for the two companies brings up an interesting parallel. Tesla plans to make 35,000 Tesla S cars this year. According to Car and Driver magazine, the battery weighs 1,323 pounds (600 kilograms — we’ll stick to metric weights moving forward).

That’s 21,000 (metric) tons of batteries.

For Apple devices the computation is more complicated — and more speculative — because the company publicizes battery capacity (in watt-hours) rather than weight. But after some digging around, I found the weight information for an iPhone 4S on the iFixit site: 26 grams. From there, I estimated that the weight of the larger iPhone 5S battery is 30 grams.

I reasoned that the weight/capacity ratio is probably the same for all Apple batteries, so if a 26g iPhone battery provides 5.25 watt-hrs, the iPad Air battery that yields 32.4 watt-hrs must weigh approximately 160g. Sparing you the details of the mix of iPad minis and the approximations for the various Macs, we end up with these numbers for 2014 (I’m deliberately omitting the iPod):

100M iPads @ 130g = 13,000 tons
200M iPhones @ 30g = 6,000 tons
20M Macs @ 250g = 5,000 tons
Total Apple batteries = 24,000 metric tons

It’s a rough estimate, but close enough for today’s purpose: Apple and Tesla need about the same tonnage of batteries this year.
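For the curious, the arithmetic above can be replayed in a few lines of Python (the unit volumes and per-device battery weights are this article's rough estimates, not official figures):

```python
# Back-of-the-envelope battery tonnage, using the article's estimates.
# Tesla: 35,000 Tesla S cars at 600 kg of battery each.
tesla_tons = 35_000 * 600 / 1000  # kg -> metric tons

# Apple: estimated 2014 unit volumes and per-device battery weights (grams).
apple_estimates = {
    "iPad":   (100_000_000, 130),
    "iPhone": (200_000_000, 30),
    "Mac":    (20_000_000, 250),
}
apple_tons = sum(units * grams / 1_000_000  # g -> metric tons
                 for units, grams in apple_estimates.values())

print(f"Tesla: {tesla_tons:,.0f} t, Apple: {apple_tons:,.0f} t")
# Tesla: 21,000 t, Apple: 24,000 t
```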

Now consider that Tesla just announced it will build a giant battery plant it calls the Gigafactory.

According to Tesla, the plant’s capacity in 2020 will be higher than what the entire world produced in 2013.

A more likely explanation for Apple’s conversation with Tesla might be something Apple does all the time: Sit with a potential supplier and discuss payment in advance as a way to secure supply of a critical component.

Of course, neither Tesla nor Apple will comment. Why should they? But a partnership born of their comparable needs for battery volumes makes a lot more sense than for the two companies to become one.

–JLG@mondaynote.com


Postscript: Certainly, Apple is no stranger to acquisitions — the company recently disclosed that it purchased 23 small companies in the last 16 months — but the operative word, here, is “small”. Apple has made two large purchases in twenty years: The late-1996 acquisition of NeXT for $429M (and 1.5M Apple shares), and the $356M purchase of Authentec in July 2012. Other than that, Apple’s acquisitions have focused on talent and technologies rather than sheer size.

This seems wise. In small acquisitions, everyone knows who’s in charge, it’s the acquirer’s way or the highway, and failure rarely makes headlines. Bigger deals always involve explicit or implicit power sharing and, more important, a melding of cultures, of habits of the heart and mind, of “the ways we do things here”.

——–

News Media Revenue Matrix: The Bird’s Eye View

 

Publishers struggle with newer and more complex business models. Some appear stronger than others but, above all, a broad palette is a must. It is a means to capture emerging opportunities and to compensate for the drying up of older revenue sources.

Today, I submit the following revenue matrix for a modern, content-rich news outlet. As I see it, in the news business “modernity” means this:

– A proven ability to produce original content in abundance and in multiple forms: news reporting, investigation, analysis, data journalism, long form (for ebook publishing), enterprise journalism, live feeds; all of the above as text, images, graphics, and video.

– A cultural mindset to produce content for the platform with the best fit: a news story for the newspaper, an interactive piece for the web, live coverage for mobile. The collective publishing mindset should no longer allow first- and second-class news products. Every piece of newsroom output must be designed as a contribution to a cascading revenue system in which each element empowers every other one.

– A newsroom equipped with the best tools money can buy or — even better — build. These include a powerful Content Management System (CMS) aimed at dispatching production to every platform. The CMS must be connected to a semantic analysis system that makes all pieces of information — from a feature story to the transcript of a video — compatible with the semantic web’s standardized grammar. In order to extract more value from a piece of content, the CMS must also connect to multiple databases. For example, the name of an obscure city must be able to generate a map – through the Geonames base; a Board Director must be tied to a high value database of business leaders such as The Official Board; the name of a company must lead to open-source corporations listings.

Mastering the semantic web is inseparable from acquiring information-gathering capabilities such as aggregation and filtering (see a previous Monday Note: Building a business news aggrefilter). Such a feature is a prerequisite to building high-margin products as well as exploiting the social media echo chamber. After collecting content through RSS feeds, the combination of semantic news analysis matched against the taxonomy of, say, Twitter will yield a trove of information on what audiences like or dislike, not only for a news outlet but also for its competitors. It is a complex and expensive endeavor but, in the long run, it will be worth every penny.

– And, most importantly, a global editorial vision. Too often, newsroom management suffers from what I’ll call “mono-product bias”, focusing on what is seen as noble: namely, print. At a very minimum, modern editorship must embrace a comprehensive digital strategy. But it must also envision a sustainable game plan for a complete lineup of ancillary products that also deserve editorial coherence and strength.
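To make the entity-linking idea concrete, here is a minimal sketch in Python. The tiny in-memory gazetteer and the link identifiers are invented placeholders for illustration; a real system would query services such as Geonames or The Official Board.

```python
# Minimal sketch of semantic enrichment: spot known entity names in a
# story and attach links to reference databases. The in-memory
# "gazetteer" below is an invented placeholder, not a real database.
GAZETTEER = {
    "Ouagadougou": ("city", "map:example-geo-id"),
    "Acme Corp":   ("company", "registry:acme-corp"),
}

def enrich(text: str) -> list[tuple[str, str, str]]:
    """Return (entity, type, link) for each known name found in the text."""
    hits = []
    for name, (kind, link) in GAZETTEER.items():
        if name in text:
            hits.append((name, kind, link))
    return hits

story = "Acme Corp opened a plant near Ouagadougou last month."
print(enrich(story))
```

A production CMS would of course use proper named-entity recognition rather than substring matching, but the principle is the same: every recognized name becomes a hook for additional context and value.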

Having said that, let’s have a look at the following matrix. No rocket science here: I simply made a list of 14 products that many news outlets already operate, then tried to assess the outlook for each revenue stream. (My original idea was to assign an estimated ARPU to each cell, but there are too many parameters to take into account.)

Now, let’s focus on specific products and revenue streams.

Daily Print Edition. I’m very bearish on print. Granted, it still brings in the most substantial chunk of revenue – but also most of the losses. And prospects are bleak: copy sales, subscriptions, even ad sales are deteriorating fast. Some light can come from ads when they are components of customized campaigns. Daily newspapers need to be vastly simplified in order to free up resources for the wide array of other revenue streams — especially digital. I’m a big supporter of Financial Times editor Lionel Barber’s “Memo on reshaping the newspaper for the digital age”.

Weekend editions will do better than dailies for several reasons. First, their function — long formats, portfolios, reading habits — makes them better armed against the digital tsunami that devoured news. Second, they remain a great vector for pricey advertising: in some Anglo-Saxon markets, weekend editions account for half of print ad revenue. The New York Times understood this well: its full digital access + weekend edition bundle is a hit among customers.

Advertising revenue stream. Let’s face it: traditional ad formats, print or digital, are dying. The conjunction of programmatic buying and ad saturation/tracking/targeting will seal their fate for good. The best outlook seems to be for customized operations and branded content (or combinations of the two). They can spread to every platform, including mobile where, so far, users massively reject ads. In addition, these customized operations carry high value (huge CPMs or hefty flat fees).

Events & Conferences. The segment is crowded and success depends on a subtle balance of attendance fees vs. sponsorship, but also on editorial content. A conference is indeed a full editorial vector that needs to be treated with the same care as any other publication, i.e., with a precise angle, great casting, and first-class moderation that favors intellectual density over speakers flogging cheap sales pitches. News media are well positioned to deploy efficient promotion for a content-rich, sustainable conference system.

Intelligence & Surveys. Attractive as they might sound, these products require a great deal of expertise to make a difference. Very few media can fulfill the promise and justify the high price that goes along with such offerings.

Training and MOOCs represent an interesting potential diversification for some business publications. They carry several advantages: by addressing a young readership, MOOCs can create an early attachment to the brand; the level of risk is low as long as the media company limits itself to being a distributor (quality MOOC production is very expensive). For a business publication, such activities represent a great way to increase its penetration in the corporate world where the need for training is limitless.

Premium Subscriptions. Some large, diversified media companies are already considering complex subscription packages for a small number of high-yield clients. In addition to print and full digital access, such packages could include access to conferences & events, MOOCs, market intelligence, and other publications. Testing the concept is a low-risk proposition.

The Business to Business segment remains the province of specialized publications. But the potential is there for general-audience media: corporations are hungry for information. The era of the bulky corporate intranet that no one watches is gone; today, companies want mobile and tablet apps for their staff that save time while being precisely targeted and well-designed. Not an easy market, but a very solvent one.

Sketchy and questionable as it is, the above matrix also illustrates the complexity of designing and selling such a wide range of products to individuals or corporations. Only a small number of news organizations will have the staff, skills and resolve to address such a broad range of opportunities.

frederic.filloux@mondaynote.com

@filloux

Nokia Goes Android – Part II

 

Next week, we might see Nokia’s entry-level feature phones replaced by a low-end device running Android Open Source Project software. The phone may just be a fantasy, but the dilemma facing Nokia’s feature phone business is quite real: Embrace Android or be killed by it. 

Nokia will announce an Android phone! So says the persistent rumor, started about three months ago by an @evleaks tweet and followed by more details as the weeks went by. Initially code-named Normandy, the hypothetical feature phone is now called Nokia X, and it already has its own Wikipedia page and pictures.

Nokia is on the path to being acquired by Microsoft. Why introduce an Android-based phone now? The accepted reasoning is simple…

  • Even though it doesn’t generate much revenue per handset (only $42), Nokia’s feature phone business is huge and must be protected. Nokia’s Form 20-F for 2012 (the 2013 report hasn’t been published, yet) shows its phone numbers compared to the previous year:
    • 35M smartphones (-55%) at an average price (ASP) of $210 (+ 11%)
    • 300M feature phones (-12%) with an ASP of $42 (- 11%)
  • These 300 million feature phones — or “dumbphones” — keep the Nokia flag waving, particularly in developing economies, and they act as an up-ramp towards more profitable smartphones.
  • Lately, dumbphones have become smarter. With the help of Moore’s Law, vigorous competition, and Android Open Source Project (AOSP) software, yesterday’s underfed, spartan feature phones are being displaced by entry-level smartphones. Asha, Nokia’s offering in this category, has been mowed down by low-end Android devices from China.
  • Nokia can’t help but notice that these AOSP-based feature phones act as a gateway drug to the full-blown Android smartphone experience (and much larger profits) offered by competitors such as Samsung, Huawei, and Motorola’s new owner Lenovo.
  • So Nokia drops its over-the-hill Symbian software core, adopts Android, adds its own (and Microsoft’s) services, design expertise, and carrier relationships, and the result is Nokia X, a cleaner, smarter feature phone.

That’s it. Very tactical. Business as usual, only better. Move along, nothing to see.

It’s not that simple.

There’s an important difference between the Android Open Source Project (AOSP), and the full Android environment that’s offered by Samsung, LG, HTC and the like.

The Android Open Source Project is really Open Source: you can download the source code here, modify it as you see fit for your application, add layers of services, substitute parts…anything you like.

Well, almost anything. The one thing you can’t do is slap a figurative “Android Inside” sticker on your device. To do that, you must comply with Google’s strict compatibility requirements that force licensees to bundle Google Mobile (Maps, Gmail, YouTube, etc.) and Google Play (the store for apps and other content). The result isn’t open or free, but smartphone makers who want the Android imprimatur must accept the entire stack.

As an added incentive to stay clean, a “Full Android” licensee cannot also market devices that use a different, incompatible version (or “fork”) of the Android code published by Google. A well-known example of forking is Amazon’s use of Android source code to create the software engine that runs its high-end Kindle Fire tablets. You won’t find a single instance of the word “Android” on these devices: Google won’t license the name for such uses.

(For more on the murky world of Android licensing, bundling, and marketing agreements, see Ben Edelman’s research paper: Secret Ties in Google’s “Open” Android.)

The hypothetical, entry-level Nokia X can’t offer an entire Android stack — it can’t be allowed to compete with the higher-end Lumias powered by Microsoft’s Windows Phone — so it would have to run an “unmentionable” Android fork.

Even without the “Android Inside” label, everyone would soon know the truth about the Android code inside the new device. This could give pause to software developers, carriers, and the more curious users. “Where is Microsoft going with this? Won’t the Android beast inside soon work its way up the product line and displace the Windows Phone OS?”

Microsoft will make soothing sounds: “Trust us, nothing of the sort will ever happen.  Nokia X is a purely tactical ploy, a placeholder that will give Windows Phone enough time to reveal its full potential.” We know how well attempts to create a Reality Distortion Field have worked for Microsoft’s Post-PC denials.

The Redmond not-so-mobile giant faces a dilemma: Lose the Asha feature phone business to aggressive forked-Android makers, or risk poisoning its Windows Phone business by introducing potentially expansionist Android seeds at the bottom of its handset line.

Several observers (see Charles Arthur’s penetrating Guardian column as an example) have concluded that Microsoft should follow Amazon’s lead and accept the “Come To Android” moment. It should drop Windows Phone and run a familiar Embrace and Extend play: Embrace Android and Extend it with Bing, Nokia’s Here Maps, Office, and other Microsoft properties.

Critics, such as Peter Bright, an energetic Microsoft commenter, contend that forking Android isn’t feasible:

“Android isn’t designed to be forked. With GMS, Google has deliberately designed Android to resist forking. Suggestions that Microsoft scrap its own operating system in favor of such a fork simply betray a lack of understanding of the way Google has built the Android platform.”

Dianne Hackborn, a senior Android engineer (and a former comrade of mine during a previous OS war) contradicts Bright in great point-by-point detail and concludes:

“Actually, I don’t think you have an understanding of how Google has built Android. I have been actively involved in designing and implementing Android since early on, and it was very much designed to be an open-source platform… Android creates a much more equal playing field for others to compete with Google’s services than is provided by the proprietary platforms it is competing with. I also think a good argument can be made that Android’s strategy for addressing today’s need to integrate cloud services into the base platform is an entirely appropriate model for a ‘real’ open-source platform to take.”

In the end, Microsoft probably doesn’t trust Google to refrain from the same games that Microsoft itself knows (too well) how to play. Microsoft used its control of Windows to favor its Office applications. Now it’s Google’s turn. The Mountain View company appears set to kill Microsoft Office, slowly but surely, and using all means available: OS platforms and Cloud services.

None of this draws a pretty picture for Microsoft’s mobile future. Damned if it introduces Android bits at the low end, damned if it lets that same software kill its Asha feature phone business.

JLG@mondaynote.com
@gassee
———————-
PS: Almost four years ago, I wrote a light-hearted piece titled Science Fiction: Nokia goes Android. It was actually less fictional than I let on at the time. In June 2010, I was asked to give a talk at Nokia’s US HQ in White Plains, NY. I was supposed to discuss Apple but declined to spend much time on that topic, arguing that the Cupertino company was too “foreign” to Nokia’s culture. Instead, I made two suggestions: Fire your CEO, and drop your four or five software platforms — Symbian and Linux variants — and adopt Android. Nokia’s combination of industrial design expertise, manufacturing might, and long-standing, globe-spanning carrier relationships could make it a formidable Android smartphone maker.

The first recommendation was warmly received — there was no love for Olli-Pekka Kallasvuo, the accountant cum attorney CEO.

The second was met with indignation: “We can’t lose control of our destiny”. I tried to explain that the loss had already taken place, that too many software platforms were a sure way to get killed at the hands of monomaniacal adversaries.

Three months later Kallasvuo was replaced…by a Microsoft alum who immediately osborned Nokia’s smartphone business by pre-announcing the move to Windows Phone almost a year before the new devices became available.

—–

Comcast and Us

 

Comcast tells us how much better our lives will be after they acquire Time Warner. Great, thanks! Perhaps this is an opportunity to look at other ways that we can “acquire” Cable TV and Internet access.

Comcast CEO Brian Roberts thinks we’re powerless idiots. This is what his company’s website says about the planned Time Warner acquisition:

“Transaction Creates Multiple Pro-Consumer and Pro-Competitive Benefits…”

Don’t read the full legal verbiage that purports to explain the maneuver. A more productive use of your time will be had by reading Counternotion’s pointed summary in Obfuscation by disclosure: a lawyerly design pattern:

(tl;dr: According to Comcast, the merger is “pro-sumer” if you “get past some of the hysteria,” it’s “approvable” by the regulators and won’t “reduce consumer choice at all”. Will it raise prices? “not promising that they will go down or even that they will increase less rapidly.” Given the historical record of the industry, it’s Comedy Central material.)

Let’s not loiter around Comcast’s lobbying operations, either — the $18.8M spent in 2013, the pictures of Mr. Roberts golfing with our President, the well-oiled revolving door between the FCC and the businesses they regulate. Feelings of powerlessness and anger may ensue, as trenchantly expressed in this lament from a former FCC Commissioner.

Instead, let’s use our agitation as an opportunity to rethink what we really want from Cable carriers. The wish list is long: TV à la carte instead of today’s stupid bundles, real cable competition vs. de facto local monopolies, metered Internet access in exchange for neutrality and lower prices for lighter usage, decent set-top boxes, 21st century cable modems, and, of course, lower prices.

These are all valid desires, but if there were just one thing that we could change about the carrier business, what would it be? What would really make a big, meaningful difference to our daily use of TV and the Internet?

Do you remember the Carterfone Decision? For a century (telephone service started in the US in 1877), AT&T reigned supreme in telecommunications networking. (I should say the former AT&T, not today’s company rebuilt from old body parts.) The company owned everything along its path, all the way down to your telephone handset — only Ma Bell’s could be used.

Then, in the late fifties, a company called Carterfone began to sell two-way radios that could be hooked up to a telephone. The device was invented by a Texan named Thomas Carter as a clumsy but clever way to allow oil field owners and managers sitting in their offices in Dallas to reach their workers out at the pumps.

AT&T was not amused.

“[AT&T] advised their subscribers that the Carterfone, when used in conjunction with the subscriber’s telephone, is a prohibited interconnecting device, the use of which would subject the user to the penalties provided in the tariff…”

Carterfone brought an antitrust suit against AT&T… and won. With its decision in favor of Thomas Carter’s company, the Federal Communications Commission got us to a new era where any device meeting the appropriate technical standards could connect to the phone network.

“…we hold, as did the examiner, that application of the tariff to bar the Carterfone in the future would be unreasonable and unduly discriminatory.”

The regulator — an impartial representative, in an ideal world — decides what can connect to the network. It’s not a decision that’s left to the phone company.

Back in the 21st century, we need a Carterfone Decision for cable boxes and modems. We need a set of rules that would allow Microsoft, Google, Roku, Samsung, Amazon, Apple — and companies that are yet to be founded — to provide true alternatives to Comcast’s set-top boxes.

Today, you have a cable modem that’s so dumb it forces you to restart everything in a particular sequence after a power outage. You have a WiFi base station stashed in among the wires. Your set-top box looks like it was made in the former Soviet Union (a fortuitous product introduction days before the merger announcement doesn’t improve things much). You have to find your TV’s remote in order to switch between broadcast TV, your game console, and your Roku/AppleTV/Chromecast…and you have to reach into your basket of remotes just to change channels.

Imagine what would happen if a real tech company were allowed to compete on equal terms with the cable providers.

Microsoft, for example, could offer an integrated Xbox that would provide Internet access, TV channels with a guide designed by Microsoft, WiFi, an optional telephone, games of course, and other apps as desired. One box, three connectors: power, coax from the street, and HDMI to the TV set. There would be dancing in the streets.

But, you’ll object, what about the technical challenges? Cable systems are antiquated and poorly standardized. The cables themselves carry all sorts of noisy signals. What tech giant would want to deal with this mess?

To which one can reply: Look at the smartphone. It’s the most complicated consumer device we’ve ever known. It contains radios (Wifi, Bluetooth, multi-band cellular), accelerometers/gyroscopes, displays, loudspeakers, cameras, batteries… And yet, smartphones are made in huge quantities and function across a wide range of network standards. There’s no dearth of engineering talent (and money) to overcome the challenges, especially when they’re tackled outside of the cable companies and their cost-before-everything cultures.

Skeptics are more likely to be correct about the regulatory environment or, to be more precise, regulatory capture, a phrase that…captures the way regulators now work for the industries they were supposed to control. Can we imagine the FCC telling Comcast: “Go ahead and buy Time Warner…just one little condition, make sure any and all of your connection protocols and services APIs are open to any and all that pass the technical tests listed in Appendix FU at the end of this ruling.”

That’s not going to happen. We must prepare ourselves for a sorry display of bad faith and financial muscle. Who knows, in the end, Comcast might give up, as AT&T did after telling us how pro-consumer the merger with T-Mobile would be.

JLG@mondaynote.com

@gassee

Building a business news aggrefilter

 

This February 10, Les Echos launches its business news aggrefilter. For the French business media group, this is a way to gain critical working knowledge of the semantic web. Here is how we did it. And why.

The site is called Les Echos 360 and is separate from our flagship site LesEchos.fr, the digital version of the French business daily Les Echos. As the newly coined word aggrefilter indicates, it is an aggregation and filtering system. It is to be the kernel from which many digital products and extensions we have in mind will spring.

My idea to build an aggrefilter goes back to… 2007. That year, in San Francisco, I met Dan Farber, at the time editor-in-chief of CNet (now at CBS Interactive; his blog here) and the actual father of the term aggrefilter. Dan told me: ‘You should have a look at Techmeme. It’s an “aggrefilter” that collects technology news and ranks it based on importance to the news cycle.’ I briefly explored the idea of building such an aggrefilter but found it too hard to do from scratch; off-the-shelf aggrefilter software didn’t exist yet. The task required someone like Techmeme founder Gabe Rivera, who holds a PhD in computer science. I shelved the idea for a while.


A year ago, as head of digital at Les Echos, I reopened the case and pitched the idea to a couple of French computer scientists specializing in text-mining — a field that had vastly improved since I first looked at it. We decided to give the idea a shot. Why?

I believe a great media brand bearing a large set of positive attributes (reliability, scope, depth of coverage) needs to generate an editorial footprint that goes far beyond its own production. It’s a matter of critical mass. In the case of Les Echos, we need to be the very core of business information, both for the general public and for corporations. Readers trust the content we produce; therefore they should trust the reading recommendations we make through our aggregation of relevant web sites. This isn’t an obvious move for journalists who, understandably, aren’t necessarily keen to send traffic to third-party web sites. (Interestingly, someone at the New York Times told me that a heated debate flared up within the newsroom a few years ago: To what extent should NYT.com direct readers to its competitors? Apparently, market studies settled the issue by showing that readers of the NYT online actually valued it as a reliable prescriber.)

In the business field, unlike Google News, which crawls an unlimited trove of sources, my original idea was to extract good business stories from both algorithmically and manually selected sources. More importantly, the idea was to bring to the surface, to effectively curate, specialized sources — niche web sites and blogs — usually lost in the noise. Near-real-time information also seemed essential, hence the need for an automated, Techmeme-like gathering process. (Techmeme is now supplemented by Mediagazer, one of my favorite reads.)

Where do we go from here?

Initially, we turned to the newsroom, asking beat reporters for a list of reliable sources they regularly monitored. The idea was to build a qualified corpus based on suggestions from our in-house specialists. Techmeme and Mediagazer call it their “leaderboard” (see theirs for tech and media). Perhaps we didn’t have the right pitch, or we were misunderstood, but all we got was a lukewarm reception. Our partner, the French startup Syllabs, came up with a different solution, based on Twitter analysis.

We used our reporters’ 72 most active Twitter accounts to extract URLs embedded in their tweets. This first pass yielded about 5,000 URLs, but most turned out to be useless because, most of the time, reporters linked to their own or their colleagues’ newsroom stories. Then Syllabs engineers had another idea: they data-mined tweets from people followed by our staff. This yielded 872,000 URLs. After that, another filtering pass identified the true curators, the people who found original sources around the web. Retweets were also counted, as they indicate a vote of relevance/confidence. After further statistical analysis of tweet components, the 872,000 URLs were boiled down to fewer than 400 original sources that became the basis of Les Echos 360’s Leaderboard (we are now down to 160 sources).
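The filtering pass described above, discarding links that point back to our own stories and weighting the rest by retweets, might be sketched like this (the data shapes and the scoring rule are assumptions for illustration; the actual Syllabs pipeline is far more elaborate):

```python
from collections import Counter
from urllib.parse import urlparse

OWN_DOMAINS = {"lesechos.fr"}  # links back to our own stories don't count

def score_sources(tweets):
    """tweets: iterable of (url, retweet_count) pairs. Returns domains
    ranked by a crude relevance score: 1 point per mention + 1 per retweet."""
    scores = Counter()
    for url, retweets in tweets:
        domain = urlparse(url).netloc.removeprefix("www.")
        if domain in OWN_DOMAINS:
            continue  # self-links are noise, as noted above
        scores[domain] += 1 + retweets
    return scores.most_common()

sample = [
    ("https://www.lesechos.fr/article-1", 3),   # filtered out as a self-link
    ("https://nichesite.example/post", 5),
    ("https://nichesite.example/other", 0),
    ("https://blog.example/entry", 1),
]
print(score_sources(sample))
```

In this toy run, the niche site outranks the blog because it is both cited more often and retweeted more; the real pipeline adds many statistical passes on top of this kind of counting.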

Building a corpus of sources is one thing; ranking articles with respect to their weight in the news cycle is yet another story. Every hour, 1,500 to 2,000 news pieces go through a filtering process that defines their semantic footprint (with its associated taxonomy). Then they are aggregated into “clusters”. Eventually, clusters are ranked according to a statistical analysis of their “signal” in the general news flow. Each “clustering” (collection + ranking) contains 400-500 clusters, a process that more than occasionally overloads our computers.
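As a rough illustration of that hourly pass (the tag-overlap rule and the ranking-by-size are my simplifications, not Les Echos 360’s actual algorithm), clustering articles by shared semantic footprint might look like this:

```python
def cluster_and_rank(articles, overlap=2):
    """Minimal sketch: each article carries a 'footprint' of tags;
    articles sharing enough tags are merged into a cluster, and
    clusters are ranked by their weight in the news flow (here,
    simply the number of articles they contain)."""
    clusters = []  # each cluster: {"tags": set, "articles": [...]}
    for art in articles:
        tags = set(art["footprint"])
        for c in clusters:
            if len(c["tags"] & tags) >= overlap:
                c["tags"] |= tags        # grow the cluster's footprint
                c["articles"].append(art)
                break
        else:  # no cluster matched: start a new one
            clusters.append({"tags": tags, "articles": [art]})
    return sorted(clusters, key=lambda c: len(c["articles"]), reverse=True)
```

A production system would weight sources by quality index and recency rather than ranking on raw counts, which is part of the tuning discussed below.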

Despite continuous revisions to its 19,000 lines of code, the system is far from perfect. As expected. In fact, it needs two sets of tunings. One is to maintain a wide enough spectrum of sources to properly reflect the diversity of topics we want to cover, with a caveat: profusion doesn’t necessarily create quality, and crawling the long tail of potentially good sources continues to prove difficult. The second adjustment is finding the right balance between all the parameters: update frequency, the “quality index” of sources, and many other criteria I won’t disclose here. This I compare to the mixing console in a recording studio: finding the right sound is tricky.

It took years for Techmeme to refine its algorithm. It might take a while for Les Echos 360 — that’s why we are launching the site in beta (a notion not widely shared in the media sector). No surprise: a continuous news flow is an extremely difficult moving target. As for Techmeme and Mediagazer, despite refinements in Gabe Rivera’s work, their algorithm is still “rectified” by more than a dozen editors (who even rewrite headlines to make them more explicit and punchier). A much lighter crew will monitor Les Echos 360 through a back office that will allow us to change cluster rankings and eliminate parasitic items.

For Les Echos’ digital division, this aggrefilter is a proof of concept, a way to learn a set of technologies we consider essential for the company’s future. The digital news business will be increasingly driven by semantic processes; these will allow publishers to extract much more value from news items, whether produced in-house or aggregated and filtered. That is especially true for a business news provider: the more specialized the corpus, the higher the need for advanced processing. Fortunately, it is much easier to fine-tune an aggrefilter for a specific field (logistics, clean-tech, M&A, legal affairs…) than for wider and muddier streams of general news. This new site is just the tip of the iceberg: we built this engine to address a wide array of vertical, business-to-business needs, and it aims to be a source of tangible revenue.

frederic.filloux@mondaynote.com

@filloux 

 

Nadella’s Job One

 

Microsoft has a new CEO – a safe choice, steeped in the old culture, with the Old Guard still on the Board of Directors. This might prevent Nadella from making one tough choice, one vital break with the past.

Once upon a distant time, the new CFO of a colorful personal computer company walks into his first executive staff meeting and proudly shares his thoughts:

“I’ve taken the past few weeks to study the business, and I’d now like to present my top thirty-five priorities…”

This isn’t a fairy tale, I was in the room. I didn’t speak Californian as fluently as I do now, so rather than encourage the fellow with mellifluous platitudes — ‘Interesting’ or, even better, ‘Fascinating, great vision!’ — I spoke my mind, possibly much too clearly:

“This is terrible, disorganized thinking. Claiming to have thirty-five priorities is, in fact, a damning admission: You have none, you don’t even know where to start. Give us your ONE priority and show us how everything else serves that goal…”

The CFO, a sharp, competent businessman, didn’t lose his cool and, after an awkward silence, stepped through his list. Afterwards, with calm poise, he graciously accepted my apologies for having been so abrupt…

Still, you can’t have a litany of priorities.

Turning to Microsoft, will the company’s new CEO, Satya Nadella, focus the company on a true priority, one and only one goal, one absolutely must-win battle? For Nadella, what is Microsoft’s Nothing Else Matters If We Fail?

In his first public pronouncement, the new Eagle of Redmond didn’t do himself any favors by uttering bombastic (and false) platitudes (which were broadly retweeted and ridiculed):

“We are the only ones who can harness the power of software and deliver it through devices and services that truly empower every individual and every organization. We are the only company with history and continued focus in building platforms and ecosystems that create broad opportunity.”

One hesitates. Either Nadella knows this is BS but thinks we’re stupid enough to buy into such pablum. Or he actually believes it and is therefore dangerous for his shareholders and coworkers. Let’s hope it’s the former, that Nadella, steeped in Microsoft’s culture, is simply hewing to his predecessor’s chest-pounding manner. (But let’s also keep in mind the ominous dictum: Culture Eats Strategy For Breakfast.)


Assuming Nadella knows the difference between what he must say and what he must do, what will his true priority be? What battle will he pick that, if lost, will condemn Microsoft to a slow, albeit comfortable, slide into the tribe of has-beens?

It can’t be simply tending the crops. Enterprise software, Windows and Office licenses might not grow as fast as they used to, but they’re not immediately threatened. The Online Services Division has problems but they can be dealt with later — it continues to bleed money but the losses are tolerable (about $1B according to the Annual Report). The Xbox One needs no immediate attention.

What really threatens Microsoft’s future is the ebullient, sui generis world of mobile devices, services, and applications. Here, Microsoft’s culture, its habits of the heart and mind, has led the company to a costly mistake.

Microsoft has succeeded, in the past, by straddling the old and the new: The company is masterful at introducing new features without breaking older software. In Microsoft’s unspoken, subconscious culture, the new can only be defined as an extension of the existing, so when the company finally decided it needed a tablet (another one, after the Tablet PC failure), it set out to build a better device that would also function as a laptop. The best of both worlds.

We know what happened. Users shunned Microsoft’s neither-nor Windows 8 and Surface hybrids. HP has backed away from Windows 8 and now touts its PCs running Windows 7 “Back By Popular Demand”  — this would never have happened when Microsoft lorded over its licensees. And now we hear that the upcoming Windows 8.1 update will boot directly into the conventional Windows 7-like desktop as opposed to the unloved Modern (née Metro) tiles.

Microsoft faces a choice. It can replace the smashed bumper on its truck with a stronger one, drop a new engine into the bay and take another run at the tablet wall. Or it can change direction. The former — continuing to attempt to bridge the gap between tablets and laptops  — will do further damage to the company’s credibility, not to mention its books. The latter requires a radical but simple change: Make an honest tablet using a version of Windows Phone that’s optimized for the things that tablets do well. Leave laptops out of it.

That is a priority, a single, easily stated goal that can be understood by everyone — employees and shareholders, bloggers and customers. To paraphrase a Valley wag, it’s a cri de guerre that’s so simple you can remember it even if you’re tired, drunk, and your spouse has thrown you out in the rain at 3 A.M. in your jockey briefs.

This is an opportunity for the new CEO to make his mark, to show vision, as opposed to mere care-taking.

But will he seize it?

Nadella should know the company by now. He’s been with Microsoft for over twenty years, during which time he’s proven himself to be a supremely technical executive. The company is remarkably prosperous — $78B in revenue in 2013; $22B profit; $77B in cash. This prosperity bought the Board some time when deciding on a new CEO, and should give Nadella a cushion if he decides to redirect the company.

Of course, there’s the Old Guard to contend with. Bill Gates has ceded the Chairman role to John Thompson, but he’ll stay on as a “technical advisor” to Nadella, and Ballmer hasn’t budged — he remains on the Board (for the time being). This might not leave a lot of room for bold moves, for undoing the status quo and for potentially embarrassing (or angering) Board members.
I can’t leave the topic without asking another related question.

We’ve just seen how decisive Larry Page can be. He looked at Motorola’s $2B of red ink since it was acquired by Google — no end in sight, no product momentum — and sold the embarrassment to Lenovo. If regulators approve the sale, Motorola will be in competent hands within a company whose leader, Yang Yuanqing, also known as YY, plays for the number one position. (Lenovo is the company that, in 2005, bought IBM’s ailing PC business and has since vaulted over Dell and HP to become the world’s premier PC maker.)

With this in mind, looking at the smartphone space where Apple runs its own premium ecosystem game, where Samsung takes no prisoners, where Huawei keeps rising, and where Lenovo will soon weigh in — to say nothing of the many OEMs that make feature phone replacements based on Android’s open source software stack (AOSP) — is it simply too late for Microsoft? Even if he has the will to make it a priority, can Nadella make Windows Phone a player?

If not, will he be as decisive as Larry Page?

JLG@mondaynote.com
@gassee

Why Twitter needs a design reset

 

Twitter is the archetype of a greatly successful service that complacently iterates without much regard for changes in how it is used. Such behavior makes the service — and others like it — vulnerable to disruptive newcomers.

Twitter might be the smartest new media of the decade, but its user interface sucks. None of its heavy users is ready to admit it, for a simple reason: Twitter is fantastic in broadcast mode but terrible in consumption mode. Herein lies the distortion: most Twitter promoters broadcast tweets as much as they read them. The logical consequence is a broad complacency: Twitter is great because its most intensive broadcasters say so. The ones who rarely tweet but use the service as a permanent, tailored news feed are simply ignored. They suffer in silence — and they are up for grabs by the inevitable disrupter.

Twitter integration couldn’t be easier: you can tweet any content from your desktop, with an app accessible in the toolbar, or from your smartphone. Twitter guarantees instant execution followed by immediate gratification: right after the last keystroke, your tweet is up for global propagation.

But when it comes to managing your timeline, it’s a different story. Unless you spend most of your time on Twitter, you miss many interesting items. Organizing feeds is a cumbersome process. Like everybody else, I’ve tried many Twitter desktop and mobile apps. None of them really worked for me. Even TweetDeck seems to have been designed by an IBM coder from the former Soviet régime. I looked around my professional environment and was stunned by the number of people who acknowledge going back to the basic Twitter app after unsuccessful tries elsewhere.

Many things are wrong with Twitter’s user interface, and it’s time to admit it. In the real world, where my 4G connection too often falls back to a sluggish EDGE network, watching a Twitter feed in a mobile setting becomes a nightmare. It happens to me every single day.

Here is a short list of nice-to-have features:

Background Auto-refresh. Why do I have to perform a manual refresh in my Twitter app each time I return to my smartphone (even though the app is running in the background)? My email client does it, and so do many apps that push content to my device. Alternatively, I’d be happy with preset refresh intervals, so I wouldn’t have to struggle to catch up with stuff I might have missed…
Speaking of refreshes, I would love to see iOS and Android come up with a super-basic refresh system: as long as my apps are open in the background, a single “Update Now” button would tell all my relevant apps (email reader, RSS reader, Twitter, Google Currents, Zite, Flipboard, etc.) to quickly download the content I usually read while I still have a decent signal.

Save the Tweet feature. Again, when I ride the subway (in Paris, London or NYC), I get a poor connection at best. So why not offer a gesture, such as a gentle swipe of my thumb, to set aside a tweet that contains an interesting link for later viewing?

Recommendation engine. Usually, I follow someone I spot among the subscriptions of someone I already follow and appreciate, or through a retweet. Twitter knows exactly what my centers of interest are; it would therefore be perfectly able to match my “semantic footprint” to others’.

Tag system. Again, Twitter maintains a precise map of many of its users, or at least of those categorized as “influencers”. When I subscribe to someone who already has thousands of followers, why not tie this user to metadata vectors that will categorize my feeds? Over time, I would build a formidable cluster of feeds catering to my obsessions…

I’m puzzled by Twitter’s apparent inability to understand the needs of basic users (the company is far from unique in this regard). Instead, it keeps relying on a self-centered elite of trendy aficionados to maintain the comfy illusion of universal approval, until someone comes up with a radical new approach.

This is the “NASA/SpaceX syndrome”. For decades, NASA kept sending people and craft to space in the same fashion: a huge administrative machine coordinating thousands of contractors. As Jason Pontin wrote in his landmark piece in MIT’s Technology Review:

In all, NASA spent $24 billion, or about $180 billion in today’s dollars, on Apollo; at its peak in the mid-1960s, the agency enjoyed more than 4 percent of the federal budget. The program employed around 400,000 people and demanded the collaboration of about 20,000 companies, universities, and government agencies. 

Just to update Pontin’s statement: the International Space Station cost $100bn to build over a ten-year period and needs about $3bn per year to operate.

That was until a major disrupter, Elon Musk, came up with a different way to build a rocket. His company, SpaceX, has a long way to go, but it is already able to send objects (and soon people) to the ISS at a fraction of NASA’s cost. (Read the excellent story “The Shared Genius of Elon Musk and Steve Jobs” by Chris Anderson in Fortune.)

In the case of space exploration, Elon Musk the outsider, with his “system-level design thinking powered by extraordinary conviction” (as Anderson puts it), simply broke NASA’s iteration cycle with a completely different approach.

That’s how tech companies become vulnerable: they keep iterating their products instead of inducing disruption within their own ranks. This is the case for Twitter, Microsoft, and Facebook.

There is one obvious exception, and a debatable one: Apple appears to be the only one able to nurture disruption in its midst. One reason is the obsessive compartmentalization of development projects wrapped in paranoid secrecy. Apple creates an internal cordon sanitaire that protects new products from outside influences, even from within the company itself. People there work on products without kibitzing, derivative, “more for less” market research.

Google operates differently: it encourages disruption with its notorious 20% of work time that engineers can use to work on new-new things (only Google’s dominant caste is entitled to such a contribution). It has also segregated GoogleX, its “moonshots” division.

To conclude, let me mention one tiny example of a general-user approach that collides with convention. It involves the unsexy world of smartphone calendars, at first sight not a fertile field for outstanding innovation. Then came PeekCalendar, a remarkably simple way to manage your schedule on an iPhone (video here).


This app was developed by Squaremountains.com, a startup created by an IDEO alumnus and connected to the Estonian company Velvet. PeekCalendar is gently dismissed by techno-pundits as only suitable for not-so-busy people. I tested it and, a few bugs aside, it nicely accommodates my schedule of 25-30 appointments a week.

Showing this app during design sessions with my team at work also made me feel that the media sphere is by no means immune to the criticism I detailed above. Our industry is too shy when it comes to design innovation. Most often, for fear of losing our precious readership, we carefully iterate instead of seeking disruption. Inevitably, a young company with nothing to lose or preserve will come up with something new and eat our lunch. Maybe it’s time to Think Different™.

frederic.filloux@mondaynote.com
@filloux 

 

Apple Numbers For Normals: It’s The 5C, Stupid!

 

Today’s unscientific and friendly castigation of Apple’s costly iPhone 5C stumble: misdirected differentiation without enough regard for actual customer aspirations.

Here’s a quick snapshot of Apple’s numbers for the quarter ending December 2013, with percentage changes over the same quarter a year ago:

[Table: Apple’s results for the quarter ending December 2013, with year-over-year percentage changes]

We can disregard the iPod’s “alarming” decrease. The iPod, which has become more of an iPhone ingredient, is no longer expected to be the star revenue maker that it was back in 2006 when it eclipsed the Mac ($7.6B vs. $7.4B for the full year).

For iPhones, iPads, and overall revenue, on the other hand, these are record numbers…. and yet Apple shares promptly lost 8% of their value.

Why?

It couldn’t have been that the market was surprised. The numbers exactly match the guidance (a prophylactic legalese substitute for forecast) that was given to us by CFO Peter Oppenheimer last October:

“We expect revenue to be between $55 billion and $58 billion compared to $54.5 billion in the year ago quarter. We expect gross margins to be between 36.5% and 37.5%.”

(Non-normals can feast their eyes on Apple’s 10-Q filing and its lovingly detailed MD&A section. I’m sincere about the “lovingly” part — it’s great reading if you’re into it.)

Apple guidance be damned, Wall Street traders expected higher iPhone numbers. As Philip Elmer-DeWitt summarizes in an Apple 2.0 post, professional analysts expected about 55M iPhones, 4M more than the company actually sold. At $640 per iPhone, that’s about $2.5B in lost revenue and, assuming 60% margin, $1.5B in profit. The traders promptly dumped the shares they had bought on the hopes of higher revenues.
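The back-of-the-envelope arithmetic is easy to verify. (The $640 average selling price and 60% margin are the assumptions above, not Apple’s disclosed figures.)

```python
# Analyst consensus of ~55M units vs. the ~51M Apple actually sold
missed_units = 55_000_000 - 51_000_000
asp = 640            # assumed average selling price per iPhone, in $
gross_margin = 0.60  # assumed margin

lost_revenue = missed_units * asp           # $2.56B, i.e. "about $2.5B"
lost_profit = lost_revenue * gross_margin   # $1.54B, i.e. "about $1.5B"
```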

In Apple’s choreographed, one-hour Earnings Call last Monday (transcript here), company execs offered a number of explanations for the shortfall (one might say they offered a few too many). Discussing the proportion of iPhone 5S vs. iPhone 5C sales, here’s what Tim Cook had to say [emphasis mine]:

“Our North American business contracted somewhat year over year. And if you look at the reason for this, one was that as we entered the quarter, and forecasted our iPhone sales, where we achieved what we thought, we actually sold more iPhone 5Ss than we projected.

And so the mix was stronger to the 5S, and it took us some amount of time in order to build the mix that customers were demanding. And as a result, we lost some sort of units for part of the quarter in North America and relative to the world, it took us the bulk of the quarter, almost all the quarter, to get the iPhone 5S into proper supply.

[…]

It was the first time we’d ever run that particular play before, and demand percentage turned out to be different than we thought.”

In plainer English:

“Customers preferred the 5S to the 5C. We were caught short, we didn’t have enough 5Ss to meet the demand and so we missed out on at least 4 million iPhone sales.”

Or, reading between the lines:

“Customers failed to see the crystalline purity of the innovative 5C design and flocked instead to the more derivative — but flattering — 5S.”

Later, Cook concludes the 5S/5C discussion and offers rote congratulations all around:

“I think last quarter we did a tremendous job, particularly given the mix was something very different than we thought.”

… which means:

“Floggings will begin behind closed doors.”

How can a company that’s so precisely managed — and so tuned-in to its customers’ desires — make such a puzzling forecast error? This isn’t like the shortcoming in the December 2012 quarter when Apple couldn’t deliver the iMacs it had announced in October. This is a different kind of mistake, a bad marketing call, a deviation from the Apple game plan.

With previous iPhone releases, Apple stuck to a simple price ladder with $100 intervals. For example, when Apple launched the iPhone 5 in October 2012, US carriers offered the new device for $200 (with a two-year contract), the 2011 iPhone 4S was discounted to $100, and the 2010 iPhone 4 was “free”.

But when the iPhone 5S was unveiled last September, Apple didn’t deploy the 2012 iPhone 5 for $100 less than the new flagship device. Instead, Apple “market engineered” the plastic-clad 5C to take its place. Mostly built of iPhone 5 innards, the colorful 5C was meant to provide differentiation… and it did, but not in ways that helped Apple’s revenue — or their customers’ self-image.

Picture two iPhone users. One has a spanking new iPhone 5S, the other has an iPhone 5 that he bought last year. What do you see? Two smartphone users of equally discerning taste who, at different times, bought the top-of-the-line product. The iPhone 5 user isn’t déclassé, he’s just waiting for the upgrade window to open.

Now, replace the iPhone 5 with an iPhone 5C. We see two iPhones bought at the same time… but the 5C owner went for the cheaper, plastic model.

We might not like to hear psychologists say we build parts of our identity with objects we surround ourselves with, but they’re largely right. From cars, to Burberry garments and accessories, to smartphones, the objects we choose mean something about who we are — or who we want to appear to be.

I often hear people claim they’re not interested in cars, that they just buy “transportation”, but when I look at an office or shopping center parking lot, I don’t see cars that people bought simply because the wheels were round and black. When you park your two-year-old Audi S5 coupe (a vehicle once favored by a very senior Apple exec) next to the new and improved 2014 model, do you feel you’re of a lesser social station? Of course not. You both bought into what experts call the Affordable Luxury category. But your self-assessment would be different if you drove up in a Volkswagen Jetta. It’s made by the same German conglomerate, but now you’re in a different class. (This isn’t to say brand image trumps function. To the contrary, function can kill image; ask Nokia or Detroit.)

The misbegotten iPhone 5C is the Jetta next to the Audi S5 coupé. Both are fine cars, and the 5C is a good smartphone, but customers, in numbers large enough to disrupt Apple’s forecast, didn’t like what the 5C would do to their image.

As always, it’ll be interesting to observe how the company steers out of this marketing mistake.

There is much more to watch in coming months: How Apple and its competitors adapt to a new era of slower growth; how carriers change their behavior (pricing and the all important subsidies) in the new growth mode; and, of course, if and how “new categories” change Apple’s business. On this, one must be cautious and refrain from expecting another iPhone or iPad explosion, with new products yielding tens of billions of dollars in revenue. Fodder for future Monday Notes.

JLG@mondaynote.com

@gassee

 

Mac Pro: Seymour Cray Would Have Approved

 

As we celebrate 30 years of Macintosh struggles and triumphs, let’s start with a semiserious, unscientific comparison between the original 128K Mac and its dark, polished, brooding descendant, the Mac Pro.

[Photos: the original 128K Mac and the new Mac Pro]

The original 128K Mac was 13.6” high, 9.6” wide, and 10.9” deep (35.4 x 24.4 x 26.4 cm), and weighed 16.5 lb (7.5 kg). Today’s Mac Pro is a cylinder 9.9″ tall and 6.6″ in diameter (25 by 17 cm) that weighs 11 lb (5 kg) — smaller, shorter, and lighter than its ancient progenitor. Open your hand and stretch your fingers wide: the distance from the tip of your pinky to the tip of your thumb is in the 9-to-10-inch range (for most males). This gives you an idea of how astonishingly small the Mac Pro is.
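To put numbers on “astonishingly small”, here is a rough volume comparison from the dimensions quoted above (treating the Mac Pro as a perfect cylinder, which is close to its actual shape):

```python
import math

# Original 128K Mac: a box, dimensions in inches
mac_128k = 13.6 * 9.6 * 10.9                # ≈ 1423 cubic inches

# Mac Pro: a cylinder, 9.9" tall, 6.6" in diameter
mac_pro = math.pi * (6.6 / 2) ** 2 * 9.9    # ≈ 339 cubic inches

ratio = mac_128k / mac_pro                  # ≈ 4.2x smaller by volume
```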

At 7 teraflops, the new Pro’s performance specs are impressive…but what’s even more impressive is how all that computing power is stuffed into such a small package without everything melting down. Look inside the new Mac Pro and you’ll find a Xeon processor, twin AMD FirePro graphics engines, main memory, a solid-state “drive”, driven by 450W of maximum electric power… and all cooled by a single fan. The previous Mac Pro version, at only 2 teraflops, needed eight blowers to keep its GPU happy.

The Mac Pro achieves a level of “computing energy density” that Seymour Cray — the master of finding ways to cool high-performance, tightly packaged systems, and a Mac user himself — would have approved of.

(I’ve long been an admirer of Seymour Cray, ever since the introduction of his company’s first commercial supercomputer, the CDC 6600. In the early nineties, I was a Board member and investor at Cray Inc.  My memories of Seymour would fill an entire Monday Note. If you’re familiar with the name but not the supercomputer genius himself, I can recommend the Wikipedia article; it’s quite well-written.)

During Cray’s era of supercomputing — the 1960’s to early 90’s — processors were discrete, built from separate components. All of these building blocks had to be kept as close to each other as possible in order to stay in sync, to stay within the same “time horizon”. (Grace Hopper’s famous “one nanosecond equals a foot of wire” illustration comes to mind.) However, the faster the electronic module is, the more heat it generates, and when components are packed tightly together, it becomes increasingly difficult to pump out enough heat to avoid a meltdown.
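Hopper’s rule of thumb is easy to check (a quick illustration; signals in real wire travel at roughly 0.6-0.9 of light speed, which only tightens Cray’s packaging constraint):

```python
# Distance light travels in one nanosecond
c = 299_792_458                  # speed of light, m/s
ns = 1e-9                        # one nanosecond, in seconds

distance_m = c * ns              # ≈ 0.2998 m
distance_in = distance_m / 0.0254  # ≈ 11.8 inches, about a foot
```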

That’s where Cray’s genius expressed itself. Not only could he plot impossibly tight circuit paths to guarantee the same propagation time for all logic signals, he designed these paths in ways that allowed adequate cooling. He sometimes referred to himself, half-seriously, as a good plumber.

(Seymour once told me he could fold a suit, change of shirt, and underwear in his small Delsey briefcase, and thus speed through airports on the way to a fund raising meeting while his investment bankers struggled with their unwieldy Hartmann garment bags…)

I finally met Seymour in December 1985, while I was head of Apple’s Product Development. The Mac Plus project was essentially done, and the Mac II and Mac SE projects were also on their way (they would launch in 1987). Having attended to the most urgent tasks, we were looking at a more distant horizon, at ways to leap ahead of everyone else in the personal computer field. We concluded we had to design our own CPU chip, a quad-processor (today we’d call it a “four-core chip”). To do this, we needed a computer that could run the design and simulation software for such an ambitious project, a computer of commensurate capabilities; hence our choice of a Cray X-MP, and the visit to Seymour Cray.

For the design of the chip, the plan was to work with AT&T Microelectronics — not the AT&T we know now, but the home of Bell Labs, the birthplace of the transistor, Unix, the C language, cellular telephony and many other inventions. Our decision to create our own CPU wasn’t universally well-received. The harshest critics cast Apple as a “toy company” that had no business designing its own CPU chip. Others understood the idea but felt we vastly underestimated the technical challenges. Unfortunately, they turned out to be right. AT&T Microelectronics ultimately bailed out of the microprocessor business altogether.

(Given this history, I couldn’t help being amused when critics scoffed at Apple’s decision to acquire P.A. Semiconductor in 2008 and, once again, attempt to design its own microprocessors. Even if the chip could be built, Apple could never compete against the well-established experts in the field… and it would cost Apple a billion dollars, either way. The number was wildly off the mark and, knowing Apple’s financials, wouldn’t have mattered anyway. We know what happened: The 64-bit A7 device took the industry by surprise.)

Thirty years after the introduction of the original Mac, the Mac Pro is both different and consistent. It’s not a machine for everyone: If you mostly just use ordinary office productivity apps, an iMac will provide more bang for less buck (which means that, sadly, I don’t qualify as a Mac Pro user). But like the 128K Mac, the Mac Pro is dedicated to our creative side; it serves the folks who produce audio and video content, who run graphics-intensive simulations. As Steve put it so well, the Mac Pro is at the crossroad of technology and liberal arts:

[Image: Steve Jobs’s “technology and liberal arts” crossroads slide]

Still, thirty years later, I find the Mac, Pro or “normal”, every bit as seductive, promising, and occasionally frustrating as its now enshrined progenitor.

As a finishing touch, the Mac Pro, like its ancestor, is designed and assembled in the US.

JLG@mondaynote.com

————————–

Postscript. At the risk of spoiling the fun in the impressive Making the all-new Mac Pro video, I wonder about the contrast between the powerful manufacturing operation depicted in the video and the delivery constipation. When I ordered my iMac in early October 2013, I was promised delivery in 5-7 business days, a strange echo of the December 2012 quarter iMac shipments shortfall. The machine arrived five weeks later without explanation or updated forecast. Let’s hope this was due to higher-than-expected demand, and that Apple’s claim that Mac Pro orders will ship “in March” won’t leave media pros wanting.

Those media assets that are worth nothing

 

The valuation gap between high tech and media companies has never been wider. The erosion of their revenue model might be the main culprit, but management teams, unions, and boards of directors also bear a heavy share of the responsibility.

Two weeks ago, with a transaction that reset the value of printed assets to almost nothing, the French market for newsmagazines collapsed for good. Le Monde acquired 65% of the weekly Le Nouvel Observateur for a mere €13.4m ($18m), at a valuation of €20m ($27m). In fact, thanks to convoluted transaction terms, Le Monde will actually disburse less than €10m for its controlling share.

This number is a hard fact; it confirms the downward spiral of French legacy media values. For a while, rumors have been flying about bids for prominent newsmagazines that would float around €20m. At the same time, Lagardère Groupe (a €7bn media conglomerate based in Paris) put most of its French magazines on the block, saying it would close them down if no buyer showed up. This turned out to be a “good” way to tip off potential bidders: they can now sit and wait for prices to come down as balance sheets continue to deteriorate. This brilliant strategy is attributable to Arnaud Lagardère, the son of Jean-Luc Lagardère, the swashbuckling group founder. The heir is fond of tennis, top models, and embarrassing statements. He once said of himself: “Maybe [he] is incompetent, but not dishonest” — definitely right on the first count. Today, Lagardère Groupe faces a negative value for a large part of its magazine portfolio, meaning it would actually have to pay a buyer willing to take a publication off its hands.

I discussed this situation with financial analysts in Paris and London. They are unforgivingly critical of the causes of this unprecedented value depletion. For a start, newsweeklies paid the price of deteriorating copy sales (roughly -15% for 2013) and of an anemic advertising market. But the real sin, these analysts point out, is the delay in transforming and restructuring the companies. One put it bluntly: “It is clear there won’t be a single euro left for shareholders who didn’t do their job. Today, every acquisition on the French market is first and foremost weighed down by the need for a costly restructuring, which, in addition, will take three or four times longer than in the UK or elsewhere in Europe.”

The case of Le Nouvel Observateur is the perfect example. This iconic magazine of the French social democrats fits the picture of a nursing home where residents don’t do much while waiting for the unavoidable end. A thick layer of journalists there is keen to praise the weekly: “You come in on a Tuesday morning to write your column and by the following Thursday, you’re gone. I don’t complain.” Two insiders told me that one of the events that finally pushed the aging owner of the “Nouvel Obs” to sell was the nixing of a timid management proposal: cutting one week of vacation (out of twelve) to save money. To be fair, a good third of the staff actually works hard to produce the magazine week after week. But a digital transformation — comparable, for instance, to what the Atlantic Media Group undertook in the US — is a dream completely out of reach.

From an investor’s standpoint, buying Le Nouvel Observateur means spending €15m to €20m from the outset, just to realign the company with decent working practices. French laws and collective bargaining agreements do not help. In the case of Le Nouvel Observateur, the change in ownership will trigger a “clause of transfer” entitling every journalist to leave the company with at least one month of salary per year of employment (raised to 120% of the monthly wage beyond 15 years). For the upper layer of the newsroom, who will find their working habits incompatible with a probable productivity realignment, this could be a once-in-a-lifetime opportunity to cash in on their long and tranquil tenure… at a cost of several million euros for the new owner. The same goes for mandatory buyouts, the customary way to push out people no longer needed. (What is Le Monde buying, you might ask? Basically a base of 500,000 subscribers, a better bargaining position in the advertising market, plus a dose of vanity…)

Again, from an investor’s perspective, being forced to spend €15m-20m before allocating the first cent to a transformative investment is a severe deterrent. This mechanism also threatens daily newspapers such as Libération (another icon of the French left, where I spent 12 years of my career). Isolated, stuck with a single product, facing a 35% decline in paid circulation last year, a weak advertising base, a discredited management (in a recent internal vote, 90% of the staff expressed mistrust of the bosses), and a negative P&L despite €12m in State subsidies, this company faces certain death unless it radically transforms itself. Its only way to survive might be to forgo the costly daily print edition, move to a well-crafted weekly distributed in selected urban areas, and extend it with realtime digital coverage on web, mobile, and tablet. But such a move would mean yet another downsizing, along with heavy costs. No one is willing to be dragged into such a “social Vietnam”, as one of my interlocutors put it.

Those who advise potential buyers are quick to point out that, if the goal is to take a position in the digital world, their money would be better spent building a pure player from the ground up. With €20m to €40m, you can definitely build something powerful in the journalistic field.

The highly publicized startup culture — some would say “ideology” — with its unparalleled mixture of agility and skyrocketing valuations, contributes to the demise of legacy media. Consider the table below. It shows the gap between the per-user valuations of social networks and those of legacy media:

[Table: per-user valuations, social networks vs. legacy media]

For what it’s worth, this comparison illustrates the tremendous loss in value for legacy media. Several of them actually make (slim) profits, while digital companies such as Pinterest or Snapchat don’t even have a revenue model. But as unfair as it sounds, investors — venture capital firms, Wall Street, high tech giants — are betting on two factors: the scalability of current user bases (with 10x or 20x multiples being the norm) and the ability of digital players to swiftly adjust to quickly changing environments. Two qualities unfortunately not associated with legacy media.

frederic.filloux@mondaynote.com