Growing Forces in Mobile

 

As seen last week in Barcelona, the mobile industry is red hot. The media sector will have to work harder to capture its share of that growth.

The 2013 edition of the Mobile World Congress held last week in Barcelona was as large as the biggest auto show in the world: 1,500 exhibitors and a crowd of 72,000 attendees from 200 countries. The mobile industry is roaring like never before. But the news media industry lags and will have to fight hard to stay in the game. Astonishingly, only two media companies deigned to show up: Pearson, with its huge education business accounting for 75% of its 2012 revenue (vs. 7% for its Financial Times unit), and Agence France-Presse, which is entering the customized application market. No other big media brand in sight, and no trade organizations either. Apparently, the information sector is about to miss the mobile train.

Let’s begin with data that piqued my interest, from A.T. Kearney surveys for the GSM Association.

Individual mobile subscribers: In 2012, the worldwide number of mobile subscribers reached 3.2 billion; a billion subscribers were added in the last four years. While the world population is expected to grow by just 1.1% per year between 2008 and 2017, the mobile sector enjoyed an 8.3% CAGR (Compound Annual Growth Rate) over the 2008-2012 period. For the 2012-2017 interval, the expected CAGR is 4.2%. The 4 billion subscriber mark will be passed in 2018. By that time, 80% of the global population will be connected via a mobile device.

The rise of the machines. When machine-to-machine (M2M) connections are taken into account, growth becomes even more spectacular: In 2012, there were 6.8 billion active SIM cards, 3% of them being M2M connections. In 2017, there will be 9.7 billion active SIM cards and the share of M2M connections will account for 13% with almost 1.3 billion devices talking to each other.
The Asia-Pacific region will account for half of the connection growth, both for individual subscriptions and M2M.

We’ll now turn to stats that could benefit the media industry.

Mobile growth will be mostly driven by data usage. In 2012, the volume of data exchanged through mobile devices amounted to 0.9 exabytes per month (1 exabyte = 1 billion gigabytes), which is more than all the preceding years combined! By 2017, it is expected to reach 11.2 exabytes per month, a 66% CAGR!
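
These growth rates are easy to sanity-check. Here is a minimal Python sketch, using the standard CAGR definition and only the figures quoted above:

```python
# CAGR (Compound Annual Growth Rate): (end / start) ** (1 / years) - 1
def cagr(start: float, end: float, years: float) -> float:
    return (end / start) ** (1 / years) - 1

def project(start: float, rate: float, years: float) -> float:
    return start * (1 + rate) ** years

# Subscribers: 3.2bn in 2012, growing at the expected 4.2% CAGR
print(f"2017 subscribers: {project(3.2, 0.042, 5):.2f}bn")  # ~3.93bn, hence 4bn in 2018

# Mobile data: 0.9 exabytes/month in 2012 vs. 11.2 expected in 2017
print(f"2012-2017 data CAGR: {cagr(0.9, 11.2, 5):.0%}")     # ~66%
```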

A large part of this volume will come from the deployment of 4G (LTE) networks. Between now and 2017, deploying LTE technology will result in a 4X increase in connection speeds.

For the 2012 – 2017 period, bandwidth distribution is expected to grow as follows:

- M2M: +89%
- Video: +75%
- Gaming: +62%
- Other data: +55%
- File sharing: +34%
- VoIP: +34%

Obviously, the huge growth of video streaming (+75%) points to a great opportunity for the media industry, as users will tend to watch news capsules on the go the same way they look at mobile web sites or apps today (these two will be part of the 55% annual growth in other data).

Growing social usage on mobile will also be an issue for news media. Here are today’s key figures for active mobile users:

- Facebook: 680m
- Twitter: 120m
- LinkedIn: 46m
- Foursquare: 30m

Still, as important as it is, social usage accounts for only 17 minutes per day, vs. 25 minutes for internet browsing and a mere 12 minutes for voice calls. Most likely, the growth of video will impact the use of social networks as Facebook collects more and more videos uploaded directly from smartphones.

A large part of this growth will be driven by the rise of inexpensive smartphones. Last week in Barcelona, the largest stand was obviously Samsung’s. But huge crowds also gathered around Huawei and ZTE, which showed sophisticated Android-powered smartphones — at much lower prices. This came as a surprise to many westerners like me who don’t have access to these Chinese devices. And for emerging markets, Mozilla is coming with Firefox OS, an HTML5-based operating system that looked surprisingly good.

In years to come, the growing number of operating systems, screen sizes and features will be a challenge. (At the MWC, the trend was definitely in favor of large screens; read this story in Engadget.) An entire hall was devoted to applications — and software aimed at producing apps in a more standardized, economical fashion. As a result, we might see three approaches to delivering content on mobile:
- The simplest way will be mobile sites based on HTML5 and responsive design; more features will be embedded in web applications.
- The second stage will consist of semi-native apps, quickly produced using standardized tools, allowing fast updates and adaptations to a broad range of devices.
- The third way will involve expensive deep-coded native apps aimed at supporting sophisticated graphics; they will mainly be deployed by the gaming industry.

In upcoming Monday Notes, we will address two major mobile industry trends not tied to the media industry: Connected Living (home-car-city), a sector likely to account for most machine-to-machine use; and digital education, which takes advantage of a happy combination of more affordable handsets and better bandwidth.

frederic.filloux@mondaynote.com

Google’s Red Guide to the Android App Store

 

As they approach the one million apps mark, smartphone and tablet app stores leave users stranded in thick, uncharted forests. What are Google and Apple waiting for?

Last week, Google made the following announcement:

Mountain View, February 24th, 2013 — As part of an industry that owes so much to Steve Jobs, we remember him on this day, the 58th anniversary of his birth, with great sadness but also with gratitude. Of Steve’s many achievements, we particularly want to celebrate the Apple App Store, the venerable purveyor of iPhone software. 

Introduced in 2008, the App Store was an obvious and natural descendant of iTunes. What wasn’t obvious or foreseen was that the App Store would act as a catalyst for an entire market segment, that it would metamorphose the iPhone from mere smartphone to app phone. This metamorphosis provided an enormous boost to the mobile industry worldwide, a boost that has benefitted us all and Google more than most.

But despite the success of the app phone there’s no question that today’s mobile application stores, our own Google Play included, are poorly curated. No one seems to be in charge, there’s no responsibility for reviewing and grading apps, there’s no explanation of the criteria that go into the “Editors’ Picks”, and app categorization is skin deep and chaotic.

Today, we want to correct this fault and, at the same time, pay homage to Steve’s elegant idea by announcing a new service: The Google Play Red Guide. Powered by Google’s human and computer resources, the Red Guide will help customers identify the trees as they wander through the forest of Android apps. The Red Guide will provide a new level of usefulness and fun for users — and will increase the revenue opportunities for application developers.

With the Google Play Red Guide, we’ll bring an end to the era of the uncharted, undocumented, and poorly policed mobile app store.

The Red Guide takes its name from another great high-tech company, Michelin. At the turn of the 20th century, Michelin saw it needed to promote automotive travel in order to stimulate tire sales. It researched, designed and published great maps, something we can all relate to. To further encourage travel, Michelin published Le Guide Rouge, a compendium of hotels and restaurants. A hundred years later, the Michelin Red Guide is still considered the world’s standard; its inspectors are anonymous and thus incorruptible, their opinions taken seriously. Even a single star award (out of three) can put an otherwise unknown restaurant on the map — literally.

Our Red Guide will comprise the following:

- “Hello, World”, a list of indispensable apps for the first-time Android customer (or iPhone apostate), with tips, How-To guides, and FAQs.
- “Hot and Not”. Reviews of new apps and upgrades — and the occasional downgrade.
- “In Our Opinion”. This is the heart of the Guide, a catalogue of reviews written by a select group of Google Play staff who have hot line access to Google’s huge population of in-house subject matter experts. The reviews will be grouped into sections: Productivity, e-Learning, Games, Arts & Creativity, Communication, Food & Beverage, Healthcare, Spirituality, Travel, Entertainment, Civics & Philanthropy, Google Glass, with subcategories for each.

Our own involvement in reviewing Android apps is a novel — perhaps even a controversial — approach, but it’s much needed. We could have taken the easy path: Let users and third-parties provide the reviews. But third party motives are sometimes questionable, their resources quickly exhausted. And with the Android Store inventory rapidly approaching a million titles, our users deserve a trustworthy guide, a consistent voice to lead them to the app that fits.

We created the Red Guide because we care about our Android users, we want them to “play safe” and be productive, and we feel there’s no better judge of whether an application will degrade your phone’s performance or do what it claims than the people who created and maintain the Android framework. For developers, we’re now in a position to move from a jungle to a well-tended garden where the best work will be recognized, and the not-so-great creations will be encouraged to raise their game.

We spent a great deal of time at Google identifying exactly the right person to oversee this delicate proposition…and now we can reveal the real reason why Google’s Motorola division hired noted Macintosh evangelist, auteur, and investor Guy Kawasaki as an advisor: Guy will act as the Editor in Chief of the Google Play Red Guide.

With Guy at the helm, you can expect the same monkish dedication and unlimited resources we deployed when we created Google Maps.

As we welcome everyone to the Google Play Red Guide, we again thank Steve Jobs for his leadership and inspiration. Our algorithms tell us he would have approved.

The Red Guide is an open product and will be published on the Web at AppStoreRedguide.com as well as in e-book formats (iBookstore and Kindle formats pending approval) for open multi-platform enjoyment.
——– 

No need to belabor the obvious: you’ve already figured out that this is all a fiction. Google is no better than Apple when it comes to its mobile application store. Both companies let users and developers fend for themselves, lost in a thick forest of apps.

That neither company seems to care about their online stores’ customers makes no sense: Smartphone users download more apps than songs and videos combined, and the trend isn’t slowing. According to MobiThinking:

IDC predicts that global downloads will reach 76.9 billion in 2014 and will be worth US$35 billion.

Unfortunately, Apple appears to be resting on its laurels, basking in its great App Store numbers: 40 billion served, $8B paid to developers. Perhaps the reasoning goes like this: iTunes served the iPod well; the App Store can do the same for the iPhone. It ain’t broke; no fix needed.

But serving up music and movies — satisfying the user’s established taste with self-contained morsels of entertainment — is considerably different from leading the user to the right tool for a job that may be only vaguely defined.

Apple’s App Store numbers are impressive… but what would these numbers look like if someone else, Google for example, showed the kind of curation leadership Apple fails to assert?

JLG@mondaynote.com

Google News: The Secret Sauce

 

A closer look at Google’s patent for its news retrieval algorithm reveals a greater than expected emphasis on quality over quantity. Can this bias remain reliable over time?

Ten years after its launch, Google News’ raw numbers are staggering: 50,000 sources scanned, 72 editions in 30 languages. Google’s crippled communication machine, plagued by bureaucracy and paranoia, has never been able to come up with tangible facts about its benefits for the news media it feeds on. Its official blog merely mentions “6 billion visits per month” sent to news sites, and Google News claims to connect “1 billion unique users a week to news content” (to put things in perspective, NYT.com or the Huffington Post cruise at about 40 million UVs per month). Assuming the clicks are sent to a relatively fresh news page bearing higher-value advertising, the six billion visits can translate into about $400 million per year in ad revenue. (This is based on a $5 to $6 revenue per 1,000 pages, i.e. a few dollars in CPM per single ad, depending on format, type of selling, etc.) That’s a very rough estimate. Again: Google should settle the matter and come up with accurate figures for its largest markets. (On the same subject, see a previous Monday Note: The press, Google, its algorithm, their scale.)
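
The back-of-the-envelope math is simple enough to spell out. A quick sketch, assuming (as the text does) roughly one ad-bearing page per visit:

```python
# 6 billion visits per month, at $5-$6 of revenue per 1,000 pages
visits_per_month = 6e9
for rpm in (5.0, 6.0):  # revenue per 1,000 pages, in dollars
    per_year = visits_per_month * (rpm / 1000) * 12
    print(f"${rpm:.0f} per 1,000 pages -> ${per_year / 1e6:.0f}M per year")
# $360M to $432M per year, hence the ~$400 million ballpark
```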

But how exactly does Google News work? What kind of media does its algorithm favor most? Last week, the search giant updated its patent filing with a new document detailing the thirteen metrics it uses to retrieve and rank articles and sources for its news service. (Computerworld unearthed the filing; it’s here.)

What follows is a summary of those metrics, listed in the order shown in the patent filing, along with a subjective appreciation of their reliability, vulnerability to cheating, relevancy, etc.

#1. Volume of production from a news source:

A first metric in determining the quality of a news source may include the number of articles produced by the news source during a given time period [week or month]. [This metric] may be determined by counting the number of non-duplicate articles produced by the news source over the time period [or] counting the number of original sentences produced by the news source.

This metric clearly favors production capacity. It benefits big media companies deploying large staffs. But the system can also be cheated by content farms (Google already addressed these questions); new automated content creation systems are gaining traction, and many of them could now easily pass the Turing Test.
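
To make the metric concrete, here is a toy sketch of the sentence-level variant; the exact de-duplication procedure is not spelled out in the filing, so this is only an illustration:

```python
def original_sentence_count(articles: list[str], seen: set[str]) -> int:
    """Count sentences produced by a source that were not seen anywhere before."""
    count = 0
    for article in articles:
        for sentence in article.split(". "):
            normalized = sentence.strip().lower()
            if normalized and normalized not in seen:
                seen.add(normalized)
                count += 1
    return count

seen: set[str] = set()
print(original_sentence_count(["The deal was signed. Sources confirmed it."], seen))  # 2
print(original_sentence_count(["The deal was signed. A new detail emerged."], seen))  # 1
```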

#2. Length of articles. Plain and simple: the longer the story (on average), the higher the source ranks. This is bad news for aggregators whose digital serfs cut, paste, compile and mangle abstracts of news stories that real media outlets produce at great expense.

#3. “The importance of coverage by the news source”. To put it another way, this matches the volume of coverage by the news source against the general volume of text generated by a topic. Again, it rewards large resource allocation to a given event. (In New York Times parlance, such an effort is called “flooding the zone”.)

#4. The “Breaking News Score”:   

This metric may measure the ability of the news source to publish a story soon after an important event has occurred. This metric may average the “breaking score” of each non-duplicate article from the news source, where, for example, the breaking score is a number that is a high value if the article was published soon after the news event happened and a low value if the article was published after much time had elapsed since the news story broke.

Beware, slow-moving newsrooms: On this metric, you’ll be competing against more agile, maybe less scrupulous staffs that “publish first, verify later”. This requires smart arbitrage by news producers. Once the first headline has been pushed, they’ll have to decide what’s best: immediately filing a follow-up, or waiting a bit and moving a longer, more value-added story that will rank better on metrics #2 and #3? It depends on elements such as the size of the “cluster” (the number of stories pertaining to a given event).
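
The patent only says the score is high right after the event and low long after; the exponential decay below is my own assumption, purely to make the mechanism concrete:

```python
# Hypothetical breaking score: 1.0 at the instant a story breaks,
# halving every two hours (the decay curve is an assumption).
def breaking_score(delay_hours: float, half_life_hours: float = 2.0) -> float:
    return 0.5 ** (delay_hours / half_life_hours)

# A source's score averages over its non-duplicate articles:
def source_score(delays_hours: list[float]) -> float:
    return sum(breaking_score(d) for d in delays_hours) / len(delays_hours)

print(f"{source_score([0.5, 1.0]):.2f}")  # agile newsroom: ~0.77
print(f"{source_score([6.0, 9.0]):.2f}")  # slow newsroom: ~0.08
```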

#5. Usage Patterns:

Links going from the news search engine’s web page to individual articles may be monitored for usage (e.g., clicks). News sources that are selected often are detected and a value proportional to observed usage is assigned. Well known sites, such as CNN, tend to be preferred to less popular sites (…). The traffic measured may be normalized by the number of opportunities readers had of visiting the link to avoid biasing the measure due to the ranking preferences of the news search engine.

This metric is at the core of Google’s business: assessing the popularity of a website thanks to the various PageRank components, including the number of links that point to it.

#6. The “Human opinion of the news source”:

Users in general may be polled to identify the newspapers (or magazines) that the users enjoy reading (or have visited). Alternatively or in addition, users of the news search engine may be polled to determine the news web sites that the users enjoy visiting. 

Here, things get interesting. Google clearly states it will use third-party surveys to detect the public’s preference among various media — not only their websites, but also their “historic” media assets. According to the patent filing, the evaluation could also include the number of Pulitzer Prizes the organization has collected and the age of the publication. That’s for the known part. What lies behind the notion of “human opinion” is a true “quality index” for news sources that is not necessarily correlated to their digital presence. Such factors clearly favor legacy media.

#7. Audience and traffic. Not surprisingly, Google relies on stats coming from Nielsen NetRatings and the like.

#8. Staff size. The bigger a newsroom is (as detected in bylines), the higher the value will be. This metric has the merit of rewarding large investments in news gathering. But it might become more imprecise as “large” digital newsrooms now tend to be staffed with news repackagers bearing little added value.

#9. Number of news bureaus. It’s another way to favor large organizations — even though their footprint tends to shrink both nationally and abroad.

#10. Number of “original named entities”. This is one of the most interesting metrics. A “named entity” is the name of a person, place or organization. It’s the primary tool for semantic analysis.

If a news source generates a news story that contains a named entity that other articles within the same cluster (hence on the same topic) do not contain, this may be an indication that the news source is capable of original reporting.

Of course, some cheaters insert misspelled entities to create “false” original entities and fool the system (Google took care of it). But this metric is a good way to reward original source-finding.
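
A toy sketch of the idea, reusing this article’s fictional sources; the entity extraction itself (NER) is assumed to happen elsewhere:

```python
def original_entities(article: set[str], rest_of_cluster: list[set[str]]) -> set[str]:
    """Named entities this article alone contributes to its topic cluster."""
    others: set[str] = set().union(*rest_of_cluster) if rest_of_cluster else set()
    return article - others

scoop = original_entities(
    {"John Smith", "Kalamazoo", "Rand Corporation"},
    [{"John Smith"}, {"John Smith", "Kalamazoo"}],
)
print(scoop)  # {'Rand Corporation'} -- a hint of original reporting
```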

#11. The “breadth” of the news source. It pertains to the ability of a news organization to cover a wide range of topics.

#12. The global reach of the news source. Again, it favors large media that are viewed, linked, quoted, “liked”, and tweeted from abroad.

This metric may measure the number of countries from which the news site receives network traffic. In one implementation consistent with the principles of the invention, this metric may be measured by considering the countries from which known visitors to the news web site are coming (e.g., based at least in part on the Internet Protocol (IP) addresses of those users that click on the links from the search site to articles by the news source being measured). The corresponding IP addresses may be mapped to the originating countries based on a table of known IP block to country mappings.

#13. Writing style. In the Google world, this means statistical analysis of content against a huge language model to assess “spelling correctness, grammar and reading levels”.
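
Google’s language-model analysis is obviously proprietary. As a crude stand-in, the classic Flesch-Kincaid grade-level formula gives the flavor of a “reading level” computation; the syllable counter below is a rough approximation:

```python
import re

def syllables(word: str) -> int:
    # Approximate syllables as groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def grade_level(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllable_count = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllable_count / len(words) - 15.59

print(round(grade_level("Only time will tell. The study was inconclusive."), 1))
```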

What conclusions can we draw? This enumeration clearly shows Google intends to favor legacy media (print or broadcast news) over pure players, aggregators, or digital native organizations. All the features recently added, such as Editors’ Picks, reinforce this bias. The reason might be that legacy media are less prone to tricking the algorithm. For once, a known technological weakness becomes an advantage.

frederic.filloux@mondaynote.com

iPad and File Systems: Failure of Empathy

 

The iPad placed a clear bet on simplicity — and was criticized for it. The bet won. But now, can the iPad evolve toward more business applications without sacrificing its simplicity, without becoming a “fridge-toaster”?

Three years ago, the iPad came out. The device was an immediate hit with customers and (most) critics. Steve Jobs’ latest — and, unfortunately, last — creation truly deserved the oft-abused game changer moniker.

But, as always, there were grumblings up in the cheap seats. As Mike Monteiro, co-founder of Mule Design observed:

“Following along on Twitter I was seeing things like ‘underwhelming’, ‘meh’ , ‘it’s not open’, ‘it’s just a big iPhone’, etc. And most of this stuff was coming from people who design and build interactive experiences.”

Monteiro penned a sharp, relevant response to the naysayers. Titled “The Failure of Empathy“, his post is summarized by this picture:

A generation ago, geeks were the arbiters of taste in the world of personal computing. Programmers, designers, hobbyists and tinkerers…these were the inhabitants of “user space”, and we built computers with them in mind. By designing the Apple ][ for himself (and his fellow travelers) Steve Wozniak hit the bull’s eye of a large, untapped target.

Today, geeks are but a smallish subset of computer users. Their (typically exaggerated) negative comments may have some sting if you’re responsible for engineering the “brain dead” backing store for a windowing system, but in the real world, no one cares about “byte sex” or “loop unrolling”. What counts is how non-technical users think, feel, and respond. Again, from Monteiro’s post:

“As an industry, we need to understand that not wanting root access doesn’t make you stupid. It simply means you do not want root access. Failing to comprehend this is not only a failure of empathy, but a failure of service.”

This was written in February 2010; I doubt that anyone at the time thought the iPad would ascend to such heights so quickly: 65.7M sold in 2012, 121M since the 2010 debut, rising even faster than the iPhone.

This is all well and good, but with success comes side effects. As the iPad gets used in ways its progenitors didn’t anticipate, another failure of empathy looms: Ignoring the needs of people who want to perform “complicated” tasks on their iPads.

When the iPad was introduced, even the most obliging reviewers saw the device as a vehicle for consumption, not creation. David Pogue in the New York Times:

“…the iPad is not a laptop. It’s not nearly as good for creating stuff. On the other hand, it’s infinitely more convenient for consuming it — books, music, video, photos, Web, e-mail and so on.”

This is still true…but that hasn’t stopped users from trying — struggling — to use their iPads for more ambitious tasks: Building rich media presentations and product brochures, preparing course material, even running a business. Conventional wisdom tells us that these are tasks that fall into the province of “true” personal computers, but these driven users can’t help themselves: they want to do it all on their iPads. They want the best of both worlds: The power of a PC but without its size, weight, (relative) unresponsiveness, and, certainly, price.

The evidence is all around us. Look at how many people in cafés, offices and airport lounges use a keyboard with their iPad, such as this Origami combo:

Or the Logitech Keyboard Cover:

Both keyboards are prominently displayed in the Apple Store. We’ll assume that shelf space isn’t doled out by lottery (or philanthropically), so these devices must be selling briskly.

Of course, this could just be anecdotal evidence. What isn’t anecdotal is that Apple itself claims that the iPad has penetrated a large proportion of Fortune 500 companies. In some of its stores, the company conducts sessions to promote the use of iPads in business applications.

I attended one such gathering last year. There was a very basic demonstration of Keynote, the iPad’s presentation app, plus the testimony of a happy customer who described the usefulness of the iPad in sales situations. All quite pleasant, but the Q&A session that followed was brutal and embarrassing: How do you compose real-world, mixed-document presentations? No real answer. Why can’t the iPad access all the documents — not just iWork files — that I dropped into iCloud from my Mac? No answer there, either.

This brings us to a major iPad obstacle: On a “real” PC the file system is visible and accessible; on the iPad, it’s hidden. The act of creating, arranging, and accessing files on a PC is trivial and natural. We know how to use the Finder on the Mac and Explorer on Windows. We’re not perplexed by folder hierarchies: The MyGreatNovel folder might contain a lengthy set of “MGN-1”, “MGN-2”, “MGN-3” drafts, as well as subfolders such as ArtWork, Reference, and RejectionLetters, each of which contains further subfolder refinements (RejectedByGrove, RejectedByPenguin, RejectedByRandomHouse…).

On an iPad you don’t navigate a file system. Instead, you launch an app that has its own trove of documents that it understands — but it can’t “see” anything else.

For example: Keynote doesn’t let you see the graphics, videos, and PDFs that you want to assemble into your presentation. Unlike on the Mac, there’s no Finder, no place where you can see “everything” at one glance. Even more important, there’s no natural way to combine heterogeneous documents into one.

On the other hand, we all know users who love the iPad for its simplicity. They can download and play music, read books, respond to email and tweets, view photos, and stream movies without having to navigate a file hierarchy. For them, the notion of a “file system” is neither natural nor trivial — it’s foreign and geeky. Why throw them into a maze of folders and files?

Apple’s decision to hide the iOS file system from iPad (and iPhone) users comforts the non-geek and is consistent with Steve Jobs’ idea that applications such as Mail, iTunes, iPhoto, iCal, and Contacts shouldn’t reveal their files and folders. Under the hood, the application stores its data in the Mac’s file system but, on the surface, the user sees appointments, photo albums and events, mailboxes and messages.

Still, some of us see this as the storage equivalent of Seinfeld’s Soup Nazi: No File System For You!

App developers and customers keep trying. iOS apps such as GoodReader and File Manager Pro valiantly attempt to work around the iPad strictures. PhoneView will expose and manipulate your iPad’s file system (not recommended). But success with any of these apps is limited and comes at a price: The iPad’s simplicity and fluidity are long gone by the time you achieve the desired result, the multimedia brochure or HR tutorial.

This places Apple at a fork in the road. On the left is the current path: more/better/lighter/faster of the same. Only evolutionary changes to the simple and successful worldview. This is today’s trajectory, validated by history (think of the evolution of the MacBook) and strong revenue numbers.

On the right, Apple could transform the iPad so that power users can see and combine data in ways that are impossible today. This could attract business customers who are hesitant about taking the plunge into the world of tablets, or who may be considering alternatives such as Microsoft’s PC/tablet combo or Android devices with Google services.

The easiest decision is no decision. Let’s have two user interfaces, two modes: an Easy mode for my Mother-In-Law, and a Pro mode for engineers, McKinsey consultants, and investment bankers. Such dual-mode systems haven’t been very popular so far; the approach has been tried without success on PCs and Macs. (Re-reading this, I realize the Mac itself could be considered such a dual-mode machine: Fire up the Terminal app and you have access to a certified Unix engine living inside…)

The drive to “pervert” the iPad is unmistakable. I think it will prove irresistible in the end. But I have trouble forming a coherent picture of an evolution that would let Apple open the iPad to more demanding users — without sacrificing its great simplicity and falling into the fridge + toaster trap.
It’s a delicate balancing act.

JLG@mondaynote.com

 

The Need for a Digital “New Journalism”

 

The survival of quality news calls for a new approach to writing and reporting. Inspiration could come from blogging and magazine storytelling, and also bring back memories of the ’70s New Journalism movement.

News reporting is aging badly. Legacy newsrooms’ style books look stuck in a last-century formalism (I was tempted to write “formalin”). Take a newspaper, print or online: When it comes to news reporting, you see the same old structure dating back to the Fifties or even earlier. For the reporter, there is the same (affected) posture of effacing his/her personality behind facts, and a stiff structure based on a string of carefully arranged paragraphs, color elements, quotes, etc.

I hate useless quotes. Most often, for journalists, such quotes are the equivalent of the time-card hourly workers have to punch. To their editor, the message is ‘Hey, I did my job; I called x, y, z’; and to the reader, ‘Look, I’m humbly putting my personality, my point of view behind facts as stated by these people’ — people picked by the reporter, which is the primary (and unavoidable) way to twist a story. The result becomes borderline ridiculous when, after a lengthy exposé in the reporter’s voice to compress the sources’ convoluted thoughts, the line of reasoning concludes with a critical validation such as:

“Only time will tell”, said John Smith, director of social studies at the University of Kalamazoo, consultant for the Rand Corporation, and author of “The Cognitive Deficit of Hyperactive Chimpanzees”.

I’m barely making this up. Each time I open a carbon-based newspaper (or read its online version), I’m struck by how old-fashioned news writing remains. Unbeknownst to the masthead (i.e. top editorial decision-makers) of legacy media, things have changed. Readers no longer demand validating quotes that weigh the narrative down. They want to be taken from A to B, with the best possible arguments, and no distraction or wasted time.

Several factors dictate an urgent evolution in the way newspapers are written.

1/ Readers’ Time Budget. People are deluged with things to read. It begins at 7:00 in the morning and ends late into the night. The combination of professional content (mail, reports, PowerPoint presentations) and social networking feeds has put traditional, value-added content (news, books) under great pressure. Multiple devices, and the variable level of attention each of them entails, create more complications: a publishing house can’t provide the same content for a smartphone screen read in a cramped subway as for a tablet used in lean-back mode at home. More than ever, the publisher is expected to clearly arbitrate between content that is to be provided in concise form and content that justifies a long, elaborate narrative. The same applies to linking and multi-layer constructs: reading a story that opens several browser tabs on a 22-inch screen is pleasant — and completely irrelevant for quick lunchtime mobile reading.

2/ Trust factor / The contract with the Brand. When I pick up The New York Times, The Guardian, or a major French newspaper, this act materializes my trust (and hope) in the professionalism associated with the brand. In a more granular way, it works the same for the writer. Some are notoriously sloppy, biased, or agenda-driven; others are so good that they become brands by themselves. My point: When I read a byline I trust, I assume the reporter has performed the required legwork — that is, collecting five or ten times the amount of information s/he will use in the end product. I don’t need the reporting to be proven or validated by an editing construct that harks back to the previous century. Quotes will be used only for the relevant opinion of a source, or to make a salient point, not as a feeble attempt to prove professionalism or fairness.

3/ Competition from the inside. Strangely enough, newspapers have created their own gauge to measure their obsolescence. By encouraging their writing staff to blog, they unleashed new, more personal, more… modern writing practices. Fact is, many journalists became more interesting on their own blogs than in their dedicated newspaper or magazine sections. Again, this trend escaped many editors and publishers who consider blogging a secondary genre, one that can be put outside a paywall, for instance. (This results in a double whammy: not only does the paper fail to cash in on blogs, it also frustrates paying subscribers.)

4/ The influence of magazine writing. Much better than newspapers, magazines have always done a good job of capturing readers’ preferences. They’ve always been ahead in market research, graphic design, and concept and writing evolution. (This observation also applies to the weekend magazines operated by large dailies.) As an example, magazine writers have been quick to adopt first-person accounts that rejuvenated journalism and allowed powerful narratives. In many newspapers, authors and their editors still resist this.

Digital media needs to invent its own journalistic genres. (Note the plural, dictated by the multiplicity of usages and vectors.) The web and its mobile offspring are calling for their own New Journalism, comparable to the one that blossomed in the Seventies. While the blogosphere has yet to find its Tom Wolfe, the newspaper industry still has a critical role to play: It could be at the forefront of this essential evolution in journalism. Failure to do so will only accelerate its decline.

frederic.filloux@mondaynote.com

The Next Apple TV: iWatch

 

Rumors don’t actual Apple products make: see the perennial Apple TV set — and the latest iWatch rumors. This is an opportunity to step back, look at Apple’s one and only love, personal computers, and use this thought to sift through the rumors.

Every week brings new rumors of soon-to-be-released Apple products. The mythical Apple TV set is always a favorite: Gossip of an Apple buyout of troubled TV maker Loewe has sent the German company’s stock soaring. We also hear of a radio streaming service that will challenge Pandora and Spotify, and there’s the usual gaggle of iPhone, iPad, and Mac variations. More interesting is the racket surrounding Apple’s “stealth” projects: an iWatch and other wearable devices (and “racket” is the right word — see these intimations of stock manipulation).

There is a way to see through the dust, to bring some clarity, to organize our thoughts when considering what Apple might actually do, why the company would (or wouldn’t) do it, and how a rumored product would fit into the game plan.

The formula is simple: Apple engineers may wax poetic about the crystalline purity of the software architecture, execs take pride in the manufacturing chain and distribution channels (and rightly so), marketing can point to the Apple Customer Experience (when they’re not pitching regrettable Genius ads or an ill-timed campaign featuring Venus and Serena Williams). But what really floats their boats, what hardens Apple’s resolve, is designing, making, and selling large numbers of personal computers, from the traditional desktop/laptop Mac, to the genre-validating iPad, and on to the iPhone — the Very Personal Computer. Everything else is an ingredient, a booster, a means to the noblest end.

Look at Apple’s report to its owners: there’s only one Profit and Loss (P&L) statement for the entire $200B business. Unlike Microsoft or HP, for example, there is no P&L by division. As Tim Cook put it:

We manage the company at the top and just have one P&L and don’t worry about the iCloud team making money and the Siri team making money…we don’t do that–we don’t believe in that…

Apple’s appreciation for the importance and great economic potential of personal computers — which were invented to act as dumb servants to help us with data storage, text manipulation, and math operations — may have been, at first, more instinctual than reasoned. But it doesn’t matter; the company’s monomania, its collective passion, is undeniable. More than any other company, Apple has made computers personal, machines we can lift with our hands and our credit cards.

With these personal computer glasses on, we see a bit more clearly.

For example: Is Apple a media distribution company? Take a look at Apple’s latest 10-Q SEC filing, especially the Management Discussion and Analysis (MD&A) section starting page 21. iTunes, now reported separately, clocked $3.7B for the last quarter of 2012.  Elsewhere, Horace Dediu sees $13.5B for the entire year. A big number indeed, and, certainly, iTunes is a key to Apple’s success: Without iTunes there would have been no iPod, Apple’s “halo product“, proof that the company could come up with a winner.  Later, iTunes begat the App Store, a service that solidified the App Phone genre.

Some misguided analysts look at the numbers and argue that Apple ought to spin off iTunes. They use the old “shareholder value” gambit, but the “value” simply isn’t there: Horace Dediu puts iTunes margins in the 15% region, well below Apple’s overall 38%. iTunes is a hugely important means to the personal computer end, but it’s not a separate business.

How about Apple as a retail company? The success of the Apple Store is stellar, a word that’s almost too weak: The Apple Stores welcomed three times more visitors than all of the Disney parks, and generated more than $20B in revenue last year — that works out to an astonishing $6,000 per square foot, twice as much as the #2 shop (Tiffany and Co.). But Apple’s 400 stores aren’t a business; they only exist to create an experience that will lead to more sales, enhanced customer satisfaction and, as a consequence, increased margins.

Apple as a software company? No. The raison d’être for OS X, iOS, iWork, and even Garage Band is to breathe life into Apple hardware. By now, the calls for Apple to see the error of its ways, to not repeat the original sin of not licensing Mac OS, to sell iOS licenses to all comers have (almost) died.
During my first visit to Apple’s hypergalactic headquarters and warehouse in February 1981, I was astonished at the sight of forklifts moving pallets of Apple ][ software. The term “ecosystem” wasn’t part of the industry lingo yet, but I had witnessed the birth of the notion.
Apple had a much harder time building a similarly rich set of applications for the Macintosh, but the lesson was eventually learned, partly due to the NeXT acquisition and the adoption of object oriented programming. We now have a multi-dimensional macrocosm — a true ecosystem — in which our various forms of personal computing work together, share data, media, services.

Where does the current Apple TV device (the black puck, not the mythical TV set) fit into this scheme? Apple TV runs on a version of iOS, and it knows how to communicate with a Bluetooth keyboard — but that doesn’t mean the device is a personal computer. Perhaps Apple will (someday) provide a TV Software Development Kit (SDK) so developers can adapt existing iOS apps or write new ones. But I still see it as a lean-back device, as opposed to a lean-forward PC.

In any case, sales of the $100 black puck don’t move the needle. Four million Apple TVs were sold in 2012; even if ten million are sold this year — and that’s a very optimistic estimate — it won’t make a noticeable difference, at least not directly. Apple TV is a neat part of the ecosystem, it makes iPhones, iPads, Macs and our iTunes libraries more valuable, but it’s still just a member of the supporting cast.

This brings us back to the putative iWatch. Computer history buffs will recall the HP-01 watch. Buoyed by the success of its handheld calculators, including the programmable HP-65 with its magnetic card reader, HP convinced itself it could make a calculator watch, introduced in 1977:

A technology tour de force, fondly remembered by aging geeks, but a market failure: too expensive, too hard to use, ill-fitting distribution channels.

Apple is in a different spot. Today, you can find a number of iPod watchbands such as this one:

It’s hard to imagine that Apple would merely integrate an existing accessory into a new iPod. Sales of the iPod proper are decelerating, so the iPod-as-iWatch could give the line a much-needed boost, but it’s difficult to reconcile the rumors of “100 people” working on the project with a mere retrofit job. Is Apple working on an iWatch that can be experienced as an Even More Personal personal computer — an “intimate computer”? If so, many questions arise: user interface, sensors, iOS version, new types of apps, connection with other iDevices… And, of course, price.

This would be much more interesting than the perennially in-the-future Apple TV set. Of course, iWatch and Apple TV aren’t necessarily mutually exclusive. If the Loewe buyout rumors are true, Apple could do both — the company could develop its own watch device and repurpose Loewe’s TV. (I still doubt the TV set part, as opposed to enhancing the black puck.)

But once we understand what Apple’s only business is, and that the related software, retail, and services are simply part of the supporting cast, Apple’s attitude towards big acquisitions becomes clearer. Apple isn’t looking at buying a big new business, it already owns The Big One. So, no movie studio, no retail chain or cable company, no HP or Dell, or Yahoo!. (But… a big law firm, perhaps?) Integrating a large group of people into Apple’s strong, unbending culture would, alone, prove to be impossible.

A small acquisition to absorb technology (and talented people) makes sense. The cultural integration risks remain, but at a manageable scale, unlike what happened to Exxon in the early eighties when it burned $4B (that was real money, then) in a failed attempt to become an information systems company — you know, the Oil of the Twenty-First Century.

Let’s just hope Apple doesn’t talk itself into a “because we can” move.

JLG@mondaynote.com

 

The Next Big Thing: Big Missing Pieces

 

Looking for the next big wave of products or services, for something as big as smartphones or, more recently, tablets, we see technology kept in check by culture.

To qualify as a Big Thing these days, a product — or a service, or maybe something hardly more effable than a meme (think “social networks”) — has to assume a value on the order of $100B worldwide. The value needn’t be concentrated in a single company; indeed, the more boats that are lifted by the rising tide, the better. The revenue from the Next Big Thing might be divvied up among today’s hardware and software giants or shared with companies that are currently lurking under the radar of industry statistics.

The $100B number is derived from a look at Apple. For Fiscal Year 2013 (started October 1st, 2012), the company will weigh about $200B in revenue. To “move the needle” for just this one company, a Big Thing will need to contribute about $20B to this total. For Apple execs and shareholders, anything less counts as a mere hobby (which leads to questions about the future of the Mac, but I digress).

Using this gauge, smartphones easily qualify as a Big Thing. As Charles Arthur reports in The Guardian: Mobile internet devices ‘will outnumber humans this year‘. Initially offered by Palm, Microsoft, RIM, and Nokia, and then given successive boosts by the iPhone (first with the device itself and then the App Store), it’s no exaggeration to say that the size of the smartphone tsunami surprised everyone. Even the Big Four incumbents were crushed by the wave: Palm is gone, RIM is in trouble, and Nokia has enslaved itself to Microsoft — which has yet to come up with a viable smartphone OS.

The latest Big Thing is, of course, the “media tablet” (as IDC and Gartner obsessively call the iPad and its competitors). Whatever you call it, regardless of who makes it or which OS it runs, the tablet is a Big Thing that just keeps getting bigger. In less than five years, tablets have attained 10% US market penetration, a milestone that smartphones took eight years to reach. (See also slide 9 in Mary Meeker’s now iconic Internet Trends presentation.)

In his February 7th Apple 2.0 post, Philip Elmer-DeWitt offers this Canalys chart, which shows that one in six “PCs” shipped in Q4 2012 was an iPad:

So what’s next? Is there a breakthrough technology quietly germinating somewhere? What are the obstacles to a self-amplifying chain of events?

I don’t think the barriers to the Next Big Thing are technical. The ingredients are there, we simply need a master chef to combine them.

This brings us to the broad — and fuzzy — class of what is sometimes called “smart appliances.”

The underlying idea is that the devices that surround us — alarm systems, heaters and air conditioners, televisions, stereos, baby monitors, cars, home health-care devices — should be automated and connected. And we should be able to control them through a common, intuitive UI — in other words, they should speak our language, not the other way around.

This isn’t a new idea. For decades now, we’ve been told the Smart Home is upon us, a fully automated, connected, secured, and energy-saving dwelling. More than 20 years ago, Vint Cerf, an Internet progenitor and now Google’s Chief Internet Evangelist, posed with a t-shirt featuring the famous IP On Everything pun:

The Internet visionary was and is right: Every object of importance is destined to have an “IP stack”, the hardware, software, and communication link required to plug the device into the Internet. With every turn of the Moore’s Law crank, the hardware becomes smaller, less expensive, and less power-hungry, making more room for better software and allowing Internet (and local) connectivity to “infect” a growing number of devices. And as devices become smart, they will “teach” each other how to communicate.

Imagine: You take a new remote control out of the box, walk up to a TV and press the “?” key on the remote. A standardized “teach me” message is broadcast, and the TV responds, wirelessly, by sending back a longish XML file that identifies itself and tells the remote the commands it understands:
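
Here is a minimal sketch of what such a self-describing reply and its processing might look like; the schema is entirely invented, since no such standard exists (which is precisely the problem discussed below):

```python
import xml.etree.ElementTree as ET

# A hypothetical "teach me" reply -- the schema is invented for illustration.
DESCRIPTOR = """\
<device name="LivingRoomTV" address="192.168.1.42">
  <command id="power"  args="on|off"/>
  <command id="volume" args="0-100"/>
  <command id="input"  args="hdmi1|hdmi2|tuner"/>
</device>
"""

tv = ET.fromstring(DESCRIPTOR)
print(f"{tv.get('name')} at {tv.get('address')} understands:")
for command in tv.findall("command"):
    print(f"  {command.get('id')}: {command.get('args')}")
```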

In a language that computers — and even humans — can process without too much effort, the TV has taught the remote: Here is where you’ll find me, and this is how you can talk to me. The little computer inside the remote munges the file and now the device knows how to control the TV…or the five components of the home theater, the heater/air conditioner, the alarm system, the car…

Now replace the remote in this scenario with your tablet, with its better UI, processing, and connectivity. Rather than controlling your devices by pushing plastic buttons, you use an app on your tablet — an app that the device delivered just before it sent the XML file. (You can use the default app sent by the device, or wander over to the App Store and pay $5 for a deluxe version with different skins. This is how cottage industries are born.)

So goes the lovely theory… but in reality we see so-called Smart TVs with Internet connections but mediocre UI; or less-smart TVs that are still bound to barely intelligent set-top boxes, with their Trabant-grade user experience. And we control them through multi-function “universal” remotes that cost as much as a smartphone, but that do less and do it less well.

What’s missing?

The technological building blocks exist in abundance. There is plenty of Open Source software available to help the remote (or your tablet) digest the This Is How To Talk To Me file from the TV.

Even in our deliberately simplified example, there seems to be no interest in coming up with a simple, open (yes, that word again) standard to help appliances tell the rest of the world how to control them. It wouldn’t add much to the cost of the device and certainly wouldn’t require hiring rocket scientists. In other words, the obstacles are neither economic nor technical; they’re cultural, and they’re keeping the Machine To Machine (M2M) revolution in check.

We’ve seen a similar sort of cultural resistance when we consider à la carte, app-based channels on the mythical “iTV”, whether from Apple, Google, or anyone else. Users would love to pick and choose individual shows and have them delivered through applications rather than through deaf-and-dumb multicast streams. App-ification of TV content would provide other “organic” features: the ability to rewind a live broadcast (without a DVR), easy search through program archives, access to user forums and behind-the-scenes commentary…

The technology and design already exist, as the wonderful 60 Minutes iPad app demonstrates:

 

Similar examples can be found on every internet-enabled TV platform from Google TV to Roku, the Xbox, and others.

Nice, easy, technically feasible yesterday…but it’s impossible today and will almost certainly continue to be impossible for the near future (I first typed nerd future, a neat typo).

Why?

Because carriers won’t allow it. They’re terrified of becoming dumb pipes (the link refers to mobile carriers but the idea also applies to cable and satellite providers). Carriers force us to buy bundles of channels that they package and sell in a tiered, take-it-or-leave it pricing scheme. True, there is VOD (Video On Demand) where we can buy and view individual movies or premium sporting events, but a pervasive newsstand model where we only pay for what we consume is still far away.

The content owners — movie studios and TV networks — don’t like the newsstand model either. They go by the old Hollywood saying: Content is King, but Distribution is King Kong. iTunes made an impression: Movie and TV studios don’t want to let Google, Apple, Netflix, or Amazon run the table the way Apple did with iTunes and AT&T. (That AT&T derived lasting benefits in higher ARPU and market share doesn’t seem to alleviate the content providers’ fears.)

How can this change and, as a result, unlock one or two Big Things? To retread a famous two-part Buddhist joke, change is a mysterious thing. Telling people what they ought to do doesn’t always work. Still, two thoughts come to mind.

First, the tablet. We, Tech People, have always known the tablet was the right thing to do, and we tried for thirty years without much success. Three years ago, Chef Jobs grabbed the ingredients that had been available to all and, this time, the tablet genre “took”. Now, perhaps, the tablet will take its place as an ingredient in a yet grander scheme.

Second, go to an aquarium and watch a school of fish. They move in concert and suddenly turn for no apparent reason. Somewhere inside the school there must have been a “lead fish” that caused the change of direction. Perhaps the fish didn’t even realize he was The One destined to trigger the turn.

Who’s going to be our industry’s fish, big or small, that precipitates a cultural change unlocking the potential of existing technologies and gives rise to the next $100B opportunity?

JLG@mondaynote.com

The Google Fund for the French Press

 

At the last minute, ending three months of tense negotiations, Google and the French Press hammered out a deal. More than yet another form of subsidy, this could mark the beginning of a genuine cooperation.

Thursday night, at 11:00pm Paris time, Marc Schwartz, the mediator appointed by the French government, got a call from the Elysée Palace: Google’s chairman Eric Schmidt was en route to meet President François Hollande the next day in Paris. They both intended to sign the agreement between Google and the French press on Friday at 6:15pm. Schwartz, along with Nathalie Collin, the chief representative for the French Press, was just out of a series of conference calls between Paris and Mountain View: Eric Schmidt and Google’s CEO Larry Page had green-lighted the deal. At 3:00am on Friday, the final draft of the memorandum was sent to Mountain View. But at 11:00am everything had to be redone: Google had made unacceptable changes, causing Schwartz and Collin to consider calling off the signing ceremony at the Elysée. Another set of conference calls ensued. The final-final draft, unanimously approved by the members of the IPG association (General and Political Information), was printed at 5:30pm, just in time for the gathering at the Elysée half an hour later.

The French President François Hollande was in a hurry, too: That very evening, he was bound to fly to Mali, where French troops are waging a small but uncertain war to contain Al-Qaeda’s expansion in Africa. Never shy of political calculations, François Hollande seized the occasion to be seen as the one who forced Google to back down. As for Google’s chairman, co-signing the agreement along with the French President was great PR. As a result, negotiators from the Press were kept in the dark until Eric Schmidt’s plane landed in Paris on Friday afternoon, shortly before he headed to the Elysée. Both men underlined what they called “a world premiere”, a “historic deal”…

This agreement ends — temporarily — three months of difficult negotiations. Now comes the hard part.

According to Google’s Eric Schmidt, the deal is built on two elements:

“First, Google has agreed to create a €60 million Digital Publishing Innovation Fund to help support transformative digital publishing initiatives for French readers. Second, Google will deepen our partnership with French publishers to help increase their online revenues using our advertising technology.”

As always, the devil lurks in the details, most of which will have to be ironed out over the next two months.

The €60m ($82m) fund will be provided by Google over a three-year period and will be dedicated to new-media projects. About 150 websites belonging to members of the IPG association will be eligible for submission. The fund will be managed by a board of directors that will include representatives from the Press and from Google, as well as independent experts. Specific rules are designed to prevent conflicts of interest. The fund will most likely be chaired by Marc Schwartz, the mediator, also a partner at the global audit firm Mazars (all parties praised his mediation and want him to take the job).

Turning to the commercial part of the pact: it is less publicized but at least as important as the fund itself. In a nutshell, using a wide array of tools ranging from advertising platforms to content distribution systems, Google wants to increase its business with the Press in France and elsewhere in Europe. Until now, publishers have been reluctant to use such tools because they don’t want to increase their reliance on a company they see as cold-blooded and ruthless.

Moving forward, the biggest challenge will be overcoming an extraordinarily high level of distrust on both sides. Google views the Press (especially the French one) as only too eager to “milk” it, and unwilling to genuinely cooperate in order to build and share value from the internet. The engineering-dominated, data-driven culture of the search engine is light-years away from the convoluted “political” approach of legacy media, which don’t understand, or look down on, the peculiar culture of tech companies.

Dealing with Google requires a mastery of two critical elements: technology (with the associated economics), and the legal aspect. Contractually speaking, it means transparency and enforceability. Let me explain.

Google is a black box. For good and bad reasons, it fiercely protects the algorithms that are key to squeezing money from the internet, sometimes one cent at a time — literally. If Google consents to a cut of, say, advertising revenue derived from a set of content, the partner can’t really ascertain whether the cut truly reflects the underlying value of the asset jointly created — or not. Understandably, this bothers most of Google’s business partners: they are simply asked to be happy with the monthly payment they get from Google, no questions asked. Specialized lawyers I spoke with told me there are ways to prevent such opacity. While it’s futile to hope Google will lift the veil on its algorithms, inserting an audit clause in every contract can be effective; in practical terms, it means an independent auditor can be appointed to verify specific financial records pertaining to a business deal.

Another key element: From a European perspective, a contract with Google is virtually impossible to enforce. The main reason: Google won’t give up on a Governing Law clause stipulating that contracts are to be “litigated exclusively in the Federal or State Courts of Santa Clara County, California”. In other words: Forget about suing Google if things go sour. Your expensive law firm based in Paris, Madrid, or Milan will try to find a correspondent in Silicon Valley, only to be confronted with polite rebuttals: For years now, Google has been parceling out multiple pieces of litigation among local law firms simply to render them unable to litigate against it. Your brave European lawyer will end up finding someone who will ask several hundred thousand dollars only to prepare, but not litigate, the case. The only way to prevent this is to put an arbitration clause in every contract. Instead of going before a court of law, the parties agree to settle the matter through a private tribunal. Attorneys say it offers multiple advantages: It’s faster, much cheaper, the terms of the settlement are confidential, and it carries the same enforceability as a court order.

Google (and all the internet giants, for that matter) usually refuses an arbitration clause as well as the audit provision mentioned earlier. Which brings us to a critical element: in order to develop commercial relations with the Press, Google will have to find ways to accept collective bargaining instead of segmenting negotiations one company at a time. Ideally, the next round of discussions should come up with a general framework for all commercial dealings. That would be key to restoring some trust between the parties. For Google, it means giving up some amount of tactical as well as strategic advantage… a concession that fits its long-term vision. As stated by Eric Schmidt in his upcoming book “The New Digital Age” (the Wall Street Journal had access to the galleys):

“[Tech companies] will also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest.”

European media are warned: they must seriously raise their legal game if they want to partner with Google — and the agreement signed last Friday in Paris could help.

Having said that, I personally believe it could be immensely beneficial for digital media to partner with Google as much as possible. The company spends roughly two billion dollars a year refining its algorithms and improving its infrastructure; thousands of engineers work on it. Contrast this with digital media: small audiences, insufficient stickiness, and low monetization plague both web sites and mobile apps; the advertising model for digital information is mostly a failure, and that’s not Google’s fault. The Press should find a way to capture some of Google’s technical firepower and concentrate on what it does best: producing original, high-quality content, a business that Google is unwilling (and probably culturally unable) to engage in. Unlike Apple or Amazon, Google is relatively easy to work with (once the legal hurdles are cleared).

Overall, this deal is a good one. First of all, both sides are relieved to avoid a law (see last week’s Monday Note, Google vs. the press: avoiding the lose-lose scenario). A law declaring that snippets and links are to be paid for would have been a serious step backward.

Second, it’s a departure from the notion of “blind subsidies” that have been plaguing the French Press for decades. Three months ago, the discussion started from irreconcilable positions: publishers were seeking absurd amounts of money (€70m per year, the equivalent of the IPG members’ total ad revenue) and Google was focused on converting any contribution into business solutions. Now, all the people I talked to this weekend seem genuinely supportive of building projects, boosting innovation, and taking advantage of Google’s extraordinary engineering capabilities. The level of cynicism often displayed by the Press is receding.

Third, Google is changing. The fact that Eric Schmidt and Larry Page jumped in at the last minute to untangle the deal shows a shift of perception towards media. This agreement could be seen as a template for future negotiations between two worlds that still barely understand each other.

frederic.filloux@mondaynote.com

iPad Pro: The Missing Workflow

 

The iPad started simple, one window at a time, which put it in the “media consumption” category. Over time, that category proved too narrow: the iPad did well in some content creation activities. Can the new 128 GB iPad continue the trend and acquire better workflow capabilities?

Last week, without great fanfare, Apple announced a new 128 GB version of its fourth generation iPad, a configuration popularly known as the “iPad Pro”. The “Pro” moniker isn’t official, but you wouldn’t know that from Apple’s press release:

Companies regularly utilizing large amounts of data such as 3D CAD files, X-rays, film edits, music tracks, project blueprints, training videos and service manuals all benefit from having a greater choice of storage options for iPad. 

Cue the quotes from execs at seriously storage-intensive companies such as Autodesk (makers of AutoCAD); WaveMachine Labs (audio software); and, quirkily, Global Apptitude, a company that makes film analysis software for football teams:

“The bottom line for our customers is winning football games, and iPad running our GamePlan solution unquestionably helps players be as prepared as possible,” said Randall Fusee, Global Apptitude Co-Founder. 

The naysayers grumble: Who needs this much memory on a “media tablet”? As Gizmodo put it:

The new iPad has the same retina display as its brothers, and the same design, and the same guts, with one notable exception: a metric crap-ton of storage. More storage than any decent or sane human being could ever want from a pure tablet…

(Increased storage is…indecent? This reminds me of the lambasting Apple received for putting 1 — one! — megabyte of memory in the 1986 Mac Plus. And we all recall Bill Gates’ assertion that 640 Kbytes ought to be enough for anyone. He now claims that the quote is apocryphal, but I have a different recollection.)

Or maybe this is simply Apple’s attempt to shore up the iPad’s average selling price ($467, down 18% from the year-ago quarter), which took a hit following the introduction of the lower-priced iPad mini. (What? Apple is trying to make more money?)

The critics are right to be skeptical, but they’re questioning the wrong part of the equation.

When we compare iPad prices, the Pro is a bargain, at least by Apple standards:

The jump from 16GB to 32GB costs $100. Another doubling to 64GB costs the same $100. And, on February 5th, you’ll get an additional 64GB for yet another mere $100. (By comparison, extra solid state storage on a MacBook costs between $125 and $150 per 64GB.)
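
For the arithmetic-minded, here is a quick back-of-the-envelope sketch in Python, using only the prices quoted above (the per-gigabyte figures are my own derivation, not Apple’s):

    # Marginal cost per gigabyte of each $100 iPad storage step
    # (prices as quoted above).
    for base_gb, new_gb in [(16, 32), (32, 64), (64, 128)]:
        added_gb = new_gb - base_gb
        print(f"{base_gb}GB -> {new_gb}GB: $100 for {added_gb}GB, ${100 / added_gb:.2f}/GB")

    # MacBook solid-state storage, per the $125-$150 range quoted above.
    for price in (125, 150):
        print(f"MacBook SSD: ${price} per extra 64GB, ${price / 64:.2f}/GB")

The last doubling works out to about $1.56 per gigabyte, versus $1.95 to $2.34 for MacBook flash: a bargain indeed, by Apple standards.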

We get a bit more clarity when we consider the iPad’s place in Apple’s product line: As sales of the Mac slow down, the iPad Pro represents the future. Look at Dan Frommer’s analysis of 10 years of Mac sales. First, the Mac alone:

This leads Dan to ask if the Mac has peaked. Mac numbers for the most recent quarter were disappointing. The newer iMacs were announced in October, with delivery dates in November and December for the 21.5″ and 27″ models respectively. But Apple missed the Xmas quarter window by about a million units, which cut revenue by as much as $1.5B (a million iMacs at roughly $1,500 apiece) and margin by half a billion or so (these are all very rough numbers). We’ll probably never find out how Apple’s well-oiled Supply Chain Management machine managed to strip a gear, but one can’t help but wonder who will be exiled to Outer Mongolia Enterprise Sales.

Now consider another of Dan Frommer’s graphs:

This is units, not revenue. Mac and iPad ASPs stand in roughly a 3-to-1 ratio but, still, this paints a picture of a slow-growth Mac vs. the galloping iPad.

The iPad — and tablets in general — are usurping the Mac/PC space. In the media consumption domain, the war is all but won. But when we take a closer look at the iPad “Pro”, we see that Apple’s tablet is far from realizing its “professional” potential.

This is where the critics have it wrong: Increased storage isn’t “insane”, it’s a necessary element…but it isn’t sufficient.

For example, can I compose this Monday Note on an iPad? Answering in the affirmative would be to commit the Third Lie of Computing: You Can Do It. (The first two are Of Course It’s Compatible and Chief, We’ll be in Golden Master by Monday.)

I do research on the Web and accumulate documents, such as Dan Frommer’s blog post mentioned above. On a PC or Mac, saving a Web page to Evernote for future reference takes a right click (or a two finger tap).

On an iPad, things get complicated. The Share button in Safari gives me two clumsy choices: I can mail the page to my Evernote account, or I can Copy the URL, launch Evernote, paste the URL, compose a title for the note I just created, and perhaps add a few tags.

Once I start writing, I want to look through the research material I’ve compiled. On a Mac, I simply open an Evernote window, side-by-side with my Pages document: select, drag, drop. I take some partial screenshots, annotate graphs (such as the iPad Pro prices above), convert images to the .png format used to put the Monday Note on the Web…

On the iPad, these tasks are complicated and cumbersome.

For starters — and to belabor the obvious — I can’t open multiple windows. iOS uses the “one thing at a time” model. I can’t select/drag/drop, I have to switch from Pages to Evernote or Safari, select and copy a quote, and then switch back to the document and paste.

Adding a hyperlink is even more tortuous and, at times, confusing. I can copy a link from Safari, switch back to Pages, paste…but I want to “slide” the link under a phrase. I consult Help, which suggests that I tap on the link, to no avail. If I want to attach a link to a phrase in my document, I have to hit the Space key after pasting, go to Settings and then enter the text that will “cover” the link — perfectly obvious.

This order of operations is intuitively backwards. On a Mac (or PC), I select the target text and then decide which link to paste under it.

Things get worse for graphics. On the iPad, I can’t take a partial screenshot. I can take a full screenshot by simultaneously pressing the Home and Sleep buttons, or I can tap on a picture in Safari and select Save. In both cases, the screenshot ends up in the Photos app where I can perform some amount of cropping and enhancing, followed by a Copy, then switch back to Pages and Paste into my opus.

Annotations? No known way. Control over the image file format? Same answer. There’s no iPad equivalent to the wonderful Preview app on the Mac. And while I’m at it, if I store a Preview document in iCloud, how do I see it from my iPad?

This gets us into the more general (and more “professional”) topic of gathering a trove of parts that can be assembled into a “rich” document, such as a Keynote presentation. On a personal computer, there are plenty of choices. With the iPad, Apple doesn’t provide a solution: there’s no general document repository, no iCloud analog to Dropbox or Microsoft’s SkyDrive, both of which are simple to use, quasi-free and, in my experience, quite reliable. (One wonders: Is the absence of a Dropbox-like general documents folder in iCloud a matter of technology or theology?)

Simply throwing storage at the problem is, clearly, not enough to make the iPad a “Pro” device. But there is good news. Some of it is anecdotal, such as the more sophisticated editing provided by the iPad version of iPhoto. The better news is that iOS is a mature, stable operating system that takes advantage of fast and spacious hardware.

But the best news is that Apple has, finally, some competition when it comes to User Experience. For example, tablets that run Microsoft or Google software let users slide the current window to show portions of another one below, making it easier to select parts of a document and drop them into another. (Come to think of it, the sliding Notifications “drawer” on the iPad and iPhone isn’t too far off.)

This competition might spur Apple to move the already very successful iPad into authentically “Pro” territory.

The more complex the task, the more our beloved 30-year-old personal computer is up to it. But there is now room, above the enforced simplicity that made the iPad’s success, for UI changes that allow a modicum of real-world “Pro” workflow on iPads.

JLG@mondaynote.com

Dell Buyout: Microsoft’s Generosity

 

To perform painful surgery on its business model, Dell needs to take the company private. Seeing challenges in raising the needed $22B, Microsoft “generously” proposes to contribute a few billion. Is this helping or killing the deal?

The news broke two weeks ago: Dell wants to go private. The company would like to buy back all of its publicly traded shares.

The Apple forums are abuzz with memories of Michael Dell’s dismissal of Steve Jobs’ efforts to breathe new life into Apple in 1997:

What would I do? I’d shut it down and give the money back to the shareholders.

Is it now Michael’s turn to offer a refund?

Now we hear that Microsoft wants to lend a hand, as in “several billion dollars”. The forums buzz again: It’s just like when Bill Gates came to Jobs’ rescue and invested $150M in the Cupertino company, thus avoiding a liquidity crisis.

The analogy is amusing but facile. Dell 2013 isn’t Apple 1997. A look at Dell’s latest financials shows that the company still enjoys a solid cash position ($14B) and a profitable business (3.5% net profit margin). Its profits may not be growing (-11% year over year), but the company is cash-flow positive nonetheless ($1.3B in the latest quarter). There’s no reason to fold up the tents.

As for Microsoft’s involvement: The Redmond company’s “investment” in Apple was part of a settlement of an on-going IP dispute. Microsoft avoided accusations of monopoly by keeping alive a highly visible but not overly dangerous adversary.

So what is Dell trying to accomplish by going private? To answer the question, let’s step back a bit and explore the whys and hows of such a move.

First, we have the Management Buyout. Frustrated with Wall Street’s low valuation, executives buy back their company “on the cheap” and run it in private for their own benefit. This rarely ends well. Second-guessing the market is never a good idea, and the enormous amount of money needed to pay off shareholders puts the execs at the mercy of bigger, smarter predators, who end up running the company for their own benefit.

A good reason for going private is to allow a company to shift to a radically different business model without being distracted by Wall Street’s annoying glare and hysterics. This is what Dell is trying to do. They’re not shutting down shop, they’re merely closing the curtain.

Is it necessary to privatize for such a move? For an example that never came to pass, recall Bill Gates’ suggestion, in 1985, that Apple should get out of the hardware business and, instead, license the Mac operating system. At the time, the average revenue per Mac exceeded $2,500; a putative Mac OS license would have sold for $100. The theory was that Apple would eventually sell many, many more OS licenses than it did Macs.
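
How many more? A bit of napkin math on those two figures, counting revenue only (software margins would, of course, flatter the picture further):

    # Revenue break-even: how many $100 OS licenses replace one $2,500 Mac sale?
    revenue_per_mac = 2500
    license_price = 100
    print(revenue_per_mac / license_price)  # 25 licenses per forgone Mac

Twenty-five licenses for every forgone Mac sale, just to stay even on revenue.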

The pundits agreed: “Just look at Microsoft!” Apple would jump from one slowly ascending earnings curve to a much steeper one.

Now picture yourself as John Sculley, Apple CEO, going to Wall Street with the following message: “We heard you, we’ve seen the light. Today, we’re announcing a new era for our company: we’ll be selling Mac OS licenses to all comers for $100 apiece. Of course, there’ll be a trough; licensing revenue won’t immediately compensate for the loss of Mac hardware sales. We need an ‘earnings holiday’ of about 36 months before the huge software profits flow in.”

You just became the ex-CEO. Wall Street dumps your shares, effectively telling you to take them back and only return after your “holiday” is over.

As another example that didn’t happen but probably should have, imagine if Nokia CEO Stephen Elop had taken his company private in 2011. Instead of osborning its Symbian business, Nokia would have had the latitude to perform the OS gender change behind closed doors and reemerge with a shiny new range of Microsoft-powered smartphones.

I’ll hasten to add that these made-up examples are somewhat unrealistic: to engineer a buyout, one must raise an amount of money commensurate with the company’s current valuation. Around 1987, Apple was worth about $2B, a great deal of money a quarter of a century ago. In early 2011, Nokia’s market capitalization was about $40B, an impossibly large sum.

Still, thanks to these buyout fantasies, we get the two key ideas: First, Dell wants to go private because it plans to alter its business model in ways that would scare nervous, short-term Wall Street shareholders; second, the required amount of money (Dell’s market cap is about $22B) is a potential deal-killer.

We don’t have to look very far for the changes Dell wants to make. Dell no longer likes its legacy PC business and has made efforts to reposition itself as an enterprise player (expensive iron, software and services). Going private will allow it to perform the needed surgery, stanch the bleeding, and reemerge with a much stronger income statement, rid of low-margin commodity PCs.

When we look at the money that needs to be raised, things become really interesting. Michael Dell’s 15.7% ownership of the company undoubtedly helps, but the $22B market cap is still a big hill to climb. Several buyout firms and banks got involved in preliminary discussions; one group, TPG Capital, dropped out, but another, Silver Lake, has persisted in its attempt to round up big banks and other investors with enough funds to vacuum up Dell’s publicly traded shares.
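
A rough sketch of the hill to climb, assuming Michael Dell simply rolls his 15.7% stake into the deal (my simplifying assumption; the real structure would be more intricate):

    # Cash the syndicate must raise to buy out the remaining shares
    # (figures from above; deliberately rough).
    market_cap = 22e9        # Dell's market capitalization, ~$22B
    founder_stake = 0.157    # Michael Dell's ownership
    to_raise = market_cap * (1 - founder_stake)
    print(f"${to_raise / 1e9:.1f}B to raise")  # about $18.5B

Even with the founder’s shares counted in, something like $18.5B must still be found, which is why a few Microsoft billions matter.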

That’s when Microsoft walks in on the discussions and offers to save Private Dell.

Clearly, Microsoft’s money will help in the buyout…but will its involvement torpedo Dell’s intentions? The NY Times DealBook article makes the case for Microsoft propping up the leading PC maker:

A vibrant Dell is an important part of Microsoft’s plans to make Windows more relevant for the tablet era, when more and more devices come with touch screens.

This would give Microsoft some amount of control over the restructured Dell, a seat on the Board of Directors, perhaps, with ways to better align the PC maker’s hardware with Redmond’s software. Microsoft wants Dell’s reinvigorated participation in the “Windows Reimagined” business.

But note the phrasing above: “Dell is an important part of Microsoft’s plans…” This is better vertical integration without paying the full price of ownership: the putative “several billion dollars” would give Microsoft a significant stake, 10% or 15%. And it is completely at odds with the buyout’s supposed intent: getting out of the PC clone race to the bottom.

Or maybe there’s another story behind Microsoft’s beneficence: The investor syndicate struggles and can’t quite reach the $22B finish line. Microsoft generously — and very publicly — offers to contribute the few missing billions. Investors see Microsoft trying to reattach the PC millstone to their necks — and run away.

Hats off to Steve Ballmer: Microsoft looks generous, without having to spend a dime, and forces Dell to keep making PCs.

JLG@mondaynote.com