Design

The Hybrid Tablet Temptation

 

In no small part, the iPad’s success comes from its uncompromising Do Less To Do More philosophy. Now a reasonably mature product, can the iPad expand its uses without falling into the hybrid PC/tablet trap?

When the iPad came out, almost four years ago, it was immediately misunderstood by industry insiders – and joyously embraced by normal humans. Just Google “iPad naysayer” for a few nuggets of iPad negativism. Even Google’s CEO, Eric Schmidt, couldn’t avoid the derivative trap: He saw the new object as a mere evolution of an existing one and shrugged off the iPad as a bigger phone. Schmidt should have known better; he had been an Apple director in the days when Jobs believed the two companies were “natural allies”.

I was no wiser. I got my first iPad on launch day and was immediately disappointed. My new tablet wouldn’t let me do what I did on my MacBook Air – or my tiny EeePC running Windows XP (not Vista!). For example, writing a Monday Note on an iPad was a practical impossibility – and still is.

I fully accept the personal nature of this view and, further, I don’t buy the media consumption vs. productivity dichotomy Microsoft and its shills (Gartner et al.) tried to foist on us. If by productivity we mean work, work product, earning one’s living, tablets in general and the iPad in particular have more than made the case for their being productivity tools as well as education and entertainment devices.

Still, preparing a mixed media document, even a moderately complex one, irresistibly throws most users back to a conventional PC or laptop. With multiple windows and folders, the PC lets us accumulate text, web pages, spreadsheets and graphics to be distilled, cut and pasted into the intended document.

Microsoft now comes to the rescue. Their hybrid Surface PC/Tablet lets you “consume” media and play games in purely tablet mode – and switch to the comfortable laptop facilities offered by Windows 8. The iPad constricts you to ersatz folders, preventing you from putting your document’s building blocks in one place? No problem, the Surface device features a conventional desktop User Interface, familiar folders, comfy Office apps as well as a “modern” tile-based Touch UI. The best of both worlds, skillfully promoted in TV ads promising work and fun rolled into one device.

What’s not to like?

John Kirk, a self-described “recovering attorney”, whose tightly argued and fun columns are always worth reading, has answers. In a post on Tablets Metaphysics – unfortunately behind a paywall – he focuses on the Aristotelian differences between tablets and laptops. Having paid my due$$ to the Techpinions site, I will quote Kirk’s summation [emphasis mine]:

Touch is ACCIDENTAL to a Notebook computer. It’s plastic surgery. It may enhance the usefulness of a Notebook but it doesn’t change the essence of what a Notebook computer is. A keyboard is ACCIDENTAL to a Tablet. It’s plastic surgery. It may enhance the usefulness of a Tablet, but it doesn’t change the essence of what a Tablet is. Further — and this is key — a touch input metaphor and a pixel input metaphor must be wholly different and wholly incompatible with one another. It’s not just that they do not comfortably co-exist within one form factor. It’s also that they do not comfortably co-exist within our mind’s eye.

In plain words, it’s no accident that tablets and notebooks are distinctly different from one another. On the contrary, their differences — their incompatibilities — are the essence of what makes them what they are.

Microsoft, deeply set in the culture of backwards compatibility that served it so well for so long, did the usual thing: it added a tablet layer on top of Windows 7. The result didn’t take the market by storm and appears to have caused the exit of Steve Sinofsky, the Windows czar now happily ensconced at Harvard Business School and a Board Partner with the Andreessen Horowitz venture firm. Many think the $900M Surface RT write-off also contributed to Ballmer’s August 2013 resignation.

Now, with the benefit of hindsight, Apple’s decision to stick to a “pure” tablet looks more inspired than lucky. If we remember that a tablet project preceded the iPhone, only to be set aside for a while, Apple’s “stubborn minimalism”, its refusal to hybridize the iPad, might be seen as the result of long experimentation – with more than a dash of Steve Jobs (and Scott Forstall) inflexibility.

Apple’s bet can be summed up thus: MacBooks and iPads have their respective best use cases, and they both reap high customer satisfaction scores. Why ruin a good game?

Critics might add: Why sell one device when we can sell two? Apple would rather “force” us to buy two devices in order to maximize revenue. On this, Tim Cook often reminds Wall Street of Apple’s preference for self-cannibalization, for letting its new and less expensive products displace existing ones. Indeed, the iPad keeps cannibalizing laptops, PCs and Macs alike.

All this leaves one question unanswered: Is that it? Will the iPad fundamentals stay the way they have been from day one? Are we going to be thrown back to our notebooks when composing the moderately complex mixed-media documents I earlier referred to? Or will the iPad hardware/software combination become more adept at such uses?

To start, we can eliminate a mixed-mode iOS/Mac device. Flip a switch, it’s an iPad; flip it again, add a keyboard/touchpad, and you have a Mac. No such contraption allowed. We know where to turn for that.

Next, imagine a new iOS version that allows multiple windows to appear on the iPad screen; folders are no longer separately attached to each app as they are today but let us store documents from multiple apps in one place. Add a blinking cursor for text and you have… a Mac, or something too close to a Mac but still different. Precisely the reason why that won’t work.

(This might pose the question of an A7 or A8 processor replacing the Intel chip inside a MacBook Air. It can be done – a “mere matter of software” – but how much would it cut from the manufacturing cost? $30 to $50 perhaps. Nice but not game-changing, a question for another Monday Note.)

More modest, evolutionary changes might still be welcome. Earlier this year, Counternotions proposed a slotted clipboard as “An interim solution for iOS multitasking”:

[...] until Apple has a more general solution to multitasking and inter-app navigation, the four-slot clipboard with a visible UI should be announced at WWDC. I believe it would buy Ive another year for a more comprehensive architectural solution, as he’ll likely need it.

This year’s WWDC came and went with the strongest iOS update so far, but neither a general nor an interim solution to the multitasking and inter-app navigation discussed in the post. (Besides the Counternotions blog, this erudite and enigmatic author also edits counternotions.tumblr.com and can be followed on Twitter as @Kontra.)

A version of the above suggestion could be conceptualized as a floating dropbox, invoked when needed and hovering above the document being worked on. This would not require recreating a PC-like windows-and-desktop UI. Needed components could be extracted from the floating store, then dragged and dropped onto the work in progress.
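To make the idea a little more concrete, here is a rough sketch, in TypeScript rather than anything iOS-specific, of the data model such a floating, slotted store might use: a small, capped collection of content fragments that source apps deposit and the destination document pulls from. The names and the four-slot cap are illustrative assumptions borrowed from the Counternotions suggestion, not anything Apple has described.

```typescript
// Sketch of a slotted clipboard: a small, capped store of content fragments
// that different apps can deposit into and a destination document can pull from.
// The names and the four-slot cap are illustrative assumptions, not a real iOS API.
type FragmentKind = "text" | "image" | "url" | "table";

interface Fragment {
  kind: FragmentKind;
  sourceApp: string;   // e.g. "Safari", "Numbers" (hypothetical examples)
  payload: string;     // serialized content, or a reference to it
  addedAt: Date;
}

class SlottedClipboard {
  private slots: Fragment[] = [];

  constructor(private readonly capacity = 4) {}

  // Deposit a fragment; the oldest one is evicted when all slots are full.
  deposit(fragment: Fragment): void {
    if (this.slots.length >= this.capacity) {
      this.slots.shift();
    }
    this.slots.push(fragment);
  }

  // What the floating UI would display, in the order fragments were collected.
  list(): Fragment[] {
    return [...this.slots];
  }

  // Drag-and-drop onto the work in progress: take a fragment out of its slot.
  take(index: number): Fragment | undefined {
    return this.slots.splice(index, 1)[0];
  }
}

// Usage: collect building blocks from several apps, then pull them into a document.
const clipboard = new SlottedClipboard();
clipboard.deposit({ kind: "url", sourceApp: "Safari", payload: "https://example.com", addedAt: new Date() });
clipboard.deposit({ kind: "text", sourceApp: "Notes", payload: "Draft paragraph…", addedAt: new Date() });
console.log(clipboard.list().map((f) => `${f.kind} from ${f.sourceApp}`));
```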

We’ll have to wait and see if and how Apple evolves the iPad without falling into the hybrid trap.

On even more speculative ground, a recent iPad Air intro video offered a quick glimpse of the Pencil stylus by FiftyThree, the creators of the well-regarded Paper iPad app. So far, styli haven’t done well on the iPad. Apple only stocks child-oriented devices from Disney and Marvel, nothing else, in spite of the abundance of such devices offered on Amazon. Perhaps we’ll someday see Apple grant Bill Gates his wish, as recounted by Jobs’ biographer Walter Isaacson:

“I’ve been predicting a tablet with a stylus for many years,” he told me. “I will eventually turn out to be right or be dead.”

Someday, we might see an iPad, larger or not, Pro or not, featuring a screen with more degrees of pressure sensitivity. After seeing David Hockney’s work on iPads at San Francisco’s de Young museum, my hopes are high.

JLG@mondaynote.com

@gassee

Goodbye Google Reader

 

Three months ago, Google announced the “retirement” of Google Reader as part of the company’s second spring cleaning. On July 1st — two weeks from today — the RSS application will be given a gold watch and a farewell lunch, then it will pack up its bits and leave the building for the last time.

The other items on Google’s spring cleaning list, most of which are tools for developers, are being replaced by superior (or simpler, friendlier) services: Are you using CalDAV in your app? Use the Google Calendar API, instead; Google Map Maker will stand in for Google Building Maker; Google Cloud Connect is gone, long live Google Drive.

For Google Reader’s loyal following, however, the company had no explanation beyond a bland “usage has declined”, and it offered no replacement nor even a recommendation other than a harsh “get your data and move on”:

Users and developers interested in RSS alternatives can export their data, including their subscriptions, with Google Takeout over the course of the next four months.

The move didn’t sit well with users whose vocal cords were as strong as their bond to their favorite blog reader. James Fallows, the polymathic writer for The Atlantic, expressed a growing distrust of the company’s “experiments” in A Problem Google Has Created for Itself:

I have already downloaded the Android version of Google’s new app for collecting notes, photos, and info, called Google Keep… Here’s the problem: Google now has a clear enough track record of trying out, and then canceling, “interesting” new software that I have no idea how long Keep will be around… Until I know a reason that it’s in Google’s long-term interest to keep Keep going, I’m not going to invest time in it or lodge info there.

The Washington Post’s Ezra Klein echoed the sentiment (full article here):

But I’m not sure I want to be a Google early adopter anymore. I love Google Reader. And I used to use Picnik all the time. I’m tired of losing my services.

What exactly did Google Reader provide that got its users, myself included, so excited, and why do we take its extermination so personally?

Reading is, for some of us, an addiction. Sometimes the habit turns profitable: The hours I spent poring over computer manuals on Saturday mornings in my youth may have seemed cupidic at the time, but the “research” paid off.

Back before the Web flung open the 10,000 Libraries of Alexandria that I dreamed of in the last chapter of The Third Apple, my reading habit included a daily injection of newsprint. But as online access to real-world dailies became progressively more ubiquitous and easier to manage, I let my doorstep subscriptions lapse (although I’ll always miss the wee-hour thud of the NYT landing on our porch…an innocent pleasure unavailable in my country of birth).

Nothing greased the move to all-digital news as much as the RSS protocol (Really Simple Syndication, to which my friend Dave Winer made crucial contributions). RSS lets you syndicate your website by publishing a simple XML feed and adding a line or two of HTML so readers’ software can find it. To subscribe, a user simply pushes a button. When you update your blog, the new post automatically shows up in the user’s chosen “feed aggregator”.
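For the technically curious, here is a minimal sketch, in TypeScript for the browser, of what the aggregator side of that bargain looks like once you have subscribed: fetch the feed’s XML and pull out the item titles and links. The feed URL is a placeholder, and a real reader would add polling, caching, error handling, and would fetch server-side to sidestep cross-origin restrictions.

```typescript
// Minimal sketch of an RSS aggregator's core step: fetch a feed and list its items.
// Runs in a browser context (fetch + DOMParser); the URL below is a placeholder.
interface FeedItem {
  title: string;
  link: string;
  pubDate?: string;
}

async function fetchFeed(url: string): Promise<FeedItem[]> {
  const response = await fetch(url);
  const xml = new DOMParser().parseFromString(await response.text(), "application/xml");
  // RSS 2.0 wraps each entry in an <item> element under <channel>.
  return Array.from(xml.querySelectorAll("item")).map((item) => ({
    title: item.querySelector("title")?.textContent ?? "(untitled)",
    link: item.querySelector("link")?.textContent ?? "",
    pubDate: item.querySelector("pubDate")?.textContent ?? undefined,
  }));
}

// Usage: poll the subscription and print the freshest headlines.
fetchFeed("https://example.com/feed.xml").then((items) =>
  items.slice(0, 10).forEach((i) => console.log(`${i.title} -> ${i.link}`))
);
```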

RSS aggregation applications and add-ons quickly became a very active field as this link attests. Unfortunately, the user interfaces for these implementations – how you add, delete, and navigate subscriptions — often left much to be desired.

Enter Google Reader, introduced in 2005. Google’s RSS aggregator mowed down everything in its path as it combined the company’s Cloud resources with a clean, sober user interface that was supported by all popular browsers…and the price was right: free.

I was hooked. I just checked: I have 60 Google Reader subscriptions. But the number is less important than the way the feeds are presented: I can quickly search for subscriptions, group them in folders, search through past feeds, email posts to friends, fly over article summaries; and all of this is made even easier through simple keyboard shortcuts (O for Open, V for a full View on the original Web page, Shift-A to declare an entire folder as Read).

Where I once read four newspapers with my morning coffee I now open my laptop or tablet and skim my customized, ever-evolving Google Reader list. I still wonder at the breadth and depth of available feeds, from dissolute gadgetry to politics, technology, science, languages, cars, sports…

I join the many who mourn Google Reader’s impending demise. Fortunately, there are alternatives that now deserve more attention.

I’ll start with my Palo Alto neighbor, Flipboard. More than just a Google Reader replacement, Flipboard lets you compose and share personalized magazines. It’s very well done although, for my own daily use, its very pretty UI gets in the way of quickly surveying the field of news I’m interested in. Still, if you haven’t loaded it onto your iOS or Android device, you should give it a try.

Next we have Reeder, a still-evolving app that’s available on the Mac, iPhone, and iPad. It takes your Google Reader subscriptions and presents them in a “clean and well-lighted” way.

For me, Feedly looks like the best way to support one’s reading habit (at least for today). Feedly is offered as an app on iOS and Android, and as extensions for Chrome, Firefox, and Safari on your laptop or desktop (PC or Mac). Feedly is highly customizable: Personally, I like the ability to emulate Reader’s minimalist presentation, others will enjoy a richer, more graphical preview of articles. For new or “transferring” users, it offers an excellent Feedback and Knowledge Base page.

Feedly makes an important and reassuring point: There might be a paid-for version in the future, a way to measure the app’s real value, and to create a more lasting bond between users and the company.

There are many other alternatives: a Google search for “Google Reader replacement” (the entire phrase) yields nearly a million hits (interestingly, Bing comes up with only 35K).

This brings us back to the unanswered question: Why did Google decide to kill a product that is well-liked and well-used by well-informed (and I’ll almost dare to add: well-heeled) users?

I recently went to a Bring Your Parents to Work day at Google. (Besides comrades of old OS Wars, we now have a child working there.) The conclusion of the event was the weekly TGIF-style bash (which is held on Thursdays in Mountain View, apparently to allow Googlers in other time zones to participate). Both founders routinely come on stage to make announcements and answer questions.

Unsurprisingly, someone asked Larry Page a question about Google Reader and got the scripted “too few users, only about a million” non-answer, to which Sergey Brin couldn’t help quipping that a million is about the number of remote viewers of the Google I/O developer conference Page had just bragged about. Perhaps the decision to axe Reader wasn’t entirely unanimous. And never mind the fact that Feedly already seems to have 3 million subscribers.

The best explanation I’ve read (on my Reader feeds) is that Google wants to draw the curtain, perform some surgery, and reintroduce its RSS reader as part of Google+, perhaps with some Google Now thrown in:

While I can’t say I’m a fan of squirrelly attempts to draw me into Google+, I must admit that RSS feeds could be a good fit… Stories could appear as bigger, better versions of the single-line entry in Reader, more like the big-photo entries that Facebook’s new News Feed uses. Even better, Google+ entries have built in re-sharing tools as well as commenting threads, encouraging interaction.

We know Google takes the long view, often with great results. We’ll see if killing Reader was a misstep or another smart way to draw Facebook users into Google’s orbit.

It may come down to a matter of timing. For now, Google Reader is headed for the morgue. Can we really expect that Google’s competitors — Yahoo!, Facebook, Apple, Microsoft — will resist the temptation to chase the ambulance?

–JLG@mondaynote.com

 

The Silly Web vs. Native Apps Debate

 

Mark Zuckerberg admits Facebook was wrong to bet on HTML5 for its mobile app. Indeed, while the previous version was a mere wrapper around HTML code, the latest iOS app is much improved, faster, nimbler. Facebook’s CEO courageously admits the error, changes course, and promises to ship an equally native Android app in the near future.

A fresh set of broadsides from the usual suspects predict, with equal fervor, the ultimate success/failure of HTML5/native apps. See, for example, Why Web Apps Will Crush Native Apps.

This is bizarre.

We don’t know what Zuckerberg and the Facebook technical team were thinking, exactly, when they chose to take the HTML5 route, but the decision was most likely guided by forces of culture and economy.

Perhaps more than any other company in the HTTP age, Facebook is a product of the Web. The company’s engineers spent days and nights in front of big-screen monitors writing JavaScript, PHP, and HTML code for PC users. And no website has been so richly and promptly rewarded: Facebook is now the #1 or #2 most-visited site (depending on whether you count pageviews or unique visitors).

Even as the Smartphone 2.0 era dawned in late 2007, there was no reason to jump the Web app ship: Smartphone numbers were low compared to PCs. And I’m guessing that when Facebook first looked at smartphones they saw “PCs, only smaller”. They were not alone.

Then we have the good old Write Once Run Anywhere (WORA) refrain. Developing and maintaining native apps for different devices is time-consuming and expensive. You need to hire separate teams of engineers/designers/QA, experts at squeezing the best performance from their respective devices, educing the most usable and intuitive UI, deftly tracking down elusive bugs. And even then, your product will suffer from “feature drift”: The ostensibly separate-but-equal native apps will differ in subtle and annoying ways.

HTML5 solves these problems. In theory.

In practice, two even more vexing dilemmas emerge: Performance and The Lowest Common Denominator.

Mobile users react poorly to sluggish performance. Native apps have more direct access to optimized OS modules and hardware features…which means better performance and faster, more immediate interaction. That’s why games, always looking for speed, are almost universally native apps, and it’s why all smartphone vendors promote native apps: their app stores sport hundreds of thousands of titles.

For the Lowest Common Denominator, consider a player piano that can read a scroll of eight parallel punched hole tracks, a maximum of eight simultaneous notes. You want to create richer music, perhaps on an organ that has multiple ranks, pedals, and stops? Sorry, we need your music to play everywhere, so we’ll need to enforce the eight note standard.

In the world of smartphones, sticking with the Lowest Common Denominator means trouble for new platform features, hardware or software, that aren’t available everywhere. A second camera, a new sensor, extended graphic primitives? Tough luck, the Web app can’t support them. The WORA approach stands in the way of creativity and innovation by demanding uniformity. This is especially wrong in a world as new, as fast-changing as the Smartphone 2.0 universe.
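As a small illustration of the Lowest Common Denominator problem, here is a hedged TypeScript sketch of what a web app has to do: probe at runtime for capabilities that native code could simply assume on a given device, and fall back when the answer is no. The specific checks use real browser APIs, but which of them exist varies by platform and version, which is precisely the point.

```typescript
// A web app cannot assume new hardware or platform features; it must probe
// for them at runtime and degrade when they are missing. A native app built
// for a specific device can simply use the feature.
async function describeCapabilities(): Promise<string[]> {
  const notes: string[] = [];

  // Cameras: count video inputs, if the API is present at all.
  if (navigator.mediaDevices?.enumerateDevices) {
    const devices = await navigator.mediaDevices.enumerateDevices();
    const cameras = devices.filter((d) => d.kind === "videoinput").length;
    notes.push(`cameras visible to the web app: ${cameras}`);
  } else {
    notes.push("camera enumeration unavailable: fall back to a plain file upload");
  }

  // Motion sensors: exposed only on some platforms (and behind a permission on others).
  notes.push(
    "DeviceMotionEvent" in window
      ? "motion sensor events may be available"
      : "no motion events: disable tilt-based interactions"
  );

  // Newer graphics primitives: probe, don't assume.
  const canvas = document.createElement("canvas");
  notes.push(
    canvas.getContext("webgl") ? "WebGL available" : "no WebGL: render a static fallback"
  );

  return notes;
}

describeCapabilities().then((notes) => notes.forEach((n) => console.log(n)));
```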

Pointing to the performance and lowest common denominator problems with the WORA gospel shouldn’t be viewed as a criticism of HTML5. This new (and still evolving) version of the Web’s content language provides much improved expressive power and cleans up many past sins.

Also, there are usage scenarios where Web apps make sense and run well across several platforms. Gmail and Google Docs are prime examples; they work well on all types of PCs and laptops… But Google took pains to write native Android and iOS apps to provide better access to Google Docs on leading smartphones.

Forget facts and nuance. “It Depends” isn’t as enticing a headline as the fight between Right and Wrong.

JLG@mondaynote.com

Apple Never Invented Anything

“Monsieur Voiture, you hopeless [redacted French slur], you still can’t prepare a proper mayonnaise! I’ll show you one last time while standing on one foot…”

[Bear with me, the connection with today's title will become apparent in a moment.]

The year is 1965, I’m midway through a series of strange jobs that I take between dropping out of college and joining HP in 1968 — my “psychosocial moratorium”, in California-speak. This one approaches normal: I’m a waiter in a Paris restaurant on rue Galande, not far from Notre-Dame.

Every day, before service starts, it’s my job to make vinaigrette, remoulade, and mayonnaise, condiments for the hors d’oeuvres (French for appetizers) I’ll wheel around on a little cart — hence the Monsieur Voiture snicker from the chef.

The vinaigrette and remoulade are no problem, but the mayonnaise is not my friend: Day after day, my concoction “splits” and the chef berates me.

So now, pushed beyond limit, he grabs a cul-de-poule (a steel bowl with a round bottom), throws in the mustard, vinegar, and a bit of oil, cracks an egg on the bowl’s edge, separates and drops the yolk into the mixture — all with one hand. I see an opportunity to ingratiate myself: Obligingly, I reach for a whisk.

“No, all I need is a fork.”

Up on one foot, as promised, he gives the mixture a single, masterful stroke — and the mayonnaise begins to emulsify, I see the first filaments. The chef sniffs and walks away. I had been trying too hard…the rest was obvious: a thin trickle of oil, whisk calmly.

Clearly, the episode left its mark, and it came back to mind when I first saw the iPad.

For thirty years, the industry had tried to create a tablet, and it had tried too hard. The devices kept clotting, one after the other. Alan Kay’s Dynabook, Go, Eo, GridPad, various Microsoft-powered Tablet PCs, even Apple’s Newton in the early nineties….they didn’t congeal, nothing took.

Then, in January 2010, Chef Jobs walks on stage with the iPad and it all becomes obvious, easy. Three decades of failures are forgotten.

This brings us to last week’s animated debate about Apple’s talent for invention in the Comments section of the “Apple Tax” Monday Note:

“…moving from stylus to touch (finger) was a change in enabling technology, not some invention by Apple – even gesture existed way back before the iPhone. Have an IPAQ on my desk as a reminder – a product ahead of the implementing technology!
Unfortunately Apple have run out of real innovation…”

In other words: “Nothing new, no innovation, the ingredients were already lying around somewhere…”. The comment drew this retort from another reader:

“iPaq as a precursor to iPad?
Are you on drugs? Right now?”

Drugged or sober, the proud iPaq owner’s argument comes down to this: The basic ingredients are the same. Software is all zeroes and ones, after all. The quantity and order may vary, but that’s about it. Hardware is just protons, neutrons, electrons and photons buzzing around, nothing original. Apple didn’t “invent” anything; the iPad is simply their variation, their interpretation of the well-known tablet recipe.

By this myopic logic, Einstein didn’t invent the theory of relativity, Henri Poincaré had similar ideas before him, as did Hendrik Lorentz earlier still. And, come to think of it, Maxwell’s equations contain all of the basic ingredients of relativity; Einstein “merely” found a way to combine them with another set of parts, Newtonian mechanics.

Back to the kitchen: Where does talent reside? Having access to commonly available ingredients or in the subtlety, the creativity — if not the magic — of their artful combination? Why are the great chefs so richly compensated and, yes, imitated? Alain Ducasse, Alain Senderens, and Joel Robuchon might be out of our price range, but Pierre Herme’s macarons are both affordable and out of this world — try the Ispahan, or the salted caramel, or… (We’ll note that he opened his first boutique in Tokyo, where customers pay attention to details.)

In cars, Brand X (I don’t want to offend) and BMW (I don’t drive one) get their steel, aluminum, plastics, rubber, and electronics from similar — and often the same — suppliers. But their respective chefs coax the ingredients differently, with markedly different aesthetic and financial outcomes.

Did IBM invent the PC? Did HP invent the pocket calculators or desktop computers that once put them at the top of the high-tech world? Did Henry Ford invent the automobile?

So, yes, if we stick to the basic ingredients list, Apple didn’t invent anything…not the Apple ][, nor the Macintosh, not the iPod, the iPhone, or the iPad…to say nothing of Apple Stores and App Stores. We’d seen them all before, in one fashion or another.

And yet, we can’t escape a key fact: The same chef was involved in all these creations. He didn’t write the code or design the hardware, but he was there in the kitchen — the “executive chef” in trade parlance — with a unique gift for picking ingredients and whipping up unique products.

JLG@mondaynote.com

As a postscript, two links:

– Steve Wildstrom valiantly attempts to clear up the tech media’s distortions of the patents that were — and weren’t — part of the Apple-Samsung trial:

Whatever happens on appeal, I think the jury did an admirable job making sense of the case they were given. They certainly did better than much of the tech media, which have made a complete mess of the verdict.

– This August 2009 Counternotions post provides a well-reasoned perspective on the iPhone’s risks and contributions, as opposed to being a mere packaging job. (The entire Counternotions site is worth reading for its spirited dissection of fashionable “truths”.)


Proof by Mask

Web design is in bad shape. In the applications boom, news-related websites end up as collateral damage. For graphic designers, the graphics tools and the computer languages used to design apps for tablets and smartphones have unleashed a great deal of creativity. The transformation took longer than expected, but great designs are beginning to appear in iPad applications (in previous Monday Notes, we already discussed Business Week+ and the new Guardian app). The best applications get rid of the print layout; they start from a blank slate in which a basic set of rules (typefaces, general structure of a page, color codes) are adapted to the digital format. Happily, we stand at the very beginning of a major evolution in news-related graphic design for apps. And this new world is proving to be a killer for the traditional web, which, in turn, seems to age fast.

The graphic evolution of the web must deal with two negative forces: its language framework doesn’t evolve fast enough, and it faces the burden of messy advertising.

Less than a year ago, the potential in the latest iteration of the HyperText Markup Language, a.k.a. HTML5, thrilled everyone: it was seen as the decisive, if not definitive, upgrade of the web, both functionally and visually. Fact is, it didn’t take off — yet. The reasons are many: backward compatibility (not everyone uses the latest web browser), poor documentation that makes development uncertain, stability and performance issues. There are interesting initiatives but nothing compelling so far. None of the large digital media have made the jump.

For advertising, the equation is straightforward. The exponential rise of inventories, coupled with fragile economic conditions, has pushed ad agencies to ask for more (space) for less money. And as for creativity, the encephalogram remains desperately flat.

The result is this:

This is the first screen of the French website 20 minutes’ home page. A good site indeed, doing quite well audience-wise, but which yields too much to advertising. In its case, the page carries an “arch” that frames the content; and, for good measure, a huge banner is inserted below the main header. If you mask the ad, it looks like this:

The weird thing is this: On the one hand, web designers seem to work on increasingly large monitors; on the other, the displays used by readers tend to shrink as more people browse the web on notebooks, tablets or smartphones.

The result is appalling when you try to isolate content directly related to the news. In the series of screenshots below, I selected the first scrolls of pages as they render on my laptop’s 15” display. Then, I overlaid a red mask on everything but the news content: ads, all sorts of promotions, large white spaces, headers and section lists are all hidden away.
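The masking above was done by hand on screenshots; for readers who want to try the same exercise on a live page, here is a rough TypeScript sketch that overlays a translucent red box on elements that look like ads. The selectors are guesses (ad markup varies wildly from site to site), so treat it as an approximation of the manual exercise, not a measurement tool.

```typescript
// Rough reproduction of the "red mask" exercise: cover likely ad containers
// with a translucent red overlay so only editorial content stays visible.
// The selector list is a guess; real pages name their ad slots in many ways.
const AD_SELECTORS = [
  "iframe[src*='ads']",
  "[class*='banner']",
  "[id*='pub']",
  "[class*='sponsor']",
];

function maskLikelyAds(): number {
  const candidates = document.querySelectorAll<HTMLElement>(AD_SELECTORS.join(","));
  candidates.forEach((el) => {
    const box = el.getBoundingClientRect();
    if (box.width === 0 || box.height === 0) return; // skip invisible elements
    const mask = document.createElement("div");
    Object.assign(mask.style, {
      position: "absolute",
      left: `${box.left + window.scrollX}px`,
      top: `${box.top + window.scrollY}px`,
      width: `${box.width}px`,
      height: `${box.height}px`,
      background: "rgba(255, 0, 0, 0.6)",
      zIndex: "99999",
      pointerEvents: "none",
    });
    document.body.appendChild(mask);
  });
  return candidates.length;
}

console.log(`masked ${maskLikelyAds()} suspected ad blocks`);
```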

Flipboard: Threat and Opportunity

Every media company should be afraid of Flipboard. The Palo Alto startup epitomizes the best and the worst of the internet. The best is for the user. The worst is for the content providers that feed its stunning expansion without getting a dime in return. According to Kara Swisher’s AllThingsD, nine months after launching its first version, Flipboard’s new $50m financing round gives the company a $200m valuation.

Many newspapers or magazines employing hundreds of journalists can’t get a $200m valuation today. Last year, for the Groupe Le Monde, an investment bank memo set a valuation of approximately $100m (net of its $86m debt at the time, to be precise). That was for a multimedia company with 644 journalists – OK, one that had been badly managed for years. Still, Flipboard is a 32-person startup with a single product and no revenue yet.

So, what’s the fuss about?

The answer is a simple one: Flipboard is THE product any big media company or, better, any group of media companies should have invented. It’s an iPad application (soon to be supplemented by an iPhone version) that allows readers to aggregate any sources they want: social media such as Twitter, Facebook, Flickr, or any combination of RSS feeds. No need to remember a feed’s often-complicated URL; Flipboard searches it for you and puts the result in a neat eBook-like layout. A striking example: the Google Reader it connects you to suddenly morphs from its Icelandic look into a cozy and elegant set of pages that you actually flip. Flipboard’s most visible feature is an interface that transforms this:

Into this:

All implemented with near perfection. No flickering, no hiccups when a page resizes or layouts adjust.

iPhone 4 Antennas: The Fun Side

We’ll leave serious industry matters aside this week. (If you must, you can wade into Apple’s Q3 numbers here, or luxuriate in the impending ouster of Nokia CEO OPK and consider the list of possible replacements.)

Instead, we’ll look into the fun side of Apple’s antenna, or antennas (not antennae, a solecism from last week. A reader reminded me that antennae is reserved for actual bugs, as in insects.)

As they always do, savvy entrepreneurs immediately saw how to convert a problem into an opportunity, how to spin an unintended “feature” into $$.

Tongue-in-cheekiest of them all, we have Antenn-aid:

Nothing more need be said.

Etsy’s offering is a bit less subtle:

(and the price tag is $4, not the $29 shown in the picture.) The label is intentionally contradictory: placing the sticker over the gap will prevent involuntarily dropped calls, but the humor (and the product) works.

Let’s talk bumpers.

I like the sleek industrial design of the iPhone 4, but because the bumper and the charging dock are mutually exclusive, I’ve remained defiantly “unprotected.” I should have known better. One small slip of the hand, one bounce off the concrete and…

The lethal self-complacency of advertising

Is advertising the next casualty of the ongoing digital tsunami? For now, advertising looks like the patient who developed an asymptomatic form of cancer without realizing how sick he is. Such behavior usually results from excessive confidence in one’s body’s past performance, mixed with a state of permanent denial and a deep sense of superiority, all aided by a complacent environment. The digital graveyard is filled with the carcasses of utterly confident people who all shared this sense of invincibility. The music industry or, to some extent, the news business built large mausoleums for themselves. Today, the advertising industry is working on its own funeral monument. Same mistakes…

Before performing media oncology tests and discussing possible treatments, let me describe which soapbox I’m standing on. Each time I raise the issue of advertising trailing behind the digital train, I get two responses: media execs nod sagely, and later explain how they intend to progressively circumvent the ad food chain; advertising people breezily dismiss my remarks: ‘Anyway, you don’t like us’. Untrue.

First, I’m in the same boat with many of my friends in the news media: a significant part of my income, past and future, rides on advertising. Therefore, my pragmatic self-interest is to see digital advertising thrive.

Second, over my 25-year career, I worked with ad people on many occasions. In the late ’90s, for a year, I even worked at a large ad agency, trying to evangelize multimedia. I met interesting people there, even though I quickly realized we had little in common. And my last job as a managing editor was at a free newspaper, 20 Minutes — 100% dependent on advertising.

I am way more open to this business than most of my journalist colleagues are. No ideological posture or agenda on my part. Today’s note is the result of two years of observations and conversations with digital editors and publishers I met in Europe, the US, and Asia.

Let’s face it: on digital media, advertising hasn’t delivered. In the news business, we have a rule of thumb: an electronic reader brings in 15 to 20 times less advertising revenue than a print reader does. I’ll stop short of saying this dire state of affairs is only attributable to advertising. Between inadequate interfaces, poor marketing, and the certainty that intellectual superiority, just by itself, entitles one to success, the media carry their share of responsibility in this situation. But, for the most part, it is the advertising community that missed the digital target.

Digital advertising sucks. Both on the web and on mobile. Two main reasons for this.

#1: Poor design. Where is the creative talent? Not in digital, that’s only too clear. Let’s face it: most banners, skyscrapers, sliders, pop-ups, you name it, merely act as reader repellents. Judge for yourself.


These “creative works” end up as fodder for ad-blocking systems. Unfortunately, these defense mechanisms are thriving. A Google query for “ad block” yields 1.25 million pages pointing to dozens of browser add-ons. On Firefox, Adblock Plus is the most used extension, with more than 80m downloads and more than 10m active users. The same goes for Chrome, whose ad-blocking extension is downloaded 100,000 times a week and now has over one million users. For Internet Explorer, there are simply too many add-ons to count.

I spotted this comment in an excellent Media Guardian ad blocking story.

“I work for a digital advertising agency. Along with microsites, iPhone apps and long-form digital content, I make banners. Shitloads of them. And I use Adblock Plus. I also advise my friends and colleagues to use it too. This is because most advertising, online or otherwise, is utter crap. And banners contain some of the worst of the crap. Flickering, squirming, farting, buzzing crap”.


Reconciling efficiency with serendipity

For digital media publishers, Design is the biggest challenge. Business model is king, of course, but it needs strong design to reign. The same goes for content. Without clever navigation, after a quick stint on the home page, a good story might die buried deep inside the bowels of a site before realizing its full potential. (Smartly enough, Slate.com used to call its stacks of forgotten stories “compost”.) In this regard, many web structures are lethal: they give way too much weight and visibility to novelty, at the expense of relevancy.

After more than fifteen years of internet presence, many online publications are still having trouble moving past the newspaper metaphor. Columns, pages, sections, vertical scrolling…, the old-world graphical newspaper attributes still rule the web and contribute to a quasi-failure on three critical counts:

#1 The identity test. Check for yourself: take the home pages of a dozen online newspapers; once you’ve cut off the header displaying the publication’s title, they become extremely difficult to differentiate from one another. To say nothing of connecting to the paper’s brand. General grid, typography, color palettes are almost all alike. Of course, the web’s lowest-common-denominator rule is largely to blame – but there is more.

#2 The personality test. Browse any physical newspaper and see how you “get it”.
First, you’re quickly able to capture the news cycle’s dominance and its intensity. The size of headlines and illustrations, the space devoted to stories, the angles, all give you a clear idea of the day’s flavor.
Second, you’ll be able to quickly assess the publication’s political leaning, again by evaluating the hierarchy of treatments. Because of the computer screen’s narrow funnel, as opposed to the carbon-based paper UI, it is much more difficult to do so for an online publication.

#3 The serendipity test, i.e. the ability to enjoy something that we were not looking for, a sort of semi-accidental discovery. To me, this might be the most important feature/function.
A print publication, magazine or daily, carries a mix of contents that, in turn, leads to a collection of experiences. I pick the publication because of the implicit contract between the publisher and me, a contract based on trust and expectations.
Trust combines editorial judgment and execution; it can fluctuate. A newspaper can become less trustworthy if its editorial leadership weakens. Expectations carry the serendipity factor: I know this newspaper is going to trigger my curiosity without taking me into totally uncharted territory. I pick up this paper for its business section, to see how it covers the Goldman Sachs scandal, but I will find a fantastic profile of an unknown tycoon, or an opinion piece that challenges my views on a particular subject but is well balanced and well written.

Catching The iPad Wave: Seven Thoughts

1. Design

The iPad is all about design and interface expectations. From a graphic design standpoint, the quantum leap with the iPad is its ability to render layouts, typefaces, page structure. No more web-HTML lowest common denominator here. What comes out of an art director’s hands gets WYSIWYGed on the iPad – if the implementation is right.
Two things will be needed, though: talent and tools. Talent requirements for the iPad won’t be limited to conceiving great graphic arrangements fitting the 9.7-inch (25 cm) screen. As in multimedia journalism, where storytelling talent has to be enhanced by technical skills, layout and contents will have to be supported by great technical implementation. Clumsiness is not an option.
As for the tools, there is a need for what I’ll call “the first layer” of content creation, i.e. the design phase that stands above the hard coding. What we need is a set of tools to be used by production people to arrange contents; it is badly needed: consider how often multimedia designers rely on… Post-it notes to sketch their projects out. Apple could provide this toolkit, of course. As for others, don’t count on Quark XPress, which badly missed the web design train; look instead to Adobe, said to have an iPad design toolbox in the pipeline.

The WSJ.Com – OK for a Generation 1 app, but...


2. Innovation / Disruption

The app market is likely to split into two different paths. “Generation 1” iPad applications will be a direct translation of the print reading experience, slightly improved by using the finger-as-a-pointing-device for browsing and zooming. That’s the Wall Street Journal way. No point in blaming their designers; like everybody else, they had to crash-code their apps: game developers are handed console prototypes 12 to 24 months in advance of the actual release; for the iPad, it was just weeks. (We’re told many apps never “saw” an actual iPad before they shipped; they were written and tested entirely on the software simulator that comes with the Apple development tools…)
“Generation 2” apps will have to reinvent navigation, the invitation and handling of user input, and the integration of videos or animated graphics, a key challenge.
Publishers will be well advised to stimulate out-of-the-box thinking by drilling into new pools of designers through public, crowdsourced contests. Inevitably, great stuff will emerge; it will not be applicable for a year or two, but this innovative/disruptive stimulus approach is essential (not only for media, but also for books).