

Forking Apple Brands

design | January 26, 2015 | 21 Comments


by Jean-Louis Gassée

After last week’s lengthy discussion of Apple’s software foibles, today’s fare is lighter but intriguing: The Apple logo is a stamp of excellence that’s proudly worn by the Mac, iPhone, iPad, Watch… why is it withheld from one of Apple’s other major groups of products?

Naming a computer company Apple was a true stroke of genius, the kind that sits beyond the reach of consciousness. With the name came a visual representation. The first, unofficial logo evoked Isaac Newton’s famous epiphany:


(Source: Edible Apple)

Not a stroke of genius. It was too kitschy, too busy, and it failed to provide an easily memorized and recognized image, a signpost to the company’s products. It was quickly replaced with the simple Apple bite logo that we know today:


(Source: Graphic Design 1)

Theories of the logo’s meaning and construction occupy a corner of Apple mythology. Some are misguided (it’s an homage to Alan Turing, it’s a blasphemous reference to the forbidden fruit), while others are playful: A fellow named Barcelos Thiago points out the use of the Fibonacci series in the Apple logo (and in just about everything else).

Apple’s reputation, products, and imagery have coalesced into a brand, a mark that’s burned (as in the word’s origin) into the collective consciousness. Last year, Forbes called Apple the world’s most valuable brand. It’s impossible to measure the contribution of the name and logo to the company’s success, but a peek at the Forbes list shows how little Apple spends advertising its products compared to Microsoft, Google, Samsung, or less technical companies such as Coca-Cola or Louis Vuitton:


A brand exists in a circular relationship with the promises that it makes to the customer. If the products and services deliver on the pledge, the customer is more inclined to swear loyalty to the brand. A close examination of some of these circles brings up apparent paradoxes. Burberry’s, for example, was once credited with inventing the oxymoronic “mass-marketing of exclusivity” – a trick that Louis Vuitton now performs at the highest level, a feat that requires an advertising budget more than four times Apple’s.

The late Fred Hoar, an erudite Harvard graduate who once served as the head of Apple’s Marketing Communications, likened brand advertising to urinating inside one’s dark-blue flannel suit: It makes you feel warm but no one sees anything.

No such waste at Apple. The product, not the brand, is the hero. Apple’s ads focus on the product, on what it does, on the feats that it allows unnamed customers to perform. The brand ascends to where it belongs, above specific products and promotions.

Apple ads are also (mostly) free from celebrity endorsements. The imprimatur of a noted figure can be effective — I’m thinking of George Clooney’s second-banana persona in Nestlé’s tongue-in-cheek Nespresso ads. But the use of endorsements usually feels like an admission that the product needs stilts, that it lacks differentiation.

If Apple ever hires a spokesperson for its iPhones, even if it’s Andrew Wiles or, in a couple of years, a happily retired Barack Obama, you should look elsewhere: The brand has started to unravel. (Apple does, of course, occasionally use celebrities — this ad featuring the Williams sisters for example — but as Adweek points out, it’s rare.)

Given this thinking, what do we make of Apple’s other brand, Beats?

Beats was acquired last year, for $3.2B. The reasons behind the price are still a bit unclear, but we already see ads that aren’t much more than mini-movies of celebrity athletes (Colin Kaepernick, Cesc Fabregas, LeBron James) shutting out the noise of irate fans and implications of social injustice by donning the company’s headphones.

Does the Beats line need stilts in order to achieve differentiation and justify its high price tag? The quality of Beats headphones is a contested subject. One study shows they’re preferred by teens; other painstaking reviews claim there are many better headphones. On this, because of my old ears, I don’t have much of an opinion beyond the Sound Holiday Thoughts written in December 2013.

It’s a novel situation: Apple Thinks Different about the two brands it now owns. The personal computing brand is carefully nurtured, pruned, protected, now at the pinnacle. The other is just as carefully kept apart.

Walk into an Apple store and you’ll see Beats headphones and speakers next to Bose, B&O, and Logitech products. Before the acquisition, this was no surprise: Beats products were just third-party accessories. Now, they’re Apple products, even if they don’t carry the Apple logo. They sit on the shelves next to their competitors, such as the $999.95 Denon Music Maniac Artisan headphones. Can you imagine the Apple Store selling Surface Pro hybrids, stocking them right next to the iPads?

You won’t find Apple logos on Beats headphones, and you won’t find any Apple references in a Beats headphone commercial. The headphones are part of the Beats Music streaming music ecosystem whose goal is to play everywhere, including the Windows Phone Store.

But there’s a problem. As Horace Dediu notes, Apple’s music business has stopped growing, vastly overwhelmed by apps:


The Beats acquisition raised many questions still unanswered: Why get into the headphones and loudspeakers business? What is the Job To Be Done here? Same queries for the Beats Music streaming service, one that might benefit from its bundling with Apple hardware, but whose curation “sounds” less than enthralling thus far, notwithstanding Tim Cook’s enthusiasm.

As the year unfolds, we’ll see how Beats products and services grow the brand, and whether its isolation from the Apple brand is merely prophylactic caution or part of a bigger plan to stay on top of the music world.

The Apple Watch won’t be the only development to… watch this year.


The Future of Mobile Apps for News

design, mobile internet | August 17, 2014 | 34 Comments


The modern smartphone is 7 years old and yet, when it comes to designing mobile applications, we are still barely scratching the surface. Today we’ll see how harnessing technology already embedded in a phone can unleash great potential. 

A mobile news app has simple goals: Capture and retain reader attention, and repeat the process, several times a day. Pretty straightforward. But not that simple in the real world. For a news provider, the smartphone screen is the most challenging environment ever seen. There, chances are that a legacy media company or a pure player will find itself in direct competition, not only with the usual players in its field, but also with Facebook, Snapchat, Instagram and scores of gaming applications. Distraction is just one icon away; any weakness in functional or graphic design can be lethal.


Hence the questions for publishers: What type of news should they put on their mobile apps, what formats, what about images and video, sharing, curation, connections to other apps? Should they be selective or stuff as much as they can in their app? Or should they build easily digestible news blocks à la Circa? Or put more emphasis on a nice, small package of news items, as Yahoo News Digest brilliantly does? Or — the last trend — design an app for fast reading, like The New York Times’ NYT Now? (I must say, NYT Now is my favorite news application — and I tested many; it delivers exactly what it promises: a constantly updated news stream, linking back to NYT stories, and well-curated picks from the web. At the same time, Les Echos, the business media I work for, released LesEchosLive, an app also built around a single vertical “rail” of news with compact stories that expand and collapse as needed — readers seem to like it a lot…)

But… Good as they are, these forays into mobile news consumption are not enough. The mobile tsunami has just begun to unfurl. Soon, it might flood a solid half, then two thirds of all news pageviews — and we can expect further acceleration after the release of the next batch of iPhones: their larger screens will make for more attractive reading.

If mobile is to become the dominant vector for news, retaining readers will be much more challenging than it is on a PC or tablet (though the latter tends to engage readers 10x or sometimes 20x more). A news app needs to be steered with precision. Today’s digital marketing tools allow publishers to select multiple parameters monitoring the use of an application: They can measure when and for how long the app is used, why and where people tend to drop it, what kind of news they like, whether they hit a paywall and give up, and why they do so, etc. Similarly, when an app remains unopened for too long, smart tools can pinpoint the user and remind her of the product’s benefits. These tools are as good as the people who (a) set the parameters, (b) monitor them on a daily basis, and (c) take appropriate action such as launching a broadside of super-targeted emails. But these are incremental measures; they don’t breed exponential growth in viewership (and revenue).

Why not envision a few more steps forward and take advantage of technologies now embedded in every smartphone? A mobile phone is filled with features that, well directed, can significantly improve user experience and provide reams of usage data.

Imagine a news feed natively produced in different formats: long, short, capsules of text, with stills and videos in different sizes and lengths. Every five minutes or so, the feed is updated.

After a while, your smartphone has recorded your usage patterns in great detail. It knows when you read the news and, more importantly, under what conditions. Consider Google Now, the search engine’s intelligent personal assistant: It knows when you are at work or at home and, at the appropriate moment, it will estimate your transit time and suggest an itinerary based on your commute patterns; or take Google Location History, a spectacular — and a bit creepy — service for smartphones (also tablets and laptops) that visualizes your whereabouts. Both Google services generate datasets that can be used to tailor your news consumption. Not only does your phone detect when you are on the move, but it can anticipate your motions.

Based on these data sets, it becomes possible to predict your most probable level of attention at certain moments of the day and to take into account network conditions. A predictive algorithm can thus decide what type of news format you’ll be up for at 7:30am when you’re commuting (quickly jumping from one cell tower to another with erratic bandwidth) and serve faster reads than at 8:00pm, when you’re presumably home, or staying in a quiet place equipped with a decent wifi, and receptive to richer formats.
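Such a predictive rule could start out as little more than a lookup keyed on context. Here is a toy sketch in Python — all thresholds, format names, and the function itself are invented for illustration, not any publisher’s actual algorithm:

```python
def pick_format(hour, bandwidth_mbps, on_the_move):
    """Toy heuristic: choose a news format from predicted reader context.

    hour           -- local hour of day (0-23)
    bandwidth_mbps -- predicted network throughput
    on_the_move    -- True if motion sensors suggest commuting
    """
    if on_the_move or bandwidth_mbps < 1.0:
        # Erratic cell coverage, divided attention: serve quick reads.
        return "short-text"
    if 19 <= hour <= 23 and bandwidth_mbps >= 5.0:
        # Evening at home on decent wifi: richer formats are welcome.
        return "rich-media"
    return "standard"

print(pick_format(7, 0.5, True))     # 7:30am commuter
print(pick_format(20, 20.0, False))  # 8pm at home on wifi
```

A production system would of course learn these boundaries from the usage history described above rather than hard-code them.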

By anticipating your moves, your phone can quickly download heavy media such as video while network conditions are good, saving meager bandwidth for essential updates. In addition, the accelerometer and internal gyroscope can tell a lot about reading conditions: standing up in a crowded subway, or waiting for your meeting to start.

By poring over such data, analytics specialists can understand what is read, watched, and heard, at what time of the day and in which environment. Do users favor snippets when commuting? What’s the maximum word-length for a story to be read in the subway without being dropped, and what length is more likely to induce future reading? What’s the optimal duration for a video? What kind of news package fits the needs and attention of someone on the move? What sort of move, by the way? Motion and vibration in a car are completely different from those on the Bay Area transit system or London’s Tube. Accelerometers and motion sensors can tell that for sure — and help decide whether it’s better to serve the smartphone owner a clever podcast while she is likely to be stuck in her car for the next 50 minutes on Highway 101 heading to San Jose (as revealed by her trajectories and GPS patterns of the last few months), or to favor text and preloaded videos for BART commuters between Oakland and San Francisco.

This approach, based on a large spectrum of pattern analytics, can enormously increase readers’ appetite for news. This is yet another reason for media companies to lean more and more on the technology side. Until now, with very few exceptions, legacy media have been slow to move in that direction. As someone who loves good journalism and smart news formats, the last thing I want to see is newcomers providing cheap editorial content succeed at capturing people’s attention only because they were first to harness these technologies. We’ve had that experience on the web; let’s not make the same mistake twice.


The Hybrid Tablet Temptation

design, hardware | January 5, 2014 | 22 Comments


In no small part, the iPad’s success comes from its uncompromising Do Less To Do More philosophy. Now a reasonably mature product, can the iPad expand its uses without falling into the hybrid PC/tablet trap?

When the iPad came out, almost four years ago, it was immediately misunderstood by industry insiders – and joyously embraced by normal humans. Just Google iPad naysayer for a few nuggets of iPad negativism. Even Google’s CEO, Eric Schmidt, couldn’t avoid the derivative trap: He saw the new object as a mere evolution of an existing one and shrugged off the iPad as a bigger phone. Schmidt should have known better: he had been an Apple director in the days when Jobs believed the two companies were “natural allies”.

I was no wiser. I got my first iPad on launch day and was immediately disappointed. My new tablet wouldn’t let me do what I did on my MacBook Air – or my tiny EeePC running Windows XP (not Vista!). For example, writing a Monday Note on an iPad was a practical impossibility – and still is.

I fully accept the personal nature of this view and, further, I don’t buy the media consumption vs. productivity dichotomy Microsoft and its shills (Gartner et al.) tried to foist on us. If by productivity we mean work, work product, earning one’s living, tablets in general and the iPad in particular have more than made the case for their being productivity tools as well as education and entertainment devices.

Still, preparing a mixed media document, even a moderately complex one, irresistibly throws most users back to a conventional PC or laptop. With multiple windows and folders, the PC lets us accumulate text, web pages, spreadsheets and graphics to be distilled, cut and pasted into the intended document.

Microsoft now comes to the rescue. Their hybrid Surface PC/Tablet lets you “consume” media, play games in purely tablet mode – and switch to the comfortable laptop facilities offered by Windows 8. The iPad constricts you to ersatz folders, preventing you from putting your document’s building blocks in one place? No problem, the Surface device features a conventional desktop User Interface, familiar folders, comfy Office apps as well as a “modern” tile-based Touch UI. The best of both worlds, skillfully promoted in TV ads promising work and fun rolled into one device.

What’s not to like?

John Kirk, a self-described “recovering attorney”, whose tightly argued and fun columns are always worth reading, has answers. In a post on Tablets Metaphysics – unfortunately behind a paywall – he focuses on the Aristotelian differences between tablets and laptops. Having paid my due$$ to the Techpinions site, I will quote Kirk’s summation [emphasis mine]:

Touch is ACCIDENTAL to a Notebook computer. It’s plastic surgery. It may enhance the usefulness of a Notebook but it doesn’t change the essence of what a Notebook computer is. A keyboard is ACCIDENTAL to a Tablet. It’s plastic surgery. It may enhance the usefulness of a Tablet, but it doesn’t change the essence of what a Tablet is. Further — and this is key — a touch input metaphor and a pixel input metaphor must be wholly different and wholly incompatible with one another. It’s not just that they do not comfortably co-exist within one form factor. It’s also that they do not comfortably co-exist within our mind’s eye.

In plain words, it’s no accident that tablets and notebooks are distinctly different from one another. On the contrary, their differences — their incompatibilities — are the essence of what makes them what they are.

Microsoft, deeply set in the culture of backwards compatibility that served it so well for so long, did the usual thing: it added a tablet layer on top of Windows 7. The result didn’t take the market by storm and appears to have caused the exit of Steve Sinofsky, the Windows czar now happily ensconced at Harvard Business School and a Board Partner with the Andreessen Horowitz venture firm. Many think the $900M Surface RT write-off also contributed to Ballmer’s August 2013 resignation.

Now equipped with hindsight, Apple’s decision to stick to a “pure” tablet looks more inspired than lucky. If we remember that a tablet project preceded the iPhone, only to be set aside for a while, Apple’s “stubborn minimalism”, its refusal to hybridize the iPad might be seen as the result of long experimentation – with more than a dash of Steve Jobs (and Scott Forstall) inflexibility.

Apple’s bet can be summed up thus: MacBooks and iPads have their respective best use cases, they both reap high customer satisfaction scores. Why ruin a good game?

Critics might add: Why sell one device when we can sell two? Apple would rather “force” us to buy two devices in order to maximize revenue. On this, Tim Cook often reminds Wall Street of Apple’s preference for self-cannibalization, for letting its new and less expensive products displace existing ones. Indeed, the iPad keeps cannibalizing laptops, PCs and Macs alike.

All this leaves one question unanswered: Is that it? Will the iPad fundamentals stay the way they have been from day one? Are we going to be thrown back to our notebooks when composing the moderately complex mixed-media documents I earlier referred to? Or will the iPad hardware/software combination become more adept at such uses?

To start, we can eliminate a mixed-mode iOS/Mac device. Flip a switch, it’s an iPad, flip it again, add a keyboard/touchpad and you have a Mac. No contraption allowed. We know where to turn to for that.

Next, imagine a new iOS version that allows multiple windows to appear on the iPad screen; folders are no longer separately attached to each app as they are today but let us store documents from multiple apps in one place. Add a blinking cursor for text and you have… a Mac, or something too close to a Mac but still different. Precisely the reason why that won’t work.

(This might pose the question of an A7 or A8 processor replacing the Intel chip inside a MacBook Air. It can be done – a “mere matter of software” – but how much would it cut from the manufacturing cost? $30 to $50 perhaps. Nice but not game-changing, a question for another Monday Note.)

More modest, evolutionary changes might still be welcome. Earlier this year, Counternotions proposed a slotted clipboard as An interim solution for iOS ‘multitasking’:

[…] until Apple has a more general solution to multitasking and inter-app navigation, the four-slot clipboard with a visible UI should be announced at WWDC. I believe it would buy Ive another year for a more comprehensive architectural solution, as he’ll likely need it.

This year’s WWDC came and went with the strongest iOS update so far, but no general or interim solution to the multitasking and inter-app navigation discussed in the post. (Besides editing the Counternotions blog, this erudite and enigmatic author can be followed on Twitter as @Kontra.)

A version of the above suggestion could be conceptualized as a floating dropbox to be invoked when needed, hovering above the document worked on. This would not require the recreation of a PC-like windows and desktop UI. Needed components could be extracted from the floating store, dragged and dropped on the work in process.

We’ll have to wait and see if and how Apple evolves the iPad without falling into the hybrid trap.

On even more speculative ground, a recent iPad Air intro video offered a quick glimpse of the Pencil stylus by FiftyThree, the creators of the well-regarded Paper iPad app. So far, styli haven’t done well on the iPad. Apple only stocks children-oriented devices from Disney and Marvel — nothing else, in spite of the abundance of such devices offered on Amazon. Perhaps we’ll someday see Apple grant Bill Gates his wish, as recounted by Jobs’ biographer Walter Isaacson:

“I’ve been predicting a tablet with a stylus for many years,” he told me. “I will eventually turn out to be right or be dead.”

Someday, we might see an iPad, larger or not, Pro or not, featuring a screen with more degrees of pressure sensitivity. After seeing David Hockney’s work on iPads at San Francisco’s de Young museum, my hopes are high.



Goodbye Google Reader

design, online publishing | June 17, 2013 | 22 Comments


Three months ago, Google announced the “retirement” of Google Reader as part of the company’s second spring cleaning. On July 1st — two weeks from today — the RSS application will be given a gold watch and a farewell lunch, then it will pack up its bits and leave the building for the last time.

The other items on Google’s spring cleaning list, most of which are tools for developers, are being replaced by superior (or simpler, friendlier) services: Are you using CalDAV in your app? Use the Google Calendar API, instead; Google Map Maker will stand in for Google Building Maker; Google Cloud Connect is gone, long live Google Drive.

For Google Reader’s loyal following, however, the company had no explanation beyond a bland “usage has declined”, and it offered no replacement nor even a recommendation other than a harsh “get your data and move on”:

Users and developers interested in RSS alternatives can export their data, including their subscriptions, with Google Takeout over the course of the next four months.

The move didn’t sit well with users whose vocal cords were as strong as their bond to their favorite blog reader. James Fallows, the polymathic writer for The Atlantic, expressed a growing distrust of the company’s “experiments” in A Problem Google Has Created for Itself:

I have already downloaded the Android version of Google’s new app for collecting notes, photos, and info, called Google Keep… Here’s the problem: Google now has a clear enough track record of trying out, and then canceling, “interesting” new software that I have no idea how long Keep will be around… Until I know a reason that it’s in Google’s long-term interest to keep Keep going, I’m not going to invest time in it or lodge info there.

The Washington Post’s Ezra Klein echoed the sentiment (full article here):

But I’m not sure I want to be a Google early adopter anymore. I love Google Reader. And I used to use Picnik all the time. I’m tired of losing my services.

What exactly did Google Reader provide that got its users, myself included, so excited, and why do we take its extermination so personally?

Reading is, for some of us, an addiction. Sometimes the habit turns profitable: The hours I spent poring over computer manuals on Saturday mornings in my youth may have seemed cupidic at the time, but the “research” paid off.

Back before the Web flung open the 10,000 Libraries of Alexandria that I dreamed of in the last chapter of The Third Apple my reading habit included a daily injection of newsprint.  But as online access to real world dailies became progressively more ubiquitous and easier to manage, I let my doorstep subscriptions lapse (although I’ll always miss the wee hour thud of the NYT landing on our porch…an innocent pleasure unavailable in my country of birth).

Nothing greased the move to all-digital news as much as the RSS protocol (Really Simple Syndication, to which my friend Dave Winer made crucial contributions). RSS lets you syndicate your website by adding a few lines of code. To subscribe, a user simply pushes a button. When you update your blog, the new post is automatically delivered to the user’s chosen “feed aggregator”.
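Under the hood, an RSS feed is just a small XML document that an aggregator periodically fetches and walks. As a minimal sketch of that parsing step — the feed snippet, titles, and URLs below are invented for illustration — here is the core loop in Python, using only the standard library:

```python
import xml.etree.ElementTree as ET

def parse_rss(xml_text):
    """Extract (title, link) pairs from a minimal RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.iter("item"):  # each <item> is one post
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# A made-up two-post feed, for demonstration only.
feed = """<rss version="2.0"><channel>
  <title>Monday Note</title>
  <item><title>Forking Apple Brands</title><link>https://example.com/1</link></item>
  <item><title>Goodbye Google Reader</title><link>https://example.com/2</link></item>
</channel></rss>"""

print(parse_rss(feed))
```

A real aggregator such as Google Reader adds the hard parts on top of this loop: polling schedules, deduplication, read/unread state, and a UI — but the syndication format itself really is this simple.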

RSS aggregation applications and add-ons quickly became a very active field as this link attests. Unfortunately, the user interfaces for these implementations – how you add, delete, and navigate subscriptions — often left much to be desired.

Enter Google Reader, introduced in 2005. Google’s RSS aggregator mowed down everything in its path as it combined the company’s Cloud resources with a clean, sober user interface that was supported by all popular browsers…and the price was right: free.

I was hooked. I just checked: I have 60 Google Reader subscriptions. But the number is less important than the way the feeds are presented: I can quickly search for subscriptions, group them in folders, search through past feeds, email posts to friends, fly over article summaries, and all of this is made even easier through simple keyboard shortcuts (O for Open, V for a full View on the original Web page, Shift-A to declare an entire folder as Read).

Where I once read four newspapers with my morning coffee I now open my laptop or tablet and skim my customized, ever-evolving Google Reader list. I still wonder at the breadth and depth of available feeds, from dissolute gadgetry to politics, technology, science, languages, cars, sports…

I join the many who mourn Google Reader’s impending demise. Fortunately, there are alternatives that now deserve more attention.

I’ll start with my Palo Alto neighbor, Flipboard. More than just a Google Reader replacement, Flipboard lets you compose and share personalized magazines. It’s very well done although, for my own daily use, its very pretty UI gets in the way of quickly surveying the field of news I’m interested in. Still, if you haven’t loaded it onto your iOS or Android device, you should give it a try.

Next we have Reeder, a still-evolving app that’s available on the Mac, iPhone, and iPad. It takes your Google Reader subscriptions and presents them in a “clean and well-lighted” way:

For me, Feedly looks like the best way to support one’s reading habit (at least for today). Feedly is offered as an app on iOS and Android, and as extensions for Chrome, Firefox, and Safari on your laptop or desktop (PC or Mac). Feedly is highly customizable: Personally, I like the ability to emulate Reader’s minimalist presentation, others will enjoy a richer, more graphical preview of articles. For new or “transferring” users, it offers an excellent Feedback and Knowledge Base page:

Feedly makes an important and reassuring point: There might be a paid-for version in the future, a way to measure the app’s real value, and to create a more lasting bond between users and the company.

There are many other alternatives, a Google search for “Google Reader replacement” (the entire phrase) yields nearly a million hits (interestingly, Bing comes up with only 35k).

This brings us back to the unanswered question: Why did Google decide to kill a product that is well-liked and well-used by well-informed (and I’ll almost dare to add: well-heeled) users?

I recently went to a Bring Your Parents to Work day at Google. (Besides comrades of old OS Wars, we now have a child working there.) The conclusion of the event was the weekly TGIF-style bash (which is held on Thursdays in Mountain View, apparently to allow Googlers in other time zones to participate). Both founders routinely come on stage to make announcements and answer questions.

Unsurprisingly, someone asked Larry Page a question about Google Reader and got the scripted “too few users, only about a million” non-answer, to which Sergey Brin couldn’t help but quip that a million is about the number of remote viewers of the Google I/O developer conference Page had just bragged about. Perhaps the decision to axe Reader wasn’t entirely unanimous. And never mind the fact that Feedly already seems to have 3 million subscribers.

The best explanation I’ve read (on my Reader feeds) is that Google wants to draw the curtain, perform some surgery, and reintroduce its RSS reader as part of Google+, perhaps with some Google Now thrown in:

While I can’t say I’m a fan of squirrelly attempts to draw me into Google+, I must admit that RSS feeds could be a good fit… Stories could appear as bigger, better versions of the single-line entry in Reader, more like the big-photo entries that Facebook’s new News Feed uses. Even better, Google+ entries have built in re-sharing tools as well as commenting threads, encouraging interaction.

We know Google takes the long view, often with great results. We’ll see if killing Reader was a misstep or another smart way to draw Facebook users into Google’s orbit.

It may come down to a matter of timing. For now, Google Reader is headed for the morgue. Can we really expect that Google’s competitors — Yahoo!, Facebook, Apple, Microsoft — will resist the temptation to chase the ambulance?




Apple Never Invented Anything

design, hardware | September 2, 2012 | 111 Comments

“Monsieur Voiture, you hopeless [redacted French slur], you still can’t prepare a proper mayonnaise! I’ll show you one last time while standing on one foot…”

[Bear with me, the connection with today’s title will become apparent in a moment.]

The year is 1965, I’m midway through a series of strange jobs that I take between dropping out of college and joining HP in 1968 — my “psychosocial moratorium”, in California-speak. This one approaches normal: I’m a waiter in a Paris restaurant on rue Galande, not far from Notre-Dame.

Every day, before service starts, it’s my job to make vinaigrette, remoulade, and mayonnaise, condiments for the hors d’oeuvres (French for appetizers) I’ll wheel around on a little cart — hence the Monsieur Voiture snicker from the chef.

The vinaigrette and remoulade are no problem, but the mayonnaise is not my friend: Day after day, my concoction “splits” and the chef berates me.

So now, pushed beyond limit, he grabs a cul-de-poule (a steel bowl with a round bottom), throws in the mustard, vinegar, and a bit of oil, cracks an egg on the bowl’s edge, separates and drops the yolk into the mixture — all with one hand. I see an opportunity to ingratiate myself: Obligingly, I reach for a whisk.

“No, all I need is a fork.”

Up on one foot, as promised, he gives the mixture a single, masterful stroke — and the mayonnaise begins to emulsify, I see the first filaments. The chef sniffs and walks away. I had been trying too hard…the rest was obvious: a thin trickle of oil, whisk calmly.

Clearly, the episode left its mark, and it came back to mind when I first saw the iPad.

For thirty years, the industry had tried to create a tablet, and it had tried too hard. The devices kept clotting, one after the other. Alan Kay’s Dynabook, Go, Eo, GridPad, various Microsoft-powered Tablet PCs, even Apple’s Newton in the early nineties….they didn’t congeal, nothing took.

Then, in January 2010, Chef Jobs walks on stage with the iPad and it all becomes obvious, easy. Three decades of failures are forgotten.

This brings us to last week’s animated debate about Apple’s talent for invention in the Comments section of the “Apple Tax” Monday Note:

“…moving from stylus to touch (finger) was a change in enabling technology, not some invention by Apple – even gesture existed way back before the iPhone. Have an IPAQ on my desk as a reminder – a product ahead of the implementing technology!
Unfortunately Apple have run out of real innovation…”

In other words: “Nothing new, no innovation, the ingredients were already lying around somewhere…”. The comment drew this retort from another reader:

“iPaq as a precursor to iPad?
Are you on drugs? Right now?”

Drugged or sober, the proud iPaq owner falls into the following trap: The basic ingredients are the same. Software is all zeroes and ones, after all. The quantity and order may vary, but that’s about it. Hardware is just protons, neutrons, electrons and photons buzzing around, nothing original. Apple didn’t “invent” anything, the iPad is simply their variation, their interpretation of the well-known tablet recipe.

By this myopic logic, Einstein didn’t invent the theory of relativity, Henri Poincaré had similar ideas before him, as did Hendrik Lorentz earlier still. And, come to think of it, Maxwell’s equations contain all of the basic ingredients of relativity; Einstein “merely” found a way to combine them with another set of parts, Newtonian mechanics.

Back to the kitchen: Where does talent reside? In having access to commonly available ingredients, or in the subtlety, the creativity — if not the magic — of their artful combination? Why are the great chefs so richly compensated and, yes, imitated? Alain Ducasse, Alain Senderens, and Joël Robuchon might be out of our price range, but Pierre Hermé’s macarons are both affordable and out of this world — try the Ispahan, or the salted caramel, or… (We’ll note that he opened his first boutique in Tokyo, where customers pay attention to details.)

In cars, Brand X (I don’t want to offend) and BMW (I don’t drive one) get their steel, aluminum, plastics, rubber, and electronics from similar — and often the same — suppliers. But their respective chefs coax the ingredients differently, with markedly different aesthetic and financial outcomes.

Did IBM invent the PC? Did HP invent the pocket calculators or desktop computers that once put it at the top of the high-tech world? Did Henry Ford invent the automobile?

So, yes, if we stick to the basic ingredients list, Apple didn’t invent anything…not the Apple ][, not the Macintosh, not the iPod, the iPhone, or the iPad…to say nothing of Apple Stores and App Stores. We’d seen them all before, in one fashion or another.

And yet, we can’t escape a key fact: The same chef was involved in all these creations. He didn’t write the code or design the hardware, but he was there in the kitchen — the “executive chef” in trade parlance — with a unique gift for picking ingredients and whipping up unique products.

As a postscript, two links:

— Steve Wildstrom valiantly attempts to clear up the tech media’s distortions of the patents that were — and weren’t — part of the Apple-Samsung trial:

Whatever happens on appeal, I think the jury did an admirable job making sense of the case they were given. They certainly did better than much of the tech media, which have made a complete mess of the verdict.

— This August 2009 Counternotions post provides a well-reasoned perspective on the iPhone’s risks and contributions, as opposed to being a mere packaging job. (The entire Counternotions site is worth reading for its spirited dissection of fashionable “truths”.)



Proof by Mask

design By October 30, 2011 22 Comments

Web design is in bad shape. In the applications boom, news-related web sites end up as collateral damage. For graphic designers, the graphics tools and computer languages used to design apps for tablets and smartphones have unleashed a great deal of creativity. The transformation took longer than expected, but great designs are beginning to appear in iPad applications (in previous Monday Notes, we discussed Business Week+ and the new Guardian app). The best applications get rid of the print layout; they start from a blank slate on which a basic set of rules (typefaces, general page structure, color codes) is adapted to the digital format. Happily, we stand at the very beginning of a major evolution in news-related graphic design for apps. And this new world proves to be a killer for the traditional web which, in turn, seems to age fast.

The graphic evolution of the web must deal with two negative forces: its language framework doesn’t evolve fast enough, and it faces the burden of messy advertising.

Less than a year ago, the potential of the latest iteration of the HyperText Markup Language, a.k.a. HTML5, thrilled everyone: it was seen as the decisive, if not definitive, upgrade of the web, both functionally and visually. Fact is, it hasn’t taken off — yet. The reasons are many: backward compatibility (not everyone uses the latest web browser), poor documentation that makes development uncertain, stability and performance issues. There are interesting initiatives but nothing compelling so far. None of the large digital media have made the jump.

For advertising, the equation is straightforward. The exponential rise of inventories, coupled with fragile economic conditions, has pushed ad agencies to ask for more (space) for less money. And as for creativity, the encephalogram remains desperately flat.

The result is this:

This is the first screen of the home page of the French website 20 minutes. A good site indeed, doing quite well audience-wise, but one that yields too much to advertising. In its case, the page carries an “arch” that frames the content; and, for good measure, a huge banner is inserted below the main header. If you mask the ad, it looks like this:

The weird thing is this: On the one hand, web designers seem to work on increasingly large monitors; on the other, the displays used by readers tend to shrink as more people browse the web on notebooks, tablets or smartphones.

The result is appalling when you try to isolate content directly related to the news. In the series of screenshots below, I selected the first scroll of each page as it renders on my laptop’s 15” display. Then I overlaid a red mask on everything but the news content: ads, promotions of all sorts, large white spaces, headers, and section lists are all hidden away.


Flipboard: Threat and Opportunity

design, online publishing By April 17, 2011 Tags: , , 67 Comments

Every media company should be afraid of Flipboard. The Palo Alto startup epitomizes the best and the worst of the internet. The best is for the user; the worst is for the content providers that feed its stunning expansion without getting a dime in return. According to Kara Swisher’s AllThingsD, nine months after launching its first version, Flipboard’s new $50m financing round gives the company a $200m valuation.

Many newspapers or magazines employing hundreds of journalists can’t get a $200m valuation today. Last year, an investment bank memo set a valuation of approximately $100m for the Groupe Le Monde (net of its $86m debt at the time, to be precise). That was for a multimedia company with 644 journalists – OK, one that had been badly managed for years. Still, Flipboard is a 32-person startup with a single product and no revenue yet.

So, what’s the fuss about?

The answer is a simple one: Flipboard is THE product any big media company or, better, any group of media companies should have invented. An iPad application (soon to be supplemented by an iPhone version), it allows readers to aggregate any sources they want: social media such as Twitter, Facebook, Flickr, or any combination of RSS feeds. No need to remember a feed’s often-complicated URL; Flipboard searches for it and puts the result in a neat eBook-like layout. A striking example: the Google Reader it connects you to suddenly morphs from its Icelandic look into a cozy and elegant set of pages that you actually flip. Flipboard’s most visible feature is an interface that transforms this:

Into this:

All implemented with near perfection: no flickering, no hiccups when a page resizes or a layout adjusts.