
Channel Checks: Smart or Illegal?

by Jean-Louis Gassée

Insider trading isn’t new but it’s still exciting, especially if you don’t play the stock market. For spectators, the cops and robbers game mixes ingenuity, mischief, furtiveness and confederacies. And the unavoidable dunces who talk or do too much and get the miscreants in serious trouble with the Law.

The latest episode of the insider trading saga, as revealed here by the Wall Street Journal, appears to be of epic proportions: ‘[It] could eclipse the impact on the financial industry of any previous such investigation…’

Among the specifics described in the WSJ article and in other pieces such as this one, I note a new and intriguing reference to Channel Checks. In layperson’s terms, the practice sounds more than reasonable. To get an idea of a company’s business, you can listen to their officials, or you can go around and check their distribution channels. Walk into a store, feel the pulse, ask employees how business is doing. Or, if you have the time and inclination, stay around a little bit and count customers walking in, and those walking out with a purchase. Rinse and repeat. Do this on a representative sample and you get very usable data. That’s what I did 31 years ago in Paris: I was interested in buying a franchise of a US business and wanted to have my own set of data before meeting company execs and their glowing projections. I stood in and around their Champs Elysées store and counted the take. It helped: I stayed in the computer business.
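For what it’s worth, here is a toy sketch, in Python, of the arithmetic behind that kind of store-front tally. Every figure (visitor counts, average ticket, number of stores, sessions per day) is invented for illustration; none of it comes from the franchise I was actually looking at.

```python
# Toy estimate of a retail chain's revenue from a store-front tally.
# Every figure below is made up for illustration.

observed = [
    # (visitors walking in, visitors leaving with a purchase), one tuple per counting session
    (120, 18),
    (95, 11),
    (140, 22),
    (110, 16),
]

avg_ticket = 45.0        # assumed average purchase
stores_in_chain = 30     # assumed number of comparable stores
sessions_per_day = 8     # assumed counting-session-sized slots in a store day

walk_ins = sum(v for v, _ in observed)
buyers = sum(b for _, b in observed)

conversion = buyers / walk_ins
buyers_per_session = buyers / len(observed)

daily_revenue_per_store = buyers_per_session * sessions_per_day * avg_ticket
chain_daily_revenue = daily_revenue_per_store * stores_in_chain

print(f"conversion rate: {conversion:.1%}")
print(f"estimated daily revenue per store: {daily_revenue_per_store:,.0f}")
print(f"estimated daily revenue for the chain: {chain_daily_revenue:,.0f}")
```

Crude, yes, but repeated over enough sessions and stores it gives you a number to hold against management’s glowing projections.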

No less an authority than Peter Lynch, the famed Magellan Fund investor, recommended doing precisely that type of legwork and homework. His investing motto was ‘Buy What You Know’. By which he meant studying the business you considered investing in, the product, the books, management, suppliers, distributors, everything. (See his very good books, One Up On Wall Street and Beating the Street for more. Regrettably not available in electronic form.)

What Peter Lynch recommended a couple of decades ago is alive and well, still recommended by pros. But the originally healthy practice appears to have undergone a malignant mutation. Critics and cops allege the pros went deeper and deeper into “channels”, mostly upstream into suppliers. If you manage to know how many processors or screens of a particular spec Motorola ordered, you gain a very precise estimate of their projections. Especially in an age of just-in-time inventory management. Add information gained from shippers, ship or air, containers or pallets, and you’re on top of things.

Or, as the FBI and SEC allege: inside. You’re now trading on information not available to the investing public; you’re guilty of insider trading.

Quoting from a related WSJ piece:

“Insider trading basically comes down to where you know or ought to know that the person from whom you’re getting this information has a duty to someone else to keep it confidential,” said former Securities and Exchange Commissioner Paul Atkins in a video interview with The Wall Street Journal. “If you go in and pay the mail clerk to give you special information, that’s not proper.”

Channel Checks now become an underhanded, criminal activity. For amateurs of sweet irony, the third link in this note, above, takes you to a site titled… Wall Street Cheat Sheet.

We’ll have to see what skilled attorneys on both sides do with the accusations. Insider trading isn’t always easy to prove and provokes an abundance of academic discussions: see this Wharton overview. Some libertarians even contend insider trading ought to be legal… Others allege insider trading by members of Congress, their staffs and government officials is facilitated by loopholes.

This doesn’t help the mood on the street — not the Street.

The Channel Checks evolution must be viewed through four filters: common sense, legal logic, policy and politics.

Common sense, visceral and emotional, clamors that insider trading is unfair. It tilts the playing field against You and Me investors. Trading ought to take place on the proverbial Level Playing Field, meaning everyone has the same information for their trading decisions. Nice sentiment but delusional: what about intellect and homework, the legal kind? Nowadays, everyone has access to satellite pictures of parking lots on heavy shopping days.

Legal logic is a complicated, tortuous, ever changing matter. Case law evolves, Supreme Court interpretations zig and zag. In theory, above the rabble’s emotions but, in practice, tainted by politics.

Speaking of which, politics: our government’s latest bout of zeal against insider trading seems driven by the need to deal with the post-bailout outcry against Wall Street. I’m not saying the outcry isn’t justified, au contraire. From here, it looks like We The People have been stiffed: Wall Street, the cause of the 2008 catastrophe, has been saved at our expense. Yes, it was for our own good. But the obscenity of today’s bonuses and CEO compensation hurts. Good politicians — attorneys general are elected in our country — can’t let the opportunity to run to our defense go unexploited. This isn’t to say some good won’t come out of it. But we have the Sarbox example to the contrary: there, the outcry following scandals such as the Enron affair led to regulations that hurt businesses, especially smaller ones, and benefited only accountants and attorneys, not investors, as the 2008 crash proved.

Lastly, policy: how we run ourselves. Insider trading lowers confidence in markets, makes people distrust Wall Street, limits the amount of money available to finance businesses and thus hurts the economy, that is, all of us. Based on past examples, one has to worry about politics overrunning policy, about posturing leading to bad law. Still, let’s hope pragma wins over drama…

For myself, I don’t play the stock market. Across the table, I see PhDs, the famous quants, with brains bigger than mine, computers bigger and faster than mine, and wallets fatter than mine. Even if they don’t cheat, how can I win?

JLG@mondaynote.com

Fighting Unlicensed Content With Algorithms

It’s high time to fight the theft of news-related content, really. A couple of weeks ago, Attributor, a US company, released the conclusions of a five-month study covering the use of unauthorized content on the internet. The project was called the Graduated Response Trial for News and relied on one strong core idea: once a significant breach is established, instead of an all-out legal offensive, a “friendly email”, in Attributor’s parlance, kindly asks the perpetrator to remove the illegal content. Absent a response within 14 days, a second email arrives. As a second step, Attributor warns it will contact search engines and advertising networks. The first will be asked to suppress links and indexation for the offending pages; the second will be requested to remove ads, thus killing the monetization of illegal content. After another 14 days, the misbehaving site receives a “cease and desist” notice and faces full-blown legal action (see details on the Fair Syndication Consortium Blog). Attributor and the FSC pride themselves on achieving a 75% compliance rate from negligent web sites taking action after step 2. In other words, once kindly warned, looters change their minds and behave nicely. Cool.
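As I read it, the “graduated response” boils down to a simple escalation clock. Here is a minimal sketch of that logic in Python; the step names, the 14-day interval and the actions are lifted from the description above, not from Attributor’s actual system.

```python
from datetime import date, timedelta
from enum import Enum
from typing import Optional

class Step(Enum):
    FRIENDLY_EMAIL = 1     # step 1: politely ask for removal
    SECOND_EMAIL = 2       # step 2: warn that search engines and ad networks will be notified
    CEASE_AND_DESIST = 3   # step 3: formal notice, legal action looms

GRACE_PERIOD = timedelta(days=14)

def action_due(first_notice: date, today: date, removed: bool) -> Optional[Step]:
    """Return the escalation step a site has reached, or None if it complied."""
    if removed:
        return None
    elapsed = today - first_notice
    if elapsed < GRACE_PERIOD:
        return Step.FRIENDLY_EMAIL
    if elapsed < 2 * GRACE_PERIOD:
        return Step.SECOND_EMAIL      # plus de-indexing and ad-removal requests
    return Step.CEASE_AND_DESIST

# A site flagged on June 1st that still hasn't complied by June 20th:
print(action_due(date(2010, 6, 1), date(2010, 6, 20), removed=False))  # Step.SECOND_EMAIL
```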

To put numbers on this, the Graduated Response Trial for News spotted 400,000 unlicensed cloned items on 45,000 sites. That is roughly nine illegal uses per site. As reported in a February 2010 Monday Note (see Cashing in on stolen contents), a previous analysis conducted by Attributor pointed to 112,000 unlicensed copies of US newspaper articles found on 75,000 sites, a rate of 1.5 stolen articles per site. Granted, we can’t jump to the conclusion of a six-fold increase; the two studies were not designed to be comparable, the tracking power of Attributor is growing fast, the perimeter was different, etc. Still. When, last Friday, I asked Attributor’s CEO Jim Pitkow how he felt about those numbers, he acknowledged that the use of stolen content on the internet is indeed on the rise.

No doubt: the technology and the deals organized by Attributor with content providers and search engines are steps in the right direction. But let’s face it: so far, this is a drop in the ocean.
First, the nice “Graduated Response” tested by the San Mateo company and its partners needs time to produce its effects. A pair of 14-day notices before rolling out the legal howitzer doesn’t make much sense considering the news cycle’s duration: the value of a news item decays by 80% in about 48 hours (see the back-of-the-envelope sketch after these two points). The 14-day spacing of the two warning shots isn’t exactly a deterrent for those who do business stealing content.
Second, the tactics described above rely too much on manual operations: assessing the scope of the infringement, determining the response, notifying, monitoring, re-notifying, etc. This runs counter, to say the least, to the nature of the internet with its 23 billion pages.
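To put a number on that first point: assuming, purely for illustration, that a news item’s value decays exponentially and that 80% of it is gone after 48 hours (the figure quoted above), almost nothing is left by the time the second 14-day notice even becomes possible. A minimal sketch:

```python
import math

# Assumed exponential decay, calibrated so that 80% of a news item's value
# is gone after 48 hours (only 20% remains), per the figure above.
k = math.log(5) / 48.0                      # decay constant, per hour

def value_left(hours: float) -> float:
    return math.exp(-k * hours)

for label, hours in [("48 hours", 48), ("14 days", 14 * 24), ("28 days", 28 * 24)]:
    print(f"value left after {label}: {value_left(hours):.2e}")
# 2.00e-01, 1.28e-05, 1.64e-10 — by the second notice, there is nothing left to protect.
```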

You get my point. The problem requires a much more decisive and scalable response involving all the players: content providers, aggregators, search engines, advertising networks and sales houses. Here is a possible outline:

1/ Attributor needs to be acquired. The company is simply too small for the scope of the work. A few days of Google’s revenue ($68m per 24 hrs) or less than a month for Bing would do the job. Even smarter, a group of American newspapers and book publishers gathered in an ad hoc consortium could be a perfect fit.

2/ Let’s say Google or Bing buys Attributor’s core engineering know-how. It then becomes feasible to adapt and expand its crawling algorithm so it runs against the entire world wide web — in real time. Two hours after a piece of news is “borrowed” from a publisher, it is flagged and the site receives a pointed notification. This could be an email, or an automatically generated comment below the article, re-posted every few hours. Or, even better, a well-placed sponsored link like the fictitious one below:

Inevitably, ads dry up. First, ad networks affiliated with the system stop serving display ads. And second, since the search engine has severed the hyperlinks, ads on orphan pages become irrelevant. Every step is automated.
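What would the flagging step look like? Attributor hasn’t published its matching algorithm, but a standard way to spot near-verbatim copies is word shingling plus a Jaccard overlap score. Here is a minimal, purely illustrative sketch; the five-word shingle size and the 0.5 threshold are arbitrary assumptions.

```python
import re

def shingles(text: str, size: int = 5) -> set:
    """Break a text into overlapping word n-grams ('shingles')."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + size]) for i in range(max(len(words) - size + 1, 1))}

def jaccard(a: set, b: set) -> float:
    """Overlap between two shingle sets, from 0 (disjoint) to 1 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_probable_copy(original: str, candidate: str, threshold: float = 0.5) -> bool:
    return jaccard(shingles(original), shingles(candidate)) >= threshold

# A crawler would run this against every freshly indexed page, then trigger the
# notification / de-indexing / ad-removal pipeline for pages that match.
```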

What If Google Stored All Our Medical Records?

Regard the horrified looks on the faces of the attendees at a California Council on Science and Technology meeting in Irvine six or seven years ago. I’m the only member from the Dark Side, from the venture capital milieu, inside an institution “designed to offer expert advice to the state government and to recommend solutions to science and technology-related policy issues”. The other members are scientists and scholars.

The question of the day is electronic medical records: How do we computerize, standardize, store, secure, exchange our corpus info with a reasonable assurance of privacy?

My answer: Give the job to Google. And thus follows the politely alarmed reaction…and the objections.

Our records won’t be secure! Google will exploit our most personal history to make money on our backs (or other organs)! They’ve digitized books, is this yet another step towards a privately-controlled but overly powerful public utility/institution?

Years later, what do we know?

First, doctors and patients still have trouble finding and exchanging records. I have, as attorneys are fond of saying, “personal knowledge” of this fact. The exchange of records between my politically-incorrect internist, the Palo Alto Medical Foundation and the Stanford Hospital—organizations within a mere mile of one another—takes multiple phone calls, visits in person, fax machines.

Now try one of the blood-sucking medical insurance companies. To gain access to your own record, they send you, by fax, an authorization form for your signature…but there’s no return number, there’s no way to return the fax. It’s not personal, it’s systemic, an obstacle course to minimize claim payments.

Second, the current system, notwithstanding HIPAA regulations, leaves our records open to outsourcing subcontractors in the US and elsewhere, to poorly qualified claim adjudicators inside insurance companies and to employers’ HR personnel. In theory, there are walls. In practice, expediency: there’s “cost containment”, there’s an astounding number of people, “trusted” or not, who get to look at your records. Compared to this, Google looks pretty good. Yes, they have security breaches, people occasionally lose their password or get their accounts hacked, but these events are statistically insignificant. Add penalties for such incidents, weigh them against what we’d pay Google for the service, and we’d have a decent level of protection, an SLA for our medical records.

Few companies have dealt with size, with what we call “scalability”, as successfully as Google has. They have the human expertise and the computer systems to store and index “everything”; this is what they do for a living, with more than 2.5 million servers that keep their data intact.

As to Google’s exploitation of our records… Of course Google cares: they could wring billions from our personal health history. All we have to do is write a contract to share the loot; we call this “revenue-sharing”. Think of what a relentless crawl through billions of medical records will garner them… Take a transversal look at all the patients who take high blood pressure (antihypertensive) drugs, look at morbidity (how often, when, and how severely they get sick) and mortality (when and how we die) rates. Or look at more subtle but important combinations such as ancestry (the best way to get low cholesterol is to choose your parents well), other drugs, lifestyle (a.k.a. good and bad exercise, food intake, alcohol, tobacco and other substances soon to be legal in California).
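To give a flavor of what such a “transversal look” could be, here is a minimal pandas sketch. The table layout and column names (drug_class, ancestry, hospitalized, deceased) are entirely hypothetical, invented just to show the shape of the query, not any real schema.

```python
import pandas as pd

# Hypothetical, anonymized patient records; the columns are invented for illustration.
records = pd.DataFrame({
    "patient_id":   [1, 2, 3, 4, 5, 6],
    "drug_class":   ["antihypertensive", "antihypertensive", "statin",
                     "antihypertensive", "statin", "antihypertensive"],
    "ancestry":     ["A", "B", "A", "A", "B", "B"],
    "hospitalized": [True, False, False, True, False, False],   # morbidity proxy
    "deceased":     [False, False, False, True, False, False],  # mortality proxy
})

# Morbidity and mortality rates, sliced by drug class and ancestry.
rates = (records
         .groupby(["drug_class", "ancestry"])
         .agg(patients=("patient_id", "count"),
              morbidity_rate=("hospitalized", "mean"),
              mortality_rate=("deceased", "mean")))

print(rates)
```

Run against billions of real records rather than six made-up ones, that one groupby is worth a great many conventional studies.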

This would be much better than the current and deeply corrupt system of medical studies. You think I exaggerate? I wish. See this sobering David H. Freedman story in the November issue of the Atlantic (a treasure of literate America).

HP’s Board of Directors: Redemption or More Insanity Ahead?

HP’s Board of Directors has accumulated an impressive record of bad judgment calls, the latest being the lame lawsuit against their recently deposed CEO, Mark Hurd, who quickly joined Oracle as Co-President and Director.

The History

Once a revered Silicon Valley icon, HP was arguably the first worldwide success to emerge from pre-war Stanford where Bill Hewlett and Dave Packard studied under the illustrious Frederick Terman. Unfortunately, the insiders who were groomed to replace “Bill & Dave”—first John Young, an HP lifer (1968-1992), followed by Lew Platt, another long-termer (1966-1999)—presided over the company’s long slide into comfortable bureaucracy and middling financial performance.

In 1999, HP’s Board was seduced into giving the CEO mantle to Carly Fiorina, a gerontophiliac sales exec from AT&T/Lucent…only to fire her in early 2005. Known for her posturing and opaque pronouncements, Fiorina antagonized and mystified insiders and industry observers alike. John Cooper, CNET’s Executive Editor and longtime tech writer, characterized one of her more frustrating talks as “a Star Trek script” containing “enough business-babble to reduce even the most hardened McKinsey consultant to a state of dribbling catatonia”. Nice.

To succeed Fiorina, HP went outside again and, this time, managed to snare an experienced and accomplished CEO: As head of NCR, Mark Hurd had led the company through a successful turnaround.

About a year after Hurd’s election, HP’s Board became embroiled in the Pretexting scandal. Board members spied on employees and journalists—and even on each other—in an attempt to track down leaks of confidential strategy documents. This ugly episode led to several Board and executive departures: Chairwoman Patricia Dunn was thrown under the bus; HP’s General Counsel, Ann Baskins, “took the Fifth” at a Senate hearing; another director, Tom Perkins, and several employees left as well. What Mark Hurd actually knew or did in relation to this episode has never been clarified.

Despite the scandal and the departures, Hurd made good on his reputation as a turnaround CEO and, through carefully crafted acquisitions and cost-cutting, put HP back at the top of the computer industry in just five years. His wizardry with numbers, his sober talk, and his attention to execution left the impression that HP had finally found the right helmsman.

But then disaster struck. As discussed in our August 29th Monday Note, HP’s Board unceremoniously fired Hurd, publicly berating him for conduct unbecoming a CEO and barely stopping short of accusing him of fraud. And then, after pillorying him, the company inexplicably paid off the “disgraced” Hurd to the tune of $30M to $40M. HP shareholders sued the directors and the media roasted them.

Enter Ellison

Larry Ellison and Mark Hurd have known each other for several years. They’d been business partners when HP and Oracle allied themselves in serving large government and enterprise clients—and they’re tennis buddies as well.

After harshly criticizing HP’s trustees for firing a star executive, Ellison hired Hurd. In keeping with his leadership style, Ellison made room for the new lieutenant by summarily chucking the previous tenant, Charles Phillips, who, ironically, had also become embroiled in a “relationship contretemps” with an ex-paramour. I hasten to say that I prefer Larry’s summary and clean manner to HP’s: Chuck Phillips had a successful career at Oracle, Larry wished him well on his way out, the money flowed, and everyone moved on to the next stage of their lives.

Understanding the Digital Natives

They see life as a game. They enjoy nothing more than outsmarting the system. They don’t trust politicians, media, or brands. They see corporations as inefficient and plagued by an outmoded hierarchy. Even if they harbor little hope of doing better than their parents, they don’t see themselves as unhappy. They belong to a group — several, actually — they trust and rely upon.

“They” are the Digital Natives.

The French polling institute BVA published an enlightening survey of this generation: between 18 and 24 years of age, born with a mouse and a keyboard, and now permanently tied to their smartphones. All of it shaping their vision of an unstable world. The study is titled GENE-TIC, for Generation and Technology of Information and Communication. Between November 2009 and February 2010, BVA studied a hundred young people in order to understand their digital habits. Various techniques were used: spyware on PCs, subjective glasses to “see what they see”, and hours of video recording. (The 500-page survey is for sale but abstracts, in French, are here; BVA is considering a similar study for the US market). Here are the key findings:

The constant gamer. The way a Digital Native sees his (or, once and for all, “her”) environment is deeply shaped by computer games. “When he is buying something”, says Edouard Le Marechal, who engineered the survey, “finding the best bargain is a process as important as acquiring the good. The Digital Native enjoys using all the tools available in his arsenal to outsmart the merchant system and to find the best deal. He doesn’t trust the brand. Like in a game, the brand is the enemy to defeat”.

According to the study, brands face a serious challenge from the Digital Native. Not only does he get a kick out of triumphing over the brand, but he is not deceived by the marketing pitch. To make things worse, he’ll become an expert, acquiring more knowledge than the merchant trying to lure him. That’s part of the game. Reading the GENE-TIC survey, brands and their vector (advertising) appear under siege in multiple ways. They look increasingly disconnected and outpaced by their target. In addition, advertising is reduced to its utilitarian dimension: if an ad message does not carry an explicit promotion, it is unlikely to lead to a good bargain.

Weirdly enough, when I asked Edouard Le Marechal if big ad agencies were flocking to subscribe to his survey, he replied they were not. Instead, GENE-TIC is massively subscribed to by clients such as high-tech and telecommunications companies. (That also reinforces the idea that the brand – whether it is a manufacturer or a service – is willing to (re)connect more directly with its customer base at the expense of the advertising intermediary, which appears to have lost its power.)

Antennagate: If you can’t fix it, feature it!

…and don’t diss your customer, or the media!

Rewind the clock to June 7th, 2010. Steve’s on stage at the WWDC in San Francisco. He’s introducing the iPhone 4 and proudly shows off the new external antenna design. Antennae, actually: there are two of them wrapped around the sides. Steve touts the very Apple-like combination of function (better reception) and form (elegant design).

And now we enter another part of the multiverse. Jobs stops…and after a slightly pregnant pause, continues: The improved reception comes at a price. If you hold the iPhone like this, if your hand or finger bridges the lower-left gap between the two antennae, the signal strength indicator will go down by two or even three bars. He proceeds to demo the phenomenon. Indeed, within ten seconds of putting the heel of his left thumb on the gap, the iPhone loses two bars. Just to make sure, he repeats the experiment with his index finger, all the while making a live call to show how the connection isn’t killed.

It’s not a bug, it’s a feature! It’s a trade-off: Better reception in the vast majority of cases; some degradation, easily remedied, in a smaller set of circumstances.

Actually, it’s a well-known issue with smartphones. Steve demonstrates how a similar thing happens to Apple’s very own 3GS, and to Nokia, HTC/Android, and RIM phones. Within the smartphone species, it’s endemic but not lethal.

Nonetheless, adds Apple’s CEO, we can’t afford even one unhappy customer. Buy with confidence, explore all the new features. If you’re not satisfied, do us the favor of returning the phone within two weeks. At the very least, we want you to say the iPhone didn’t work for you but we treated you well. If you fill out a detailed customer feedback report, we’ll give you an iPod Shuffle in consideration for your time.

One last thing. Knowing the downside of the improved antennae arrangement, we’ve designed a “bumper”, a rubber and plastic accessory that fits snugly around the iPhone 4’s edges and isolates the antennae from your hands. The bumpers come in six colors—very helpful in multi-iPhone 4 families—and cost a symbolic $2.99.

The antenna “feature” excites curiosity for a few days, early adopters confirm its existence as well as the often improved connections (often but not always—it’s still an AT&T world). The Great Communicator is lauded for his forthright handling of the design trade-off and the matter recedes into the background.

If you can’t fix it, feature it.

End of science fiction.

In a different part of the multiverse, things don’t go as well.

Jobs makes no mention of the trade-off. Did he know, did Apple engineers, execs, marketeers know about the antenna problem? I don’t know for sure and let’s not draw any conclusions from the way Jobs avoids holding the iPhone 4 by its sides while showing it off to Dmitry Medvedev:

There’s a more telling hint. Apple had never before offered an iPhone case or protector of any kind, leaving it to third parties. But now, for the iPhone 4, a first: We have the bumper…at $29, not $2.99. (And which, by the way, prevents the phone from fitting into the new iPhone 4 dock.)

As usual for an Apple product, the new iPhone gets a thorough examination from enterprising early adopters, and many of them discover the antenna gap “feature”. As one wrote Jobs:

It’s kind of a worry. Is it possible this is a design flaw? Regards – Rory Sinclair

Steve’s reply:

Nope. Just don’t hold it that way.

Steve, No! Don’t diss your beloved customer. No tough love with someone who’s holding your money in his/her pocket.

The poison of arrogance

Arrogance is the most toxic waste-product of technology companies. Past examples abound: IBM, AT&T, Microsoft… All their hauteur got them was expensive antitrust actions and customer backlash. Last week, we got yet another example of the insufferable behavior still prevailing in the high-tech world — with the to-be-expected response from regulators and markets.

Navx is a €1m-a-year French company whose business is speed radar location databases. In France, it is illegal to sell or use radar detectors, devices that pick up the microwave or laser radiation emitted by speed guns and automated cameras. But providing speed trap location data is lawful. In fact, the French Interior Ministry maintains a public database for fixed radars. And companies such as Navx, or various GPS makers, supply location information for mobile radars.

To sell its product, Navx relies massively on Google AdWords: the company buys keywords that guarantee a high ranking in search results associated with terms like “avertisseur radar” (radar warning). Over the years, Navx invested a large part of its revenue in keyword purchases, up to €400,000 a year. For Navx, as for millions of other businesses all over the world, the result was a massive dependency on Google’s systems. For Navx, Google worked very well: in October 2009, 69% of new subscriber revenue came from AdWords. The company was still losing money, but growth was promising. Then, Google pulled the plug, arguing Navx’s business was illegal. Google’s ukase came at the worst possible time: Navx was about to complete its second round of funding. The company lost most of its new revenue stream, causing investors to get cold feet, in turn causing Navx to lay people off, and so on. Navx argues the legality argument was a mere pretense: Google had a real, ulterior motive for ejecting the speed trap location ads from its system. Navx believes its tiny but growing service came to be viewed as competition for Google’s own geolocation services. That’s a possibility.

Such a story is typical of Google’s opaque world. Countless examples are offered in books and in newspaper and magazine stories of businesses that went belly up because some geeks in Mountain View turned the dials of an unseen algorithm, without the slightest regard for the impact on the very businesses that pay their salaries.

Drop that -phone!

I’ll explain the ‘-’ in a moment. Today’s piece is about the power of words to shape thought, to distort, to mislead. More specifically, I contend “smartphone” is the wrong word for the new genre of mobile devices.

I’m not completely naïve, however. In the end, I’ll agree there is little chance we’ll settle on another word.

Once upon a time, philosophers held that thought preceded words: you thought of something and then struggled to find the right words for that gem. Later, psychologists of the twentieth-century persuasion came to think, no, to say, that words preceded thought: one could only think thoughts for which one already possessed words. As much as I like our dear Lacanians, some of whom hover around the Valley, the word ineffable leaves them… speechless.

Devoid of a clean theory, we can wallow in examples.

The most visible one is the PC, the personal computer. Derivative thought first gave us “microcomputers”, because they were “like” minicomputers, themselves “like” the only serious computers, mainframes — only smaller. Next, because size matters, we’d get nano computers, pico computers, femto computers…

Fortunately, the gestalt, the user experience won: This is my computer, as opposed to the institution’s. The beginnings weren’t always easy: I recall a book called “You bought a personal what?”, published in the late seventies. I also remember our collective indignation at Apple when, in 1981, IBM boldly misappropriated the concept and introduced The Personal Computer and proceeded to win the market, that is until Microsoft gave it to the clones. The P word worked and won.

Decades ago, Motorola was the king of cell phones. Cell was a good word because it pointed to the amazingly powerful innovation of cellular telephony. Previously, mobile phones called a radio station and kept using the same frequency as the user moved around. This severely limited the number of users and forced mobile phones to have powerful radios to stay connected over long distances. With cellular telephony, frequencies were reusable as users were magically handed over from one lower-powered radio station to another as they drove around, leaving the frequency behind, ready for another user.
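A crude way to see why that handover matters: the same channel can be reused in cells that are far enough apart, so capacity grows with the number of cells instead of being capped by one big transmitter. The sketch below is a deliberately naïve model with invented numbers, not how any real network plans its frequencies.

```python
# Toy comparison: one powerful transmitter vs. many small cells reusing the same channels.
# Deliberately simplified; real networks avoid reusing a channel in adjacent cells,
# and handover is far more involved than this.

CHANNELS = 3  # distinct frequencies available in both scenarios

def single_station_capacity() -> int:
    """One big radio covers the whole city: each frequency serves exactly one call."""
    return CHANNELS

def cellular_capacity(number_of_cells: int) -> int:
    """Low-power cells don't reach each other, so each cell reuses the channels.
    A moving caller is handed over to the next cell, freeing the channel behind."""
    return CHANNELS * number_of_cells

print(single_station_capacity())   # 3 simultaneous calls, city-wide
print(cellular_capacity(50))       # 150 simultaneous calls with the same 3 frequencies
```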

The Motorola name came to be associated with radios of all kinds, from cars to the Moon. I recall Motorola execs calling their successfully miniaturized cell phones of the late eighties “little radios”. They were rightly proud of their technical prowess; I owned several StarTacs and MicroTacs. But when cell phones gained PDA features, Motorola’s clock got cleaned by the likes of RIM (Blackberry) and Palm (Treo). For a long while, Motorola’s culture remained backward-focused on the phone part of the customer experience. The new phone boss, Sanjay Jha, is now an Android convert: a couple of impressive Droid devices have put Motorola back in the race.

Intel’s bold bet against ARM: visionary or myopic?

Today, Intel’s x86 architecture reigns supreme on PCs (and millions of servers, such as Google’s, that use the PC organ bank). Everywhere else, ARM processors have won; they’re in billions of devices: regular cell phones, smartphones, entertainment devices, navigation systems and legions of other embedded applications.

Understandably, perhaps, Intel didn’t want to play in the low end of the processor market. But we now see the emergence of RPCs, Really Personal Computers, more commonly called smartphones. Nokia, RIM, Apple and the fast-rising army of Android licensees all use high-end ARM derivatives.

Intel’s answer is a family of low-end x86 devices, Atom processors. So far, Atom processors haven’t been used in smartphones, only in netbooks.

‘Wait’, says Intel, ‘over time, our proven semiconductor design and manufacturing capabilities will allow us to reduce the power consumption and cost of x86 processors. That’s how we’ll win this emerging market, just as we won the PC.’

Easier said than done. The older and more complicated x86 architecture is inherently disadvantaged against the more modern ARM architecture. And, as we’ll see, there is more to this fight than semiconductor design and manufacturing prowess.

For context, let’s go to Mary Meeker’s latest (June 7th, 2010) Internet Trends presentation.

By 2012, she predicts, smartphone shipments will exceed PC unit volumes. Approximately 480 million smartphones versus 430 million PCs, going to 650 million next-generation devices by 2013:

Just as important, by next year, smartphone unit volumes will overtake “feature phones”:

Smartphones, feature phones? Without losing ourselves in taxonomy games, let’s turn to the popular Blackberry devices: they are good examples of the smartphone category. Anything less is a feature phone, sometimes called a regular phone, or a “dumb phone”.

Mediocrity is king

Last week, the Huffington Post reached a new apex. Viewed from France, where ads are localized, its home page carried a remarkably tasteful ad: a farting application for the iPhone (see below). As prudery still rules in American media, you’ll notice that the farter’s exhaust aperture has been blurred. Fine.

A quick précis: France is a country of 65m people, with a modern tech infrastructure. Internet to the home is faster than in the United States and way cheaper than in Australia. The cellular networks work even better than AT&T’s, and the three carriers use a single worldwide standard, GSM. Its internet population numbers 45m, a fast-growing proportion of which speaks serviceable English, good enough to read the parts of the Huffington Post that are not written in Shakespearian English.

With this in mind, let’s focus on two interesting aspects of the HuffPo advertising mishap.

First, it shows how advertising is sold: in bulk. The HuffPo sales people’s intellectual horizon doesn’t extend very far. This is what I call the Burundi Syndrome, one where American companies see the ROW (Rest of the World) as an aggregation of second-class people. Consider Apple’s geographical definition, for instance: its London-based EMEA division encompasses Europe, Middle-East, Africa. A vast zone ranging from Burkina Faso to Sweden — where the average student is way more educated than his American counterpart and where the per capita GDP is just 20% lower than in the US (OK, Burkina Faso — I’ve been there too — has a long way to go).
Coming back to the Huffington Post, the choice of a below-grade ad served on a ROW market demonstrates a tragic inability to understand the true power of the internet, i.e., making content globally accessible to a solvent population.
That’s the first distinction between great media brands and cheap ones. Neither the New York Times, The Sydney Morning Herald, nor the Guardian would delegate the sale of their non-domestic ads without some sort of guarantee covering the advertisers’ relevance.

Second, and more importantly: by allowing such a degradation of its premium advertising space (a home page is supposed to be just that), the HuffPo acknowledges that its content is, in fact, cheap. It therefore admits that volume, rather than targeting or relevance, drives the value of its content.

And volume the Huffington Post delivers. A lot. According to ComScore (which is blessed with the rigor of a Greek public accountant), the Huff Post cruises at 26m unique visitors per month. Other sources agree on more than 20m UV, which is above the New York Times (19m UV per Nielsen), and twice as many as the Washington Post.

How do I dare question such an audience success? Simply because, in my not-so-humble opinion, The Huffington Post is not, per se, a news organization. Its content relies on a mixed bag of high-profile bloggers, drawn from Arianna Huffington’s vast personal network; these individuals deliver thoughts of varying depth, ranging from fun stuff to leftovers quickly produced by an obscure assistant.