iPhone 3G — One Week Later

Contrary to what I expected, the dust hasn’t settled yet. A week later, people still queue: two and a half hours Friday morning before being admitted to the sanctum sanctorum in San Francisco. Besides the long lines, there were glitches: activation problems, trouble with the new MobileMe service, with getting access to software updates for the “old” iPhones. Apple claims 1 million phones sold worldwide for the first weekend, probably 400,000 in the US alone. The latter number could explain the overload of the activation servers: in more normal times, AT&T must activate “only” 25,000 phones a day. Apple apologized for the MobileMe problems and even conceded it should suspend some of the verbiage used to promote the service. Calling “Push” the way email and other information is coordinated between computers and the iPhone was found a little “anticipatory”, meaning promises made couldn’t yet be fulfilled. [“Push” means your phone or your computer will receive information without asking for it, without “pulling”. The BlackBerry is still the king of “Push”.]

But this is mostly folklore, fun but transitory. Something more important is taking place: the advent of the App Store. On iTunes, the App Store is a section where you find new applications for the iPhone. On the iPhone, the App Store is an icon that enables the one-click purchase and wireless download of new applications, just like a song and often costing the same, 99 cents, or less. In about the same time it took Apple to sell 1 million phones, users (this includes updated first-generation iPhones) downloaded 10 million applications. Half of these were free. Of the paid ones, about half were games; the rest range from software for general aviation pilots, medical students and bloggers, to light sabers (yes, you read that right), translation with voicing of phrases (nice when you go to China), subway maps, newsreaders, CRM, social networking, instant messaging and music streaming. Apple signed in with a nice, free flourish: a program that transforms your iPhone into a remote control for iTunes or AppleTV; it works anywhere in the house through your WiFi network. And on and on… I was going to forget the Chanel Haute Couture Show. Free. Highest Karl Lagerfeld quality. How did this get in? Let me guess, friends in a common advertising agency? Is this one of the new business models discussed below?

When the App Store opened a week ago, the catalog featured 27 pages; we’re now at 42. It’s fair to say some applications are silly, useless or unstable. The user review system in the App Store is merciless and deals harshly with stupidity, bad code or a dysfunctional UI (User Interface). Also, there is an automatic update mechanism, and applications such as Facebook have already been improved. The bad ones will die quickly.

The BFD, as in Big Fundable (or other F words) Deal here is the Great American Instant Gratification. The mental transaction cost of getting an application is very low: lots of choices, small price, one-click transaction. This is the magic of using the existing iTunes infrastructure and existing customer behavior. I can’t help but wonder when Apple (or its competitors) will also use the model for desktop applications, Cloud Computing notwithstanding. I buy iTunes music for my personal computer, why not buy applications for my Mac or my PC from the same store?

Wait, as we say in America, there is more: business models. We’re beginning to see ads on the iPhone, with photos, music or the New York Times. We VCs will be watching carefully as we wonder if advertising on such small screens will work, will generate real money. Another form of advertising looks more promising: free music channels on the Pandora application. You first set “channels” on Pandora.com from your PC, say Mozart, Bach, Miles Davis and Dave Brubeck. On your iPhone, you click Miles Davis and you either get Miles Davis works or music deemed to belong to the same genre, with a nice note explaining why the piece was put on this channel. And…, if you like it, one click buys it from iTunes. Clever, and clever a second time because it’s not convoluted.

Lastly, content presented as, wrapped in, applications. For 99 cents you buy and load an application called The Art of War. You’ve recognized Sun Tzu’s book. But, instead of having a separate book reader and content purchased for it, with the risk of “unwanted duplication”, content and reader are now bundled as one application for each book. When I pitch my next book to the publisher, I’ll make sure to mention the 45 million iPhones to be sold next year. This number is an admittedly wildly optimistic (and widely criticized) forecast by Gene Munster from Piper Jaffray. Unless RIM (BlackBerry), Nokia and Google fight back, which is very likely: they don’t like Steve Jobs wiping his Birkenstocks on their backs. —JLG

Outsourcing’s next wave: media

Ever heard of companies like Mindworks Global Media, Express KCS, or Affinity Express? Well, in due course, millions of English-speaking newspaper readers will. For now, this concerns “only” readers of newspapers such as the San Jose Mercury News, The Miami Herald, or the Orange County Register, to name just a few. In these newspapers, significant editorial jobs, tasks that once belonged to US newsrooms, are now outsourced to a cluster of companies in India.

This is the next effect of globalization: off-shored editorial jobs. In highly specialized sweatshops on the outskirts of Delhi or Mumbai, hundreds of workers line up on night shifts — the Indian time zone, after all. Journalists are no longer only reporting on or analyzing job migration to cheaper Asia, they are now about to experience it. Take the Orange County Register, for instance. A typical big regional American newspaper: strong power in its own territory (the greater South of Los Angeles), several Pulitzer Prizes, and, recently, about a hundred layoffs in its newsroom. Last month, it became the latest to offshore not only secondary jobs such as laying out ads, but also core competencies such as copyediting. It relies on Mindworks Global Media, a two-year-old company headquartered in Noida, 15 miles from New Delhi, where ninety qualified Indians are performing the task (see the story in BusinessWeek). The OC Register is the most advanced example of outsourcing jobs in the print media. Other newspapers such as the San Jose Mercury News (Silicon Valley’s daily) or the Oakland Tribune are testing the waters: they assign advertising layouts to Express KCS, a two-hundred-person startup based in Gurgaon, India, also close to New Delhi. Express KCS provides a wide range of print-related services, ranging from pre-press to magazine production or ad design work (story in the Columbia Journalism Review).

Three factors accelerate the trend. The first one is the newspaper sector’s global crisis. English-speaking papers are motivated to outsource to India (or Thailand, which is eyeing the pie) as much work as they reasonably can: most of the day-to-day layout jobs will soon be gone, as well as an even greater number of sub-editing and copy-proofing positions. When the cost ratio is 2:1 or even 3:1, as it is between the US (or UK) and India, the incentive is impossible to resist.

Increasing skills in India and other Asian countries are the second factor. This is the payoff of global knowledge and education. The level of local universities is rising fast, as is cooperation with Western universities. And many former students of American universities are returning to their homeland, where they become clever entrepreneurs and eventually suction up jobs from abroad. Judging by the number of editorial jobs posted on MonsterIndia, this is a strong trend.

The third factor is the cost of telecommunications, now asymptotically driving toward zero. [Don’t tell YouTube’s parent, Google…] Speaking with, or sending a page layout to, a sub-editor costs the same whether the individual is on another floor of the building or in Mumbai.

This explains the sequence of events we witnessed in recent years. Intellectual (non-engineering) off-shoring has evolved from human-based data-mining, to number-crunching, to basic design, and now to news content. Reuters (now Thomson Reuters) was the first to jump in 2004, when its financial service opened an office in Bangalore with 340 people. They were writing about quarterly results of Western companies and compiling analyst research. Since then, this facility has grown to approximately 1,600 jobs, with 100 journalists working on U.S. stories.

Non-English papers will be shielded from this transfer. In theory, this will save jobs in Germany, France, Spain or Scandinavia. But in more realistic terms, events might take a different course: unloading selected non-core jobs will help US and UK newspapers respond more swiftly to the market’s downsizing and, we hope, do better than just survive. —FF


The Next Googlitzer Prizes

Let me build on my boss Frédéric Filloux’s point about bloggers. And, to do this, let me start with a quick linguistics lemma about California-speak.

In France, when two engineers review a project, the first one energetically “offers” (that’s an example of California-speak), hammers his views thusly: The only way to solve the problem is… And he expresses an opinion couched in Truth terms. The other techie retorts: You’re an idiot, this is brain-dead, the only way to solve the problem is… And another opinion follows, no less forceful. They’re just bantering, nothing personal and, soon, they get into the collaboration part of the review, give and take, get to a resolution and leave the meeting happy with themselves, the other person and the to-do list.

I tried this in Cupertino, when given charge of Apple’s engineers in 1985. They smiled politely: Thank you for sharing. But I sensed a transparent steel curtain descending between us and no actual communication took place after what I thought was just a manly opening. I knew that hypocrisy is the lubricant of social intercourse, I just forgot that it applied to conversations with techies. I had to learn to speak Californian: a set of euphemisms, mannerisms designed to equivocate and, as a result, to avoid giving offence. This is great, fantastic, I like what you do… All mean nothing, just filler speech designed to move the conversation forward without taking risk. Thank you for sharing means “I hate what you just said, asshole!” This is, as you well know, the land of neologism. Add the politics of large organizations and you get “grinfu–ing”, screwing someone with a big smile. Don’t say But, say And…

Back to the opening salvo above, in California-speak, Let me build on that point is what the French engineer must say to his California colleague in order to be heard. Actually, a gentler view of the deflection is that it encourages collaboration, let me use what you just said as a foundation, rather than excite confrontation.

With this in mind, allow me to register mild disagreement with Frédéric’s view of bloggers. I won’t fall for the easy characterization: the professional journalist versus the interlopers. I don’t write a blog, for reasons I don’t fully understand, but I read lots of them. Naively, I bought several newsreader applications and found out that the free Google Reader did the job very nicely. I can subscribe and unsubscribe to hundreds of blogs, ranging from the sublime to the sordid. (Try “Quantum Physics” and “Zoophilia” in the Reader’s search engine for blogs.) You can even “share”, that word again, items, stories with friends or even export your entire set of subscriptions and give it to a friend or family member as a way to let them see the blogosphere through your eyes.

I agree with FF, the bad news abounds. There is a lot of garbage, nonsense, paid-for people and content parading as impartial views, bloggers echoing each other to the point where ten blogs spreading the same story could trick one into thinking: This must be true, there are ten sources for that story. No, it’s one unsubstantiated rumor repeated ten times over. We’re told there are 17 million blogs and growing; this is a gigantic garbage heap even Wall-E can’t mine for the gems. [I just saw the movie and can’t comprehend the quasi-universal praise.]

All true, but, sorry, enough cream still manages to ascend to the surface to make blogs and bloggers an alternative to the conventional newspaper. Experts and perverts of every stripe, yes, and when I’m burned a couple of times, the subscription dies. Speaking of subscriptions dying, I wonder how long I’ll keep longing for the sound of the newspaper landing on my doorstep in the wee hours. Between blogs and newspaper Web sites, when I open the paper in the morning, I often feel I saw the news item the night before. If I want a knowledgeable discussion of the Microhoo saga, there are two or three bloggers, starting with the almost eponymous Blodget, Peter Kafka, I’m not making this up, and Michael Arrington, who’ll give me better/faster food for thought than the Wall Street Journal or the Grey Lady’s Joe Nocera.

As we mention existing newspapers, for all their wrapping themselves in the mantle of professionalism, how often are they guilty of the sins of cronyism, of rewriting stories seen elsewhere, when not making them up altogether? Numerous New York Times accidents come to mind: Judith Miller’s “coverage” of the Iraq War build-up, Jayson Blair’s fabrications, the scurrilous John McCain sex story and too many more.

Back to the excess(es) argument: there is no good culture without bad taste, without people “going too far”. How do we innovate without breaking things, making mistakes, giving people legitimate reasons to be upset? Yes, legitimate reasons to be upset, but missing the larger point. There are plenty of good reasons to take a dim view of technology, it does facilitate the expression of our lowest instincts. And the Internet is a true revolution for freedom of expression. New genres are emerging and will continue to do so as bandwidth increases change the gamut (and location, think mobility) of available media. As the eternal optimist, I welcome the excesses of bloggers, they’re stimulating, helpful, irritating and fun. And, some day not far in the future, we’ll crown a few of them with something like a Googlitzer Prize. Who knows, a few of today’s journalists might be among them.

Fewer And Newer: Journalism Jobs

Sorry for the winners/whiners of the Oscars of pessimism: journalism will remain as interesting as it used to be. OK, granted: most of the job’s mystique is gone for good; football-field-sized newsrooms and charismatic, seasoned, suspenders-wearing editors belong to the past. So do glossy, reportage-loaded magazines. Many bad things are happening to journalism, including a rise in the outsourcing of core competences such as editing (see this story in the Hindustan Times). But reports of journalism’s death are vastly premature. Actually, the big media shift we are experiencing will provide many opportunities — as long as (and yes, this is quite a proviso) the current professionals adapt quickly and the upcoming generation of news people gets proper training.

First things first: there will always be strong demand for good journalism. Bloggers are splendid, they benefit society and journalists as well. Thanks to the blogosphere, we have seen our congenital modesty suitably cut down. But, structurally, bloggers suffer from an inversion of the ten-to-one rule: in order to get a good journalistic story, you must gather roughly ten times the amount of information you’ll eventually use. It is see-through American coffee morphing into thick, dark Italian espresso. Too often, the blog world works the other way around: tiny facts — borrowed from other sources — diluted into bloated, unedited chatter. Some bloggers are so talented or so specialized that their verbose drivel becomes a must-read. Those are resetting the notion of “most trusted brand”, which was the motto of American TV networks, circa 1970. They are also offering what many journalists can no longer do: focus, obsessive specialization, academic knowledge, etc. But how much do they weigh in the ambient noise? 1%? 10%? (Either way, a lot in absolute terms.) At the other end of the table, journalism is — or is supposed to be — about skills in fact-gathering; it is about explaining, contextualizing, editing and, sometimes, analyzing and commenting. It is not molecular genomics (I prefer this metaphor to the “rocket science” one, sorry), but it is a genuine trade that isn’t learned overnight. That craft won’t disappear. It will shrink for sure, but the demand for great storytelling remains: the New Yorker magazine enjoys — and rejoices in — several million readers, after all. But, above all, journalism will mutate.

New genres will emerge. They will encompass the whole spectrum of journalism on a multiplicity of platforms: multiple layers of text, photo, video, animated graphics. And please don’t tell me it is not noble journalism: click on any multimedia item in the New York Times or Slate Magazine, or, even better, go to the Washington Post-sponsored site MediaStorm to dispel any lingering doubt. I personally don’t know of any member of my professional gang (French journalists in their fifties) who is not looking at the digital tools we enjoy today with a mixture of nostalgia and eagerness. How would these tools have fueled his journalistic passion when reporting from Jerusalem, Moscow or New York?

The most important question is: are we preparing the next batch of journalists to handle such versatility? The answer tends to be no. We can’t blame them, but most of them want to be writers in the most romantic sense. For many, learning the digital trade is more a kind of “passage obligé”, an obligatory rite of passage, than an end in itself. Surprisingly, even their use of the Internet is rather shallow. They visit news sites to avoid going to the newsstand, they download profusely, but few of them blog or go inside the bowels of the beast to satisfy their curiosity. A partial explanation: most of their teachers belong to a digital-averse generation. It will be some time before young journalists grab the tools at their disposal (they had better hurry, because bloggers will). A new kind of journalistic storytelling has yet to be invented. And it will be as compelling as the old one.

The digital era is an opportunity for journalists to regain a great deal of power in the management of news organizations. Let me explain. Twenty years ago, the CEO of Dow Jones said this about the Wall Street Journal’s then managing editor, Norman Pearlstine: “We gave Norm an unlimited budget and he exceeded it!” That’s a pithy quote, indeed. In retrospect, I can’t help but resent a bit the man who caused such a remark. Pearlstine was not an isolated free spender. He was part of a widespread species that dominated newsrooms in those times when newspaper readers were in great abundance. Unfortunately, such carelessness gave credence to the idea that journalists are the antithesis of managers as far as business is concerned. It cleared the way for a transfer of management to a business elite that doesn’t have a clue — and doesn’t want one — about what journalism is.

Consequently, news organizations have been taken over by financial people. They span the full spectrum. The worst are former comptrollers who patiently climbed the corporate ladder thanks to successful restructurings (or brown-nosing). The best are strategists, MBAs with the deal-making plug-in added to their embedded software. (To my surprise — at least in my country — boards tend to prefer the former, who are more docile than the ambitious, visionary kind.) Newspaper organizations are built on silos (the newsroom versus the marketing/advertising, logistics, technical, or administrative crowds), with management carefully maintaining hostility between fiefs, on the divide-and-rule principle. As the news media is in turmoil, this outdated management setup must be revisited, by will or by the force of reality. To do that, producers — i.e. the news people — must extend their reach. Evidently, some jobs are up for grabs. Editorial marketing, for instance. Today, many media CEOs brag about hiring a former Procter & Gamble young Turk as their marketing chief. It usually doesn’t fly very far. The same goes for media as for the high-tech sector: Meg Whitman’s tenure at eBay was not mind-blowing (she was formerly a marketing manager at the toymaker Hasbro), but when someone from the trade jumps into marketing, it really works, cf. Steve Jobs at Apple or Eric Schmidt at Google.

Would journalists be good at editorial marketing? Of course they would. After all, it’s all about product design, audience expectations, strategy and tactics to better address a moving and demanding target. Are they ready to grab the challenge? No. Not a shred. Nor are they ready to deal with IT-powered journalism such as data-mining (a powerful tool, though). They are not up to managing the technical dimension of the Internet that is borderline editorial, such as website structuring: how do we assemble all the components of a site to make the most coherent news product, referencing, search and so on? Search engine optimization, for instance — a critical alchemy on which 30% to 50% of a site’s audience depends — is currently done by in-house or external experts, half techies, half marketers, even though it is an obvious editorial question.

The challenge for journalism schools and universities is to integrate the full scope of what is at stake here, of what we just reviewed. Then they must convince idealistic students that the digital arena is their main professional domain and that technical and business skills are as important as good writing — that is, if they don’t want to feel exploited by bean counters, MBAs and graduates of the Procter & Gamble University… — FF

Technology / Multicore Processors: More is Better, Right?

Lies, damned lies and benchmarks. So goes an old industry joke setting up an ascending order of offenses to the truth. Old joke but alive and well in the latest industry trend: the recourse to multicore processors in our PCs.

Here, multicore means several processor modules (cores) on the same CPU (Central Processing Unit) chip, as opposed to multiprocessors, several separate chips inside the same computer. More computing power inside our computers; this must be good.
Not so fast. Yes, more raw power, but do we know how much extra performance percolates up to the surface of our user experience? Not as much as we’re led to believe.

Why this sudden conversion to multicores? The simple answer: Moore’s Law stopped working the way it did for almost 40 years. Moore’s Law used to predict a doubling every 18 months of the price/performance ratio of silicon chips. As expected, in about twenty years, we went from 1 MHz (the frequency at which the CPU processes instructions) for the Apple II, to 3 GHz (3,000 times faster) Intel chips — for about the same price. But, in the last few years, something happened: the clock frequency of top-of-the-line chips got stuck around 3 GHz. This didn’t happen because silicon technology stopped improving; we now speak of silicon building blocks as small as 35 nanometers (billionths of a meter), or even smaller in pre-production labs. A few years ago, we were happy with 120 nm or larger. So, the surface of things looks good: we still know how to cram more and more logic elements on a chip. But we have trouble making them run faster. Why?
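Before the answer, a quick back-of-the-envelope check of the arithmetic above, as a sketch (the 18-month doubling rhythm and the 3,000x figure come from the paragraph above; the code itself is only an illustration):

```python
import math

# The column's version of Moore's Law: price/performance doubles
# every 18 months. How long should the 3,000x jump from 1 MHz to
# 3 GHz, at constant price, take at that pace?
doubling_months = 18
speedup = 3_000

doublings = math.log2(speedup)            # ~11.6 doublings
years = doublings * doubling_months / 12  # ~17 years
print(f"{doublings:.1f} doublings, about {years:.0f} years")
```

About seventeen years: roughly consistent with the twenty-year run from the Apple II to today’s 3 GHz chips.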

Here, easy basic physics comes in. Let’s say I want to move a one-gram mass up and down once; this will require a small amount of energy, say one Joule. If I repeat this once per second, we have one Joule per second, known as one Watt. Moving to 1,000 times a second, we’re now dealing with a kilowatt. If the frequency moves to 1 GHz, one billion times per second, we need a gigawatt. Going back to chips, they move electrons back and forth as the processor clock ticks. You see where I’m going: the electric power consumed by a chip climbs with the clock frequency. At the same time, the basic silicon elements kept shrinking. More and more electric power in smaller and smaller devices. One Intel scientist joked seriously that processors could become as hot as the inside of a nuclear reactor.
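The same arithmetic in code form, for the curious. The one-Joule figure is the column’s deliberately round number; the closing comment cites the standard first-order formula for CMOS dynamic power, which the column alludes to but doesn’t spell out:

```python
# Energy per operation times operations per second gives power.
energy_per_op_joules = 1.0  # the column's deliberately round figure

for ops_per_second in (1, 1_000, 1_000_000_000):
    watts = energy_per_op_joules * ops_per_second
    print(f"{ops_per_second:>13,} ops/s -> {watts:,.0f} W")
# 1 W, then a kilowatt, then a gigawatt.

# Real chips only escape this because each switching event costs far
# less than a Joule. Still, the textbook first-order model of CMOS
# dynamic power, P = C * V^2 * f, climbs linearly with the clock
# frequency f (and quadratically with the supply voltage V).
```
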
Back to our machines, we have desktop processors that dissipate as much as 150 Watts and require a liquid cooling element right on top of the chip. And we all complain our laptops are too hot for our… laps.

But now, imagine the computer industry calmly folding its arms and telling us: That’s all folks, this is as good as it gets. This after decades of more/faster/cheaper? No. That’s why our Valley is now peddling multicores. We can’t have faster processors (this is mostly left unsaid), let’s have more of them. And look at the benchmarks, more power than ever. This is where the question of performance delivered to the user versus raw power comes in.

First, 1+1 doesn’t equal 2, simply because the two processors sometimes have to contend for a single resource, such as memory. One processor must wait for the other to finish before proceeding. More cores, more such losses.

Second, and much more serious, most of today’s software has been written with a single processor in mind. There is no easy mechanism, either in the processors themselves, or the operating system, or the program itself, to split code modules off and direct them to one processor or another. The situation is getting better as operating systems learn, at least, to dispatch ancillary housekeeping functions to another core, leaving more computing power available to a program that only knows how to work on a single processor. And programs themselves are slowly but surely being updated to split off modules that work independently. Sometimes this requires much programmer intervention, read: time and money. In other cases, automated tools restructure some or most of the code. Still, today’s PC software is far from taking full advantage of multicores. Hence the benchmarks painting an unrealistic picture of multicore performance in the real application software world.
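Here is a minimal sketch of what “splitting off independent modules” looks like in code: my illustration in Python, not a depiction of any particular tool of the era. The same workload runs on one core, then split across four worker processes; the speedup is real but typically below the core count, for the contention reasons above:

```python
# One workload, run serially and then split across worker processes.
import time
from multiprocessing import Pool

def crunch(n):
    # A CPU-bound toy task: sum of squares up to n.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [2_000_000] * 8  # eight independent pieces of work

    t0 = time.perf_counter()
    serial = [crunch(n) for n in chunks]      # one core does it all
    t1 = time.perf_counter()
    with Pool(processes=4) as pool:           # four cores share it
        parallel = pool.map(crunch, chunks)
    t2 = time.perf_counter()

    assert serial == parallel                 # same answers either way
    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s")        # faster, but not 4x
```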

And, third, there is yet another fly in the benchmark ointment. Some activities are inherently parallelizable: ten people will search ten library shelves for a single book faster (statistically) than a single person. Four people will definitely paint four walls faster than a lone painter (assuming no contention for a single paint bucket, see above). But other activities are inherently sequential: you must wait for the result of the previous operation before proceeding with the next. Think of spreadsheets, where a complex, real-world financial model cannot be computed in independent parts: each operation feeds the next until all the formulae have been computed and, in some cases, iterated. There are many such applications, weather simulation being one, because it relies on a type of equation that cannot be made to compute in parallel. As you can imagine, there is a whole body of computer science dedicated to parallelism. Let’s just say there is no real substitute for Gigahertz, for faster chips. That’s one of the reasons why weather forecasting hasn’t made much progress recently.
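That body of computer science has a classic one-line summary, Amdahl’s law (my naming; the column doesn’t cite it): if a fraction p of a program can run in parallel, N cores can never speed it up beyond 1/((1−p) + p/N). A sketch:

```python
# Amdahl's law: the speedup ceiling the sequential part imposes.
def amdahl_speedup(p: float, n_cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / n_cores)

# Even a program that is 90% parallel tops out below 10x, no matter
# how many cores you add: the sequential 10% dominates.
for cores in (2, 4, 8, 1024):
    print(f"{cores:>4} cores -> {amdahl_speedup(0.9, cores):.2f}x")
# 2 -> 1.82x, 4 -> 3.08x, 8 -> 4.71x, 1024 -> 9.91x
```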

Multicores are nice, they do add some performance, but they’re only a band-aid until we find a way to make faster chips. — JLG




The J-curve of the global print press

The J-curve is an economics metaphor, a way of saying things will get worse before getting better. That’s the prospect for the global print media sector.

For the American press, advertising revenue keeps dropping at a steady yearly rate of 12% to 15%. No industry can withstand a sustained double-digit decrease of its core business. This is not erosion, it is a collapse. And since advertising represents 70% to 90% of the cash flow of US dailies, the sense of urgency is morphing into panic. Of course, some components of this decline, such as the credit crisis shock wave, are specific to the American market. But we can consider the American market an advanced indicator for the industry. With this in mind, watching the reactions of two opposite cultures, the US and France, could be enlightening.

The US industry was slow to react at first. But now the pace is accelerating. In the first six months of 2008, 4,494 journalist positions were lost in the United States. The latest busload was announced last week at the Los Angeles Times, where 250 staff members, including 150 journalists, will soon be gone (and the number of published pages will drop by 15%). No doubt the shrinkage at the LA Times will spread elsewhere.

The press is now in “survival mode”, as one analysis puts it. Big newsrooms like the New York Times’ (a staff of 1,400) will soon be history. The market simply can no longer sustain such media battleships. That’s sad for the great trade’s mystique, but there is no time for hand-wringing. We must instead tame and ride the shift, and save what can be saved. The US will be much faster at the restructuring game than Europe. Downsizing will be more decisive and quicker. In less than a year, we already went from a hiring freeze, to buyouts of contracts, to mass layoffs. That’s sad, brutal, unpleasant, but it will clear the way for the major shift ahead. And most American companies — as long as the financial markets do not breathe too much down their necks — will be left with sufficient cash to invest in new, diversified, more agile kinds of media (and yes, for the bulk of it, much shallower ones…).

In a country such as France, the course of events might be different. Let’s turn, for an example, to a recent report on the evolution of the French print press. Jean-Marie Charon, a well-respected French media scholar who also happens to be fiercely independent from the ever-present lobbies of the trade, led the working group. (Disclosure: I was a member of the group; it gathered here and there for nine months or so; I kept quiet until we found out the report was widely circulated.)

To make it short, the report’s conclusions rely on two scenarios. One is soft, saying the press will somehow mutate and cohabit with online developments, but the basic structure will remain, with some refocusing. The other scenario describes a major shift toward digital media, with some casualties. New breeds of journalists with digital skills should emerge; they will contribute to the reinvention of journalistic “genres” suited to the Internet era. By force, today’s players will adapt or face extinction, as agile pure players wait in ambush, ready to take over the slots left undefended. Drafted months ago, the report’s conclusions appear to be strengthened by ongoing industry events.

Now, guess what is happening to this report? All the lobbies you can think of have obstructed its release. For a start, publishers were outraged. Some old trade fogies contended it was out of the question to publish a scenario featuring such an industry upheaval. Bad timing, they said, as the press gathered its rags and prepared to beg the French government for another shot of taxpayer money (in France, subsidies already account for 10% of the revenue of daily newspapers). Next September, President Sarkozy will hold a national conference on the print press. The shindig is loftily dubbed “Etats Généraux de la presse”, a shameless historical reference to the times when French kings held big public debates to address a national crisis. You see, we are right into the twenty-first century. Every old (and not so old, unfortunately) press baron is getting ready for the event, rehearsing sob stories, thinking of ways to shame a complacent government into “one last dose” of life-support funding.

That kind of French “corporate welfare” is not a stimulus for change. Neither are the unions. Technical workers and journalists are on the same page — no release of the report — but for other reasons. They refuse to even look at recommendations for a drastic change of their status. Fact is, with a few exceptions, French newspaper executives and newsroom managers are still “digital-averse”. This is great news for the media pure players set to emerge in the coming months, but not so great for the future of the French print media.

Not every European country suffers from such a bad alignment of the planets. Nordic countries have been able to reinvent themselves quite quickly thanks to four factors: the big players enjoy a controlling position in their market, resulting in solid financial health, in having the means to make changes; a cultural long-term approach, also allowed by the capital structure of their media groups; an obsession with the training and intellectual openness of their managerial elite; and strong, disciplined leadership.

Countries that yield to corporatist lobbies and rely on government charity will take much longer to adapt. For them, the bottom of the J-curve is still far, far away. –FF

A Few Quick Links to Monday Note #42

Newspapers Downsizing – NYTimes and Herald Tribune to merge sites. The move was bound to happen. A growing number of NY Times stories were appearing in the Herald Tribune, and the NYT Co. is bleeding ad revenue. There is no longer room for duplication. The merger on the web is the first step (pretty easy to take), and the newspapers will follow. It is a matter of when, not if, the IHT brand disappears. (Story in the IHT)

Online Advertising – Publicis Group launches VivaKi, a weird name (how much did they pay for such a neologism breakthrough?) for a global initiative in which the No. 3 advertising group will combine all its digital forces. Says Maurice Levy, Publicis Group Chairman: “Digital revenue should represent more than 25% of the group’s total revenue by 2010, compared with 18% in the first quarter of this year.” (Story in the FT)

Social Networks – LinkedIn worth $1bn. At least according to VCCW (Venture Capital Common Wisdom). This is based on the $53m investment coughed up by a group of VCs including Bain Capital Ventures, Sequoia Capital, Greylock Partners and Bessemer Venture Partners. (Story in Condé Nast Portfolio)

Aggregator – Slow Growth for Google News. In May, Google News got only 11.4 million users. It ranked No. 8 among news sites, far behind Yahoo News, which was No. 1 with 35.8 million visitors. Its growth rate of 10% over the last two years is far slower than, for instance, MSNBC.com’s, which grew by 42 percent, adding 10.4 million users. Proof that the algorithm is not everything. (Interesting story in the New York Times)


(Finally) — The best bang for the buck. Find out how the clever tiny advertising agency Lastfool (no website in sight, sorry) made a funny viral movie for a cell phone earpiece maker. Small budget, many viewers. The funny part is the counter-strike by an anonymous member of the French mobile phone lobby…

Nokia makes Symbian Open Source: Declaring Victory?

When a $oftware company experiences a sudden access of generosity and donates its first-born to the world of Open Source, what are we to think? They made so much money it was embarrassing? Or it’s an act of desperation: We can’t sell it, maybe they’ll use it if we give it away. Uncharitable minds add: And then we’ll make money telling others how to decipher inscrutable code and by explaining away bugs — not to be confused with fixing them. More politely: Give away the code and sell services around it. It can work, ask IBM and Red Hat. Or look at Google, which wouldn’t exist without the Open Source movement and its star, Linux, powering its servers, one million of them and counting.

Back to Symbian, what’s the real story? Admitting defeat or, finally, having found a way to make money with the OS? Knowing Nokia, certainly not the former. It is today the number one smartphone maker, ahead of RIM (BlackBerry) and Apple, and it has no intention of ceding the throne. But it’s not about making money with the Symbian OS either; that’s impossible. Let me explain.

Once upon a time, before Newton, Palm and Pocket PC, Psion, a British company, was the king of “organizers”, later called PDAs, Personal Digital Assistants. Through the twists and turns of the genre’s history, perhaps a topic for another column, Psion lost its crown and went out of the PDA business. But the OS inside the Psion was a gem (this is an ex-user speaking): it multitasked without crashing. More twists and turns, and a joint venture was born, led by Nokia and Motorola, with followers such as Sony Ericsson and Samsung. Called Symbian, the company got the Psion OS. Symbian was to develop software for smartphones and make money licensing it to its partners.

Bad business model, bad timing, bad structure. Bad business model because handset makers don’t (or didn’t) actually care for software and don’t want to pay anything of significance for it. They (and their masters, the carriers) spend much more money on the nicely printed cardboard box than on the software inside. Bad timing because the smartphone market wasn’t really there when Symbian was born 10 years ago; it only woke up around 2005, when Nokia, RIM and Palm totaled a few million units shipped that year.

Lastly, bad structure. No one was really in charge: the owners/competitors each wanted different features and a different user interface; application compatibility was nonexistent, unwanted even in many cases; and development tools weren’t up to the power and quality PC developers enjoyed. Symbian kept losing money and Nokia, viewed as the main beneficiary of the messy joint venture, kept pouring cash in.

Today, we see that the smartphone market did more than wake up. RIM’s business grows by more than 100% a year; Apple, while number three worldwide, manages to shake up the industry and to look bigger than it is — or to project an accurate picture of its future, we’ll see; Google announces its Open Source smartphone OS, Android; Microsoft acquires Danger, the maker of an interesting smartphone, the Sidekick, and proclaims its intent to “own” 40% of the market by 2012.

All this, in my view mostly Apple and Android, pushed Nokia to try and regain control of its OS future. To do so, it buys out its partners and becomes the sole owner of Symbian, now called the Symbian Foundation, sounding very non-profit.

Good, you’ll say, they want to be in the driver’s seat (unintended obscure geek pun here…), but why go Open Source then? My guess is that it was a condition of buying the partners out. Nokia: You have access to the source code, my dear friends, you have total freedom. My other hunch is that the license won’t be the most constraining of the Open Source variants. By this I mean there is the GPL license, which obligates you to share every improvement (or bug) you make and also forces you to put into the Open Source domain any code that uses, connects to, the GPL software you’re enjoying. Everything must become Open Source. Other licensing arrangements let you make contributions to the public Open Source domain but let you keep a wall between your private code and the public one. Whether this is “true” Open Source or not is the topic of heated arguments hopelessly mixing principle and money. Type “Open Source arguments” in Google for a sample.

I doubt Motorola, Samsung and Sony Ericsson will keep using Symbian Open Source code for long; they’re likely to go to one of several mobile Linux vendors: better than developing their own OS code and safer than hoping Nokia will give away improved Symbian code. Just last week, LIPS, the Linux Phone Standards group, decided to merge into LiMo, the Linux Mobile Foundation.

This looks like a smart move by Nokia: regain control of its OS future, look politically correct and throw its competitors into the jungle of platforms (more than 60 worldwide, I’m told) out there. A beautiful mess, opportunities galore, like microcomputers before Microsoft and Apple made them PCs.

Nokia: control like Apple, sound like Google. –JLG


Web measurement must be free and transparent

In the recent history of technology, success is not often related to superior performance. Take MS-DOS, for instance: it dominated the operating systems of personal computing because it was the only one available at a key moment in the evolution of the PC, and secured by an exclusive license between Bill Gates and IBM. (Had Darwin been at work, we probably would have ended up with something better, but a clever entrepreneur, son of a prominent Seattle attorney, and his own lawyers were running the show at the time.) There are other examples, like the stupid keyboard I’m using to write the Monday Note. The positions of the keys descend from a layout designed to actually slow down typing on mechanical machines. At the time, typewriters were unable to keep up with the dexterity of typists. Now we have spell-checking software correcting mistakes as we type, but we’re stuck with the impractical keyboard. In that instance, no contract is responsible, simply the weight of habit, and the equally heavy burden of the backward compatibility of education (I don’t see a company buying, all of a sudden, Dvorak keyboards for its people, even though any PC or Mac can handle them).

Let’s come back to this century and talk about measuring website audiences. We are witnessing the same growing dominance of a system without regard to its performance. In this case, the system is Nielsen. A truly imperfect technology, to put it nicely, but widely adopted by the advertising community.

Two systems: to measure the audience of a website, you have user-centric and site-centric systems. Nielsen falls into the first category. It relies on the old, unreliable process long in use for television (that’s Nielsen’s DNA, actually): a panel of people periodically queried on their viewing habits. Perfect to assess the number of Joe Six-Packs on their couches watching a football game, but totally unfit for the Internet. It would be like, in a biology lab, asking the lab mice how they feel instead of counting cell divisions in a Petri dish. Site-centric measurement is the Petri dish, the real “quant” analysis. Basically, software that follows users when they land on your site: what they see, for how long, all with increasingly sophisticated data reduction and display interfaces. There are many vendors and competition is fierce. In the early days of the Internet, these systems were prone to some amount of cheating, but order prevailed quickly. Now, most are certified and no serious player would dare tamper with its stats system.
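To make the Petri dish concrete, here is a toy sketch of the site-centric idea (my illustration, with no relation to any vendor’s actual product): every request is counted as it happens, so the numbers are exact and available in real time, no panel and no monthly report:

```python
# Toy site-centric measurement: count every page view as it happens.
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

page_views = Counter()   # path -> hit count
visitors = set()         # naive "unique visitors": one per client IP

class MeasuredHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        page_views[self.path] += 1
        visitors.add(self.client_address[0])
        body = f"{self.path}: {page_views[self.path]} views\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Stats are current to the last request: no waiting for a report.
    HTTPServer(("localhost", 8000), MeasuredHandler).serve_forever()
```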

Now, let’s compare performance. Where site-centric measurements happen in real time, the relevant data released by Nielsen are published monthly. Yep, monthly. Like in the good old days of broadcast radio. On top of that, Nielsen websites look like a social security database from the Soviet era. Rows and columns, no comparison tools, clunky features, bugged like a Louisiana swamp. You have to perform tedious exports to Excel files (friends even sent me screen captures!) to get the analysis that a good Ajax-based website would deliver in a split second, with graphics ready to be exported into any document or presentation.

As a result, website operators are like the owner of a sweatshop in the garment district: they rely on two sets of books. The official one — in this instance, the approximate Nielsen monthly data fed to the ad market — and the unofficial, precise one. Because, to add insult to injury, the results of the two systems differ widely: when you have 1 million unique visitors on Nielsen, your internal stats tools will probably yield 1.5m or 2.3m, depending on where your site tends to be visited the most (at work or at home).

How did such a lousy system become the standard in the advertising business? Two possible answers. First, the site-centric players were a bit slow to organize themselves, and when they did, it was too late. Second, it’s a question of culture. The advertising sector is (still) dominated by the TV/radio mentality — a Nielsen fiefdom. Therefore, when Nielsen came saying “let us bring order to Web stats, it’ll be a piece of cake”, nobody questioned that statement. Now, because Nielsen is a powerhouse in the media buying milieu, we are stuck with that company. (Other reasons combine the usual laziness and conservatism.)

How will it evolve? Again, two (tentative) answers. The first is generational. I would bet that the upcoming generation of Web publishers will be more committed to transparency in terms of audience and basic data. (That’s not the case for print media executives; they were unbeatable at constantly misleading the ad market with bogus audience figures. Now, facing the Internet era, they are paying the hard price.) The second factor is called Google. A year ago, it introduced Google Analytics, a great site-centric stats system — available for free at a click near you — that an increasing number of sites are using in tandem with their usual tool set. It did the same with MeasureMap, a tool dedicated to blogs. Now, Google is developing new services (also free) designed to help advertisers plan their campaigns. (When such a plan was announced last week, the stock of ComScore, a publicly traded firm that tracks Internet usage, fell by 23%.) Is this new foray of Google’s a problem? Yes and no. Yes, the domination is somewhat worrisome, but it is now a given fact. And no, because if we consider (as I do) that the Web has to be transparent about its basic audience data, then relying on Google is not such a bad thing since it already knows everything about our sites. And that will leave plenty of room for highly specialized firms delivering customized, value-added audience analysis that will be worth the price. –FF


Wait, Wait, This Is My Stuff!

Social networks and the PC becoming an arranged knowledge network

Let me start with an example; hopefully, the concept will emerge. Facebook. The latest fracas is their conflict with Google’s Friend Connect, a technology that gives any web site simple tools to acquire social networking features.

As a result, users of my organic gardening site connect, share ideas, recipes, pictures with their friends on other participating sites, such as Facebook, hi5, Orkut and many others (social networking or not). The point of Friend Connect is not being forced to become a member of other sites, just sharing. A side effect is that it becomes easier to take my personal data from Facebook and move my information elsewhere.
No, no, says Facebook. After initially agreeing to the Friend Connect interchange, it blocked access.

This raises the question in the title: is my Facebook information mine or not? The company has spent upwards of two hundred million dollars building a “free” service. The value Facebook counts on to generate advertising revenue is what they felicitously call the social graph. As the name suggests, this is information about me, about the people I connect to, what we like, pictures we share, music recommendations, games we play, purchases we make, invitations to events.

Everything about everyone, arranged in a knowledge network. A slight exaggeration, but you see the idea. Not just tons of details about me, but a web of such details. This leads to the advertiser’s wet dream: ads focused on one individual, at the right time. Gee, Joe just told his friends he’s got a new job, let’s see if he’s in the mood for a new car or a new suit, or for inviting his best friends to a celebratory dinner. For you, special prrrrice today!
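To see why “graph” is the right word, here is a toy sketch of the data shape (my illustration, obviously nothing like Facebook’s actual systems) and the kind of one-line targeting query it enables:

```python
# A toy social graph: people, friendships, declared interests, events.
friends = {
    "joe": {"ann", "bob"},
    "ann": {"joe", "bob"},
    "bob": {"joe", "ann"},
}
events = {"joe": ["got a new job"]}
interests = {"ann": {"cars"}, "bob": {"restaurants"}}

# The advertiser's dream query: who just announced a life event,
# and what do the friends he might celebrate with care about?
for person, news in events.items():
    circle_interests = set().union(
        *(interests.get(f, set()) for f in friends[person])
    )
    print(person, news, "-> pitch:", circle_interests)
# joe ['got a new job'] -> pitch: {'cars', 'restaurants'}
```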

Facebook is currently under investigation by Canadian authorities for its ways with user privacy, and we recall last fall’s stumble with Beacon. Users weren’t pleased to discover Facebook passed information to merchants without their knowledge and consent. The plan was creepy: even when users weren’t logged on to Facebook, some of their moves were recorded and passed on to “partners”. There is a pattern here: Facebook thinks it owns my data. This is the gold mine it wants to exploit, and it doesn’t like the idea of the data flowing somewhere else (read: Google).

They are not alone. Many suppliers in our PC/Internet life clearly think they have extensive rights over our machines and our data. I recall the incessant Orwellian demands to download Windows Genuine Advantage (a nice bit of newspeak) to enable operating system and Office updates. But I already proved last week I have a genuine copy of Windows! Never mind, do it again. In ironic ways, it gets worse with companies such as Symantec and their security products. Once installed, they are exceedingly difficult to remove. This is for your safety, you see. We conceal key bits so the virus bad guys can’t remove them. Well, no; keep insisting and Symantec will reluctantly tell you where to download a removal tool that the bad guys can use as well. –JLG