DIS: a view from the Valley

Modest and proud of it, that’s us. Our perch at a center of innovation gives us the “right” to opine about almost anything, from biotech to movies, Net politics, wireless carriers and operating systems. So, why not mull over the future of newspapers?

Let’s deal quickly with the formula: I agree with Frédéric’s prescription for the DIS. As described in last week’s Monday Note, each medium, each prong of the integrated DIS (new newspaper, laptop, smartphone) has its features, its “rules of the genre”, its specific use and business model. Business model is a little abstract for me; let’s say money pump: the pockets we pick, advertisers, readers, and how.

Case closed, it’s a mere matter of implementation, right?
In the Valley, “a mere matter of implementation” is a code phrase, a tongue-in-cheek way to say we think we know the What but not the How. As in: to lose weight, all you need to do is eat less and exercise more, forever. With the DIS, I see the question morphing into Who will do it? Fresh new money for an ab ovo entrant, an existing newspaper empire such as the New York Times or Rupert’s, or an existing enterprise outside of the newspaper world, Google, Tata or the Quandt family (they control BMW), as examples, realistic or not.

Let’s pause for a detour in the past: Exxon Information Systems.
In the seventies, the Big Oil company charged the hypnotists at the Boston Consulting Group with designing a diversification strategy. Oil is running out, OPEC is out of control, Exxon needs an alternative future. Information is the oil of the 21st century, chanted the Boston marabouts. (The Robber Baron from Redmond hadn’t emerged yet, but the BCG sees into the future.) So, Exxon started collecting little or not so little information systems companies, ranging from Intecom to Qwix, Qwip, Vydec and Zilog. The kommentariat bought it, Fortune Magazine sagely praised the diversification, the cover of Business Week asked: Exxon’s Next Prey, IBM or Xerox?

It all ended up in a $4 billion hole. I know: I, too, bought the story and briefly ran their French subsidiary. And, less than six months into the job, I decided I needed out. Right idea, wrong culture. We forgot Culture Eats Strategy For Breakfast. This was evident at Exxon, a well-managed company with no cultural clue (and no clue about lacking a clue) about the alien ways of computer people and technology.

Back to the DIS: fear someone with the right idea, armed with the right strategy but clueless about the people and the technology.
In the Valley, experienced, successful executives and entrepreneurs open a winery or buy a restaurant. You see, we know restaurants, we’re wine connoisseurs, we’ve been to the best ones around the world, we’ve swilled the grandest vintages. Wags call these pursuits buying oneself a phallic extender – these deluded individuals are all male, women are more sensible. These guys truly know how to be diners and wine tasters, but they know worse than nothing about the tough, thankless restaurateur trade or the bottomless vintner métier.

We need not look further than my country of birth to see other examples of Gallic phallic pride, of talented industrialists buying themselves an “organe de presse”. The malady is widespread and tells us big enterprises with big wallets probably won’t succeed in bringing a DIS to the world, try as they might.

In the Valley, we have this well-known, sunny view of entrepreneurs.
As a result, we could be tempted to think a totally fresh start will do it for the DIS: an experienced team of media and technology entrepreneurs with gobs of patient money from the likes of Kleiner Perkins, Sequoia or NEA, to name the firms ready to place big bets.

There is a small problem with the big idea: the business model doesn’t work like a venture investment, the rewards are too small for the risk.
As previous Monday Notes have pointed out, advertising revenue sharply declines when moving from paper to the Web. And there is Google, whose riches come from pimping, sorry, selling advertising on other media, not from being itself a new medium. So, we’re left with existing media groups. One gives us hope: Rupert Murdoch’s News Corp. He’s not exactly a kid fresh out of college who doesn’t know the word impossible. In an apparent paradox, his age, 77, is an advantage. He is, so to speak, not afraid to die; he’s repeatedly succeeded against the advice of the wise. Murdoch managed to take over choice properties such as the Times of London and, damn the Cassandras, improved them. Too early to say for the WSJ and no such luck for MySpace yet. The latter could be a case of cultural deafness. Still, my hope lies with a media group finding the will or the enlightened dictator to “cannibalize” its existing business rather than silently capitulating to its fate. This excludes most publicly traded groups: Wall Street hates cannibalism. As a result, the first step in the conversion to the DIS is a leveraged buyout; the group becomes private so the surgery takes place behind the curtain. –JLG

Fiction: How Steve Jobs Cuckolds AT&T

Steve shimmers into a bar, materializes next to Dan Hesse, Sprint’s CEO, who is crying in his mojito, and whispers: I can fulfill your fondest dream. You’re the Devil, go away! No, I’m merely Steve Jobs and I want nothing to do with your soul or your chiseled body. Relax, it’s just about money.

A little bit of context before we move to the How of Steve’s bargain.

In the US, we have three main carriers (sorry, T-Mobile), AT&T, Verizon and Sprint. Verizon appears to have the better, more modern (EVDO) network.
AT&T is rapidly upgrading to what is known as 3G, a world standard, competitive but not compatible with EVDO. Sprint, the smaller one, has EVDO, almost identical to Verizon’s; it is losing ground to the two big ones. The Sprint-Nextel merger is a disaster, to the point where Sprint wants to get rid of the company it acquired for $35 billion in 2005. Sprint’s revenue is falling: -11% when compared to the same second quarter last year, this in spite of introducing a $99 Everything plan, unlimited voice, data, music, video. “Some restrictions apply”: look at the minuscule print here, at the bottom of the screen, tiny white characters on a black background. In the almost illegible but instructive gibberish, they have the nerve to add: “Other restrictions apply. See store or sprint.com for details”. But I am on the Details Page on sprint.com!
(Intrigued, I checked: Verizon does a better job of spelling out its conditions and AT&T has the best-organized page of the three.)

And, for the first six months of 2008, Sprint has lost 2 million subscribers, nothing to do, of course, with the reality and the perception of Apple smartphone sales: probably more than 10 million units in 2008, a majority of them in the US.
Now we understand why the CEO is in his cups.

Steve whispers: Dan, look at the iPod Touch here. We’ve added a microphone, already available from third parties, and we grafted a Sprint radio, liberated from Jeff’s Kindle. It’s not a telephone. No, we have this exclusivity agreement with Ma Bell. In 2007, we let them say it was for five years. Now, with our 3G product, it’s been “extended” to 2010. Who knows, next year we’ll extend it to 2009.

Offer this iPod Touch with one of your All You Can Packetize plans. I’m sure the iPhone developers will put one or more Skype-like applications on it, VoIP software. You won’t mind, right? You’re not as uptight as AT&T outlawyering the use of an iPhone as a 3G laptop modem. This iPod is not a phone, it’s an Internet device, you’ll sell millions of them, your errant subscribers will return to Sprint’s fold. And you’ll keep your job. What do you say?

Awright, stop drinking that stuff and sign here. –JLG

What a modern newspaper will look like.
Inventing the DIS.

A few years ago, someone involved in the rescue of the French newspaper Libération asked me what I would do to save the paper. The question meant a lot to me. I had spent a total of twelve years at “Libé”, many of those when the paper was at its best (I even enrolled Jean-Louis Gassée as a columnist at the time).

This is what I told the owner’s representative:
- One: Dump the idea of a daily paper. Too expensive. Too much competition with the Internet. Distribution in France is hopelessly costly and unreliable.
- Two: Equally allocate journalistic resources to two products, a website and a weekly paper. The website (and its mobile version) covers daily news. The weekly is a light and focused Friday magazine: a small number of well-crafted, value-added stories (investigative pieces, in-depth news analysis, great profiles), great photographs (Libération was once renowned for its piercing, memorable pictures).
- Three: Dump your current printing contract; it only produces a paper where the ink sticks to the reader’s fingers. Pick a modern printing plant, one able to make a 60-page magazine, tabloid-sized, with a look and feel comparable to classy British Sunday magazines.
- Four: Restructure the newsroom. Not a little, drastically. Keep the well-known bylines (I meant those who work), keep the editors who will preserve the standard for the news gathering process. Flatten the organization (French papers, like American ones, have about ten layers of management in the newsroom). Don’t do a buyout like the four you already did (in each instance, the best people took it; it was an IQ test). Inject new blood, there is plenty of young talent out there. Outsource whatever doesn’t make the paper’s style and substance.
- Five: Build on your brand. It is a terrific, undervalued asset, your poor management has downgraded it to charity business (I was even more diplomatic, but that was the idea).

Needless to say, Libération chose a different path: mostly flattering the oldest segment of its shrinking readership and, therefore, sliding slowly down the slope of complacent irrelevancy. In marketing theory, this is known as “following your demography to the grave”.

Was a turnaround of such magnitude feasible? Perhaps not. Too much financial and human pain. Maybe the very fabric of the paper would have been lost in the process. Maybe. But I’ll always think this paper, which used to be the most brilliant of its time, missed an opportunity to regain its avant-garde status.

Like most of my generation, I don’t see life without newspapers. Well, without something that fulfills the theoretical functions of a newspaper (which, in turn, open the door to other forms of news products).

Two things strike me though.

The first is the cliff-like drop in newspaper advertising revenue. (Read this stunning account in last week’s NY Times). Speaking of The New York Times, its debt is approaching “junk” status.

The second is the number of news junkies (look around you, not at me) who give up physical newspapers without any visible withdrawal symptom. They simply replace one interface with many: web, mobile internet, RSS feeds, a good laser printer to enjoy long articles in bed or at breakfast.

This leads me to wonder: knowing what we know today — shifting advertising market, readership changing habits, modern production settings — what would a modern newspaper designed from scratch look like?

The DIS (Daily Information System) core features:

1. No more one-media setup. Today’s stand-alone daily is on deathwatch. As a DIS component, it has a future. Let’s face it: pure news, breaking news, developing stories now belong to the electronic medium. Radio, mobile Internet, website: when speed is key, the paper is dead. Therefore, a DIS must allocate resources flexibly between electronic and paper versions. The survival of the paper is not conceivable otherwise.

2. No more 365 print runs a year. The paper must not be printed every single day of the year. Readers don’t need it; the advertising market no longer supports daily printing. Relevancy and added value are the only allowable motives for a newspaper, not day-to-day obligation, a stricture of the pre-Internet era. Now, as long as a medium enjoys a comparable audience for its electronic and print products, a newspaper can afford (enjoy is a better word, as in financial health) an intermittent publication pace. A sustainable model assumes publishing three or four times a week.

Wait, it makes more sense than it seems when looking only through today’s lens: since breaking news and updates are on the web and mobile, the paper is devoted to in-depth journalism (news analysis, reporting, investigative pieces, profiles). Frankly, as long as hard news is available elsewhere and controlled by the same editorial team, who cares if an analysis of Vladimir Putin’s strategy in Georgia has to wait a couple of days? (Actually, more thinking and editing time will make it better.) There, relevancy majestically trumps immediacy. Majestically? Think of the regard in which the NYT’s editorials and columns are held.

From a cost perspective, this model makes a huge difference: no more dual labor shifts, better profitability of the ad space (less discount for slow circulation days).

3. The price equation: paid or free? I lean towards the free model. Here is why.

- First, most newspapers are already free, or almost: from the Times of India to the Washington Post, advertising makes up the bulk of their revenue. The reader paying to support his paper? This is mostly an illusion (France is an exception; its press is expensive, elitist… and dying).

- Second, like it or not, Generation X can’t see information any other way than free.
- Third, a sophisticated free newspaper can have a distribution system as targeted and precise as a paid one. Today’s techniques for spotting audience groups are unprecedentedly refined, much more efficient than dumping a stack of papers before a newsstand at 5:30am. By factoring in socio-demographics, hourly habits, even the news cycle and weather conditions, distribution can be laser sharp.

Readers like free papers. Research shows they find the concept friendly, generous, practical. Many free papers launched as a defensive move by their publishers turn out to be embarrassing successes.

There is an alternative to the free model: a very low price. It yields a better measure of readership and discourages people from discarding the paper after 30 seconds of scanning. (For the best free newspapers, the proportion of premature evaluation readers turned out to be small.)

4. A more sophisticated sales model. Airlines hate empty seats; look at what they do: dynamic pricing, rates changing every minute. Yes, a deal will disappear before your very eyes if the computer’s instant load forecast says so. Contrast this with newspapers: the advertiser is asked to pay the same rate per square centimeter every single day of the year (plus or minus 20% with good negotiation skills). Weirdly enough, dynamic pricing has percolated into broadcast media but not into the print press. Why is that? The size of the inventory — i.e. the number of slots — does not explain everything. The roots of the problem lie in the advertising food chain, in its creaky conservatism. It starts with the sales manager: there, the preferred staff performance metric is the number of appointments the salesperson manages to stuff into a week (bear with me: it’s pathetic). Then, we move to the media buying agency’s struggling contortions to justify its presumed competency.
For the business plan of a modern DIS, my first move would be hiring a quant PhD. I’d task the brainiac with building a tri-media (paper-mobile-web) dynamic pricing model.
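To make the idea concrete, here is a toy version of such a pricing rule, in the airline spirit: the rate tracks the forecast “load” of the ad inventory. Every number, and the load-forecast logic itself, is hypothetical; the quant PhD would do far better.

```python
# Toy dynamic-pricing sketch for ad slots; all numbers are hypothetical.
# Like an airline seat, an unsold slot is worth zero once the issue is
# printed, so the rate tracks the forecast load of the ad inventory.

BASE_RATE = 100.0  # list price per slot, arbitrary currency

def slot_price(slots_total, slots_sold, demand_forecast, days_to_print):
    """Rate rises with forecast load; unsold space is discounted near deadline."""
    forecast_load = min(1.0, (slots_sold + demand_forecast) / slots_total)
    # Fire-sale discount: one day out and the page still looks half empty.
    discount = 0.5 if (days_to_print <= 1 and forecast_load < 0.8) else 1.0
    return round(BASE_RATE * (0.6 + 0.8 * forecast_load) * discount, 2)

# Hot issue, five days out: the rate climbs to a premium.
print(slot_price(40, slots_sold=30, demand_forecast=15, days_to_print=5))  # 140.0
# Slow issue, one day out: better a cheap ad than an empty page.
print(slot_price(40, slots_sold=10, demand_forecast=5, days_to_print=1))   # 45.0
```

The point is not the particular coefficients, it is that the rate moves with demand and deadline instead of sitting still all year.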

5. The product interface and production. Low-quality newsprint on a broadsheet is like vinyl records for the music industry. Time to switch to iPods, folks. The contemporary recipe: small format, no more than forty pages, paper that doesn’t bleed ink, pages glued or stapled, good quality printing to justify premium pricing to advertisers. Indisputably, it works, cf. the tremendous success of the French 20 Minutes (2.5m readers). And layout must be as modular as a Lego set.

This also means the end of cathedral-like, union-controlled printing plants. Small printing presses, able to do profitable runs of a few thousand copies, are key. And no more owning printing plants. That’s passé. Now is the time for well-designed contracts that reflect the new medium’s flexibility.

Modern printers can also economically handle upstream distribution tasks such as preparing bar-coded bundles of papers at the end of the printing chain to make the truck distribution process more efficient (unthinkable in France or the United States due to union obstruction, of course).

6. Staff structure. Keep the org chart as flat as possible. A newspaper must be run by no more than five top editors, plus a few section heads. That’s it. Three or four levels of management maximum, not ten or twelve. The complexity (hence the cost) of a newsroom tends to grow with the square of its staff size.
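The square law is just the arithmetic of pairwise communication channels: n people can form n(n-1)/2 of them. A back-of-the-envelope check, with illustrative staff sizes of my own choosing:

```python
# Pairwise communication channels among n people: n*(n-1)/2, growing ~n^2.
# The staff sizes below are illustrative, not actual newsroom figures.
def channels(n):
    return n * (n - 1) // 2

print(channels(30))   # a lean newsroom: 435 possible channels
print(channels(120))  # 4x the staff: 7140 channels, roughly 16x the complexity
```

Quadruple the headcount and the coordination burden grows sixteenfold; hence the flat org chart.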

Outsource non-core competencies, including journalistic ones. By core competencies, I mean what really defines the identity, the orientation of a newspaper: national coverage, foreign affairs, economy, and culture. Conversely, sports, consumer news, science, style, travel can be outsourced to specialized entities, on a contractual or on-demand basis. Fewer people in the core newsroom means a smaller chain of command and therefore a much healthier metabolism. No place to hide, bosses included.

Outsourcing includes the recourse to outside experts. Experience shows that many stories would be vastly improved with input from technical experts (legal or economic areas come to mind). A respectable paper maintains a network of experts and scholars, real ones, not quote machines.

Oh, by the way, to the best of my knowledge, an engineer at Apple is not especially encouraged to work on the side for Cisco or Google. Therefore I don’t think a journalist should be allowed to moonlight for other media outlets. It’s fine to have some star writers who are going to enhance the visibility of a title by writing best sellers or hosting TV shows. But, frankly, how many fall into that category? Two percent? Truth is: the Woodward type is a scarce commodity (and even then: according to his contract with the Washington Post, he can work on any subject, as long as he gives his paper first dibs on his scoops). Therefore, salaries must be adjusted accordingly (kill the idea of low-cost journalism: would you trust a low-cost neurosurgeon?).

7. The test and learn approach. A virtue of an Internet venture lies in its ability to morph and adapt in response to change, whether it is market conditions, unexpected competition, or simply intuition. By comparison, the concept of “release” (v.1.0, 1.1, etc.) is totally alien to newspaper culture. There, because of layers of management and a fiefdom mentality, a committee is required to make the simplest change in a layout or to launch a new heading. Like any product, a newspaper needs constant adjustments. The ability to test and adjust is not a byproduct of Internet technology, it is a core feature.

In my view, the DIS is not an option; it’s not even “innovation”, as in something that’s nice to have, that you can get to on your own time. How many of today’s newspapers will survive by merely tweaking their ways, their culture? Will they march to the grave with their aging readership? They should look at how many Grey Panthers are using laptops now and weep. In other words, how many titles will get to the newspaper graveyard, leaving their readers to really new newspapers? –FF

Launchpad Chicken: MobileMe and Sync Trouble

by Jean-Louis Gassée

Simple is hard. Easy is harder. Invisible is hardest. So goes one of the many proverbs of our computer lore. As Apple found out last month with the MobileMe launch misfires, the lofty promise of “Exchange for the rest of us” translated into a user experience that was neither simple nor easy — in a highly visible way. Four weeks later, the service appears stable but doubts linger: is Apple able to run a worldwide wireless data synchronization service for tens of millions of users?

What happened and what does it mean for MobileMe’s future?

Let’s start by decoding the “Launchpad Chicken” phrase. The game of Chicken is one in which two young males test their virility in the following way: from opposite directions, two cars speed towards each other on the same lane of a country road. The one who steers away first obviously lacks cojones and is derisively called chicken. You might ask about brains versus testes, but here we are: the chicken is the one who “blinks first”. Now, let’s turn to the launchpad. Picture the NASA control room before the launch of an expedition to the Moon. Hundreds of (mostly) men in white short-sleeved shirts, pocket protectors and eyeglasses, hunched before screens, keyboards and telephones. Each one monitors a subsystem: left liquid hydrogen tank, backup gyroscopes, main engine telemetry… In the huge air-conditioned control room, five of these men are sweating; something’s not quite right with their baby. The temperature keeps rising, the pressure is falling, the telemetry link is weakening. Almost but not quite in the red zone. If the parameters keep drifting like this, they’ll have to pick up the red phone. But who wants to be the one who aborts the launch? So, they sweat some more and hope someone else blinks first. There you have it: Launchpad Chicken.

Now, move the imagery to projects with complicated subsystems. You see how the NASA metaphor made its way to Silicon Valley. There is always hope some other engineer will raise a hand and spare me the embarrassment of admitting my part of the project could crash the launch. This is what happened to MobileMe, with a twist on the cojones, so to speak. No one had enough brains and guts to risk humiliation, to raise a hand and say: Chief, we’re not ready here, let’s stop everything. As a result, MobileMe badly crashed on launch. A couple of weeks later, we have a leak: an “internal” memo from Steve Jobs. The email states the retroactively obvious: the project should have been delayed or at least launched in stages. No less obviously, a new leader is appointed, Eddy Cue, who’ll continue to run the iTunes systems as well. Charitably, the deposed MobileMe boss is granted anonymity; he might have been misinformed by his charges, or he might not have asked the right questions at the right times. It doesn’t matter anymore.

But, you’ll ask, that doesn’t tell us what went wrong, which liquid hydrogen tank sprung a leak. This now gets us into two more topics: sync and size. Sync here means keeping information identical, consistent, over two or more devices. Less abstractly, for a simple example: I have a phone and a computer, and I want their address books to be identical or, at least, consistent. On simple cell phones, I use a cable (or a Bluetooth wireless connection) plus software to copy (parts of) my computer address book to the phone. But, wait a minute, I entered numbers on the phone that are not on my computer; I don’t want the copy from the computer to wipe out those new numbers. Trouble starts, as if connecting the cell phone to the computer and running the program wasn’t buggy enough. You want the software to compare the two address books, the phone’s and the laptop’s, and decide what to keep and what to change, on both devices. But what about homonyms, or different numbers for the same person’s home? The program, hopefully, raises those “exceptions” and lets a human arbitrate.
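A minimal sketch of that compare-and-arbitrate step, assuming each address book is just a name-to-number mapping (a real sync engine also tracks timestamps and deletions, which this toy ignores):

```python
# Toy two-way address-book merge: keep the uncontested entries from both
# sides, flag disagreements as "exceptions" for a human to arbitrate.
def merge_books(phone, computer):
    merged, conflicts = {}, []
    for name in phone.keys() | computer.keys():
        a, b = phone.get(name), computer.get(name)
        if a is not None and b is not None and a != b:
            conflicts.append((name, a, b))   # same person, different numbers
        else:
            merged[name] = a if a is not None else b
    return merged, conflicts

phone    = {"Dan": "555-0101", "Eddy": "555-0202"}
computer = {"Dan": "555-0199", "Rupert": "555-0303"}
merged, conflicts = merge_books(phone, computer)
print(merged)     # the uncontested entries, from both devices
print(conflicts)  # [('Dan', '555-0101', '555-0199')] -- needs arbitration
```

Even this toy shows why sync is hard: the moment both sides can change, somebody, human or machine, has to decide who wins.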

We’re just warming up. Now picture a more real-life situation. One traveling consultant with one laptop, one smartphone, both carrying mail, address books and calendars and one assistant in the office with a desktop computer. In Microsoft Exchange’s lingo, the assistant is a “delegate”, has access, including modifications and new entries, to the traveling consultant’s data. Everything must be kept identical, consistent, in sync. How is this done?

Using Exchange as an example, the server keeps the “true” data. The “clients”, meaning the smartphone, the laptop, the assistant’s PC, submit changes, new mail, an updated appointment, a new contact home phone, to the Exchange server. In turn, the server propagates changes to the clients. We say the updates are “pushed” to the smartphone or the laptop, just as they “push” new mail or a new calendar item to the server. You can easily imagine conflict situations: the same appointment changed by the consultant and the assistant, address updates and the like. By now, at least on Exchange, these “exceptions” are well understood and generally well-handled. But it took years of practice. Just as it has taken years for RIM (founded in 1984), creators of the Blackberry (launched in 1999), to polish what is the best-selling synchronized smartphone. Details, details, and more subtle mistakes and special cases found and fixed. The Blackberry got its stardom from truly delivering the Simple, Easy, Invisible proposition referred to at the beginning of this essay.
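In sketch form, that server-as-truth loop boils down to: clients submit changes, the server applies them to the master copy, then pushes the new state to every client. A bare-bones illustration, with class and method names of my own invention and the hard part, conflict arbitration, left out:

```python
# Bare-bones "the server holds the truth" sync, in the spirit of the
# Exchange model described above. Names are mine, not Exchange's, and
# conflict handling (the hard part) is deliberately omitted.
class SyncServer:
    def __init__(self):
        self.truth = {}      # the master copy of the data
        self.clients = []    # registered client replicas

    def register(self, client):
        self.clients.append(client)
        client.replica = dict(self.truth)   # initial full sync

    def submit(self, key, value):
        self.truth[key] = value             # update the master copy...
        for c in self.clients:
            c.replica[key] = value          # ...then "push" it to every client

class Client:
    def __init__(self, name):
        self.name, self.replica = name, {}

server = SyncServer()
laptop, smartphone, assistant_pc = Client("laptop"), Client("phone"), Client("assistant")
for c in (laptop, smartphone, assistant_pc):
    server.register(c)

# The delegate reschedules a meeting; every device sees the change.
server.submit("meeting", "moved to 3pm")
print(laptop.replica["meeting"], smartphone.replica["meeting"])
```

The years of polish RIM and Microsoft put in went precisely into everything this sketch skips: offline edits, simultaneous changes, deletions, retries.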

MobileMe aspires to deliver a similarly invisible level of synchronization for people who don’t have an Exchange server, hence the “Exchange for the rest of us” slogan. But seeing the launch glitches, I wonder how many people at Apple stooped to using a Blackberry with an Exchange account. Doing this would have sobered them a little in advance of the launch, or delayed the whole thing, or tempered the boasts. Shortly after MobileMe’s first missteps, Apple publicly and smartly retracted its use of “Push” to describe MobileMe’s synchronization, and the “Exchange for the rest of us” motto is no longer seen on the company’s Web site.

Moving to size: quantity begets nature. At some (often mysterious) point, more of the same becomes something different. One server, ten servers: more of the same. One thousand servers or, in Google’s case, one million servers is of a different nature, meaning different people with different knowledge and appetites than the ones needed to run a company’s email server. If every other iPhone customer wants to sync a PC or Mac with a newly purchased iPhone (or an older one, with the 2.0 software update), MobileMe will soon serve millions and, in a not too distant future, tens of millions of iPhones. Besides knowing or not knowing the Buddha of sync, did the MobileMe team have the experience, the knowledge, the appreciation of the “size” problem before them? Very few people in our industry do. Ask Google’s rivals why they were trounced by someone coming late to the game but with a better handle on the “size” or “scale” problem. (See this paper from UC Berkeley, where ultra-large scale computing is actively researched, with private industry subsidies.)
In passing, 10 million MobileMe subscriptions at $100/year is a nice piece of change, one billion dollars a year, worth the trouble.

Let’s step back a little. Apple “pushes” somewhere between 100 and 200 megabytes of updates per month to each Mac user. Last week, the iPhone 2.0.1 update was announced; I connected two iPhones and, within minutes, the 200Mb files were downloaded and installed without a hitch, and I haven’t heard any blogosphere complaints on the matter. iTunes has sold billions of songs and serves tens of millions of customers every day, and everything works with very few exceptions. In other words, some very large scale Apple systems do work. As discussed above, the iTunes boss (some say slave driver, a meliorative term in context) is now also in charge of MobileMe.

And, last week, parts of the Gmail service were down for 15 hours or so. Last month, Amazon’s respected Web Services went down. And, last year, RIM’s servers went down for about half a day in the Western Hemisphere, freaking out Wall Street investment bankers and management consultants. Even the best players must endure their share of false notes.

Back to MobileMe today: if you ask subscribers who’ve never experienced a Blackberry’s smooth delivery of sync, they love MobileMe. It works, it’s easy to set up and, in the simple (most frequent) case of a PC/Mac with an iPhone, it does the wireless (OTA, Over The Air) sync job as now advertised. We’ll see how this scales once iPhones are sold in 21 more countries, 43 total, starting August 22nd.


The slow drift of globalization: Watching the Baltic Dry Index

Durable high oil prices might kill the main lubricant of globalization. And trigger decisive innovations in the car industry.

Why worry about the Baltic Dry Index (BDI) in a column usually devoted to media and technology? Two reasons. One, the index is an advance measurement of the state of worldwide trade and of its growth. Two, we sense tiny drifts in the tectonic plates of globalization, and recent movement in the BDI gives interesting clues on why and where the plates are moving. (Bonus in this week’s box: we forget newsmedia for a moment; its fate looks bleaker than ever.)

The BDI has nothing to do with the spot value of Baltic herring.
It derives its name from the Virginia & Baltick coffee house in London, back in 1744. Still traded in the City, it tracks the cost of shipping raw goods across the planet. In theory, it is an economic indicator in its purest form, deprived of any speculative distortion. It is a precursor in the sense that it reflects the movement of major commodities (iron ore, grain, steel), calculated on a day-to-day basis by monitoring costs on 24 major shipping routes. Exactly three years ago, in August 2005, the index was at 1700. It reached its all-time high on May 15th, 2008, at 11,465. A 574% increase. Since then the Baltic Dry Index has lost 38% to around 7200. What’s going on?
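For the record, the arithmetic on those rounded index values:

```python
# Checking the BDI moves quoted above, using the rounded index values.
low, peak, now = 1700, 11465, 7200
rise = (peak - low) / low * 100    # increase from Aug 2005 to the May 2008 peak
drop = (peak - now) / peak * 100   # pullback from the peak to "around 7200"
print(round(rise), round(drop))    # ~574% up, then ~37% (the quoted 38%) down
```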

Two consecutive events are visible here.
The first one is the combined explosion of commodities and oil prices. This boom multiplied by a factor of six the cost of sending a ton of steel to China. The second event is the consequence of these soaring costs: worldwide stagflation, lackluster growth combined with inflationary pressure. The rise of the BDI also impacts the cost of shipping finished goods back to Europe or America. In the last two years, depending on time and route, the cost of chartering a container vessel has doubled or even tripled. Consequently, shippers are about to do what airlines do: one, pass the increase on to the customer and, two, trim capacity to preserve (or restore) margins — a dual inflation boost at the end of the chain. Maersk, the Copenhagen-based company and the world’s largest container line, is undertaking the sharpest cost reduction in its 103-year history, cutting 12% of its workforce. Traffic departing from the Chinese ports of Shenzhen or Shanghai is already slowing down a little. This signals a coming drop in demand from importing countries.

The era of cheap oil is gone for good. Gone are the goofy projects such as superfast container-carriers able to halve the time of sending boxes of electronic goods from China to the US or Europe (now ships are reducing speeds by 20%). Others will cry over Maersk’s bad timing: two years ago, the company launched the biggest container ship ever built, the 452m-long Emma Maersk, able to carry 13,000 “boxes”.

The main lubricant of globalization was the negligible cost of shipping. It’s gone! Now, economists see a coming move: the Neighborhood Effect, putting factories closer to consumers and component suppliers (see the story in the NY Times). The change won’t happen overnight — it takes years to set up a production and logistics chain — but the trend is there. In a few years, we might see electronics manufacturers emerge in Eastern Europe or Central America. After all, the first (2001) version of Microsoft’s Xbox was manufactured by Flextronics in Mexico.

Besides altering the supply chain’s geography, durably high oil prices will trigger innovation. (In that respect, the green frenzy will help.) Again, it will take time. The auto industry is notoriously slow to (actually versus verbally) embrace true innovation on a grand scale. Ten years after Toyota’s first Prius, sales of hybrid cars in the US market are not expected to reach the million mark before 2012 (compared to about 15 million new cars to be sold in the US in 2008). And the truly electric car remains stuck at the prototype stage because battery or fuel-cell technology isn’t really there. Not yet, but soon, say techies and investors…

Amazingly enough, these two technologies, the hybrid and the electric motor, are old inventions. The first actually goes back to 1901, when Ferdinand Porsche envisioned a dual propulsion system. A more elaborate HEV (Hybrid Electric Vehicle) concept was developed in the 1970s by a scientist named Victor Wouk. Even Audi tried it. In the end, the car that became the Prius was conceived in 1993, when Eiji Toyoda, Toyota’s chairman and the patriarch of its ruling family, expressed concern about the future of the automobile (read this excellent story about the birth of the Prius in Fortune). And as the owner of one, I can safely say that, great as it is, the application is still in version 1.0.

As far as the fully electric car is concerned, the idea is roughly as old as the automobile itself. Just one example: recently, Google made a great deal of its initiative toward the plug-in hybrid (a Prius with additional batteries recharged from the electric grid, not the vehicle’s kinetic energy, yielding an intergalactic range of 60 km on a single charge). But the very idea of a public vehicle recharger goes back to 1899; it was invented by General Electric and was called the Electrant. A question you might ask: why were these inventions not adopted by the car industry? Well, technical hurdles are part of the reason. (See the headaches of the expensive new Tesla, which holds 6,800 laptop-size battery cells for a 450 km range.) But the main explanation lies in the conspiracy engineered by the Big Three US automakers together with Phillips Petroleum, Standard Oil of California and Firestone Tires. It is long forgotten but, back in the 20′s and 30′s, the United States urban landscape was dominated by public transportation systems. Thanks to the unholy alliance acting under the name of National City Lines, electric tramways and trolleys were methodically removed and burned, paving the way to the individual automobile era (read the insightful book by Edwin Black, “Internal Combustion: How Corporations and Governments Addicted the World to Oil and Derailed the Alternatives”, or watch the documentary “Who Killed the Electric Car?” here on YouTube).

A sad, ironic form of justice: the Big Three are now on life support. After decades of mocking the Californian crazies and their Japanese tin cans (read: Honda Civics), they’ve largely lost the initiative. Now, the ball could be back in Silicon Valley, as technology will play a decisive role in the making of the new automobile. The region possesses the three components needed for the big automotive turnaround mandated by durably high oil prices: intellectual creativity (for instance, the ability to design the complex software-managed systems required for the next generations of hybrid cars), capital (from companies such as Google, plus the ability to raise big amounts of money) and the innovation DNA. San Jose as the new Detroit (without the factories)? Possibly yes. –FF

iPhone 3G — One Week Later

Contrary to what I expected, the dust hasn’t settled yet. A week later, people still queue: two and a half hours Friday morning before being admitted to the sanctum sanctorum in San Francisco. Besides the long lines, there were glitches: activation problems, trouble with the new MobileMe service, trouble getting access to software updates for the “old” iPhones. Apple claims 1 million phones sold worldwide for the first weekend, probably 400,000 in the US alone. The latter number could explain the activation server overload: in more normal times, AT&T must activate “only” 25,000 phones a day. Apple apologized for the MobileMe problems and even conceded it should suspend some of the verbiage used to promote the service. Calling “Push” the way email and other information is coordinated between computers and the iPhone was found a little “anticipatory”, meaning promises made couldn’t yet be fulfilled. [“Push” means your phone or your computer receives information without asking for it, without “pulling”. The BlackBerry is still the king of Push.]

But this is mostly folklore, fun but transitory. Something more important is taking place: the advent of the App Store. On iTunes, the App Store is a section where you find new applications for the iPhone. On the iPhone, the App Store is an icon that enables the one-click purchase and wireless download of new applications, just like a song and often costing the same, 99 cents, or less. In about the same time it took Apple to sell 1 million phones, users (this includes updated first-generation iPhones) downloaded 10 million applications. Half of these were free. Of the paid-for ones, about half were games; the rest range from software for general-aviation pilots, medical students and bloggers, to light sabers (yes, you read that right), translation with voicing of phrases (nice when you go to China), subway maps, newsreaders, CRM, social networking, instant messaging and music streaming. Apple joined in with a nice, free flourish: a program that transforms your iPhone into a remote control for iTunes or Apple TV and works anywhere in the house through your WiFi network. And on and on… I was going to forget the Chanel Haute Couture Show. Free. Highest Karl Lagerfeld quality. How did this get in? Let me guess: friends in a common advertising agency? Is this one of the new business models discussed below?

When the App Store opened a week ago, the catalog featured 27 pages; we’re now at 42. It’s fair to say some applications are silly, useless or unstable. The user review system in the App Store is merciless and deals harshly with stupidity, bad code or a dysfunctional UI (User Interface). There is also an automatic update mechanism, and applications such as Facebook have already been improved. The bad ones will die quickly.

The BFD, as in Big Fundable (or other F words) Deal here is the Great American Instant Gratification. The mental transaction cost of getting an application is very low: lots of choices, a small price, a one-click transaction. This is the magic of using the existing iTunes infrastructure and existing customer behavior. I can’t help but wonder when Apple (or its competitors) will also use the model for desktop applications, Cloud Computing notwithstanding. I buy iTunes music for my personal computer; why not buy applications for my Mac or my PC from the same store?

Wait, as we say in America, there is more: business models. We’re beginning to see ads on the iPhone, with photos, music or the New York Times. We VCs will be watching carefully as we wonder whether advertising on such small screens will work, will generate real money. Another form of advertising looks more promising: free music channels on the Pandora application. You first set up “channels” on Pandora.com from your PC, say Mozart, Bach, Miles Davis and Dave Brubeck. On your iPhone, you click Miles Davis and you get either Miles Davis works or music deemed to belong to the same genre, with a nice note explaining why the piece was put on this channel. And, if you like it, one click buys it from iTunes. Clever, and clever a second time because it isn’t convoluted.

Lastly, content presented as, wrapped in, applications. For 99 cents you buy and load an application called The Art of War. You’ve recognized Sun Tzu’s book. But, instead of having a separate book reader and content purchased for it, with the risk of “unwanted duplication”, content and reader are now bundled as one application for each book. When I pitch my next book to the publisher, I’ll make sure to mention the 45 million iPhones to be sold next year. This number is an admittedly wildly optimistic (and widely criticized) forecast by Gene Munster of Piper Jaffray. Unless RIM (BlackBerry), Nokia and Google fight back, which is very likely: they don’t like Steve Jobs wiping his Birkenstocks on their backs. —JLG

Outsourcing’s next wave: media

Ever heard of companies like Mindworks Global Media, Express KCS, or Affinity Express? Well, in due course, millions of English-speaking newspaper readers will. For now, this concerns “only” readers of newspapers such as the San Jose Mercury News, The Miami Herald, or the Orange County Register, to name just a few. In these newspapers, significant editorial jobs, tasks that once belonged to US newsrooms, are now outsourced to a cluster of companies in India.

This is the next effect of globalization: off-shored editorial jobs. In highly specialized sweatshops on the outskirts of Delhi or Mumbai, hundreds of workers line up on night shifts — the Indian time zone. Journalists are no longer only reporting or analyzing job migration to cheaper Asia; they are now about to experience it. Take the Orange County Register, for instance. A typical big regional American newspaper: strong power in its own territory (the greater south of Los Angeles), several Pulitzer Prizes, and recently about a hundred layoffs in its newsroom. Last month, it became the latest to offshore not only secondary jobs such as laying out ads, but also core competencies such as copyediting. It relies on Mindworks Global Media, a two-year-old company headquartered in Noida, 15 miles from New Delhi, where ninety qualified Indians perform the task (see story in Business Week). The OC Register is the most advanced example of outsourcing jobs in the print media. Other newspapers such as the San Jose Mercury News (Silicon Valley’s daily) or the Oakland Tribune are testing the waters: they assign advertising layouts to Express KCS, a two-hundred-person startup based in Gurgaon, India, also close to New Delhi. Express KCS provides a wide range of print-related services, ranging from pre-press to magazine production and ad design work (story in the Columbia Journalism Review).

Three factors accelerate the trend. The first is the newspaper sector’s global crisis. English-speaking papers are motivated to outsource to India (or Thailand, which is eyeing the pie) as much work as they reasonably can: most of the day-to-day layout jobs will soon be gone, as well as a growing number of sub-editing and copy-proofing positions. When the cost ratio is 2:1 or even 3:1, as it is between the US (or UK) and India, the incentive is impossible to resist.

Increasing skills in India and other Asian countries are the second factor. This is the payoff of global knowledge and education. The level of local universities is rising fast, as is cooperation with Western universities. And many former students of American universities are returning to their homeland, becoming clever entrepreneurs who eventually suction up jobs from abroad. Judging by the number of editorial jobs posted on MonsterIndia, this is a heavy trend.

The third factor is the cost of telecommunications, now asymptotically approaching zero. [Don’t tell YouTube’s parent, Google…] Speaking with, or sending a page layout to, a sub-editor costs the same whether the individual is on another floor of the building or in Mumbai.

This explains the sequence of events we have witnessed in recent years. Intellectual (non-engineering) off-shoring has evolved from human-based data-mining, to number-crunching, to basic design, and now to news content. Reuters (now Thomson Reuters) was the first to jump in, in 2004, when its financial service opened an office in Bangalore with 340 people. They were writing about the quarterly results of Western companies and compiling analyst research. Since then, this facility has grown to approximately 1,600 jobs, with 100 journalists working on U.S. stories.

Non-English papers will be shielded from this transfer. In theory, this will save jobs in Germany, France, Spain or Scandinavia. But in more realistic terms, events might take a different course: unloading selected non-core jobs will help US and UK newspapers respond more swiftly to the market’s downsizing and, we hope, do better than just survive. —FF

The Next Googlitzer Prizes

Let me build on my boss Frédéric Filloux’s point about bloggers. And, to do this, let me start with a quick linguistics lemma about California-speak.

In France, when two engineers review a project, the first one energetically “offers” (that’s an example of California-speak), hammers his views thusly: The only way to solve the problem is… And he expresses an opinion couched in Truth terms. The other techie retorts: You’re an idiot, this is brain-dead, the only way to solve the problem is… And another opinion follows, no less forceful. They’re just bantering, nothing personal and, soon, they get into the collaboration part of the review, give and take, get to a resolution and leave the meeting happy with themselves, the other person and the to-do list.

I tried this in Cupertino, when given charge of Apple’s engineers in 1985. They smiled politely: Thank you for sharing. But I sensed a transparent steel curtain descending between us and no actual communication took place after what I thought was just a manly opening. I knew that hypocrisy is the lubricant of social intercourse, I just forgot that it applied to conversations with techies. I had to learn to speak Californian: a set of euphemisms, mannerisms designed to equivocate and, as a result, to avoid giving offence. This is great, fantastic, I like what you do… All mean nothing, just filler speech designed to move the conversation forward without taking risk. Thank you for sharing means “I hate what you just said, asshole!” This is, as you well know, the land of neologism. Add the politics of large organizations and you get “grinfu–ing”, screwing someone with a big smile. Don’t say But, say And…

Back to the opening salvo above, in California-speak, Let me build on that point is what the French engineer must say to his California colleague in order to be heard. Actually, a gentler view of the deflection is that it encourages collaboration, let me use what you just said as a foundation, rather than excite confrontation.

With this in mind, allow me to register mild disagreement with Frédéric’s view of bloggers. I won’t fall for the easy characterization: the professional journalist versus the interlopers. I don’t write a blog, for reasons I don’t fully understand, but I read lots of them. Naively, I bought several newsreader applications and found out that the free Google Reader did the job very nicely. I can subscribe to and unsubscribe from hundreds of blogs, ranging from the sublime to the sordid. (Try “Quantum Physics” and “Zoophilia” in the Reader’s search engine for blogs.) You can even “share” (that word again) items and stories with friends, or export your entire set of subscriptions and give it to a friend or family member as a way to let them see the blogosphere through your eyes.

I agree with FF: the bad news abounds. There is a lot of garbage, nonsense, paid-for people and content parading as impartial views, bloggers echoing each other to the point where ten blogs spreading the same story could trick one into thinking: This must be true, there are ten sources for that story. No, it’s one unsubstantiated rumor repeated ten times over. We’re told there are 17 million blogs and growing; this is a gigantic garbage heap even Wall-E can’t mine for the gems. [I just saw the movie and can’t comprehend the quasi-universal praise.]

All true, and yet enough cream manages to ascend to the surface to make blogs and bloggers an alternative to the conventional newspaper. Experts and perverts of every stripe, yes, and when I’m burned a couple of times, the subscription dies. Speaking of subscriptions dying, I wonder how long I’ll keep longing for the noise of the newspaper landing on my doorstep in the wee hours. Between blogs and newspaper Web sites, when I open the paper in the morning, I often feel I saw the news the night before. If I want a knowledgeable discussion of the Microhoo saga, there are two or three bloggers, starting with the almost eponymous Blodget, Peter Kafka (I’m not making this up) and Michael Arrington, who’ll give me better/faster food for thought than the Wall Street Journal or the Grey Lady’s Joe Nocera.

Speaking of existing newspapers: for all their wrapping themselves in the mantle of professionalism, how often are they guilty of the sins of cronyism and of rewriting stories seen elsewhere, when not making them up altogether? Numerous New York Times accidents come to mind: Judith Miller’s “coverage” of the Iraq War build-up, Jayson Blair’s fabrications, the scurrilous John McCain sex story and too many more.

Back to the excess(es) argument: there is no good culture without bad taste, without people “going too far”. How do we innovate without breaking things, making mistakes, giving people legitimate reasons to be upset? Yes, legitimate reasons to be upset, but missing the larger point. There are plenty of good reasons to take a dim view of technology; it does facilitate the expression of our lowest instincts. And yet the Internet is a true revolution for freedom of expression. New genres are emerging and will continue to do so as bandwidth increases change the gamut (and location, think mobility) of available media. As the eternal optimist, I welcome the excesses of bloggers; they’re stimulating, helpful, irritating and fun. And, some day not far in the future, we’ll crown a few of them with something like a Googlitzer Prize. Who knows, a few of today’s journalists might be among them. —JLG

Fewer And Newer: Journalism Jobs

Sorry for the winners/whiners of the Oscars of pessimism: journalism will remain as interesting as it used to be. OK, granted: most of the job’s mystique is gone for good; football-field-sized newsrooms and charismatic, seasoned, suspender-wearing editors belong to the past. So do glossy, reportage-loaded magazines. Many bad things are happening to journalism, including a rise in the outsourcing of core competencies such as editing (see this story in the Hindustan Times). But reports of journalism’s death are vastly premature. Actually, the big media shift we are experiencing will provide many opportunities — as long as (and yes, this is quite a proviso) the current professionals adapt quickly and the upcoming generation of news people gets proper training.

First things first: there will always be strong demand for good journalism. Bloggers are splendid; they benefit society and journalists as well. Thanks to the blogosphere, we have seen our congenital modesty suitably cut down. But, structurally, bloggers suffer from the inversion of the ten-to-one rule: to produce a good journalistic story, you must gather roughly ten times the amount of information you’ll eventually use. It is see-through American coffee morphing into thick, dark Italian espresso. Too often, the blog world works the other way around: tiny facts — borrowed from other sources — diluted into bloated, unedited chatter. Some bloggers are so talented or so specialized that their verbose drivel becomes a must-read. Those are resetting the notion of “most trusted brand”, which was the motto of American TV networks, circa 1970. They are also offering what many journalists can no longer do: focus, obsessive specialization, academic knowledge, etc. But how much do they weigh in the ambient noise? 1%? 10%? (Either way, that is a lot in absolute terms.) At the other end of the table, journalism is — or is supposed to be — about skills in fact-gathering; it is about explaining, contextualizing, editing and sometimes analyzing and commenting. It is not molecular genomics (I prefer this metaphor to the “rocket science” one, sorry), but it is a genuine trade that isn’t learned overnight. That craft won’t disappear. It will shrink for sure, but the demand for great storytelling remains: the New Yorker magazine enjoys — and rejoices in — several million readers, after all. But, above all, journalism will mutate.

New genres will emerge. They will encompass the whole spectrum of journalism on a multiplicity of platforms: multiple layers of text, photo, video, animated graphics. And please don’t tell me it is not noble journalism: click on any multimedia item in the New York Times or Slate Magazine or, even better, go to the Washington Post-sponsored site Mediastorm to forgo any lingering doubt. I personally don’t know of any member of my professional gang (French journalists in their fifties) who is not looking at the digital tools we enjoy today with a mixture of nostalgia and eagerness. How would these tools have fueled our journalistic passion when reporting from Jerusalem, Moscow or New York?

The most important question is: are we preparing the next batch of journalists to handle such versatility? The answer tends to be no. We can’t blame them, but most of them want to be writers in the most romantic sense. For many, learning the digital trade is more a kind of “passage obligé” than an end in itself. Surprisingly, even their use of the Internet is rather shallow. They visit news sites to avoid going to the newsstand, they download profusely, but few of them blog or go inside the bowels of the beast to satisfy their curiosity. A partial explanation is that most of their teachers belong to a digital-averse generation. It will be some time before young journalists grab the tools at their disposal (they’d better hurry, because bloggers will). A new kind of journalistic storytelling has yet to be invented. And it will be as compelling as the old one.

The digital era is an opportunity for journalists to regain a great deal of power in the management of news organizations. Let me explain. Twenty years ago, the CEO of Dow Jones said this about the Wall Street Journal’s then managing editor, Norman Pearlstine: “We gave Norm an unlimited budget and he exceeded it!” That’s a pithy quote, indeed. In retrospect, I can’t help but resent a bit the man who caused such a remark. Pearlstine was not an isolated free spender. He was part of a widespread species that dominated newsrooms in those times when newspaper readers were in great abundance. Unfortunately, such carelessness gave credence to the idea that journalists are the antithesis of managers as far as business is concerned. It cleared the way for a transfer of management to a business elite that doesn’t have a clue — and doesn’t want one — about what journalism is.

Consequently, news organizations have been taken over by financial people, who span the full spectrum. The worst are former comptrollers who patiently climbed the corporate ladder thanks to successful restructurings (or brown-nosing). The best are strategists, MBAs with the deal-making plug-in added to their embedded software. (To my surprise — at least in my country — boards tend to prefer the former, who are more docile than the ambitious, visionary kind.) Newspaper organizations are built on silos (the newsroom versus the marketing/advertising, logistics, technical or administrative crowd), with management carefully maintaining hostility between fiefs on the divide-and-rule principle. With the news media in turmoil, this outdated managing setup must be revisited, by will or by the force of reality. To do that, the producers — i.e. the news people — must extend their reach. Evidently, some jobs are up for grabs. Editorial marketing, for instance. Today, many media CEOs brag about hiring a young Turk from Procter & Gamble as their marketing chief. It usually doesn’t fly very far. The same goes for media as for the high-tech sector: Meg Whitman’s tenure at eBay was not mind-blowing (she was formerly a marketing manager at the toymaker Hasbro), but when someone from the trade jumps into marketing, it really works; cf. Steve Jobs at Apple or Eric Schmidt at Google.

Would journalists be good at editorial marketing? Of course they would. After all, it’s all about product design, audience expectations, and strategy and tactics to better address a moving and demanding target. Are they ready to grab the challenge? No. Not a shred. Nor are they ready to deal with IT-powered journalism such as data-mining (a powerful tool, though). They are not up to managing the technical dimension of the Internet that is borderline editorial, such as website structuring: how do we assemble all the components of a site to make the most coherent news product, referencing, search and so on? Search engine optimization, for instance — a critical alchemy on which 30% to 50% of a site’s audience depends — is currently done by in-house or external experts, half techies, half marketeers, even though it is an obvious editorial question.

The challenge for journalism schools and universities is to integrate the full scope of what is at stake here, of what we just reviewed. Then they must convince idealistic students that the digital arena is their main professional domain and that technical and business skills are as important as good writing — that is, if they don’t want to feel exploited by bean counters, MBAs and graduates of the Procter & Gamble University… — FF

Technology / Multicore Processors: More is Better, Right?

Lies, damned lies and benchmarks. So goes an old industry joke setting up an ascending order of offenses to the truth. An old joke, but alive and well in the latest industry trend: the recourse to multicore processors in our PCs.

Here, multicore means several processor modules (cores) on the same CPU (Central Processing Unit) chip, as opposed to multiprocessors, several separate chips inside the same computer. This means more computing power inside our computers; this must be good.
Not so fast. Yes, more raw power, but do we know how much extra performance percolates up to the surface of our user experience? Not as much as we’re led to believe.

Why this sudden conversion to multicores? The simple answer is that Moore’s Law stopped working the way it did for almost 40 years. Moore’s Law used to predict a doubling every 18 months of the price/performance ratio of silicon chips. As expected, in about twenty years, we went from 1 MHz (the frequency at which the CPU processes instructions) for the Apple II to 3 GHz (3,000 times faster) Intel chips — for about the same price. But, in the last few years, something happened: the clock frequency of top-of-the-line chips got stuck around 3 GHz. This didn’t happen because silicon technology stopped improving; we now speak of silicon building blocks as small as 35 nanometers (billionths of a meter), or even smaller in pre-production labs. A few years ago, we were happy with 120 nm or larger. So, the surface of things looks good: we still know how to cram more and more logic elements onto a chip. But we have trouble making them run faster. Why?
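As a sanity check on the arithmetic above, here is a back-of-the-envelope sketch. It assumes the 18-month doubling period (one common reading of Moore's Law); the numbers are illustrative, not chip data:

```python
# Sketch: what an 18-month doubling cadence predicts over twenty years,
# versus the observed 1 MHz -> 3 GHz jump in clock frequency.
years = 20
doublings = years * 12 / 18        # about 13.3 doublings
predicted = 2 ** doublings         # roughly a 10,000x improvement

observed = 3e9 / 1e6               # 3 GHz / 1 MHz = 3,000x in clock speed

print(f"predicted ~{predicted:,.0f}x, observed ~{observed:,.0f}x")
# Same order of magnitude: clock frequency alone undersells the gain,
# since price/performance also improved through wider, smarter chips.
```

The two figures agree to within a small factor, which is about as precise as Moore's Law ever claimed to be.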

Here, basic physics comes in. Let’s say I want to move a one-gram mass up and down once; this requires a small amount of energy, say one Joule. If I repeat this once per second, we have one Joule per second, known as one Watt. Moving to 1,000 times a second, we’re now dealing with a kilowatt. If the frequency moves to 1 GHz, one billion times per second, we need a gigawatt. Going back to chips, they move electrons back and forth as the processor clock ticks. You see where I’m going: the electric power consumed by a chip climbs with the clock frequency. At the same time, the basic silicon elements kept shrinking. More and more electric power in smaller and smaller devices. One Intel scientist joked, seriously, that processors could become as hot as the inside of a nuclear reactor.
Back to our machines: we have desktop processors that dissipate as much as 150 Watts and require a liquid cooling element right on top of the chip. And we all complain our laptops are too hot for our… laps.
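The up-and-down-mass analogy maps onto the textbook formula for dynamic power in CMOS chips, P ≈ C·V²·f: power grows linearly with clock frequency (and quadratically with supply voltage). A minimal sketch, using made-up capacitance and voltage values rather than any real chip's figures:

```python
# Dynamic power in a CMOS chip grows linearly with clock frequency:
#   P ≈ C * V^2 * f   (C: switched capacitance, V: supply voltage, f: clock)
# The values below are purely illustrative, not taken from any datasheet.
def dynamic_power(c_farads, v_volts, f_hertz):
    return c_farads * v_volts ** 2 * f_hertz

C = 1e-9   # 1 nF of switched capacitance (hypothetical)
V = 1.2    # 1.2 V supply (hypothetical)

p_1ghz = dynamic_power(C, V, 1e9)
p_3ghz = dynamic_power(C, V, 3e9)   # tripling f triples the power
print(p_1ghz, p_3ghz)
```

Hence the industry's trick of lowering the voltage with each process generation: the V² term buys back some of what higher clocks cost.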

But now, imagine the computer industry calmly folding its arms and telling us: That’s all folks, this is as good as it gets. This after decades of more/faster/cheaper? No. That’s why our Valley is now peddling multicores. We can’t have faster processors (this is mostly left unsaid), so let’s have more of them. And look at the benchmarks: more power than ever. This is where the question of performance delivered to the user versus raw power comes in.

First, 1+1 doesn’t equal 2. Simply because the two processors sometimes have to contend for a single resource such as memory. One processor must wait for the other to finish before proceeding. More cores, more such losses.

Second, and much more serious, most of today’s software has been written with a single processor in mind. There is no easy mechanism, either in the processors themselves, or in the operating system, or in the program itself, to split code modules off and direct them to one processor or another. The situation is getting better as operating systems learn, at least, to dispatch ancillary housekeeping functions to another core, leaving more computing power available to a program that only knows how to work on a single processor. And programs themselves are slowly but surely being updated to split off modules that work independently. Sometimes this requires much programmer intervention, read: time and money. In other cases, automated tools restructure some or most of the code. Still, today’s PC software is far from taking advantage of multicores. Hence benchmarks painting an unrealistic picture of multicore performance in the real world of application software.

And, third, there is yet another fly in the benchmark ointment. Some activities are inherently parallelizable: ten people will (statistically) find a single book on ten library shelves faster than a single person. Four people will definitely paint four walls faster than a lone painter (assuming no contention for a single paint bucket, see above). But other activities are inherently sequential: you must wait for the result of the previous operation before proceeding with the next. Think of spreadsheets, where a complex, real-world financial model cannot be computed in independent parts: each operation feeds the next until all the formulae have been computed and, in some cases, iterated. There are many such applications, weather simulation being one, because it relies on a type of equation that cannot be made to compute in parallel. As you can imagine, there is a whole body of computer science dedicated to parallelism. Let’s just say there is no real substitute for gigahertz, for faster chips. That’s one of the reasons why weather forecasting hasn’t made much progress recently.
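The parallel-versus-sequential distinction described above is usually quantified by Amdahl's law: if a fraction p of a program's work can run in parallel on n cores, the overall speedup is 1 / ((1 - p) + p / n), so the sequential remainder caps the gain no matter how many cores you add. A small sketch:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work is parallel.
# The serial fraction (1 - p) puts a hard ceiling of 1 / (1 - p) on speedup.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel program can never exceed 10x, regardless of core count:
for n in (2, 4, 8, 1_000_000):
    print(n, round(amdahl_speedup(0.9, n), 2))

# A fully sequential chain of spreadsheet formulae (p = 0) gains nothing:
print(amdahl_speedup(0.0, 8))  # 1.0
```

On two cores, a 90%-parallel program gets only about a 1.8x speedup, which is why "twice the cores" rarely means "twice as fast" on the desktop.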

Multicores are nice, they do add some performance, but they’re only a band-aid until we find a way to make faster chips. — JLG