Sorry for the winners (or whiners) of the Oscars of pessimism: journalism will remain as interesting as it used to be. OK, granted: most of the job’s mystique is gone for good. Football-field-sized newsrooms and charismatic, seasoned, suspenders-wearing editors belong to the past. So do glossy, reportage-loaded magazines. Many bad things are happening to journalism, including a rise in the outsourcing of core competences such as editing (see this story in the Hindustan Times). But reports of journalism’s death are vastly premature. Actually, the big media shift we are experiencing will provide many opportunities — as long as (and yes, this is quite a proviso) the current professionals adapt quickly and the upcoming generation of news people gets proper training.
First things first: there will always be strong demand for good journalism. Bloggers are splendid; they benefit society and journalists as well. Thanks to the blogosphere, we have seen our congenital modesty suitably cut down. But, structurally, bloggers suffer from an inversion of the ten-to-one rule: to get one good journalistic story, you must gather roughly ten times the amount of information you’ll eventually use. It is see-through American coffee morphing into thick, dark Italian espresso. Too often, the blog world works the other way around: tiny facts — borrowed from other sources — diluted into bloated, unedited chatter. Some bloggers are so talented or so specialized that their verbose drivel becomes a must-read. Those are resetting the notion of “most trusted brand”, the motto of American TV networks circa 1970. They are also offering what many journalists can no longer provide: focus, obsessive specialization, academic knowledge, etc. But how much do they weigh in the ambient noise? 1%? 10%? (Either way, a lot in absolute terms.) At the other end of the table, journalism is — or is supposed to be — about skills in fact-gathering; it is about explaining, contextualizing, editing and, sometimes, analyzing and commenting. It is not molecular genomics (I prefer this metaphor to the “rocket science” one, sorry) but it is a genuine trade that isn’t learned overnight. That craft won’t disappear. It will shrink for sure, but the demand for great storytelling remains: the New Yorker enjoys — and rejoices in — several million readers, after all. But, above all, journalism will mutate.
New genres will emerge. They will encompass the whole spectrum of journalism on a multiplicity of platforms: multiple layers of text, photo, video, animated graphics. And please don’t tell me it is not noble journalism: click on any multimedia item in the New York Times or Slate, or, even better, go to the Washington Post-sponsored site MediaStorm to shed any lingering doubt. I personally don’t know of any member of my professional gang (French journalists in their fifties) who isn’t looking at the digital tools we enjoy today with a mixture of nostalgia and eagerness. How would these tools have fueled our journalistic passion when reporting from Jerusalem, Moscow or New York?
The most important question is: are we preparing the next batch of journalists to handle such versatility? The answer tends to be no. We can’t blame them, but most of them want to be writers in the most romantic sense. For many, learning the digital trade is more a kind of “passage obligé” than an end in itself. Surprisingly, even their use of the Internet is rather shallow. They visit news sites to avoid going to the newsstand, they download profusely, but few of them blog or go inside the bowels of the beast to satisfy their curiosity. A partial explanation: most of their teachers belong to a digital-averse generation. It will be some time before young journalists grab the tools at their disposal (they had better hurry, because bloggers will). A new kind of journalistic storytelling has yet to be invented. And it will be as compelling as the old one.
The digital era is an opportunity for journalists to regain a great deal of power in the management of news organizations. Let me explain. Twenty years ago, the CEO of Dow Jones said this about the Wall Street Journal’s then managing editor, Norman Pearlstine: “We gave Norm an unlimited budget and he exceeded it!” A pithy quote, indeed. In retrospect, I can’t help but resent a bit the man who caused such a remark. Pearlstine was not an isolated free spender; he was part of a widespread species that dominated newsrooms in the days when newspaper readers were in great abundance. Unfortunately, such carelessness gave credence to the idea that journalists are the antithesis of managers as far as business is concerned. It cleared the way for a transfer of management to a business elite that doesn’t have a clue — and doesn’t want one — about what journalism is.
Consequently, news organizations have been taken over by financial people, spanning the full spectrum. The worst are former comptrollers who patiently climbed the corporate ladder thanks to successful restructurings (or brown-nosing). The best are strategists, MBAs with the deal-making plug-in added to their embedded software. (To my surprise — at least in my country — boards tend to prefer the former, who are more docile than the ambitious, visionary kind.) Newspaper organizations are built on silos (the newsroom versus the marketing/advertising, logistics, technical or administrative crowds), with management carefully maintaining hostility between fiefs on the divide-and-rule principle. With the news media in turmoil, this outdated managing setup must be revisited, by will or by the force of reality. To do that, the producers — i.e. the news people — must extend their reach. Evidently, some jobs are up for grabs. Editorial marketing, for instance. Today, many media CEOs brag about hiring a young Turk from Procter & Gamble as their marketing chief. It usually doesn’t fly very far. What goes for high tech goes for media: Meg Whitman’s tenure at eBay was not mind-blowing (she was a former marketing manager at the toymaker Hasbro), but when someone from the trade jumps into marketing, it really works: cf. Steve Jobs at Apple or Eric Schmidt at Google.
Would journalists be good at editorial marketing? Of course they would. After all, it’s all about product design, audience expectations, and the strategy and tactics needed to address a moving, demanding target. Are they ready to grab the challenge? No. Not a shred. Nor are they ready to deal with IT-powered journalism such as data mining (a powerful tool, though). They are not up to managing the technical dimension of the Internet that borders on the editorial, such as website structuring: how do we assemble all the components of a site — referencing, search and so on — into the most coherent news product? Search engine optimization, for instance — a critical alchemy on which 30% to 50% of a site’s audience depends — is currently handled by in-house or external experts, half techies, half marketers, even though it is an obvious editorial question.
The challenge for journalism schools and universities is to integrate the full scope of what is at stake here. Then they must convince idealistic students that the digital arena is their main professional domain and that technical and business skills are as important as good writing — that is, if they don’t want to feel exploited by bean counters, MBAs and graduates of the Procter & Gamble University… — FF
Lies, damned lies and benchmarks. So goes an old industry joke, setting up an ascending order of offenses against the truth. An old joke, but alive and well in the latest industry trend: the recourse to multicore processors in our PCs.
Here, multicore means several processor modules (cores) on the same CPU (Central Processing Unit) chip, as opposed to multiprocessing, several separate chips inside the same computer. More computing power inside our computers; this must be good.
Not so fast. Yes, more raw power but do we know how much extra performance percolates to the surface of our user experience? Not as much as we’re led to believe.
Why this sudden conversion to multicore? The simple answer: Moore’s Law stopped working the way it did for almost 40 years, when it predicted a doubling every 18 months of the price/performance ratio of silicon chips. As expected, in about twenty years we went from 1 MHz (the frequency at which the CPU processes instructions) for the Apple II to 3 GHz (3,000 times faster) Intel chips — for about the same price. But in the last few years something happened: the clock frequency of top-of-the-line chips got stuck around 3 GHz. This isn’t because silicon technology stopped improving; we now speak of silicon building blocks as small as 35 nanometers (billionths of a meter), or even smaller in pre-production labs. A few years ago we were happy with 120 nm or larger. So on the surface things look good: we still know how to cram more and more logic elements onto a chip. But we have trouble making them run faster. Why?
Here is where basic physics comes in. Let’s say I want to move a one-gram mass up and down once; this requires a small amount of energy, say one Joule. If I repeat this once per second, we have one Joule per second, known as one Watt. Move to 1,000 times a second and we are dealing with a kilowatt. If the frequency climbs to 1 GHz, one billion times per second, we need one Gigawatt. Back to chips: they move electrons back and forth as the processor clock ticks. You see where I’m going: the electric power consumed by a chip climbs with the clock frequency. At the same time, the basic silicon elements kept shrinking. More and more electric power in smaller and smaller devices. One Intel scientist only half-joked that processors could become as hot as the inside of a nuclear reactor.
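The arithmetic above can be sketched with the standard first-order model for dynamic CMOS power, P = C · V² · f. The capacitance and voltage figures in this little sketch are made-up illustrations, not measured chip data; the point is only the linear dependence of power on clock frequency f:

```python
# First-order dynamic power model for a CMOS chip: P = C * V^2 * f.
# C (switched capacitance) and V (supply voltage) are illustrative
# assumptions, not real chip figures.
def dynamic_power_watts(c_farads, v_volts, f_hz):
    return c_farads * v_volts**2 * f_hz

# Doubling the clock doubles the power budget of our toy chip.
p_3ghz = dynamic_power_watts(1e-9, 1.2, 3e9)
p_6ghz = dynamic_power_watts(1e-9, 1.2, 6e9)
assert p_6ghz == 2 * p_3ghz
```

In real chips it is even worse than this linear model suggests: a higher frequency usually requires a higher supply voltage, and power grows with the square of that voltage.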
Back to our machines, we have desktop processors that dissipate as much as 150 Watts and require a liquid cooling element right on top of the chip. And we all complain our laptops are too hot for our… laps.
But now, imagine the computer industry calmly folding its arms and telling us: That’s all folks, this is as good as it gets. This after decades of more/faster/cheaper? No. That’s why our Valley is now peddling multicores. We can’t have faster processors (this is mostly left unsaid), let’s have more of them. And look at the benchmarks, more power than ever. This is where the question of performance delivered to the user versus raw power comes in.
First, 1+1 doesn’t equal 2, simply because the two processors sometimes have to contend for a single resource such as memory: one processor must wait for the other to finish before proceeding. More cores, more such losses.
Second, and much more serious, most of today’s software has been written with a single processor in mind. There is no easy mechanism, whether in the processors themselves, in the operating system, or in the program itself, to split code modules off and direct them to one processor or another. The situation is getting better as operating systems learn, at least, to dispatch ancillary housekeeping functions to another core, leaving more computing power available to a program that only knows how to work on a single processor. And programs themselves are slowly but surely being updated to split off modules that can work independently. Sometimes this requires much programmer intervention, read time and money; in other cases, automated tools restructure some or most of the code. Still, today’s PC software is far from taking full advantage of multicores. Hence benchmarks painting an unrealistic picture of multicore performance in the real world of application software.
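To make the “splitting modules off” point concrete, here is a minimal sketch of my own (not any particular vendor’s tool) of what the restructuring looks like: the work must be explicitly cut into independent chunks before a pool of worker processes can spread it over several cores.

```python
# A program must be rewritten to expose independent chunks of work
# before extra cores help. Here a big summation is cut into ranges
# that worker processes can handle separately.
from multiprocessing import Pool

def partial_sum(bounds):
    # An independent chunk of work: sum the integers in [lo, hi).
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    n = 1_000_000
    chunks = [(i, i + 250_000) for i in range(0, n, 250_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # Same answer as the single-core version, but the OS can now
    # schedule the four chunks on four cores.
    assert total == sum(range(n))
```

The legacy single-processor version is the one-liner `sum(range(n))`; nothing in it tells the operating system where a second core could help, which is exactly the problem described above.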
And, third, there is yet another fly in the benchmark ointment. Some activities are inherently parallelizable: ten people will (statistically) find a single book on ten library shelves faster than one person. Four people will definitely paint four walls faster than a lone painter (assuming no contention for a single paint bucket, see above). But other activities are inherently sequential: you must wait for the result of the previous operation before proceeding with the next. Think of spreadsheets, where a complex, real-world financial model cannot be computed in independent parts: each operation feeds the next until all the formulae have been computed and, in some cases, iterated. There are many such applications, weather simulation being one, because it relies on a type of equation that cannot be made to compute in parallel. As you can imagine, there is a whole body of computer science dedicated to parallelism. Let’s just say there is no real substitute for Gigahertz, for faster chips. That’s one of the reasons why weather forecasting hasn’t made much progress recently.
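The sequential-versus-parallel tradeoff described above was formalized long ago as Amdahl’s law: if only a fraction p of a program can run in parallel, n cores can never speed it up by more than 1 / ((1 − p) + p / n). A tiny sketch, with illustrative fractions:

```python
# Amdahl's law: the sequential part of a program caps the speedup,
# no matter how many cores are thrown at the parallel part.
def amdahl_speedup(p, n):
    # p: fraction of the work that is parallelizable (0..1)
    # n: number of cores
    return 1.0 / ((1.0 - p) + p / n)

# A half-sequential spreadsheet recalculation barely benefits:
# with 4 cores the speedup is only 1.6x, and even infinitely many
# cores could never push it past 2x.
assert round(amdahl_speedup(0.5, 4), 2) == 1.6
```

That hard cap is the precise sense in which there is no substitute for faster clocks.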
Multicores are nice, they do add some performance, but they’re only a band-aid until we find a way to make faster chips. — JLG
The J-curve is an economics metaphor, a way of saying things will get worse before getting better. That’s the prospect for the global print media sector.
For the American press, advertising revenue keeps dropping at a steady yearly rate of 12% to 15%. No industry can withstand a sustained double-digit decrease in its core business. This is not erosion; it is a collapse. And since advertising represents 70% to 90% of the cash flow of US dailies, the sense of urgency is morphing into panic. Of course, some components of this decline, such as the credit-crisis shock wave, are specific to the American market. But the American market can be seen as a leading indicator for the industry. With this in mind, watching the reactions of two opposite cultures, the US and France, could be enlightening.
The US industry was slow to react at first, but the pace is now accelerating. In the first six months of 2008, 4,494 journalist positions were lost in the United States. The latest busload was announced last week at the Los Angeles Times, where 250 staff members, including 150 journalists, will soon be gone (and the number of published pages will drop by 15%). No doubt the shrinkage at the LA Times will spread elsewhere.
The press is now in “survival mode”, as one analysis puts it. Big newsrooms like the New York Times’ (a staff of 1,400) will soon be history. The market simply can no longer sustain such media battleships. That’s sad for the great trade’s mystique, but there is no time for hand-wringing. We must instead tame and ride the shift, and save what can be saved. The US will be much faster at the restructuring game than Europe. Downsizing will be quicker and more decisive. In less than a year, we went from a hiring freeze, to contract buyouts, to mass layoffs. Sad, brutal, unpleasant, but it will clear the way for the major shift ahead. And most American companies — as long as the financial markets do not breathe too hard down their necks — will be left with enough cash to invest in new, diversified, more agile kinds of media (and yes, for the bulk of it, much shallower ones…).
In a country such as France, the course of events might be different. Let’s turn, for an example, to a recent report on the evolution of the French print press. Jean-Marie Charon, a well-respected French media scholar who also happens to be fiercely independent from the ever-present lobbies of the trade, led the working group. (Disclosure: I was a member of the group; it gathered here and there for nine months or so. I kept quiet until we found out the report was being widely circulated.)
To make it short, the report’s conclusions rest on two scenarios. In the soft one, the press somehow mutates and cohabits with online developments, but the basic structure remains, with some refocusing. The other scenario describes a major shift toward digital media, with some casualties. New breeds of journalists with digital skills should emerge; they will contribute to the reinvention of journalistic “genres” suited to the Internet era. Today’s players will be forced to adapt or face extinction, as agile pure players wait in ambush, ready to take over the slots left undefended. Drafted months ago, the report’s conclusions appear to be strengthened by ongoing industry events.
Now, guess what is happening to this report? Every lobby you can think of has obstructed its release. For a start, publishers were outraged. Some old trade fogies contended it was out of the question to publish a scenario featuring such an industry upheaval. Bad timing, they said, as the press gathered its rags and prepared to beg the French government for another shot of taxpayer money (in France, subsidies already account for 10% of the revenue of daily newspapers). Next September, President Sarkozy will hold a national conference on the print press. The shindig is loftily dubbed “Etats Généraux de la presse”, a shameless historical reference to the times when French kings held big public debates to address a national crisis. You see, we are firmly into the twenty-first century. Every old (and not-so-old, unfortunately) press baron is getting ready for the event, rehearsing sob stories, thinking of ways to shame a complacent government into “one last dose” of life-support funding.
That kind of French corporate welfare is no stimulus for change. Neither are the unions. Technical workers and journalists are on the same page — no release of the report — but for other reasons: they refuse to even look at recommendations for drastic changes to their status. The fact is, with a few exceptions, French newspaper executives and newsroom managers are still digital-averse. This is great news for the media pure players set to emerge in the coming months, but not so great for the future of the French print media.
Not every European country suffers from such a bad alignment of the planets. The Nordic countries have been able to reinvent themselves quite quickly thanks to four factors: the big players enjoy a controlling position in their markets, which means solid financial health and the means to make changes; a long-term culture, also made possible by the capital structure of their media groups; an obsession with the training and intellectual openness of their managerial elite; and strong, disciplined leadership.
Countries that yield to corporatist lobbies and rely on government charity will take much longer to adapt. For them, the bottom of the J-curve is still far, far away. –FF
A Few Quick Links to Monday Note #42
Newspapers Downsizing – NYTimes and Herald Tribune to merge sites. The move was bound to happen: a growing number of NY Times stories are appearing in the Herald Tribune, and the NYT Co. is bleeding ad revenue. There is no longer room for duplication. The merger on the web is the first step (a pretty easy one to take); the newspapers will follow. It is a matter of when, not if, the IHT brand disappears. (Story in the IHT)
Online Advertising – Publicis Groupe launches VivaKi, a weird name (how much did they pay for such a neologism breakthrough?) for a global initiative in which the No. 3 advertising group will combine all its digital forces. Says Maurice Lévy, Publicis Groupe chairman: “Digital revenue should represent more than 25% of the group’s total revenue by 2010, compared with 18% in the first quarter of this year”. (Story in the FT)
Social Networks – LinkedIn worth $1bn. At least according to VCCW (Venture Capital Common Wisdom), based on the $53m investment coughed up by a group of VCs including Bain Capital Ventures, Sequoia Capital, Greylock Partners and Bessemer Venture Partners. (Story in Condé Nast Portfolio)
Aggregator – Slow growth for Google News. In May, Google News drew only 11.4 million users. It ranked No. 8 among news sites, far behind Yahoo News, No. 1 with 35.8 million visitors. Its 10% growth rate over the last two years is far slower than, for instance, MSNBC.com’s, which grew by 42 percent, adding 10.4 million users. Proof that the algorithm is not everything. (Interesting story in the New York Times)
(Finally) — The best bang for the buck. Find out how the clever tiny advertising agency Lastfool (no website in sight, sorry) made a funny viral movie for a cell-phone earpiece maker. Small budget, many viewers. The funny part is the counter-attack by an anonymous member of the French mobile phone lobby…
When a $oftware company experiences a sudden fit of generosity and donates its first-born to the world of Open Source, what are we to think? That they made so much money it was embarrassing? Or that it’s an act of desperation: we can’t sell it, maybe they’ll use it if we give it away. Uncharitable minds add: and then we’ll make money telling others how to decipher the inscrutable code and explaining away bugs — not to be confused with fixing them. More politely: give away the code and sell services around it. It can work; ask IBM and Red Hat. Or look at Google: it wouldn’t exist without the Open Source movement and its star, Linux, powering its servers, one million of them and counting.
Back to Symbian: what’s the real story? Admitting defeat or, at long last, having found a way to make money with the OS? Knowing Nokia, certainly not the former. It is today the number one smartphone maker, ahead of RIM (BlackBerry) and Apple, and it has no intention of ceding the throne. But it’s not about making money with the Symbian OS either; that’s impossible. Let me explain.
Once upon a time (before the Newton, Palm and the Pocket PC), Psion, a British company, was the king of “organizers”, later called PDAs, Personal Digital Assistants. Through the twists and turns of the genre’s history, perhaps a topic for another column, Psion lost its crown and went out of the PDA business. But the OS inside the Psion was a gem (this is an ex-user speaking): it multitasked without crashing. More twists and turns, and a joint venture was born, led by Nokia and Motorola, with followers such as Sony Ericsson and Samsung. Called Symbian, the company got the Psion OS. Symbian was to develop software for smartphones and make money licensing it to its partners.
Bad business model, bad timing, bad structure. Bad business model because handset makers don’t (or didn’t) actually care about software and don’t want to pay anything of significance for it. They (and their masters, the carriers) spend more money on the nicely printed cardboard box than on the software inside. Bad timing because the smartphone market wasn’t really there when Symbian was born ten years ago; it only woke up around 2005, when Nokia, RIM and Palm together shipped a few million units that year.
Lastly, bad structure. No one was really in charge: the owners/competitors each wanted different features and a different user interface; application compatibility was nonexistent, even unwanted in many cases; and development tools weren’t up to the power and quality PC developers enjoyed. Symbian kept losing money, and Nokia, viewed as the main beneficiary of the messy joint venture, kept pouring cash in.
Today, we see that the smartphone market did more than wake up. RIM’s business grows by more than 100% a year; Apple, while number three worldwide, manages to shake up the industry and to look bigger than it is — or to project an accurate picture of its future, we’ll see; Google announces its Open Source smartphone OS, Android; Microsoft acquires Danger, the maker of an interesting smartphone, the Sidekick, and proclaims its intent to “own” 40% of the market by 2012.
All this, in my view mostly Apple and Android, pushed Nokia to try to regain control of its OS’s future. To do so, Nokia buys out its partners and becomes the sole owner of Symbian, now recast as the Symbian Foundation, a name that sounds very non-profit.
Good, you’ll say, they want to be in the driver’s seat (unintended obscure geek pun here…), but why go Open Source, then? My guess: it was a condition of buying the partners out. Nokia: you have access to the source code, my dear friends, you have total freedom. My other hunch is that the license won’t be the most constraining of the Open Source variants. By this I mean there is the GPL, which obligates you to share every improvement (or bug) you make and also forces you to place in the Open Source domain any code that uses or connects to the GPL software you’re enjoying: everything must become Open Source. Other licensing arrangements let you contribute to the public Open Source domain while keeping a wall between your private code and the public one. Whether this is “true” Open Source or not is the topic of heated arguments hopelessly mixing principle and money. Type “Open Source arguments” into Google for a sample.
I doubt Motorola, Samsung and Sony Ericsson will keep using the Symbian Open Source code for long; they’re likely to turn to one of several mobile Linux vendors. That is better than developing their own OS code, and safer than hoping Nokia will give away improved Symbian code. Just last week, LiPS, the Linux Phone Standards Forum, decided to merge into LiMo, the Linux Mobile Foundation.
This looks like a smart move by Nokia: regain control of its OS future, look politically correct, and throw its competitors into a jungle of platforms (more than 60 worldwide, I’m told). A beautiful mess, opportunities galore: like microcomputers before Microsoft and Apple made them PCs.
Nokia: control like Apple, sound like Google. –JLG
Social networks: is my Facebook information mine?
Let me start with an example. Hopefully, the concept will emerge.
Facebook. The latest fracas is their conflict with Google’s Friend Connect, a technology that gives any website simple tools to acquire social networking features.
As a result, users of my organic gardening site connect and share ideas, recipes and pictures with their friends on other participating sites, such as Facebook, hi5, Orkut and many others (social networks or not). The point of Friend Connect is not being forced to become a member of yet another site, just sharing. A side effect is that it becomes easier to take my personal data from Facebook and move my information elsewhere.
No, no, says Facebook. After initially agreeing to the Friend Connect interchange, it blocked access.
This raises the question in the title: is my Facebook information mine or not? The company has spent upwards of two hundred million dollars building a “free” service. The value Facebook counts on to generate advertising revenue is what it felicitously calls the social graph. As the name suggests, this is information about me and the people I connect to: what we like, the pictures we share, music recommendations, the games we play, the purchases we make, invitations to events.
Everything about everyone, arranged in a knowledge network. A slight exaggeration, but you see the idea: not just tons of details about me, but a web of such details. This leads to the advertiser’s wet dream: ads focused on one individual, at the right time. Gee, Joe just told his friends he’s got a new job; let’s see if he’s in the mood for a new car, a new suit, or inviting his best friends to a celebratory dinner. For you, special prrrrice today!
Facebook is currently under investigation by Canadian authorities for its ways with user privacy, and we all recall last fall’s stumble with Beacon. Users weren’t pleased to discover that Facebook passed information to merchants without their knowledge and consent. The plan was creepy: even when users weren’t logged on to Facebook, some of their moves were recorded and passed on to “partners”. There is a pattern here: Facebook thinks it owns my data. This is the gold mine it wants to exploit, and it doesn’t like the idea of the data flowing somewhere else (read: Google).
Facebook is not alone. Many suppliers in our PC/Internet life clearly think they have extensive rights over our machines and our data. I recall the incessant Orwellian demands to download Windows Genuine Advantage (a nice bit of newspeak) to enable operating system and Office updates. But I already proved last week that I have a genuine copy of Windows! Never mind, do it again. In ironic ways, it gets worse with companies such as Symantec and their security products. Once installed, these are exceedingly difficult to remove. This is for your safety, you see: we conceal key bits so the virus bad guys can’t remove them. Well, no: keep insisting and Symantec will reluctantly tell you where to download a removal tool that the bad guys can use as well. –JLG
Maurice Lévy, 66, is chairman and CEO of Publicis, the No. 3 advertising group in the world. His son, Alain Lévy, 45, is the CEO of Weborama, one of the leaders in Internet analytics in Europe. Two generations, two different vantage points on the changing advertising market, confronted in this interview by Le Monde (full text below).
Here are their respective takes on various subjects:
On the ad sector in general. Maurice (Publicis): “Our response times [to the tech challenges] are way too long. We need to speed up. The inflection point for our companies is now”.
On the shift in ad spending. Maurice: “Print and TV are far from dead. Today, they account for 92% of ad spending. In 2010, it will be 88%, but the share of the Internet will have doubled”. Alain (Weborama): “OK, TV will remain dominant, but it will become digital and will eventually allow everything that is currently done on the Internet — interaction, targeting…”
On the difficulties of the print media. Maurice: “The print media must take advantage of two assets: their brands and their ability to select and process information”. Alain: “Yeah, but today the so-called digital natives have zero loyalty toward content brands”.
On the strategies to implement. Alain: “One of the key questions is the relationship the big players will have with the technologies. Should they own them?” (Background: Alain Lévy is adamantly warning against domination by Google, as he said in issue #27 of the Monday Note.) Pragmatic as usual, Maurice has chosen his camp: “In the interest of its clients, Publicis has decided to make a deal with Google and to work with it”.
Family lunches must be animated between Maurice and Alain Lévy.
Pub, médias, Internet : le grand chambardement
Maurice Lévy est président du groupe Publicis, Alain Lévy est président de StartUp Avenue et de Weborama. Les deux générations que la “numérisation” a rapprochés confrontent leurs analyses.
Alain Lévy : D’un ensemble de techniques de connaissance des comportements des internautes qu’on appelle les Web analytics. Mon entreprise, Weborama, conçoit des outils qui sont placés sur les sites pour compter leur nombre de visiteurs, et d’autres qu’on place sur le navigateur de l’internaute (des “cookies”), et qui analysent sa navigation. Pour les annonceurs, l’intérêt est grand. Quand une publicité s’affiche, on sait si l’internaute a cliqué dessus, si ensuite il a acheté, combien il a dépensé. Ce qui permet d’évaluer l’efficacité des campagnes.
M. L. : Ces nouvelles possibilités ne signifient pas que la télévision ou la presse sont caduques. Celles-ci ont encore leur place, et une place prépondérante puisque aujourd’hui, ce sont 92 % des investissements publicitaires qui vont dans ce domaine. Demain, en 2010, ce sera encore 88 %, mais entretemps la part du Web aura doublé.
A. L. : La télévision restera prépondérante, mais elle sera numérique. Cela veut dire que tout ce qu’on peut faire sur Internet, on pourra le faire avec la télévision. Des campagnes ciblées, interactives…
Et la presse écrite ?
M. L.: I believe the press plays an essential role as a ferment of our democracies. The rise of the Net poses a problem for it, because part of the advertising is shifting to these new media. The press is a heavier medium for advertisers: its spaces are fixed, with no movement, no sound, no music. It is therefore a fairly limited mode of expression for advertisers. As a result, newspaper budgets are the easiest ones to cut.
The press has two assets, which it exploits more or less well. The first is its brands: in the Internet universe, it is easier to find your way when you know the name of the site, lemonde.fr for example. The second is that the press masters information: it knows how to select it, process it, and prioritize it. It must leverage that asset against the profusion of messages. But time is pressing, if I may say so.
A. L.: At the risk of being politically incorrect, I believe the game is nearly over. The migration of traditional media to digital will take time, and, for information search, Google is sweeping the board. The so-called “native” generations, born with the Internet, have zero loyalty to content brands. On the other hand, they need to get what they want immediately, and not much more. It is an educational duty to pass on to them the idea that one can go further than the raw news. When I read a piece of news on the Net, I sometimes have a doubt and check it in the newspapers. But I belong to the last generation with that reflex. The ones that follow will be all-digital.
M. L.: The newspaper brands that manage the transition to the Net are the ones that will win. That is already happening in the United States. The New York Times and the Wall Street Journal are increasingly abandoning paid access in order to capitalize on the traffic to their sites and monetize their audience. That leads me to say there is a future for the press, but not the same one, and no longer only on paper.
And for the advertising business, what should the strategy be?
A. L.: The real question is what relationship the big Internet players maintain with technology: should they own it and control all the tools, or instead let new companies take on the giants? Google, to its credit, invented the Internet’s business model. Thanks to Google, a page view equals euros, whereas before it was worth zero. But we entered a new era when the European Commission approved Google’s acquisition of DoubleClick, the world leader in online advertising. Its dominance is becoming absolute…
M. L.: Google is unbeatable in search. DoubleClick masters banner advertising. The combination of the two yields considerable power. Publicis therefore judged it wise, in the interest of its clients, to reach an agreement with Google and work with it.
A. L.: I take a different view. Google’s power rests on a highly efficient technology, an unmatched capacity to accumulate and analyze data. That gives it the means to buy everything that moves. It is a kind of grim reaper attacking every player, every medium: telecoms, advertising, digital communication at large. And so Google, the very symbol of hyper-competitive markets, ends up killing all competition.
How will the advertising professions evolve with the new technologies?
M. L.: That is the essential point. When you run a campaign on television or in print, you place the orders, you wait, and at the end of the campaign you measure the effects and adjust your aim for the next wave. And the cycle starts over, indefinitely…
A. L.: Now the same thing can be done in real time. The moment there is a click, it appears on screen. For an advertiser, this tool is intoxicating: one click, and revenue registers. There is no need to wait for the experts’ verdict. This is where my father and I disagree. I think that, in time, the biggest advertisers will want to control the whole process. As a result, the agency’s job will be confined to the creative side, which will in fact be very important, since we are moving toward a model of one person, one behavior, one “creative.” Technology will get involved, so Google will enter this market.
M. L.: That is to ignore how Google works. Its returns come from the fact that everything is automated. It puts in many engineers, a considerable deployment of intelligence, to develop a tool. But once the tool is finished, that is it: it runs with very little labor. In communications, we put very few people on designing tools and a great many on thinking through each advertiser’s specific needs. The two business models are polar opposites.
What are the next stages of generalized “digitization”?
A. L.: We will no longer know the consumer only through his computer; we will follow him in real life. That is what another company I helped launch, Majority Report, is working on. It does the same thing as Weborama, but in the physical world: analyzing trajectories, understanding customer behavior at the point of sale. The Net’s technologies will radiate into our whole environment, not just the media. For example, it will be possible to count exactly how many people attend a demonstration.
What does this all-digital world imply for our society?
A. L.: That is a real question. As a user, what am I prepared to tolerate? What information about my life am I prepared to give away? The term “tracking,” which designates the statistical monitoring of behavior on the Internet, literally means following someone’s trail, which is rather dreadful. In France, the Commission nationale de l’informatique et des libertés (CNIL) watches over this, but it struggles to grasp everything that is going on. At Weborama, in any case, we make sure we hold no data that would allow our analysis of a behavior to be linked to an individual. This will be a major issue in the years to come. Consumers are increasingly aware that the traces they leave are being exploited. Is freedom itself at stake?
M. L.: It is true that we are entering Big Brother’s world, and that the means exist to trace behavior. With GPS technologies, people can be located through their mobile phones; their cars can be followed; where they go, what they buy, and what they communicate can all be known. We live in a communication society that can endanger public liberties and privacy.
On the advertising side, there is another danger: intrusion. For example, you visit a car site, and the advertiser can step in and make you a better offer. At Publicis, we resist this because it truly is an intrusion. We believe people will not accept being watched over their shoulder.
The free business newspaper CityAM is growing slowly but steadily. Its circulation is now close to 102,000 copies, a 47% increase since its launch in September 2005, and it could now expand beyond London. Financially, CityAM made a €59,000 profit for the six months ended in March, on revenues of €4.4m for the period.
In many ways this small newspaper represents what a modern, focused daily should be:
– A lean and mean organization, built around a circulation calibrated for its audience and small compared to other free UK papers (400,000 for Londonlite, 500,000 for The LondonPaper, 1.36m for Metro UK)
– Precise targeting: CityAM is distributed in the City of London and at Canary Wharf, i.e. at the exits of only 17 subway stations out of the 572 in London. As a result, it enjoys a market reach greater than that of the Financial Times (in fact, 80% of those who pick up CityAM have not read the FT when they arrive at work).
– Selective journalism: not only does CityAM bring its share of scoops, but it also manages to provide incisive analysis thanks to a sharp set of columnists. Its small staff also conducts excellent in-depth, fairly long, interviews. (CityAM is largely killing the idea that free press means only short articles).
– Editorial mix: a fair share of the paper is dedicated to lifestyle (about fifteen items spread across the week) and sport.
The result is an enviable and solvent readership, with readers earning an average of €80,000 a year. Not the grandeur of the Financial Times, but far more readers per story. (You can find more on the subject in PressGazette and in the Newspaper Innovation blog.)
For many of us involved in the transition from print to digital media, Jim Romenesko was an early warning of what was about to happen to the industry. On his blog — always spartan — he has been gathering information at various stages of elaboration, from gossip to more fact-checked content. In his excellent Portfolio magazine column, Howell Raines (former editor of the New York Times) recounts his virtual interview with this influential blogger, who is paid $170,000 a year and scans 100 blogs a day. “Fact-free journalism,” according to Howell Raines.