About Frédéric Filloux

Posts by Frédéric Filloux:

Gunning for the Copyright Reformers

Going after copyright reformers is risky business. To digital zealots, defending copyright is like advocating a return to the typewriter. (I personally like typewriters; I own several and I recommend a wonderful 1997 Atlantic piece on them at Longform.org). Going after sworn copyright opponents is what Robert Levine does in his just-published book Free Ride — How the Internet is Destroying the Culture Business and How the Culture Business Can Fight Back.

The pitch: Digital corporations are conspiring to promote the free ideology that has been plaguing the internet over the last decade. With their immense financial firepower, the Googles and the Apples and the Silicon Valley venture capital firms that funded Napster did whatever it took to undermine the concept of copyright. From lobbying the United States Congress to funding free-culture advocates, they created a groundswell for rip-and-burn products that would sell their MP3 devices. They got lawmakers and pundits to pave the way for a general ransacking of intellectual property — from music to journalistic content. Once Levine makes his point, he explores possible solutions to restore value to creativity (We’ll address these in a future column).

Needless to say, Robert Levine has produced a politically incorrect opus. And that’s what makes his book fascinating.

To start, the author reframes the famous quote, “Information wants to be free.” Free Ride recalls the complete quote, which is far more nuanced. This is what tech writer Stewart Brand actually said at a 1984 hackers’ conference:

“On the one hand information wants to be expensive because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other.”

Few quotes in recent history have been more twisted and misinterpreted than this one. Everyone jumped on Stewart Brand’s distinction between collecting information and making it available to the audience. While the cost of the former remains high — at least for those producing original information, or content — the marginal cost of broadcasting it fell dramatically, and that is what sparked the idea of a zero-cost culture. Yet, “media products have never been priced according to their marginal cost,” Levine says, and therefore, free is an idea that’s hard to defend.

As described in Free Ride, US lawmakers played a critical role in opening the floodgates of piracy and copyright violation on the internet. On October 28, 1998, Bill Clinton signed the Digital Millennium Copyright Act. That law, says Levine, gave a “safe harbor” to internet service providers and some online companies, which were no longer liable for copyright infringement based on the actions of their users. The “safe harbor made it easier for sites like YouTube to become valuable forums for amateur creativity,” Levine writes. “But it also let them build big businesses out of professional content they didn’t pay for.” That, he says, is how Congress created YouTube. (Google purchased it in 2006 for $1.65 billion).

The book’s most spectacular deconstruction involves Lawrence Lessig. The Harvard law professor is one of the most outspoken opponents of tough copyright. For years, he’s been criss-crossing the world delivering well-crafted, compelling presentations about the need to overhaul copyright. When, in 2007, Viacom sued YouTube for copyright infringement, seeking more than a billion dollars in damages, Lessig accused Viacom of trying to overturn the Digital Millennium Copyright Act. It was a de facto defense of Google by Lessig, who at the time was head of the Center for Internet and Society at Stanford University. What Lessig failed to disclose is that two weeks after closing the deal to acquire YouTube, Google made a $2-million donation to the Stanford Center, and a year later gave another $1.5 million to Creative Commons, Lessig’s most famous intellectual baby. To be fair, Levine told me he didn’t believe Lessig’s positions on copyright were influenced by the grants from Google. Moreover, Google set aside $100 million to fight the Viacom lawsuit. Numerous examples throughout Free Ride show how committed technology companies are to influencing public policy. Ironically, Lawrence Lessig’s newest crusade at Harvard is about corruption in Washington.

Robert Levine’s book could be disputed on a few items.

- One, he’s too kind to the music industry. (His view may have been influenced by his tenure as executive editor of Billboard magazine where he witnessed first-hand the self-inflicted deterioration of the music industry.) The music business missed all the trains: (a) it defended the physical model up to the last minute even as its annihilation seemed unavoidable; (b) it extended as long as it could the double screwing of consumers and artists alike (sadly, poor analog artists have been replaced by poor digital ones).

- Two, he tends to forget the general complacency of content creators toward all forms of digital looting. I’ve often described in the Monday Note how publishers – blinded by the short-term appeal of the eyeball count – became consenting victims of all sorts of aggregators (see my Lenin’s Rope series).

- Three, the advent of free content has in fact unleashed talent. Unknown authors have been able to rise from obscurity thanks to direct access to the audience. And some have found alternative ways to make money (more on this in another future column).

Lastly, the unfolding of technology made the relaxing of copyright unavoidable. The Digital Millennium Copyright Act may have accelerated the transition but it didn’t cause the upheaval. Today, BitTorrent file transfer for music and movies accounts for about 10-12% of the internet bandwidth consumption, and YouTube accounts for 11%. Pirated content represents almost 100% of the former and about a third of the latter. Huge numbers, indeed, and huge losses for the music and movie industries. But Netflix with its legitimate content now accounts for 30% of the entire internet traffic (Hulu has less than 2%) and iTunes is growing faster than ever. And some economists do consider that giving up a large quantity of content for free is the price that must be paid to preserve a marketable share.

The music industry paid a terrible price during the digital transition, with a 50% drop in sales in one decade. But it would be unfair to make lenient lawmakers and internet pirates the main culprits. Unbundling played a critical role as well, just as in the newspaper industry. Being able to buy a single song on iTunes (instead of an album), or hoping that a single article on a web page will generate enough viewers to pay for itself (instead of purchasing an entire bundled newspaper), caused a great deal of damage.

As plagued as it is by piracy, the movie industry is immune to the notion of unbundling, which partly explains why box office revenue between 2006 and 2010 rose by 30% outside the United States and by 15% in the US/Canada market. Although the number of moviegoers is slipping, the industry has been able to find its way into the digital world.

Robert Levine’s book is a must-read that reframes the debate on the evolution of copyright. In an unusual way, it encompasses a European view on the issue (Levine lives part-time in Berlin). That makes the book even more interesting as countries explore ways for content creators to finance their work while not killing the formidable creative freedom unleashed by the digital world.

frederic.filloux@mondaynote.com

Free Ride, by Robert Levine, is published by Bodley Head in the UK (available now on Amazon UK) and by Doubleday in the US (available October 25 on Amazon US); it is also available on the iTunes iBook Store.

Catching the Cloud

When it comes to contracting for a computer service, there is little choice but to hope for the best. Small or mid-size companies, especially those located outside the United States, are betting they’ll never have to go to court – usually one located 11,000 km and thousands of dollars in legal fees away. Let’s face it: contracting with a large American company is a jump into the unknown. Agreements are written in an obscure form of English, often presented in PDF format, transparently implying that modifications are out of the question. Should you consider litigating, be prepared to make your case before a judge located on the West Coast of the United States. The not-so-subliminal reading of such contracts: ‘Sue me…’, with a grin.

The Cloud’s rise to prominence makes things worse. A growing number of companies and individuals hand their data to a remote infrastructure offering little hope of any legal leverage. The Cloud is the ultimate form of the outsourcing cascade. A US-based company rents capacity wherever electric power is cheap, connections are reliable, and the climate is friendly to server farms’ cooling towers. As world connectivity expands, so do eligible regions. (While doing research for this column, I found Greenland was served by a 960 Gbps (gigabit per second) undersea cable linked to Iceland. In turn, the volcanic island is linked to the rest of the world via the huge 5 Tbps “Danice” cable). Datacenters are sprinkled over a number of countries and workloads move from one server farm to another as capacity management dictates. At this point, no company knows for sure where its data reside. This raises further legal hurdles, as Cloud operators might be tempted to deploy datacenters in less stable but cheaper countries with even looser contractual protections.

European lawyers are beginning to look at better ways to protect their clients’ interests. A couple of weeks ago, I discussed the legal implications of Cloud Computing with Guillaume Seligmann, the lead tech attorney at the law firm Cotty Vivant Marchisio & Lauzeral. (He is also an associate professor at l’Ecole Centrale, a prominent French engineering school). ‘When it comes to Cloud Computing, the relationship between the service provider and the customer is by nature asymmetrical’, he says. ‘The former has thousands if not millions of customers and limited liability; in case of litigation, it will have entire control over elements of proof. As for the customer, he bears the risk of having his service interrupted, his data lost or corrupted — when not retained by the supplier, or accessed by third parties and government agencies’.
In theory, the contract is the first line of defense. ‘It is, except there is usually little room for negotiation on contracts engineered by expert American attorneys, based on US legislation and destined to be handled by US judges. Our conclusion is that solely relying on contracts is largely insufficient because it may not offer efficient means of sanctioning breaches in the agreement’.

The CVML partner then laid out six critical elements to be implemented in European legislation. These would legally supersede US contractual terms and, as a result, better protect European customers.

1 / Transparency. Guillaume Seligmann suggests a set of standard indicators pertaining to service availability, backup arrangements and pricing – as in the banking industry, for instance. In Europe, a bank must provide a borrower with the full extent of his commitments when underwriting a loan. (Some economists say this provision played a significant role in containing the credit bubble that devastated the US economy).

2 / Incident notifications. Today, unless he is directly affected, the customer learns about outages from specialized media, rarely through a detailed notification from the service provider. Again, says Seligmann, the Cloud operator should have the obligation to report in greater detail all incidents, as well as the steps taken to contain damage. This would allow the customer to take all measures required to protect his business operations.

3 / Data restitution. On this crucial matter, most contracts remain vague. In many instances, the customer wanting to terminate his contract and get back his precious data will get a large dump of raw data, sometimes in the provider’s proprietary format. ‘That’s unacceptable’, says the attorney. ‘The customer should have the absolute guarantee that, at any moment of his choosing, he will have the right to get the latest backed-up version of his data, presented in a standard format immediately usable by another provider. By no means can data be held hostage in the event of a lawsuit’.

4 / Control and certification. Foreign-headquartered companies, themselves renting facilities in other countries, create a chain fraught with serious hazards. The only way to mitigate risks is to give customers the ability to monitor at all times the facility hosting their data. Probably not the easiest to implement for confidentiality and security reasons. At least, says Guillaume Seligmann, any Cloud provider should be certified by a third party entity in the same way many industries (energy, transportation, banking) get certifications and ratings from specialized agencies – think about how critical such provisions are for airlines or nuclear power plants.

5 / Governing laws. The idea is to avoid the usual clause: “For any dispute, the parties consent to personal jurisdiction in, and the exclusive venue of, the courts of Santa Clara County, California”. To many European companies, this sounds like preemptive surrender. According to Seligmann’s proposal, the end-user should have the option to take his case before his own national court and the local judge should have the power to order really effective remedies. This is the only way to make the prospect of litigation a realistic one.

6 / Enforceability. The credibility of the points stated above depends on their ability to supersede and to render ineffective conflicting contractual terms imposed by the service provider. In that respect, the European Union is well armed to impose such constraints, as it already did on personal data protection. In the US, imposing the same rules might be a different story.

The overall issue of regulating the cloud is far from anecdotal. Within a few years, we can bet the bulk of our hard drives – individual as well as collective ones – will be in other people’s large hands: Amazon S3 storage service now stores 339 billion objects – twice last year’s volume.
We’ll gain in terms of convenience and efficiency. We should also gain in security.

—  frederic.filloux@mondaynote.com

The ePresse Digital Kiosk: First Lessons

[correction added about Relay.com's rate]

On June 30th, the French consortium ePresse opened its digital kiosk. Six months of hard work for a very small team (the ePresse consortium is a three-person operation: a CTO, a marketing person, and a manager), and still a long way to go. ePresse brought up eight titles: five dailies (Le Figaro, Le Parisien and its national edition, Libération, the sports daily l’Equipe and the business paper Les Echos), and three newsweeklies (L’Express, Le Point, Le Nouvel Observateur). This is only the rocket’s first stage: an iPad/iPhone app allowing per-copy purchases within the App Store; more to come this Fall.

Knowing I’m in charge of this development, editors and news executives abroad have inquired about the experience. Here are a few early observations.

[English version of ePresse demo here]

First, the big question: Why build a digital newsstand? After all, there is no shortage of places to buy online editions: Zinio, deployed globally; Relay.com and LeKiosque.fr in France. And, of course, Apple, which will roll out its own Newsstand before year-end.

The answer is of a strategic nature: we’re dealing with concerns over control and technology.

For publishers, retaining full control of all commercial aspects of their digital sales channels is a critical matter. They must safeguard their freedom to decide prices, marketing strategies, discounts, bundles, special deals. They must also protect their ability to collect valuable customer data, without having to beg permission from a third party to do so. Marketing being the tactical engine of the trade, it is also one of the most underdeveloped assets of the press — and not just in France. A kiosk owned and controlled by publishers will be immensely beneficial for all involved.

Now, let’s take a walk through a usage scenario. You start by downloading the (free) kiosk application on your mobile phone. Next, you launch the application. A welcome screen greets you: for one euro (or dollar, or pound), you get unlimited access to the entire kiosk for one (or two) weeks, all you can eat.
Publishers might not like this: it amounts to a “leak” of digital copy sales that won’t be counted by the Audit Bureau of Circulation. But savvy publishers will also consider the upside: (a) the customer leaves his name and credit card info (that’s what the one-euro thing is about); (b) he will leave a trail of data. Then, when the almost-free trial period ends, a tailored offer is pushed to each individual customer, based on his recorded readings. An individual’s preferred title would be well advised to offer him/her a steep subscription discount.
Over time, as reading patterns build up in the customer database, it becomes easier and easier to push offers based not only on title preferences, but also on a predictable news cycle. A political newspaper might cook up special deals six months before an election; a sports paper might do the same with the Olympics and similarly attractive events. Here, tactical flexibility provides outsized payoffs. As for occasional customers sticking to per-digital-copy purchases, they should be offered an incentive to give an email address, the ultimate goal being to convert them into digital subscribers.
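
To make the trial-to-subscription mechanics concrete, here is a minimal Python sketch of the kind of offer targeting described above. Everything in it – the reading log, the titles, the 40% discount – is a hypothetical stand-in; ePresse’s actual systems are not public.

```python
from collections import Counter

# Hypothetical reading log captured during the one-euro trial:
# (customer_id, title) pairs. Real kiosk data would be far richer.
READINGS = [
    ("u42", "Le Figaro"), ("u42", "Le Figaro"), ("u42", "L'Express"),
    ("u42", "Le Figaro"), ("u42", "Les Echos"),
]

def preferred_title(readings, customer_id):
    """Return the title this customer opened most often."""
    counts = Counter(title for uid, title in readings if uid == customer_id)
    return counts.most_common(1)[0][0]

def trial_end_offer(readings, customer_id, discount=0.40):
    """Build a tailored offer once the almost-free trial period ends."""
    title = preferred_title(readings, customer_id)
    return f"{int(discount * 100)}% off a digital subscription to {title}"

print(trial_end_offer(READINGS, "u42"))
# -> 40% off a digital subscription to Le Figaro
```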

Now the reality check: this scenario doesn’t work with current kiosks; pricing policies are constrained, promotional offers are not possible (they would eat up the kiosk’s margin) and the newsstand keeps customer data for its own marketing purposes. Plus, most kiosks charge around 30%, roughly three times the cost of an efficient digital delivery system.

The same goes for bundles. Currently, platforms handle those rather crudely. For publishers, per-copy sales aside, subscription systems end up as value-killers. In France, the Hachette-operated Relay.com kiosk offers a 9.90€ a month digital bundle for up to 30 magazines. A great bargain indeed, but one that yields a mere 0.30€ for each publication — before the kiosk’s cut. In other words, nothing. One of Relay.com’s bestsellers is said to be the 19.90€ a month all-the-magazines-you-can-read deal, with a similarly puny outcome for magazines.

In contrast, a publisher-run kiosk can introduce more bundling refinements, such as a combined daily + weekly subscription system or any other combination that makes sense marketing-wise. Deploying such arrangements will require a great deal of cooperation among titles – something close to performing unnatural acts, a delicate aspect of the job.

Building the system also involves deploying multi-title CRM (Customer Relationship Management) systems. This, in turn, requires weaving together customer databases belonging to different and sometimes competing titles – again, plenty of diplomatic issues in sight. I might be a little naïve, but I think media groups have made a great deal of progress recently when it comes to understanding the benefits of building integrated systems. With this in mind, for a consortium such as ePresse, the goal is to yield more value than the sum of its parts.

Now let’s jump to the technology aspect. ePresse.fr, launched ten days ago, is but the first stage of a much larger setup. Today, we limit ourselves to an iOS app with per-issue sales only, through the Apple app store (lower-case “app store” with intent, as it seems Apple won’t be able to own those words). Obvious next steps include other mobile platforms and, more importantly, a subscription system directly available to smartphones, tablets and, of course, the web. In the process, we’ll add a couple more titles, but we intend to remain selective.

Mobility is a critical component. Currently, digital kiosks offer mostly PDF-based editions. As discussed in a previous Monday Note, PDF is by no means the future of digital media. PDF was once a fantastic invention, but it wasn’t designed for today’s task: encapsulating news.
With this in mind, during the first months of ePresse development, we spent a great deal of time aligning the output of the different publications to what we knew was the right target for mobility: XML feeds for stories on top of a “zoned” PDF that defines the placement of a story on a page. Such feeds were supposed to come directly from each publication’s CMS (Content Management System). Some were able to supply the correctly formatted feeds right from day one; others needed upgrades to their CMS output. At the publications, tech teams were very cooperative. We also got serious help from EDD, a French company specializing in digitizing media content (EDD indexes and distributes 50,000 articles per day). EDD collects publishers’ PDF files and sends them to India, where the files are taken apart to produce the required output, all of it done every night within two hours.
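
As an illustration, here is a toy version of the kind of per-article XML feed such a pipeline could produce. The tag names and attributes are invented for the example; the real formats used by ePresse and EDD are not public.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical article feed. A real CMS export would also carry
# bylines, images, and the zoning data mapping each story to the PDF page.
FEED = """
<edition title="Le Figaro" date="2011-07-11">
  <article id="a1" page="1">
    <headline>Sample headline</headline>
    <body>First paragraph. Second paragraph.</body>
  </article>
</edition>
"""

root = ET.fromstring(FEED)
for article in root.iter("article"):
    # Each story is addressable on its own, independently of the PDF page,
    # so a client app can reflow the body text to fit any screen.
    print(article.get("page"), article.findtext("headline"))
```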

Once clean XML feeds (standardized for the eight titles) finally became available, we had to put them on our content-delivery platform. We did this by re-aggregating all the components (PDF, zoning/mapping files, XML files, summaries, graphic elements) within a transaction-tracking mechanism. For this, we picked miLibris, a French startup that provides reading tools and cataloging systems for publishers, and for the French ISP and mobile carrier Orange.

Again: the idea was to use native XML to publish each title we serve, fully formatted for each article.

Three reasons for this:

Readability. You can’t read comfortably while constantly zooming and pinching. The screen of a smartphone covers only about 1/60th of a broadsheet newspaper page. To read a “facsimile” rendering of a 30-page publication, you’d need 1,800 pans and zooms (30 pages × 60 screenfuls per page). Insanely unrealistic. XML gives us the ability to automatically reformat the text to fit the device: smartphone, tablet or, eventually, PC browser. No more pinching and zooming, just scrolling.

Functionalities. Relying on XML and text opens the way to a broad set of additional features: font-size adjustment, social sharing of articles, ability to create users’ folders, search, recommendation engines, etc.

Future-proof. At some point, we’ll get rid of PDF. As mobility usage rises, readers will demand quicker downloads over 3G or EDGE cellular networks. Two obstacles remain. The first is each title’s graphic identity: legitimately, publishers demand the preservation of the visual aspects of their publications. We’re making progress, as the comparison below shows; the PDF version of a page:

[Figure: the PDF version of a newspaper page]

and its XML/HTML5 translation:

[Figure: the same page rendered in XML/HTML5]

… But we aren’t yet able to translate the minute details of a refined newspaper layout into XML and HTML5.

The second obstacle is more economic. In some countries (such as France), the entity in charge of circulation audits (equivalent to the ABC) refuses to take digital copies into account as long as they are not exactly identical to the print version. This outdated posture explains the persistence of PDF formatting: it is accepted as a “carbon copy” of the print original. My take is that this will evolve over time. Already, titles such as the Economist offer an encapsulated version of their print edition that carries the same editorial content, but with a different advertising setup in some parts.

The evolution of the “edition” concept is indeed a key question. On the one hand, the notion is deeply associated with the idea of branded news encapsulated in a “cognitive container” – yesterday the paper, today the digital edition tied to an app. On the other hand, digital news also begs for real time. This can be implemented through a variety of techniques: an overlaid real-time news display, or permanently updated editions, which, in turn, push hard in favor of a subscription model vs. per-copy sales, the latter a mere (but necessary) transition.

frederic.filloux@mondaynote.com

The New Faces of Digital Readers

First of all, note the evolving language: the term Online Readers is now passé as it morphed into Digital Readers. The shift reflects two trends: a broader range of device types and, in news consumption, the spectacular rise of mobility. Today, we’ll focus on a recent set of surveys that quantify these trends. And we’ll take a look at their impact on business models and strategies.

The first survey was released last week in Paris by Havas Media, a major European advertising player with a 25% market share in France. Last May, the polling company CSA surveyed a panel of 600 people reading 20 major French publications: national dailies and weeklies. Because the French rate of ownership for digital devices is comparable to other markets’, the survey’s findings can be safely extrapolated outside of France.

Here are the key findings:

Respondents declare spending 37 minutes a day on digital publications, as opposed to 22 minutes a day on the print press. This number is astonishingly high. It shows the switch to digital has occurred – at least for readers of large national media. It also confirms the segmentation of digital audiences. More broadly, when Nielsen finds that, in all mature markets, internet users spend no more than 30 minutes a month on digital newspapers, it also proves how important it is to go after the most loyal customers, as opposed to collecting eyeballs – and flybys – for the sake of raw audience numbers that carry less and less economic meaning.

How media consumption is distributed: according to the Havas Media survey, 51% of respondents prefer web sites, 31% go for electronic editions, and 17% use applications. In these numbers, the web’s dominance reflects (a) the high volume of content that is still free, as many publications keep playing both sides of the fence, i.e. both ad-supported and paid-for models, and (b) the importance of real-time news.
In contrast, the lower score of digital editions stems from the fact that most still use a basic PDF format. This doesn’t deliver the best reader experience, nor does it fit the needs of mobility: download speed and reading comfort on a smartphone screen. (I’ll come back to the future of digital editions in a coming Monday Note by talking about the ePresse.fr kiosk we launched last week in France.)
As for the poor showing of apps, it can be explained by the lack of great interfaces for smartphones, and the still relatively small penetration of tablets.

When do people actually read their news on digital devices? Mid-morning breaks constitute the first of two prime times, during which web consultation is favored by most users (36% of respondents), while digital editions and apps each account for 21-22% (apps are doing quite well at lunch time). The second prime time occurs in the evening, after work, when use is evenly distributed across devices.

The Havas Media / CSA survey also points to the prime motives in news consumption:

#1: Real Time information, mentioned by 48% of the respondents.
#2: Free access. Not really surprising: it will be difficult to get people to pay for news. But there is hope: 29% say they’d be willing to buy a digital edition. Interestingly enough (and sweet to Havas’ ears): 72% of respondents would be ready to accept advertising in exchange for a digital subscription, and 54% would accept more advertising in exchange for free downloads of digital content.
#3: Availability. A notion that encompasses accessibility and ease of use.
#4: Selectiveness is seen as print’s privilege and a key factor for liking it.

As for tablets, 56% of their use involves reading the branded press; that’s behind internet usage (77%), email (66%), or watching videos (62%). Respondents are not app freaks: they have downloaded only 7 free apps and a bit less than 4 paid-for apps on their devices. These surprisingly low figures appear to be specific to the French market.
In the United States, according to a recent Nielsen survey, the picture is different: iPhone/iPad users have an average of 48 apps of all kinds on their device, vs. 35 for Android users and 15 for RIM users (read Jean-Louis’s recent Monday Note to understand the BlackBerry’s problem).
But if you factor in the actual use of these apps, by counting people who open them several times a day (68% of users for iOS, 60% for Android and 45% for RIM), you’ll see what provides the best return on effort in the application business. In terms of number of apps loaded and used, if we take a base of 100 for iOS, Android scores 64 and the BlackBerry 21 (see the sketch below).
Of course, the picture needs refinement: on the tablet market, Apple still dwarfs Android by 100 to 2, but in the smartphone business, Android enjoys a 38% penetration (according to Nielsen), vs. 27% for the iPhone and 21% for RIM. Altogether, between higher Android penetration and more usage by iPhone users, building apps for either platform will yield roughly the same market reach.
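
Here is the arithmetic behind that base-100 index, as a small Python sketch using the Nielsen figures quoted above:

```python
# Nielsen figures quoted above: average number of apps per device, and the
# share of users who open their apps several times a day.
platforms = {
    "iOS":     {"apps": 48, "frequent_use": 0.68},
    "Android": {"apps": 35, "frequent_use": 0.60},
    "RIM":     {"apps": 15, "frequent_use": 0.45},
}

# Weight apps loaded by actual usage, then normalize to a base of 100 for iOS.
raw = {name: p["apps"] * p["frequent_use"] for name, p in platforms.items()}
base = raw["iOS"]
for name, score in raw.items():
    print(f"{name}: {round(score / base * 100)}")
# iOS: 100, Android: 64, RIM: 21
```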

In conclusion, what does this mean for our media business?

1 / There is still a long way to go for applications to match browser adoption; it is mostly a question of interface quality.
2 / People expect real-time news, including in applications, or the added value needs to be outstanding.
3 / Digital editions carry more of the brand’s attributes; but as long as they are not supported by better applications able to provide real-time news updates, they will remain a relatively small market.
4 / The advertising model needs a bigger dose of creativity: a large chunk of readers would agree to more ads as long as their publication remains free — which paves the way to reinventing the sponsoring model for digital editions or for encapsulated contents.

frederic.filloux@mondaynote.com

It’s all about accountability

Compared to Anglo-Saxon journalism standards, French practices are regrettably lax. That doesn’t mean France lacks remarkable writers, editors or media; but, too often, their practices are just sloppy. Here, journalists abuse anonymous quotes and are too cozy with their sources. Papers are insufficiently edited; reporters routinely go after a story with a pre-defined agenda – they know what they want to write and will twist facts, quotes and background accordingly.

In France, stories are never corrected. Or corrections are used to drive a point further. If someone dares to exercise his legal “Droit de réponse” (the right to force the paper to publish a response to erroneous statements), he risks retribution. In 1984, as I was writing for Le Monde, some politician felt misrepresented and demanded a correction. My editor reacted: “Okay, we’ll publish his response. But we’ll append a ‘Six-bracket’ that will make him cry…” He was referring to a small piece (typeface size: 6) appended below the response, usually blasting the righteous individual… That was my introduction to the ritual.

For the record, I’m not by any means putting myself above the crowd. I made my share of mistakes, I’ve not always acted in good faith and more… And, in management positions, I failed to go after the behaviors I just criticized – mostly by not hiring people eager to improve journalistic standards. The mistakes I made during my career still haunt me; we’ll see which ones resurface in this Monday Note’s Comments section…

The chain of command plays a key role in this collective failure: standards are set at the top. I know a couple of editors who encourage their reporters not to bother collecting the other side’s view of the facts, as contradiction would impair the “mission”.
French editors have issued stupid rules such as “journalism stops at the bedroom door”; read: beyond it, it’s just muckraking. Sure thing. Except it encouraged the press to turn a blind eye to François Mitterrand’s morganatic family, living in an opulent government-owned building and protected by a squad of dedicated gendarmes with their own rules. Or, until recently, French media chose to ignore that Dominique Strauss-Kahn was more a predator than a seducer. (Ever wondered why DSK never went after female foreign correspondents? He knew they’d have reported any misbehavior; in France, by contrast, a harassed woman reporter’s peers and superiors would ask her to shut up.) As for investigative reporting, it went down the drain a long time ago as police, magistrates and lawyers became extremely proficient at manipulating complacent reporters.

In 2009, Francois Dufour, the publisher and editor-in-chief of a successful set of publications for young readers (Mon Quotidien, see story in the New York Times), wrote a funny book titled Are French Journalists Bad? (Les journalistes français sont-ils mauvais?) He didn’t answer the question directly, but the facts he presented were compelling.
French journalists are not genetically worse than others; it’s their culture: they are simply poorly trained and managed.
That year, I found myself involved in a debate with Dufour, along with other journalists who had joined the cyber-zealot crowd. There, I got my first exposure to the “permanent correction” concept and to the “publish first, check later and correct” (PFCLC) notion. Dufour and I took the same side, saying the ability to correct a story should not be a license for a kind of permanent approximation. After all, all-news media have been around since the eighties; they always had the ability to permanently correct stories, but – even though they were far from perfect – they refrained from abusing the PFCLC thing. (I don’t recall seeing a 7:00am news item airing rumors or facts unverified – at least to the best of the reporters’ ability – and issuing a correction an hour later.)

The debate about the management of facts at “digital speed” is spurred by two important factors: the Distribution of responsibilities and the Merchant relationship.

1 / Along with social media comes the notion of distributed responsibility. As everyone reports what’s happening, no one carries full responsibility for it. In the event of breaking or developing news, when hundreds of people congregate around a Twitter hashtag, they don’t have – by definition – the safety net of someone whose role is to decide whether or not to publish (by asking basic questions, for instance). When everyone is empowered to feed the echo chamber (sometimes under a pseudonym), no one is responsible.

2 / The absence of a merchant relationship also plays a significant role in the dilution of responsibility. In the digital cauldron, free is too often associated with a permission to be sloppy. A compulsive tweeter or blogger, propagating whatever s/he is able to grab, without any commercial relationship with readers, will feel no obligation whatsoever toward quality. Being first becomes the main goal.
It is exactly the opposite for a newspaper, an online news organization, a TV or a radio network. Such organizations will (at least in theory) feel obligated to live up to the trust that people are paying for – directly in the case of a paid-for service, or indirectly through advertising.

In the end, this is a matter of accountability. Having an entity, embodied by a group of people (an identifiable set of writers or editors), accountable for what is published or aired is the best guarantee of acceptable standards. In the best cases, this accountability will apply to direct reporting. Or it will play a key role in curating, in assessing the validity of third-party content coming from places unreachable by professionals.
One last thing, again, for the record. I was among the millions of people glued to live-blogging or Twitter feeds during major news events such as the Fukushima disaster, the Arab revolutions or the (less important) DSK affair. Therefore I’m NOT advocating some kind of regulation of the digital flow. For society, I’m still convinced its advantages far outweigh its drawbacks.

frederic.filloux@mondaynote.com

Losing value in the “Process”

Digital media zealots are confused: they mistake news activity for the health of the news business. Unfortunately, the two are not correlated. What they promote as a new kind of journalism carries almost no economic value. As great as they are from a user standpoint, live blogging/tweeting, crowdsourcing and hosting “expert” blogs bring very little money – if any – to the news organizations that operate them. Advertising-wise, and on a per-page basis, these services yield only a fraction of what premium content fetches. In some markets, a blog page will carry a CPM (cost per thousand page views) of one, while premium content will get 10 or 15 (euros or dollars). In net terms, the value can even be negative, as much of this content consumes manpower to manage, moderate, curate or edit.

More realistically, this content also carries some indirect but worthy value: in a powerful way, it connects the brand to the user. Therefore, I still believe news organizations should do more, not less, of such coverage. But we should not blind ourselves: the economic value isn’t there. It lies in the genuine and unique added value of original journalism deployed by organizations of varying size and scope, ranging from traditional media painfully switching to the new world, to pure online players — all abiding by proven standards.

What’s behind the word standard is another area of disagreement with Jeff Jarvis, as he opposes the notion of standards to what he calls “process”, or “journalism in beta” (see his interesting post Product v. process journalism: The myth of perfection v. beta culture). Personally, I’d rather stick to the quest for perfection than embrace the celebration of the “process”. The former is inherently more difficult to reach, more prone to the occasional ridicule (cf. the often-quoted list of mishaps by large newspapers). The latter amounts to hiding behind the comfortable “We say this, but we are not sure; don’t worry, we’ll correct it over time”.

To some extent, such a position condones mediocrity. It’s one thing to acknowledge that live reporting or covering developing stories bears the risk of factual errors. It is another to defend inaccuracy as a journalistic genre, as a French site did (until recently) by labeling its content with tags like “Verified”, “Readers’ info”, etc.

Approximation must remain accidental; it should not be advocated as normal journalistic practice.

In the digital world, the rise of the guesstimate is also a byproduct of a structure in which the professional reporter finds himself competing with the compulsive blogger or twitterer. Sometimes, the former will feel direct pressure from the latter (“Hey, Twitter is boiling with XY, could you quickly do something about it? — Not yet, I’m unable to verify… — Look pal, we need to do something, right?”). Admittedly, such competition can be a good thing: we can’t overstate how much the irruption of the reader has benefited and stimulated the journalistic crowd.

Unfortunately, the craze for instant “churnalism” tends to accommodate all the trade’s deviances. Today, J-Schools consider following market demands and teaching the use of Twitter or live-blogging at the expense of learning more complex types of journalism. Twenty years ago, we were still hoping the craft of narrative writing could be taught in newsrooms populated with great editors, but this is no longer the case. Now, many of the thirty- and forty-somethings who plunged into the live digital frenzy have already become unable to produce long-form journalism. And the obsessive productivism of digital serfdom won’t make things better (as an illustration, see this tale of a burned-out AOL writer in the Faster Times).

The business model will play an important role in solving this problem. Online organizations will soon realize there is little money to be made in “process-journalism”. But, as they find it is a formidable vector to drive traffic and to promote in-depth reporting, they will see it deserves careful strategizing.

Take Twitter. Its extraordinary growth makes it one of the most potent news referral engines. Two weeks ago, at the D9 conference, Twitter CEO Dick Costolo (video here) released a stunning statistic: it took three years to send the first billion tweets; today, one billion tweets are sent every six days.

No wonder many high-profile journalists or writers enjoy Twitter audiences larger than many news organizations’, or have become brands of their own, largely thanks to Twitter. The two-time Pulitzer Prize winner and NY Times columnist Nicholas Kristof has 1.1m followers, that is, one third of the followers of the New York Times’ official Twitter account. And Nobel Prize economist Paul Krugman, who also writes for the New York Times, has more than 610,000 followers. Not bad for specialized writing.

In some cases, the journalist will have a larger Twitter audience than the section where he/she writes: again at the NY Times, business reporter Andrew Ross Sorkin has 20 times more followers (370,000) than Dealbook, the sub-site he edits. According to its publisher Arthur Sulzberger, a NY Times story is tweeted every four seconds, and all Times Twitter accounts combined have four times more followers than any other paper in America. Similarly, tech writer Kara Swisher has 50 times more Twitter followers (757,000) than her employer, the WSJ tech site AllThingsD.

There are several ways to read this. One can marvel at the power of personal branding that thrives to the mother ship’s benefit. Then, on the bean-counter floor, someone will object that this stream of tweets is an unmonetized waste of time. Others, at the traffic-analytics desk, will retort that Twitter’s incoming traffic represents a sizable part of the audience, and can therefore be measured in hard currency. Well… your pick.

frederic.filloux@mondaynote.com


Jazz Is not a Byproduct of Rap Music

Defining the article as a “luxury or a byproduct”, as Jeff Jarvis did last month, is like suggesting jazz is secondary to rap music, or saying literature is a deluxe version of slam poetry. Reading Jarvis’ Buzz Machine blog is always interesting, often entertaining and more than occasionally grating. His May 28th blog post titled The article as luxury or byproduct reverberated across the media sphere – as provocative pieces are meant to, regardless of the argument’s actual connection with facts. Quite frankly, I didn’t pay attention to Jarvis’ latest taunt until the issue was raised at a conference I was invited to.

Let’s take a closer look – in a gracious and constructive manner.

What Jarvis said:

  • Tweeting and retweeting events as they unfold is a far superior way of reporting than painstakingly gathering the facts and going through a tedious writing and editing process.
  • Background can be done easily with links.
  • The article is: “An extra service to readers. A luxury, perhaps”.
  • “An article can be a byproduct of the process”.

In fairness to the City University of New York journalism professor, he stopped short of saying that articles are useless or dead (we can breathe a sigh of relief).

To support his position, Jarvis mentions Brian Stelter’s coverage of the Joplin tornado: the abundant stream-of-consciousness tweets provided raw material for good reporting. He also refers to the Arab Spring, where legions of witnesses fed the social cauldron with an endless current of instant accounts, often supplementing the work of journalists.

Let’s get this straight: I’m not going to join the collective glorification of approximate journalism. Like Jeff Jarvis (but on a smaller scale), I teach journalism. In doing so, I’m careful to remind aspiring reporters that live blogging or compulsive tweeting is not the essence of journalism, merely a tool – sometimes an incredibly efficient one – created by modern internet technology.

The article actually is the essence of journalism. And by no means a “byproduct of the process”.

Two and a half years ago, the Airbus landing in the Hudson became the poster child for crowd-powered breaking news. Then, the only true visual document was a cell phone picture taken by a ferry passenger. Today, the same event would be live-tweeted by a dozen witnesses using all the digital nomad firepower you can think of, from hi-res pics to HD video. And, by the time the genuine reporters showed up, all the relevant material would have been broadcast to the entire world.
Then, if we follow the Jarvis Doctrine, any additional reporting – let alone narrative reconstruction – would become extraneous or useless. (OK, I’m slightly over the top here).

Still, this “extraneous or useless” byproduct is precisely when and where the real craft enters the media stage. For me, William Langewiesche’s 11,000-word article in Vanity Fair became one of the most compelling stories ever written about this spectacular event.

Similarly, tweets about the Arab revolutions are great, but I’m still waiting for an in-depth profile of Mohamed Bouazizi, the man who set himself on fire, triggering an unprecedented wave of civil unrest across the region. Likewise, no social media flow can explain why Western diplomacy is so indulgent toward Syrian president Bashar al-Assad.

“Articles are no longer necessary for every event”, states Jarvis. As a matter of fact, I think exactly the opposite. Articles are more necessary than ever to understand and correct the excesses and mistakes resulting from an ever-expanding flurry of instant coverage. The substitution of tweets and the like for the article can only be done at the margin. Daily newspapers are increasingly unable to deal with breaking news or developing stories. Publishers’ heads remain deeply buried in the sand; they don’t see their costly publications scream their irrelevancy every morning when hitting the streets. They still haven’t come to terms with the need for bold moves, such as truly separating what belongs to digital media from what works best on paper. (Practically, this means transforming daily newspapers into biweeklies offering strong value-added reporting and perspectives, and using electronic media for the rest.)

My biggest disagreement with Jarvis lies in his lack of appreciation for a story’s background. Don’t bother with the context, he said, just link to it:

In a do-what-you-do-best-and-link-to-the-rest ecosystem, if someone else has written a good article (or background wiki), isn’t it often more efficient to link than to write? Isn’t it more valuable to add reporting, filling in missing facts or correcting mistakes or adding perspectives, than to rewrite what someone else has already written?

Come on, Jeff. You are way too smart to seriously believe what you’re saying. Or maybe you need clicks on the Buzz Machine to cash in on your AdSense… You can’t ignore that good journalistic coverage cannot exist without serious background. Are you suggesting background work ought to be subcontracted to third-party providers? On what criteria? What about the (outdated, I know) notions of accuracy and fact-checking? Is this your vision of modern journalism?

Actually, Jarvis’ piece doesn’t make any reference to the notion of journalistic sources. Weirdly enough, the most essential part of the reporting process – finding sources, determining who is reliable and who is not, who is genuine and who is manipulative – is completely absent from his pronouncements (not from his teaching, I hope).

The problem is not Jarvis’ views of journalism. He’s a talented provocateur who sometimes smokes his own exhaust. But punditry isn’t reporting or analysis. Still, his talks, books, multiple appearances and knack for self-promotion are quite influential with many young journalists. They shouldn’t be misled. The fact that news organizations tend to spend less and less on original reporting and expertise doesn’t mean those assets ought to be declared unimportant. Nor does the fact that a growing proportion of journalists are unable to produce high-value stories mean the genre is no longer needed. On these matters, Jarvis is reversing cause and effect.

frederic.filloux@mondaynote.com

Analyzing the metered model

The metered model deserves a closer look. One of the dirtiest little secrets of the online media business is the actual number of truly loyal readers — as opposed to fly-bys. No one really wants to know (let alone let anyone else know). Using a broad brush, about half of the audience is composed of casual users dropping by less than 3 times a month, or sent by search engines; 25% come more than 10 times a month. Over the years, as audience segmentation increased, media buyers (and publishers) settled on the simplistic counting of Unique Visitors (UVs) as the metric of choice. In the meantime, all manner of Search Engine Optimization (SEO) and Search Engine Marketing (SEM) outfits have further elevated the collection of UVs into the primary goal for online publishers. Along with that practice came cheating. In order to inflate their UV numbers, many large news sites now rely on third-party services, such as games, that have nothing to do with their core business.

This distortion contributed to the erosion of advertising prices. Media buyers might be cynical, but they are not stupid. They know that a growing percentage of audiences is composed of accidental visitors with no brand loyalty whatsoever, who offer no attractive demographics. Combined with the “unlimited supply” factor inherent to the internet business, the result is a downward spiral for ad prices. These are important factors to keep in mind when considering paid-for systems.
News organizations have implemented such systems in different gradations. At the far end of the spectrum, we have the Times of London: no access to the site without first paying. That is the riskiest option. The site ends up losing 90% of its audience (and the related advertising revenue) but hopes to offset the loss by gathering enough online subscribers. Without the promotional booster of free content, this is a challenge – to say the least.
Others choose to give away some of the site for free and put the most valuable content — sometimes the digital version of the print edition — behind a paywall. This doesn’t always make economic sense, as many readers are happy enough with the free part. Editorially speaking, this leads to the creation of two categories: cheap fodder available for free (often created by junior staffers), and more “noble” content produced by the most senior members of the newsroom, who also feed the print version. This works fine for a brand associated with significant added value, a specialized one (such as business news), or one that dominates its own market. The most successful paywall implementation has been the Wall Street Journal’s: it now has more than 1m paid subscribers, but it took 10 years to get there.

The third option involves a metered system. The principle is simple: once you’ve seen a certain number of stories in a given period of time, you need to become a paid subscriber to keep viewing the site. Some newspapers have been quite successful at deploying such a metered system.
For example, the Financial Times has set the threshold at 10 stories per month before hitting the paywall, after which the reader is asked to pay between €4.99 and €7.49 (about $7.30 and $11) per month, depending on the package deal. A high price for really premium content. So far, FT.com has 3.4m registered users, of which 224,000 have been converted to paid-for content (+8% for Q1 2011). This translates into €20m to €25m of extra revenue from subscribers alone (the service was launched in October 2007). Currently, digital revenue (both ads and subscriptions) accounts for 30% of the FT’s revenue; according to FT execs, it is expected to reach 50% in 2013.

For the meter, finding the right setting is far from trivial. The trick is to decide how many free stories will be allowed before hitting the paywall, and how much to charge thereafter. In New York, three weeks ago, I spoke with Gordon Crovitz. With Steven Brill, Gordon co-founded Press+, which creates bespoke metered systems for online media. Press+ provides a complete set of e-commerce tools for publishers, from the access mechanism to the transaction system. It works with passes (daily, weekly), subscription plans (monthly or annual), topical packages, bundles and ancillary products.
Determining the right formula is usually done through A/B testing. Crovitz and Brill explain: the publisher will test two or three levels of free access (5, 10, 15 stories per month) and the same number of prices ($5 to $10 or maybe $15 a month). A few months of testing will determine the right formula. Typical ingredients are: the type of content, the surrounding competition and possible alternatives for customers, and the publisher’s willingness to bundle digital and print products. Metering can also be attractive for out-of-market audiences: an Australian newspaper will be free for its domestic audience but will charge overseas readers consuming more than 10 stories a month.
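
A minimal Python sketch of such an A/B grid follows. The conversion rates below are placeholders standing in for what a real test would measure over a few months; Press+’s actual methodology is of course more involved.

```python
from itertools import product

# Hypothetical A/B grid: levels of free access crossed with monthly prices.
free_levels = [5, 10, 15]   # free stories per month before the meter trips
prices = [5, 10, 15]        # monthly price in dollars

# Conversion rates for each cell would be *measured* during the test period;
# the numbers below are invented placeholders, not real data.
measured_conversion = {
    (5, 5): 0.030, (5, 10): 0.018, (5, 15): 0.011,
    (10, 5): 0.026, (10, 10): 0.016, (10, 15): 0.010,
    (15, 5): 0.022, (15, 10): 0.014, (15, 15): 0.008,
}

def yearly_revenue_per_1000_visitors(cell):
    """Subscription revenue a cell would yield per 1,000 metered visitors."""
    free, price = cell
    return 1000 * measured_conversion[cell] * price * 12

best = max(product(free_levels, prices), key=yearly_revenue_per_1000_visitors)
print(best, round(yearly_revenue_per_1000_visitors(best)))
# -> (5, 10) 2160  (with these made-up rates, 5 free stories at $10/month wins)
```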

Another factor is the site’s advertising structure. The amount of inventory sold to advertisers varies widely. In the US market, the “sell-through” ratio is about 60%, but it can go as low as 30% in some markets. This means a site can sustain some loss in page views from implementing the metered system without losing ad revenue: an online publication with a sell-through rate of 55% can absorb a 45% decrease in page views before eroding its ad revenue. According to Press+, traffic losses from implementing a meter are modest, ranging from 0% to 20% as counted in page views, and 0% to 7% in UVs.
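
The sell-through logic reduces to a one-liner, under the simplifying assumption that unsold inventory is spread uniformly across page views:

```python
def pageview_headroom(sell_through):
    """Share of page views a site can lose before paid ad impressions suffer,
    assuming (simplistically) uniformly distributed unsold inventory."""
    return 1.0 - sell_through

for rate in (0.30, 0.55, 0.60):
    print(f"sell-through {rate:.0%} -> up to {pageview_headroom(rate):.0%} "
          "of page views can be lost")
```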

Let’s try a back-of-the-envelope calculation. A site gets 5m UVs and 100m page views per month; its yearly ARPU (Average Revenue Per User) from advertising is $3. This results in yearly revenue of $15m. Now suppose only 20% of its audience reads more than 15 stories a month, and one out of ten such readers is willing to pay $10 a month. The additional revenue will be: 5m UVs x 20% hitting the paywall x 10% willing to pay $100/year (discount included) = $10m in additional income — without depleting advertising revenue. Actually, experience shows advertisers now pay roughly 30% more for readers reached behind a paywall. All this before the 20% cut taken by Press+.
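
The same back-of-the-envelope calculation, spelled out in Python so the assumptions stay explicit:

```python
# Hypothetical site from the scenario above.
uv_per_month = 5_000_000      # unique visitors
ad_arpu_per_year = 3.0        # advertising dollars per UV per year

share_hitting_meter = 0.20    # read more than 15 stories a month
share_converting = 0.10       # of those, willing to pay
yearly_sub_price = 100.0      # $10/month, discount included

ad_revenue = uv_per_month * ad_arpu_per_year
sub_revenue = (uv_per_month * share_hitting_meter
               * share_converting * yearly_sub_price)

print(f"ads: ${ad_revenue:,.0f}/yr, subscriptions: ${sub_revenue:,.0f}/yr")
# ads: $15,000,000/yr, subscriptions: $10,000,000/yr
# (subscription figure before Press+'s 20% cut)
```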

Naturally, as the saying goes, YMMV (Your Mileage May Vary): actual results will depend on many factors, one of them being how the pricing system is set (the simpler, the better). Again, a rigorous test of all hypotheses is critical. Metered systems are the opposite of one-size-fits-all.

—frederic.filloux@mondaynote.com

Trifling Twitter

When a member of the old guard barges into their cozy backyard, the Digerati jump up and strike indignant poses. And when the intruder’s point is missed, its author gets crucified. This is what happened to Bill Keller, the New York Times’ executive editor, when he dared to write a column critical of Twitter. In short, Keller’s well-documented piece, titled “The Twitter Trap”, contends the medium’s shallowness encourages superficial exchanges to the detriment of in-depth discussions. When, as a minor provocation, he tweeted “#TwitterMakesYouStupid. Discuss”, someone keyboarded back “Depends who you follow” — and should have added: “…Depends also on how you follow people”.

I will stop short of joining the crowd of zealous Bill Keller critics. But I’m not fond of the piece, either: on several counts, I consider it misguided.

1 / Twitter is in fact small, and therefore cognitively inoffensive. Officially, the micro-blogging network (we ought to call it a medium) born five years ago has 200 million users. This supposedly huge user base allowed it to raise about $360m in capital, including a last round of $200m led by Kleiner Perkins, the Valley venture capital grandee, at a $3.7bn valuation. Stunning indeed.
Now, let’s get back to Earth. Over the last 18 months, traffic has stayed flat. Time spent is eroding: 14 mn 6 sec per user in March 2010 vs. 12 mn 37 sec in March 2011. Contrast this to more than 6 hours spent on Facebook. (According to a recent cover story in Fortune, Mark Zuckerberg is said to pay less and less attention to Twitter’s evolution). Despite occasional news cycle-triggered traffic outbursts (the Spring unrest in Arab countries is a good example), such spikes don’t really translate into audience gains. As for the number of accounts, half are idle. And, as usual on the internet, the usage is extremely concentrated: 10% of all users account for 90% of the twits.
In that latter figure lies Twitter’s peculiar character: as they get better at using the medium, its most powerful users’ voices become louder than ever.

2 / Twitter is controlled by the user. The most notable fact in Twitter’s evolution is the increasing sophistication of its users. The top ten percent have become good at finding the right “relevancy niche”, i.e. a sector in which they’ll be able to rise above the crowd. Many do so by mastering all the available tools: they look at their retweet data, monitor who retweets them, and watch their rankings.
Symmetrically, the passive audience (reading more than actually tweeting) has become adept at continuously refining its feed selection. Prattlers prone to commenting on Saturday night sports games tend to be dropped in favor of those who stick to their expertise. Trimming subscriptions has become mandatory on Twitter (as it is on Facebook).

3 / Twitter’s pervasiveness has nothing in common with what we observe at Facebook or Google. As a business, Twitter’s trajectory looks more like Yahoo’s (unfortunately at a much earlier stage) than Google’s or Facebook’s. Zuckerberg’s social network enjoys unabated growth and much better monetization: it extracts about $3 in revenue per user (and turns a profit doing so) versus $0.25 for Twitter.
This gap allows Facebook to continuously roll out new features. As a result, its already faithful users end up even more solidly anchored, increasing their time spent on the service. Twitter, on the other hand, has yet to show a sustainable business model, and its small core of heavy users remains difficult to monetize. This results in a hard-to-break vicious circle: no cash flow => no investment capacity => no way to fund the costly features a theoretically large user base requires. Twitter’s inability to introduce new sticky features is likely to further concentrate its twitterer base, while the broader circle of less involved users will tend to look elsewhere for excitement.
It will be difficult for Twitter’s management and investors to find their way out of this decaying orbit.

Already, Twitter’s limitations are visible in the way users consume online news. According to a study conducted by the Pew Research Center’s Project for Excellence in Journalism and based on Nielsen data (PDF here), Twitter is an insignificant news referrer (1%) compared to Facebook (5%) or Google (30%). However, the use of Twitter deserves to be encouraged in the newsroom (and taught in journalism schools), since:
a) it is an effective promotional tool for value-added stories;
b) it allows reporters to actually pinpoint their most loyal audience – and establish a relationship with it;
c) it doesn’t kill value like RSS feeds do (see a previous Monday Note on that matter).

Twitter will increasingly be a one-to-a-few medium, with a small base of hard-core users, increasingly selective about the content they broadcast and who they follow. In passing, this trend will further reinforce the ongoing concentration of news-site traffic, in which about 5% of users account for 75% of page views. (As an example, the Pew Research study indicates that 85% of USAToday.com users visit the site less than 3 times a month. And for the top 25 American news sites, “power users”, i.e. those visiting a site more than 10 times a month, account for only… 7% of the total.)

Bill Keller’s handwringing about Twitter largely misses the point. Twitter remains largely controlled by its users, on both the emitting and receiving sides. That is not the case for the search business, which relies on sophisticated and secret algorithms to serve content supposedly tailored for us, without our knowledge of this invisible editing (see this enlightening TED video by Eli Pariser on what he calls the “Filter Bubble”). What Bill Keller ought to worry about is the algorithm-powered news stream, designed to maximize its audience and, with it, the advertising revenue. Therein lies the real danger for the brains of our children and their ability to learn how to judge for themselves. In comparison to the AOL Way (I’m referring to the stats-based news master plan exposed by Business Insider), the use of Twitter is a trifling matter.

frederic.filloux@mondaynote.com

Media & tech: Reconcilable Differences

The media and tech worlds must work together. There is not a shred of doubt about it. The former have lost the dual battle for growth and economic performance; the latter attract eyeballs and endless funding. Still, when combined, their relevance to society can be greater than the sum of their respective parts.

Last week in New York, I was asked to share my views on the matter, before an audience of 350 media executives gathered for the INMA World Congress. Most were looking for ways to effectively partner with digital companies. As I worked on my speech, I asked my tech world contacts how they see us, the media crowd. Here are some quotes, from people who requested not to be identified.

“You guys are geared to compete rather than collaborate. You’re not getting that collaboration is the new name of the game.” “Even among yourselves, you are unable to cooperate on key industrial issues, shooting yourselves in the foot as a result.” “Your internal organizations are still plagued by a culture of silos. The winners will be the ones who break silos.”

Tech executives also underline that they see media companies as co-managed with unions, the consequence being a wage system that discourages rewarding valuable individuals. Media companies are also viewed as having a tech-averse culture. “Media don’t understand that their business has become engineering-intensive. Their investment in technology is grossly insufficient.”

Symmetrically, I collected adjectives summing up media people’s perception of the tech world. “Arrogant, condescending”: true, old media people always have the feeling of being looked down upon by the guys in chinos. “Nerdy, left-brained”: well, it goes along with the flip-flops and the hoodie… “Wealthy” (I’ll come to that later). “Alien to the notion of value for content”: also true, and that might be the most difficult obstacle to a reconciliation.

More than anything else, techies view the content news outlets painstakingly put together as an annoyance. They don’t have a clue about the complex, costly and often dangerous process of collecting original information, nor are they interested in getting one. “Euro-ignorant”: let’s just recall what the geographic breakdown looks like in large tech corporations. The often-used EMEA acronym lumps together Europe, the Middle East, and Africa, i.e. everything from Germany to Burundi. Practically, when landing in Silicon Valley from Paris, you’re often made to feel you’re dropping in from the Third World.

“Contract nuts”: when a 30-page contract lands in your inbox from California, written in knotty legal English (even for a France-based deal) and stipulating that the relevant jurisdiction will be the Santa Clara County Superior Court, you can’t help feeling a bit bewildered and put off. In dealing with tech companies, the amount of money spent on legal fees suddenly appears out of proportion. We have no choice but to get used to it.

The only criticism evenly spread on both sides concerns bureaucracy: media people point at intricate technostructures staffed with legions of people working on the same subject; tech people mock news media that need six weeks to sign an innocuous non-disclosure agreement covering a routine project.

Let’s pause for a moment on the financial issue. Three key factors differentiate the tech world from the media world.

1 / Size. The combined revenue of the US newspaper and magazine industry, all sources included, is about $60bn. This sector is facing the following: Apple (most likely $100bn in revenue this year), Google ($29bn last year), Microsoft ($62bn), or Yahoo ($6bn). As for stock valuations over the last 10 years, consider the graphic below. It shows the performance of three mostly-newspaper groups with market values above $1bn: Gannett Co. (market cap: $3.5bn), The Washington Post Co. ($3.33bn), and The New York Times Co. ($1.13bn). Their stock prices went like this:

Now, on the same 10-year scale, let’s superimpose Apple, Google, and Microsoft; the scale flattens quite a bit:

You get the point. The media industry faces dramatic value depletion.

2 / Access to cash. Technology companies have access to a huge pool of money. After years of disappointing results, the venture capital industry is red hot again. In a previous Monday Note, I mentioned Flipboard (a great app for the iPad: 32 people, no revenue) and its current valuation of $200m, roughly the equivalent of the McClatchy Company with its 20 newspapers, 7,700 employees, and a 24% EBITDA margin on $1.4bn in revenue.

3 / How to spend it. In itself, cash allocation illustrates the cultural gap. In a tech company, once a project is approved, money will be injected until the outcome becomes clear: success or failure. When I asked an exec at a large tech group what the budget was for the project we were discussing, he answered: “Look, honestly, I’ve never seen any spreadsheets on this. This project has been decided at the highest level of the corporation. We’ll pour money into it until it works or closes.”

By contrast, in a media company, investment will be kept at a bare minimum. Every commitment is set as low as possible: temporary staffing, outsourced work; everything is in penny-pinching mode. Not exactly the “No Guts, No Glory” way…

Nevertheless, the more I’m involved in digital media projects, the more I’m convinced that both worlds need a rapprochement. Media companies have a lot to learn from tech companies: the way they conduct projects, their relentless drive for innovation, their bold imagination, coupled with a systematic and agile “Test & Learn” approach… For the news industry, drawing inspiration from such a culture is a matter of survival.

As for the tech ventures, they must admit they need the media industry more than they like to think. Flipboard, Google Reader, Bing: all aggregators would lose a great deal of their appeal if they no longer had original content to aggregate or organize.

Over the past fifteen years, we kept hearing stories telling us Google or Yahoo could swallow any old media company in a single gulp. It didn’t happen. Nor did these deep-pocketed corporations find within themselves the vision and skills to create a decent news-gathering operation from scratch. The reason is both simple and complicated: it’s a métier of its own, one that thousands of people have been practicing and evolving for decades.

People like me, working on both sides of the fence, strongly believe in the virtues of cross-pollination. On the media side, it might have to start by finding out what we expect from the tech world, whether they are aggregators, distributors, or search engines. Then, we’ll need to change the way we innovate. In a nutshell: screw the bean-counters who strangle decisive investments while being unable to stop the hemorrhage in their “legacy” businesses; assign small teams to a small number of really (as opposed to cosmetically) crucial projects; do more prototypes and fewer spreadsheets. Be bold and fearless. As the techies like to say: go big or go home!

Failure must be an option. Paralysis is not.

frederic.filloux@mondaynote.com