
NYT vs BuzzFeed: Valuation Discrepancies – Part II

Disruptive Models, newspapers, online publishing – February 22, 2015


by Frédéric Filloux

My last column about new valuations in digital media triggered an abundance of comments. Here are my responses and additions to the discussion. 

The most revealing part of the arguments used by those who tweeted (800 of them), commented or emailed me is how many wished things to remain simple and segregated: legacy vs. native media, content producers vs. service providers, ancestral performance indicators and, of course, the self-granted permission of a certain category of people to decide what is worthy. Too bad for Cartesian minds and simplifiers: the digital world is blurring known boundaries, mixing company purposes and overhauling the competitive landscape.

Let’s start with one point of contention:

Why throw LinkedIn, Facebook and old companies such as the NYTimes or the Guardian into the equation? That’s the old apples and oranges point some commenters have real trouble seeing past. Here is why, precisely, the mix is relevant.

Last Tuesday February 17, LinkedIn announced it had hired a Fortune reporter as its business editor. Caroline Fairchild is the archetypal modern, young journalist: reporter, blogger with a cause (The Broadsheet is her newsletter on powerful women), mastering all necessary tools (video editing, SEO tactics, partnerships) as she went from Bloomberg to the HuffPo, among other gigs. Here is what she says about her new job:

 LinkedIn’s been around for 11 years and today publishes more than 50,000 posts a week (that’s roughly 10 NYTs per day) — but the publishing platform is still an infant, debuting widely less than a year ago. The rules and roles are being defined and redefined daily; experimenting is a constant.

Here we are: LinkedIn intends to morph into a major business news provider and a frontal competitor to established business media. Already, scores of guest columnists publish on a regular basis on LinkedIn, enjoying audiences many times larger than their deluxe appearances in legacy media. (For the record, I was invited to blend the Monday Note into LinkedIn, but the conditions didn’t quite make sense to us. Jean-Louis Gassée and I preferred preserving our independent franchise.)

For a $2.2bn revenue company such as LinkedIn, creating a newsroom aimed at the business community definitely makes sense and I simply wonder why it took them so long to go full throttle in that direction — not only with an avalanche of posts but with a more selective, quality-oriented approach. If it shows an ability to properly display value-added editorial, LinkedIn could be poised to become a potent publishing platform eventually competing with The Economist, Quartz, or Les Echos. All of it with a huge data analytics staff led by world-class engineers.

That’s why I think the comparison with established media makes sense.

As for Facebook, the argument is even more straightforward. Last October, I published a column titled How Facebook and Google Now Dominate Media Distribution; it exposed our growing dependence on social media, and the need to look more closely at the virtues of direct access as a generator of quality traffic. (A visit coming from social generates less than one page view, versus 4 to 6 page views for direct access.) Facebook has become a dominant channel for accessing the news. Take a look at this table from the Reuters Institute Digital News Report (PDF here).


There’s no doubt that these figures are now outdated, as media’s quest to tap into the social reservoir has never been greater. (In passing, note the small delta between News Lovers and Casual Users.) It varies widely from one country to another, but about 40% of the age segment below 35 relies on social as its primary source for news… and when we say “social”, we mostly mean Facebook. Should we really ignore this behemoth when it comes to assessing news economics? I don’t think so.

More than ever, Facebook deserves close monitoring. No one is eager to criticize their dope dealer, but Mark Zuckerberg’s construction is probably the most pernicious and the most unpredictable distributor the news industry has ever faced.

For instance, even if you picked a given outlet for your FB newsfeed, the algorithm will decide how much you’ll see of it, based on your past navigation and profile. And the numbers are terrible: as an example, only 16% of what a publisher pushes on Facebook actually reaches its users, and that’s not a bad number when compared to the rest of the industry.

And still, the media sector continues to increase its dependence on social. Consider the recent change to the home page of NowThis, a clever video provider specializing in rapid-fire news clips:


No more home page! Implementing a rather bold idea floated years ago by BuzzFeed’s editor Ben Smith, NowThis recently decided to get rid of traditional web access and, instead, to propagate its content only via Tumblr, Kik, YouTube, Facebook, Twitter, Instagram, Vine, and Snapchat. We can assume that this strategy is based on careful analytics (more on this in a future Monday Note).

Among other questions raised by Monday Note readers: Why focus solely on the New York Times, and why not include the Gannetts or McClatchys? Simply because, along with a few others such as The Guardian, the NYT is substantially more likely to become a predominantly digital brand than many others in the (old) league.

To be sure, as one reader rightly pointed out, recent history shows how printed media that chose to go full digital ended up losing on both vectors. Indeed, given the size of its print advertising revenue, the Times would be foolish to switch to 100% online — at least for now. However, the trend is there: a shrinking print readership, fewer points of sale, and consequently a higher cost of delivery… Giving up the idea of a daily newspaper (while preserving a revamped end-of-the-week offering) is just a matter of time — I’ll give it five years, not more. And the more decisive the shift, the better the results will be: Keep in mind that only 7 (seven!) full-time positions are assigned to the making of the Financial Times’ print edition; how many in the vast herd of money-losing, newspaper-obsessed companies?

Again, this is not a matter of advocating the disappearance of print; it is about market relevancy, such as addressing niches and the most solvent readerships. The narrower the better: if your target group is perfectly identified, affluent and geographically bound — e.g. the financial or administrative district of a big capital — a print product still makes sense. (And of course, some magazines will continue to thrive.)

Finally, when it comes to assessing valuations, the biggest divide lies between the static and the dynamic appreciation of the future. Wall Street analysts see prospects for the NYT Co. in a rather static manner: readership evolution, in volumes and structures, ability to reduce production expenditures, cost of goods — all of the above feeding the usual Discounted Cash Flow model and its derivatives… But they don’t consider drastic changes in the environment, nor signs of disruption.

Venture Capital people see the context in a much more dynamic, chaotic perspective. For instance: the unabated rise of the smartphone; massive shifts in consumer behaviors and time allocation; the impact of Moore’s or Metcalfe’s Laws (tech improvements and network effects); or a new breed of corporations such as the Full Stack Startup concept exposed by Andreessen Horowitz’s Chris Dixon (the man behind the BuzzFeed valuation):

Suppose you develop a new technology that is valuable to some industry. The old approach was to sell or license your technology to the existing companies in that industry. The new approach is to build a complete, end-to-end product or service that bypasses existing companies.
Prominent examples of this “full stack” approach include Tesla, Warby Parker, Uber, Harry’s, Nest, Buzzfeed, and Netflix.

All of it is far more enthralling than promising investors a new print section for 2016 or two more tabs on the website, all manned by a smaller but more productive staff.

One analysis looks at a continuously evolving environment, the other places bets on an uncertain, discontinuous future.

The problem for legacy media is their inability to propose disruptive or scalable perspectives. Wherever we turn — The NYT, The Guardian, Le Monde — we see only a sad narrative based on incremental gains and cost-cutting. No game changing perspective, no compelling storytelling, no conquering posture. Instead, in most cases, the scenario is one of quietly managing an inevitable decline.

By contrast, native digital players propose a much brighter (although riskier) future wrapped in high octane concepts, such as: Transportation as reliable as running water, everywhere, for everyone (Uber), or Organize the world’s information and make it universally accessible and useful (Google), or Redefining online advertising with social, content-driven publishing technology, [and providing] the most shareable breaking news, original reporting, entertainment, and video across the social web (BuzzFeed).

No wonder some are big money attractors while others aren’t.


The NYTimes could be worth $19bn instead of $2bn  

business models, newspapers, social networks – February 15, 2015


by Frédéric Filloux

Some legacy media assets are vastly underestimated. A few clues in four charts.   

Recent annual reports and estimates for the calendar year 2014 suggest interesting comparisons between the financial performance of media (either legacy or digital) and Internet giants.

In the charts below, I look at seven companies, each in a class by itself:

A few explanations are required.

For two companies, in order to make comparisons relevant, I broke down “digital revenues” as they appear in financial statements: $351m for the New York Times ($182m in digital advertising + $169m in digital subscriptions) and, for The Guardian, $106m (the equivalent of the £69.5m in the Guardian Media Group annual report, PDF here).

Audience numbers above come from ComScore (Dec 2014 report) for a common reference. We’ll note traffic data do vary when looking at other sources – which shows the urgent need for an industry-wide measurement standard.

The “Members” column seemed necessary because traffic as measured by monthly uniques differs from actual membership. Such a distinction doesn’t apply to news media (NYT, Guardian, BuzzFeed).

For valuations, stock data provide precise market cap figures, but I didn’t venture putting a number on the Guardian’s value. For BuzzFeed, the $850m figure is based on its latest round of investment. I selected BuzzFeed because it might be one of the most interesting properties to watch this year: It built a huge audience of 77m UVs (some say the number could be over 100m), mostly by milking endless stacks of listicles, with clever marketing and an abundance of native ads. And, at the same time, BuzzFeed is poaching a number of first-class editors and writers, including, recently, from the Guardian and ProPublica; it will be interesting to see how BuzzFeed uses this talent pool. (For the record: If founder Jonah Peretti and editor-in-chief Ben Smith pull this off, I will gladly revise my harsh opinion of BuzzFeed.)

The New York Times is an obvious choice: It belongs to the tiny guild of legacy media that did almost everything right for their conversion to digital. The $169m revenue coming from its 910,000 digital subscribers didn’t exist at all seven years ago, and digital advertising is now picking up thanks to a decisive shift to native formats. Amazingly enough, the New York Times sales team is said to now feature a one-to-one ratio between hardcore salespeople and creative people who engineer bespoke operations for advertisers. Altogether, last year’s $351m in digital revenue far surpasses newsroom costs (about $200m).

A “normal” board of directors would certainly ask management why it does not consider a drastic downsizing of newspaper operations and only keep the fat weekend edition. (I believe the Times will eventually go there.)

The Guardian also deserves to be in this group: It became a global and digital powerhouse that never yielded to the click-bait temptation. From its journalistic breadth and depth to the design of its web site and applications, it is the gold standard of the profession – but regrettably not for its financial performance; read Henry Mance’s piece in the FT.

Coming back to our analysis, Google unsurprisingly crushes all competitors when it comes to its financial performance relative to its audience (counted in monthly unique visitors):

Google monetizes its UVs almost five times better than its arch-rival Facebook, and 46 times better than The New York Times Digital. BuzzFeed generates a tiny $1.30 per unique visitor per year.
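
To make the arithmetic explicit, here is a minimal sketch of the revenue-per-UV computation. BuzzFeed’s inputs follow the figures cited in this piece (77m UVs and a presumed ~$100m in revenue, which yields the $1.30 quoted above); the NYT Digital unique-visitor count is an assumption added purely for illustration.

```python
# Revenue per unique visitor, the metric discussed above.
# BuzzFeed figures are backed out of the $1.30/UV quote; the
# NYT Digital UV count is an illustrative assumption.
companies = {
    # name: (annual revenue in $, monthly unique visitors)
    "BuzzFeed": (100_000_000, 77_000_000),
    "NYT Digital": (351_000_000, 57_000_000),  # UV count: assumption
}

for name, (revenue, uvs) in companies.items():
    print(f"{name}: ${revenue / uvs:.2f} per UV per year")
    # BuzzFeed: $1.30 per UV per year
```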

When measured in terms of membership — which doesn’t apply to the news media in our set — the gap is even greater between the search engine and the rest of the pack:


The valuation approach reveals an apparent break in financial logic. While being a giant in every aspect (revenue, profit, market share, R&D spending, staffing, etc.), Google appears strangely undervalued. When you divide its market capitalization by its actual revenue, the multiple is not even 6 times revenue. By comparison, BuzzFeed has a multiple of 8.5 times its presumed revenue (the multiple could fall below 6 if its audience remains the same and its projected revenue increases by 50% this year, as management suggests). Conversely, when using this market cap/revenue metric, the top three (Twitter, Facebook, and even LinkedIn) show strong signs of overvaluation:

Through this lens, if Wall Street could assign to The New York Times the ratio Silicon Valley grants BuzzFeed (8.5 instead of a paltry 1.4), the Times would be worth about $19bn instead of the current $2.2bn.
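
As a sanity check on the mechanic (not on the article’s exact inputs, which depend on the revenue bases discussed above), a hypothetical company with $1bn in revenue shows how much the multiple alone moves the valuation:

```python
# The market cap / revenue multiple at work, with hypothetical figures.
def implied_valuation(revenue: float, multiple: float) -> float:
    """Apply a peer's market-cap-to-revenue multiple to a revenue base."""
    return revenue * multiple

revenue = 1_000_000_000  # hypothetical $1bn in annual revenue
print(implied_valuation(revenue, 1.4))  # legacy-media multiple  -> $1.4bn
print(implied_valuation(revenue, 8.5))  # BuzzFeed-style multiple -> $8.5bn
```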

Again, there is no doubt that Wall Street would respond enthusiastically to a major shrinkage of NYTCo’s print operations; but regardless of the drag caused by the newspaper itself, the valuation gap is absurdly wide when considering that 75% of BuzzFeed traffic is actually controlled by Facebook, certainly not the most reliably unselfish partner.

As if the above wasn’t enough, a final look confirms the oddity of market valuations. Riding the unabated trust of its investors, BuzzFeed brings in three times less money per employee than The New York Times does (all sources of revenue included this time):

I leave it to the reader to decide whether this is a bubble that rewards hype and clever marketing, or if the NYT is an unsung investment opportunity.


From “Trust In News” to “News Profiling”

journalism – February 1, 2015


by Frédéric Filloux

For news organizations, the key challenge is to lift value-added editorial above Internet noise. Many see “signals” as a possible solution, one that could be supplemented by a derivative of ad profiling.   

Last year Richard Gingras and Sally Lehrman came up with the Trust Project (full text here, on Medium). Richard is a seasoned journalist and the head of News and Social at Google; Sally is a senior journalism scholar at the Markkula Center for Applied Ethics at Santa Clara University in California.

Their starting point is readers’ eroding confidence in media. Year after year, every survey confirms the trend. A recent one, released ten days ago at the Davos Economic Forum by the global PR firm Edelman, confirms the picture. For the first time, according to the 2014 version of Edelman’s Trust Barometer, public trust in search engines surpasses trust in media organizations (64% vs 62%). The gap is even wider for Millennials, who trust search engines at 72% vs 62% for old media.



And when it comes to segmenting sources by type — general information, breaking news, validation — search leaves traditional media even further in the dust.



No wonder that, during the terrorist attacks in Paris three weeks ago, many publishers saw more than 50% of their traffic coming from Google. This was greeted with a mixture of satisfaction (our stuff surfaces better in Google Search and News) and concern (a growing part of news media traffic is now in the hands of huge US-based gatekeepers).

Needless to say, this puts a lot of pressure on Google (much less so on Facebook, which is not that concerned with its growing role as a large news conduit). Hence the implicit mission given to Richard Gingras and others to build on this notion of trust.

His project is built around five elements with which to parse news content:

#1. A Mission and Ethics Statement. As described in the Trust Project:

One simple first step is a posted mission statement and ethics policy that convey the mission of a news organization and the tenets underlying its journalistic craft. Only 50% of the top ten US newspapers have ethics policies available on the web and only 30% of ten prominent digital sites have done so.

The gap between legacy and digital native news media is an interesting one. While the former have built their audience on the (highly debatable) notion of objective reporting and balanced points of view, digital natives come with a credibility deficit. Many of the latter are seen as too close to the industry they cover; some prominent ones did not even bother to conceal their ties to the venture capital ecosystem, others count among their backers visible tech industry figures. Others are built around clever click-bait mechanisms that are supplemented — marginally — by solid journalism. (I’ll let our readers put names on each kind.)
In short, a clear statement of what a media outlet is about and what its potential conflicts of interest are is a mandatory building block for trust.

#2. Expertise and Disclosure. Here is the main idea:

Far too often the journalist responsible for the work is not known to us. Just a byline. Yet expertise is an important element of trust. Where has their work appeared? How long have they worked with this outlet? Can audiences access their body of work? 

Nothing much to add. Each time I spot an unknown but worth-reading writer, my first reaction is to Google him or her to understand who I’m dealing with. Encapsulating background information in an accessible way (and standardized enough to be retrievable by a search engine) makes plain sense.

#3. Editing Disclosure, i.e. details on the whole vetting process a story has gone through before hitting the pixels. Fine, but it’s a legacy media approach. Stories by Benedict Evans, Horace Dediu, or Jeff Jarvis (see his view on the Trust Project), just to name a few respected analysts, are not likely to be reviewed by editors, but their views deserve to be surfaced as original content. Therefore, Editing Disclosure should not carry a large weight in the equation.

#4. Citation and Corrections. The idea is to have Wikipedia-like standards that give access to citations and references behind the author’s assertions. This is certainly an efficient way to prevent plagiarism, or even “unattributed inspiration”. The same goes for corrections and amplifications, as the digital medium encourages article versioning.

#5. Methodology. What’s behind a story, how many first-hand interviews, reporting made on location as opposed to the soft reprocessing of somebody else’s work. Let’s be honest, the vast majority of news shoveled on the internet won’t pass that test.

Google’s idea to implement all of the above is to create a set of standardized “signals” that will yield objective ways to extract quality stuff from the vast background noise on the Web. Not an easy task.

First, Google News already works that way. In a Monday Note based on Google News’ official patent filing (see: Google News: The Secret Sauce), I looked at the signals isolated by Google to improve its news algorithm. There are 13 of them, ranging from the size of the organization’s staff to the writing style. It certainly worked fine (otherwise, Google News wouldn’t be such a success). But it no longer is enough. Legacy media are now in a constant race to produce more in order to satisfy Google’s (News + Search) insatiable appetite for fresh fodder. In the meantime, news staffs keep shrinking and “digital serfs”, hired for their productivity rather than their journalistic acumen, have become legion. Also, criteria such as the size of a news staff no longer apply as much, because independent writers and analysts — such as those mentioned above — have become powerful and credible voices.
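
For readers who want to picture how such signals combine, here is a toy sketch of a weighted scoring function. The signal names and weights below are hypothetical; the patent lists 13 signals but does not disclose how they are weighted.

```python
# Toy quality score in the spirit of Google News' signals: a weighted
# sum of normalized (0..1) signals. Names and weights are hypothetical.
WEIGHTS = {
    "staff_size": 0.20,
    "writing_style": 0.30,
    "original_reporting": 0.35,
    "breadth_of_coverage": 0.15,
}

def quality_score(signals: dict) -> float:
    """Combine normalized signals into a single ranking score."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

print(quality_score({
    "staff_size": 0.9,
    "writing_style": 0.7,
    "original_reporting": 0.8,
    "breadth_of_coverage": 0.6,
}))  # -> 0.76
```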

In addition, any system aimed at promoting quality — and value — is prone to gaming, to cheating. The search algorithm has become a moving target for all the smart operators the industry has bred, forcing Google to make several thousand adjustments to its search formulae every year.

The News Profile and Semantic Footprint approach. While the list stated by the creators of The Trust Project is a great start, it has to be supplemented by other systems. Weirdly enough, profiling techniques used in digital advertising can be used as a blueprint.

Companies specialized in audience profiling are accumulating anonymous profiles in staggering numbers: to name just one, in Europe, Paris-based Weborama has collected 210m profiles (40% of the European internet population), each containing detailed demographics, consumer tastes for clothing, gadgets, furniture, transportation, navigation habits, etc. Such data are sold to advertisers who can then pinpoint who is in the process of acquiring a car, or looking for a specific travel destination. No one ever explicitly opted in to give such information, but we all did by allowing massive cookie injections in our browsers.

Then why not build a “News Profile”? It could have all the components of my news diet: the publications I subscribed or registered to, the media I visit on a frequent basis, the authors I searched for, the average length of my preferred stories, my propensity to read long, documented profiles of business people, the documentaries I watched on YouTube, the decks I downloaded from SlideShare… Why not add the books I ordered on Amazon and the people I follow on Twitter, etc. All of the above already exists inside my computer, in the form of hundreds, if not thousands, of cookies collected during my navigation.

It could work this way: I connect — this time knowingly — to a system able to reconcile my “News Profile” with the “Semantic Footprint” of publications, but also of authors (regardless of their affiliation, from the NYT’s John Markoff to A16Z’s Ben Horowitz), types of production, etc. Such profiling would be fed by the criteria described in The Trust Project and by Google News algorithm signals. Today, only Google is in a position to perform such a daunting task: It has done part of the job since the first beta of Google News in 2002, it collects thousands of sources, and it has a holistic view of the Internet. I personally have no problem with allowing Google to create my News Profile based on data… it already has on me.
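
To make the idea concrete, here is a minimal sketch of such a reconciliation, assuming both the News Profile and the Semantic Footprints can be reduced to weighted topic vectors matched by cosine similarity. The topics, weights, and the similarity measure are all assumptions; the column only posits that such a matching is feasible.

```python
# Matching a reader's "News Profile" to publications' "Semantic
# Footprints", both modeled as weighted topic vectors (an assumption).
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical profile built from subscriptions, cookies, searches...
news_profile = {"tech": 0.8, "business": 0.6, "longform": 0.7}

footprints = {
    "Publication A": {"tech": 0.9, "business": 0.4, "longform": 0.8},
    "Publication B": {"tech": 0.1, "business": 0.2, "longform": 0.1},
}

# Rank publications by affinity with the reader's profile.
for name, fp in sorted(footprints.items(),
                       key=lambda kv: -cosine(news_profile, kv[1])):
    print(name, round(cosine(news_profile, fp), 2))
```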

I can hear the choir of whiners from here. But, again, it could be done on a voluntary basis. And think about the benefits: A skimmed version of Google News, tailored to my preferences, that could include a dose of serendipity for good measure… Isn’t it better than a painstakingly assembled RSS feed that needs constant manual updates? To me it’s a no-brainer.


2015 Digital Media: A Call For a Big Business Model Cleanup 

Uncategorized – January 18, 2015


by Frédéric Filloux

Digital media are stuck with bad economics resulting in relentless deflation. It’s time to wake up and make 2015 the year of radical — and concerted — solutions.

Trends in digital advertising feel like an endless agony to me. To sum up: there is no sign of improvement on the performance side; a growing percentage of ads are sold in bulk; click-fraud and user rejection are on the rise, all resulting in ceaseless deflation. Call it the J-Curve of digital advertising, as it will get worse before it gets better (it must – and it will).

Here is a quick summary of issues and possible solutions.

The rise of ad blocking systems, the subject of a December 8, 2014 Monday Note. That column was our most viewed and shared ever, which reflects a growing concern for the matter. Last week, AdBlockPlus proudly announced a large-scale deployment solution: with a few clicks, system administrators can now install AdBlockPlus on an entire network of machines. This is yet another clue that the problem won’t go away.

There are basically three approaches to the issue.

The most obvious one is to use the court system against Eyeo GmbH, the company operating AdBlockPlus. After all, the Acceptable Ads agreement, in which publishers pay to pass unimpeded through ABP filters, is a form of blackmail. I don’t see how Eyeo will avoid collective action by publishers. Lawyers — especially in Europe — are loading their guns.

The second approach is to dissuade users from installing ABP on their browsers. It is up to browser makers (Google, Microsoft, Apple) to disable ABP’s extensions, but they don’t necessarily have much of an incentive to do so. Browser technology is about the quality of the user experience when surfing the web or executing transactions. Performance relies on sophisticated techniques such as developing the best “virtual machines” (for a glimpse of VM technology, this 2009 FT Magazine piece, The Genius Behind Google’s Browser, is a must-read). Therefore, if the advertising community, in its shortsighted greed, ends up saturating the internet with sloppy ads that users massively reject, and if such excesses lead third-party developers to create software that eliminates the annoyance, it should be no surprise to see the three browser providers tempted to allow ad blocking technologies.

Google is in a peculiar position on this because it also operates the ad-serving system DFP (DoubleClick for Publishers). Financially speaking, Google doesn’t necessarily care whether a banner is actually viewed because DFP collects its cut when the ad is served. But, taking the long view, as Google people usually do, we can be sure they will address the issue in coming months.

The best way to address the growing ad rejection is to take it at its root: It’s up to the advertising sector to wake up and work on better ads that everybody will be happy with.

But reversing this trend will take time. The perversity of ad-blocking is that everyone ends up being affected by the bad practices of a minority: Say a user installs ABP on her computer after repeated visits to a site where ads are badly implemented; the chances that she will then deliberately disable ABP on sites that carefully manage their ads are next to zero.

As if the AdBlock challenge wasn’t enough, the commercial internet has to deal with growing “Bot Fraud”. Ads viewed by robots generating fake — but billable — impressions have become a plague, as the rate of bogus clicks is said to be around 36% (see this piece in MIT’s Technology Review). This is another serious problem for the industry when advertisers are potentially defrauded at such magnitude: as an example, last year it was revealed that up to 57% of a Mercedes-Benz campaign’s viewers actually were robots.
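
The economics of the problem are easy to sketch: at the fraud rates quoted above, a large share of any budget simply buys robot impressions (the campaign budget below is hypothetical).

```python
# Back-of-the-envelope cost of bot fraud at the rates quoted above.
budget = 1_000_000  # hypothetical $1m campaign
for bot_share in (0.36, 0.57):
    wasted = budget * bot_share
    print(f"At {bot_share:.0%} bot traffic, ${wasted:,.0f} buys robot eyeballs")
```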

In the digital advertising sector, the places to find some relief remain branded content or native ads. Depending on how deals are structured, prices are still high and such ad forms can evade blocking. Still, to durably avoid user rejection, publishers should be selective and demanding on the quality of branded content they’ll carry.

Another ingredient of the cleanup involves Internet usage metrics — fixed and mobile. More than ever, our industry calls for reliable, credible and, above all, standardized measurement systems. The usual ‘Unique Visitor’ or page view counts can’t remain the de rigueur metrics, as both are too easily faked. The ad market and publishers need more granular metrics that reflect actual reader engagement (a more critical measure when reading in-depth content vs. devouring listicles dotted with cheap ads). Could it be time spent on a piece of content, or shares on social networks? One sure thing, though: the user needs to be counted across the platforms she’s using. It is essential to reconcile the single individual who is behind a variety of devices: PC, smartphone or tablet. To understand her attention level — and to infer its monetary value — we need to know when, for how long, and in which situations she uses her devices. Whether it is done anonymously or based on a real ID, retrieving actual customer data is critical.
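
As a thought experiment, here is a minimal sketch of such an engagement metric, assuming events from several devices can be reconciled under one reader ID, with time spent and shares as the two inputs. The weighting is entirely hypothetical.

```python
# Sketch of a cross-device engagement score: reconcile events by
# reader ID, then combine time spent and shares (hypothetical weights).
from dataclasses import dataclass

@dataclass
class ReadEvent:
    reader_id: str   # same individual across PC, smartphone, tablet
    device: str
    seconds_spent: float
    shares: int

def engagement(events, reader_id: str) -> float:
    mine = [e for e in events if e.reader_id == reader_id]
    minutes = sum(e.seconds_spent for e in mine) / 60
    shares = sum(e.shares for e in mine)
    return minutes + 5 * shares  # hypothetical weighting of shares

events = [
    ReadEvent("reader-1", "smartphone", 120, 1),
    ReadEvent("reader-1", "pc", 300, 0),
]
print(engagement(events, "reader-1"))  # -> 12.0
```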

The answer is complicated, but one thing is sure: to lift up its depleted economics, the industry needs to agree on something solid and long-lasting.

The media industry solutions to the problems we just discussed will have a significant impact on digital information. As long as the advertising market remains in today’s mess, everybody loses: Advertisers express their dissatisfaction with more pressure on the prices they’re willing to pay; intermediaries — media buying agencies — come under more scrutiny; and, in the end, publisher P&Ls suffer. The two digital world ‘mega-gatekeepers’ — Facebook and Google — could play a critical role in such normalization. Unfortunately, their interests diverge. There is not a month when we do not see competition increase between them, on topics ranging from user attention, to mobile in emerging markets, internet in the sky, and artificial intelligence… At this stage, the result of this multi-front war is hard to predict.