by Frédéric Filloux
For news organizations, the key challenge is to lift value-added editorial above Internet noise. Many see “signals” as a possible solution, one that could be supplemented by a derivative of ad profiling.
Last year Richard Gingras and Sally Lehrman came up with the Trust Project (full text here, on Medium). Richard is a seasoned journalist and the head of News and Social at Google; Sally is a senior journalism scholar at the Markkula Center for Applied Ethics at Santa Clara University in California.
Their starting point is readers’ eroding confidence in media. Year after year, every survey confirms the trend. A recent one, released ten days ago at the Davos Economic Forum by the global PR firm Edelman, confirms the picture. For the first time, according to the 2015 edition of Edelman’s Trust Barometer, public trust in search engines surpasses trust in media organizations (64% vs. 62%). The gap is even wider for Millennials, who trust search engines at 72%, versus 62% for old media.
And when sources are segmented by type — general information, breaking news, validation — search leaves traditional media even further in the dust.
No wonder that, during the terrorist attacks in Paris three weeks ago, many publishers saw more than 50% of their traffic coming from Google. Publishers viewed this with a mixture of satisfaction (our stuff surfaces better in Google Search and News) and concern (a growing share of news media traffic is now in the hands of huge US-based gatekeepers).
Needless to say, this puts a lot of pressure on Google (much less so on Facebook, which seems far less concerned by its growing role as a large news conduit). Hence the implicit mission given to Richard Gingras and others to build on this notion of trust.
Their project is built around five elements used to parse news content:
#1. A mission and Ethics statement. As described in the Trust Project:
One simple first step is a posted mission statement and ethics policy that convey the mission of a news organization and the tenets underlying its journalistic craft. Only 50% of the top ten US newspapers have ethics policies available on the web and only 30% of ten prominent digital sites have done so.
The gap between legacy and digital-native news media is an interesting one. While the former have built their audience on the (highly debatable) notion of objective reporting and balanced points of view, digital natives come with a credibility deficit. Many of the latter are seen as too close to the industry they cover; some prominent ones did not even bother to conceal their ties to the venture capital ecosystem, while others count visible tech industry figures among their backers. Still others are built around clever click-bait mechanisms that are supplemented — marginally — by solid journalism. (I’ll let our readers put names on each kind.)
In short, a clear statement of what a media outlet is about, and of its potential conflicts of interest, is a mandatory building block for trust.
#2. Expertise and Disclosure. Here is the main idea:
Far too often the journalist responsible for the work is not known to us. Just a byline. Yet expertise is an important element of trust. Where has their work appeared? How long have they worked with this outlet? Can audiences access their body of work?
Nothing much to add. Each time I spot an unknown writer worth reading, my first reaction is to Google him or her to understand who I’m dealing with. Encapsulating background information in an accessible way (and one standardized enough to be retrievable by a search engine) makes plain sense.
#3. Editing Disclosure, i.e. details on the whole vetting process a story has gone through before hitting the pixels. Fine, but it’s a legacy media approach. Stories by Benedict Evans, Horace Dediu, or Jeff Jarvis (see his view on the Trust Project), to name just a few respected analysts, are not likely to be reviewed by editors, yet their views deserve to be surfaced as original content. Therefore, Editing Disclosure should not carry a large weight in the equation.
#4. Citation and Corrections. The idea is to have Wikipedia-like standards that give access to citations and references behind the author’s assertions. This is certainly an efficient way to prevent plagiarism, or even “unattributed inspiration”. The same goes for corrections and amplifications, as the digital medium encourages article versioning.
#5. Methodology. What’s behind a story, how many first-hand interviews, reporting made on location as opposed to the soft reprocessing of somebody else’s work. Let’s be honest, the vast majority of news shoveled on the internet won’t pass that test.
Google’s idea to implement all of the above is to create a set of standardized “signals” that will yield objective ways to extract quality stuff from the vast background noise on the Web. Not an easy task.
First, Google News already works that way. In a Monday Note based on Google News’ official patent filing (see: Google News: The Secret Sauce), I looked at the signals isolated by Google to improve its news algorithm. There are 13 of them, ranging from the size of the organization’s staff to the writing style. This certainly worked fine (otherwise Google News wouldn’t be such a success), but it is no longer enough. Legacy media are now in a constant race to produce more in order to satisfy Google’s (News + Search) insatiable appetite for fresh fodder. In the meantime, news staffs keep shrinking, and “digital serfs”, hired for their productivity rather than their journalistic acumen, have become legion. Also, criteria such as the size of a news staff no longer apply as much, because independent writers and analysts — such as those mentioned above — have become powerful and credible voices.
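To make the idea of signal-based ranking concrete, here is a toy sketch of how such a scorer could combine normalized signals into a single quality score. The signal names and weights below are invented for illustration; they are not Google’s actual signals or formula.

```python
# Hypothetical weighted-signal scorer. Signal names and weights are
# illustrative only, not Google News' actual ranking formula.
SIGNAL_WEIGHTS = {
    "staff_size": 0.10,
    "volume_of_production": 0.10,
    "breadth_of_coverage": 0.15,
    "original_reporting": 0.25,
    "writing_style": 0.20,
    "audience_reach": 0.20,
}

def quality_score(signals: dict) -> float:
    """Combine normalized signal values (each 0.0-1.0) into one score."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

# Example outlet with only some signals measured; missing ones count as 0.
outlet = {"staff_size": 0.8, "original_reporting": 0.9, "writing_style": 0.7}
score = quality_score(outlet)
```

The point of the sketch is also its weakness, as noted above: any fixed criterion (like `staff_size`) can fall out of step with reality, and any published weight becomes a target for gaming.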
In addition, any system aimed at promoting quality — and value — is prone to gaming and cheating. The search algorithm has become a moving target for all the smart people the industry has bred, forcing Google to make several thousand adjustments to its search formulae every year.
The News Profile and Semantic Footprint approach. While the list drawn up by the creators of the Trust Project is a great start, it has to be supplemented by other systems. Oddly enough, the profiling techniques used in digital advertising can serve as a blueprint.
Companies specialized in audience profiling are accumulating anonymous profiles in staggering numbers. To name just one, in Europe, Paris-based Weborama has collected 210m profiles (40% of the European internet population), each containing detailed demographics, consumer tastes for clothing, gadgets, furniture, transportation, navigation habits, etc. Such data are sold to advertisers, which can then pinpoint who is in the process of acquiring a car or looking for a specific travel destination. No one ever explicitly opted in to give such information, but we all did so by allowing massive cookie injections into our browsers.
Then why not build a “News Profile”? It could have all the components of my news diet: the publications I subscribe or have registered to, the media I visit on a frequent basis, the authors I search for, the average length of the stories I prefer, my propensity to read long, documented profiles of business people, the documentaries I watch on YouTube, the decks I download from SlideShare… Why not add the books I order on Amazon and the people I follow on Twitter, etc.? All of the above already exists inside my computer, in the form of hundreds, if not thousands, of cookies collected during my browsing.
It could work this way: I connect — this time knowingly — to a system able to reconcile my “News Profile” with the “Semantic Footprint” of publications, but also of authors (regardless of their affiliation, from the NYT’s John Markoff to A16z’s Ben Horowitz), types of production, etc. Such profiling would be fed by the criteria described in the Trust Project and by Google News’ algorithmic signals. Today, only Google is in a position to perform such a daunting task: it has done part of the job since the first beta of Google News in 2002, it collects thousands of sources, and it has a holistic view of the Internet. I personally have no problem with allowing Google to create my News Profile based on data… it already has on me.
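One plausible way to “reconcile” a reader’s News Profile with a publication’s Semantic Footprint is to model both as topic-interest vectors and rank by similarity. The sketch below uses plain cosine similarity; the topics and scores are invented examples, and nothing here is a known implementation of Google’s.

```python
import math

def cosine_similarity(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Invented example vectors over a shared topic vocabulary.
news_profile = {"tech": 0.9, "media": 0.7, "finance": 0.2}        # the reader
semantic_footprint = {"tech": 0.8, "media": 0.6, "sports": 0.5}   # a publication

match = cosine_similarity(news_profile, semantic_footprint)
# Publications (or individual authors) would then be ranked by match score.
```

The same scoring works at the author level, which matters given the independent voices mentioned above: an author’s footprint can be matched directly, regardless of affiliation.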
I can hear the choir of whiners from here. But, again, it could be done on a voluntary basis. And think about the benefits: a skimmed version of Google News, tailored to my preferences, with a dose of serendipity thrown in for good measure… Isn’t that better than a painstakingly assembled RSS feed that needs constant manual updating? To me it’s a no-brainer.