A new generation of photographers reinvents the way stories are told. For their images, the weapons of choice are social networks and applications, video and mobile phones.
The limitations of algorithmic curation of news and culture have prompted a return to the use of actual humans to select, edit, and explain. Who knows, this might spread to another, less traditional medium: apps.
At a time when the information world is becoming increasingly shallow, journalists ought to join forces with experts. The alliance would bring deeper knowledge to journos and sharper storytelling to eggheads.
Monetizing digital journalism requires one key ingredient: making quality content emerge from the internet’s background noise. New kinds of Content Management Systems and appropriate syntax can help in a decisive way.
by Frédéric Filloux
For news organizations, the key challenge is to lift value-added editorial above Internet noise. Many see “signals” as a possible solution, one that could be supplemented by a derivative of ad profiling.
Last year Richard Gingras and Sally Lehrman came up with the Trust Project (full text here, on Medium). Richard is a seasoned journalist and the head of News and Social at Google; Sally is a senior journalism scholar at the Markkula Center for Applied Ethics at Santa Clara University in California.
Their starting point is readers’ eroding confidence in the media, a trend confirmed year after year by every survey. A recent one, released ten days ago at the Davos Economic Forum by the global PR firm Edelman, confirms the picture. For the first time, according to the 2014 edition of Edelman’s Trust Barometer, public trust in search engines surpasses trust in media organizations (64% vs. 62%). The gap is even wider for Millennials, who trust search engines at 72% vs. 62% for legacy media.
And when it comes to segmenting sources by type — general information, breaking news, validation — search leaves traditional media even further in the dust.
No wonder, then, that during the terrorist attacks in Paris three weeks ago, many publishers saw more than 50% of their traffic coming from Google. Publishers greeted this with a mixture of satisfaction (our stuff surfaces better in Google Search and News) and concern (a growing share of news media traffic is now in the hands of huge US-based gatekeepers).
Needless to say, this puts a lot of pressure on Google (much less so on Facebook, which seems far less concerned by its growing role as a large news conduit). Hence the implicit mission given to Richard Gingras and others to build on this notion of trust.
His project is built around five elements with which to parse news content:
#1. A mission and Ethics statement. As described in the Trust Project:
One simple first step is a posted mission statement and ethics policy that convey the mission of a news organization and the tenets underlying its journalistic craft. Only 50% of the top ten US newspapers have ethics policies available on the web and only 30% of ten prominent digital sites have done so.
The gap between legacy and digital-native news media is an interesting one. While the former have built their audience on the (highly debatable) notion of objective reporting and balanced points of view, digital natives come with a credibility deficit. Many of the latter are seen as too close to the industry they cover; some prominent ones did not even bother to conceal their ties to the venture capital ecosystem, while others count visible tech industry figures among their backers. Still others are built around clever click-bait mechanisms that are supplemented — marginally — by solid journalism. (I’ll let our readers put names on each kind.)
In short, a clear statement of what a publication is about and what its potential conflicts of interest are is a mandatory building block for trust.
#2. Expertise and Disclosure. Here is the main idea:
Far too often the journalist responsible for the work is not known to us. Just a byline. Yet expertise is an important element of trust. Where has their work appeared? How long have they worked with this outlet? Can audiences access their body of work?
Nothing much to add. Each time I spot an unknown but worth-reading writer, my first reaction is to google him or her to understand who I’m dealing with. Encapsulating background information in an accessible way (and one standardized enough to be retrievable by a search engine) makes plain sense.
#3. Editing Disclosure, i.e. details on the whole vetting process a story has gone through before hitting the pixels. Fine, but it’s a legacy media approach. Stories by Benedict Evans, Horace Dediu, or Jeff Jarvis (see his view on the Trust Project), just to name a few respected analysts, are not likely to be reviewed by editors, but their views deserve to be surfaced as original content. Therefore, Editing Disclosure should not carry a large weight in the equation.
#4. Citation and Corrections. The idea is to have Wikipedia-like standards that give access to citations and references behind the author’s assertions. This is certainly an efficient way to prevent plagiarism, or even “unattributed inspiration”. The same goes for corrections and amplifications, as the digital medium encourages article versioning.
#5. Methodology. What’s behind a story: how many first-hand interviews, how much reporting was done on location as opposed to the soft reprocessing of somebody else’s work. Let’s be honest: the vast majority of news shoveled onto the internet won’t pass that test.
Google’s idea to implement all of the above is to create a set of standardized “signals” that will yield objective ways to extract quality stuff from the vast background noise on the Web. Not an easy task.
First, Google News already works that way. In a Monday Note based on Google News’ official patent filing (see: Google News: The Secret Sauce), I looked at the signals isolated by Google to improve its news algorithm. There are 13 of them, ranging from the size of the organization’s staff to the writing style. This certainly worked fine (otherwise, Google News wouldn’t be such a success). But it is no longer enough. Legacy media are now in a constant race to produce more in order to satisfy Google’s (News + Search) insatiable appetite for fresh fodder. In the meantime, news staffs keep shrinking and “digital serfs”, hired for their productivity rather than their journalistic acumen, have become legion. Also, criteria such as the size of a news staff no longer apply as much, because independent writers and analysts — such as those mentioned above — have become powerful and credible voices.
In addition, any system aimed at promoting quality — and value — is prone to gaming and cheating. The search algorithm has become a moving target for all the smart people the industry has bred, forcing Google to make several thousand adjustments to its search formulae every year.
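As a rough sketch of how such standardized signals might be combined into a single quality score — the signal names and weights below are purely hypothetical, not Google’s or the Trust Project’s actual formula — one could imagine a weighted sum over normalized signal values:

```python
# Hypothetical sketch: combining trust "signals" into one quality score.
# Signal names and weights are illustrative, not an actual ranking formula.

SIGNAL_WEIGHTS = {
    "ethics_statement": 0.15,   # posted mission/ethics policy (#1)
    "author_disclosure": 0.20,  # byline with accessible background (#2)
    "editing_disclosure": 0.05, # deliberately low weight (#3)
    "citations": 0.30,          # references and corrections (#4)
    "methodology": 0.30,        # first-hand, on-location reporting (#5)
}

def quality_score(signals: dict) -> float:
    """Weighted sum of signal values, each normalized to the 0..1 range."""
    return sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
               for name in SIGNAL_WEIGHTS)

story = {"ethics_statement": 1.0, "author_disclosure": 0.8,
         "editing_disclosure": 0.0, "citations": 0.6, "methodology": 0.9}
print(round(quality_score(story), 3))  # 0.76
```

Note how the low weight on editing disclosure reflects the argument above: independent analysts should not be penalized for lacking a formal review chain.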
The News Profile and Semantic Footprint approach. While the list drawn up by the creators of The Trust Project is a great start, it has to be supplemented by other systems. Weirdly enough, the profiling techniques used in digital advertising can serve as a blueprint.
Companies specializing in audience profiling are accumulating anonymous profiles in staggering numbers. To name just one, in Europe, Paris-based Weborama has collected 210m profiles (40% of the European internet population), each containing detailed demographics, consumer tastes for clothing, gadgets, furniture, transportation, navigation habits, etc. Such data are sold to advertisers, which can then pinpoint who is in the process of acquiring a car or looking for a specific travel destination. No one ever explicitly opted in to give such information, but we all did so by allowing massive cookie injections into our browsers.
Then why not build a “News Profile”? It could have all the components of my news diet: the publications I subscribe or registered to, the media I visit on a frequent basis, the authors I have searched for, the average length of my preferred stories, my propensity to read long, documented profiles of business people, the documentaries I watched on YouTube, the decks I downloaded from SlideShare… Why not add the books I ordered on Amazon and the people I follow on Twitter, etc.? All of the above already exists inside my computer, in the form of the hundreds, if not thousands, of cookies I have collected in my browsing.
It could work this way: I connect — this time knowingly — to a system able to reconcile my “News Profile” with the “Semantic Footprint” of publications, but also of authors (regardless of their affiliation, from the NYT’s John Markoff to A16z’s Ben Horowitz), types of production, etc. Such profiling would be fed by the criteria described in The Trust Project and by Google News’ algorithmic signals. Today, only Google is in a position to perform such a daunting task: it has done part of the job since the first beta of Google News in 2002, it collects thousands of sources, and it has a holistic view of the Internet. I personally have no problem with allowing Google to create my News Profile based on data… it already has on me.
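A minimal sketch of what reconciling a News Profile with a Semantic Footprint could look like, assuming both are reduced to topic-interest vectors and matched by cosine similarity (the topics, sources, and values are invented for illustration):

```python
# Sketch: matching a reader's "News Profile" against publications'
# "Semantic Footprints", both represented as sparse topic vectors.
# Topic names, sources, and scores are made up for illustration.
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse topic-weight vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

reader_profile = {"tech": 0.9, "business": 0.7, "longform": 0.8, "sports": 0.1}
footprints = {
    "analyst_blog": {"tech": 0.95, "business": 0.8, "longform": 0.9},
    "tabloid":      {"sports": 0.9, "celebrity": 0.95, "tech": 0.1},
}

# Rank sources by affinity with this reader's profile.
ranked = sorted(footprints,
                key=lambda p: cosine(reader_profile, footprints[p]),
                reverse=True)
print(ranked[0])  # best-matching source for this reader
```

In a real system the vectors would come from the reader’s browsing history on one side and from semantic analysis of each publication’s output on the other; the matching step itself stays this simple.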
I can hear the choir of whiners from here. But, again, it could be done on a voluntary basis. And think about the benefits: a skimmed version of Google News, tailored to my preferences, that could include a dose of serendipity for good measure… Isn’t that better than a painstakingly assembled RSS feed that needs constant manual updates? To me it’s a no-brainer.
by Frédéric Filloux
Anglo-Saxon media that refused to publish religious caricatures should revise their position. This is the worst time to surrender to self-censorship and political correctness. There is too much at stake here.
As I write this column, sharpshooters are positioned on the roofs of my neighborhood, a hundred yards away from Place de la Nation, where hundreds of thousands of people will gather in memory of the 17 people killed in last week’s terror attacks. France is in a state of shock, the emotion is overwhelming, and concern is growing as everyone realizes the size and depth of French jihadi networks.
While anti-Semitic attacks are, unfortunately, not a novelty in France, retaliation against news media now takes the shape of professionally executed targeted assassinations. From now on, any media outlet publishing offensive cartoons could suffer Charlie Hebdo’s fate. This is what happened to the Hamburger Morgenpost, firebombed this Sunday at 2:00am — in exactly the same way Charlie Hebdo was four years ago.
France is not through with terrorist attacks. Friday evening, hours after SWAT teams stormed the kosher supermarket, the Interior Minister painted a grim picture of what lies ahead. ‘Over recent months’, he said, ‘103 legal procedures have been initiated against terror cells, involving 505 people. There is not a single day on which I don’t take an operational decision regarding these issues’. More broadly, law enforcement estimates the threat at 1,200 “potential jihadists”. Several hundred of them are under surveillance.
On the investigative site Mediapart, former counter-terrorism magistrate Gilbert Thiel said this:
“Our problem, today, is that we went from 100 people to monitor in 1995 to 1000 today. Between 12 and 20 law enforcement people are needed to keep track of one single individual on a 24-hour basis. Then we discover that the individual’s friends and relatives need to be monitored as well. At some point, we’re swamped.”
To make the problem worse, counterterrorism experts quoted in Le Monde believe that 3,000 to 5,000 Europeans are fighting in the name of jihad in Syria and Iraq; half of them are said to be identified only after their departure, and 20% are coming back, most of them brainwashed and not in a sunny mood.
Unlike the September 11th era of terrorism, when attacks were engineered from abroad, today Al Qaeda and ISIS have been very good at exporting terrorism into the social fabric of Western countries, encouraging the emergence of widespread, independent micro-cells of people who are usually coarse (as heard in the audio recordings of last week’s perpetrators) but quite effective at using Kalashnikov rifles and explosives.
Let’s come back to the cartoons. I think news media that balk at republishing caricatures of the Prophet Muhammad are ill-advised. This is the worst time to yield to self-censorship and political correctness.
I wasn’t personally a fan of Charlie Hebdo. Ten years ago, it published an article saying, in substance, that the newspaper I was editing at the time — 20 Minutes, with its 3 million readers and a staff of 80 fine reporters and editors — didn’t deserve to exist. The Charlie Hebdo author said he would rather people read nothing than a free newspaper, a genre unanimously loathed by the “noble” paid-for news media of the day. Charlie was then under the editorship of a sectarian character, a friend of Nicolas Sarkozy’s wife Carla Bruni, a connection that helped him land a managing job at Radio France for a quickly forgotten tenure. At the time, the written part of Charlie wasn’t the paper’s best. But its cartoons were. Definitely. I deeply believe that satire and caricature are an important component of free speech; because of this, Charlie has every right to exist and I really hope it will survive. (Frankly, I doubt it, as most of its great talents have been killed.)
Among the many comments I read, I spotted an editor saying he didn’t feel like putting his staff at risk by republishing Charlie’s cartoons.
I couldn’t disagree more. As unpleasant as it is, I think it’s part of the job.
In February 1989, I was a young reporter at Libération when a fatwa was issued by Iran’s then leader, Ayatollah Ruhollah Khomeini, against Salman Rushdie, the author of The Satanic Verses. Libération’s first reaction was to publish long excerpts from Rushdie’s novel. Needless to say, in the months that followed, we operated under serious police protection. To every staffer at the paper, this was obviously the right decision to make (we were actually quite proud of our editors). Later, when the Danish newspaper Jyllands-Posten published 12 cartoons that triggered scores of violent demonstrations across the world, Libé republished most of them.
In its own style, Charlie Hebdo went many steps further. Its editor Stéphane Charbonnier (“Charb”) was put on a hit-list by Inspire, the Yemen-based pro-jihad magazine, along with other writers and cartoonists.
In 2011, the paper published a satirical issue titled “Charia Hebdo”, “guest edited” by the Prophet Muhammad, with this front page:
[“100 lashes if you don’t die laughing”]
Shortly afterwards, the magazine was firebombed, and English and American newspapers published this pixelated image:
And last week, The Telegraph, among many others, opted for a carefully cropped version of the photograph of “Charb” holding the controversial front page:
Certainly not the finest hour of the Anglo-Saxon press.
Publishing controversial caricatures is a mandatory mission for news media.
First, because it’s newsworthy; readers must see for themselves what the controversy is about, without the filtering of virtuous editors who grant themselves the right to decide what their audience should or should not see.
Second, when it comes to caricatures, the line between the funny, the sharp, and the excessive is blurry. It is completely subjective. In 2011, Le Monde cartoonist Plantu published this drawing:
It might be seen by devout Muslims as crossing a religious boundary (Plantu is one of France’s most talented and courageous cartoonists).
Would The New York Times, The Telegraph and others pixelate Plantu’s work as well, under the pretext that some might find it offensive and retaliation might ensue?
Then what about real journalistic work — investigative series, video reporting, documentaries about such sensitive issues? If one day extremists decide to use rifles and explosives against journalists and documentary makers, to what extent will these cautious news organizations refrain from picking up great — but dangerously hot — stories?
Over the last few days, we’ve seen pundits claiming that the millions of people marching in France were proof that extremism had failed. They are wrong. The battle has just begun, and this is not the time to balk.
With its idea of creating “iTunes for the press”, Blendle rattles the news industry’s cage. In spite of blessings from The New York Times and Axel Springer, the shiny new thing might just be a mirage.
Last week, two young Dutchmen came up with a string of magic words: “iTunes for the press”, “New York Times”, and “Axel Springer”. The founders of Blendle, Alexander Klöpping and Marten Blankesteijn, were promising a miracle cure to a sick industry: a global system for the distribution of editorial products (the iTunes reference), backed by the gold standard of digital journalism (The New York Times) and supported by the European leader of the rebellion against Google (Axel Springer). Great casting, great promises. Like handing out ZMapp doses in an Ebola ward.
Blendle’s principle is to unbundle publications and sell stories by the slice, for €0.10 to €0.30 ($0.13 to $0.38) each. (Actually, on Blendle.nl, some articles shoot up to €0.89, or $1.11 — the publisher’s choice.) Basically, you register and get €2.50 in credit, browse a well-designed kiosk (or an equally good app), and cherry-pick what you want. Blendle added unique features such as the possibility of a refund for a story you don’t like; its founders say it’s a mandatory feature for any e-commerce business (“returns” account for around 4% of transactions). Launched in April on the Dutch market, the service is a success: 135,000 subscribers so far. According to the founders, 20,000 to 30,000 are added each month. Not bad for a country of 16 million people with an internet penetration of 94%.
This indisputable success spread beyond the Netherlands when Blendle announced that The New York Times Company and Axel Springer SE had invested a combined €3m ($3.8m) in the startup. (For more on the subject, see coverage by Les Echos (in French), The Guardian, and Bloomberg Businessweek.)
I see many reasons to cast strong doubt on Blendle’s sustainability as a global business, and I see no benefit for digital media. The idea of unbundling news content is an old one. I recall a 1995 conversation with Nicholas Negroponte, then head of MIT’s Media Lab. Back then, he envisioned exactly what Klöpping and Blankesteijn are trying to implement now (both were eight years old at the time).
Negroponte’s vision never materialized and there are many reasons for this.
The first is the hyper-abundance of free content, especially in English, a factor completely overlooked by Blendle’s advocates. Years ago, I used to tell my colleagues at Schibsted ASA in Norway that their country was so small (4.5m inhabitants) and their market position so dominant, with the huge traffic machines of their large print and digital publications, that if they put out online text in Pashto, it would still draw serious audience numbers. (Schibsted became a $2.2bn global player thanks to a strong diversification strategy served by remarkable execution.) In Blendle’s case, the Dutch language serves as a cordon sanitaire, a kind of firewall largely shielding publishers from the interference of free content. In other words, it makes relative sense for De Volkskrant, NRC, or De Telegraaf to join Blendle since they are already well-positioned in a small market.
This cannot work for the English language and its 1.2 billion speakers spread across the world, including 350 million native speakers. Pick any subject in the news cycle — say, Blendle itself. In a few clicks, I can get an 800-word story from The Economist, a 900-word one from The Guardian, a 700-word article from Businessweek, and a 1,600-word piece from TechCrunch. And I’m not even mentioning the… 24,400 other “Blendle” references that pop up in Google News. On this list, only The Economist intends to join the Dutch service. Hence my question: would you pay even 20 cents for The Economist’s story when a profusion of good coverage is available just one click away for free? Me neither.
The second reason for discounting the Blendle model: news media have always built their business on a “cross-subsidy” system. Quite often, high-audience stories that don’t cost much to produce (sports, for example) support low-audience but costly reporting such as foreign coverage or “enterprise journalism” (that is, when editors decide to assign large resources to go after a worthwhile subject — needless to say, a concept that has become an endangered species). Granted, a media powerhouse such as The New York Times still produces unique content that justifies paying for it (on the recent NYT economics, read Ken Doctor’s piece on NiemanLab). But I doubt that buy-by-the-slice Blendle revenue will contribute more than a fraction of a percentage point to the $200m-a-year cost of operating the Grey Lady’s 1,300-strong newsroom.
Third reason: lack of serendipity. A well-edited publication — print or digital — is a clever assemblage of diverse subjects aimed at triggering readers’ curiosity for topics outside their usual range of interests. That’s not likely to work in Blendle’s model, because it relies on three entry points — Trending, Realtime and Staff Picks — that actually transfer the classic user-induced serendipity to the editors of the service. I doubt many media outlets are actually willing to give up the opportunity to capture readers’ attention across the widest possible spectrum by leaving the reins in Blendle’s hands.
Fourth reason: advertising loss. While digital advertising is mostly a failure for the news industry, separating ads from content sounds like a weird idea. Today, publishers are working hard to build a more granular profile of their audiences in order to serve them with more relevant content, tailored ads, and ancillary products. Content dissemination won’t help this process.
Why, then, do the NYT and Springer, both strongly attached to the value of their editorial production, jump aboard this boat? For the Times, it might have to do with the idea of diversifying revenue streams in every possible way, extracting more dollars from its vast pool of occasional readers. Axel Springer’s motive is different. The German giant is literally obsessed with undermining Google’s de facto position in the news sector. Hence the bets it takes here and there, buying the French search engine Qwant or taking over the babbling Open Internet Project. Neither choice looks like a promising, high-potential, scalable move.
Publishers tempted by the Blendle model also choose to ignore the damage suffered by the music industry. Once users were given the opportunity to buy each song separately (for a dollar, not 20 cents), ARPU quickly collapsed, and there was no turning back. Also, at the time, paid-for music was not competing against free content — piracy excepted — in the way that today’s paid content has to face a profusion of free editorial, some of it excellent.
And finally, let’s not forget that the original “iTunes model” is not as shiny as it used to be. Apple’s iTunes ARPU went from $4.30 per user in Q1 2012 to $1.90 per user in Q1 2014, a 56% drop. The reason: users are massively switching to the flat-fee, no-ownership model of music streaming (hence Apple’s bet on Beats).
Even before it reached news media, the iconic iTunes system was already seriously damaged.
[Updated with fresh data]
Corporations are tempted to take over journalism with ever-better content. For the profession, this carries both dangers and hopes for new revenue streams.
Those who fear Native Advertising or Branded Content will dread the unavoidable rise of Corporate Journalism. At first glance, the phrase sounds like an oxymoron in the worst possible taste, an offense punishable by tarring and feathering. But, as I will now explain, the idea deserves a careful look.
First, consider the chart below, lifted from an Economist article titled Slime-slinging: Flacks vastly outnumber hacks these days. Caveat lector, published in 2011. The numbers are a bit old (I tried to update them without success), but the trend was obvious and has likely continued:
There were 4.6 public relations specialists for every reporter in 2013, according to the [Bureau of Labor Statistics] data. That is down slightly from the 5.3 to 1 ratio in 2009 but is considerably higher than the 3.2 to 1 margin that existed a decade ago, in 2004.
[Over the last 10 years], the number of reporters decreased from 52,550 to 43,630, a 17% loss according to the BLS data. In contrast, the number of public relations specialists during this timeframe grew by 22%, from 166,210 to 202,530.
Williams also exposes the salary gap between PR people and news reporters:
In 2013, according to BLS data, public relations specialists earned a median annual income of $54,940 compared with $35,600 for reporters.
And I should also mention an excellent piece in this weekend’s FT on the invasion of corporate news.
In short, while journalistic staffing is shrinking dramatically in every mature market (US, Europe), the public relations crowd is growing in spectacular fashion. It grows along two dimensions. The first is the spin side, with ever more highly capable people, most often seasoned former writers willing to become spin-surgeons — both disappointed by the evolution of their noble trade and attracted by higher compensation. The second is the growing inclination of PR firms, communication agencies and corporations themselves to build fully staffed newsrooms with editors-in-chief, writers, and photo and video editors.
That’s the first issue.
The second trend is the evolution of corporate communication. Slowly but steadily, it is departing from the traditional advertising codes that ruled the profession for decades, shifting toward a more subtle and mature approach based on storytelling. Like it or not, that’s exactly what branded content is about: telling great stories about a company in a more intelligent way instead of simply extolling a product’s merits.
I’m not saying that one will disappear at the other’s expense. Communication agencies will continue to plan, conceive and produce scores of plain, product-oriented campaigns — first because brands need them, but also because there is often no other way to promote a product than showing it in the most effective (and sometimes aesthetic) fashion. But the fact is, whether it is to stage the manufacturing process of a luxury watch or the engineering behind a new medical imaging device, more and more companies are getting into full-blown storytelling. To do so, they (or their surrogates) are hiring talent — which happens to be in rather large supply these days.
The rise of digital media plays a large part in this trend. In the print era, for practical reasons, it would have been inconceivable to intertwine classic journalism with corporate editorial treatments. In the digital world, things are completely different: endless space, the ability to link, and expandable formats all open new possibilities when it comes to accommodating large, rich, multimedia content.
This evolution carries both serious hazards for traditional journalism as well as tangible economic opportunities. Let’s start with the business side.
Branded content (or native advertising) has achieved significant traction in the modern media business — even if the quality of its implementation varies widely. Some companies (which I will refrain from naming) screwed up big time by failing to properly identify paid content as opposed to genuine journalistic production. And a misled reader is a lost reader (especially if there is a pattern). But for those who pull off good execution, both in terms of ethics and product, native ads carry much better value than banners, billboards, pushdowns, interstitials, or other pathetic “creations” massively rejected by readers. I know of several media outlets selling dumb IAB formats that found they could achieve rates 5x to 8x higher by relying on high-quality, bespoke branded content. These more parsimonious and non-invasive products achieve much better audience acceptance than traditional formats.
For media companies, going decisively for branded content is also a way to regain control over their own business. Instead of receiving avalanches of ready-to-eat campaigns from media-buying agencies, they retain more control over the creation of advertising elements by dealing with the creative agencies or even with the brands themselves. Such a move comes with constraints, though. Entering branded content at a credible scale requires investment. To serve its advertising clients, BuzzFeed maintains 50 people in its own design studio. Relative to the size of their entire staff, many other new media companies (including Quartz) decided from the outset to build fairly large creative teams. That’s precisely why I believe most legacy media will miss this train (again). Focused on short-term cost control, and under pressure from conservative newsrooms that see branded content as the Antichrist, they will delay the move. In the meantime, pure players will jump on the opportunity.
Newsrooms have reasons to fear Corporate Journalism — in the sense of the ultimate form of branded content, entirely packaged by the advertiser — but not for the reasons editors usually put forward. Handling the visual segregation of native ads vs. editorial is not terribly complicated; it depends mostly on the mutual understanding between the head of sales (or the publisher) and the editor; the latter needs to be credible enough among his or her peers to impose choices without yielding to corporatism-induced demagoguery.
But the juxtaposition of articles (or multimedia content) produced on one side by the newsroom and on the other by a sponsor determined to build its story at any cost might trigger another kind of conflict, over means and sources.
In the end, journalism is all about access. Beat reporters from a news outlet will do their best to circumvent the PR fence to reach sources, while at the same time the PR team will order a bespoke story from its own staff writers. The two teams might actually find themselves in competition. Say a publication wants to write a piece on a major energy conglomerate’s strategy shift with respect to global warming; the news team will talk to scores of specialists outside the company: financial analysts who challenge management’s choices, shareholders who object to expensive diversification, advocacy groups that monitor operations in sensitive areas, unions, etc. They will also try to gain access to those who decide the fate of the company, i.e. top management, strategic committees, etc. Needless to say, such access will be tightly controlled.
On the corporate journalism side, the story will be told differently: strategists and managers will talk openly and in a very interesting way (remember, they are being interviewed by pros). At the same time, a well-crafted on-site video shot in an oil field in Borneo, or on a solar farm in Africa, will reinforce the message, 60 Minutes-style. The whole package won’t carry silly corporate messages; it will be rich, carefully balanced for credibility and well-staged. Click-wise, it is also likely to be quite attractive, with glowing, sleek videos and great text that has the breadth (but not the substance) of professional reporting.
I’m painting this in broad strokes. But you get my point: authentic news reporting and corporate journalism are bound to compete, as audiences may increasingly prefer informative, well-designed corporate productions over drier journalistic work — even when the former is labelled as such. Of course, corporate journalism will remain small compared to the editorial content produced by a newsroom, but it could be quite effective in the long run.
A key way to differentiate value-added news from commodity contents is to rework the notion of linking. Thanks to semantics and APIs, we could move from dumb links to knowledge linking.
Most media organizations are still stuck in version 1.0 of linking. When they produce content, they assign tags and links that point mostly to other internal content. This is done out of fear that readers would leave for good if the doors were opened too wide. Assigning tags is not an exact science: I recently spotted a story about the new pregnancy in the British Royal family; it was tagged “Demography,” as if it were a piece about Germany’s weak fertility rate.
Today’s ways of laying out tags and structuring topics are a mere first step; they are compulsory tools to keep the reader within the publication’s perimeter. The whole mechanism is improving, though. Some publications already use reader data profiling to dynamically assign related stories based on presumed affinities: someone reading a story about General Electric might get a different set of related stories if she had been profiled as working in legal or finance rather than engineering.
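To make the idea concrete, here is a deliberately minimal sketch of profile-aware "related stories" selection. The story titles, tags, and profile weights are all invented for illustration; no publisher's actual recommendation system is implied.

```python
# Toy sketch: each story carries topic tags, and a reader profile
# assigns a weight to each tag. Related stories are ranked by how
# well their tags overlap with the reader's profile.

def related_stories(stories, profile, limit=3):
    """Rank candidate stories by overlap with the reader's tag weights."""
    def score(story):
        return sum(profile.get(tag, 0.0) for tag in story["tags"])
    ranked = sorted(stories, key=score, reverse=True)
    return [s["title"] for s in ranked[:limit] if score(s) > 0]

stories = [
    {"title": "GE settles patent suit",   "tags": ["legal", "ge"]},
    {"title": "GE Q4 earnings beat",      "tags": ["finance", "ge"]},
    {"title": "Inside GE's turbine lab",  "tags": ["engineering", "ge"]},
]

lawyer   = {"legal": 1.0, "ge": 0.2}        # hypothetical reader profiles
engineer = {"engineering": 1.0, "ge": 0.2}

print(related_stories(stories, lawyer))    # the legal story ranks first
print(related_stories(stories, engineer))  # the engineering story ranks first
```

The same General Electric story thus surfaces different companions for the lawyer and the engineer, which is all "presumed affinities" means in practice.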
But there is much more to come in that field. Two factors are at work: APIs and semantic improvements. APIs (Application Programming Interfaces) act like the receptors of a cell that exchanges chemical signals with other cells. They are the way to connect a wide variety of content to the outside world. A story, a video, a graph can “talk” to and be read by other publications, databases and other “organisms”. But first, it has to pass through semantic filters. From a text, the most basic tools extract sets of words and expressions such as named entities: people, organizations, places.
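The most basic of those semantic filters can be sketched in a few lines. This is a deliberately naive extractor that treats runs of capitalized words as candidate named entities; real filters use trained NLP models, and the sample sentence is made up.

```python
import re

# Naive named-entity candidate extraction: grab runs of capitalized
# words. This only illustrates the "extract sets of words" step; it
# will happily pick up false positives like month names.

ENTITY = re.compile(r"\b(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)\b")

def candidate_entities(text):
    # Keep multi-word matches, and single words not at sentence start
    # (which may be capitalized only by position).
    return [m for m in ENTITY.findall(text) if " " in m or text.find(m) > 0]

text = "General Electric opened a solar farm near Nairobi last March."
print(candidate_entities(text))  # ['General Electric', 'Nairobi', 'March']
```

The false positive (“March”) is exactly why production systems layer statistical models and knowledge bases on top of such surface patterns.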
A higher level involves extracting meaning, like “X acquired Y for Z million dollars” or “X has been appointed Finance Minister”, etc. But what about a video? Some go with granular tagging systems; others, such as Ted Talks, come with multilingual transcripts that provide valuable raw material for semantic analysis. But the bulk of content remains stuck in a dumb form: minimal and most often unstructured tagging. It requires complex treatment to become “readable” by the outside world. For instance, an untranscribed video seen as interesting (say a Charlie Rose interview) will have to undergo speech-to-text analysis to become usable. This process requires both human curation (finding out which content is worth processing) and sophisticated technology (transcribing a speech by someone speaking super-fast or with a strong accent.)
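The "X acquired Y for Z million dollars" extraction mentioned above can be approximated, in its crudest form, with a single pattern. This is only a sketch: real relation extraction relies on parsers and knowledge bases, and the company names below are fictitious.

```python
import re

# Minimal relation extraction via a regular expression, for the
# acquisition pattern discussed in the text. Named groups give the
# extracted fact some structure.

ACQUISITION = re.compile(
    r"(?P<buyer>[A-Z][\w&. ]+?) acquired (?P<target>[A-Z][\w&. ]+?)"
    r" for \$?(?P<amount>[\d.]+) (?P<unit>million|billion) dollars?"
)

def extract_acquisition(sentence):
    """Return the structured fact, or None if the pattern is absent."""
    m = ACQUISITION.search(sentence)
    return m.groupdict() if m else None

fact = extract_acquisition(
    "Acme Corp acquired Widget Labs for 300 million dollars."
)
print(fact)
# {'buyer': 'Acme Corp', 'target': 'Widget Labs',
#  'amount': '300', 'unit': 'million'}
```

One brittle regex per relation obviously does not scale; the point is only that a sentence becomes a machine-readable fact once buyer, target and amount are teased apart.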
Once these issues are solved, a completely new world of knowledge emerges. Enter “Semantic Culturomics”. The term was coined by two scholars working in France, Fabian Suchanek and Nicoleta Preda. Here is a short excerpt from the abstract of their paper (thanks to Christophe Tricot for the tip):
Newspapers are testimonials of history. The same is increasingly true of social media such as online forums, online communities, and blogs.
Semantic Culturomics [is] a paradigm that uses semantic knowledge bases in order to give meaning to textual corpora such as news and social media. This idea is not without challenges, because it requires the link between textual corpora and semantic knowledge, as well as the ability to mine a hybrid data model for trends and logical rules. […]
Semantics turns the texts into rich and deep sources of knowledge, exposing nuances that today’s analyses are still blind to. This would be of great use not just for historians and linguists, but also for journalists, sociologists, public opinion analysts, and political scientists.
In other words, and viewed through my own glasses, these two scientists suggest going from today’s flat, self-contained story to this:
Now picture this: a hypothetical big-issue story about GE’s strategic climate change thinking, published in the Wall Street Journal, the FT, or in The Atlantic, suddenly opens onto a vast web of knowledge. The text (along with graphics, videos, etc.) provided by the news staff is amplified by access to three books on global warming, two Ted Talks, several databases containing references to places and people mentioned in the story, an academic paper from Knowledge@Wharton, a MOOC from Coursera, a survey from a Scandinavian research institute, a National Geographic documentary, etc. Since, supposedly, all of the above is semanticized and speaks the same lingua franca as the original journalistic content, the process is largely automated.
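What "speaking the same lingua franca" could look like, in one plausible shape: the story declares its entities and topics with shared identifiers (the schema.org-flavored keys and all identifiers below are my assumption, purely illustrative), so outside resources tagged with the same identifiers attach themselves automatically.

```python
# Sketch of automated amplification via a shared vocabulary. The
# "@type" / "about" keys mimic schema.org-style metadata; all names
# and topic identifiers are invented for illustration.

story = {
    "@type": "NewsArticle",
    "headline": "GE rethinks its climate strategy",
    "about": ["entity:general-electric", "topic:climate-change"],
}

external = [
    {"@type": "Book",  "name": "A global-warming primer",
     "about": ["topic:climate-change"]},
    {"@type": "Video", "name": "A Ted Talk on energy",
     "about": ["topic:climate-change"]},
    {"@type": "Movie", "name": "An unrelated documentary",
     "about": ["topic:oceans"]},
]

def amplify(article, resources):
    """Attach every resource sharing at least one topic with the story."""
    topics = set(article["about"])
    return [r["name"] for r in resources if topics & set(r["about"])]

print(amplify(story, external))
# ['A global-warming primer', 'A Ted Talk on energy']
```

The matching itself is trivial; the hard, valuable work is upstream, in getting every book, talk, and database to expose the same identifiers.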
Great, but where is the value for the news organization, you might ask? First of all, a trusted publication (and a trusted byline) offering such super-curation is much more likely to attract a paying audience: readers willing to pay for a service no one else offers. Second, money-making business-to-business intelligence services can be derived from modern tagging, structuring, and linking. Such products would carry great value because they would be unique: based on trust, selection, and relevance.