

Goodbye Google Reader

design, online publishing | June 17, 2013


Three months ago, Google announced the “retirement” of Google Reader as part of the company’s second spring cleaning. On July 1st — two weeks from today — the RSS application will be given a gold watch and a farewell lunch, then it will pack up its bits and leave the building for the last time.

The other items on Google’s spring cleaning list, most of which are tools for developers, are being replaced by superior (or simpler, friendlier) services: Are you using CalDAV in your app? Use the Google Calendar API, instead; Google Map Maker will stand in for Google Building Maker; Google Cloud Connect is gone, long live Google Drive.

For Google Reader’s loyal following, however, the company had no explanation beyond a bland “usage has declined”, and it offered no replacement nor even a recommendation other than a harsh “get your data and move on”:

Users and developers interested in RSS alternatives can export their data, including their subscriptions, with Google Takeout over the course of the next four months.

The move didn’t sit well with users whose vocal cords were as strong as their bond to their favorite blog reader. James Fallows, the polymathic writer for The Atlantic, expressed a growing distrust of the company’s “experiments” in A Problem Google Has Created for Itself:

I have already downloaded the Android version of Google’s new app for collecting notes, photos, and info, called Google Keep… Here’s the problem: Google now has a clear enough track record of trying out, and then canceling, “interesting” new software that I have no idea how long Keep will be around… Until I know a reason that it’s in Google’s long-term interest to keep Keep going, I’m not going to invest time in it or lodge info there.

The Washington Post’s Ezra Klein echoed the sentiment (full article here):

But I’m not sure I want to be a Google early adopter anymore. I love Google Reader. And I used to use Picnik all the time. I’m tired of losing my services.

What exactly did Google Reader provide that got its users, myself included, so excited, and why do we take its extermination so personally?

Reading is, for some of us, an addiction. Sometimes the habit turns profitable: The hours I spent poring over computer manuals on Saturday mornings in my youth may have seemed cupidic at the time, but the “research” paid off.

Back before the Web flung open the 10,000 Libraries of Alexandria that I dreamed of in the last chapter of The Third Apple, my reading habit included a daily injection of newsprint. But as online access to real-world dailies became progressively more ubiquitous and easier to manage, I let my doorstep subscriptions lapse (although I’ll always miss the wee-hour thud of the NYT landing on our porch…an innocent pleasure unavailable in my country of birth).

Nothing greased the move to all-digital news as much as the RSS protocol (Really Simple Syndication, to which my friend Dave Winer made crucial contributions). RSS lets you syndicate your website by publishing a feed, a few lines of code. To subscribe, a user simply pushes a button. When you update your blog, the new post is automatically delivered to the user’s chosen “feed aggregator”.
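
As a rough illustration of what an aggregator does under the hood, here is a minimal Python sketch using the third-party feedparser library; the feed URL is only an example, not a real subscription of mine.

```python
# Minimal sketch of what a feed aggregator does, using the third-party
# "feedparser" library; the feed URL is a hypothetical example.
import feedparser

FEED_URL = "https://www.mondaynote.com/feed"  # illustrative subscription

feed = feedparser.parse(FEED_URL)
print(feed.feed.get("title", "Untitled feed"))

# Each new post published by the site shows up as an entry the reader
# can list, mark as read, or open in the browser.
for entry in feed.entries[:10]:
    print(f"- {entry.get('title')} ({entry.get('link')})")
```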

RSS aggregation applications and add-ons quickly became a very active field as this link attests. Unfortunately, the user interfaces for these implementations – how you add, delete, and navigate subscriptions — often left much to be desired.

Enter Google Reader, introduced in 2005. Google’s RSS aggregator mowed down everything in its path as it combined the company’s Cloud resources with a clean, sober user interface that was supported by all popular browsers…and the price was right: free.

I was hooked. I just checked: I have 60 Google Reader subscriptions. But the number is less important than the way the feeds are presented: I can quickly search for subscriptions, group them in folders, search through past feeds, email posts to friends, and fly over article summaries, all of it made even easier by simple keyboard shortcuts (O to Open, V for a full View on the original Web page, Shift-A to mark an entire folder as Read).

Where I once read four newspapers with my morning coffee I now open my laptop or tablet and skim my customized, ever-evolving Google Reader list. I still wonder at the breadth and depth of available feeds, from dissolute gadgetry to politics, technology, science, languages, cars, sports…

I join the many who mourn Google Reader’s impending demise. Fortunately, there are alternatives that now deserve more attention.

I’ll start with my Palo Alto neighbor, Flipboard. More than just a Google Reader replacement, Flipboard lets you compose and share personalized magazines. It’s very well done although, for my own daily use, its very pretty UI gets in the way of quickly surveying the field of news I’m interested in. Still, if you haven’t loaded it onto your iOS or Android device, you should give it a try.

Next we have Reeder, a still-evolving app that’s available on the Mac, iPhone, and iPad. It takes your Google Reader subscriptions and presents them in a “clean and well-lighted” way.

For me, Feedly looks like the best way to support one’s reading habit (at least for today). Feedly is offered as an app on iOS and Android, and as extensions for Chrome, Firefox, and Safari on your laptop or desktop (PC or Mac). Feedly is highly customizable: Personally, I like the ability to emulate Reader’s minimalist presentation; others will enjoy a richer, more graphical preview of articles. For new or “transferring” users, it offers an excellent Feedback and Knowledge Base page.

Feedly makes an important and reassuring point: There might be a paid-for version in the future, a way to measure the app’s real value, and to create a more lasting bond between users and the company.

There are many other alternatives: a Google search for “Google Reader replacement” (the entire phrase) yields nearly a million hits (interestingly, Bing comes up with only 35k).

This brings us back to the unanswered question: Why did Google decide to kill a product that is well-liked and well-used by well-informed (and I’ll almost dare to add: well-heeled) users?

I recently went to a Bring Your Parents to Work day at Google. (Besides comrades of old OS Wars, we now have a child working there.) The conclusion of the event was the weekly TGIF-style bash (which is held on Thursdays in Mountain View, apparently to allow Googlers in other time zones to participate). Both founders routinely come on stage to make announcements and answer questions.

Unsurprisingly, someone asked Larry Page a question about Google Reader and got the scripted “too few users, only about a million” non-answer, to which Sergey Brin couldn’t help quipping that a million is about the number of remote viewers of the Google I/O developer conference Page had just bragged about. Perhaps the decision to axe Reader wasn’t entirely unanimous. And never mind the fact that Feedly already seems to have 3 million subscribers.

The best explanation I’ve read (on my Reader feeds) is that Google wants to draw the curtain, perform some surgery, and reintroduce its RSS reader as part of Google+, perhaps with some Google Now thrown in:

While I can’t say I’m a fan of squirrelly attempts to draw me into Google+, I must admit that RSS feeds could be a good fit… Stories could appear as bigger, better versions of the single-line entry in Reader, more like the big-photo entries that Facebook’s new News Feed uses. Even better, Google+ entries have built in re-sharing tools as well as commenting threads, encouraging interaction.

We know Google takes the long view, often with great results. We’ll see if killing Reader was a misstep or another smart way to draw Facebook users into Google’s orbit.

It may come down to a matter of timing. For now, Google Reader is headed for the morgue. Can we really expect that Google’s competitors — Yahoo!, Facebook, Apple, Microsoft — will resist the temptation to chase the ambulance?




Google News: The Secret Sauce

online publishing | February 24, 2013


A closer look at Google’s patent for its news retrieval algorithm reveals a greater than expected emphasis on quality over quantity. Can this bias be sustained over time?

Ten years after its launch, Google News’ raw numbers are staggering: 50,000 sources scanned, 72 editions in 30 languages. Google’s crippled communication machine, plagued by bureaucracy and paranoia, has never been able to come up with tangible facts about its benefits for the news media it feeds on. Its official blog merely mentions “6 billion visits per month” sent to news sites, and Google News claims to connect “1 billion unique users a week to news content” (to put things in perspective, a large site such as the Huffington Post cruises at about 40 million UVs per month). Assuming the clicks are sent to a relatively fresh news page bearing higher-value advertising, the six billion visits can translate into about $400 million per year in ad revenue. (This is based on $5 to $6 of revenue per 1,000 pages, i.e. a few dollars in CPM per single ad, depending on format, type of selling, etc.) That’s a very rough estimate. Again: Google should settle the matter and come up with accurate figures for its largest markets. (On the same subject, see a previous Monday Note: The press, Google, its algorithm, their scale.)
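
For what it’s worth, the arithmetic behind that rough estimate looks like this, using only the figures already quoted above (6 billion visits per month, $5 to $6 per 1,000 pages, one page per visit):

```python
# Back-of-the-envelope version of the estimate above, with the article's
# own assumptions: 6 billion visits/month and $5-$6 of ad revenue per
# 1,000 pages viewed (one page per visit assumed for simplicity).
visits_per_month = 6_000_000_000
revenue_per_1000_pages = (5 + 6) / 2          # midpoint, in dollars

yearly_visits = visits_per_month * 12
yearly_ad_revenue = yearly_visits / 1000 * revenue_per_1000_pages
print(f"${yearly_ad_revenue / 1e6:.0f} million per year")   # ~ $396 million
```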

But how exactly does Google News work? What kind of media does its algorithm favor most? Last week, the search giant updated its patent filing with a new document detailing the thirteen metrics it uses to retrieve and rank articles and sources for its news service. (Computerworld unearthed the filing.)

What follows is a summary of those metrics, listed in the order shown in the patent filing, along with a subjective appreciation of their reliability, vulnerability to cheating, relevancy, etc.

#1. Volume of production from a news source:

A first metric in determining the quality of a news source may include the number of articles produced by the news source during a given time period [week or month]. [This metric] may be determined by counting the number of non-duplicate articles produced by the news source over the time period [or] counting the number of original sentences produced by the news source.

This metric clearly favors production capacity. It benefits big media companies deploying large staffs. But the system can also be cheated by content farms (Google has already addressed these questions); new automated content-creation systems are gaining traction, and many of them could now easily pass the Turing Test.
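
A minimal sketch of what counting “non-duplicate articles” could look like; the normalization and hashing below are my assumptions, since the patent doesn’t spell out the deduplication method:

```python
# Sketch of metric #1: count non-duplicate articles from one source over a
# time window. Deduplication here is a naive normalized-text hash; the
# patent does not specify the exact method.
import hashlib

def count_non_duplicates(articles: list[str]) -> int:
    seen = set()
    for text in articles:
        normalized = " ".join(text.lower().split())
        digest = hashlib.sha1(normalized.encode("utf-8")).hexdigest()
        seen.add(digest)
    return len(seen)

articles = ["Storm hits the coast.", "Storm  hits the coast.", "Markets rally."]
print(count_non_duplicates(articles))  # 2
```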

#2. Length of articles. Plain and simple: the longer the story (on average), the higher the source ranks. This is bad news for aggregators whose digital serfs cut, paste, compile and mangle abstracts of news stories that real media outlets produce at great expense.

#3. “The importance of coverage by the news source”. To put it another way, this matches the volume of coverage by the news source against the general volume of text generated by a topic. Again, it rewards large resource allocation to a given event. (In New York Times parlance, such an effort is called “flooding the zone”.)

#4. The “Breaking News Score”:   

This metric may measure the ability of the news source to publish a story soon after an important event has occurred. This metric may average the “breaking score” of each non-duplicate article from the news source, where, for example, the breaking score is a number that is a high value if the article was published soon after the news event happened and a low value if the article was published after much time had elapsed since the news story broke.

Beware, slow-moving newsrooms: On this metric, you’ll be competing against more agile, maybe less scrupulous staffs that “publish first, verify later”. This requires a smart arbitrage by the news producers. Once the first headline has been pushed, they’ll have to decide what’s best: Immediately filing a follow-up, or waiting a bit and moving a longer, more value-added story that will rank better in metrics #2 and #3? It depends on elements such as the size of the “cluster” (the number of stories pertaining to a given event).
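
Here is one plausible reading of the “breaking score” in code; the exponential decay and the one-hour half-life are illustrative assumptions on my part, not values from the patent.

```python
# Sketch of metric #4: a "breaking score" that is high when an article is
# published soon after the event breaks and decays as time passes. The
# exponential decay and one-hour half-life are assumptions, not Google's.
import math

def breaking_score(minutes_after_break: float, half_life_minutes: float = 60) -> float:
    return math.exp(-math.log(2) * minutes_after_break / half_life_minutes)

def source_breaking_score(delays_minutes: list[float]) -> float:
    # Average over the source's non-duplicate articles, per the patent text.
    return sum(breaking_score(d) for d in delays_minutes) / len(delays_minutes)

print(source_breaking_score([5, 30, 240]))  # fast stories score close to 1
```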

#5. Usage Patterns:

Links going from the news search engine’s web page to individual articles may be monitored for usage (e.g., clicks). News sources that are selected often are detected and a value proportional to observed usage is assigned. Well known sites, such as CNN, tend to be preferred to less popular sites (…). The traffic measured may be normalized by the number of opportunities readers had of visiting the link to avoid biasing the measure due to the ranking preferences of the news search engine.

This metric is at the core of Google’s business: assessing the popularity of a website thanks to the various PageRank components, including the number of links that point to it.
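
The normalization the patent describes amounts to something like a click-through rate; a minimal sketch, with invented numbers:

```python
# Sketch of metric #5: clicks normalized by the number of times the link was
# actually shown, so a source isn't rewarded merely for being ranked higher.
def normalized_usage(clicks: int, impressions: int) -> float:
    return clicks / impressions if impressions else 0.0

print(normalized_usage(1200, 40000))  # 0.03 click-through rate
```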

#6. The “Human opinion of the news source”:

Users in general may be polled to identify the newspapers (or magazines) that the users enjoy reading (or have visited). Alternatively or in addition, users of the news search engine may be polled to determine the news web sites that the users enjoy visiting. 

Here, things get interesting. Google clearly states it will use third-party surveys to detect the public’s preference among various media — not only their websites, but also their “historic” media assets. According to the patent filing, the evaluation could also include the number of Pulitzer Prizes the organization has collected and the age of the publication. That’s the known part. What lies behind the notion of “Human opinion” is a true “quality index” for news sources that is not necessarily correlated to their digital presence. Such factors clearly favor legacy media.

#7. Audience and traffic. Not surprisingly, Google relies on stats coming from Nielsen NetRatings and the like.

#8. Staff size. The bigger a newsroom is (as detected in bylines), the higher the value will be. This metric has the merit of rewarding large investments in news gathering. But it might become more imprecise as “large” digital newsrooms now tend to be staffed with news repackagers adding little value.

#9. Number of news bureaus. It’s another way to favor large organizations — even though their footprint tends to shrink both nationally and abroad.

#10. Number of “original named entities”. This is one of the most interesting metrics. A “named entity” is the name of a person, place, or organization. It’s the primary tool for semantic analysis.

If a news source generates a news story that contains a named entity that other articles within the same cluster (hence on the same topic) do not contain, this may be an indication that the news source is capable of original reporting.

Of course, some cheaters insert misspelled entities to create “false” original entities and fool the system (Google has taken care of that). But this metric is a good way to reward original source-finding.
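
A short sketch of the “original named entities” idea: entities present in one article but absent from every other article in the same cluster. The entity extraction itself is stubbed out here; in practice it would come from a named-entity-recognition model.

```python
# Sketch of metric #10: entities that appear in one article but in no other
# article of the same cluster suggest original reporting. Entity extraction
# is assumed to have been done upstream by an NER model.
def original_entities(article_entities: set[str],
                      cluster_entities: list[set[str]]) -> set[str]:
    others = set().union(*cluster_entities) if cluster_entities else set()
    return article_entities - others

article = {"John Doe", "Acme Corp", "Springfield"}
cluster = [{"Acme Corp", "Springfield"}, {"Springfield", "Jane Roe"}]
print(original_entities(article, cluster))  # {'John Doe'}
```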

#11. The “breadth” of the news source. It pertains to the ability of a news organization to cover a wide range of topics.

#12. The global reach of the news source. Again, it favors large media that are viewed, linked, quoted, “liked”, and tweeted from abroad.

This metric may measure the number of countries from which the news site receives network traffic. In one implementation consistent with the principles of the invention, this metric may be measured by considering the countries from which known visitors to the news web site are coming (e.g., based at least in part on the Internet Protocol (IP) addresses of those users that click on the links from the search site to articles by the news source being measured). The corresponding IP addresses may be mapped to the originating countries based on a table of known IP block to country mappings.

#13. Writing style. In the Google world, this means statistical analysis of contents against a huge language model to assess “spelling correctness, grammar and reading levels”.

What conclusions can we draw? This enumeration clearly shows Google intends to favor legacy media (print or broadcast news) over pure players, aggregators, or digital-native organizations. All the features recently added, such as Editors’ Picks, reinforce this bias. The reason might be that legacy media are less prone to tricking the algorithm. For once, a known technological weakness becomes an advantage.
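
Presumably, the thirteen signals are folded into a single source-quality score. The weighted sum below is purely hypothetical; the patent lists the metrics but not how they are combined or weighted.

```python
# Hypothetical combination of the thirteen metrics into one source-quality
# score. The weights are invented for illustration; the patent does not
# disclose them.
WEIGHTS = {
    "volume": 0.10, "article_length": 0.05, "coverage_importance": 0.10,
    "breaking_score": 0.10, "usage": 0.15, "human_opinion": 0.10,
    "audience": 0.10, "staff_size": 0.05, "bureaus": 0.05,
    "original_entities": 0.10, "breadth": 0.05, "global_reach": 0.025,
    "writing_style": 0.025,
}

def source_quality(metrics: dict[str, float]) -> float:
    # Each metric is assumed to be pre-normalized to the 0..1 range.
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

print(source_quality({"usage": 0.8, "volume": 0.6, "breaking_score": 0.9}))
```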


The Google Fund for the French Press

newspapers, online publishing | February 3, 2013


At the last minute, ending three months of tense negotiations, Google and the French Press hammered out a deal. More than yet another form of subsidy, this could mark the beginning of genuine cooperation.

Thursday night, at 11:00pm Paris time, Marc Schwartz, the mediator appointed by the French government, got a call from the Elysée Palace: Google’s chairman Eric Schmidt was en route to meet President François Hollande the next day in Paris. They both intended to sign the agreement between Google and the French press that Friday at 6:15pm. Schwartz, along with Nathalie Collin, the chief representative for the French Press, was just out of a series of conference calls between Paris and Mountain View: Eric Schmidt and Google’s CEO Larry Page had green-lighted the deal. At 3am on Friday, the final draft of the memorandum was sent to Mountain View. But at 11:00am everything had to be redone: Google had made unacceptable changes, causing Schwartz and Collin to consider calling off the signing ceremony at the Elysée. Another set of conference calls ensued. The final-final draft, unanimously approved by the members of the IPG association (General and Political Information), was printed at 5:30pm, just in time for the gathering at the Elysée half an hour later.

The French President François Hollande was in a hurry, too: That very evening, he was bound to fly to Mali, where French troops are waging a small but uncertain war to contain Al-Qaeda’s expansion in Africa. Never shy of political calculation, François Hollande seized the occasion to be seen as the one who forced Google to back down. As for Google’s chairman, co-signing the agreement along with the French President was great PR. As a result, negotiators from the Press were kept in the dark until Eric Schmidt’s plane landed in Paris on Friday afternoon, shortly before heading to the Elysée. Both men underlined what they called “a world premiere”, a “historical deal”…

This agreement ends — temporarily — three months of difficult negotiations. Now comes the hard part.

According to Google’s Eric Schmidt, the deal is built on two stages:

“First, Google has agreed to create a €60 million Digital Publishing Innovation Fund to help support transformative digital publishing initiatives for French readers. Second, Google will deepen our partnership with French publishers to help increase their online revenues using our advertising technology.”

As always, the devil lurks in the details, most of which will have to be ironed out over the next two months.

The €60m ($82m) fund will be provided by Google over a three-year period; it will be dedicated to new-media projects. About 150 websites belonging to members of the IPG association will be eligible to submit projects. The fund will be managed by a board of directors that will include representatives from the Press and from Google, as well as independent experts. Specific rules are designed to prevent conflicts of interest. The fund will most likely be chaired by Marc Schwartz, the mediator, who is also a partner at the global audit firm Mazars (all parties praised his mediation and want him to take the job).

Turning to the commercial part of the pact: it is less publicized but at least as important as the fund itself. In a nutshell, using a wide array of tools ranging from advertising platforms to content-distribution systems, Google wants to increase its business with the Press in France and elsewhere in Europe. Until now, publishers have been reluctant to use such tools because they don’t want to increase their reliance on a company they see as cold-blooded and ruthless.

Moving forward, the biggest challenge will be overcoming an extraordinarily high level of distrust on both sides. Google views the Press (especially the French one) as only too eager to “milk” it, and unwilling to genuinely cooperate in order to build and share value from the internet. The engineering-dominated, data-driven culture of the search engine is light-years away from the convoluted “political” approach of legacy media, which don’t understand, or look down on, the peculiar culture of tech companies.

Dealing with Google requires a mastery of two critical elements: technology (with the associated economics), and the legal aspect. Contractually speaking, it means transparency and enforceability. Let me explain.

Google is a black box. For good and bad reasons, it fiercely protects the algorithms that are key to squeezing money from the internet, sometimes one cent at a time — literally. If Google consents to a cut of, say, advertising revenue derived from a set of contents, the partner can’t really ascertain whether the cut truly reflects the underlying value of the asset jointly created – or not. Understandably, it bothers most of Google’s business partners: they are simply asked to be happy with the monthly payment they get from Google, no questions asked. Specialized lawyers I spoke with told me there are ways to prevent such opacity. While it’s futile to hope Google will lift the veil on its algorithms, inserting an audit clause in every contract can be effective; in practical terms, it means an independent auditor can be appointed to verify specific financial records pertaining to a business deal.

Another key element: From a European perspective, a contract with Google is virtually impossible to enforce. The main reason: Google won’t give up on a Governing Law clause stipulating that disputes are to be “litigated exclusively in the Federal or State Courts of Santa Clara County, California”. In other words: Forget about suing Google if things go sour. Your expensive law firm based in Paris, Madrid, or Milan will try to find a correspondent in Silicon Valley, only to be confronted with polite rebuttals: For years now, Google has been parceling out multiple pieces of litigation among local law firms simply to make them unable to litigate against it. Your brave European lawyer will end up finding someone who will ask for several hundred thousand dollars only to prepare, but not litigate, the case. The only way to prevent this is to put an arbitration clause in every contract. Instead of going before a court of law, the parties agree to settle the matter through a private tribunal. Attorneys say it offers multiple advantages: It’s faster, much cheaper, the terms of the settlement are confidential, and it carries the same enforceability as a court order.

Google (and all the internet giants, for that matter) usually refuses an arbitration clause as well as the audit provision mentioned earlier. Which brings us to a critical element: In order to develop commercial relations with the Press, Google will have to find ways to accept collective bargaining instead of segmenting negotiations one company at a time. Ideally, the next round of discussions should come up with a general framework for all commercial dealings. That would be key to restoring some trust between the parties. For Google, it means giving up some amount of tactical as well as strategic advantage… that is part of its long-term vision. As stated by Eric Schmidt in his upcoming book “The New Digital Age” (the Wall Street Journal had access to the galleys):

“[Tech companies] will also have to hire more lawyers. Litigation will always outpace genuine legal reform, as any of the technology giants fighting perpetual legal battles over intellectual property, patents, privacy and other issues would attest.”

European media are warned: they must seriously raise their legal game if they want to partner with Google — and the agreement signed last Friday in Paris could help.

Having said that, I personally believe it could be immensely beneficial for digital media to partner with Google as much as possible. This company spends roughly two billion dollars a year refining its algorithms and improving its infrastructure. Thousands of engineers work on it. Contrast this with digital media: Small audiences, insufficient stickiness, and low monetization plague both websites and mobile apps; the advertising model for digital information is mostly a failure — and that’s not Google’s fault. The Press should find a way to capture some of Google’s technical firepower and concentrate on what it does best: producing original, high-quality content, a business that Google is unwilling (and probably culturally unable) to engage in. Unlike Apple or Amazon, Google is relatively easy to work with (once the legal hurdles are cleared).

Overall, this deal is a good one. First of all, both sides are relieved to avoid a law (see last Monday Note Google vs. the press: avoiding the lose-lose scenario). A law declaring that snippets and links are to be paid-for would have been a serious step backward.

Second, it’s a departure from the notion of “blind subsidies” that has been plaguing the French Press for decades. Three months ago, the discussion started with irreconcilable positions: publishers were seeking absurd amounts of money (€70m per year, the equivalent of IPG members’ total ad revenue) and Google was focused on a conversion into business solutions. Now, all the people I talked to this weekend seem genuinely supportive of building projects, boosting innovation, and taking advantage of Google’s extraordinary engineering capabilities. The level of cynicism often displayed by the Press is receding.

Third, Google is changing. The fact that Eric Schmidt and Larry Page jumped in at the last minute to untangle the deal shows a shift of perception towards media. This agreement could be seen as a template for future negotiations between two worlds that still barely understand each other.


Google’s Amazing “Surveywall”

advertising | September 9, 2012


How Google could reshape online market research and also reinvent micro-payments. 

Eighteen months ago — under non-disclosure — Google showed publishers a new transaction system for inexpensive products such as newspaper articles. It worked like this: to gain access to a web site, the user is asked to participate in a short consumer-research session. A single question, a set of images leading to a quick choice. Google made several such examples public when launching its Google Consumer Surveys.

Fast, simple and efficient. As long as the question is concise and sharp, it can be anything: pure market research for a packaging or product feature, surveying a specific behavior, evaluating a service, an intention, an expectation, you name it.
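
To make the mechanics concrete, here is a minimal sketch of the gating logic a surveywall implies: the article is served only once the visitor has answered the single question. The function names, the question, and the HTML are invented for illustration; this is not Google’s actual integration.

```python
# Minimal sketch of the surveywall idea: content is unlocked once the visitor
# answers a single quick question. Everything here is hypothetical.
QUESTION = "Which of these two package designs do you prefer?"
CHOICES = ["Design A", "Design B"]

def record_response(answer: str) -> None:
    # In a real system the answer would go to the market-research buyer.
    print(f"recorded: {answer}")

def render_survey(question: str, choices: list[str]) -> str:
    return f"<form>{question} {' / '.join(choices)}</form>"

def serve_article(article_html: str, answer: str | None) -> str:
    if answer in CHOICES:
        record_response(answer)      # the answer is the "payment"
        return article_html          # access granted
    return render_survey(QUESTION, CHOICES)   # otherwise, show the one-question wall

print(serve_article("<p>The article…</p>", None))
print(serve_article("<p>The article…</p>", "Design B"))
```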

This caused me to wonder how such a research system could impact digital publishing and how it could benefit web sites.

We’ll start with the big winner: Google, obviously. The giant wins on every side. First, Google’s size and capillarity put it in a unique position to probe millions of people in a short period of time. Indeed, the more marketeers rely on its system, the more Google gains in reliability, accuracy, and granularity (i.e. the ability to probe a segment of blue-collar pet owners in Michigan or urbanite coffee-drinkers in London). The bigger it gets, the better it performs. In the process, Google disrupts the market-research sector with its customary deflationary hammer. By playing on volume, automation (no more phone banks), and algorithms (as opposed to panels), the search engine is able to drastically cut prices. By 90% compared to traditional surveys, says Google. Expect $150 for 1,500 responses drawn from the general US internet population. Targeting a specific group can cost five times as much.

Second upside for Google: it gets a bird’s-eye view of all possible subjects of consumer research. Aggregated, anonymized, recompiled, sliced in every possible way, these multiple datasets further deepen Google’s knowledge of consumers — which is nice for a company that sells advertising. By the way, Google gets paid for research it then aggregates into its own data vault. Each answer collected contributes a smallish amount of revenue; it will be a long while, if ever, before such activity shows up in Google’s quarterly results — but that’s not where the value lies, it resides in the data the company gets to accumulate.

The marketeers’ food chain should be happy. With the notable exception of those who make a living selling surveys, every company, business unit, or department in charge of a product line or a set of services will be able to field a poll quickly, efficiently, and cheaply. Of course, legacy pollsters will argue that Google Consumer Surveys are crude and inaccurate. They will be right. For now. Over time the system will refine itself, and Google will have put a big lock on another market.

What’s in Google’s Consumer Surveys for publishers whose sites will host a surveywall? In theory, the mechanism finally solves the old quest for tiny, friction-free transactions: replace the paid-for zone with a survey-zone through which access is granted after answering a quick question. Needless to say, it can’t be recommended for all sites. We can’t reasonably expect a general news site, not to mention a business news one, to adopt such a scheme. It would immediately irritate the users and somehow taint the content.

But a young audience should be more inclined to accept such a surveywall. Younger surfers will always resist any form of payment for digital information, regardless of quality, usefulness, or relevance. Free is the norm. Or its illusion. Young people have already demonstrated their willingness to give up their privacy in exchange for free services such as Facebook — they have yet to realize they paid the hard price, but that’s another subject.

On the contrary, a surveywall would be at least more straightforward, more honest: users give a split second of their time by clicking on an image or checking a box to access the service (whether it is an article, a video, or a specific zone). The system could even be experienced as fun, as long as the question is cleverly put.

Economically, having one survey pop up from time to time — for instance when the user reconnects to a site — makes sense. Viewed from a spreadsheet (I ran simulations with specific sites and varying parameters), it could yield more money than the cheap ads currently in use. This, of course, assumes broad deployment by Google, with thousands of market-research sessions running at the same time.
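
For a rough sense of that spreadsheet argument, here is a back-of-the-envelope comparison. The $0.10 per response comes from Google’s published pricing quoted above; the publisher’s share and the “cheap ad” CPM are my assumptions, not disclosed figures.

```python
# Rough comparison behind the spreadsheet argument above. The publisher's
# cut and the display-ad CPM are guesses for illustration only.
survey_price_per_response = 150 / 1500      # $0.10, from Google's pricing
assumed_publisher_share = 0.5               # hypothetical revenue split
cheap_ad_cpm = 1.50                         # dollars per 1,000 impressions, assumed

revenue_per_survey_view = survey_price_per_response * assumed_publisher_share
revenue_per_ad_view = cheap_ad_cpm / 1000

print(f"per page: survey ${revenue_per_survey_view:.3f} vs ad ${revenue_per_ad_view:.4f}")
# Even with only one survey shown every ten visits, the survey side comes out ahead.
```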

A question crosses my mind: how come Facebook didn’t invent the surveywall?





Samsung vs. Google

mobile internet | January 8, 2012

Android is a huge success. Google bought Andy Rubin’s company in 2005 and turned it into a smartphone operating system giant, with more than 50% of the global market and 700,000 activations a day this past December.

Perhaps, as Steve Jobs seemed to think, it was Eric Schmidt’s position on Apple’s Board of Directors that infected Google with an itch to enter the smartphone OS market. Or maybe Larry Page and Sergey Brin simply recognized the Next Big Thing when they saw it. (As Page points out, the company had begun Android development a year before Schmidt joined the Apple Board.)

Regardless of the “authenticity” of Google’s smartphone impulse, it’s the execution of the idea, the integration of Android into Google’s top-level strategy where the product really shines. Android improves quickly; the “free and open” platform is popular with developers and, perhaps even more so, with handset makers who no longer have to create their own software, a task they’re culturally ill-suited to perform. And everyone loves being associated with a technically competent winner. (I might be a little biased in my regard for the Android engineering team: Comrades from a previous OS war work there.)

For the past three years, Android has experienced a kind of free-space expansion: The platform has grown without hitting obstacles. I’m not ignoring the IP wars; they’re real and the outcome(s) are still unclear, but these fights haven’t slowed Android’s triumphant march.

As we enter 2012, however, it seems the game may be changing. Looking at last week’s numbers for Motorola, HTC, and Samsung, we see a different picture. Instead of the old “there’s more than enough room for every Android handset maker to be a winner”, we have a leader, Samsung, ahead by several lengths, while Motorola and HTC lag behind.

From October to December of last year, a.k.a. Q4CY11, Samsung is said to have shipped 35 million smartphones, taking it to the number one spot worldwide. Citing “competitive reasons”, Samsung no longer makes its sales/shipment numbers public, so we have to rely on “independent” observers to tally up the score. Having worked in the high-tech industry for decades, I’ve seen how this information game is played: firm XYZ sells its “research” to manufacturer W…and ends up as its mouthpiece. I’d love to follow the money, but these private firms don’t have to reveal who their clients are and how much they pay for their services. (For a more detailed discussion of these shenanigans, read an excellent piece by The Guardian’s Charles Arthur: Dear Samsung: could we have some clarity on your phone sales figures now? Another possible bias: The Guardian re-publishes the Monday Note on its site.)

But even if we “de-propagandize” the numbers, Samsung is clearly the number one Android handset maker, and, just as clearly, it’s taking large chunks of market share from the other two leading players: Motorola and HTC both announced lower than expected Q4CY11 numbers. HTC’s unit volume was 10 million units, down from 13.2 million in Q3; Motorola got 10.5 million units in Q4, down from 11.6 million in Q3.

This leaves us with the potential for an interesting face-off. Not Samsung vs Motorola/HTC, but…Samsung vs. Google. As Erik Sherman observes in his CBS MoneyWatch post, since Samsung ships close to 55% of all Android phones, the company could be in a position to twist Google’s arm. If last quarter’s trend continues — if Motorola and HTC lose even more ground — Samsung’s bargaining position will become even stronger.

But what is Samsung’s “bargaining position”? What could they want? Perhaps more search referral money (the $$ flowing when Google’s search engine is used on a smartphone), earlier access to Android releases, a share of advertising revenue…

Will Google let Samsung gain the upper hand? Not likely, or at least not for long. There’s Motorola, about to become a fully-owned but “independent” Google subsidiary. A Googorola vertically-integrated smartphone line could counterbalance Samsung’s influence.

And so it would be Samsung’s move…and they wouldn’t be defenseless. Consider the Kindle Fire example: Just like Amazon picked the Android lock, Samsung could grab the Android Open Source code and create its own unlicensed but fully legal smartphone OS and still benefit from a portion of Android apps, or it could build its own app store the way Amazon did. Samsung is already showing related inclinations with its Music Hub and its iMessage competitor.

Samsung is a tough, determined fighter and won’t let Google dictate its future. The same can be said of Google.

This is going to be interesting.


What If Google Stored All Our Medical Records?

Uncategorized | October 17, 2010

Regard the horrified looks on the faces of the attendees at a California Council on Science and Technology meeting in Irvine six or seven years ago. I’m the only member from the Dark Side, from the venture capital milieu, inside an institution “designed to offer expert advice to the state government and to recommend solutions to science and technology-related policy issues”. The other members are scientists and scholars.

The question of the day is electronic medical records: How do we computerize, standardize, store, secure, exchange our corpus info with a reasonable assurance of privacy?

My answer: Give the job to Google. And thus follows the politely alarmed reaction…and the objections.

Our records won’t be secure! Google will exploit our most personal history to make money on our backs (or other organs)! They’ve digitized books, is this yet another step towards a privately-controlled but overly powerful public utility/institution?

Years later, what do we know?

First, doctors and patients still have trouble finding and exchanging records. I have, as attorneys are fond of saying, “personal knowledge” of this fact. The exchange of records between my politically-incorrect internist, the Palo Alto Medical Foundation, and the Stanford Hospital — organizations within a mere mile of one another — takes multiple phone calls, in-person visits, and fax machines.

Now try one of the blood-sucking medical insurance companies. To gain access to your own record, they send you, by fax, an authorization form for your signature…but there’s no return number, there’s no way to return the fax. It’s not personal, it’s systemic, an obstacle course to minimize claim payments.

Second, the current system, notwithstanding HIPAA regulations, leaves our records open to outsourcing subcontractors in the US and elsewhere, to poorly qualified claim adjudicators inside insurance companies and to employers’ HR personnel. In theory, there are walls. In practice, expediency: there’s “cost containment”, there’s an astounding number of people, “trusted” or not, who get to look at your records. Compared to this, Google looks pretty good. Yes, they have security breaches, people occasionally lose their password or get their accounts hacked, but these events are statistically insignificant. Add penalties for such incidents, weigh them against what we’d pay Google for the service, and we’d have a decent level of protection, an SLA for our medical records.

Few companies have dealt with size, with what we call “scalability”, as successfully as Google has. They have the human expertise and the computer systems to store and index “everything”; this is what they do for a living, with more than 2.5 million servers that keep their data intact.

As to Google’s exploitation of our records… Of course Google cares: they could wring billions from our personal health history. All we have to do is write a contract to share the loot; we call this “revenue-sharing”. Think of what a relentless crawl through billions of medical records will garner them… Take a transversal look at all the patients who take high blood pressure (antihypertensive) drugs; look at morbidity (how often, when, and how severely they get sick) and mortality (when and how we die) rates. Or look at more subtle but important combinations such as ancestry (the best way to get low cholesterol is to choose your parents well), other drugs, and lifestyle (a.k.a. good and bad exercise, food intake, alcohol, tobacco, and other substances soon to be legal in California).
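
As an illustration of the kind of transversal query described above, here is a small sketch over a synthetic, de-identified table using pandas. Column names and figures are invented for the example.

```python
# Sketch of the "transversal look" described above, on a synthetic table of
# de-identified records. All data is made up for illustration.
import pandas as pd

records = pd.DataFrame({
    "patient_id":          [1, 2, 3, 4, 5, 6],
    "on_antihypertensive": [True, True, False, True, False, False],
    "cardiac_event":       [False, True, False, False, True, False],
    "deceased":            [False, False, False, True, False, False],
})

# Morbidity and mortality rates per cohort (on the drug vs. not on it).
cohorts = records.groupby("on_antihypertensive")[["cardiac_event", "deceased"]].mean()
print(cohorts)
```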

This would be much better than the current and deeply corrupt system of medical studies. You think I exaggerate? I wish. See this sobering David H. Freedman story in the November issue of the Atlantic (a treasure of literate America).


The perspectives of the two Lévys

Uncategorized | June 23, 2008

Maurice Lévy, 66, is chairman and CEO of Publicis, the world’s No. 3 advertising group. His son, Alain Lévy, 45, is the CEO of Weborama, one of the leaders in Internet analytics in Europe. Two generations, two different vantage points on the changing advertising market, confronted in this interview by Le Monde (full text below, translated from the French).

Here are their respective takes on various subjects:

On the ad sector in general. Maurice (Publicis): “Our response times [to the tech challenges] are way too long. We need to speed up. The inflection point for our companies is now.”

On the shift in ad spending. Maurice: “Print and TV are far from dead. Today, they account for 92% of ad spending. In 2010, it will be 88%, but the share of the Internet will have doubled.” Alain (Weborama): “OK, TV will remain dominant, but it will become digital and will eventually allow everything that is currently done on the Internet — interaction, targeting…”

On the difficulties of print media. Maurice: “The print media must take advantage of two assets: their brand and their ability to select and process information.” Alain: “Yeah, but today the so-called digital natives have zero loyalty toward content brands.”

On the strategies to implement. Alain: “One of the key questions is the relationship the big players will have with different technologies. Should they own them?” (Background: Alain Lévy is adamantly warning against domination by Google, as he said in issue #27 of the Monday Note.) Pragmatic as usual, Maurice has chosen his camp: “In the interest of its clients, Publicis has decided to make a deal with Google and to work with it.”

Family lunches between Maurice and Alain Lévy must be animated.


Advertising, Media, the Internet: The Great Upheaval

© LE MONDE | June 21, 2008, 3:07 pm • Updated June 21, 2008, 3:07 pm

Maurice Lévy is chairman of the Publicis group; Alain Lévy is chairman of StartUp Avenue and of Weborama. Two generations brought closer by “digitization” compare their analyses.

Maurice Lévy, how will advertising and the media evolve in a world where innovations follow one another at full speed?

Maurice Lévy: Faced with new technologies, our response times are too long. We need to speed up. Our companies are at an inflection point. Think of the time Internet users spend informing themselves, researching, learning, working, entertaining themselves, and building relationships with one another: all of this is overtaking the other means of communication. It changes behaviors and expectations. For instance, people think information should be free, that music is a commodity. There is a whole range of services they no longer accept paying for. The pace of change is such that the old communication playbook is obsolete. The idea of running a big television campaign, relayed in other media, belongs to the past.

Alain Lévy, you build technologies that advertisers use on the Internet. What are they, exactly?

Alain Lévy: A set of techniques for understanding Internet users’ behavior, known as Web analytics. My company, Weborama, designs tools that are placed on websites to count their visitors, and others placed in the user’s browser (“cookies”) that analyze their navigation. For advertisers, the benefit is considerable. When an ad is displayed, we know whether the user clicked on it, whether they then bought something, and how much they spent. This makes it possible to measure the effectiveness of campaigns.

M. L.: These new possibilities do not mean that television or the press are obsolete. They still have their place, and a dominant one: today, 92% of advertising spending goes to them. Tomorrow, in 2010, it will still be 88%, but in the meantime the Web’s share will have doubled.

A. L.: Television will remain dominant, but it will be digital. That means everything that can be done on the Internet will be possible on television: targeted, interactive campaigns…

And the print press?

M. L.: I consider the press to play an essential role as a ferment of our democracies. The rise of the Net poses a problem for it because part of the advertising is shifting toward these new media. The press is heavier going from an advertising standpoint: the spaces are static. There is no motion, no sound, no music. It is therefore a rather limited mode of expression for advertisers. As a result, newspaper budgets are the easiest ones for them to cut.

The press has two assets, which it exploits more or less well. The first is a brand. In the Internet universe, it is easier to find your way when you know the name of the site. The second is that the press has a mastery of information: it knows how to select it, process it, rank it. It must leverage this asset against the profusion of messages. But time is pressing, if I may say so.

A. L.: At the risk of being politically incorrect, I believe the game is nearly up. The migration of traditional media to digital will take time, and, for information search, Google is sweeping the board. The so-called “native” generations, born with the Internet, have zero loyalty toward content brands. What they need is to get what they want immediately, and not much more. It is an educational duty to pass on to them the idea that one can go beyond raw information. When I read a piece of news on the Net, I sometimes have a doubt and check it in the newspapers. But I belong to the last generation with that reflex. The next ones will be all-digital.

M. L.: The newspaper brands that manage the transition to the Net are the ones that will win. It is already happening in the United States. The New York Times and the Wall Street Journal are increasingly abandoning paid sections to take advantage of the traffic to their sites and monetize their audience. Which makes me say there is a future for the press, but not the same one, and no longer only on paper.

And for the advertising sector, what should the strategy be?

A. L.: The real question is what relationship the big Internet players have with technology: should they own it and control the whole toolset, or on the contrary let new companies measure themselves against the very largest? Google, to give it credit, invented the Internet’s business model. Thanks to Google, a page view equals euros, whereas before it was worth zero. But we have entered a new era since the European Commission authorized Google’s acquisition of DoubleClick, the world leader in online advertising. Its dominance is becoming absolute…

M. L.: Google is unbeatable in keyword search. DoubleClick has mastery of banners. The combination of the two yields considerable power. Publicis therefore judged it wise, in the interest of its clients, to reach an agreement with Google and to work with it.

A. L.: I have a different point of view. Google’s power rests on very effective technology and an ability to accumulate and analyze data that has no equal so far. That gives it the means to buy everything that moves. It is a kind of grim reaper attacking every player and every medium: telecoms, advertising, digital communication at large. And so Google, the symbol of hyper-competitive markets, ends up killing all competition.

How will the advertising professions evolve with the new technologies?

M. L.: That is the key point. When you run a campaign on television or in the press, you place the orders, you wait, and at the end of the campaign you measure the effects and adjust your aim for the next wave. And you repeat the cycle indefinitely…

A. L.: Now the same thing can be done in real time. As soon as there is a click, it shows up on the screen. For an advertiser, this tool is intoxicating: one click, and revenue registers. You no longer need to wait for the experts’ verdict. This is where my father and I disagree. I think that in time the biggest advertisers will want to control this whole process. As a result, the agency’s job will be confined to the creative side, which will incidentally be very important, since we are moving toward a model of one person, one behavior, one “creative”. Technology will get involved, so Google will enter this market.

M. L.: That is to ignore how Google works. Its efficiency comes from the fact that everything is automated. It puts many engineers, a considerable deployment of intelligence, into developing a tool. But once the tool is ready, that’s it; it runs with very little labor. In communication, we put very few people on designing tools and a great many on thinking through the specific needs of each advertiser. The two business models are polar opposites.

What are the next stages of generalized “digitization”?

A. L.: We will not only know consumers through their computers. We will follow them in real life. That is what another company I helped start, Majority Report, is working on. It does the same thing as Weborama, but in the physical world: analyzing paths, understanding customer behavior at the point of sale. Net technologies will radiate into our world, not only into the media. For instance, it will become possible to count exactly the number of people in a demonstration.

What does this all-digital world imply for our society?

A. L.: That is a real question. As a user, what am I willing to tolerate? What information about my life am I willing to give? The term “tracking”, which designates the statistical monitoring of behavior on the Internet, literally means “following someone’s trail”, which is rather dreadful. In France, the Commission nationale de l’informatique et des libertés (CNIL) keeps watch over this, but it struggles to grasp everything that is going on. At Weborama, in any case, we make sure we hold no data that would allow our analysis of a behavior to be linked to an individual. It will be a major issue in the years to come. Consumers are more and more aware of the exploitation of the traces they leave. Are we touching on freedom?

M. L.: It is true that we are entering the world of Big Brother, and that there are ways to trace behavior. With GPS technologies, we can know where people are through their mobile phones, follow their cars, know where they go, what they buy, what their communications are. We live in a communication society that can endanger civil liberties and privacy.

On the advertising side, there is another danger, that of intrusion. For example, you visit an automotive site and an advertiser can step in and make you a better offer. At Publicis, we resist that because it really is an intrusion. We believe people will not accept having someone look over their shoulder at what they are doing.

Interview conducted by Sophie Gherardi

Google — The case for buying Associated Press

Uncategorized | June 23, 2008

Would it make sense for Google to buy AP? Yes, says a contributor to Wired.com. AP is a non-profit cooperative with 1,500 members, many of them on the verge of extinction. (See the latest figures from the New York Times, which is bleeding ad revenue at a yearly rate of 13%, or the terrible situation of Tribune Co.)

Google, by comparison, is in excellent health and needs content. In August 2006, it agreed to pay for AP stories appearing on Google News (apart from Agence France-Presse, everyone else gives abstracts away for free). From there, buying such a news-gathering capability would make sense. At the very least, it would be much cheaper and far more tangible than any social network.