
Yahoo: The Marissa Mayer Turnaround

 

Critics spew well-meaning generalities when criticizing Marissa Mayer’s first moves at Yahoo! They fail to see the urgency of the company’s turnaround situation, the need to refocus the workforce and spruce up the management.

Last July, Yahoo! elected a new CEO, their seventh or eighth, I've lost count. Marissa Mayer is an ex-Google exec with a BS in symbolic systems and an MS in Computer Science from Stanford, just like Scott Forstall. After a 13-year career at the biggest Cloud company on Earth, Mayer brings relevant experience to the CEO position of the once-great Web company. She also happens to be female but, unlike a predecessor of the same gender, Mayer doesn't appear to feel the need to assert power by swearing like a sailor.

Power she asserts nonetheless. Barely pausing to deliver her first child, Mayer set to work: Yahoo! had too many apps, so she vowed to cut them from 60 to the dozen or so that support our "digital daily habit". Hiring standards have been seriously upgraded: the CEO wants to review every candidate to weed out "C-list slackers". People were shown the door, starting in the executive suite. Some were replaced by ex-Google comrades such as her newly-appointed COO, Henrique De Castro.

The changes have been met with intramural criticism, from charges of Google cronyism to moaning over her meddling with the hiring process (“Yahoo’s Mayer gets internal flak for more rigorous hiring“). The complainers might as well get used to it: Mayer knows who she’s competing against, she wants to win, and that means Yahoo! needs to attract Valley-class talent. If she can pull them from Google, even better. The insiders who complain to the media only advertise their fear — a bad idea — and unwittingly make the case for Mayer’s higher standards.

The new sheriff is a high-intensity person. Friends tell me she also reviews new apps in great detail, down to color choices. (Didn’t another successful leader so annoy people?)

The protests over Mayer’s hiring practices and (supposed) micromanagement are nothing compared to the howls of pain over Mayer’s most controversial decision: No more Working From Home.

The prohibition is an affront to accepted beliefs about white-collar productivity, work/life balance, working mothers, sending less CO2 into the atmosphere. Does Mayer oppose a balanced life and a greener planet?

No, presumably — but reality intrudes. Once the king of the Web, Yahoo! stood by and watched as Google and Facebook seduced their users and advertisers. In 2008, in an effort to bolster its flagging on-line fortunes, Microsoft offered more than $44B to acquire Yahoo. The Board nixed the deal and Yahoo! kept sinking. Right before Mayer took the helm in July 2012, Yahoo’s market cap hovered around $16B, a decline of more than 60%.

The niceties of peacetime prosperity had to go. Unlike her “explicit” predecessor, Mayer doesn’t stoop to lash out at the protesters but one can imagine what she thinks: “Shut up, you whiners. This is a turnaround, not a Baja California cruise!”

In the Valley, WFH has long been controversial. In spite of its undeniable benefits, too-frequent abuses led to WFH becoming a euphemism for goofing off, or for starting a software business on one’s employer’s dime, an honored tradition.

Telecommuting requires a secure VPN (Virtual Private Network) connection from your computer at home to the company’s servers. These systems keep a traffic log, a record of who connects, from what IP address, when, for how long, how much data, and so on. Now, picture a CEO from the Google tradition of data analysis. She looks at the VPN logs and sees too much “comfort”, to be polite.
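To make the scenario concrete, here is a minimal sketch, in Python, of the kind of log crunching a data-minded CEO could ask for. The log format, column names and threshold are invented for illustration; nothing here reflects Yahoo!'s actual systems:

```python
import csv
from collections import defaultdict

# Hypothetical VPN log: one CSV row per session, with invented column names
# (user, connected_at, duration_min, bytes_transferred).
def remote_activity(log_path):
    totals = defaultdict(lambda: {"sessions": 0, "minutes": 0, "bytes": 0})
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            t = totals[row["user"]]
            t["sessions"] += 1
            t["minutes"] += int(row["duration_min"])
            t["bytes"] += int(row["bytes_transferred"])
    return totals

# Flag self-declared remote workers whose weekly VPN usage looks suspiciously light.
def too_comfortable(totals, min_minutes_per_week=600):
    return sorted(user for user, t in totals.items() if t["minutes"] < min_minutes_per_week)

if __name__ == "__main__":
    activity = remote_activity("vpn_week.csv")  # hypothetical weekly export
    print(too_comfortable(activity))
```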

Mayer did what leaders do: She made a decision that made some people unhappy in order to achieve success for the whole enterprise (toned-up employees and shareholders). After years of watching Yahoo! lose altitude, I find the criticism leveled at Mayer encouraging for the company's future: her treatment hurts where it needs to.

Among the many critics of Mayer's no-WFH decision, the one I find most puzzling — or is it embarrassing? — emanates from the prestigious Wharton School of Business (at the University of Pennsylvania). In a Knowledge@Wharton article, scholars make sage but irrelevant comments such as:

Wharton faculty members who specialize in issues pertaining to employee productivity and work/life balance were similarly surprised by Mayer’s all-encompassing policy change. “Our experience in this field is that one-size-fits-all policies just don’t work,” notes Stewart Friedman, Wharton practice professor of management and director of the school’s Work/Life Integration Project. “You want to have as many tools as possible available to you as an executive to be able to tailor the work to the demands of the task. The fewer tools you have available, the harder it is to solve the problem.”

Nowhere in the article do the Wharton scholars consider the urgency of Yahoo’s situation, nor do they speculate that perhaps Mayer didn’t like what she found in the VPN logs. And, speaking of numbers, the Wharton experts provide no numbers, no sample size, no control group to buttress their statements. Our well-meaning academics might want to take a look at a recent blog post by Scott Adams, the prolific creator of corpocrat-skewering Dilbert cartoons. Titled Management/Success/Leadership: Mostly Bullshit, the post vigorously delivers what the title promises, as in this paragraph:

The fields of management/success/leadership are a lot like the finance industry in the sense that much of it is based on confusing correlation and chance with causation. We humans like to feel as if we understand and control our environments. We don’t like to think of ourselves as helpless leaves blowing in the wind of chance. So we clutch at any ridiculous explanation of how things work. 

Or this one, closer to today’s topic [emphasis mine]:

I first noticed the questionable claims of management experts back in the nineties, when it was fashionable to explain a company’s success by its generous employee benefits. The quaint idea of the time was that treating employees like kings and queens would free their creative energies to create massive profits. The boring reality is that companies that are successful have the resources to be generous to employees and so they do. The best way a CEO can justify an obscene pay package is by treating employees generously. To put this in another way, have you ever seen a corporate turnaround that was caused primarily by improving employee benefits?

Tony Hsieh, the founder and CEO of on-line shoe store Zappos, isn’t a blogger, cartoonist, or academic theoretician; he leads a very successful company that’s admired for its customer-oriented practices (culture, if you will). In this Business Insider piece, titled Here’s Why I Don’t Want My Employees To Work From Home, Hsieh is unequivocal about the value of Working From Work [emphasis mine]:

Research has shown that companies with strong cultures outperform those without in the long-term financially. So we’re big, big believers in building strong company cultures. And I think that’s hard to do remotely.

We don’t really telecommute at Zappos. We want employees to be interacting with each other, building those personal relationships and relationships outside of work as well.

What we found is when they have those personal connections that productivity increases because there’s higher levels of trust. Employees are willing to do favors for each other because they’re not just co-workers, but also friends, and communication is better. So we’re big believers in in-person interactions.

Who in good conscience believes that Mayer's edict is absolute and permanent? You have a sick child at home, will you be granted permission to work from home for a few days? Of course. Or, you're an asocial but genius coder, will you be allowed to code at home from 10 pm to 7 am? Again, yes. Mayer saw it done, with good results, at her previous company.

With Mayer's guidance, the patient has been stabilized and is on the road to recovery. But where does that road lead? What does Yahoo! want to be now that it's starting to act like a grownup? A better portal, a place to which we gravitate because, as an insider says, we'll find more relevant fodder — without relying on "friends"? This would be a return to Yahoo's original mission, one of cataloguing the Web, only with better technology and taste than Facebook, Google, AOL or even Microsoft's Bing (Yahoo's supplier of search data).

This leads to the $$ question, to Yahoo’s business model: advertising or services? With Google and now Facebook dominating the advertising space, how much room is left?

We hear Mayer is focusing Yahoo! on mobile applications. This sounds reasonable… but isn’t everyone?

In the search for a renewed identity (and profits), the question of alliances comes up. Who’s my enemy, my enemy’s enemy, irreplaceable partner/supplier, natural complement? In this regard, the Microsoft question will undoubtedly pop up again. I doubt Mayer has the utmost regard for Microsoft or for its CEO’s bullying style, but can she live without Bing? Is there an alternative? Also, what, if anything, could a healthier Yahoo! offer to Facebook or Apple?

The fun is just starting.

JLG@mondaynote.com

Data in the driver’s seat

 

Autonomous vehicles — fully or partially — will rely on a large variety of data types. And guess who is best positioned to take advantage of this enormous new business? Yep, Google is. 

The Google driverless car is an extraordinary technical achievement. To grasp its scope, watch this video featuring a near-blind man sitting behind the wheel of an autonomous Prius as the car does the driving. Or, to get an idea of the complexity of the system, see this presentation by Sebastian Thrun (one of the main architects of Google's self-driving car project) going through the multiple systems running inside the car.

Spectacular as it is, this public demonstration is merely the tip of the iceberg. For Google, the economics of self-driving cars lie in a vast web of data that will become a must to operate partially or fully self-driving vehicles on a massive scale. This network of data will require immense computational and storage capabilities. Consider the following needs in the context of Google’s current position in related fields.

Maps. Since the acquisition of Where2 Technologies and Keyhole Inc. in 2004, Google has been refining its mapping system over and over again (see this brief history of Google Maps). After a decade of work, Google Maps features a rich set of layers and functions. Its mapping of the world has been supplemented by crowdsourcing systems that allow corrections as well as the creation of city maps where data do not exist. Street View was launched in 2007 and more than 5 million miles of metropolitan areas have been covered. Today, maps are augmented with satellite imagery, 3D, 45-degree aerial views, buildings and infrastructure renderings. All this is now merged: you can plunge from a satellite view to the street level.

Google's goal is building the most complete and reliable map system in the world. Gradually, the company is replacing geo-data from third-party suppliers with data collected by its own crews around the world. To get an idea of how fast Google progresses, consider the following: In 2008, Google mapping covered 22 countries and offered 13 million miles with driving directions. In 2012, 187 countries were covered, 26 million miles with driving directions, including 29 countries with turn-by-turn directions. On the chart below, you can also see the growing areas of Google-sourced maps (in green) as opposed to licensed data (in red):

Apple’s failure in maps shows that, regardless of the amount of money invested, experience remains a key element. In California and India, Google maintains a staff of hundreds if not thousands of people manually checking key spots in large metropolitan areas and correcting errors. They rely on users whose individual suggestions are manually checked, using Street View imagery as shown here (the operator drags the 360° Street View image to verify signs at an intersection — click to enlarge.)

Google's engineers even developed algorithms aimed at correcting slight misalignments between "tiles" (pieces of satellite imagery stitched together) that could result from… tectonic plate movement — it could happen when two pictures are taken two years apart. Such accuracy is not a prerequisite for current navigation, but it could be important for autonomous cars that will depend heavily on ultra-precise (think what centimeters/inches mean when cars are close on the road) mapping of streets and infrastructures.

But, one might object, Google is not the only company providing geo-data and great mapping services. True: The Dutch company TomTom and the Chicago-based Navteq have been doing this for years. As geo-data became strategically important, TomTom acquired Tele Atlas for $2.9bn in 2008, and Nokia bought Navteq in 2007. But Google intends to move one step ahead by merging its mapping and imagery technologies with its search capabilities. Like in this image:

Accurate, usable and data-rich maps are one thing. Now, when you consider the variety of data needed for autonomous or semi-autonomous vehicles, the task becomes even more daunting. The list goes on:

Traffic conditions will be a key element. It's pointless to envision fleets of self-driving, or assisted-driving, cars without systems to manage traffic. This goes along with infrastructure development. For instance, as Dr. Kara Kockelman, professor of transportation engineering at the University of Texas at Austin, explained to me, in the future we might see substantial infrastructure renovation aimed at accommodating autonomous vehicles (or vehicles set on self-driving mode). Dedicated highway corridors would be allocated to "platoons" of cars driving close together, in a faster and safer way than manned cars. Intersections, she said, are also a key challenge as they are responsible for most traffic jams (and a quarter of accidents). With the advent of autonomous vehicles, we can imagine cars taken over by intersection management systems that will regroup them in platoons and feed them seamlessly into intersecting traffic flows, like in this spectacular simulation. If traffic lights are still needed, they will change every five or six seconds just to optimize the flow.

Applied to millions of vehicles, traffic and infrastructure management will turn into a gigantic data and communication problem. Again, Google might be the only entity able to write the required software and to deploy the data centers to run it. Its millions of servers will be of great use to handle weather information, road conditions (as cars might be able to monitor their actual friction on the road and transmit the data to following vehicles, or detect humidity and temperature change), parking data and fuel availability (gas or electricity). And we can even think of merging all this with day-to-day life elements such as individual calendars, commuting patterns and geolocating people through their cell phones.

If the data collection and crunching tasks can conceivably be handled by a Google-like player, communications remain an issue. "There is not enough overlap between car-to-car communication and in other fields", Sven Beiker, director of the Center for Automotive Research (CARS) at Stanford, told me (see his recent lecture about The Future of the Car). He is actually echoing executives from Audi (who made a strategic deal with Google), BMW and Ford; together at the Mobile World Congress, they were critical of cell phone carriers' inability to provide the right 4G (LTE) infrastructure to handle the amount of data required by future vehicles.

Finally, there is the question of an operating system for cars. Experts are divided. Sven Beiker believes the development of self-driving vehicles will depend more on communication protocols than on an OS per se. Others believe that Google, with its fleet of self-driving Priuses criss-crossing California, is building the first OS dedicated to autonomous vehicles. At some point, the search giant could combine its mapping, imagery and local search capabilities with the accumulation of countless self-driven miles, along with scores of specific situations "learned" by the cars' software. The value thus created would be huge, giving Google a decisive position in yet another field. The search company could become the main provider of both systems and data for autonomous or semi-autonomous cars.

frederic.filloux@mondaynote.com

Linking: Scraping vs. Copyright

 

Irish newspapers created quite a stir when they demanded a fee for incoming links to their content. Actually, this is a mere prelude to a much more crucial debate on copyrights,  robotic scraping and subsequent synthetic content re-creation from scraps. 

The controversy erupted on December 30th, when an attorney from the Irish law firm McGarr Solicitors exposed the case of one of its clients, the Women's Aid organization, being asked to pay a fee to Irish newspapers for each link they send to them. The main quote from McGarr's post:

They wrote to Women’s Aid, (amongst others) who became our clients when they received letters, emails and phone calls asserting that they needed to buy a licence because they had linked to articles in newspapers carrying positive stories about their fundraising efforts.
These are the prices for linking they were supplied with:

1 – 5 links: €300.00
6 – 10 links: €500.00
11 – 15 links: €700.00
16 – 25 links: €950.00
26 – 50 links: €1,350.00
50 + links: Negotiable

They were quite clear in their demands. They told Women’s Aid “a licence is required to link directly to an online article even without uploading any of the content directly onto your own website.”

Recap: The Newspapers’ agent demanded an annual payment from a women’s domestic violence charity because they said they owned copyright in a link to the newspapers’ public website.

Needless to say, the twittersphere, the blogosphere and, by and large, every self-proclaimed cyber moral authority, reacted in anger to Irish newspapers’ demands that go against common sense as well as against the most basic business judgement.

But on closer examination, the Irish dead tree media (soon to be dead for good if they stay on that path) is just the tip of the iceberg for an industry facing issues that go well beyond its reluctance to embrace the culture of web links.

Try googling the following French legalese: "A défaut d'autorisation, un tel lien pourra être considéré comme constitutif du délit de contrefaçon". (It means any unauthorized incoming link to a site will be seen as a copyright infringement.) This search gets dozens of responses. OK, most come from large consumer brands (carmakers, food industry, cosmetics) who don't want a link attached to an unflattering term sending the reader to their product description… Imagine "lemon" linked to a car brand.

Until recently, you couldn’t find many media companies invoking such a no-link policy. Only large TV networks such as TF1 or M6 warn that any incoming link is subject to a written approval.

In reality, except for obvious libel, no-links policies are rarely enforced. M6 Television even lost a court case against a third party website that was deep-linking to its catch-up programs. As for the Irish newspapers, despite their dumb rate card for links, they claimed to be open to “arrangements” (in the ill-chosen case of a non-profit organization fighting violence against women, flexibility sounds like a good idea.)

Having said that, such a posture reflects a key fact: Traditional media, whether newspapers or broadcasters, send contradictory messages when it comes to links, which are simply not part of their original culture.

The position paper of the National Newspapers of Ireland association deserves a closer look (PDF here). It actually contains a set of concepts that resonate with the position defended by the European press in its current dispute with Google (see background story in the NYTimes); here are a few:

– It is the view of NNI that a link to copyright material does constitute infringement of copyright, and would be so found by the Courts.
– [NNI then refers to a decision of the UK court of Appeal in a case involving Meltwater Holding BV, a company specialized in media monitoring], that upheld the findings of the High Court which findings included:
- that headlines are capable of being independent literary works and so copying just a headline can infringe copyright
- that text extracts (headline plus opening sentence plus “hit” sentence) can be substantial enough to benefit from copyright protection
- that an end user client who receives a paid-for monitoring report of search results (incorporating a headline, text extract and/or link) is very likely to infringe copyright unless they have a licence from the Newspaper Licencing Agency or directly from a publisher.
– NNI proposes that, in fact, any amendment to the existing copyright legislation with regard to deep-linking should specifically provide that deep-linking to content protected by copyright without respect for  the linked website’s terms and conditions of use and without regard for the publisher’s legitimate commercial interest in protecting its own copyright is unlawful.

Let’s face it, most publishers I know would not disagree with the basis of such statements. In the many jurisdictions where a journalist’s most mundane work is protected by copyright laws, what can be seen as acceptable in terms of linking policy?

The answer seems to revolve around matters of purpose and volume.

To put it another way, if a link serves as a kind of helper or reference, publishers will likely tolerate it. (In all fairness, NNI explicitly "accepts that linking for personal use is a part of how individuals communicate online and has no issue with that" — even if the notion of "personal use" is pretty vague.) Now, if the purpose is commercial and if linking is aimed at generating traffic, NNI raises the red flag (even though the legal grounds are rather brittle.) Hence the particular Google case, which also carries a notion of volume as the search engine claims to harvest thousands of sources for its Google News service.

There is a catch. The case raised by NNI and its putative followers is weakened by a major contradiction: everywhere, Ireland included, news websites invest a great deal of resources in order to achieve the highest possible rank in Google News. Unless specific laws are passed (German lawmakers are working on such a bill), attorneys will have a hard time invoking copyright infringements that in fact stem from the very Search Engine Optimization tactics publishers encourage.

But there might be more at stake. For news organizations, the future carries obvious threats that require urgent consideration: In coming years, we'll see great progress — so to speak — in automated content production systems. With or without link permissions, algorithmic content generators will be able (in fact, already are) to scrape sites' original articles, aggregate and reprocess those into seemingly original content, without any mention, quotation, links, or reference of any kind. What awaits the news industry is much more complex than dealing with links from an aggregator.

It boils down to this: The legal debate on linking as copyright infringement will soon be obsolete. The real question will emerge as a much more complex one: Should a news site protect itself from being “read”  by a robot? The consequences for doing so are stark: except for a small cohort of loyal readers, the site would purely and simply vanish from cyberspace… Conversely, by staying open to searches, the site exposes itself to forms of automated and stealthy depletion that will be virtually impossible to combat. Is the situation binary — allowing “bots” or not — or is there middle ground? That’s a fascinating playground for lawyers and techies, for parsers of words and bits.
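For what it's worth, a crude version of that middle ground already exists: the robots exclusion protocol. A news site can publish a robots.txt file telling crawlers which sections they may read. Here is a minimal sketch, with an example domain and hypothetical paths:

```
# robots.txt, served at the site root; honored only by well-behaved crawlers
# (example domain and paths, for illustration)

# Let a chosen aggregator index everything
User-agent: Googlebot
Allow: /

# Keep all other bots away from full article pages
User-agent: *
Disallow: /articles/

Sitemap: https://example.com/sitemap.xml
```

The catch, of course, is that the protocol is purely declarative: the stealthy scrapers described above can simply ignore it.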

frederic.filloux@mondaynote.com

Google’s looming hegemony

 

If we factor in Google's geospatial applications + its unique data processing infrastructure + Android tracking, etc., we're seeing the potential for absolute power over the economy.

Large utility companies worry about Google. Why? Unlike those who mock Google for being a "one-trick pony", with 99% of its revenue coming from Adwords, they connect the dots. Right before our eyes, the search giant is weaving a web of services and applications aimed at collecting more and more data about everyone and every activity. This accumulation of exabytes (and the ability to process such almost inconceivable volumes) is bound to impact sectors ranging from power generation to transportation and telecommunications.

Consider the following trends. At every level, Western countries are crumbling under their debt load. Nations, states, counties, municipalities become unable to support the investment necessary to modernize — sometimes even to maintain — critical infrastructures. Globally, tax-raising capabilities are diminishing.

In a report about infrastructure in 2030 (500-page PDF here), the OECD makes the following predictions (emphasis mine):

Through to 2030, annual infrastructure investment requirements for electricity, road and rail transport, telecommunications and water are likely to average around 3.5% of world gross domestic product (GDP).

For OECD countries as a whole, investment requirements in electricity transmission and distribution are expected to more than double through to 2025/30, in road construction almost to double, and to increase by almost 50% in the water supply and treatment sector. (…)

At present, governments are not well placed to meet these growing, increasingly complex challenges. The traditional sources of finance, i.e. government budgets, will come under significant pressure over the coming decades in most OECD countries – due to aging populations, growing demands for social expenditures, security, etc. – and so too will their financing through general and local taxation, as electorates become increasingly reluctant to pay higher taxes.

What's the solution? The private sector will play a growing role through Public-Private Partnerships (PPPs). In these arrangements, a private company (or, more likely, a consortium of such) builds a bridge, a motorway, a railroad for a city, region or state, at no expense to the taxpayer. It will then reimburse itself from the project's cash-flow. Examples abound. In France, the elegant €320m ($413m) viaduct of Millau was built — and financed — by Eiffage, a €14 billion revenue construction group. In exchange for financing the viaduct, Eiffage was granted a 78-year toll concession with an expected internal rate of return ranging from 9.2% to 17.3%. Across the world, a growing number of projects are built using this type of mechanism.

How can a company commit hundreds of millions of euros, dollars, pounds with an acceptable level of risk over several decades? The answer lies in data analysis and predictive models. Companies engineer credible cash-flow projections using reams of data on operations, usage patterns and component life cycles.

What does all this have to do with Google?

Take a transportation company building and managing networks of buses, subways or commuter trains in large metropolitan areas. Over the years, the analysis of tickets and passes will yield tons of data on customer flows, timings, train loads, etc. This is of the essence when assessing the market's potential for a new project.

Now consider how Google aggregates the data it collects today — and what it will collect in the future. It's a known fact that cellphones send geolocation data back to Mountain View (or Cupertino). Bouncing from one cell tower to another, or catching the signal of a geolocated wifi transmitter, an Android phone can be tracked in real time even if its GPS function is turned off. Bring this (compounded and anonymized) dataset onto information-rich maps, including indoor ones, and you will get very high definition profiles of who goes or stays where, anytime.

Let's push it a bit further. Imagine a big city such as London, operating 500,000 security cameras, which represent the bulk of the 1.85 million CCTVs deployed in the UK — one for every 32 citizens. 20,000 of them are in the subway system. The London Tube is the perfect candidate for partial or total privatization as it bleeds money and screams for renovations. In fact, as several people working at the intersection of geo applications and big data projects told me, Google would be well placed to provide the most helpful datasets. In addition to the circulation data coming from cellphones, Google would use facial recognition technology. As these algorithms are already able to differentiate a woman from a man, they will soon be able to identify (anonymously) ethnicities, ages, etc. Am I exaggerating? Probably not. Mercedes-Benz already has a database of 1.5 million visual representations of pedestrians to be fed into the software of its future self-driving cars. This is a type of application in which, by the way, Google possesses a strong lead with its fleets of driverless Priuses crisscrossing Northern California and Nevada.

Coming back to the London Tube and its unhappy travelers, we have traffic data, to some degree broken down into demographic clusters; why not then add shopping data (also geo-tagged) derived from search and ads patterns, Street View-related information… Why not also supplement all of the above with smart electrical grid analysis that could refine predictive models even further (every fraction of a percentage point counts…)

The value of such models is much greater than the sum of their parts. While public transportation operators or utility companies are already good at collecting and analyzing their own data, Google will soon be in the best position to provide powerful predictive models that aggregate and connect many layers of information. In addition, its unparalleled infrastructure and proprietary algorithms provide a unique ability to process these ever-growing datasets. That's why many large companies around the world are concerned about Google's ability to soon insert itself into their business.

frederic.filloux@mondaynote.com

 

Schibsted’s extraordinary click machines

 

The Nordic media giant wants to be the worldwide #1 in online classifieds by replicating its high-margin business one market after another, with great discipline.

It all starts in 2005 with a PowerPoint presentation in Paris. At the time, Schibsted ASA, the Norwegian media group, is busy deploying its free newspapers in Switzerland, France and Spain. Schibsted wants its French partner Ouest-France — the largest regional newspaper group — to co-invest in a weird concept: free online classifieds. As always with the Scandinavians, the deck of slides is built around a small number of key points. To them, three symptoms attest to the maturity of a market's online classified business: (a) The number one player in the field ranks systematically among the top 10 web sites, regardless of the category; (b) it is always much bigger than the number two; (c) it reaps most of the profits in the sector. "Look at the situation here in France", the Norwegians say, "the first classifieds site ranks far down in Nielsen rankings. The market is up for grabs, and we intend to get it". The Oslo and Stockholm executives already had an impressive track record: in 2000, they launched Finn.no in Norway and, in 2003, they acquired Blocket.se in Sweden. Both became incredible cash machines for the group, with margins above 50% and unabated growth.

Ouest-France eventually agreed to invest 50% in the new venture. In November 2010, they sold their stake back to Schibsted at a €400m valuation. (As we'll see in a moment, the classified site Le Bon Coin is now worth more than twice that number.)

November 2012. I’m sitting in the office of Olivier Aizac, CEO of Le Bon Coin, the French iteration of Schibsted’s free classifieds concept. The office space is dense and scattered over several floors in a building near the Paris Bourse. Since my last 2009 visit (see a previous Monday Note Learning from free classifieds), the startup grew from a staff of 15 to 150 people. And Aizac tells me he plans to hire 70 more staff in 2013. Crisis or not, the business is booming.

A few metrics: According to Nielsen, LeBonCoin.fr (French for The Right Spot) ranks #9 in France with 17m monthly unique users. With more than 6 billion page views per month, it even ranks #3, behind Facebook and Google. Revenue-wise, Le Bon Coin might hit the €100m mark this year, with a profit margin slightly above… 70%. For the 3rd quarter of this year, the business grew by 50% vs. a year ago.

In terms of competition, it dominates every segment: cars, real estate (twice the size of Axel Springer's SeLoger.com) and jobs, with about 60,000 classifieds, roughly five times the inventory of a good paid-for job board (LeBonCoin is not positioned in the upper segment, though; it mostly targets regional small to medium businesses).

Le Bon Coin's revenue stream is made up of three parts: premium services (you pay to add a picture, a better ranking, tracking on your ad); fees coming from the growing number of professionals who flock to LBC (many car dealerships put their entire inventory here); and advertising, for which the primary sectors are banking and insurance, services such as mobile phone carriers or pay-TV, and automobiles. Although details are scarce, LBC seems to have given up the usual banner sales, focusing instead on segmented yearly deals: A brand will target a specific demographic and LBC will deliver, for half a million or a million euros per annum.

One preconceived idea depicts Le Bon Coin as sitting at the cheaper end of the consumer market. Wrong. In the car segment, its most active advertiser is Audi, for whom LBC provides tailor-made promotions. (Strangely enough, Renault is much slower to catch the wave.) "We are able to serve any type of market", says Olivier Aizac, who shows an ad peddling a €1.4m Bugatti, and another for the brand new low-cost Peugeot 301, not yet available in dealerships but offered on LBC for €15,000. Similarly, LBC is the place to go to rent a villa on the Cote d'Azur or a chalet for the ski season. With more than 21 million ads at any given moment, you can find pretty much anything there.

Now, let’s zoom out and look at a broader picture. How far can Le Bon Coin go? And how will its cluster of free classifieds impact Schibsted’s future?

Today, free online classifieds weigh about 25% of Schibsted revenue (about 15bn Norwegian Kroner, €2bn this year), but they account for 47% of the group's Ebitda (2.15bn NOK, €300m). All online activities now represent 39% of the revenue and 62% of the Ebitda.

The whole strategy can be summed up in these two charts: The first shows the global deployment of the free classifieds business (click to enlarge):

Through acquisitions, joint ventures or ex nihilo creations, Schibsted now operates more than 20 franchises. Their development process is highly standardized. Growth phases have been codified in great detail, managers often gather to compare notes and the Oslo mothership watches everything, providing KPIs, guidelines, etc. The result is this second chart showing the spread of deployment phases. More than half of the portfolio is still in its infancy, but most properties are likely to follow the path to success:

Source: Schibsted Financial Statements

This global vision, combined with what is seen as near-perfect execution, explains why the financial community is betting so much on Schibsted's classified business.

When assessing the potential of each local brand, analysts project the performances of the best and mature properties (the Nordic ones) onto the new ones. As an example, see below the number of visits per capita and per month from web and mobile since product launch:

Source: Danske Market Equities

For Le Bon Coin’s future, this draws a glowing picture: according to Danske Market Equities, today, the Norwegian Finn.no generates ten times more revenue per page view than LBC, and twenty times more when measured by Average revenue per user (ARPU). The investment firm believes that Le Bon Coin’s revenue can reach €500m in 2015, and retain a 65% margin. (As noted by its CEO, Le Bon Coin has yet to tap into its trove of data accumulated over the last six years, which could generate highly valuable consumer profiling information).

When translated into valuation projections, the performance of Schibsted's classifieds businesses far exceeds the weight of traditional media properties (print and online newspapers). The sum-of-the-parts valuations drawn by several private equity firms show the value of the classifieds business yielding more than 80% of the total value of this 173-year-old group.

frederic.filloux@mondaynote.com
Disclosure: I worked for Schibsted for nine years altogether between 2001 and 2010; six years indirectly as the editor of 20 minutes and three years afterwards, in a business development unit attached to the international division.
——- 

It’s the Competitive Spirit, Stupid

 

Legacy media suffer from a deadly DNA mutation: they’ve lost  their appetite for competition; they no longer have the will to fight the hordes of new, hungry mutants emerging from the digital world. 

For this week’s column, my initial idea was to write about Obama’s high tech campaign. As in 2008, his digital team once again raised the bar on the use of data mining, micro-targeting, behavioral analysis, etc. As Barack Obama’s strategist David Axelrod suggested just a year ago in Bloomberg BusinessWeek, compared to what they were working on, the 2008 campaign technology looked prehistoric. Without a doubt, mastering the most sophisticated practices played a crucial role in Obama’s November 6th victory.

As I researched the subject, I decided against writing about it. This early after the election, it would have been difficult to produce more than a mere update to my August 2008 story, Learning from the Obama Internet Machine. But, OK. For those of you interested in the matter, here are a couple of resources I found this week: An interesting book by Sasha Issenberg, The Victory Lab, The Secret Science of  Winning Campaigns, definitely worth a read; or previously unknown tidbits in this Stanford lecture by Dan Siroker, an engineer who left Google to join the Obama campaign in 2008. (You can also feast on a Google search with terms like “obama campaign + data mining + microtargeting”.)

I switched subjects because something jumped out at me: the contrast between a modern election campaign and the way traditional media cover it. If it could be summed up in a simplistic (and, sorry, too obvious) graph, it would look like this:

The 2012 election campaign carries all the ingredients of the fiercest of competitions: concentrated in a short time span; fueled by incredible amounts of cash (thus able to get the best talent and technology money can buy); a workforce that is, by construction, the most motivated any manager can dream of; a dedicated staff led by charismatic stars of the trade; a binary outcome with a precise date and time (the first Tuesday after the first Monday of November, every four years). As if this was not enough, the two camps actually compete for a relatively small part of the electorate, the single digit percentage that will swing one way or the other.

At the other end of the spectrum, you have traditional media. Without falling into caricature, we can settle for the following descriptors: a significant pool of (aging) talent; a great sense of entitlement; a remote connection with the underlying economics of the business; a remarkable tolerance for mediocrity (unlike, say, pilots, or neurosurgeons); and, stemming from said tolerance, a symmetrical no-reward policy — perpetuated by unions and guilds that drove the nails into the media's coffin.

My point: This low level of competitive metabolism has had a direct and negative impact on the economic performance of legacy media.

In countries, regions, or segments where newsrooms compete the most on a daily basis (on digital or print), business is doing just fine.

That is the case in Scandinavia, which enjoys good and assertive journalism, with every outlet trying to beat the others in every possible way: investigation, access to sources, creative treatment, real-time coverage, innovations in digital platforms… The UK press is also intensely competitive — sometimes for the worse as shown in the News Corp phone hacking scandal. To some extent, German, Italian, Spanish media are also fighting for the news.

At the other end of the spectrum, the French press mostly gave up competing. The market is more or less distributed on the basis of readers' inclinations. The biggest difference manifests itself when a source decides to favor one outlet over the others. Reminding someone of the importance of competing, of sometimes taking a piece of news from someone else's plate, tends to be seen as ill-mannered, not done. The result is an accelerating drop in newspaper sales. Strangely enough, Nordic media will cooperate without hesitation when it comes to sharing industrial resources such as printing plants and distribution channels while being at each other's throat when it comes to news gathering. By contrast, the French will fight over printing resources, but will cooperate when it's time to get subsidies from the government or to fight Google.

Digital players do not suffer from such a cumbersome legacy. Building organizations from scratch, they hired younger staff and set up highly motivated newsrooms. Pure players such as Politico, Business Insider, TechCrunch and plenty of others are fighting on their beats, sometimes against smaller but sharper blogs. Their journalistic performance (although uneven) translates into measurable audience bursts that turn into advertising revenues.

Financial news also falls into that same category. Bloomberg, Dow Jones and Reuters are fighting for their market-mover status as well as for the quality — and usefulness — of their reporting; subscriptions to their services depend on such performance. Hence the emergence of a "quantifiable motivation" for the staff. At Bloomberg — one of the most aggressive news machines in the world — reporters are provided financial incentives for their general performance and rewarded for exclusive information. Salaries and bonuses are high, so is the workload. But CVs are pouring in — a meaningful indicator.

Digital newsrooms are much more inclined to performance measurements than old ones. This should be seen as an advantage. As gross as it might sound to many journalists, media should seize the opportunity that comes with modernizing their publishing tools to revise their compensation policies. The main index should be “Are we doing better than the competition? Does X or Y contribute to our competitive edge?”. Aside from the editor’s judgement, new metrics will help. Ranking in search engines and aggregators; tweets, Facebook Likes; appearances on TV or radio shows; syndication (i.e. paid-for republication elsewhere)… All are credible indicators. No one should be afraid to use them to reward talent and commitment.

It's high time to reshuffle the nucleotides and splice in competitive DNA strands; they do contribute to economic performance.

frederic.filloux@mondaynote.com

 

Minding The (Apple)Store

 

As I’ve written many times in the past, I’m part of the vast chorus that praises the Apple Store. And not just for the uncluttered product displays, the no-pressure sales people (who aren’t on commission), or the Genius Bar that provides expert help, but for the impressive architecture. Apple beautifies existing venues (Regent Street in London, rue Halevy near the Paris Opera) or commissions elegant new buildings, huge ones at times.

It’s a relentlessly successful story. Even the turmoil surrounding John Browett’s abbreviated tenure as head of Apple’s worldwide retail organization hasn’t slowed the pace  of store openings and customer visits. (As always, Horace Dediu provides helpful statistics and analysis in his latest Asymco post.)

It has always struck me as odd that in Palo Alto, Apple’s heartland and Steve Jobs’ adopted hometown, Apple had only a modestly-sized, unremarkable venue on University Avenue, and an even smaller store in the Stanford Shopping Center.

All of that changed on October 27th when the black veil that shrouded an unmarked project was removed, and the newest Apple Store — what some are calling a “prototype” for future venues, a “flagship” store — was revealed. (For the civic-minded — or the insomniac — you can read the painfully detailed proposal, submitted to Palo Alto’s Architectural Review Board nearly three years ago, here.)

I came back from a trip on November 2nd, the day the iPad mini became available, and immediately headed downtown. The new store is big, bold, elegant, even more so at night when the very bright lights and large Apple logo on its front dominate the street scene. (So much so that I heard someone venture that Apple has recast itself as the antagonist in its 1984 commercial.)

The store is impressive… but it's also unpleasantly, almost unbearably noisy. And mine isn't a voice in the wilderness. The wife of a friend walked in, spent a few minutes, and vowed to never return for fear of hearing loss. She'd rather go to the cramped but much more hospitable Stanford store.

A few days later, I heard a similar complaint from the spouse of an Apple employee. She used to enjoy accompanying her husband to the old Palo Alto store, but now refuses because of the cacophony.

‘Now you know the real reason for Browett’s firing’, a friend said, half-seriously. ‘How can you spend North of $15M on such a strategically placed, symbolic store, complete with Italian stone hand-picked by Jobs himself…and give no consideration to the acoustics? It’s bad for customers, it’s bad for the staff, it’s bad for business, and it’s bad for the brand. Apple appears to be more concerned with style than with substance!’

Ouch.

The sound problem stems from a combination of the elongated “Great Hall”, parallel walls, and reflective building materials. The visually striking glass roof becomes a veritable parabolic sound mirror. There isn’t a square inch of sound-absorbing material in the entire place.

A week later, I returned to the store armed with the SPL Meter iPhone app. As the name indicates, SPL Meter provides a Sound Pressure Level (SPL) measurement in decibels. (Decibels form a logarithmic scale where a 3 dB increase means roughly twice as much acoustic power — noise, in our case; +10 dB is ten times the power.)

For reference, a normal conversation at 3 feet (1m) is 40 to 60 dB; a passenger car 30 feet away produces levels between 60 and 80 dB. From the Wikipedia article above: “[The] EPA-identified maximum to protect against hearing loss and other disruptive effects from noise, such as sleep disturbance, stress, learning detriment, etc. [is] 70 dB.”

On a relatively quiet Saturday evening, the noise level around the Genius Bar exceeded 75 dB:

Outside, the traffic noise registered a mere 65 dB. It was 10 dB noisier inside the store than on always-busy University Avenue!
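For readers who want to check the arithmetic, here is a small sketch of the standard decibel conversions (generic formulas, nothing specific to the SPL Meter app):

```python
def power_ratio(db_delta):
    # Acoustic power (intensity) follows 10 * log10: +3 dB is roughly x2, +10 dB is x10.
    return 10 ** (db_delta / 10)

def pressure_ratio(db_delta):
    # Sound pressure is a field quantity and follows 20 * log10: +6 dB is roughly x2.
    return 10 ** (db_delta / 20)

# The 10 dB gap measured between the store (75 dB) and the street (65 dB):
print(round(power_ratio(10), 1))     # 10.0 -> ten times the acoustic power
print(round(pressure_ratio(10), 2))  # 3.16 -> about three times the sound pressure
```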

Even so, the store on that Friday was a virtual library compared to the day the iPad mini was launched, although I can’t quantify my impression: I didn’t have the presence of mind to whip out my iPhone and measure it.

Despite the (less-than-exacting) scientific evidence and the corroborating anecdotes, I began to have my doubts. Was I just “hearing things”? Could Apple really be this tone deaf?

Then I saw it: An SPL recorder — a professional one — perched on a tripod inside the store.

I also noticed two employees wearing omnidirectional sound recorders on their shoulders (thinking they might not like the exposure, I didn’t take their pictures.) Thus, it appears that Apple is taking the problem seriously.

But what can it do?

It’s a safe bet that Apple has already engaged a team of experts, acousticians who tweak the angles and surfaces in concert halls and problem venues. I’ve heard suggestions that Apple should install an Active Noise Control system: Cancel out sound waves by pumping in their inverted forms — all in real time. Unfortunately, this doesn’t work well (or at all) in a large space.

Bose produces a rather effective solution…in the controlled environment of headphones.

This prompted the spouse mentioned above to suggest that Apple should hand out Bose headphones at the door.

Two days after the noisy Apple store opened its doors, Browett was shown the exit. Either Tim Cook is fast on the draw or, more likely, my friend is wrong: Browett’s unceremonious departure had deeper roots, most likely a combination of a cultural mismatch and a misunderstanding of his role. The Browett graft didn’t take on the Apple rootstock, and the newly hired exec couldn’t accept that he was no longer a CEO.

Browett can't be scapegoated for the acoustical nightmare in the new Apple Store. Did the rightly famous architectural firm, Bohlin Cywinski Jackson, not hear the problem? What about the highly reputable building contractor (DPR) which has built so many other Apple Stores? Did they stand by and say nothing, or could they simply not be heard?

Perhaps this was a case of “Launchpad Chicken”, a NASA phrase for a situation where many people see trouble looming but keep quiet and wait for someone else to bear the shame of aborting the launch. It reminds me of the Apple Maps fiasco: An obvious problem ignored.

What a waste, spending all that money and raising expectations only to move from a slightly undersized but well-liked store to a bigger, noisier, colder environment that turns friends away.

Having tacitly admitted that there’s a problem, Apple’s senior management can now show they’ll stop at nothing to make the new store as inviting as it was intended to be.

JLG@mondaynote.com

The Apple Tax, Part II

Once upon a time, Steve Ballmer blasted Apple for asking its customers to pay $500 for an Apple logo. This was the “Apple Tax“, the price difference between the solid, professional workmanship of a laptop running on Windows, and Apple’s needlessly elegant MacBooks.

Following last week’s verdict against Samsung, the kommentariat have raised the specter of an egregious new Apple Tax, one that Apple will levy on other smartphone makers who will have no choice but to pass the burden on to you. The idea is this: Samsung’s loss means it will now have to compete against Apple with its dominant hand — a lower price tag — tied behind its back. This will allow Apple to exact higher prices for its iPhones (and iPads) and thus inflict even more pain and suffering on consumers.

There seems to be a moral aspect, here, as if Apple should be held to a higher standard. Last year, Apple and Nokia settled an IP “misunderstanding” that also resulted in a “Tax”…but it was Nokia that played the T-Man role: Apple paid Nokia more than $600M plus an estimated $11.50 per iPhone sold. Where were the handwringers who now accuse Apple of abusing the patent system when the Nokia settlement took place? Where was the outrage against the “evil”, if hapless, Finnish company? (Amusingly, observers speculate that Nokia has made more money from these IP arrangements than from selling its own Lumia smartphones.)

Even where the moral tone is muted, the significance of the verdict (which you can read in full here) is over-dramatized. For instance, see this August 24th Wall Street Journal story sensationally titled After Verdict, Prepare for the ‘Apple Tax’:

After its stunning victory against rival device-maker Samsung Electronics Co., experts say consumers should expect smartphones, tablets and other mobile devices that license various Apple Inc., design and software innovations to be more expensive to produce.

“There may be a big Apple tax,” said IDC analyst Al Hilwa. “Phones will be more expensive.”

The reason is that rival device makers will likely have to pay to license the various Apple technologies the company sought to protect in court. The jury found that Samsung infringed as many as seven Apple patents, awarding $1.05 billion in damages.

The $1B sum awarded to Apple sounds impressive, but to the giants involved, it doesn’t really change much. Samsung’s annual marketing budget is about $2.75B (it covers washer-dryers and TVs, but it’s mostly smartphones), and, of course, Apple is sitting on a $100B+ cash hoard.

Then there’s the horror over the open-ended nature of the decision: Apple can continue to seek injunctions against products that infringe on their patents. From the NYT article:

…the decision could essentially force [Samsung] and other smartphone makers to redesign their products to be less Apple-like, or risk further legal defeats.

Certainly, injunctions could pose a real threat. They could remove competitors, make Apple more dominant, give it more pricing power to the consumer’s detriment…but none of this is a certainty. Last week’s verdict and any follow-up injunctions are sure to be appealed and appealed again until all avenues are exhausted. The Apple Tax won’t be enforced for several years, if ever.

And even if the "Tax" is assessed, will it have a deleterious impact on device manufacturers and consumers? Last year, about half of all Android handset makers — including ZTE, HTC, Sharp — were handed a Microsoft Tax bill ($27 per phone in ZTE's case), one that isn't impeded by an obstacle course of appeals. Count Samsung in this group: The Korean giant reportedly agreed to pay Microsoft "between $10 and $15" for each Android smartphone or tablet computer it sells. Sell 100M devices and the tax bill owed to Ballmer and Co. exceeds $1B. Despite this onerous surcharge, Android devices thrive, and Samsung has quickly jumped to the lead in the Android handset race (from Informa, Telecoms & Media):

Amusingly, the Samsung verdict prompted this gloating tweet from Microsoft exec Bill Cox:

Windows Phone is looking gooooood right now.

(Or, as AllThingsD interpreted it: Microsoft to Samsung. Mind if I Revel in Your Misfortune for a Moment?)

The subtext is clear: Android handset makers should worry about threats to the platform and seek safe harbor with the “Apple-safe” Windows Phone 8. This will be a “goooood” thing all around: If more handset makers offer Windows Phone devices, there will be more choices, fewer opportunities for Apple to get “unfairly high” prices for its iDevices. The detrimental effects, to consumers, of the “Apple Tax” might not be so bad, after all.

The Samsung trial recalls the interesting peace agreement that Apple and Microsoft forged in 1997, when Microsoft “invested” $150M in Apple as a fig-leaf for an IP settlement (see the end of the Quora article). The interesting part of the accord is the provision in which the companies agree that they won’t “clone” each other’s products. If Microsoft could arrange a cross-license agreement with Apple that includes an anti-cloning provision and eventually come up with its own original work (everyone agrees that Microsoft’s Modern UI is elegant, interesting, not just a knock-off), how come Samsung didn’t reach a similar arrangement and produce its own distinctive look and feel?

Microsoft and Apple saw that an armed peace was a better solution than constant IP conflicts. Can Samsung and Apple decide to do something similar and feed engineers rather than platoons of high-priced lawyers (the real winners in these battles)?

It’s a nice thought but I doubt it’ll happen. Gates and Jobs had known one another for a long time; there was animosity, but also familiarity. There is no such comfort between Apple and Samsung execs. There is, instead, a wide cultural divide.

JLG@mondaynote.com

Apple: Three Intriguing Numbers

No Monday Note last week: I was in The Country of Sin, enjoying pleasures such as TGV trips across a landscape of old villages, Romanesque churches, Rhône vineyards — and a couple of nuclear power plants. All this without our friendly TSA.

Back in the Valley, Apple just released their latest quarterly numbers. They weren’t as good as expected, a fact that launched a broadside of comments ranging from shameless pageview whoring (I’m looking at you, Henry) to calm but worried (see Richard Gaywood’s analysis).

As I’ll attempt to explain below, Apple’s latest quarterly performance is unusual. But, stepping back a bit, the company’s numbers are nonetheless phenomenal.

Net sales, growing 23%, are more than three times larger than Amazon’s — and Apple’s net income is more than 1,000 times larger: $8.8B vs. a tiny $7M for the Seattle giant, whose shares went up on the earnings release anyway.

Turning to Google: Apple’s sales of $35B are more than three times Google’s $11.3B (which includes Motorola for the first time), with net income in a similar ratio, $8.8B vs. $2.8B.

Ending the comparisons with Microsoft: its revenue grew 4% to $18B, about half of Apple’s, and, for the first time, the company posted a net loss, $492M, due to the huge $6.2B aQuantive write-off, a one-time event. Excluding that charge, Microsoft’s net income would have been about $5.5B, two thirds of Apple’s. iPhone revenue, at $16B for the quarter, approaches Microsoft’s number for the entire company; iPad revenue, at $9B, is about half.

For in-depth coverage of Apple’s Q3 FY 2012, you can turn to Philip Elmer-DeWitt’s Apple 2.0 or Horace Dediu’s Asymco — possibly the best source of fine-grained industry analysis. I can also recommend Daring Fireball for John Gruber’s lapidary comments and carefully chosen links, and Brian Hall’s Smartphone Wars — vigorous commentary and insights, occasionally couched in NSFW language. Of course, you can always wade through Apple’s 10-Q SEC filing, if you have the time and inclination. Of particular interest is Section 2, MD&A (Management Discussion and Analysis), starting on page 21.

Out of this torrent of information and argument, I suggest we look at three numbers.

First, the 3% “Miss”, Wall Street’s term for failing to hit the revenue bull’s-eye. I’m not referring to the guessing games played by Wall Street analysts, both the pros and the so-called amateurs. In the past, the amateurs have done a consistently better job of forecasting revenue, gross margin, profit, and unit volumes, but this time the pros won: Although almost everyone substantially overestimated Apple’s numbers, the pros weren’t nearly as optimistic as the amateurs.

Instead of measuring Apple’s performance against the predictions of the traders and observers, we can recall what the company itself told us to expect. About a month into each quarter, management provides an official but non-committal estimate of the quarter’s revenue. This guidance is a delicate dance: You want to be cautious, you want to sandbag a little, but not so much that your numbers aren’t taken seriously. Unavoidably, a lot of second-guessing ensues.

Apple has consistently beaten its own guidance, by 19% on average over the past three years, and as much as 35% in Q1 2010. But in this past quarter, Apple “achieved” a historic low: Actual revenue came in at only 3% above the guidance number. Richard Gaywood provides a helpful graphic in his TUAW piece:

Apple management offered explanations during the conference call following the earnings release: The economy in Europe isn’t doing so well, “rumors” about the iPhone 5 have slowed sales of the iPhone 4S… These might very well be the causes of the lackluster performance, but one has to wonder: Weren’t these issues known two months ago when the guidance number was announced? Apple is praised for its superbly managed supply chain, its global distribution network, its attention to detail. How is it possible that it didn’t see that the European economy was already cooling? How could management not have heard the steady murmur about an upcoming iPhone?

Put another way: What did you know and when did you know it? And, if you didn’t know, why didn’t you?

There is a possible alternative explanation: Samsung is making more substantial inroads than expected, as its impressive, just-released quarterly numbers would attest: 50.5M smartphones shipped, almost twice as many as Apple’s 26M.

Sharp-eyed readers may protest the comparison: Samsung reports the number of devices “shipped” while Apple reports units “sold”. But even if we allow for unsold inventory, Samsung’s performance is impressive.  (And, as circumstantial evidence, I noticed an unusually heavy amount of advertising for the Galaxy S III during my recent overseas trip.)

Samsung’s strong showing will almost certainly continue — so how will Apple react? A new product? Price moves? Both? In the conference call, Tim Cook assured his audience that Apple won’t create a “price umbrella” for competitors, that it won’t insist on premium price tags and thus leave small-margin money on the table.

Which leads us to the second number: Gross Margin guidance for the current quarter, ending September 30th, is 38.5%, down from 42.8% for the quarter that just ended. In consultant-speak, that’s an evaporation of 430 basis points (hundredths of a percentage point) in just one quarter — and we’re already one month into it, with no visible change in the product lineup other than the full availability of newer MacBooks (Air, Pro, Retina), and no evidence of heavy-handed discounting.

During the conference call, a Morgan Stanley analyst noted that Apple hadn’t shown Gross Margin numbers below 40% for the past two years. Would Apple care to comment?

We expect most of this decline to be primarily driven by a fall transition and to a much lesser extent, the impact of the stronger U.S. dollar.

The entire Gross Margin drop, applied to about $34B of sales (the latest guidance), amounts to roughly $1.5B, a shift that will have to materialize in less than two months, and probably less than one, since any momentous announcement is unlikely before Labor Day (the first Monday of September, for our overseas readers). This could portend a strong price move in the “fall transition”. To put the $1.5B shift in perspective, imagine Apple dropping its “usual” Gross Margin by $100 per device (new or existing); that means 15M lower-margin devices in the three weeks of September after Labor Day. Or perhaps Apple’s CFO is sandbagging the guidance once again.
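For readers who want to check the arithmetic, here is a minimal sketch. The 42.8%, 38.5%, and ~$34B figures are from the guidance discussed above; the $100-per-device margin cut is purely an illustrative assumption:

    # Sketch of the gross-margin arithmetic discussed above.
    prior_margin = 0.428      # 42.8%, quarter just ended
    guided_margin = 0.385     # 38.5%, guidance for the current quarter
    revenue_guidance = 34e9   # ~$34B revenue guidance

    margin_drop = prior_margin - guided_margin       # 0.043 -> 430 basis points
    dollar_drop = margin_drop * revenue_guidance     # ~$1.46B of gross margin

    # Illustrative only: if the "usual" margin fell by $100 per device,
    # how many lower-margin devices would account for that drop?
    per_device_cut = 100.0
    devices = dollar_drop / per_device_cut           # ~15M devices
    print(f"{margin_drop*1e4:.0f} bps, ${dollar_drop/1e9:.2f}B, ~{devices/1e6:.0f}M devices")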

The third curious number is the most perplexing: While the entire company grew by 23% compared to the same quarter last year, Apple Store revenue grew by only 17% — and this in spite of adding 47 stores over the year, for a total of 372. Why would Apple’s much-vaunted retail channel grow more slowly than the company? The weak Euro economy can’t be the explanation; there are relatively few Apple Stores there. The same can be said for “rumors” of newer devices: they impact all channels, not just company stores.

We’ll see whether this last quarter was simply a manifestation of the natural “granularity” of Apple’s business (as opposed to the unnaturally smooth quarter-after-quarter numbers favored by Wall Street), or whether the company is entering a new chapter of the smartphone wars and, if so, how it will change tactics.

JLG@mondaynote.com

Facebook: The Collective Hallucination

Facebook’s bumpy IPO debut could signal the end of a collective hallucination. Most of that hallucination pertains to the company’s ability to deliver an effective advertising machine.

Pre-IPO numbers looked nice, especially when compared to Google at this critical stage of their respective business lives:

Based on such numbers, and on the prospect for a billion users by the end of 2012, everyone began to extrapolate and predict Facebook’s dominance of the global advertising market.

Until some cracks began to appear.

The first one was General Motors’ decision to pull its ads off Facebook, due to poor click-through performance compared to other ad vectors such as Google. No big deal in terms of revenue: according to Advertising Age, GM had spent a mere $10 million on Facebook ads and a total of $30 million maintaining its presence on the social network. But Facebook watchers saw it as a major red flag.

The next bad signal came during the roadshow, when Facebook issued a rather stern warning about its advertising performance among mobile users.

“We believe this increased usage of Facebook on mobile devices has contributed to the recent trend of our daily active users (DAUs) increasing more rapidly than the increase in the number of ads delivered.”

If Facebook can’t effectively monetize its mobile users, it is in serious trouble. Numbers compiled by ComScore are staggering: last March, the average American user spent 7 hours 21 minutes on mobile versions of Facebook (80% through applications, 20% on the mobile site). This represents a reach of more than 80% of mobile users and three times that of the next social media competitor (Twitter); see below:

(source: ComScore)

More broadly, Facebook suffers from the internet’s unlimited supply: users create inventory much faster than advertising can fill it, a trend that pushes ad prices ever lower because scarcity no longer supports them. The TV ad market, by contrast, holds up reasonably well because the fixed number of ad slots available over a given period keeps prices under tension.

Unfortunately for its investors, Facebook is, in many ways, not Google. First of all, it has no advertising “killer format” comparable to Google’s AdWords. The search engine’s text ads check all the boxes of a successful format: they are ultra-simple and efficient; they are supported by a scalable technology that suits the smallest advertisers as well as the biggest; the system is almost friction-free thanks to an automated marketplace; and its efficiency doesn’t depend on creative quality (there is no room for creativity anyway). One cent at a time, Google churns out its enormous revenue stream, without any real competition in its field.

By contrast, Facebook’s ad system looks more traditional. For instance, it relies more on creativity than Google’s does, although “creativity” sounds a bit overstated considering the tactics Facebook uses to collect fans and raise “engagement” of any kind. For example, Tums, the antacid brand, developed a game encouraging users to throw virtual tomatoes at pictures of their friends. On a similar level of sophistication, while doing research for this column, I landed on the Facebook Studio Awards site showcasing the best ads and promotional campaigns. My vote goes to the French chicken producer Saint Sever, whose agency devised this elegantly uncomplicated concept: “1 ami = 1 poulet” (one friend, one chicken):

If this is the kind of concept Facebook is proud to promote, there is cause for concern about the company’s ARPU.

Speaking of Average Revenue Per User: last year, Facebook made $4.34 per user in overall advertising revenue. A closer look shows large differences from one market to another: North America, the most valuable market, yielded $9.51 per user, vs. $4.86 for Europe, $1.79 in Asia, and only $1.42 for the rest of the world. Facebook’s problem lies exactly there: the most profitable markets are also the most saturated ones, while the potential for growth resides mostly in the low-yield tier. Meanwhile, infrastructure costs are roughly identical everywhere: it costs about the same to serve a page or to synchronize a photo album in Pennsylvania or in Kazakhstan (it could even cost more per user in remote countries, and some say Facebook’s infrastructure running costs are likely to grow exponentially as more users generate more interactions among themselves).
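To illustrate the dilution argument with a minimal sketch: the per-region ARPU figures below are the ones quoted above, but the user counts are hypothetical, chosen only to show the direction of the effect when growth comes mostly from low-yield regions:

    # Hypothetical illustration of ARPU dilution: per-region ARPU values are
    # the 2011 figures quoted above; the user counts are made up for the sketch.
    arpu = {"North America": 9.51, "Europe": 4.86, "Asia": 1.79, "Rest of world": 1.42}

    def blended_arpu(users):
        total_users = sum(users.values())
        revenue = sum(arpu[region] * count for region, count in users.items())
        return revenue / total_users

    today = {"North America": 180e6, "Europe": 230e6, "Asia": 250e6, "Rest of world": 240e6}
    # Growth scenario: saturated markets barely move, low-yield regions add users.
    later = {"North America": 190e6, "Europe": 240e6, "Asia": 350e6, "Rest of world": 350e6}

    print(f"Blended ARPU today: ${blended_arpu(today):.2f}")
    print(f"Blended ARPU later: ${blended_arpu(later):.2f}")  # lower, despite more users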

Facebook might be tempted to mimic a rather questionable Google trait, the “Theory Of Everything”. Over the last few years, we’ve seen Google jumping on almost everything (including Motorola’s mobile business), trying a large, confusing array of products and services to see what sticks. The end result is an impressive list of services that became very valuable to users (mail, maps, docs), but more than 90% of Google’s revenue still comes from a single line of business: search ads.

As for Facebook, we already had a glimpse with the Instagram acquisition (see a recent Monday Note), which looked more like a decision triggered by short-term agitation than by long-term strategic thought. We might see other moves like this, as Mark Zuckerberg retains 57% of the voting shares and the company sits on a big pile of cash (more than $6 billion). Each month brings up a new business Facebook might be tempted to enter, from mobile phones to search.

All ideas that fit Facebook’s vital need for growth.

frederic.filloux@mondaynote.com