Last week’s Intel Developer Forum brought the expected crop of new CPU chips. The simplest way to summarize what’s taking place is this:

  • We’re stuck at 3GHz, so we add more processors on the CPU chip.

  • Intel continues to lead with small “geometries”, 32 nanometers today, 22 nm tomorrow.

  • The company pitches its x86 processors for mobile devices.

More processors: Once upon a time, each year brought a significant increase in processor speed. Not to be too wistful about the early PC days, but a 1 MHz processor ran “perfectly good” spreadsheets. Like many bouts of nostalgia, this one omits important bits of context such as the complexity of said VisiCalc model, what other software ran concurrently, if any, what storage and networking devices were supported, and what kind of display and audio devices were offered. Still, I’d love to see the original assembly language version of Lotus 1-2-3 run on a “bare metal” DOS configuration brought up on a 3GHz Intel machine -- a CPU clock 3,000 times faster than the 1983 vintage machine.

In the early 90’s, luxury was a 33MHz i486. Now we’re at 3GHz, where we’ve apparently been stuck for the last 4-5 years. (A history of Intel processors can be found here.)
The faster you move something around, the more power you need. Try lifting and lowering a 10 pound weight. Slowly at first, once every 5 seconds, then every second, then twice per second. Your own body temperature will give you the answer.
Inside a processor, we have transistors acting as switches: they open and close, and in doing so they shuttle electrons back and forth at the circuit’s clock speed. These electrons are not “weightless”; moving them consumes power, just as we do lifting weights. As the clock rate increases, more power is needed and the transistor temperature rises. There are more precise, more technical ways of expressing this, but the basic fact remains: faster chips are hotter chips. Knowing this, chip designers found ways to counter the temperature rise, such as using smaller gates shuttling a smaller “mass of electrons” back and forth. Air or liquid cooling of chips helps as well. Still, we hit a wall. With today’s (and tomorrow’s foreseeable) silicon technology, we’re out of GHz.
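One of those more technical ways is the standard approximation for a chip’s dynamic power, which grows with the capacitance being switched, the square of the supply voltage, and the clock frequency:

```latex
P_{dynamic} \approx \alpha \, C \, V^{2} \, f
```

Here α is the fraction of gates switching on a given cycle, C the switched capacitance, V the supply voltage and f the clock frequency. Pushing f up pushes power (and heat) up at least proportionally, which is why smaller gates -- smaller C -- buy the designers some headroom.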
So, what do we do for more powerful CPU chips?

We put more processors on the same chip and, voilà, more computing power to feed our hungry operating systems and applications.
Unfortunately, it’s not that simple. Consider a database search for records containing the word “epistemology”. With one processor, you start at the first record, scan and iterate until you get a hit, note the result and continue until you’ve scanned all records. With two processors, you split the database in two and get the work done twice as fast. (Sharp-minded readers will know how to build counterexamples but, for our purposes, the basic idea is correct.) This is the type of problem where more processors mean more effective computing power.
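The split can be sketched in a few lines of Python. This is an illustration, not production code: the record list is hypothetical, and thread workers stand in for the two processors (in CPython, the GIL means threads won’t actually double pure-Python scanning speed; real database engines use processes or native code).

```python
from concurrent.futures import ThreadPoolExecutor

def scan(records, word):
    # One processor's share of the work: scan its slice, keep the hits.
    return [r for r in records if word in r]

def parallel_search(records, word, workers=2):
    # Split the "database" into one slice per worker, scan the slices
    # concurrently, then stitch the partial results back together.
    if not records:
        return []
    chunk = (len(records) + workers - 1) // workers
    slices = [records[i:i + chunk] for i in range(0, len(records), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(scan, slices, [word] * len(slices))
    hits = []
    for part in parts:
        hits.extend(part)
    return hits
```

With two workers, each scans half the records; the merge at the end preserves the original order.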
Unfortunately, many real-world problems are not “parallel”. For simplicity’s sake, take a spreadsheet. In most cases, a cell relies on the value of another cell, which in turn relies on… You see the point: the computation cannot be parallelized, it is inherently sequential. As a result, it will speed up if and only if we can increase the basic calculation speed, that is, if we get more GHz. To take real-world examples, most simulations, financial or physical, are deeply sequential.
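A toy version of that spreadsheet, again in Python with made-up growth rates, makes the dependency visible. Each “cell” needs the one before it, so a second processor would have nothing to do until the first one finishes:

```python
def run_chain(start, rates):
    # Spreadsheet-style column: each cell is the previous cell times
    # this period's growth factor. cell[n] cannot be computed before
    # cell[n-1], so this loop is inherently sequential -- adding
    # processors doesn't help, only a faster clock does.
    value = start
    cells = []
    for r in rates:
        value = value * (1.0 + r)
        cells.append(value)
    return cells
```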
More bad news: you need skilled programmers to write software that really uses multiple processors. Intuitively, when you go back to the database search example, the story you tell the computer, the program, changes when you move from a single processor to two processors. You must add phrases to tell each processor what part of the task it’ll handle. And you must also add instructions on how to coordinate their work, how to avoid stepping on each other’s work. In other words: software written for single processor machines needs modifications to effectively use more processors. Further, these edits aren’t trivial at all.
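Going back to the search example, here is what those extra “phrases” might look like in Python. The record slicing is the same hypothetical setup as before, with threads standing in for processors; the point is the two additions a single-processor version wouldn’t need -- the split, and the lock that keeps the workers from stepping on each other’s tally.

```python
from threading import Thread, Lock

def count_hits(records, word, state, lock):
    # Each worker scans its own slice...
    local = sum(1 for r in records if word in r)
    with lock:  # ...and the lock serializes updates to the shared total
        state["hits"] += local

def search_with_two_workers(records, word):
    mid = len(records) // 2          # addition #1: divide the task
    state, lock = {"hits": 0}, Lock()
    workers = [
        Thread(target=count_hits, args=(records[:mid], word, state, lock)),
        Thread(target=count_hits, args=(records[mid:], word, state, lock)),
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()                     # addition #2: wait for both to finish
    return state["hits"]
```

Without the lock, both workers could read the same total, add their own count, and write back, silently losing one worker’s hits. That’s the kind of bug that makes these edits non-trivial.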
In real-world product terms, more processors don’t automagically mean faster computers. In the best of cases, we’ll need time to see applications modified to make fuller use of the new chips: four processors for higher-end laptops, eight for desktops.
This hasn’t escaped Microsoft’s or Apple’s attention. They both offer or plan to offer tools allowing programmers to make better use of the new multiprocessor chips. Apple’s solution, called Grand Central Dispatch (GCD), looks good -- on Keynote slides. We’ll see how it does in user experience terms and against Microsoft’s own solution.
(Apple also has a tool, OpenCL, targeted at helping programmers harness different types of processors, such as Graphics processors and conventional CPUs. That’s for another column.) Expect benchmark wars.

Smaller geometries: Here, Intel shows its technology and manufacturing might, to say nothing of its financial strength. It now takes about $4bn to build a new generation “fab”, meaning a factory where chips are fabricated. Each reduction in the size of a basic silicon building block, each smaller geometry, requires new technical feats and billions of dollars. For reference, a nanometer is one billionth of a meter; the width of a human hair is about 100 micrometers. So, a human hair is about 3,000 times the width of a 32nm silicon building block. Tom’s Hardware, one of the better techie sites, offers a fuller discussion of Intel’s latest and future feats.
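For the record, the arithmetic behind that “about 3,000”:

```python
hair_width = 100e-6   # ~100 micrometers, in meters
feature = 32e-9       # a 32nm silicon building block, in meters
ratio = hair_width / feature
# 100e-6 / 32e-9 = 3125: a hair is roughly 3,000 building blocks wide
```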
In practical terms, and beyond adding more processors, this means integrating more functions such as graphics and I/O (controlling Input/Output devices, peripherals). In some cases this leads to a System on a Chip (SoC), yielding another type of “more bang for the buck”: eight processors at one end of the range, or everything on one chip at the other end, for smaller, less expensive devices. Think better netbooks.

Intel Inside Mobile Devices: This is almost becoming an old saw. Following its “Not A Single Crack In The Wall” strategy, Intel doesn’t want to miss any emerging computing genre. As a result, it has been peddling its mobile Linux, called Moblin, for a putative genre of Mobile Internet Devices. And it has been thoroughly ignored. On the one hand, netbooks have been using Windows XP rather than Linux, or Vista. On the other hand, an annoying type of device called the smartphone has been using various flavors of ARM processors. So far, these processors are no match for Intel’s in laptops and desktops, but ARM just announced a 2GHz version. This could lead to more powerful “pocket computers”, as Apple now calls them, and become the crack in the wall Intel fears. (For the time being, we’ll leave aside Apple’s April 2008 acquisition of PA Semi, a microprocessor design firm.)
Intel is making noises about very-low-power x86 chips. In the abstract, smaller geometries and lower clock frequencies could yield chips with battery power requirements low enough to fit smartphones. And then what? To run Windows? No. Apple’s OS X? Neither; Apple has its own smartphone OS running on ARM. The same holds true for Android, RIM, Nokia and Palm. Even if yet another genre of coat pocket computers, a.k.a. tablets, emerges, it’s hard to see Intel cracking that market. In other words, an interesting transition is about to take place, a new “Intel Outside” era.
