In the (now waning) days of analog photography, much was made of which film was best: Kodak’s Kodachrome and Ektachrome, Fuji’s Velvia, Konica, Agfa, Ferrania... to name but a few of the old standards. Today, a similar debate goes on regarding the altogether simpler digital sensors. In the April 5th Monday Note #80, I took a first pass at the sensor size question, one that is, I believe, deliberately obscured by manufacturers. Showing their always flattering view of our intelligence, they peddle the number of pixels in the sensor, regardless of the size of those pixels. Never mind that (everything else being equal) pixel size makes the most important contribution to image quality.

Fortunately, the Web comes to the rescue with tutorials, charts and even calculators. Cambridge In Color features nice tutorials such as this one. A French company, DxO Labs, offers a sophisticated sensor rating site: You’ll see what I mean by sophisticated as the site provides numbers for color depth, dynamic range and low-light ISO.

Color depth refers to the range of color shades a sensor can render. Imagine pictures built with only three basic colors, no nuances. Then imagine the same image built with 16 variations of each of the three red, green and blue (RGB) components. Or 256 shades for each basic color, and so on.
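For the numerically curious, here’s a minimal sketch (plain Python, my own back-of-the-envelope illustration, not DxOMark’s metric) of how the number of representable colors grows with the bits devoted to each RGB channel:

```python
# Back-of-the-envelope: the number of distinct colors grows as
# (shades per channel) ** 3 across the three RGB components.
for bits_per_channel in (4, 8, 12, 14):
    shades = 2 ** bits_per_channel   # 4 bits -> 16 shades, 8 bits -> 256, etc.
    print(f"{bits_per_channel:2d} bits/channel -> {shades:5d} shades each, "
          f"{shades ** 3:,} possible colors")
```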

Dynamic range is, forgive the expression, a BFD, as in Big Frigging Deal. Why? Because everyday scenes exhibit a range of light values far exceeding what sensors (and, before them, film) can capture. Without going into logarithmic scales, just consider pictures we’ve all taken where the highlights are “burned” or the lowlights show no detail at all. Our eyes see the details, the clouds in the sky or the leaves in the bushes, but the sensor doesn’t have enough dynamic range. Entire books have been written on dynamic range: Ansel Adams’s (rightly) celebrated Zone System (see the bibliography at the end of the Wikipedia article) dealt with the choices to be made when taking the picture and, later, when printing it.

The key insight is this: neither the film (or the sensor) nor the paper (or the screen) on which the picture ends up can faithfully reproduce the range of light values in the original scene. As a result, you want to carefully select which parts of the scene are meaningful to you; then you adjust the picture taking and the printing accordingly. To simplify, if the scene has a range of 100 (light values) and your film/sensor a range of 50, you must choose what to sacrifice, highlights or lowlights. You’ll sacrifice the clouds to show people under a dark awning, or you’ll take deeply moving pictures of clouds with the mountain range rendered as a mere black silhouette.
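To make the trade-off concrete, here’s a toy numerical sketch of that 100-versus-50 example (Python/NumPy, my own numbers, not a real sensor model):

```python
import numpy as np

# A scene spanning light values 0..100, recorded by a sensor/film
# that can only capture a 50-unit window. You pick which half to keep.
scene = np.linspace(0, 100, 11)

expose_for_shadows = np.clip(scene, 0, 50)        # everything above 50 burns out
expose_for_highlights = np.clip(scene, 50, 100)   # everything below 50 blocks up

print(expose_for_shadows)      # the clouds are sacrificed
print(expose_for_highlights)   # the people under the awning are sacrificed
```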

But this is the digital era and, in a future note, I’ll discuss High Dynamic Range (HDR) pictures, a way to make landscape pictures the much-beloved Ansel would be jealous of. In a nutshell, you take two digital pictures of the same landscape, one sacrificing the lowlights, the other giving up the highlights. Then, with digital magic, you add them up and adjust the result (a kind of range compression) to fit what paper (or the screen) can render. “Can render” means this: two different values (light intensity or color nuance) inside the computer are effectively perceived as different by the human brain when observing the print or the screen.
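As a teaser for that future note, here’s a deliberately naive sketch of the idea (Python/NumPy; real HDR software uses weighted merges and far fancier tone-mapping than this toy version):

```python
import numpy as np

def naive_hdr_merge(dark_exposure, bright_exposure):
    """Toy HDR blend: average two exposures of the same scene, then
    compress the result back into the 0..1 range a print or screen
    can show. Purely illustrative, not any product's actual algorithm."""
    merged = (dark_exposure.astype(float) + bright_exposure.astype(float)) / 2.0
    compressed = merged / (1.0 + merged)      # simple global tone curve
    return compressed / compressed.max()      # normalize to 0..1

# Hypothetical 2x2 "images": one keeps the highlights, one keeps the lowlights.
dark = np.array([[0.01, 0.10], [0.40, 0.90]])
bright = np.array([[0.20, 0.80], [1.00, 1.00]])
print(naive_hdr_merge(dark, bright))
```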

Low-light ISO, in DxOMark’s parlance, refers to the sensor’s noise level and artifacts, to the overall quality of pictures taken in low-light, low-contrast conditions. (Add bad color balance, a future sub-topic.)
For more on sensor data, you can turn to the aptly named sensor-size.com site. You can also browse two encyclopedic sites: Digital Camera Resource, a Silicon Valley expert, and DP Review, a British site recently acquired by Amazon.

Moving right along, let’s turn to the color-blind sensor and, while we’re at it, to the analog vs. digital myth.

All digital camera sensors are color-blind. Yet we get color pictures. The trick is a “mere matter of software”, preceded by a little bit of filtering. The filter in question is known as a Bayer Matrix, a mosaic placed in front of the sensor pixels. The mosaic’s basic motif is two green cells, one blue, one red. (Two greens because our eyes/brains are more sensitive to green.) As a result, each filtered pixel now measures the intensity of a single color: so much green here, so little red there, etc. The resulting binary values are fed to a computer algorithm for an operation known as convolution or, more barbarically, demosaicing. And, presto, this is indeed a quick computation, a reconstructed color image appears, first on the camera’s back screen and, later, in Picasa, iPhoto or Photoshop. Some prefer it RAW, meaning they don’t want the camera to “develop the digital negative”. In plainer English, this means I take the unprocessed, un-demosaiced 10-, 12- or 14-bits-per-pixel files and ask my favorite image-processing program to do the conversion from color-blind to color.
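For the curious, here’s what the simplest demosaicing scheme, bilinear interpolation over an RGGB mosaic, looks like in code (Python with NumPy/SciPy; a toy sketch of the principle, not what any camera or RAW converter actually ships):

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(mosaic):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic.
    'mosaic' is a 2-D array of raw sensor values; returns an H x W x 3
    RGB image. Real converters use much smarter interpolation."""
    h, w = mosaic.shape
    rows, cols = np.indices((h, w))

    # The RGGB motif: R at (even, even), B at (odd, odd), G elsewhere.
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)
    g_mask = ~(r_mask | b_mask)

    # Bilinear interpolation expressed as a small convolution kernel.
    kernel = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]]) / 4.0

    rgb = np.zeros((h, w, 3))
    for channel, mask in enumerate((r_mask, g_mask, b_mask)):
        sparse = np.where(mask, mosaic, 0.0)                        # keep only this color's samples
        weight = convolve(mask.astype(float), kernel, mode="mirror")
        interp = convolve(sparse, kernel, mode="mirror") / weight   # fill in the gaps
        rgb[..., channel] = np.where(mask, mosaic, interp)          # keep measured values as-is
    return rgb

# Usage with a fake 4x4 sensor readout:
raw = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_bilinear(raw).shape)   # (4, 4, 3)
```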

We’re getting into the finer points of digital photography here. A sinner myself, I cling to the belief that I must set my camera to store pictures in RAW format, to be decoded later by Aperture. But I’m not sure it does much for the end result, at least in my hands. (I’ll discuss picture editing in a future note.)

Professionals and advanced amateurs will disagree: for fashion or product work (pictures of objects for advertising), extracting details or twisting the process for a desired effect is justified; it makes sense, and money.

Turning back the clock: analog vs. digital. We’re used to thinking of the disappearing silver-based film as “analog”. Actually, film isn’t really analog, as in continuous like the continuum of real numbers. You will recall we used to worry about the “grain” in our pictures. Digital sensors and silver-based film are both discrete; they both capture light using a discontinuous sensor matrix. The difference is that, in film, the sensor is made of grains of silver compounds (halides). Photons falling on those grains switch the state of halide molecules. Later, in the development process, the switched grains are “revealed” as black, while the ones that didn’t get any light are washed out, leaving the film transparent.

One difference and one similarity.

The difference: film grains are not regular; their microscopic size and shape aren’t constant. Our brains like the variety, or got used to it; the variation in shape and size provides a nice random (experts say dithering) effect, more pleasant than the one arising from the regular (some say boring) digital matrix.
So much so that you’ll see commercial algorithms aimed at reproducing the beloved irregularities of “analog” film. The most elaborate example might come from DxO Labs: its “film packs” endow our digital pictures with the analog flavor of yore, with a choice of Polaroid, Fuji Velvia, Kodachrome 64 and the like. “Give your enlargements style and depth with silver-halide grain.”
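To give a flavor of the idea (and only the idea; DxO’s FilmPack profiles model real grain size and shape per film stock, which this does not), here’s a crude sketch of adding irregular, brightness-dependent grain to a digital image:

```python
import numpy as np

def add_fake_grain(image, strength=0.04, seed=0):
    """Crude 'film grain' sketch for an image with values in 0..1:
    add zero-mean Gaussian noise whose amplitude peaks in the mid-tones,
    where grain is most visible. Illustrative only."""
    rng = np.random.default_rng(seed)
    midtone_weight = 4.0 * image * (1.0 - image)   # 0 at black/white, 1 at mid-gray
    noise = rng.normal(0.0, strength, image.shape) * midtone_weight
    return np.clip(image + noise, 0.0, 1.0)
```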

The similarity is color-blindness. Silver halides are color-blind: the molecules are binary switches, touched by a photon or not. Getting color into film involves truly marvelous feats of chemistry, physics and manufacturing. Color film is built as independent sensing layers separated by color filters, all with the most exquisite precision. At its apex, analog film was manufactured in huge quantities, with high quality and durability, able to tolerate the vagaries of travel and of processing by mini-labs in strange places around the world. A true marvel, a now dying world where, near Paris’ Champs-Elysées, Eastman Kodak could maintain a free library of the company’s best technical and artistic publications.

No need to be too nostalgic: we have the Web, we have digital sensors and software.
A $100 camera makes very good everyday street and social (party) pictures.
A sub-$500 DSLR (Digital Single Lens Reflex) makes great pictures with exquisitely balanced flash lighting at dusk in the street.
A $1000 camera makes beautiful street pictures at night, no flash, no tripod.

All without consuming silver, all without photo-processing chemical pollution. –JLG
