David Saffir has written a blog entry about bit depth and posted it on the HP Professional Photography site.
David titled his blog entry, Bit Depth Basics: More Than A Numbers Game.
Well, yes, it is more than a numbers game.
I don't know why David chose to talk about dynamic range in the same blog entry. Please read his blog and tell me if I've unfairly interpreted his comments and his example. He doesn't come right out and say it, but you sure get the impression that more bit depth means more dynamic range.
"The major benefit of working with high-bit images is increased dynamic range— the range of tones and detail that the camera can record from the darkest dark to lightest light."
I've seen this sort of comment floating around on forums. I'm not sure where the logic comes from, except that HDR photos are 32-bits as opposed to 16-bits.
The relationship between bit depth and dynamic range is not 1:1. More bit depth does not mean that a 14-bit Nikon D3 can capture more dynamic range than a 12-bit Nikon D2X. Yes, theoretically, the D3 can. Theoretically, the ratio between the brightest and darkest recordable values can be 16,384:1 for the D3 and only 4,096:1 for the Nikon D2X.
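To make the distinction concrete, here's a small Python sketch. Bit depth fixes the number of quantization levels the ADC can output; the usable dynamic range is fixed by the sensor's full-well capacity and noise floor, which the ADC bit depth does not change. The 50,000-electron full well and 8-electron read noise below are made-up illustrative numbers, not measurements from either camera.

```python
import math

def quantization_levels(bits):
    # Number of distinct digital values a bits-deep ADC can output.
    return 2 ** bits

def real_dynamic_range_stops(full_well_e, read_noise_e):
    # A sensor's usable dynamic range is set by the ratio of its
    # full-well capacity to its noise floor, not by the ADC bit depth.
    return math.log2(full_well_e / read_noise_e)

print(quantization_levels(12))   # 4096
print(quantization_levels(14))   # 16384

# Hypothetical sensor figures, purely for illustration:
# ~50,000-electron full well, ~8-electron read noise.
dr = real_dynamic_range_stops(50_000, 8)
print(round(dr, 1))  # ~12.6 stops, whether the ADC digitizes 12 or 14 bits
```

Swapping a 12-bit ADC for a 14-bit one slices the same signal more finely; it doesn't raise the full well or lower the noise floor, so the stops of usable range stay put.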
As we say in multivariate statistics, ceteris paribus, a fancy Latin expression that means assuming everything else is equal. Here it also means assuming both cameras capture light perfectly and convert it to digital signals perfectly. None of that is even close to real-world conditions.
The bit depth of a RAW file from a digital SLR tells us nothing of practical note about the dynamic range captured in that RAW file.
There are definite editing virtues to greater bit depth. Photos with more bit depth are less likely to posterize during editing. The difference between 16,384 distinct levels and 4,096 distinct levels means that you can likely make some more extreme editing transformations without noticing visible posterization.
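Here's a rough, idealized simulation of that editing headroom: quantize a smooth shadow gradient at 12 and at 14 bits, apply the same aggressive 5-stop brightening, and count how many distinct 8-bit display tones survive. The gradient and the gain are arbitrary choices for illustration; real files also contain noise, which complicates the comparison.

```python
import numpy as np

def simulate_edit(bits, gain=32.0):
    # A smooth dark gradient occupying the bottom 1/32 of the tonal range,
    # quantized as the camera's ADC would, then brightened 5 stops in editing.
    ramp = np.linspace(0.0, 1.0 / gain, 2000)
    levels = 2 ** bits
    quantized = np.round(ramp * (levels - 1)) / (levels - 1)
    edited = np.clip(quantized * gain, 0.0, 1.0)
    # Count the distinct tones left after converting to 8-bit for display.
    return len(np.unique(np.round(edited * 255)))

print(simulate_edit(12))  # fewer distinct tones survive: more risk of banding
print(simulate_edit(14))  # the same edit preserves a smoother gradient
```

Under these deliberately extreme, noiseless conditions the 12-bit file runs out of distinct shadow tones and the 14-bit file doesn't, which is the theoretical case for more bits. Whether you'd ever see it on a print is exactly the question at hand.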
I said likely because much of that difference is more marketing claim than visible difference. People will show you histograms and numeric analyses to demonstrate how 14-bits is superior to 12-bits. Here's a point where I agree with Jeff Schewe and Bruce Fraser. As they write in Real World Camera RAW with Photoshop CS4, this is largely a marketing ploy.
I don't strive for pretty histograms. I aim for pretty prints. I'll acknowledge there might be the rare RAW file where the edits a digital photographer applies to make a print or Web image would show visible posterization in 12 bits and none in 14 bits. Honestly, I haven't seen even one yet. At least, not unless someone did an extreme close-up, or unless I'd need a loupe to spot the posterization difference on a print.
If I were to sell my Canon 1Ds MkII to buy a 1Ds MkIII, it wouldn't be for 14-bit depth. That's not worth the thousands of dollars to me. Even the extra resolution is marginal in its visible impact. No, what would make me consider the trade-in would be the improved noise reduction. Each generation of DIGIC just gets better!
Bit-depth is also not a good indicator of the dynamic range of the scene reproduced in an HDR file. A 32-bit HDR image has to be tonemapped before it can be displayed on a monitor or printed. You can take a series of photos in 8-bits or in 12-bits or in 14-bits and once the resulting 32-bit HDR photo is properly tonemapped, even the 8-bit version will show the original dynamic range that was captured.
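A toy simulation of bracketed merging makes the point: each simulated frame below is only 8 bits, yet dividing each frame by its exposure and averaging the unclipped, uncrushed frames recovers a scene spanning roughly 16 stops. The scene values, the exposure list, and the naive merge weighting are all invented for illustration; real HDR software uses more sophisticated weighting.

```python
import numpy as np

def capture_8bit(radiance, exposure):
    # Simulate one 8-bit exposure: scale, clip to the sensor's range, quantize.
    signal = np.clip(radiance * exposure, 0.0, 1.0)
    return np.round(signal * 255) / 255

def merge_hdr(radiance_scene, exposures):
    # Naive HDR merge: divide each frame by its exposure to estimate radiance,
    # then average only the frames that are neither clipped nor crushed.
    estimates, weights = 0.0, 0.0
    for exp in exposures:
        frame = capture_8bit(radiance_scene, exp)
        usable = (frame > 1 / 255) & (frame < 1.0)
        estimates = estimates + np.where(usable, frame / exp, 0.0)
        weights = weights + usable
    return np.where(weights > 0, estimates / np.maximum(weights, 1), 0.0)

# A scene spanning ~16 stops of radiance, far more than one 8-bit frame holds.
scene = np.logspace(-4, 1, 500)
exposures = [2.0 ** s for s in range(-4, 9)]  # bracketed shots, 1 stop apart
hdr = merge_hdr(scene, exposures)
print(np.log2(hdr.max() / hdr.min()))  # recovered span, in stops
```

The merged float image holds the whole captured range; it's the tonemapping step that decides how much of it you see on an 8-bit display.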
For more on HDR, bit depth, and dynamic range, I recommend readers view the following blog entry:
The discussion of bit depth and dynamic range also misses an important element: it ignores the effect of noise on what engineers call quantization error. That's the rounding error introduced when the analog-to-digital converter maps tiny fluctuations in the signal to the same digitized value in the RAW file. The posterization difference between 12 bits and 14 bits is likely to be smaller than the residual noise, which means it will almost certainly not be visible to the eye. In other words, the dithering effect of noise will be stronger than any likely difference in perceptible posterization between 14 bits and 12 bits.
For more on this point, I recommend readers see Emil Martinec's technical article on noise.
As Emil notes, "Curiously, all the 14-bit cameras on the market (as of this writing) do not merit 14-bit recording. The noise is more than four levels in 14-bit units on all of these cameras (Nikon D3/D300, Canon 1D3/1Ds3 and 40D); the additional two bits are randomly fluctuating, since the levels are randomly fluctuating by +/- four levels or more. Twelve bits are perfectly adequate to record the image data without any loss of image quality, for any of these cameras (though the D3 comes quite close to warranting a 13th bit)."
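Emil's point can be checked with a small simulation, assuming (per the noise figures he reports) read noise of about 4 levels in 14-bit units; the flat 1000-ADU test patch is an arbitrary value of my own. Rounding the same noisy samples onto a 12-bit grid barely increases the total error beyond the noise that's already there.

```python
import numpy as np

rng = np.random.default_rng(0)

# A flat patch of "true" signal in 14-bit ADU, with read noise of about
# 4 levels in 14-bit units, the figure Emil reports for these cameras.
true_signal = 1000.0
noise_sigma = 4.0
samples = true_signal + rng.normal(0.0, noise_sigma, 100_000)

# Digitize the same noisy samples at 14 bits and at 12 bits
# (one 12-bit step equals four 14-bit units).
q14 = np.round(samples)
q12 = np.round(samples / 4) * 4

err14 = np.std(q14 - true_signal)  # essentially the read noise itself
err12 = np.std(q12 - true_signal)  # read noise plus the coarser rounding
print(err14, err12)
```

Under these assumptions the 12-bit recording adds only a few percent to the total error, because the noise dithers the signal across the coarser steps. Those last two bits are mostly recording noise.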
He argues that the recording method in the Nikon D300 explains why D300 files differ from, say, Nikon D2X files: the improvement is not the result of 14 bits per channel. It's the result of the D300 reading the sensor data more slowly. Reading the data more slowly, by a factor of 3 or 4, means it can be read more accurately, and more accurate reads mean less noise. That has a bigger impact on perceived image quality between the Nikon D300 and the Nikon D2X than 14-bit depth versus 12-bit. Remember that Latin phrase, ceteris paribus. It's not just bit depth that distinguishes what information is captured in a Nikon .NEF file.
Bit depth and dynamic range are not completely unrelated. Regrettably, David Saffir's recent blog entry promotes the blarney from camera manufacturers that a DSLR with 14-bit depth is going to produce perceptibly better photos than one with 12 bits. That just ain't so.
David's other observations are good advice. You want to start with as much detail as possible and you want to work with your photographic images in a way that preserves as much detail as possible (usually -- sometimes we reduce detail on purpose through blurs, blends, and other edits).