Bit depth is one of the key factors in digital file quality. A 16-bit color depth can make a real difference and gives you the best chance of producing the finest prints.
Why high bit depth
Color depth, also called bit depth, is the number of bits used to describe the color of a single pixel in a bitmap image such as a photograph. Simply put, the greater the color depth, the higher the quality of the photograph: it renders smoother gradients across continuous tones and carries a larger amount of information.
Computers use a relatively simple process to convert colors into numerical form: the colors are separated into channels, and each channel is divided into tone levels. If the color depth increases, the number of channels remains unchanged, but the number of tone levels in each channel grows. There are always three channels (at least in our working RGB color space), because their number corresponds to our perception of the three primary colors. That is why the channels are called red, green, and blue.
Thus, the more levels there are, the smoother the tonal gradient and the better the photograph looks. An 8-bit photograph, with 256 levels in each channel, already gives our eyes a perfect illusion of continuous tone, so we no longer perceive banding or other disturbances.
You may ask, then: why go for 16-bit color depth if our eyes already perceive 8-bit as completely sufficient and cannot appreciate the greater bit depth? That is true, but what is sufficient for our eyes is not always enough for the computer or for the adjustments we make when editing photos. Unlike us, the computer is happy to receive as much information as possible, because it can work with it even if our eyes cannot.
Imagine a handsaw: if it has few teeth, its edge looks jagged to us; the more teeth there are and the denser they sit, the smoother and more continuous the edge appears. That is how the computer "sees" color depth.
To be more specific, there are 256 levels in 8-bit depth because 2 to the 8th power is 256. The number of levels in 16-bit depth is 2 to the 16th power, which equals 65,536. Quite a difference, wouldn't you say?
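The arithmetic above can be checked with a couple of lines of Python (a minimal illustration, not part of any photo-editing workflow):

```python
# Tone levels per channel: 2 raised to the power of the bit depth.
for bits in (8, 10, 12, 14, 16):
    print(f"{bits:2d}-bit: {2**bits:,} levels per channel")
# 8-bit prints 256, 16-bit prints 65,536.
```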
What the numbers say
Consider editing a photo with a given color depth, that is, a certain number of bits and tone levels per color channel. Data conversion occurs at each step of the editing process, and during each conversion some levels in each channel are lost, distorting the colors. The cause is rounding error: an adjustment transforms the integer tone values, and the results are then rounded back to the nearest integer. As a result, any image editing you perform loses some data.
The loss of levels can be quite significant in 8-bit images, and it worsens with every additional adjustment. With compressed formats such as JPEG, the loss multiplies substantially compared to lossless formats such as TIFF or, in the case of digital cameras, RAW.
At 16-bit color depth the resulting colors are far less distorted after editing, because instead of 256 levels per channel we have more than 65,000. Some distortion occurs even at 16 bits, but the sheer amount of data makes these errors insignificant.
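As a rough sketch of this rounding loss, the snippet below darkens every tone by an arbitrary factor, rounds to integers (as any editor must), brightens back, and counts how many distinct levels survive. The 0.6 factor and the round-trip itself are illustrative assumptions, not how any particular editor works:

```python
def roundtrip(levels: int, factor: float = 0.6) -> int:
    """Darken each tone by `factor`, round to integers, brighten back,
    and return how many distinct tone levels survive."""
    darkened = [round(v * factor) for v in range(levels)]
    restored = [min(levels - 1, round(v / factor)) for v in darkened]
    return len(set(restored))

print(roundtrip(2**8))    # 8-bit: only about 150 of 256 levels remain
print(roundtrip(2**16))   # 16-bit: tens of thousands of levels remain
```

At 8 bits the survivors are few enough for banding to become visible; at 16 bits tens of thousands of levels remain, far more than the eye needs.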
Nowadays there are cameras with 10-, 12-, or 14-bit depth. This feeds a common misunderstanding that such files keep their extra levels per channel compared to an 8-bit photograph. That is only true until the photo is opened, adjusted in a graphics editor, and saved: at that point a 10-, 12-, or 14-bit photo becomes 8-bit. No application, not even a professional raster graphics editor such as Photoshop, can process and save photos at standard bit depths other than 8 or 16. And it does not matter at all whether the photo was taken at a high resolution or not.
You may come across misleading recommendations that all it takes is converting a 10-, 12-, or 14-bit RAW file to TIFF with 16 bits per channel selected to obtain an image with native 16-bit depth. However, software conversion can never turn a 14-bit image into a true 16-bit one, raising the number of tone levels to 65,536 per channel and adding the unique information of a native 16-bit photo. The file can be converted, and your application will indeed report it as a 16-bit file, but in reality it is just a double-sized file carrying no more tonal information than the original.
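A tiny sketch shows why: rescaling 14-bit values into a 16-bit container (here by simply multiplying by 4) spreads them across the 16-bit range but creates no new distinct levels:

```python
fourteen_bit = range(2**14)                         # 16,384 original levels
in_16bit_container = {v * 4 for v in fourteen_bit}  # spread over 0..65,532
print(len(in_16bit_container))  # still 16,384 distinct levels
print(max(in_16bit_container))  # 65,532: a 16-bit range holding 14-bit data
```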
It is true that computer programs offer the option of converting any RAW file to 16-bit depth. However, this only helps with cameras that can output a 16-bit file directly. A photograph with true 16-bit depth is only created when the corresponding number of tone levels is captured from the very beginning of the process, whether during film scanning or at the point of A/D conversion on the sensor of a digital camera. Only a handful of medium-format cameras support this, and certainly none of the DSLRs.
An A/D converter translates an analog signal to digital; in our case it converts the optical information captured on the camera sensor into a digital signal. Simply put, it converts colors to numbers by digitizing them. The higher the bit depth an A/D converter can capture, the more accurate it is, and the more powerful and expensive. Since this is one of the most important and most costly parts of any digital camera, only very expensive medium-format cameras carry a 16-bit converter.
As we have already learned, bit depth corresponds to the number of tone levels per color channel; with 8-bit color depth we can therefore speak of 8-bit encoding. One simple reason we use 8 bits is that computer data is stored in bytes, and one byte equals exactly eight bits. Files can therefore only be saved at bit depths that are a whole multiple of a byte, which is why there are no 10-, 12-, or 14-bit JPEG, TIFF, BMP, or PNG bitmap files, only 8- or 16-bit ones.
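A quick check of the byte arithmetic (an illustration only):

```python
# Only depths that are whole multiples of a byte (8 bits) map cleanly to storage.
for bits in (8, 10, 12, 14, 16):
    fits = "yes" if bits % 8 == 0 else "no"
    print(f"{bits:2d} bits per channel -> whole bytes: {fits}")

# Storage per RGB pixel (three channels):
print("8-bit RGB pixel: ", 3 * (8 // 8), "bytes")   # 3 bytes
print("16-bit RGB pixel:", 3 * (16 // 8), "bytes")  # 6 bytes, double the size
```

This also explains why a 14-bit file saved as 16-bit simply doubles in size: the extra bits pad each channel out to two whole bytes.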
In conclusion let us sum up the advantages of a native 16-bit depth:
- Allows a wide range of adjustments thanks to a much larger amount of data per pixel;
- Smoother, more continuous gradations in color and monochrome tones;
- More detail in very dark and very bright tones;
- Significantly less color distortion from the rounding errors that occur when editing the image.