HDR Images For Photography (Updated: 10/03/2008 01:09 PM)
From http://www.hdrsoft.com/resources/dri.html#bit_depth
What is Dynamic Range?
The dynamic range is the ratio between the maximum and minimum values of a physical measurement. Its precise definition depends on what it refers to:
· For a scene: the ratio between the brightest and darkest parts of the scene.
· For a camera: the ratio of saturation to noise. More specifically, the ratio of the intensity that just saturates the camera to the intensity that just lifts the camera response one standard deviation above the camera noise.
· For a display: the ratio between the maximum and minimum intensities emitted from the screen.
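The camera definition above is easy to turn into numbers. A minimal Python sketch; the sensor figures (saturation and noise, in electrons) are hypothetical, chosen only for illustration:

```python
import math

# Hypothetical sensor figures (illustrative only): the pixel saturates
# at 40,000 electrons and the noise floor is 10 electrons.
saturation_e = 40_000
noise_e = 10

# Dynamic range is the ratio of saturation to noise
dynamic_range = saturation_e / noise_e       # 4,000:1
stops = math.log2(dynamic_range)             # the same ratio in EV (stops)

print(f"{dynamic_range:.0f}:1, i.e. {stops:.1f} stops")
```

Photographers usually quote dynamic range in stops (powers of two), so a 4,000:1 sensor covers about 12 stops.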
What is the unit of dynamic range?
Dynamic range is a ratio and as such a dimensionless quantity. In photography and imaging, the dynamic range represents the ratio of two luminance values, with the luminance expressed in candelas per square meter (cd/m²).
The range of luminance human vision can handle is quite large. While the luminance of starlight is around 0.001 cd/m², that of a sunlit scene is around 100,000 cd/m², which is a hundred million times higher. The luminance of the sun itself is approximately 1,000,000,000 cd/m². The human eye can accommodate a dynamic range of approximately 10,000:1 in a single view.
To more easily represent such different values, it is common to plot luminance on a logarithmic scale. The scale below represents the log base 10 of the luminance, so going from 0.1 to 1 spans the same distance as going from 100 to 1,000, for instance.

[Log-scale luminance chart, in cd/m²: ticks at 0.00001, 0.001, 1, 100, 10,000, 1,000,000 and 10^8, spanning starlight, moonlight, indoor lighting, outdoor shade, outdoor sunlit scenes, and the sun]

A scene showing the interior of a room with a sunlit view outside the window, for instance, will have a dynamic range of approximately 100,000:1.
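The log-scale property mentioned above is easy to verify in a few lines of Python: equal distances on the scale correspond to equal luminance ratios.

```python
import math

# On a log10 axis, going from 0.1 to 1 covers the same distance as
# going from 100 to 1000, because both are a 10x ratio.
def log_distance(a, b):
    return round(math.log10(b) - math.log10(a), 9)

print(log_distance(0.1, 1))       # 1.0
print(log_distance(100, 1000))    # 1.0

# The full starlight-to-sun range, 0.001 to 1e9 cd/m^2, is a
# 1,000,000,000,000:1 ratio yet only 12 units on the log scale:
print(log_distance(0.001, 1e9))   # 12.0
```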
What is an HDR image?
The dynamic range of real-world scenes can be quite high -- ratios of 100,000:1 are common in the natural world. An HDR (High Dynamic Range) image stores pixel values that span the whole tonal range of real-world scenes. Therefore, an HDR image is encoded in a format that allows a very large range of values, e.g. floating-point values stored with 32 bits per color channel.
Another characteristic of an HDR image is that it stores linear values. This means that the value of a pixel in an HDR image is proportional to the amount of light measured by the camera. In this sense, HDR images are scene-referred: they represent the original light values captured for the scene.
Whether an image may be considered High or Low Dynamic Range depends on several factors. Most often, the distinction is made based on the number of bits per color channel that the digitized image can hold. However, the number of bits itself may be a misleading indication of the real dynamic range that the image reproduces -- converting a Low Dynamic Range image to a higher bit depth does not, of course, change its dynamic range.
· 8-bit images (i.e. 24 bits per pixel for a color image) are considered Low Dynamic Range.
· 16-bit images (i.e. 48 bits per pixel for a color image) resulting from RAW conversion are still considered Low Dynamic Range, even though the range of values they can encode is much higher than for 8-bit images (65,536 versus 256). Converting a RAW file involves applying a tonal curve that compresses the dynamic range of the RAW data so that the converted image displays correctly on low dynamic range monitors. The need to adapt the output image file to the dynamic range of the display is what dictates how much the dynamic range is compressed, not the output bit depth. By using 16 instead of 8 bits, you gain precision, but you do not gain dynamic range.
· 32-bit images (i.e. 96 bits per pixel for a color image) are considered High Dynamic Range. Unlike 8- and 16-bit images, which can take only a finite number of integer values, 32-bit images are coded using floating-point numbers, so the range of values they can represent is practically unlimited. It is important to note, though, that storing an image in a 32-bit HDR format is a necessary condition for an HDR image but not a sufficient one: when an image comes from a single capture with a standard camera, it remains a Low Dynamic Range image, regardless of the format used to store it.
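The difference between integer and float encodings can be sketched in a few lines of Python. The scene values below are hypothetical linear luminances relative to display white (1.0):

```python
# Hypothetical linear scene values, relative to display white (1.0);
# 6.0 and 250.0 stand in for bright highlights such as a window or the sky.
scene = [0.02, 0.5, 1.0, 6.0, 250.0]

# 8-bit integer encoding: scale to 0..255 and clamp.
# Every value above display white collapses to 255.
ldr = [min(255, round(v * 255)) for v in scene]

# 32-bit float encoding: the linear values are stored as-is.
hdr = [float(v) for v in scene]

print(ldr)  # [5, 128, 255, 255, 255] -- the highlights are clipped
print(hdr)  # [0.02, 0.5, 1.0, 6.0, 250.0] -- the full range survives
```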
There are various formats available to store HDR images; Radiance RGBE (.hdr) and OpenEXR (.exr) are among the most commonly used. See Greg Ward's HDR Image Encodings page for an excellent overview of HDR formats.
But aren't we confusing Dynamic Range with bit depth here?
Good question. Bit depth and dynamic range are indeed separate concepts, and there is no direct one-to-one relationship between them.
The bit depth of a capturing or displaying device gives you an indication of its dynamic range capacity, i.e. the highest dynamic range that the device would be capable of reproducing if all other constraints were eliminated. For instance, a bit depth of 12 for a CCD tells you that the maximum dynamic range of the sensor is 4,096:1, but the captured dynamic range is likely to be much less once noise is taken into account (most 12-bit sensors have, on average, a dynamic range of only around 1,000:1).
In the case of an image file, the bit depth in itself does not tell much about the dynamic range captured or reproduced by the file.
First, the bit depth of an image file is not a reliable indicator of the dynamic range of the device that produced it. For instance, when a RAW file is converted into a 16-bit TIFF file in linear space, the real bit depth (and thus maximum dynamic range) of the captured data is most probably only 12 bits, which is the bit depth of standard digital cameras. The file is stored in 16 bits simply because 12 bits are not convenient for computers, but of course this does not change the dynamic range of the information stored.
Second, the bit depth of an image file is an even less reliable indicator of the dynamic range of the scene reproduced. When a 32-bit HDR image has been properly tone mapped, it will show the original dynamic range captured, even when it is saved in an 8-bit image format. This is why a tone mapped image is often confused with an HDR image. A tone mapped image is not an HDR image, as it no longer represents the original light values captured; it merely reproduces the dynamic range captured on standard monitors or prints.
Shouldn't 24-bit be higher than 16-bit? I'm lost with all those bit numbers!
There are two ways to "count" bits for an image: either the number of bits per color channel or the number of bits per pixel.
A bit is the smallest unit of data stored in a computer. For a grayscale image, 8-bit means that each pixel can take one of 256 levels of gray (256 is 2 to the power of 8).
For an RGB color image, 8-bit means that each of the three color channels can take one of 256 levels. Since each pixel is represented by 3 color channels in this case, 8 bits per color channel actually means 24 bits per pixel. Similarly, 16-bit for an RGB image means 65,536 levels per color channel and 48 bits per pixel.
To complicate matters, when an image is classified as 16-bit, it just means that it can store a maximum of 65,536 distinct values per channel. It does not necessarily mean that it actually spans that range. If the camera sensor cannot capture more than 12 bits of tonal values, the actual bit depth of the image will be at best 12-bit, and probably less because of noise.
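The two ways of counting are related by a factor of 3 for an RGB image, as this quick Python check shows:

```python
# Levels per channel and bits per pixel for an RGB image
for bits_per_channel in (8, 12, 16):
    levels = 2 ** bits_per_channel
    bits_per_pixel = 3 * bits_per_channel
    print(f"{bits_per_channel}-bit/channel -> {levels:,} levels, "
          f"{bits_per_pixel} bits/pixel")

# 8-bit/channel -> 256 levels, 24 bits/pixel
# 12-bit/channel -> 4,096 levels, 36 bits/pixel
# 16-bit/channel -> 65,536 levels, 48 bits/pixel
```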
The following table summarizes the above for the case of an RGB color image.
Type of digital support | Bit depth per color channel | Bit depth per pixel | Theoretical maximum dynamic range | In reality
12-bit CCD | 12 | 36 | 4,096:1 | real maximum limited by noise
14-bit CCD | 14 | 42 | 16,384:1 | real maximum limited by noise
16-bit TIFF | 16 | 48 | 65,536:1 | bit depth here is not directly related to the dynamic range captured
HDR image (e.g. Radiance format) | 32 | 96 | practically unlimited | real maximum limited by the captured dynamic range
How do I shoot an HDR image?
Most digital cameras are only able to capture a limited dynamic range (the exposure setting determines which part of the total dynamic range will be captured). This is why HDR images are commonly created from several photos of the same scene taken at different exposure levels.
Here are some recommendations for taking the different exposures for the HDR image:
1. Mount your camera on a tripod.
2. Set your camera to manual exposure mode. Select an appropriate aperture for your scene (e.g. f/8, or a smaller aperture if you need more depth of field) and the lowest ISO setting.
3. Measure the light in the brightest part of your scene (use spot metering, or in Av mode point the camera so that only the highlights fill the frame) and note the exposure time. Do the same for the darkest shadows of your scene.
4. Determine the number and values of the exposures needed. Take as a basis the exposure time measured for the highlights. Multiply it by 4 to find the next exposure with a spacing of 2 EV. Keep multiplying by 4 for the next exposures until you pass the exposure time measured for the shadows. (Note: for most daylight outdoor scenes excluding the sun, 3 exposures spaced by two EVs are often sufficient to properly cover the dynamic range.)
5. You can make use of Auto-Exposure Bracketing if your camera supports it, and if it allows a sufficient exposure increment and number of auto-bracketed frames to cover the dynamic range determined in step 4. Otherwise, you will have to vary the exposure times manually.
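Step 4 can be sketched in a few lines of Python. The metered times below (1/1000 s for the highlights, 1/15 s for the shadows) are hypothetical:

```python
def bracket_exposures(highlight_time, shadow_time, spacing_ev=2):
    """Exposure times from the highlight reading up past the shadow reading."""
    factor = 2 ** spacing_ev          # +2 EV = 4x the exposure time
    times = [highlight_time]
    while times[-1] < shadow_time:
        times.append(times[-1] * factor)
    return times

# Hypothetical readings: highlights meter at 1/1000 s, shadows at 1/15 s
times = bracket_exposures(1 / 1000, 1 / 15)
print([f"{t:.3f} s" for t in times])
# five exposures: 1/1000, 1/250, 1/60 s and two longer ones, passing 1/15 s
```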
What are HDR images used for in 3D rendering?
High Dynamic Range Images (HDRIs) are used for realistic lighting of 3D scenes through a technique called Image Based Lighting. Given that HDRIs store the whole range of real-world luminance information, Global Illumination algorithms use them to simulate natural light.
In order to capture the lighting of the scene in all directions, HDRIs intended for 3D lighting are often 360° panoramic images. These can be obtained by photographing a mirror ball (fast and easy but low quality), by stitching several views, or by direct capture with a high-end panoramic camera. A 360° panorama is not strictly necessary, though -- an HDRI taken from a single view, preferably with a wide-angle lens, may be sufficient in some cases.
What is always necessary is to use a real High Dynamic Range Image. A JPEG image produced by your camera is not an HDRI and will not work for Image Based Lighting. First, because it is unlikely to capture the whole dynamic range of the scene; you will need to take several exposures to ensure that. Second, because its image values are non-linear (the non-linearity is what makes it look good on monitors), whereas rendering algorithms assume linear values, i.e. values that are proportional to the luminance captured.
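Undoing that non-linearity is a standard step. Here is a sketch of the sRGB decoding curve (IEC 61966-2-1) that converts an 8-bit display-encoded value back to a linear value of the kind renderers expect:

```python
def srgb_to_linear(value_8bit):
    """Decode an 8-bit sRGB value to linear light in [0, 1]."""
    c = value_8bit / 255.0
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Perceptual mid-gray (sRGB 118) is only about 18% in linear light,
# which is why display-encoded and linear values must not be mixed.
print(round(srgb_to_linear(118), 3))   # ~0.181
print(round(srgb_to_linear(255), 3))   # 1.0
```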
What do you mean by Dynamic Range Increase?
A general problem in photography is the rendering of scenes presenting very bright highlights and deep shadows. The problem already exists with traditional silver halide photography, and is more pronounced with slide films. In digital photography, the problem is made even worse, as the linear response of the sensor imposes an abrupt limit on the dynamic range captured once the sensor's capacity is reached.
This is why you cannot capture what the human eye sees in an HDR scene with a standard camera. If you capture details in the shadows thanks to long exposure times, you get blown-out highlights. Conversely, you can capture details in the highlights with short exposure times, but you then lose contrast in the shadows.
Creating an HDR image from differently exposed shots is a way to solve this problem. However, HDR images present a major drawback for photography: they cannot be displayed correctly on standard computer screens, and reproduce even less well on paper.
What we call Dynamic Range Increase is the process of correctly reproducing the highlights and shadows of a high-contrast scene on common monitors and printers -- that is, producing a standard 24-bit image that represents the original high dynamic range scene as the human eye saw it.
There are basically two ways to increase the dynamic range of digital photographs or scanned films.
1. Exposure blending
This process merges differently exposed photographs of the scene into an image with details in both highlights and shadows.
2. Tone Mapping
This process compresses the tonal range of an HDR image of the scene in order to reveal its details in highlights and shadows. The input HDR image is either:
· generated from differently exposed photos, or
· produced by an HDR camera (to date, very few cameras offer direct capture of HDR data; the SpheroCam HDR is the best known of those).
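A minimal exposure-blending sketch in Python (an illustration of the idea, not any particular product's algorithm): two aligned exposures are merged per pixel, weighting well-exposed mid-tone values more heavily than clipped or very dark ones.

```python
def weight(v):
    # Hat function: full confidence at mid-gray (128), none at 0 or 255
    return max(0.0, 1.0 - abs(v - 128) / 128.0)

def blend(under, over):
    """Merge two aligned 8-bit exposures pixel by pixel."""
    out = []
    for u, o in zip(under, over):
        wu, wo = weight(u), weight(o)
        if wu + wo == 0.0:
            out.append((u + o) / 2.0)   # both clipped: fall back to the mean
        else:
            out.append((wu * u + wo * o) / (wu + wo))
    return out

under = [10, 60, 200]    # short exposure: dark shadows, usable highlights
over  = [80, 180, 255]   # long exposure: usable shadows, blown highlights
print([round(v) for v in blend(under, over)])   # [72, 127, 201]
```

The blown highlight (255 in the long exposure) gets almost no weight, so the result there is dominated by the short exposure.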
What is Tone Mapping?
Tone Mapping is the process of converting the tonal values of an image from a high range to a lower one. For instance, an HDR image with a dynamic range of 100,000:1 may be converted into an image whose tonal values range from just 1 to 255.
You may wonder why anyone would want to reduce the tonal range when an HDR image provides so many benefits compared to a Low Dynamic Range image. After all, HDR images contain a much higher level of detail and are closer to the range of human vision. The reason is simple: standard display devices can only reproduce a low dynamic range (around 100:1 to 200:1), and for paper the range is even lower.
So, the goal of Tone Mapping is to reproduce the appearance of images having a higher dynamic range than the reproducing medium, such as prints or standard monitors.
Many of the scenes we photograph have a high contrast or, properly speaking, a high dynamic range: part of the scene is in the shadows, part in the highlights. Photographers have to deal with two types of issues with such High Dynamic Range scenes.
· Issue 1: Camera limitation
The first issue is capturing the dynamic range of the scene. This is commonly addressed by taking several photos of the scene under different exposure settings, and then merging those photos into an HDR image.
· Issue 2: Display limitation
The second issue is reproducing the dynamic range captured on low dynamic range displays -- that is, ensuring that the details in the highlights and shadows of the HDR image can be correctly viewed on prints and standard monitors in spite of their limited dynamic range capability. Tone mapping deals specifically with this issue of reproducing the dynamic range captured.
In a way, tone mapping has the same purpose as exposure blending, which is traditionally used in digital imaging for processing HDR scenes. The differences and similarities between the two are detailed here.
Types of tone mapping
Tone mapping algorithms scale the dynamic range down while attempting to preserve the appearance of the original image captured. Tone mapping operators are divided into two broad categories: global and local.
· Global operators
Each pixel is mapped based on its intensity and on global image characteristics, regardless of the pixel's spatial location. An example of a global tone mapping operator is a tonal curve.
Global tone mapping works well for processing 12-bit sensor data, but is less able to produce photographically pleasing images when the dynamic range of the scene is particularly high. This is because all pixels of the image are processed in the same way, regardless of whether they are located in a bright or dark area. The result is often a tone mapped image that looks "flat", having lost its local details in the conversion process.
· Local operators
The pixel's location in the image is taken into account when determining the appropriate scaling for that pixel. So, a pixel of a given intensity will be mapped to a different value depending on whether it is located in a dark or bright area.
Local tone mapping requires looking up the surrounding values for each pixel mapped, which makes it slower (memory access is the major speed bottleneck on today's computers) but tends to produce more pleasing results (our eyes react locally to contrast). If done correctly, this results in an image that preserves local contrast as well as details in highlights and shadows, as shown in these examples.
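As a concrete instance of the global category, here is a sketch of a classic global curve, L/(1+L), from Reinhard et al.'s "Photographic Tone Reproduction for Digital Images". Each pixel is mapped independently of its neighbors, which is exactly what makes the operator global:

```python
def reinhard_global(L):
    """Map any linear luminance L >= 0 into [0, 1)."""
    return L / (1.0 + L)

# Luminances spanning 100,000:1 all land in display range,
# with highlights compressed far more than shadows:
for L in (0.01, 1.0, 100.0, 1000.0):
    print(f"{L:>8} -> {reinhard_global(L):.4f}")
```

A local operator would instead compute a different curve per region, e.g. by dividing L by a local average before applying the compression.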
Is RAW conversion related to tone mapping?
Yes. Tone mapping is necessary whenever the image to be reproduced has a higher dynamic range than the reproducing medium, and this is the case with the RAW data of digital cameras. A standard 12-bit sensor may be able to capture a tonal range of 1,000:1, which is much more than your monitor or prints can reproduce (standard display devices have a dynamic range of about 100:1). So, when a camera (or a RAW converter) processes 12-bit RAW data into an image that looks good on your 8-bit monitor, this is a form of tone mapping.
However, tone mapping 12 bits' worth of tonal range is relatively simple compared to doing it for a "real" high dynamic range, such as the 100,000:1 or more captured in 32-bit images. This is why tone mapping of HDR images is an active area of research, and why people usually associate the term "tone mapping" with the work done in this area.
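The simple end of this spectrum can be sketched in a few lines. The curve below is a plain 1/2.2 gamma used purely as an illustrative stand-in; real RAW converters apply their own proprietary tonal curves:

```python
def raw_to_8bit(v12, gamma=2.2):
    """Gamma-compress a 12-bit linear sensor value to an 8-bit display value."""
    linear = v12 / 4095.0                 # normalize the 12-bit value to 0..1
    return round(255 * linear ** (1 / gamma))

# Shadows are stretched, highlights compressed:
print(raw_to_8bit(41))     # ~1% linear  -> 31
print(raw_to_8bit(2048))   # 50% linear  -> 186
print(raw_to_8bit(4095))   # 100% linear -> 255
```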
URL to this page:
http://nade.dk/web/nade/site.nsf/FramesetHP?readform&wmain=files/Hdr_Images_For_Photography