Very techie HDR questions

So with help from people here I've been using HDR in some situations with difficult lighting, and it's been successful. Now just working on how to make HDR photos look natural :)

These questions are just me trying to get an understanding of the background theory. It's not important to the process, I know that, it's just purely as theory. I like to know what's going on in the background.

Ok, so here goes:

1. Do I understand this correctly: An actual HDR image is 32 bit. So the images that people post on the internet and call HDR are not actually HDR? They are "derived from a HDR image"? A HDR image won't display on any monitor, so it is reduced back to 16 bit, making it no longer HDR? But the luminosity of the pixels has been adjusted to bring it back within the range that a monitor or printer can display?

2. When using the Merge to HDR feature in Photoshop, the interim 32 bit image that Photoshop displays appears to have blown areas and dark areas like a badly exposed image. But is this simply because the blown/black areas are not displaying on the monitor due to the monitor's limits? But the details are there in the file data?

3. When learning Photoshop I was taught that the number of "bits" in an image file was the "size" of the number that defines the colour of each pixel (i.e. more bits means the possibility of defining more colours). No one ever mentioned that the number of bits has anything to do with defining the luminosity/brightness of the pixels. Seeing as HDR is about the range of luminosity values throughout the photo, I don't understand why a HDR image must be 32 bits. I was taught that the bits define the colour range, not the luminosity range.

4. How is dynamic range measured? How do I know what dynamic range my monitor displays, or my printer prints?

5. How do we measure the dynamic range that our cameras can record in a single scene/image?

6. Still can't figure out what the heck tone mapping is. I've Googled it over and over and can't find a palatable explanation.

7. Does a human eye have a high dynamic range, or does our brain just trick us into seeing a HDR image by piecing together an image from sections it captures at different apertures of our eye's iris?

8. Ok, let's say we have a scene where the darkest pixel has luminosity value 1 and the brightest has value 10. But our monitor can only display a range of 5 luminosity values. Why can't the monitor just squish the range? So pixels with values 1 and 2 display as value 1 on the monitor, those with values 3 and 4 display as 2... values 9 and 10 display as 5. Result is that the luminosity range is 1 to 5 and can be displayed on the monitor. Is the solution not as simple as that? Why does Photomatix or whatever software need to do anything else?
 
Some good questions there, hope you get some answers. I'd like to see them as well. Wonder if this might have had a better chance of a reply in the main TalkPhotography thread? We shall see.
 
Let me try to answer this as best I can.

Firstly our eyes have a dynamic range of about 1,000,000:1, which is the range between the darkest scene we can see and the brightest - think of a dark night with no lighting apart from a few stars versus, say, the seaside at noon on a hot day.

But our retinas DON'T have that much dynamic range!

Because we "cheat" - the iris in our eyes acts like an automatic camera diaphragm and opens and closes depending on the amount of light available so we can see ultra dark scenes and ultra bright scenes.

And our camera works the same way: a sensor (the retina) with a fixed range, and a diaphragm which alters the amount of light falling on it.

But the dynamic range of a sensor is, like our retinas, very limited and only capable of recording a limited range of brightness at any one time.

On a dull day this doesn't matter because the dynamic range of a dull day doesn't outstrip the dynamic range of a sensor.

But on a bright day it can, and often does, outstrip this range.

To overcome this inherent problem, HDR programs came along: more than one picture is taken of the same scene and the shots are then combined to attempt to increase the DR over what is possible with only one exposure.
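If it helps to see the idea in code, here's a minimal sketch in Python/NumPy. The function name, the hat-shaped weighting and the assumption of already-linearised input are my own illustration, not any particular program's algorithm:

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge bracketed shots into one high-dynamic-range radiance estimate.

    images: list of float arrays scaled to 0..1, assumed already linearised
    (i.e. the camera's response curve has been removed).
    exposure_times: shutter time of each shot in seconds.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        # Trust mid-tones most; near-black and near-white pixels are
        # noisy or clipped, so give them little weight.
        weight = 1.0 - np.abs(img - 0.5) * 2.0
        # Dividing by the exposure time turns each shot into an
        # estimate of the scene's radiance on a common scale.
        numerator += weight * (img / t)
        denominator += weight
    return numerator / np.maximum(denominator, 1e-8)
```

The result can hold values far above "white", which is exactly why it cannot go straight to a monitor.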

And this works extremely well. However, the DR possible with this technique cannot be shown on a normal monitor, which has a DR of about 600-1000:1.

And a print has a dynamic range of about 100:1.

So the DR of a photograph has to be "squashed" to fit into this very limited dynamic range.

And that is what "Tone Mapping" does.

It adjusts the relative brightness of the merged exposures to bring them within the DR of a monitor or a print as much as possible, and a range of controls allows for "fine tuning" of the final image.
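A toy example of one well-known global operator, the Reinhard curve L/(1+L). This is just a sketch; real tone mappers work locally and are far more sophisticated:

```python
import numpy as np

def reinhard_tonemap(radiance, key=0.18):
    """Squash an unbounded radiance map (H, W, 3 floats) into 0..1 for display.

    Scale the image around its log-average luminance, then compress
    with L / (1 + L), which squeezes highlights hardest.
    """
    luminance = radiance.mean(axis=-1)               # crude luminance proxy
    log_avg = np.exp(np.log(luminance + 1e-8).mean())  # geometric mean brightness
    scaled = radiance * (key / log_avg)              # expose for a mid-grey "key"
    return scaled / (1.0 + scaled)                   # bright values compress hardest
```

Dark pixels pass through almost unchanged while very bright ones are pulled down towards 1.0, which is how the huge merged range fits a monitor without crushing the shadows.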

Now to answer some of your questions.

1. The HDR image is not necessarily 32 bit, but the program may work in 32 bits - Oloneo for instance works in 96 bits (32 bits per RGB channel) for extra precision - so that bit depth is the program's working precision, nothing to do with the actual DR of the image being processed.
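A toy illustration of why the working bit depth matters (nothing to do with any particular program's internals): an 8-bit integer pixel clips anything brighter than its maximum, while a 32-bit float keeps the value for later tone mapping.

```python
import numpy as np

bright = 3.7   # a scene value well above display "white" (1.0)

# 8-bit integers top out at 255, so the highlight detail is thrown away.
as_uint8 = np.uint8(np.clip(bright * 255, 0, 255))

# 32-bit floats happily store values above 1.0, so nothing is lost yet.
as_float32 = np.float32(bright)

print(as_uint8, as_float32)   # 255 3.7
```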

2. I don't use Photoshop, but if you are merging 2 or more images then what you see is probably Photoshop processing them (someone more au fait with Photoshop can, no doubt, answer this in more detail).

3. Since the number of bits defines a colour from dark to light, the luminosity is "built in", so red at the bottom end of the colour scale will be very dark while red at the high end of the colour scale will be very light.

Combine the three colours (RGB) in equal proportions and you also get a scale from black through grey to white, which is the luminosity scale.
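For example, with everyday 8-bit RGB values (picked purely for illustration):

```python
# Equal R, G and B give the grey (luminosity) scale:
black      = (0, 0, 0)
mid_grey   = (128, 128, 128)
white      = (255, 255, 255)

# A single channel carries its own brightness too:
dark_red   = (40, 0, 0)     # red low on the scale: very dark
bright_red = (255, 0, 0)    # red at the top of the scale: as light as red gets
```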

4. In fact it's very difficult to measure dynamic range, but most monitors are in the range of 600:1 to 1000:1 and glossy prints about 200:1. As explained, the output from a sensor is fixed but, depending on the camera, will probably range from about 9-14 stops, which shows the difficulty of trying to encompass a scene whose range can vastly exceed this (14 stops is 16384:1).
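The stops-to-ratio arithmetic is just powers of two, since each stop doubles the light:

```python
# Contrast ratio spanned by n stops is 2**n : 1.
for stops in (9, 12, 14):
    print(f"{stops} stops = {2 ** stops}:1")
# 9 stops = 512:1
# 12 stops = 4096:1
# 14 stops = 16384:1
```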

5. The simplest way is via the histogram on the camera's LCD. If the scene's histogram cannot fit within it (it is cut off at either end) then the DR of that scene outstrips the ability of your camera's sensor to record it.

6. Already explained.

7. Already explained.

8. They already do - it's called gamma compression - but it is limited by the fact that monitors are not all equal, and also, if it were taken to the extent necessary to display high dynamic range scenes (and photos) correctly, then scenes and photos which were not high dynamic range would appear extremely flat and lifeless.
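To see the difference between question 8's straight "squish" and gamma compression, a quick sketch (the exponent 2.2 is the common display gamma; everything else is illustrative):

```python
import numpy as np

scene = np.linspace(0.0, 1.0, 11)   # normalised scene luminances

# Question 8's idea is a straight linear squash: every value is scaled
# by the same factor, so shadow detail is crushed as hard as highlights.
linear = scene * 0.5

# Gamma compression squashes non-linearly: highlights are compressed far
# more than shadows, preserving visible separation in the dark tones.
# E.g. a dark value of 0.1 maps to 0.05 linearly but to about 0.35 here.
gamma = scene ** (1.0 / 2.2)
```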

I hope this has helped to explain HDR a little bit for you.
 