So with help from people here I've been using HDR in some situations with difficult lighting, and it's been successful. Now I'm just working on how to make HDR photos look natural.
These questions are just me trying to get an understanding of the background theory. I know it's not important to the process; it's purely theory. I like to know what's going on in the background.
Ok, so here goes:
1. Do I understand this correctly: an actual HDR image is 32-bit. So the images that people post on the internet and call HDR are not actually HDR? They are "derived from an HDR image"? An HDR image won't display on any monitor, so it is reduced back to 16-bit, making it no longer HDR, but the luminosity of the pixels has been adjusted to bring it back within the range that a monitor or printer can display?
2. When using the Merge to HDR feature in Photoshop, the interim 32-bit image that Photoshop displays appears to have blown areas and dark areas, like a badly exposed image. Is this simply because the blown/black areas are not displaying due to the monitor's limits, even though the detail is there in the file data?
3. When learning Photoshop I was taught that the number of "bits" in an image file was the "size" of the number that defines the colour of each pixel (i.e. more bits means more possible colours). No one ever mentioned that the number of bits has anything to do with defining the luminosity/brightness of the pixels. Seeing as HDR is about the range of luminosity values throughout the photo, I don't understand why an HDR image must be 32-bit. I was taught that the bits define the colour range, not the luminosity range. (I've tried to show what I mean in the first sketch below this list.)
4. How is dynamic range measured? How do I know what dynamic range my monitor displays, or my printer prints?
5. How do we measure the dynamic range that our cameras can record in a single scene/image?
6. I still can't figure out what the heck tone mapping is. I've Googled it over and over and can't find a palatable explanation.
7. Does the human eye have a high dynamic range, or does our brain just trick us into seeing an HDR image by piecing together sections that it captures at different apertures of our eye's iris?
8. OK, let's say we have a scene where the darkest pixel has luminosity value 1 and the brightest has value 10, but our monitor can only display a range of 5 luminosity values. Why can't the monitor just squish the range? Pixels with values 1 and 2 display as value 1 on the monitor, those with values 3 and 4 display as 2, ... and values 9 and 10 display as 5. The result is that the luminosity range is 1 to 5 and can be displayed on the monitor. Is the solution not as simple as that? Why does Photomatix or whatever software need to do anything else? (The second sketch below shows the kind of squish I mean.)
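To make question 3 a bit more concrete, here's a rough sketch of the distinction I'm trying to pin down, written as a few lines of Python only for lack of a better notation (the pixel values are completely made up):

```python
# What I understand by "bits = colour": an 8-bit image stores each channel as
# an integer from 0 to 255, so the brightest value it can record is 255 and
# anything brighter in the scene is simply clipped.
eight_bit_pixel = (255, 180, 90)

# A 32-bit floating-point (HDR) pixel stores each channel as a real number,
# so values can go far beyond "display white" without clipping -- e.g. a sunlit
# highlight and a deep shadow recorded in the same file.
hdr_pixel = (14.75, 3.2, 0.0008)

print(eight_bit_pixel, hdr_pixel)
```

If that's right, then the extra bits are buying luminosity range (and precision), not just extra colour steps, which is the part no one ever explained to me.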
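And here's the kind of "squish" I had in mind for question 8, again just a toy sketch with invented numbers:

```python
# The naive "squish" from question 8: scene luminosities run from 1 to 10,
# but the monitor can only show values from 1 to 5.
scene = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def linear_squish(value, scene_min=1, scene_max=10, out_min=1, out_max=5):
    """Scale the whole scene range down proportionally into the monitor's range."""
    return out_min + (value - scene_min) * (out_max - out_min) / (scene_max - scene_min)

print([round(linear_squish(v)) for v in scene])
# -> [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
# The range now fits, but adjacent tones end up much closer together,
# which I assume is what makes a straight squish look flat.
```

That's the whole extent of what I imagine the software would need to do, which is why I don't understand what extra work Photomatix or the tone-mapping step is actually doing.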