Dynamic Range Question

The dynamic range of the sensor is built up from the dynamic range of every individual photosite. The lowest amount of light needed to record a signal greater than the base noise level gives you the deepest available black with detail, and the highest amount of light a photosite can record before becoming full, and blown to complete white, gives you the brightest available white with detail.

As sensors (photosites) extend their dynamic range, it is usually extended into the recordable shadow detail.

Which takes me back to my earlier explanation of why the number of photosites per unit area is important. Within the shadow areas, the more photosites capturing light, the greater the chance of recording detail and thus extending the dynamic range. When you only use part of the sensor you have fewer photosites available, so there is less chance of capturing enough photons to obtain shadow detail; an area of shadow will therefore go pure black sooner on the cropped sensor than it would when using the full frame, and there is a reduction in dynamic range.

Maybe going into more detail about photons would help. The theory of light energy arriving at a photosite as discrete photons of energy is the starting point.

When a lot of light is being reflected from a subject, there is a constant stream of photons. But in the deepest shadow areas far fewer photons are being reflected, and they are reflected randomly; it isn't a constant stream of photons.

As before I'm going to use simple numbers just to demonstrate the idea.

When you make the exposure, say it's one second, the photosites receiving light from the bright areas of the subject will get hit with thousands of photons, so exposure in the brighter areas is more consistent and predictable.

But during that same 1s exposure, a photosite receiving photons from the deepest shadow areas will receive a small number of randomly reflected photons.

It might be none, it might be 20, or it might be hundreds. So an individual photosite might record the part of the subject it's receiving light from as pure black. However, the photosite next to it, receiving a slightly different number of random photons, might record some detail.

The bigger the photosite, the greater the chance of it recording some of these infrequent, random photons. BUT equally, the more photosites receiving light from an area of the subject, the more chance there is that "some" of them will record detail. And the demosaicing process can extrapolate data from the photosites with data to the areas where a photosite didn't record data.

Which, as I say, takes me back to my earlier example, where the FF sensor was using data from 1500 photosites to gather information from the same size of subject area for which the cropped sensor had only 1000 photosites available.

To give an unrealistic "example" comparison:
In a very deep shadow the 1000 photosites available in the cropped sensor might not record any detail. But because you have additional photosites available in the full frame sensor covering the same subject area, you may still have 1000 photosites failing to record any detail while some of the additional 500 photosites record something.

This provides some detail above pure black, and extends the dynamic range of the FF sensor compared to the crop sensor.

This also applies to the DXO testing, because their tests will have this exact same issue when measuring details in the shadow areas.
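To put rough numbers on the shadow argument above: a minimal simulation sketch, assuming Poisson-distributed photon arrival; the 1000 vs 1500 site counts echo the earlier example, while the photon rate and noise floor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented illustrative values: deep shadow averages 0.5 photons per
# photosite per exposure, and a site needs >= 2 photons to rise above
# the noise floor.
MEAN_PHOTONS = 0.5
NOISE_FLOOR = 2

def sites_with_detail(n_sites):
    """Count photosites whose Poisson photon count clears the noise floor."""
    photons = rng.poisson(MEAN_PHOTONS, n_sites)
    return int(np.sum(photons >= NOISE_FLOOR))

# Same subject area: 1000 photosites on the crop, 1500 on full frame.
print("crop (1000 sites):", sites_with_detail(1000), "record detail")
print("FF   (1500 sites):", sites_with_detail(1500), "record detail")
```

More sites covering the same subject area means more chances for some of them to clear the floor, which is the claimed extension of shadow DR.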
Thanks for the explanation. Thanks to the link David posted, I do already understand how photons hit the sensor and that it is not uniform. This explanation makes complete sense when actually capturing the image; it's once the image is captured and then cropped that I don't understand why the DR drops. However, I'm starting to think that maybe it doesn't, and it's just when using crop mode in camera that it drops, for the reason you state (y)
 
Any image is made up of random photon strikes; this is the same randomness that makes up the Airy discs formed when an image is formed from a point source, or a quantum split-beam experiment. The result is that an image becomes more defined the more photon strikes are recorded. The fewer the photon strikes recorded, the less the detail and the noisier the image appears. Sensor pixel sites are essentially tiny capacitors that capture photons/electrons during an exposure. At the end of the exposure some of the pixels will have recorded no photons from the image, most will have recorded a finite number of photons, and still others will have recorded a totally full or overflowing number. In effect, sensors are bean counters.
Some of these recorded electron/photon strikes will be caused by random system noise, and others by shot noise caused by the random nature of photons. Counterintuitively, the more photons, the more shot noise. All noise degrades the sharpness and tonal clarity of the image. However, as we have seen, the greater the number of photon hits, the better defined the image.
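As a quick worked illustration of the shot-noise point, assuming ideal Poisson statistics where noise equals the square root of the photons received:

```python
import math

# Total shot noise rises with more photons, but the signal-to-noise
# ratio still improves as sqrt(N).
for photons in (100, 1_000, 10_000, 100_000):
    noise = math.sqrt(photons)
    print(f"{photons:>7} photons -> shot noise {noise:7.1f}, SNR {photons / noise:6.1f}")
```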

The only essential difference between a large sensor and a smaller one of the same specification is the magnification the resulting image is viewed at.
If there is no difference in magnification one image will be identical to the other except in size.

There is of course the possibility that larger more expensive sensors are built to higher standards than smaller ones, but I doubt it.
Magnified images are always degraded in proportion to the magnification. Faults become more apparent.
 
Thanks for the explanation. Thanks to the link David posted, I do already understand how photons hit the sensor and that it is not uniform. This explanation makes complete sense when actually capturing the image; it's once the image is captured and then cropped that I don't understand why the DR drops. However, I'm starting to think that maybe it doesn't, and it's just when using crop mode in camera that it drops, for the reason you state (y)
Cropping the image on the computer is still linked to number of pixels providing data. When you crop, you lose data.

Again going back to an earlier post. Generally, the main subject is in the midtones, and when you crop you are most likely to lose areas of deeper shadow and brighter highlights, so you end up with a histogram bunched up in the middle (lower dynamic range) compared to the one you started with.
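A worked sketch of that histogram-bunching point, using an invented grid of luminance values where the extremes sit near the edges of the frame:

```python
import numpy as np

# Invented relative luminances; the deepest shadow (0.5) and brightest
# highlight (512) sit near the edges and are lost to the crop.
scene = np.array([
    [0.5,   8,   8,   8],
    [  8,  64,  64,   8],
    [  8,  64,  64, 512],
])
crop = scene[:2, 1:3]   # keep a central region

print("full image:", np.log2(scene.max() / scene.min()), "stops")  # 10.0
print("cropped   :", np.log2(crop.max() / crop.min()), "stops")    # 3.0
```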
 
That is comparing the tonal range of the subject, not the DR of the sensor.
The DR of the sensor is a constant. It does not change with subject or area.
It is the range from darkest to lightest that a single pixel is capable of capturing.
 
Based on this then, is it fair to say that if you crop an image in post you don't lose any dynamic range?
 
You might lose subject tonal range, in fact that is quite likely. But it cannot change the DR of what the sensor can capture in camera.
Taken ad absurdum, a crop to a single pixel in pp would have zero tonal range.

It is better to describe DR as applying to the sensor's capability, and tonal range as applying to the final image. They are only indirectly related.
 
Thanks, the first bit in bold makes sense, and is how my brain is thinking. I kind of understand your second bit in terms of something like your example; however, when it comes to something like a landscape it then falls apart again for me.

For example, if I'm taking a landscape on a FF camera and also an APS-C camera using the same effective focal length, then the cameras are going to be in the same position, therefore one is not closer to the light than the other. I understand that the larger sensor gathers more light overall simply because it is bigger, but each section of both sensors captures the same amount of light.

To try and illustrate what I mean, assume rain is light and buckets are the photosites. If you have a FF sensor that is made up of a grid of 8 buckets wide and 6 buckets high and expose it to rain, you may find that each bucket has collected roughly 5 litres each, 240 litres in total. If you remove the outer 'circle' of buckets you'd be left with a grid 6 buckets wide and 4 buckets high. Whilst the total water is now 'only' 120 litres, each bucket still has 5 litres in it, so each bucket has been exposed to the same amount of rain (light). This to me suggests that the shot noise of each 'bucket' would therefore be the same, and therefore dynamic range would be the same. I don't see what influence the removed outer buckets would have on the inner ones?
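The bucket analogy is easy to simulate. A minimal sketch, assuming Poisson "rainfall" and using the post's 8x6 grid with a mean of 5 litres per bucket (everything else invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# 6 rows x 8 columns of buckets, mean 5 "litres" of rain (photons) each.
grid = rng.poisson(5.0, size=(6, 8))
inner = grid[1:-1, 1:-1]   # remove the outer ring -> 4 x 6 buckets

print(f"full grid : {grid.mean():.2f} litres/bucket, {grid.sum()} total")
print(f"inner grid: {inner.mean():.2f} litres/bucket, {inner.sum()} total")
# Per-bucket statistics are essentially unchanged; only the total differs.
```

Which is exactly the tension the rest of the thread tries to resolve: per-bucket (per-pixel) behaviour doesn't change, but whole-image measures do.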

I'm clearly missing something in my logic; however, I followed a link from the link David sent me above and it seems to confirm my suspicions.


In this link they're trying to demonstrate that a larger sensor has less noise, but their example shows otherwise. They themselves say there's not much difference initially; however, if they then downsample the larger sensor there's less noise. This suggests to me that it is not necessarily true that larger sensors are better at noise handling (and DR) just because they are larger per se, but because they have bigger photosites and/or are downsampled more for viewing.

I'm also still struggling to see why noise affects dynamic range. Reading the link that David posted about shot noise suggests that not all pixels are subject to noise, or at least not to the same degree; therefore, in my head at least, some pixels are subject to the brightest brights and some to the darkest darks and therefore have the same dynamic range. Yes, a larger sensor may have more of the brightest brights and more of the darkest darks, but AFAIK DR is not a measure of how much brightness/darkness there is but simply the difference between the bright and dark parts.
The DR capability measurement is of the sensor as a whole; it is not of the individual pixel's full well capacity; although it is directly related. DR is a measure of the minimum light the sensor can record, and the maximum light the sensor can record, not per pixel... and it doesn't really tell you how many levels (values/stops) it can discern in-between the min-max (DXO has a separate "tonal range" measurement for that). You will note that the dynamic range decreases by approximately 1 stop (.75-1) when operating in APS mode; this is the same ~ 1stop advantage a FF sensor has in terms of low light performance (noise), and for the same reason... discarded light collecting area/potential.

Photon shot noise is random... it's like me throwing darts at a board. I might be aiming for the triple 20, but the dart may very well land in a cell next to it instead. And if I throw enough darts at the board, eventually all of the cells will get their fair share; collect more light (darts) and the *appearance* of noise (empty spaces) decreases. In actuality the total noise increases, but it increases much less than the total light (photon shot noise is equal to the sq rt of photons received). I.e. the signal to noise ratio is much better. So, no, noise is not equal between the photosites... if it were, it would simply be "exposure" and not noise (differences in exposures).
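A minimal sketch of the dartboard picture, assuming uniformly random hits on a 400-cell board; the gaps (visible noise) shrink as throws increase even while absolute shot noise grows:

```python
import numpy as np

rng = np.random.default_rng(3)

CELLS = 400   # a 20 x 20 "dartboard"
for darts in (400, 4_000, 40_000):
    hits = rng.integers(0, CELLS, size=darts)   # random cell per dart
    empty = CELLS - np.unique(hits).size        # cells never hit
    print(f"{darts:>6} darts: {empty:3d} empty cells, shot noise ~{darts ** 0.5:.0f}")
```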

The reason noise affects dynamic range is that it limits the minimum signal that can be discerned/recorded. One photon of visible light is capable of releasing one electron into capacitance. And the exposure (voltage) of a photosite is its charge (conversion gain/electrons converted from photons) divided by its capacitance... e.g. a smaller capacitance (photosite) is more "exposed" by the same number of photons. The ADC then simply converts that voltage relative to the maximum the photosite can hold/deliver (its full well capacity) as a relative exposure value (e.g. 128, 50%).
But there are all sorts of other electrical things going on... maybe a photon fails to release an electron, or it releases two instead. And maybe the transistor switching releases a little electricity into the signal path. And maybe the ADC can't discern the difference between .01mV and .014mV. These are just some of the types of noise that can be contributed by the camera itself. "Noise" is really just an error in the recorded exposure value. E.g. you get one pixel with 1mV from exposure, one pixel at 2mV (1 exp, 1 noise), and one pixel at 1mV from a transistor... which pixel is correct? You can't tell, and the camera cannot either (demosaicing algorithm, etc). And therefore the minimum value that can be discerned as being "correct" (enough) is higher, and thus the minimum DR value is higher. The exif of a raw file actually has this value recorded (the black point).

Dynamic range/tonality/noise etc are measures of what is "discernible" in the image as displayed/viewed. It is much like depth of field... it is variable based on the viewing/measuring conditions. The depth of focus and signal, noise, DR are fixed at the sensor and have measurable values there, but they are not the same as what will exist in the final image as viewed. I.e. display an image smaller (print/monitor/whatever) and the individual values/details are compressed/combined. Noise reduces, tonality reduces, DR reduces (min increases), and blur radius reduces (DOF increases)... because not all of the individual values exist anymore (in current state); it doesn't matter if you are trying to measure the values with your eyes or with an instrument. The opposite is also true if you display an image larger. And a smaller sensor requires more enlargement for any size of display.
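A rough sketch of the readout chain described above; the quantum efficiency, full well capacity, and ADC depth are all assumed values, not any real camera's:

```python
FWC_ELECTRONS = 60_000   # assumed full well capacity
QE = 0.5                 # assumed fraction of photons releasing an electron
ADC_LEVELS = 2 ** 14     # 14-bit ADC

def adc_value(photons):
    """photons -> electrons (clipped at full well) -> relative ADC value."""
    electrons = min(int(photons * QE), FWC_ELECTRONS)
    return round(electrons / FWC_ELECTRONS * (ADC_LEVELS - 1))

print(adc_value(60_000))    # half full -> mid-range value (~8192)
print(adc_value(200_000))   # over full -> clipped to 16383
```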
 
Thanks for this, it’s starting to make sense (y)
 
I am pretty sure that DR isn't measured from pure white to pure black.
It's the measure of the difference in how much light is discernible/recordable from black to white... it could be ten stops between levels 2 & 12, or ten stops between levels 5 & 15.
DXO's engineering DR uses a lower signal level for the noise floor/black point than PTP's photographic DR measure.
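The difference between the two measures can be reduced to which signal level is accepted as the floor. A hedged sketch with invented numbers (the real engineering and photographic DR definitions differ in detail):

```python
import math

FWC = 60_000               # electrons at clipping (illustrative)
ENGINEERING_FLOOR = 5      # read-noise-level floor, electrons (illustrative)
PHOTOGRAPHIC_FLOOR = 400   # stricter "usable detail" floor (illustrative)

print("engineering DR :", round(math.log2(FWC / ENGINEERING_FLOOR), 1), "stops")
print("photographic DR:", round(math.log2(FWC / PHOTOGRAPHIC_FLOOR), 1), "stops")
```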
 
That is comparing the DR tonal range of the subject not the DR of he sensor.
But there are two questions being asked, one about the sensor, which I have answered in other posts, and a secondary question about changes in dynamic range when you crop the file during processing. It was the latter question I was answering, so nothing to do with the sensor.
The DR of the sensor is a constant. It does not change with subject or area.
It isn't really, because as I've explained it depends on how many photosites you have per unit area.
It is the dynamic of darkest to slightest that. a single pixel is capable of capturing.
And, as I've explained when answering the question related to sensor DR, this is why DR is reduced when you crop the sensor.
 
It's the measure of the difference in how much light is discernible/recordable from black to white... it could be ten stops between levels 2 & 12, or ten stops between levels 5 & 15.
DXO's engineering DR uses a lower signal level for the noise floor/black point than PTP's photographic DR measure.
Yes, I understand this, I wasn't specifically aware of the differences between DXO and PTP, even though I knew there was a difference.
 
But there are two questions being asked, one about the sensor, which I have answered in other posts, and a secondary question about changes in dynamic range when you crop the file during processing. It was the latter question I was answering, so nothing to do with the sensor.

It isn't really, because as I've explained it depends on how many photosites you have per unit area.

And, as I've explained when answering the question related to sensor DR, this is why DR is reduced when you crop the sensor.


You are wrong.
The DR of a sensor is at pixel level; every pixel has the same DR capability. Cropping makes no difference.
When measuring the tonal values of an image every pixel can differ, but only within the DR range of the sensor.

sensor DR is about potential.
Tonal range capture is about actual, but it can not exceed the Sensor DR. That is baked in.

When you are talking about the DR of a sensor you are talking about the limits of the values that a sensor can capture. This usually reduces the higher the ISO.
But over a limited range it may be invariant. You are not talking about the contrast or tonal range of an actual image.
 
You are wrong.
The DR of a sensor is at pixel level; every pixel has the same DR capability. Cropping makes no difference.
When measuring the tonal values of an image every pixel can differ, but only within the DR range of the sensor.
I don't think I'm saying anything different. Cropping obviously doesn't make any difference to the DR of the photosite (I've never suggested otherwise), but it reduces the number of photosites available to the processor.
sensor DR is about potential.
Tonal range capture is about actual, but it can not exceed the Sensor DR. That is baked in.

When you are talking about the DR of a sensor you are talking about the limits of the values that a sensor can capture. This usually reduces the higher the ISO.
But over a limited range it may be invariant. You are not talking about the contrast or tonal range of an actual image.
But, as I said, in this instance, I wasn't answering a question about the sensor DR, I'm answering the question that was asked about image tonal range.
 
You are wrong.
The DR of a sensor is at pixel level; every pixel has the same DR capability. Cropping makes no difference.
When measuring the tonal values of an image every pixel can differ, but only within the DR range of the sensor.

sensor DR is about potential.
Tonal range capture is about actual, but it can not exceed the Sensor DR. That is baked in.

When you are talking about the DR of a sensor you are talking about the limits of the values that a sensor can capture. This usually reduces the higher the ISO.
But over a limited range it may be invariant. You are not talking about the contrast or tonal range of an actual image.
This has confused me again :lol:

IF DR of a sensor is purely about the limits of the values the sensor can capture then surely this would mean that it’s irrelevant how many photons hit the sensor as they are only responsible for the tonal range and not the actual DR capacity the sensor has. If this is true, then it would also mean that it would not make any difference if you used that same sensor in FF or crop mode as each pixel has the potential of capturing that DR.

I’m assuming from all of this then that when we have DR charts showing what the DR of a sensor is it is actual captured DR rather than the maximum potential a sensor has?
 
You are wrong.
The DR of a sensor is at pixel level; every pixel has the same DR capability.
Sorry, no... if that were the case, a sensor with smaller pixels which have smaller full well capacity would have a significantly reduced dynamic range capability. And that is simply not necessarily the case. In fact, if you compare the D850 to the D750, the D850 has an even greater DR.

[image: dynamic range chart comparing the D850 and D750]

It is true that each photosite has its own DR capability... its own full well capacity. And each photosite is relatively the same as the next (though they do exhibit some PRNU). But that is irrelevant when recording a scene. Because if you are using a sensor with smaller photosites the light/area is simply divided between more of them, and therefore they clip/fill at the same time as a larger photosite would.
Individual pixel DR/FWC is an engineering consideration that only exists as you are describing it when considered as a stand alone factor... the whole "larger pixels have more DR" idea.
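The "smaller photosites divide the light but clip together" point in arithmetic form, assuming well depth scales in proportion to photosite area (a simplification):

```python
ELECTRONS_PER_UM2 = 3_000    # charge generated per square micron (invented)
WELL_DEPTH_PER_UM2 = 1_500   # full well capacity per square micron (invented)

# Same exposure (light per unit area): a site with a quarter of the area
# collects a quarter of the electrons but also holds a quarter of the well,
# so both reach the same fill fraction and clip at the same time.
for area_um2 in (16.0, 4.0):
    electrons = ELECTRONS_PER_UM2 * area_um2
    fwc = WELL_DEPTH_PER_UM2 * area_um2
    print(f"area {area_um2:4.1f} um^2 -> {electrons / fwc:.1f}x full well")
```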
 
The DR capability measurement is of the sensor as a whole; it is not of the individual pixel's full well capacity; although it is directly related. DR is a measure of the minimum light the sensor can record, and the maximum light the sensor can record, not per pixel... and it doesn't really tell you how many levels (values/stops) it can discern in-between the min-max (DXO has a separate "tonal range" measurement for that). You will note that the dynamic range decreases by approximately 1 stop (.75-1) when operating in APS mode; this is the same ~ 1stop advantage a FF sensor has in terms of low light performance (noise), and for the same reason... discarded light collecting area/potential.

If you think about it, the maximum and minimum light that a sensor can capture is the same as can be captured by an individual pixel multiplied by the total number of pixels.
This is proportionally identical.
So DR can accurately be expressed as that of an individual pixel. It will have the same range of possibilities as the entire sensor.

Whatever DXO might say, the DR of a sensor must remain the same whatever irregular selection of pixels or whatever crop is chosen.
Of course an individual manufacturer may apply a different algorithm to a full frame and crop from it. But there is little logic in doing so.

The more likely reason for finding a difference is experimental error, or light scatter within the camera. Reflected light is likely to be greatest at the perimeter of the sensor.

I have not covered the rest of your post as I have addressed the basics of those points previously.
 
This has confused me again :LOL:

IF DR of a sensor is purely about the limits of the values the sensor can capture then surely this would mean that it’s irrelevant how many photons hit the sensor as they are only responsible for the tonal range and not the actual DR capacity the sensor has. If this is true, then it would also mean that it would not make any difference if you used that same sensor in FF or crop mode as each pixel has the potential of capturing that DR.

I’m assuming from all of this then that when we have DR charts showing what the DR of a sensor is it is actual captured DR rather than the maximum potential a sensor has?
The dynamic range of the "sensor" is down to multiple things (see SK66's long post).

The DR of the photosites is only one of them. I was only concentrating on photosites per unit area because in your example of "same sensor" "same camera" with the only difference when cropped being the number of photosites it seemed the easiest way to visualise why the cropping might be giving you a smaller dynamic range.
 
So DR can accurately be expressed as that of an individual pixel. It will have the same range of possibilities as the entire sensor.
Yes, and that is known; it is the photosite's full well capacity and black point... you may be able to find that information. For instance, the FWC of a D850 pixel is ~ 60,000 electrons; and the reference black level is 400. But that is only a DR consideration when factored in isolation. It is not the sensor's DR capability.

Again, it comes down to light/physical area; and that area in terms of "sensor DR" is sensor area. And larger areas actually receive/record more light for a given exposure; and therefore have a greater DR highlight capability. The fact that larger sensors also have larger photosites when comparing the same resolution capability (MP) contributes to some of the misconceptions.
 
I’m assuming from all of this then that when we have DR charts showing what the DR of a sensor is it is actual captured DR rather than the maximum potential a sensor has?
It is both... but it is not photosite full well capacity (which would be meaningless to most anyway).
 
Sorry, no... if that were the case, a sensor with smaller pixels which have smaller full well capacity would have a significantly reduced dynamic range capability. And that is simply not necessarily the case. In fact, if you compare the D850 to the D750, the D850 has an even greater DR.

[image: dynamic range chart comparing the D850 and D750]

It is true that each photosite has its own DR capability... its own full well capacity. And each photosite is relatively the same as the next (though they do exhibit some PRNU). But that is irrelevant when recording a scene. Because if you are using a sensor with smaller photosites the light/area is simply divided between more of them, and therefore they clip/fill at the same time as a larger photosite would.
Individual pixel DR/FWC is an engineering consideration that only exists as you are describing it when considered as a stand alone factor... the whole "larger pixels have more DR" idea.

If you change the size or quality of the photosites, of course you may end up with a different DR. That has never been in dispute. Nor have I ever suggested that all photosites are identical between cameras of whatever format.
However, some cameras allow both full frame and crop to APS, in which case there should be no difference in DR when selecting either format.
The light captured is always the product of the intensity of the light times the shutter-open time. The sensor size is not a factor; however, the efficiency of the capture and measurement from the pixel well will certainly have a bearing on the final value.

ANY differences in DR are not down to cropping, which is the argument at hand.
 
Sorry, no... if that were the case, a sensor with smaller pixels which have smaller full well capacity would have a significantly reduced dynamic range capability. And that is simply not necessarily the case. In fact, if you compare the D850 to the D750, the D850 has an even greater DR.

Perhaps there's more to DR than the area of the sensor and the size of the pixels / wells. Perhaps other things are in play and having an effect?

For example I have MFT cameras which have a higher DR than my FF Canon 5D had.
 
Let me try to explain it in terms of recording a scene, because that is what we are actually concerned with; how much dynamic range within a scene can be recorded at once. It doesn't really matter how much light we have on an individual/pixel basis (i.e. max only)... you can (almost) always find exposure settings suitable for that.

Say we have a scene that spans the range of black to white that the FF sensor is capable of... call it 10 stops. And we record the scene on a FF sensor so that the highlights just reach clipping (FWC). No problem, we have recorded 10 stops of dynamic range.

Now, in order to record that same scene on a smaller sensor/area (DX crop area) we have to concentrate it into that smaller area; we can do that by using a shorter FL lens. If that's all we do the previously recorded highlights clip because we have increased the light/area on the sensor, so the recorded DR is reduced due to loss of highlights.
Or, what we normally do is simultaneously reduce the light intensity/density collected by also using a smaller entrance pupil... a shorter lens at the same f#; this keeps "the exposure" the same and the highlights do not clip, but now we lose DR in shadow areas instead because we have reduced the light intensity from all areas equally... some of what previously recorded above black/noise now falls below. This will show up as increased noise in the shadows of the smaller source image/area.

In either case, the DR that exists (recorded or capability) per pixel in any area does not change, but that's not the relevant factor when recording an image.
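The "same f-number, shorter lens, less total light" trade can be put in numbers. A sketch using nominal sensor dimensions (the APS-C size is an assumption; formats vary):

```python
import math

# Total light from the scene scales with sensor area at the same exposure
# (same f-number and shutter speed).
ff_area = 36.0 * 24.0     # mm^2, full frame
aps_area = 23.5 * 15.6    # mm^2, a typical APS-C size

ratio = ff_area / aps_area
print(f"total light ratio {ratio:.2f}x -> ~{math.log2(ratio):.2f} stops")
# ~1.2 stops: roughly the "~1 stop" shadow/DR penalty discussed earlier.
```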
 
Perhaps there's more to DR than the area of the sensor and the size of the pixels / wells. Perhaps other things are in play and having an effect?

For example I have MFT cameras which have a higher DR than my FF Canon 5D had.
Certainly. BSI, gapless lens arrays, and other technological advancements/differences make a difference as well; I addressed that in my first response.
 
Yes, and that is known; it is the photosite's full well capacity and black point... you may be able to find that information. For instance, the FWC of a D850 pixel is ~ 60,000 electrons; and the reference black level is 400. But that is only a DR consideration when factored in isolation. It is not the sensor's DR capability.

Again, it comes down to light/physical area; and that area in terms of "sensor DR" is sensor area. And larger areas actually receive/record more light for a given exposure; and therefore have a greater DR highlight capability. The fact that larger sensors also have larger photosites when comparing the same resolution capability (MP) contributes to some of the misconceptions.

I do not accept that argument.
What is important to establish DR is the intensity of the light at the least and greatest visible difference captured on the sensor. This has been so since DR was measured as density on processed film and plotted as a gamma curve. The total amount of light falling on the sensor is irrelevant. It is the range of useful tones that has value.
Highlights are limited by the capacity of the photosites; more light results in clipped highlights, not greater DR.

For a totally invariant sensor there would be sufficient DR to capture an image so as to give a full range of tones whatever ISO might have been chosen.
This is of course impossible, but within a reasonable range it is doable.
A more reasonable target is to have sufficient DR to be able to shoot at the native ISO so as to resolve a full tonal image in moonlight or full sun, at any reasonable shutter speed and aperture.
 
Let me try to explain it in terms of recording a scene, because that is what we are actually concerned with; how much dynamic range within a scene can be recorded at once. It doesn't really matter how much light we have on an individual/pixel basis (i.e. max only)... you can (almost) always find exposure settings suitable for that.

Say we have a scene that spans the range of black to white that the sensor is capable of... call it 10 stops. And we record the scene on a FF sensor so that the highlights just reach clipping (FWC). No problem, we have recorded 10 stops of dynamic range.

Now, in order to record that same scene on a smaller sensor/area (DX crop area) we have to concentrate it into a smaller area; we can do that by using a shorter FL lens. If that's all we do the previously recorded highlights clip because we have increased the light/area, so the recorded DR is reduced due to loss of highlights.
Or, what we normally do is simultaneously reduce the light intensity/density by also using a smaller entrance pupil... a shorter lens at the same f#; this keeps "the exposure" the same and the highlights do not clip, but now we lose DR in shadow areas instead because we have reduced the light intensity in all areas equally... some of what previously recorded above black/noise now falls below. This will show up as increased noise in the shadows of the smaller source image/area.

In either case, the DR that exists (recorded or capability) per pixel in any area does not change, but that's not the relevant factor when recording an image.
Ahh, that makes sense, the penny is starting to drop ;)

One last thing to clear up: if we record a scene where the central part of the image has all of the tonal range in it and gives a DR of 15EV, and we then crop that image in post, retaining that same central part, would the DR still be 15EV?
 
You can't have both.
Staying with the FF/APS crop comparison... the cropped area would still retain the 15EV it originally recorded. BUT, the full sensor would have to have a DR capability of at least 16 stops... you can't record the full 16 stop capability of the full sensor within the cropped area.

This is because we are talking about the ability to record the difference in brightness (DR) of light sources within the scene. Another way to think of it is that varying the diameter of the lens aperture/entrance pupil as you change focal length for composition (crop) is varying the maximum amount of light being transmitted through the lens, and being recorded from a source area; and therefore it is varying the dynamic range transmittable/recordable (the minimum remains constant). I.e. the same f# for a constant exposure (light density per pixel) with different focal lengths is simultaneously variable light density/brightness of/from the source(s).

Of course you can have black and white side by side in two pixels regardless of crop/composition... that could be 2 stops of DR, or it could be 20 (theoretically). DR is a measure of how much brightness/density from the source causes black and white to be recorded.
 
The total amount of light falling on the sensor is irrelevant. It is the range of useful tones that has value.
Tonal values are important; but that is not what DR is measuring (with digital). It is only the difference between min/max. DXO has a different measure called "Tonal Range" measured in bits...
Highlights are limited by the capacity of the photosites; more light results in clipped highlights, not greater DR.
Correct. It will result in the same DR being recorded by the same area; unless there really is a point of zero light in the scene. And in the same way "less light" is not reduced DR, unless there is no point within the scene that exceeds the DR capability (pixel level FWC).

But that's not what we are talking about... the question was, why does a smaller sensor (area) have a lower DR capability. At a pixel level it doesn't, but then you are not measuring/comparing equivalent images/output/use/usability.

I.e. if you use DXO to compare the D850 to the D500, which are essentially the same sensors other than size, the "screen" tab shows the DR as being equivalent... that is pixel level DR of different outputs. But if you use the print tab the dynamic range is greater for the D850... this is essentially the "usable" effective DR capabilities.
(except they are engineering measurements, and higher than really useful)
For a totally invariant sensor there would be sufficient DR to capture an image so as to give a full range of tones whatever ISO might have been chosen.

Kind of lost me with this section; ISO has very little to do with the DR rating of a sensor. Increasing ISO can only reduce the DR recorded (due to pushing highlights over).... ISO isn't actually exposure/sensitivity with digital (other than base(s))
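The screen-vs-print distinction above can be sketched with the commonly cited normalization (downsampling to a reference 8 MP output averages noise, gaining about half a stop per doubling of pixel count); this is an approximation, not DXO's published method:

```python
import math

def print_dr(screen_dr_stops, sensor_mp, ref_mp=8.0):
    """Pixel-level ('screen') DR normalized to a reference output size."""
    return screen_dr_stops + 0.5 * math.log2(sensor_mp / ref_mp)

# Assumed 13-stop pixel-level DR for both; 45.7 MP (D850) vs 20.9 MP (D500).
print(round(print_dr(13.0, 45.7), 2))   # ~14.26 -> larger gain from more pixels
print(round(print_dr(13.0, 20.9), 2))   # ~13.69
```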
 
You can't have both.
Staying with the FF/APS crop comparison... the cropped area would still retain the 15EV it originally recorded. BUT, the full sensor would have to have a DR capability of at least 16 stops... you can't record the full 16 stop capability of the full sensor within the cropped area.


This is because we are talking about the ability to record the difference in brightness (DR) of light sources within the scene. Another way to think of it is that varying the diameter of the lens aperture/entrance pupil as you change focal length for composition (crop) is varying the maximum amount of light being transmitted through the lens, and being recorded from a source area; and therefore it is varying the dynamic range transmittable/recordable (the minimum remains constant). I.e. the same f# for a constant exposure (light density per pixel) with different focal lengths is simultaneously variable light density/brightness of/from the source(s).

Of course you can have black and white side by side in two pixels regardless of crop/composition... that could be 2 stops of DR, or it could be 20 (theoretically). DR is a measure of how much brightness/density from the source causes black and white to be recorded.
This is what doesn’t make sense to me, I don’t understand why the outer pixels would influence the inner ones in terms of dynamic range?

Let me try to explain my confusion using another example. Say you take a photo of a sunset through a glassless window, but framed your image so that the window frame was visible around the border of the whole frame. Now let’s assume that frame is neutral grey, so it is not contributing to the DR as it’s neither very dark nor very bright. In the central area of the frame you have the sun providing the whitest whites and you have some silhouettes providing the darkest darks.

In this example the highest DR is in the centre of the frame and this is measured at 15ev. If you then crop out the grey frame from the outer part of the scene then in my head the DR should still be 15ev as the bit you’ve cropped out is not contributing to the maximum dynamic range of the scene.

Does this make sense?
 
A lot of the confusion in this thread is people using different definitions of the phrase “dynamic range” - I count at least three implicit definitions.

PTP use two, DXO has two of the definitions here also (if you count Tonal).
 
This is what doesn’t make sense to me, I don’t understand why the outer pixels would influence the inner ones in terms of dynamic range?

Let me try to explain my confusion using another example. Say you take a photo of a sunset through a glassless window, but framed your image so that the window frame was visible around the border of the whole frame. Now let’s assume that frame is neutral grey, so it is not contributing to the DR as it’s neither very dark nor very bright. In the central area of the frame you have the sun providing the whitest whites and you have some silhouettes providing the darkest darks.

In this example the highest DR is in the centre of the frame and this is measured at 15ev. If you then crop out the grey frame from the outer part of the scene then in my head the DR should still be 15ev as the bit you’ve cropped out is not contributing to the maximum dynamic range of the scene.

Does this make sense?
As I said earlier, as with most things digital, its measurement is dependent on the viewing/measuring condition. You're stuck thinking of it as essentially a pixel measurement, and it's not.

Maybe it is easiest to just think of "DR" as simply a measurement of system noise... how far into shadows the camera can reliably read a signal. In theory, there are only two things that limit the DR capability/tonality; the camera's noise floor sets the minimum, and the ADC accuracy sets the maximum (it cannot be higher than 14 stops/bits with a 14-bit ADC). I think it's pretty evident to most how pixel level noise varies in visibility/measurability/impact depending on the viewing conditions (magnification).
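Those two limits in one line of arithmetic, with assumed sensor numbers:

```python
import math

ADC_BITS = 14                        # ceiling: what the ADC can encode
SENSOR_DR = math.log2(60_000 / 3)    # floor set by noise: assumed FWC/read noise

recorded_dr = min(SENSOR_DR, ADC_BITS)
print(f"sensor-limited {SENSOR_DR:.1f} stops, recorded at most {recorded_dr} stops")
```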

A few excerpts:
From PTP (link): [image excerpt]

From Dr. Theuwissen (link): [image excerpt]

From DXO (note that the comparison is for a standardized print): [image excerpt]

And as I said before, on a pixel level it does not vary (in the same way, for the same reasons): [image excerpt]
 
I don't think I'm ever going to get it :ROFLMAO:

Why smaller sensors have smaller DR makes complete sense to me now thanks to this thread, but I still can't get how cropping an image in post affects DR (unless you crop out the brightest and/or darkest section) as the DR has already been recorded and can't be changed. I should really just accept that it does and leave it be, but I do like to understand things and I fear curiosity will get the better of me :lol:
 
A lot of the confusion in this thread is people using different definitions of the phrase “dynamic range” - I count at least three implicit definitions.

PTP use two, DXO has two of the definitions here also (if you count Tonal).
PTP has only one definition "photographic dynamic range" (effective/usable DR).
For some cameras Bill has measurements for the crop mode as well (because they deviate from the simple "crop factor" value one could easily estimate).

DXO essentially has three (or four). There is the print Tonal Range measurement; which is nearly the same as PTP's definition (Engineering DR corrected for noise; measured in bits). There is the print (engineering) DR measurement. And then there is the pixel level "screen" measurement of both.

But none of the definitions include DR as we tend to think of it... "tonality/gradation."
 
I don't think I'm ever going to get it :ROFLMAO:

Why smaller sensors have smaller DR makes complete sense to me now thanks to this thread,
Great!
So, what is the difference between using a smaller sensor, using a smaller area of the sensor (crop mode), and cropping the sensor area in post?
The answer is nothing... they all cause the same effects and for the same reasons.

(the assumption here is that all three are used to generate the same output composition/image)
as the DR has already been recorded and can't be changed.
Just like depth of focus vs depth of field... The depth of focus is fixed (and measurable) at the sensor, but the resulting depth of field is image viewing dependent. The pixel level DR (noise limited) is also fixed, but the DR (noise) being discussed is image viewing dependent... it is essentially a measurement of the resulting image, not the sensor.
 
Ahh, I think I might have it. So by cropping the image you're enlarging the noise, making it more apparent and thus reducing DR?
 
Yes. As dark regions become more noisy they become "less dark" (speckled with lighter values)... i.e. the minimum level increases and thus the difference between min/max (the DR) is reduced. The effect is most apparent in the noise/tones/details very near to black. It can be misleading if there are regions that are clipped black; but those regions are actually outside of the camera's DR capability/rating.

Edit:
Also note that when we are talking about comparing like for like, the same output images...
In order to crop a FF image to the same composition as an image taken with a smaller sensor, the relevant area would have to be recorded at the same physical size. Which means the light/area was concentrated and reduced just as if a smaller sensor was used. So the noise is greater and DR reduced compared to what the full sensor would be capable of when recording the same image using the full sensor area.
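The enlargement/reduction effect on visible noise is easy to demonstrate: averaging 2x2 pixel blocks (i.e. viewing smaller) roughly halves the noise standard deviation, so the crop, which needs more enlargement at the same output size, shows more. A sketch with pure synthetic noise:

```python
import numpy as np

rng = np.random.default_rng(7)

noise = rng.normal(0.0, 1.0, size=(512, 512))             # pixel-level noise
binned = noise.reshape(256, 2, 256, 2).mean(axis=(1, 3))  # 2x2 averaging

print(f"pixel-level noise std: {noise.std():.3f}")   # ~1.0
print(f"2x2-binned noise std : {binned.std():.3f}")  # ~0.5
```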
 