Variable ISO sensor plane

I think he was probably referencing the first part - breaking an image into zones and exposing them differently (digital gain)... I think some cameras already do this for the automated HDR feature (rather than combining multiple actual exposures).


IDK why you would think that; pixels are already individually switched/read out/processed... they have to remain separate "data packages." And the processing capabilities are already amazing... Nikon's matrix metering scene recognition system processes the data through a database of over 30,000 images when determining the optimal exposure.

I think we will see developments in this direction, probably evolving out of smartphone tech, but at the heart of it is the fundamental problem of getting a higher-quality signal out of shadow areas: either by capturing more light (basically multiple HDR captures of some form) or by improving high-ISO/brightening performance (e.g. a post-capture shadow lift with an ISO-invariant sensor), as we do now in post-processing. But it would take some fairly amazing AI-type algorithm to distinguish reliably between dark tones that need to stay dark and those that might need brightening, and they would obviously not follow any kind of matrix metering grid.
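To put a number on that post-capture lift: a minimal sketch in Python (all names mine), assuming linear raw data and a reasonably ISO-invariant sensor, where multiplying in post is near-equivalent to analogue gain at capture.

```python
import numpy as np

def lift_shadows(raw_linear: np.ndarray, stops: float) -> np.ndarray:
    """Brighten linear raw data in post by a number of stops.

    On an ISO-invariant sensor this multiplication gives (nearly) the
    same result as raising ISO at capture time, because analogue gain
    no longer buys a meaningful read-noise advantage.
    """
    gain = 2.0 ** stops                       # +3 stops -> x8
    return np.clip(raw_linear * gain, 0.0, 1.0)

# e.g. lift a deliberately underexposed frame by 3 stops in post:
# bright = lift_shadows(raw, stops=3.0)
```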

I don't see this happening with variable ISO at pixel level any time soon though. Reading individual pixels post-capture is one thing, but monitoring 30 million of them in real time and switching them off individually seems like a bit of a stretch (I may be wrong on that) even if it was desirable. And we'd still be left with the 'long exposure' problem. Sledgehammer to crack a nut.
 
I think we will see developments in this direction, probably evolving out of smartphone tech, but at the heart of it is the fundamental problem of getting a higher-quality signal out of shadow areas: either by capturing more light (basically multiple HDR captures of some form) or by improving high-ISO/brightening performance (e.g. a post-capture shadow lift with an ISO-invariant sensor), as we do now in post-processing. But it would take some fairly amazing AI-type algorithm to distinguish reliably between dark tones that need to stay dark and those that might need brightening, and they would obviously not follow any kind of matrix metering grid.

I don't see this happening with variable ISO at pixel level any time soon though. Reading individual pixels post-capture is one thing, but monitoring 30 million of them in real time and switching them off individually seems like a bit of a stretch (I may be wrong on that) even if it was desirable. And we'd still be left with the 'long exposure' problem. Sledgehammer to crack a nut.

Right on cue, this article from DPReview explores how the new Google smartphone uses its multiple cameras and computational power to defy the laws of small-sensor physics.
https://www.dpreview.com/articles/7...s-the-boundaries-of-computational-photography
 
Thanks hoppy - good article - there are some clever people out there! (It does stress the Pixel 3 has only one lens, not multiple, I think?)

Computational software is going to be big in the photographic world I think - probably a bigger game changer than mirrorless?
 
Thanks hoppy - good article - there are some clever people out there! (It does stress the Pixel 3 has only one lens, not multiple, I think?)

Computational software is going to be big in the photographic world I think - probably a bigger game changer than mirrorless?

The Google Pixel 3 has one rear-facing camera and two front-facing; other smartphones have more, and there appear to be different ways of approaching this, with different pros and cons.

Basically the Google device tends to use one lens/camera but combines multiple images - up to 15 I think. In terms of sheer light gathering, that's like having 15 cameras. I think it says in the article this equates to an APS-C size sensor in terms of physical sensor area available. Other devices use multiple different camera/lens modules working simultaneously, both to collect more light and to drive data calculation for the software to measure distance for background blurring etc. All this stuff is currently at a very early stage of development.
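For what it's worth, the back-of-envelope arithmetic holds up, assuming the commonly reported 1/2.55" sensor (roughly 5.76 x 4.29 mm) for the Pixel 3 and a typical 23.5 x 15.6 mm APS-C chip:

```python
# Rough check of the "15 frames ~ APS-C light gathering" claim.
# Both sensor sizes are assumptions, not figures from the article.
pixel3_area = 5.76 * 4.29        # ~24.7 mm^2 (1/2.55" type sensor)
apsc_area   = 23.5 * 15.6        # ~366.6 mm^2
frames      = 15

effective_area = pixel3_area * frames   # ~370.7 mm^2 of total exposure
print(effective_area, apsc_area)        # comparable, as the article suggests
```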
 
I don't see this happening with variable ISO at pixel level any time soon though. Reading individual pixels post-capture is one thing, but monitoring 30 million of them in real time and switching them off individually seems like a bit of a stretch (I may be wrong on that) even if it was desirable. And we'd still be left with the 'long exposure' problem. Sledgehammer to crack a nut.
I agree... I don't think the switching is really the problem; it would be the randomization of readout, which would probably require some kind of encoding for each packet/pixel to identify where it came from. I think something like this would be more feasible with a global electronic shutter where each pixel is offloaded to storage until sequential readout. But as you said, sledgehammer...
But it would take some fairly amazing AI-type algorithm to distinguish reliably between dark tones that need to stay dark and those that might need brightening, and they would obviously not follow any kind of matrix metering grid.
It's already being done... how does the camera decide which parts to lighten/include when doing automated HDR?

This got me to thinking about the potential benefit of sensor/pixel saturation that I had envisioned...
A few years ago I filed a patent for a sensor made from layers of transparent photodiodes (e.g. graphene) separated by layers of dichroic filters. The idea being that you would get much more accurate/true spectral data. The camera would record each selected wavelength separately, which would also allow you to use/reject them individually in the output (e.g. IR/UV/yellow).
The problem is that "transparent" also means the material doesn't interact with the light much at all (low electron generation), and dichroic filters are only about 90-95% efficient in transmitting the light that is not filtered/rejected. By the time the light got to the bottom of the stack there could be (currently is) insufficient electron production to measure. But as long as there is at least one measurable electron there is a solution... math... If the losses are known (e.g. 10% per layer) they can be added back into the measured totals.
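A sketch of the 'math' part, assuming the per-layer transmission is known and constant (the 10% loss figure, like everything in this snippet, is illustrative):

```python
def compensate_layer_losses(measured: float, layer_index: int,
                            transmission: float = 0.90) -> float:
    """Scale a deep layer's measured signal back up by the known
    losses in the layers above it.

    transmission=0.90 assumes each diode/filter layer passes ~90% of
    the light it isn't deliberately rejecting. layer_index=0 is the
    top layer (nothing above it to lose light in).
    """
    return measured / (transmission ** layer_index)

# e.g. 40 electrons read from the 4th layer down:
# compensate_layer_losses(40, layer_index=4)  -> ~61 electrons estimated
```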

The point of all this is that we don't actually need more photons/electrons/data; all that is really needed is high accuracy... once you have that you can do anything you want with the data mathematically. This is why ISO-invariant sensors work so well (lower noise/higher accuracy). When you realize that digital imaging is 99% mathematics it changes a lot of things... I forget that sometimes...
 
That's interesting, but surely you could achieve a similar effect (though incomplete with regard to UV and IR) using a 3-chip colour camera to record the RGB data separately. The 3-chip camera will also give better spatial accuracy for any one 'pixel' than something with a Bayer filter slapped in front of it.

The last project I worked on with a 3CCD camera determined the colour temperature of each pixel using the RGB values, by reverse engineering the colour-temperature-to-RGB formula. It's being used for checking that the right colour-temperature LEDs have been inserted into lighting assemblies. It didn't have to be super accurate, merely differentiate the colour temperatures of the different LEDs on the production line.
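For anyone curious, a standard way to do this is to convert RGB to CIE xy chromaticity and apply McCamy's approximation; a sketch only (the project's actual reverse-engineered formula may well have differed):

```python
def rgb_to_cct(r: float, g: float, b: float) -> float:
    """Estimate correlated colour temperature (Kelvin) from linear sRGB."""
    # linear sRGB -> CIE XYZ (D65 white point)
    x_ = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y_ = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z_ = 0.0193 * r + 0.1192 * g + 0.9505 * b
    total = x_ + y_ + z_
    x, y = x_ / total, y_ / total             # chromaticity coordinates
    # McCamy's cubic approximation to CCT
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# rgb_to_cct(1.0, 1.0, 1.0) -> ~6500 K (the D65 white point)
```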
 
That's interesting, but surely you could achieve a similar effect (though incomplete with regard to UV and IR) using a 3-chip colour camera to record the RGB data separately. The 3-chip camera will also give better spatial accuracy for any one 'pixel' than something with a Bayer filter slapped in front of it.
Yes, kind of. What you describe is essentially the Foveon sensor technology.
The problem (as I see it) is that "RGB" isn't truly R-G-B... it is red/green/blue-centric with varying amounts of overlap between them. And wavelengths like yellow/orange/cyan etc. have to be discerned mathematically/theoretically. Even the true RGB colors have to be discerned/separated theoretically/mathematically. This is why different cameras have different issues reproducing certain colors. It also contributes to the color noise issues all sensors exhibit with certain scenes/colors (e.g. blue skies).

In fact, the cones in the human eye are similarly "RGB centric" (usually described as long/medium/short wavelength), so it works well enough for the most part.

EDIT: here is a link that shows the response curves for several different Bayer RGB CMOS sensors. When you see how convoluted the response curves are it's kind of amazing that they can generate an image as well as they do...
 
Right on cue, this article from DPReview explores how the new Google smartphone uses its multiple cameras and computational power to defy the laws of small-sensor physics.
https://www.dpreview.com/articles/7...s-the-boundaries-of-computational-photography

TBH that article only proves that you CAN'T defy the laws of physics since the photos are pretty awful, especially the "Super Res Zoom" of the girl.

The normal picture of the girl is better and all the pictures on there are perfectly acceptable as small photos, or to be kept on a smartphone, which is where most will probably stay, but in photographic terms they are all terrible.

Even the one of the lake shows very bad noise in the dark areas - and that is at ISO 60!

In fact they all show very high noise levels due to the extremely small sensor elements - which is exactly what you would expect.

Until there is a quantum breakthrough in sensor design, and that is only likely to come about with completely NEW materials, not silicon, the laws of sensor physics will remain the same no matter what computational workarounds are used.
 
A use for variable ISO would be in landscapes. The top section of the picture is low ISO, graduating to the bottom section at higher ISO. It acts as a built-in graduated filter function without using too much processing power. You might need to simulate lots of in-between ISO values to make the transition smooth.
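As a sketch, the simplest version is just a per-row digital gain ramp on the linear raw data (the ISO figures are only examples):

```python
import numpy as np

def graduated_gain(raw: np.ndarray, iso_top: float = 100.0,
                   iso_bottom: float = 3200.0) -> np.ndarray:
    """Apply a smooth top-to-bottom gain ramp to a 2D linear raw frame,
    like a graduated filter in reverse (the bottom gets lifted).

    Interpolating the gain per row provides the 'lots of in-between
    ISO values' a smooth transition needs.
    """
    rows = raw.shape[0]
    gains = np.linspace(1.0, iso_bottom / iso_top, rows)  # 1x to 32x
    return raw * gains[:, np.newaxis]

# frame = np.random.rand(4000, 6000) * 0.01   # dark linear raw
# graduated_gain(frame)                        # bottom lifted ~5 stops
```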
 
A use for variable ISO would be in landscapes. The top section of the picture is low ISO, graduating to the bottom section at higher ISO. It acts as a built-in graduated filter function without using too much processing power. You might need to simulate lots of in-between ISO values to make the transition smooth.

Who would make this choice, and how, in practice? Would it be better or worse than a grad filter or a PS adjustment with blends/masks?
 
Better than a grad filter, which will have a straight edge between light and dark parts while reality does not. I do not expect to see this actually happen, but it could be very useful.
 
Better than a grad filter, which will have a straight edge between light and dark parts while reality does not. I do not expect to see this actually happen, but it could be very useful.

Depends on the subject whether it will be better than a grad filter, but I come back to the question. Standing on a mountain composing an image, how in practice do you allocate individual ISOs to individual pixels to make this work?

Not asking whether it will happen, I just can't get my head around how it would work.
 
Depends on the subject whether it will be better than a grad filter, but I come back to the question. Standing on a mountain composing an image, how in practice do you allocate individual ISOs to individual pixels to make this work?

Not asking whether it will happen, I just can't get my head around how it would work.
How does a camera make an automatic HDR? I think it's probably about as simple as the camera determining that nothing (or only a small %) will be recorded as clipped to black (0) or white (255), and that the vast majority of the pixels record as mid-tones. The reality is that a lot of the automated HDR/extended-DR results are not/will not be what you might ideally want, and you would get better results doing it yourself manually. But they are often better than the alternative...
 
Standing on a mountain composing an image, how in practice do you allocate individual ISOs to individual pixels to make this work?
This is why we employ software engineers rather than write our own demosaicing algorithms or photo-editing software. I do not know how, nor do I need to. The experts should know.
 
You could do it as simply as: the top of the picture is ISO 100 while the bottom is 3200 (or whatever you set it as), and everything else is an ISO in proportion to its position.

More advanced would be for the camera to allocate ISO based on an average in a region (like your Sony with a zillion light-level points across the whole screen). The camera knows what ISO is in each region and what the light level for each pixel is based on this ISO, so it can add the extra boost from the ISO to the pixels across the region. So a region at ISO 3200 has 14 bits; a region at ISO 100 also has 14 bits, plus 5 relative to the ISO 3200 region (the camera has to do it backwards to make the sensors work in the right ISO range). If this makes sense, and it does to me, you get a huge increase in dynamic range by whatever number of bits you can adjust the ISO by. Where was that patent form...
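Roughly, in code (a sketch; the per-pixel ISO-map representation is my assumption):

```python
import numpy as np

def merge_region_isos(raw: np.ndarray, iso_map: np.ndarray,
                      base_iso: float = 100.0) -> np.ndarray:
    """Rescale per-region captures onto one common linear scale.

    raw:     14-bit linear values, each region exposed at its own ISO
    iso_map: same shape as raw, giving the ISO each pixel was captured at

    A region shot at ISO 3200 carries 5 stops of analogue gain relative
    to ISO 100, so its values are divided back down by 32. Holding both
    ends of the result then takes ~19 bits (14 + 5), which is where the
    extra dynamic range comes from.
    """
    relative_gain = iso_map / base_iso        # e.g. 32 for ISO 3200
    return raw.astype(np.float64) / relative_gain
```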
 
This is why we employ software engineers rather than write our own demosaicing algorithms or photo-editing software. I do not know how, nor do I need to. The experts should know.

I'm not asking you to write the firmware; I'm asking you as a photographer what you would expect to have to do, when getting ready to take a shot, to allocate individual ISOs to individual pixels.

Do you see it as a magic-button decision by the camera firmware, as an enhanced auto mode, or do you see there being some artistic choices by the photographer? If photographer involvement, what do you think you would need to do to allocate individual ISOs across 29, 24, 36, 50 megapixels or whatever number we are moving to?
 
You could do it as simply as: the top of the picture is ISO 100 while the bottom is 3200 (or whatever you set it as), and everything else is an ISO in proportion to its position.

More advanced would be for the camera to allocate ISO based on an average in a region (like your Sony with a zillion light-level points across the whole screen). The camera knows what ISO is in each region and what the light level for each pixel is based on this ISO, so it can add the extra boost from the ISO to the pixels across the region. So a region at ISO 3200 has 14 bits; a region at ISO 100 also has 14 bits, plus 5 relative to the ISO 3200 region (the camera has to do it backwards to make the sensors work in the right ISO range). If this makes sense, and it does to me, you get a huge increase in dynamic range by whatever number of bits you can adjust the ISO by. Where was that patent form...

So it's an electronic grad, but with noisier boosted ISO for the shadow areas, or an automatic camera mode? Limited photographer choice beyond on or off?
 
How does a camera make an automatic HDR? I think it's probably about as simple as the camera determining that nothing (or only a small %) will be recorded as clipped to black (0) or white (255), and that the vast majority of the pixels record as mid-tones. The reality is that a lot of the automated HDR/extended-DR results are not/will not be what you might ideally want, and you would get better results doing it yourself manually. But they are often better than the alternative...

I prefer the alternative of bracketing and blending the shot myself, which generally produces more satisfactory results than letting the camera choose what's important.
 
Either would work for the first paragraph. You could do it through a menu or have the camera look at the light levels on the scene.

For the second, it would have to be automatic as no one would want to set that up on a menu.
 
I'm not asking you to write the firmware; I'm asking you as a photographer what you would expect to have to do, when getting ready to take a shot, to allocate individual ISOs to individual pixels.

Do you see it as a magic-button decision by the camera firmware, as an enhanced auto mode, or do you see there being some artistic choices by the photographer? If photographer involvement, what do you think you would need to do to allocate individual ISOs across 29, 24, 36, 50 megapixels or whatever number we are moving to?
I would expect to select "variable ISO" from a menu, with sub-menus for landscapes, interiors, etc. and other options for narrow, medium or wide variation. How else?
 
You'd also need variable noise reduction to go with this. That way the noise reduction could be applied to areas with higher ISO. The darker regions would have less detail, but the brighter ones, at low ISO, would need less noise reduction. Fortunately there could be a map in the raw file saying where the ISO boost was. For 100*100 regions and, say, 8 levels of ISO you would increase file size by at least 10K to include an ISO map. That could be worth it to get 8 extra bits of dynamic range on a single picture and lower noise on the bright bits. The part that won't work well is where you have big changes in brightness in the middle of a region - just like a normal camera, but much more localised.
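Something like this, as a sketch (a plain Gaussian blur stands in for real noise reduction, and the per-pixel ISO map is assumed to have been expanded from the region map described above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def variable_nr(image: np.ndarray, iso_map: np.ndarray,
                base_iso: float = 100.0, sigma: float = 2.0) -> np.ndarray:
    """Blend between the image and a smoothed copy, weighted by how
    much ISO boost each region received: high-ISO regions get strong
    noise reduction, base-ISO regions are left almost untouched.
    """
    smoothed = gaussian_filter(image, sigma=sigma)
    stops = np.log2(iso_map / base_iso)        # 0 at base ISO
    weight = stops / max(stops.max(), 1e-9)    # 0..1 across the frame
    return (1.0 - weight) * image + weight * smoothed
```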
 
I don't see how any of this would work. Imagine a two-pixel sensor taking a picture of a 1x2 image, one half black, the other white. One pixel sees an entirely black image, the other pure white. Can someone please explain how you'd obtain a correctly exposed image using variable ISO on each pixel? It seems like Star Trek science to me.
 
I don't see how any of this would work. Imagine a two-pixel sensor taking a picture of a 1x2 image, one half black, the other white. One pixel sees an entirely black image, the other pure white. Can someone please explain how you'd obtain a correctly exposed image using variable ISO on each pixel? It seems like Star Trek science to me.
The idea (automated HDR) is that data is recorded at every point/area. In this scenario it might be simply that instead of writing the two pixels as 0/255 they are instead written as 10/245 (digital gain; using analog gain is unlikely). If it's using two images, it would take the lighter areas from the dark image and the darker areas from the lighter image.
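A minimal sketch of the two-image case (not any manufacturer's actual algorithm; the thresholds are arbitrary):

```python
import numpy as np

def blend_two_exposures(dark: np.ndarray, bright: np.ndarray,
                        stops_apart: float = 3.0) -> np.ndarray:
    """Merge two linear frames: highlights from the darker frame
    (nothing clipped there), shadows from the brighter frame (better
    signal-to-noise), with a smooth hand-off in between.
    """
    dark_scaled = dark * (2.0 ** stops_apart)     # same linear scale
    # trust the bright frame except where it approaches clipping
    w = np.clip((bright - 0.75) / 0.2, 0.0, 1.0)  # 1 near clipping
    return w * dark_scaled + (1.0 - w) * bright
```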
 
While I would love this feature, for landscapes I don't think this is the best way to go.

A 3D RAW file matrix (x vs y vs time - think of it as a special RAW video file!) would have far more processing latitude whilst retaining base-ISO noise levels. Effectively you could let every pixel fill up fully and work the exposure back from time, and do all the fancy editing work from there. Even if things moved, you could say 'I want the first 30% of the exposure' or 'the last 30%', and choose whether or not to boost it.
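A sketch of what 'working the exposure back from time' could look like, assuming the 3D matrix is a stack of short linear sub-exposures:

```python
import numpy as np

def exposure_slice(stack: np.ndarray, start: float, stop: float) -> np.ndarray:
    """Re-expose from a (time, y, x) stack of linear sub-frames.

    start/stop are fractions of the total capture, so (0.0, 0.3) is
    'the first 30% of the exposure' and (0.7, 1.0) the last 30%.
    """
    n = stack.shape[0]
    i0 = int(start * n)
    i1 = max(int(stop * n), i0 + 1)
    return stack[i0:i1].sum(axis=0)   # summing sub-frames = exposing longer

# stack = np.random.rand(100, 4000, 6000) * 1e-3   # 100 sub-exposures
# first_third = exposure_slice(stack, 0.0, 0.3)
```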
 
Can someone please explain how you'd obtain a correctly exposed image using variable ISO on each pixel? It seems like Star Trek science to me.
There is no such thing as a "correctly exposed" image. There are either images that are exposed to the photographer's vision or those that are not.
 
There is no such thing as a "correctly exposed" image. There are either images that are exposed to the photographer's vision or those that are not.

It is a great argument but I think there is a point where you have to draw a line between creative exposure and crap exposure.

Technically correct exposure is certainly a quantifiable measure, and you can bet all those camera manufacturers put some real science into their metering systems to help get there. Correct exposure should avoid any loss of data to clipped highlights and shadows while allowing for maximum adjustment latitude. That normally means a histogram slap bang in the middle, or ideally stretching from edge to edge. From there you can do whatever you fancy in post.
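The clipping part, at least, is trivially quantifiable; a sketch:

```python
import numpy as np

def clipped_fractions(img8: np.ndarray):
    """Fraction of pixels lost to black (0) or white (255) clipping in
    an 8-bit image - one measurable component of 'technically correct'
    exposure."""
    total = img8.size
    return (img8 == 0).sum() / total, (img8 == 255).sum() / total
```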
 
It is a great argument but I think there is a point where you have to draw a line between creative exposure and crap exposure.

Technically correct exposure is certainly a quantifiable measure, and you can bet all those camera manufacturers put some real science into their metering systems to help get there. Correct exposure should avoid any loss of data to clipped highlights and shadows while allowing for maximum adjustment latitude. That normally means a histogram slap bang in the middle, or ideally stretching from edge to edge. From there you can do whatever you fancy in post.
I spend a lot of my time photographing mediaeval churches. If I am photographing a stained glass window and end up with a histogram slap bang in the middle I would lose all the subtle variations in the colours of the glass. I want a histogram seriously bunched up to the left. If I am photographing the carving in the stone around the stained glass, that slap bang in the centre histogram will lose me the grain structure of the stone. I want a histogram seriously bunched up to the right. With a variable ISO system, I could photograph both the glass and the stone with one exposure which I would rather like. Still would not have a histogram slap bang in the centre.
 
I spend a lot of my time photographing mediaeval churches. If I am photographing a stained glass window and end up with a histogram slap bang in the middle I would lose all the subtle variations in the colours of the glass. I want a histogram seriously bunched up to the left. If I am photographing the carving in the stone around the stained glass, that slap bang in the centre histogram will lose me the grain structure of the stone. I want a histogram seriously bunched up to the right. With a variable ISO system, I could photograph both the glass and the stone with one exposure which I would rather like. Still would not have a histogram slap bang in the centre.

I hear you, and it is a very tough subject, commonly requiring two or more different exposures and advanced blending techniques (or bringing a truckload of lights in to light the interior - given the choice, that would be my favourite method). Ideally what you want is all the exposure contained well within the histogram area, and presuming that were possible with a next-gen camera, that would be your correct single exposure from which you'd do all the subsequent editing. My concern with variable ISO is that you would have the windows at 100 and the bricks likely at 6400 or higher, so still plenty of noise - probably inferior to your average HDR merge. Variable exposure time would take care of it better. There is no reason not to be able to do both methods.
 
Variable exposure time would take care of it better. There is no reason not to be able to do both methods.
Obviously not with a mechanical shutter, but with a virtual 'shutter' it would be relatively simple to implement. Actually, a better idea than variable ISO. Or both together would give us major control where we currently have almost none.
 
You could do it as simply as: the top of the picture is ISO 100 while the bottom is 3200 (or whatever you set it as), and everything else is an ISO in proportion to its position.

More advanced would be for the camera to allocate ISO based on an average in a region (like your Sony with a zillion light-level points across the whole screen). The camera knows what ISO is in each region and what the light level for each pixel is based on this ISO, so it can add the extra boost from the ISO to the pixels across the region. So a region at ISO 3200 has 14 bits; a region at ISO 100 also has 14 bits, plus 5 relative to the ISO 3200 region (the camera has to do it backwards to make the sensors work in the right ISO range). If this makes sense, and it does to me, you get a huge increase in dynamic range by whatever number of bits you can adjust the ISO by. Where was that patent form...
Makes sense. As far as I understand it, that's how the human eye works. The dynamic range for adjacent pixels (small groups of rod or cone cells) is around 10:1, or a little more than 3 stops, yet the DR of the whole eye is much greater than that.
 
As I see this; the best comparison is with our mixing of flash and ambient. We as photographers make an artistic decision which then drives technical exposure changes for different elements within the frame.

My cameras can all ‘get a shot’ in auto with flash, but it’s rarely the ‘shot’ I would want.

The camera can relatively easily spot light and dark areas and guess what I might want to do, but it can’t ‘know’ what I’d want to do as an artistic decision.
 
As I see this; the best comparison is with our mixing of flash and ambient. We as photographers make an artistic decision which then drives technical exposure changes for different elements within the frame.

My cameras can all ‘get a shot’ in auto with flash, but it’s rarely the ‘shot’ I would want.

The camera can relatively easily spot light and dark areas and guess what I might want to do, but it can’t ‘know’ what I’d want to do as an artistic decision.

Maybe in the not-too-distant future the camera software will learn from previous shots and your input, and you'll get the artistic or realistic shot you want straight out of the camera more often; maybe we'll have presets to choose from for different outputs. I think it could be done, possibly relatively easily. Or maybe we'll choose the look after the shot is taken, like with light-field-type cameras. Doubtless there's already something like this in mobile phone tech.
 
Maybe in the not-too-distant future the camera software will learn from previous shots and your input, and you'll get the artistic or realistic shot you want straight out of the camera more often; maybe we'll have presets to choose from for different outputs. I think it could be done, possibly relatively easily. Or maybe we'll choose the look after the shot is taken, like with light-field-type cameras. Doubtless there's already something like this in mobile phone tech.
I did consider this, but it’s not really that uniform.
In broadly similar light levels I might choose to make flash a ‘fill’ or a key light.

This can even change shot to shot in some scenarios.
 
I don't think it would require "artistic decisions". It is just dynamic range. The sensor may be able to see 12 bits, but that can be translated to 20 bits elsewhere in the hardware.
 