Resolution

Me too. Like many such threads, I don't fully understand all the answers but, like women, being fairly unfathomable doesn't reduce my interest in the subject!!!

Hahahaha! So true! It also won't make any difference at all as to what camera I get but still, it's good to have an idea of how what you're using functions, if only to show off about it :P
 
I did anticipate a fairly technical reply and while I've not contributed much to the thread of late, I am reading it with great interest :) Even if it's making my brain hurt at times!
 
Let me try to simplify... It's all about "matching dots." You have a scene with a certain amount of dots in it (details), which is then projected as dots (points of light) by the lens, which is collected by dots (pixels) on the sensor... finally to be output as dots (screen pixels/printer DPI).
When "the dots" are the same size at every stage you have maximum detail. If at any point the number of dots is smaller/size of dots larger, then that will be the limiting factor in terms of detail.
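The "weakest link" idea can be sketched in a few lines of Python; the megadot counts here are made-up illustrative numbers, not measurements:

```python
# System detail is capped by the weakest stage in the dot chain.
# All numbers are illustrative (in "megadots"), not measurements.
stages = {
    "scene detail": 100.0,    # most scenes exceed what we can record
    "lens (f/11, FF)": 16.0,  # diffraction-limited projection
    "sensor": 24.0,           # a 24MP sensor
    "display (4K)": 8.3,      # ~3840 x 2160 screen pixels
}
limit = min(stages, key=stages.get)
print(f"Limiting stage: {limit} ({stages[limit]} Mdots)")  # display (4K)
```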

* If you take a picture of a white wall w/o texture, there will be no detail because it doesn't exist...
* A lens projects larger dots at smaller apertures**. You need to use a lens that is sharp at a wide aperture and set to that aperture in order to resolve fine details/fit on small pixels...
i.e. at f/11 a (non-existent) perfect lens projects dots of ~15 micron diameter (yellow/green wavelength): a FF sensor can resolve ~16MP, an APS-C sensor ~7MP, and a M4/3 sensor ~4MP. But at f/2 the projected Airy disks are (theoretically) only ~2.7 microns in diameter, and a FF sensor could resolve ~470MP (1.4 micron pixels).
* the sensor always has its resolution (i.e. 24MP). The only question is how much detail it is actually resolving as projected by the scene/lens (and also affected by AA filters, Bayer array/demosaicing, etc).
* the display resolution will (potentially) limit the viewable detail, e.g. a 100PPI display, an inkjet print with "dot bleed," or the human eye's limit of ~12MP*** (for critical sharpness).
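The numbers in these bullets can be reproduced from the standard Airy disk formula (diameter ≈ 2.44 × wavelength × f-number) plus the "2 pixels per Airy disk" assumption from the footnotes below. A back-of-envelope sketch, using approximate sensor dimensions:

```python
def airy_disk_um(f_number, wavelength_um=0.55):
    """Airy disk diameter (to the first minimum) for a perfect lens, in microns."""
    return 2.44 * wavelength_um * f_number

def max_megapixels(width_mm, height_mm, f_number, pixels_per_disk=2):
    """Resolvable MP assuming `pixels_per_disk` pixels across each Airy disk."""
    pitch_um = airy_disk_um(f_number) / pixels_per_disk
    return (width_mm * 1000 / pitch_um) * (height_mm * 1000 / pitch_um) / 1e6

print(round(airy_disk_um(11), 1))             # ~14.8 um dots at f/11
print(round(airy_disk_um(2), 1))              # ~2.7 um dots at f/2
print(round(max_megapixels(36, 24, 11)))      # FF at f/11: ~16MP
print(round(max_megapixels(23.6, 15.7, 11)))  # APS-C at f/11: ~7MP
print(round(max_megapixels(17.3, 13, 11)))    # M4/3 at f/11: ~4MP
print(round(max_megapixels(36, 24, 2)))       # FF at f/2: ~480MP (post says ~470M)
```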


Technical clarifications/unsimplified:
**a less than perfect lens (i.e. all of them) will project larger dots at maximum/larger apertures due to optical errors. The "sweet spot" is the point of stopping down to correct for optical errors without adding diffraction (i.e. the smallest dots the lens is capable of). Typically ~ 2 stops from wide open.
**The yellow/green wavelength is the most important for digital; you get (roughly) 2x as many blue dots and half as many red ones, since shorter wavelengths produce smaller Airy disks.
**The maximum resolution noted is based on 2 pixels per Airy disk, which is optimal for a Bayer array w/ AA filter.
*** for undetectable dots the requirement is ~39MP for a 3:2 print subtending a 45° diagonal; if you consider scanning a 120° FOV in all directions, then the MP required to be *undetectable* by the human eye at any distance is ~580... something no digital system is capable of (at the moment/AFAIK).
**** More/smaller pixels each gather less light, resulting in more (relative) signal noise, which can obscure detail/resolution.
***** The Bayer filtering restricts light/colour per pixel. More pixels (i.e. smaller ones) are better in order to compensate.

To put this in a "workable example":
Most scenes contain more detail than we can record. In order to record 36MP on a D800/E you need a lens that is without optical errors at/by f/5.6, used at/wider than f/5.6 (it's actually closer to f/7.1, but I don't have a specific/technical reference for that). And you need to be at/near minimum ISO with good light. This will provide maximum resolution/detail (2 pixels per Airy disk) with minimum signal/photon noise and maximum colour information. And you need a fast enough shutter speed (or a tripod) to prevent any blurring of the image across the tiny pixels.
If any of that is not happening, then you are not getting the full 36MP/DR/colour/tonality the camera is potentially capable of. If all of that is happening and the image is then printed at 3:2 aspect, the pixels/dots will be virtually undetectable, but the print would need to be made at ~700ppi (actual) on very high quality paper.
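As a rough cross-check of the ~f/7.1 figure and the ~700ppi print claim (using approximate D800 dimensions of 35.9 x 24mm and 7360 x 4912 pixels, and the same 0.55 micron wavelength as above):

```python
# Aperture at which the Airy disk spans 2 pixels on a D800 (approximate figures).
pitch_um = 35.9 * 1000 / 7360        # ~4.88 um pixel pitch
target_disk_um = 2 * pitch_um        # 2 pixels per Airy disk
f_number = target_disk_um / (2.44 * 0.55)
print(round(f_number, 1))            # ~7.3, close to the quoted ~f/7.1

# Print size if every pixel is laid down at ~700ppi:
print(round(7360 / 700, 1), "x", round(4912 / 700, 1), "inches")  # ~10.5 x 7.0
```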
 
Teknobollix measurbation?
 
What I'm surprised at, after having read all this (and researching further), is that we still use the Bayer array, which can only capture one colour of light per pixel, when Foveon can capture more and therefore produce better images. Why aren't the big manufacturers using Foveon sensors instead of Bayer? This is probably for a whole other thread tbh...!
 

Possibly because Sigma now own the patent?
 
Oh really? That would make sense then wouldn't it!
 
Me too. Like many such threads, I don't fully understand all the answers but, like women, being fairly unfathomable doesn't reduce my interest in the subject!!!

If you needed to understand women to be interested in them, the human race would have died out millions of years ago!


Steve.
 
One way to solve the problem, and one way only.

Get both cameras (somehow) and shoot the same thing, print it and you decide.

Better is perceptual, and unless it's actually done, it's theoretical.
 
Possibly because Sigma now own the patent?

Yes, that probably has something to do with Foveon not being more popular, and Sigma sensors are certainly very high resolution, but on the other hand we don't seem to be short of pixels with Bayer sensors, and a fundamental drawback of Foveon is poor high ISO performance because of the way the colour filters are layered. Not much light gets through to the bottom of the stack. There's a lot more to it than just resolution though, with things like power consumption, heat build-up, suitability for video, electronic shutter functions, phase-detect AF potential, cost, etc etc. I can't help thinking that if Foveon was fundamentally better as an all-round bet then the bigger players would have found a way round it or made Sigma an offer they couldn't refuse.

Edit: there was a story recently that Canon was interested in Sigma, possibly for Foveon, but even if true, nothing seems to have come of it.
 

I was going to ask about a thought I had on the Foveon sensor in my post above. When I have looked at the Sigma DP threads previously, the biggest issues with the cameras seem to be battery life, high ISO and slow AF, but some of the capabilities of the sensor look very good, and I was close to buying when the shops were selling them off cheaply in readiness for the Quattro release.

So is the Foveon sensor responsible for some of the DP shortcomings?
 

Foveon is responsible for some of those shortcomings for sure. The Sigma Quattro series has a reconfigured Foveon sensor, still with amazing resolution but also poor high ISO performance.
 
Don't forget that Fuji don't use the Bayer array either. Their arrangement doesn't seem to suffer the same problems as the Foveon sensor in terms of AF speed and high ISO capabilities.
 
It still uses an RGB "absorptive" filter system, and that's part of "the problem." The other part of the problem is the very coarse RGB spectral sensitivity and the "fuzzy math" required to put it back together. ("RGB" is an oversimplification... a blue sensor site is sensitive to a "blue centric" spectrum.)

I actually have patent pending on a sensor design using layers of "transparent" photo-reactive material (i.e. Graphene) and Dichroic filters for more accurate color separation with much less light loss.
 
Don't forget that Fuji don't use the Bayer array either. Their arrangement doesn't seem to suffer the same problems as the Foveon sensor in terms of AF speed and high ISO capabilities.

The Fuji X-Trans sensor is not without its issues either, though I think Adobe has got the early post-processing issues pretty much sorted now, and it doesn't have the inherent resolution of Foveon. Fuji has stopped using it in some of its cameras.

The advantage of X-Trans is really only resistance to moire, so it can run without an AA filter for improved sharpness. But if you have a higher resolution sensor with more pixels to start with, moire is much less likely anyway.
 

Fuji do not use it on the less expensive X models in the ranges.
However, they do use it on the X30, which includes nearly all their high end technology but in a much smaller sensor.
Fuji have always had rather original specification sensors.
Foveon-type layered sensors will always be limited to low-ISO sensitivity, until the invention of colourless filters and light receptors, which is perhaps never.
 
until the invention of colourless filters and light receptors, which is perhaps never
Actually, Graphene (and similar new materials) is a colorless/transparent photo-reactive material. And Dichroic filters pass about 95% of light when it is passed thru the filter backwards (probably more when made at sensor thicknesses).

[Image: Graphene-Sensor-3.jpg (diagram of the layered sensor design)]

This is the basics of my design. All of the light (95+%) passes thru the top filter backwards and then a specific spectrum is rejected/trapped. The remaining light passes thru another filter backwards and another specific spectrum is rejected/trapped. The top layer would react to all light with a slight amplification of green (in this example). And the second layer would react to all remaining light with a slight amplification of red. Since the top layer reacts to all spectrums and each successive layer reacts to less, the exact ratio/strength of each filtered spectrum can be determined mathematically.

The filters could be as specific or wide-band as desired. And you can also selectively separate/collect IR and UV spectrums, since Graphene is reactive/sensitive to those spectrums. The filter stack "order" can be modified based upon priorities, but the most efficient arrangement would be to filter low energy first (IR) and high energy last (UV). In this example green is filtered first as it is the most important in regards to "exposure" in the RGB scheme. And IR is last as it is a "specialty spectrum" of low priority in colour photography.

There *is* a problem with the design... Graphene and similar materials generate very little energy *because* they are "transparent" (you can't have it both ways). The circuitry would have to generate very little noise of its own and be very sensitive to the generated signal.
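For what it's worth, the "determined mathematically" step is just solving a small triangular system: each layer's signal is everything the layers above it passed through, plus a boost from its own trapped band. A toy sketch with entirely made-up amplification factors and intensities:

```python
# Hypothetical trapped-band amplification for the green and red layers.
a_g, a_r = 0.5, 0.5
G, R, B = 0.2, 0.5, 0.3          # "true" band intensities (made up)

# Forward model: what each layer would read.
s1 = (1 + a_g) * G + R + B       # layer 1: all light, green boosted
s2 = (1 + a_r) * R + B           # layer 2: green removed, red boosted
s3 = B                           # layer 3: blue only

# Recovery: back-substitute from the bottom layer up.
b = s3
r = (s2 - b) / (1 + a_r)
g = (s1 - r - b) / (1 + a_g)
print(round(g, 3), round(r, 3), round(b, 3))  # 0.2 0.5 0.3
```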
 

Interesting, but it might be circuitry heavy and still restricted in ISO sensitivity.
 
Circuitry heavy yes... depending upon the amount of separation/filtration.
Why do you say ISO restricted? I believe it would (could) be truly ISO-less...
 
What I'm surprised at, after having read all this (and researching further), is that we still use the Bayer array, which can only capture one colour of light per pixel, when Foveon can capture more and therefore produce better images. Why aren't the big manufacturers using Foveon sensors instead of Bayer? This is probably for a whole other thread tbh...!
Quite simple: as the light passes through the layers it loses intensity. Which is why the Foveon images look 'super real' at low ISOs but instantly look like s*** as soon as light levels drop. People who love them completely ignore the fact that some of us only shoot at ISO 100 less than 10% of the time.

For real world use they're totally impractical
 
There should be a way to make the Foveon sensor work effectively. It's no different to the way colour film emulsions are layered.

And contrary to Phil, it would probably be fine for me as I rarely use anything above ISO 100!


Steve.
 
I rarely use anything as fast as ISO 100.

Not everyone has the same "real world" use case.
 
Just for the filmies, ISTR a recent announcement of an ISO 6 (yup, six) film...
 

Modern digital cameras seem to be designed for available light portraits of black cats in coal cellars at night during a power cut.


Steve.
 
It's a wrongly spelled coal cellar, i.e. someone who cells coal.


Steve.
 