A Question About "Crop" Sensors and Lens.

This appeared on TPF years ago, can't remember who originally posted it :)

Crop-factor.jpg

It was mine. Glad you found it useful.

To explain what it's showing: the left hand circle represents the image seen through a full frame lens (Canon EF, Nikon FX) and the right hand circle represents the image seen through a crop sensor lens (Canon EF-S, Nikon DX) with the same focal length. Some people think that crop-sensor lenses give them more reach, i.e. a narrower field of view, than full frame lenses with the same focal length. For example, they think that a 16mm full-frame lens is wider than a 16mm crop-sensor lens. But that isn't the case.
 
Depth-of-field is one of the subjects where pixel density can be comfortably ignored - the DoF standard is too coarse for pixels to be significant. If you take the specified CoC for APS-C 1.6x (then the smallest for DSLR users) and an 18mp camera (quite a modest total these days) then the CoC at sensor level is 0.019mm, which equates to 44 pixels. A few more or less makes zero difference to DoF.
I don't think it makes a difference to your argument, but do you want to try doing that calculation again? You're out by a factor of 10.
 
Yes, I'm saying the same thing, but I think I'm saying it more clearly. One of my problems with saying that DoF doesn't exist until you print and look is that I think it adds confusion for anyone who maybe hasn't got a working understanding.

The DoF shouldn't be some big reveal that comes as a surprise in the final image... "My gosh! I've printed the picture x size and now I see.... THE DOF!" That shouldn't happen. It should be pretty predictable, and it is. It is what it is. It's set by the gear, the settings and where whatever you're pointing the gear at is when you press the button, and once you've pressed the button, what you see in a 1" print or a 6m wide one is pretty predictable.

Maybe I'm reading too much into this and no newbies are confused but I find "DoF is not fixed at the moment of capture on the sensor, and changes with any aspect of magnification at any point in the chain right up to final viewing of the image." needlessly confusing and almost mystical. It's not magic, there's no big reveal or mystery here and Yoda and The Force have nowt to do with it. The DoF you will see at any given magnification and viewing distance is predictable and is decided at the moment of capture.
I agree. You're both right but I think your way of saying it is clearer.
 
I bet Olympus Pen users took a disproportionate number of shots in portrait format (like smartphone users today).
We did. I had Pen FT & Pen F bodies plus four or five lenses back in the seventies and, yes, portrait format was the norm.
 
An interesting topic and how great it is to read the discussion being carried out sensibly and politely... In the past I have witnessed DoF and equivalence discussions deteriorate into name calling etc.

Back in the days of film it was a subject that was probably not even considered, and it has become more 'important' now that the same lens can be more easily or commonly used on different formats.

A 300mm f2.8 lens will always be a 300mm f2.8 lens whatever format it is used on; when used on a 'crop' sensor body it does not magically become another lens. Now assuming (just for the sake of my post) the lens and subject are set up a fixed distance apart, and we have two camera bodies, one 'full frame' and the other a 'crop', and both sensors have the same pixel density (not the same resolution but the same density), then at the sensor the acceptable DoF is going to be the same; the sensor in the crop body has done just that, it has cropped the image.

Now we come to view the image. For the sake of argument, we want a 10" x 8" print off each... The image from the 'crop' sensor is going to need to be enlarged more than the one from the 'full frame' sensor, so therefore everything gets enlarged/magnified more, including the 'blur', so the DoF, looking at the print, now 'appears' to be less on the crop sensor. In fact the more we enlarge each image, the less the DoF will 'appear' on both. Of course this has been true from the very first use of enlarging.

So as Woof Woof says it makes sense to consider your final output at the taking stage. (although maybe not always possible).
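To put rough numbers on that enlargement point, here's a quick Python sketch. The 0.02mm blur spot and the 10" print are just example figures; the sensor widths are the usual full-frame and 1.6x values.

```python
# Sketch: how enlargement changes apparent blur in a print.
# Same lens, same distance, same blur spot of 0.02 mm at the sensor.
blur_on_sensor_mm = 0.02

ff_width_mm = 36.0      # full-frame sensor width
crop_width_mm = 22.5    # 1.6x 'crop' sensor width (36 / 1.6)
print_width_mm = 254.0  # a 10" wide print

ff_magnification = print_width_mm / ff_width_mm      # ~7.1x enlargement
crop_magnification = print_width_mm / crop_width_mm  # ~11.3x enlargement

# The same sensor-level blur ends up bigger in the crop-sensor print,
# so the DoF 'appears' shallower, exactly as described above.
print(blur_on_sensor_mm * ff_magnification)
print(blur_on_sensor_mm * crop_magnification)
```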
 
Yes, I'm saying the same thing, but I think I'm saying it more clearly. One of my problems with saying that DoF doesn't exist until you print and look is that I think it adds confusion for anyone who maybe hasn't got a working understanding.<snip>

This subject does cause confusion whether we like it or not, which is why a bit of understanding is needed, but it's a very easy concept to grasp once you know the basics. Depth-of-field is not set in stone at the moment of capture, neither do subjects suddenly go from sharp to unsharp when the DoF calculation is exceeded; it's a gradual process. The confusion arises when someone says, 'it looked sharp on the LCD, but on the computer screen it's gone out of focus.' Well yes, and then when you crop it a bit, things get even more out of focus, and if you hit the 100% button almost everything looks blurry. But there's nothing wrong with the lens or the focusing, it's just DoF at work.

Depth-of-field is an optical illusion, based on the assumption that at normal viewing distance there is a limit to the detail we can see in an image, and below a certain point everything is discerned as perfectly sharp. That is the DoF zone, based on normal viewing, but if you change either the viewing distance or the size of the image (or the cropping in post-processing), the rules of the game have been broken. We have no control over how others view our images, which is highly variable, so it's impossible to make firm assumptions at the shooting stage. It's better to stick to the rules (that have served us well for many decades and are universally recognised) while knowing and understanding that things can and do change.
 
I don't think it makes a difference to your argument, but do you want to try doing that calculation again? You're out by a factor of 10.

It's right Stewart. 18mp on a Canon 7D is 233 pixels per mm, therefore 0.019mm = 44 pixels.
 
...Dunno about you but I often choose my gear with an eye to what the final image is going to be used for. Many of my pictures are sized 2000 pixels wide and saved as quality 9 and sent electronically for viewing on tablets and phones...
I think this is probably the most common use of photographs these days; electronic distribution/viewing. And you must realize that puts the DOF as viewed well out of your control... even if you limit the max resolution at which it may be displayed, you cannot control the minimum. You cannot control the physical display size (screen size/resolution) nor the viewing distance, and IME those things vary to a great extent.

But, if we go back to the standard which is (I believe) based upon "normal/comfortable viewing" of an image as a whole, then we can assume a fixed DOF in an image as indicated by the calculators. Of course, this also assumes the image will not be cropped, and the CoC used is appropriate (debatable).

For myself it's generally pretty simple; I either want the least DOF possible, or the sharpest image possible... it's not really about DOF (the middle ground) at all.
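For anyone curious what those calculators are actually doing, here's a minimal sketch of the standard hyperfocal-based DoF formulas. The 0.019mm CoC is the APS-C figure quoted earlier in the thread; the 50mm/f2.8/3m numbers are just an example.

```python
# Minimal DoF sketch using the standard hyperfocal approximation.
def dof_limits(focal_mm, f_number, subject_mm, coc_mm=0.019):
    # Hyperfocal distance: H = f^2 / (N * c) + f
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = h * subject_mm / (h + subject_mm - focal_mm)
    if subject_mm >= h:
        far = float("inf")  # focused at/beyond hyperfocal: far limit is infinity
    else:
        far = h * subject_mm / (h - subject_mm + focal_mm)
    return near, far

near, far = dof_limits(50, 2.8, 3000)  # 50mm at f/2.8, subject at 3 m
print(near / 1000, far / 1000)         # near/far limits in metres
```

Which also shows why the CoC (and hence the assumed viewing conditions) is baked into the answer: change the CoC and both limits move.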
 
It's right Stewart. 18mp on a Canon 7D is 233 pixels per mm, therefore 0.019mm = 44 pixels.
233 pixels per mm, so 44 pixels is 44/233 mm, which is ....? (Hint: it's not 0.019.)
 
It was mine. Glad you found it useful.

To explain what it's showing: the left hand circle represents the image seen through a full frame lens (Canon EF, Nikon FX) and the right hand circle represents the image seen through a crop sensor lens (Canon EF-S, Nikon DX) with the same focal length. Some people think that crop-sensor lenses give them more reach, i.e. a narrower field of view, than full frame lenses with the same focal length. For example, they think that a 16mm full-frame lens is wider than a 16mm crop-sensor lens. But that isn't the case.

That’s a very useful comparison Stewart. I have a question. Why is it then that a 1.6x image on a 7D is 18MP whereas I’m told that the same image taken on a 5Div using the same lens at the same focal length and cropped down to the same dimensions as the 7D image only gives 12MP? Shouldn’t it be the same 18MP?
 
That’s a very useful comparison Stewart. I have a question. Why is it then that a 1.6x image on a 7D is 18MP whereas I’m told that the same image taken on a 5Div using the same lens at the same focal length and cropped down to the same dimensions as the 7D image only gives 12MP? Shouldn’t it be the same 18MP?
1.6x crop leaves a bit less than 1/2 the FF sensor area, so a bit less than 1/2 the original resolution (i.e. less than 15MP). Even a 1.5x crop leaves a bit less than 1/2 the original resolution because MP is an LxW calculation... i.e. if you shrink both L and W by some factor, the pixel count shrinks by that factor squared.
 
That’s a very useful comparison Stewart. I have a question. Why is it then that a 1.6x image on a 7D is 18MP whereas I’m told that the same image taken on a 5Div using the same lens at the same focal length and cropped down to the same dimensions as the 7D image only gives 12MP? Shouldn’t it be the same 18MP?
Why do you think it should? I mean, if you think the 5D Mk IV image should be 18MP when cropped, what do you think you should get if you cropped a 5D Mk III image, or a 5D Mk II image, or a 5D Mk I image? And if they aren't 18MP, why do you think the 5D Mk IV should be?
 
233 pixels per mm, so 44 pixels is 44/233 mm, which is ....? (Hint: it's not 0.019.)

Haha sorry Stewart, I was checking the other end of the calculation - yes, 4.4 not 44. Apologies :)

But as you say, the point remains - pixels are just too small to affect DoF calculations.
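For anyone following along, the corrected arithmetic, using the 7D's published 5184 x 3456 image size and 22.3mm sensor width:

```python
# Checking the corrected figure: the CoC in pixels on an 18MP APS-C sensor.
# Canon 7D published specs: 5184 x 3456 pixels, 22.3 mm sensor width.
pixels_per_mm = 5184 / 22.3          # ~232 pixels per mm
coc_mm = 0.019                       # standard CoC for APS-C 1.6x
coc_pixels = coc_mm * pixels_per_mm  # ~4.4 pixels, not 44
print(round(coc_pixels, 1))
```

Either way the conclusion stands: a handful of pixels is far below the DoF standard's resolution.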
 
That’s a very useful comparison Stewart. I have a question. Why is it then that a 1.6x image on a 7D is 18MP whereas I’m told that the same image taken on a 5Div using the same lens at the same focal length and cropped down to the same dimensions as the 7D image only gives 12MP? Shouldn’t it be the same 18MP?

Crop-factor is a linear measure. When comparing sensor area, the figure to use is the crop-factor squared, i.e. compared to full-frame, an APS-C 1.6x sensor is 1.6x1.6 = 2.56x smaller - a bit less than half the area. A crop factor of about 1.4 (strictly √2 ≈ 1.414) would be exactly half the area of full-frame.
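Putting numbers on that for the 7D/5DIV question above - a quick sketch, assuming the 5D Mk IV's published image size of 6720 x 4480 (~30.1MP):

```python
# Cropping a 5D Mk IV frame down to the 1.6x field of view:
# both dimensions shrink by the crop factor, so MP shrinks by its square.
full_w, full_h = 6720, 4480  # 5D Mk IV published image size
crop_factor = 1.6

crop_w = full_w / crop_factor  # ~4200 pixels
crop_h = full_h / crop_factor  # ~2800 pixels
mp = crop_w * crop_h / 1e6     # ~11.8 MP - the ~12MP quoted in the question
print(round(mp, 1))
```

So the ~12MP figure is just 30.1MP divided by 1.6 squared; the 7D keeps its full 18MP because its pixels are packed more densely.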
 
Well, close enough for Rock'n'Roll! :P

Depth of field is in itself a bit of a misnomer; it's more like 'depth of acceptable sharpness'.
 
Wouldn't you just get a photo of a nose?
No, not at all.

There's a simple relationship which holds good for telephoto lenses. (It's probably a bit off for wide angles and macro.) It is this:

Focal length / Sensor size = Subject distance / Subject size

So plug in the numbers. Focal length = 300mm. Sensor size = 36mm (full frame camera). Subject distance = 25 yards. Hence subject size = 3 yards. You'd get the whole person from head to toe with some extra space.

If you're not convinced, check out the blog article I wrote several years ago. At the bottom there are a load of photos that illustrate this exact point.

http://lensesforhire.blogspot.co.uk/2012/03/how-big-lens-do-i-need.html
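That relationship is easy to plug into a couple of lines of Python (the function name is just for illustration):

```python
# The telephoto rule of thumb above, rearranged:
# focal / sensor = distance / size, so size = distance * sensor / focal.
def subject_size(distance, focal_mm, sensor_mm=36.0):
    # distance and the returned size are in the same units (yards, metres...)
    return distance * sensor_mm / focal_mm

# 300mm lens on full frame (36mm wide sensor), subject at 25 yards:
print(subject_size(25, 300))  # -> 3.0 yards of subject fills the frame
```

As noted above, it's an approximation that works well for telephoto distances but drifts for wide angles and macro.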
 
It was mine. Glad you found it useful.

To explain what it's showing: the left hand circle represents the image seen through a full frame lens (Canon EF, Nikon FX) and the right hand circle represents the image seen through a crop sensor lens (Canon EF-S, Nikon DX) with the same focal length. Some people think that crop-sensor lenses give them more reach, i.e. a narrower field of view, than full frame lenses with the same focal length. For example, they think that a 16mm full-frame lens is wider than a 16mm crop-sensor lens. But that isn't the case.
Correct. Used on an APS-C body, 16mm gives the same angle of view regardless of which format the lens was originally designed for, of course excluding those for even smaller formats ;). A little more here:
http://www.acapixus.dk/photography/angle_of_view.htm
 