Crop Factor vs Full frame (a different request)

antc
Hi all,

Couldn't find any examples of what I am asking (sorry if I missed any), but a couple of friends and I have been chatting with a chap at our camera club who says that if you stick a XXmm lens on a full frame camera, it's no different at all from sticking it on a crop frame camera.

This is where I would love your help/input. I am not entirely sure that what he is saying is correct, to my and my friends' understanding. When you look at various camera websites, take for example my Fuji X-Pro1: next to the 14mm lens they make, it says in brackets that the 35mm equivalent length is 21mm. My friends and I think that using the same lens on full frame and then on a crop body gives a different field of view, which is why it nearly always states a "35mm equivalent length".

Now, can someone please confirm or deny this for me and my friends? Also, if someone who owns both a full frame and a crop body Canon (this is what my friends and I use) would be kind enough, could you shoot a couple of images of the same scene, with the same lens, on both the crop body and the full frame body? (I can't do this as I just have a crop body, a 7D.)

Many many thanks in advance for all help and assistance!
 
On the full frame you will get a wider field of view. Simples.:)
 
The lens itself doesn't change (it is a physical thing). The apparent field of view changes (when the camera position is fixed and the same lens is moved between bodies). The crop factor ratio tells you what lens you'd need to put on a 35mm camera, standing in that same spot, for the same framing.

thanks
rick
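That crop-factor ratio can be sketched in a few lines of Python. This is an editorial sketch, not from the thread; the sensor dimensions are illustrative (the Fuji X-Pro1's APS-C sensor is roughly 23.6 x 15.6mm, full frame is 36 x 24mm):

```python
import math

def crop_factor(sensor_w_mm, sensor_h_mm, ff_w_mm=36.0, ff_h_mm=24.0):
    """Ratio of the 35mm-format diagonal to the smaller sensor's diagonal."""
    ff_diag = math.hypot(ff_w_mm, ff_h_mm)
    return ff_diag / math.hypot(sensor_w_mm, sensor_h_mm)

# Fuji X-Pro1 (APS-C, ~23.6 x 15.6mm): crop factor ~1.53
cf = crop_factor(23.6, 15.6)
print(round(cf, 2))    # ~1.53
print(round(14 * cf))  # a 14mm lens frames like a ~21mm lens on full frame
```

This matches the "14mm (21mm equivalent)" figure quoted for the X-Pro1 above.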
 
Crop is just a crop. There is nothing magic about it.

If you want to see the effect, load any picture into Photoshop and crop away slightly more than half the pixels in each axis.

A lens works exactly the same on full frame and on crop. The cropped sensor just doesn't see part of the image.
 
The lens is what it is and when you mount it on a "crop" camera you only see a section from the middle of the frame.

24mm lens at f2.8 on a Panasonic G1 which is x2 crop.


The same lens mounted on a Sony A7 which is FF.


This is a crop cut from the middle of that last shot...


And just for fun this is the 24mm f2.8 shot taken with the G1 again...


And this is a 50mm shot at f5.6 taken with the A7...


So, we should see that 24mm at f2.8 on a x2 crop G1 = 50mm f5.6 on FF.
We should also see that cutting the middle out of a 24mm FF shot makes it look just like a 24mm shot taken with a x2 crop camera.

I hope that helps :D

PS. I can send you the raw shots if you want them.
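The equivalence demonstrated above can be written out as a small sketch (an editorial illustration, not from the thread): for matched framing and matched depth of field, both the focal length and the f-number scale by the crop factor, while the physical lens itself is unchanged.

```python
def ff_equivalent(focal_mm, f_number, crop):
    """Scale focal length and f-number by the crop factor for matched
    full-frame framing and depth of field."""
    return focal_mm * crop, f_number * crop

focal_eq, f_eq = ff_equivalent(24, 2.8, 2.0)  # 24mm f/2.8 on a 2x crop body
print(focal_eq, f_eq)                         # 48.0 5.6 -- roughly 50mm f/5.6
```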
 
Saw this video recently and I'd no idea the crop factor affected aperture too...

No, it doesn't. Crop factor has no impact on the aperture/depth of field/etc. The article is very confusing if it says so.

It is only framing (subject-to-camera shooting distance) that makes the difference.
 
Saw this video recently and I'd no idea the crop factor affected aperture too...

http://petapixel.com/2014/03/28/concise-explanation-crop-factor-affects-focal-length-aperture/
It doesn't affect the aperture of the lens or anything like that, but a DX lens giving the same effective focal length as an FX lens will have a different depth of field. When comparing lenses on FX (FF) and DX (crop) it is useful to quote the effective aperture and focal length for the DX lens as if it were on FX.

As posted by woof woof above, 24mm @ f2.8 on a 2x crop body gives the same framing and depth of field as a 50mm lens @ f5.6 on full frame.

Anyway, don't worry about it too much; just go and play with the lens and see the results.

In a way the guy was right and in many other ways he was wrong, but ultimately try not to get your mind too deep into this.
 
No, it doesn't. Crop factor has no impact on the aperture/depth of field/etc. The article is very confusing if they say so.

It is only framing (subject to camera shooting distance) that's makes the difference.

You didn't watch the video. It's actually quite a full outline of the many changes that happen when sensor format is altered.

The key to understanding depth of field changes is to know that DoF only exists as a measurable concept when an image is output and viewed, as a print or on-screen, and not at the moment of capture in the camera. When viewing a print, the greater magnification required with smaller formats comes to life, resulting in increased depth of field (when the subject is framed the same, from the same distance, at the same f/number).
 
You didn't watch the video. It's actually quite a full outline of the many changes that happen when sensor format is altered.

The key to understanding depth of field changes is to know that DoF only exists as a measurable concept when an image is output and viewed, as a print or on-screen, and not at the moment of capture in the camera. When viewing a print, the greater magnification required with smaller formats comes to life, resulting in increased depth of field (when the subject is framed the same, from the same distance, at the same f/number).

This is one of the many points that we disagree on :D

The way I see it is that the DoF you will see at any print size and viewing distance is decided by the hardware and circumstance at the moment of capture.

To put it another way...

Once you've chosen your camera and lens, set the aperture, and positioned yourself at a certain distance from your subject, the DoF you will see at any print size and viewing distance is set. That's why it's important to choose your equipment, settings and shooting circumstance with the final output and viewing in mind. Do it the other way round, deciding the print size and viewing circumstance after the image is taken, and it's too late: you're stuck with the shot. Keep the output and how it'll be viewed in mind when you choose your gear and set up the shot, and you'll have a much greater chance of getting the final result you want.
 
The way I see it is that the DoF you will see at any print size and viewing distance is decided by the hardware and circumstance at the moment of capture.

...

Once you've chosen your camera and lens and set the aperture and positioned yourself at a certain distance from your subject the DoF you will see at any print size and viewing distance is set.
No. The DoF is set for a given print size and distance, not any print size and distance. As the print size/distance varies, so the perception of DoF will change.

You absolutely cannot predict the depth of field at capture time UNLESS you know the print size and viewing distance. DoF calculators make assumptions about the viewing distance and print size in their equations. Alter those assumptions and DoF will change.

If you don't believe me, take a billboard photo and view it from up close and across the road. Close up and the image is just blobs and will be "out of focus" (i.e. it has almost 0 DoF). From across the road, it will be "sharp" across the subject and intelligible.

What you appear to be saying is that if you know the final output size & viewing distance, then you can optimise the equipment choice to give a particular perceived DoF output to the viewer. What Richard and I are saying is that if you don't have a fixed output/viewing distance, the perceived DoF is not known at image capture time. BOTH statements are correct. What is incorrect is the belief that some have that when you capture the image the DoF is fixed for all viewing sizes & distances.
 
Thanks Andy :) I'll just add this.

To make meaningful comparisons of DoF, print size and viewing distance are not variables. They are set and fixed by universally agreed standard, and apply to every depth-of-field scale and DoF calculator (such as this one http://www.dofmaster.com/dofjs.html ), as they have since the year dot.

The whole concept of DoF measurement starts and finishes with the final image/print. And it is this: if you make a 10in print and view it from a distance equal to the diagonal (ie 12in) then the human eye cannot detect detail smaller than 0.2mm. That's it, and all the calculations and formulae are worked back from that single fundamental. The beauty of it is that it works just the same for any size of output/print, so long as the viewing distance relationship (equal to the diagonal) is maintained. Eg, as Andy says, if you look at a street poster close up, it's a hopelessly blurred mush of dots, but move back across the street to bring the viewing distance back in line, the DoF standard is restored and it looks sharp.
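As a rough editorial sketch (not the dofmaster code itself), the "work back from the print" standard described above can be turned into the familiar thin-lens DoF formulas. The 0.2mm on-print detail figure and the 10in/12in-diagonal print come from the post; the 50mm f/5.6 subject-at-3m example numbers are illustrative:

```python
import math

PRINT_COC_MM = 0.2  # smallest detail the eye resolves at diagonal-distance viewing

def sensor_coc(sensor_w_mm, sensor_h_mm, print_diag_mm):
    """Scale the on-print circle of confusion back down by the enlargement factor."""
    enlargement = print_diag_mm / math.hypot(sensor_w_mm, sensor_h_mm)
    return PRINT_COC_MM / enlargement

def dof_near_far(focal_mm, f_number, subject_mm, coc_mm):
    """Near/far limits of acceptable sharpness (thin-lens approximation)."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm) if subject_mm < h else math.inf
    return near, far

# Full frame (36x24mm) printed at 10in wide (diagonal ~12in = 304.8mm):
coc = sensor_coc(36, 24, 304.8)               # ~0.028mm, the familiar FF figure
near, far = dof_near_far(50, 5.6, 3000, coc)  # 50mm, f/5.6, subject at 3m
print(round(coc, 3), round(near), round(far))
```

Note that the familiar ~0.03mm full-frame CoC drops straight out of the print-based standard, which is the point being made above.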
 
What! You can make a blurry image sharp again by standing further away? What! Have Nikon or Canon invented a new sensor and new printers that can print 3D info?

Dude, if an area on a print is out of focus then no matter where you stand it will be out of focus, regardless of whether you're standing with your nose in the image, 10m away, or a mile away. The fact is that part of the image is blurry. When we capture an image the camera converts the 3D spatial information into a 2D flattened picture, and that 3D depth is represented by foreground, midground and background and by the varying levels of blur.

The only reason you may struggle to see the billboard clearly, or see it in full focus, is that human eyes have a very small selective focus area, similar to PC lenses. Anything outside of that is effectively peripheral focus. When you stand up close to something huge you won't be able to see the whole thing in focus. And if you think you are seeing everything in focus by moving a few yards away then I seriously think you need to go to Specsavers, because chances are it's all the same blur to you if you cannot differentiate that parts of the billboard picture are tack sharp and parts are blurred. :D

Regardless of the above, I might have misunderstood. But still gonna leave it as it's funny :)
 
What! You can make a blurry image sharp again by standing further away? What! Have Nikon or Canon invented a new sensor and new printers that can print 3D info?

Dude, if an area on a print is out of focus then no matter where you stand it will be out of focus, regardless of whether you're standing with your nose in the image, 10m away, or a mile away. The fact is that part of the image is blurry. When we capture an image the camera converts the 3D spatial information into a 2D flattened picture, and that 3D depth is represented by foreground, midground and background and by the varying levels of blur.

The only reason you may struggle to see the billboard clearly, or see it in full focus, is that human eyes have a very small selective focus area, similar to PC lenses. Anything outside of that is effectively peripheral focus. When you stand up close to something huge you won't be able to see the whole thing in focus. And if you think you are seeing everything in focus by moving a few yards away then I seriously think you need to go to Specsavers, because chances are it's all the same blur to you if you cannot differentiate that parts of the billboard picture are tack sharp and parts are blurred. :D

Regardless of the above, I might have misunderstood. But still gonna leave it as it's funny :)

Yes.
 
DOF is primarily a function of magnification.

Dude if the out of focus area on a print is out of focus no matter where u stand it will be out of focus.

But if you print it small enough it will appear to be in focus. Or more accurately, more of it will appear to be in focus.


Steve.
 
If you don't believe me, take a billboard photo and view it from up close and across the road. Close up and the image is just blobs and will be "out of focus" (i.e. it has almost 0 DoF). From across the road, it will be "sharp" across the subject and intelligible.
That's not actually a very good example though, because the most obvious effect is that the billboard photo will appear pixellated when viewed close up.

I can't quite get my head around whether or not visible pixellation affects the apparent DOF, but I suggest it's best not to go there in the first place. The effect you're trying to describe is not an artefact of the image being made up of pixels.

I think a better example would be looking at an image in the back of a camera. Everybody knows that you can't judge sharpness from a 3" image on the camera LCD, and you need to zoom into it (ie magnify it) in order to tell whether it's sharp. Surely that's because, at smaller magnifications, there is more DOF and everything looks sharp. At larger magnifications there is less DOF and it's possible to tell whether parts of the image are OOF.

@ricky1980 : Does that make more sense?
 
Dude if the out of focus area on a print is out of focus no matter where u stand it will be out of focus. Regardless u standing in front of the image with your nose in it or 10m away or a mile away.

Afraid not. As a simple explanation, no image is ever completely sharp; lenses cannot resolve infinitely. A single point will always be rendered as a blurry circle, and technically there is no such thing as perfect focus, only as perfectly focused as possible. How blurry that circle is depends on how much it is magnified (and how good the lens is, of course). It will be magnified more with a smaller sensor, and less with a larger one, to achieve the same image size. This is why larger formats appear sharper despite a lens with the same field of view being used. It's all about magnification.
 
Oh, and as no one has done what the OP asked (posted comparative images)... here you go.











Nikon D7000 with Nikkor 35mm f1.8G @ f8, ISO100. Manual focusing using live view, tripod mounted and mirror lock up used.



Nikon D800 with Nikkor 50mm f1.8G @ f8 ISO100. Manual Focusing using Live View.







D800


D7000


Both images were manually focused on the back wall of the set, and both were shot at the same aperture, so the D800 file has less depth of field (it's a longer lens) but the printed material on the back wall is intended as the sharpness test, the foreground objects for colour and shadow detail more than anything else.

If the same lens was used on both, for example the 50mm on both, the field of view on the D7000 image would have been narrower. The 35mm on the D7000 is roughly a 50mm equivalent. However, as the D7000 image would be a crop of the lens's field of view, it's magnified more, and the lens defects will be exaggerated. Bigger sensors (all other factors being equal) give sharper images for this reason. Other factors come into play at a more critical level, but for the purpose of this thread, they're not a factor.

Also, bear in mind these two cameras are also different in terms of resolution, so the D800 has an additional advantage.

However, if you resize the 36MP D800 file down to 16MP using no aliasing, the result is still apparently sharper.


Identical sized prints from files for comparison





16MP D800 images are not taken in DX crop mode, but are resized FX images. All I've done is made the pixels bigger. A 16MP FX sensor would look pretty much the same as the resized D800 image.
 
Everybody knows that you can't judge sharpness from a 3" image on the camera LCD, and you need to zoom into it (ie magnify it) in order to tell whether it's sharp.

Even with an 8" x 10" print (for example) hanging on a wall. From thirty feet away, you can't really tell if it's in sharp focus. Ten feet away, it might still look fine but if you put your nose up to it, you will be able to see if the focus was off a bit.


Steve.
 
I think you guys are confusing the ability of our sight with depth of field.

I am short sighted, which means the images I see don't focus properly at the back of my eye, the retina. So I go to the optician and see those test images on the wall. Now they are always pin sharp regardless. A 20/20 person will say it's sharp standing 6 feet away, while a short sighted person will see it blurred until they are 2 feet away. But the image is sharp, and this has nothing to do with the DoF of an image, as it is completely flat (as are the images I have been talking about). It is simply because human vision has a very shallow focus plane and limited resolving capability.

A 2D image viewed flat to your viewing focus plane, i.e. hung vertically, cannot give you any DoF as it occupies a very small vertical plane: the thickness of the paper and paint.

Basically, if the image has blur in it then no matter where you stand it is blurred. The only reason you think it is sharp while standing further away is the limited resolution our eyes can achieve.

The image looks sharp when it is smaller not because it is actually sharp, but simply because all the pixels are compressed to give the impression of being sharp. The captured DoF hasn't changed in the image; it is just that we humans cannot detect it.
 
Basically, if the image has blur in it then no matter where you stand it is blurred. The only reason you think it is sharp while standing further away is the limited resolution our eyes can achieve.
Yup 100%. Even a perfect lens (and none are actually perfect) can only achieve perfect focus at one distance. The consequence of that is that everything off the focal plane is out of focus to a greater or lesser extent. Whether you see something as in or out of focus depends on how big it has been printed and how far away you are viewing it. Depth of field is therefore only "visible" when you view the image.
 
I think you guys are confusing the ability of our sight with depth of field.

I am short sighted, which means the images I see don't focus properly at the back of my eye, the retina. So I go to the optician and see those test images on the wall. Now they are always pin sharp regardless. A 20/20 person will say it's sharp standing 6 feet away, while a short sighted person will see it blurred until they are 2 feet away. But the image is sharp, and this has nothing to do with the DoF of an image, as it is completely flat (as are the images I have been talking about). It is simply because human vision has a very shallow focus plane and limited resolving capability.

A 2D image viewed flat to your viewing focus plane, i.e. hung vertically, cannot give you any DoF as it occupies a very small vertical plane: the thickness of the paper and paint.

Basically, if the image has blur in it then no matter where you stand it is blurred. The only reason you think it is sharp while standing further away is the limited resolution our eyes can achieve.

The image looks sharp when it is smaller not because it is actually sharp, but simply because all the pixels are compressed to give the impression of being sharp. The captured DoF hasn't changed in the image; it is just that we humans cannot detect it.

DOF does not tell you which parts of the image are perfectly sharp - it tells you which parts of the image are close enough that you can't see the difference, and that depends on print size and viewing distance.

Technically, only objects at one exact distance are perfectly in focus. Even a 1/10 of a millimetre different from that distance is not in focus. As you get further from that distance objects become gradually more and more out of focus. DOF is a measure of how far out of focus objects have to be before you can see that they are not sharp. Obviously that depends on eyesight, print size and viewing distance. DOF calculations make certain assumptions about all these parameters to allow for meaningful comparisons.
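A minimal sketch of that threshold idea, assuming roughly one arc-minute of visual acuity (a common textbook figure; the function name and numbers are illustrative, not from the thread): the same physical blur circle on a print falls above or below the eye's resolving limit depending on viewing distance.

```python
import math

EYE_RESOLUTION_RAD = math.radians(1 / 60)  # ~1 arc-minute of visual acuity

def looks_sharp(blur_on_print_mm, viewing_distance_mm):
    """True if the blur circle subtends less than the eye can resolve."""
    angle = blur_on_print_mm / viewing_distance_mm  # small-angle approximation
    return angle < EYE_RESOLUTION_RAD

# The same 0.5mm blur circle on the same print:
print(looks_sharp(0.5, 300))   # False: close up, the blur is visible
print(looks_sharp(0.5, 3000))  # True: from across the room it reads as sharp
```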
 
The image looks sharp when it is smaller is not because it is actually sharp, it is simply because all the pixel is compressed to give the impression of being sharp.

Whilst I disagree with most of your post, this sums it up: it appears to be sharp. And if it appears to be sharp, it is sharp, for that particular viewing scenario.

Bear in mind that the formula for working out depth of field and hyperfocal focussing includes a variable known as the circle of confusion. Confusion being the important bit.


Steve.
 
Groan.... :D Still, it's a slow day :D

No. The DoF is set for a given print size and distance, not any print size and distance. As the print size/distance varies, so the perception of DoF will change

Apart from the "No" I agree and that's pretty much my point. Thanks :D

I don't agree with the "No" because I didn't say or mean that the DoF is set for "any" print size and viewing distance in isolation. What I said was "Once you've chosen your camera and lens and set the aperture and positioned yourself at a certain distance from your subject the DoF you will see at any print size and viewing distance is set." That's perfectly true. But read on...

The DoF will look different to you at different print sizes and viewing distances but how it will look to you at all of these sizes and distances is set when you pick up the camera and press the shutter.

You absolutely cannot predict the depth of field at capture time UNLESS you know the print size and viewing distance. DoF calculators make assumptions about the viewing distance and print size in their equations. Alter those assumptions and DoF will change.

Thanks. That's my point :D

What I actually said was "That's why it's important to choose your equipment and settings and shooting circumstance with the final output and viewing in mind." So obviously I meant that you need to know your output size and viewing. And read on...

My point was and is that it's important to know the output size and viewing before you choose and set the hardware and setup and take the shot.

If you don't believe me, take a billboard photo and view it from up close and across the road. Close up and the image is just blobs and will be "out of focus" (i.e. it has almost 0 DoF). From across the road, it will be "sharp" across the subject and intelligible.

Thanks. That's my point. :D

What you appear to be saying is that if you know the final output size & viewing distance, then you can optimise the equipment choice to give a particular perceived DoF output to the viewer.

Yes. Absolutely you can. Read on...

If you know you are going to be printing at A3 and viewing from 5 ft away you can select your camera and lens and aperture setting to achieve the result you want. It all seems perfectly obvious to me :D

What Richard and I are saying is that if you don't have a fixed output/viewing distance, the perceived DoF is not known at image capture time. BOTH statements are correct. What is incorrect is the belief that some have that when you capture the image the DoF is fixed for all viewing sizes & distances.

That's not what I said and it's not what I meant. Read on dude.

If you don't have a fixed output size and viewing distance in mind, you should still know that the DoF you will get at whatever output size and viewing distance you later settle on is decided when you take the shot! :D How can it be anything else unless you are using a light field camera?

Once you take the shot the DoF you will perceive at A3 from 5ft away is set just as is the DoF you will perceive at A3 and 20ft. In each case the DoF is perceived differently but what you perceive in each case is still decided by the gear and settings and the scene and your distance from it when you took the image.

Shoot with FF and an 85mm lens set to f1.4 at 2 ft from your subject and print to A3 and view from 1ft away... shallow DoF is obvious.

Print the same image the size of a postage stamp and view it from 4ft away... the image may appear to have deep DoF.

Shoot with MFT and a 6mm lens set to f22 and there's next to nothing you can do to perceive shallow DoF unless you print it the size of the moon and view it from 1 ft.

These things should be no surprise and what you perceive in each case, although different in each case as you change the camera, the settings and the print size and viewing distance, was set when you pressed the button.

The output size and the viewing distance alter your perception of the DoF you see, but the hardware, the distance to the subject and the spatial relationships of the things within the image are set, and different perceptions at different output sizes and viewing distances can only make the inherent properties of the image either more or less visible to the viewer.

Oooh, there's nothing like DoF to lead to endless hair splitting.

Yes. I am indeed saying that you can optimise your kit and settings to achieve the output you want and that what you will see at any output size and viewing distance you choose is indeed set when you press the button.

This is why I take a lot of MFT at wide apertures. The reason is that with a smaller format system I'm more likely to be using shorter lenses and to perceive shallow DoF (which I sometimes like) in small prints and on screen at normal viewing distances I need to use a wide aperture. For example when shooting with MFT I often use a lens in the region of 20 to 50mm and rarely use apertures smaller than f8 and indeed very often shoot at f1.4 to f4.

However, with a full frame camera the chances are that I'll be using a longer lens and may be shooting with reduced camera to subject distances and as a result I will probably be using apertures in the f4-f11 range.

This will tie in nicely with the crop factor as MFT and 24mm at f2.8 = (more or less) FF at 50mm and f5.6 and both setting are within the ranges I previously mentioned for the two formats.

The reason I posted (and as always with you guys I end up regretting doing so :D) is to try and add some balance to the view that DoF is decided by output size and viewing distance, full stop.

What I'm saying is that it's hardware and how the shot is set up that decide what you see at the various output sizes and viewing distances you choose.

My point is that you should think about these things before you take the shot and not believe that the DoF is decided later by the output size and viewing as that's not the whole story. It's the END of the story.

The DoF you perceive changes with output size and viewing distance, but overriding the whole experience is the kit and its settings and the spatial relationships when you took the shot.

And to get back to the OP and the example images I posted...

When I took the FF shot at 50mm and f5.6 I had a good idea what the image would look like and I knew that to make a very similar image with the Panasonic G1 I needed to use a wide lens and a wider aperture, hence 24mm f2.8 on MFT = 50mm f5.6 on FF.

These two images will look very similar no matter what the output size and / or viewing distance until you go beyond what the hardware can do and print them the size of a barn by which time the G1 image will probably fall apart first.

We do come at this from different angles and my point as usual is just to add a different view and balance and that is that YES output size and viewing distance are what decide how the viewer perceives DoF... but (Drum Roll)... what you perceive at any output size and viewing distance you decide on is set when you press the button.

Cup of tea time.
 
<snip>

I can't quite get my head around whether or not visible pixellation affects the apparent DOF, but I suggest it's best not to go there in the first place. The effect you're trying to describe is not an artefact of the image being made up of pixels.

<snip>

DoF calculations take no account of pixels, but they're of no consequence anyway, because the DoF standard is actually quite loose.

If you take that 0.2mm* resolution figure in a 10in image I mentioned above (the primary circle of confusion from which the whole shooting match evolves) then the DoF standard only requires 1.1mp resolution (1270x847=1.1m dots) and that's quite close to most images displayed fairly large on a monitor. Eg Pookeyhead's still life shots above are less than half that (800x530=0.42m dots).
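A quick check of that arithmetic (an editorial sketch; the 3:2 aspect ratio is assumed from the quoted 1270x847 figures):

```python
# 0.2mm resolvable detail across a 10in (254mm) wide image in 3:2 proportions.
width_mm, coc_mm = 254, 0.2
dots_w = round(width_mm / coc_mm)  # 1270 dots across
dots_h = round(dots_w * 2 / 3)     # 847 dots down (3:2 aspect ratio)
megapixels = dots_w * dots_h / 1e6
print(dots_w, dots_h, round(megapixels, 2))  # 1270 847 1.08
```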

That doesn't mean that our higher resolution sensors are of no benefit. There is some truth in that, though it's not quite that simple and I don't think we want to get into cascading MTF curves right now!

*Edit: My 1080 HD monitor has 0.25mm resolution, ie 4x RGB dots per mm, but I have to get within a few inches of the screen to see them. Of course, they're invisible at normal viewing distance and it is this fact that underpins the whole concept of DoF. In a way, it's an optical illusion :)
 
DOF *IS* the perceived "acceptable sharpness" in an image, not "actual sharpness."
It is based upon a number of assumptions which include display/print size, viewing distance, and typical visual acuity (the COC factor). If one of these factors changes then the COC factor changes and the DOF changes. In order for an image to "have" a given DOF you have to control all of these factors, which you really can't. So, in fact, an image does not actually have "a DOF."
However, most people will tend to view an image of a given size from a distance that causes it to fill their primary FOV so that they can see "all of it." And a normal lens is one that also records this same primary FOV (~45°). Basically, the FOVs match... (this works out to a focal length roughly equal to the sensor diagonal)

It's also debatable as to the validity of the COC standard factor as display sizes and viewing distances were conservative (at least compared to today), as was the "typical visual acuity" which is actually variable with situation/subject.

"Actual sharpness" only occurs at one point in an image and doesn't change.
 
DoF calculations take no account of pixels, but they're of no consequence anyway, because the DoF standard is actually quite loose.
I *believe* sensor pixels are of no consideration because they are "self negating" within the DOF/COC standard. More MP's for a given sensor size means less enlargement for a given print/display size... the COC remains the same.

Edit to add:
I find it's best to think of pixels as "detail," and contrast (MTF) as sharpness... two separate but interdependent factors.
 
I *believe* sensor pixels are of no consideration because they are "self negating" within the DOF/COC standard. More MP's for a given sensor size means less enlargement for a given print/display size...

Pixels are of no consideration because we've always got way more of them than we need anyway just to satisfy the DoF standard.
 
Pixels are of no consideration because we've always got way more of them than we need anyway just to satisfy the DoF standard.
I finally found a technical article that explains this... and it makes complete sense.
The same article also gave reference to different COC standards related to use; "technical/scientific" use has a common requirement of 0.015mm (only ~4MP).
 
DoF calculations take no account of pixels ...
I *believe* sensor pixels are of no consideration ...
Pixels are of no consideration ...
Crikey, talk about straw men.

You're responding to this:
I can't quite get my head around whether or not visible pixellation affects the apparent DOF, but I suggest it's best not to go there in the first place. The effect you're trying to describe is not an artefact of the image being made up of pixels.
But it wasn't me who brought up pixels. It was @arad85:
If you don't believe me, take a billboard photo and view it from up close and across the road. Close up and the image is just blobs and will be "out of focus" (i.e. it has almost 0 DoF).
My point was that @arad85's argument was confused because he had - unnecessarily - brought up the subject of the image appearing pixellated when viewed from close range.
 
Good grief this thread has got complicated.:)
 