It's a moot point, but you're clutching at straws. The truth is that both lens capability and the practicalities of real world picture taking are serious limitations when you're trying to max out the potential of a 36mp sensor.
I never disagreed with this. There are lots of obstacles to overcome, but they can be overcome: technique can be improved, tripods deployed, shutter speeds increased, better lenses used. (The lens quality thing is incredibly overblown, though, and we will see that within a month for sure. We can see it now in the D800 raws shot with the 50mm f/1.8, a £170 lens, but getting the bodies into lots of hands will show it better.) That said, for applications where across-the-frame performance is as important as the centre, the lens issue will be relevant sooner.
You're good at quoting obscure theory (I'm not bad at it myself) but whatever this deconvolution business claims to be able to do, it has not found its way into any cameras yet. I will make a prediction too: it's not coming our way any time soon.
Hey, it can't be that obscure if teenagers are familiar with it (any electronic, mechanical or aeronautical engineering, physics or maths undergrad should know what it is and how it works).
It won't be in cameras for a while, though I can see it hitting desktops in the next few years. Deconvolving a 70+MP image (that's about where it could start being really useful, as the colour resolution of the sensor is then high enough) will take a lot of memory and processing power, and you need to have the response of the various lenses and sensors profiled. But computing power is getting cheaper all the time (and the chip companies are looking for ways to sell us more, because their chips are now so good that the average person no longer needs to upgrade their laptop or desktop except when it breaks - incidentally, a situation camera makers don't want to fall back into).
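For anyone wondering what the deconvolution business actually looks like, here's a rough sketch of the simplest frequency-domain version (Wiener deconvolution) in Python with numpy. The Gaussian PSF and the noise-to-signal figure are invented for the demo - a real system would use a measured profile of the lens and sensor response, which is exactly the profiling mentioned above:

```python
import numpy as np

def gaussian_psf(size, sigma):
    """A made-up blur kernel standing in for a measured lens/sensor PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def pad_psf(psf, shape):
    """Embed the PSF in a full-size array with its centre at (0, 0),
    so circular convolution via FFT introduces no phase shift."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, -(psf.shape[0] // 2), axis=0)
    padded = np.roll(padded, -(psf.shape[1] // 2), axis=1)
    return padded

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Wiener filter: divide out the PSF's transfer function,
    damped (by the noise-to-signal ratio) where the signal is weak."""
    H = np.fft.fft2(pad_psf(psf, blurred.shape))
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F_hat))

# Demo: blur a synthetic edge with a known PSF, then recover it.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
psf = gaussian_psf(9, sigma=1.5)
H = np.fft.fft2(pad_psf(psf, img.shape))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = wiener_deconvolve(blurred, psf)
```

The restored edge lands much closer to the original than the blurred one does - and note this toy version already needs two full-image FFTs, which is where the memory and processing cost comes from at 70+MP.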
I like a bit of theory, which surely underpins everything, but one of the great things about photography is that everything is netted out in an actual finished photograph and you can't argue with it. If I can't see a theory making any visible difference, even when examined very closely, then I'm not going to worry about it.
Very true. But where something can make a difference, it's good to know how practical/plausible it is - and in some cases, even that it exists.
But one theory holds good - optical diffraction caps potential sharpness, and you don't have to look too closely to see it. It's real, it's relevant, and it's not hard to prove for anyone who hasn't already noticed it. It's one of the reasons for those T&S lenses you referred to above.
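To put numbers on that: the Airy disk diameter is roughly 2.44 x wavelength x f-number, and you can set it against the pixel pitch of a 36MP full-frame sensor. A quick sketch (550 nm green light and 36 x 24 mm sensor dimensions assumed):

```python
WAVELENGTH_MM = 550e-6  # 550 nm green light, in mm

def airy_diameter_mm(f_number, wavelength_mm=WAVELENGTH_MM):
    """Approximate Airy disk diameter: 2.44 * lambda * N."""
    return 2.44 * wavelength_mm * f_number

def pixel_pitch_mm(megapixels, sensor_w=36.0, sensor_h=24.0):
    """Pixel pitch for a 3:2 full-frame sensor of the given resolution."""
    pixels_wide = (megapixels * 1e6 * sensor_w / sensor_h) ** 0.5
    return sensor_w / pixels_wide

pitch = pixel_pitch_mm(36)  # ~4.9 um for a D800-class sensor
for n in (4, 8, 11, 16, 22):
    d = airy_diameter_mm(n)
    print(f"f/{n}: Airy disk {d * 1000:.1f} um vs pixel pitch {pitch * 1000:.2f} um")
```

The Airy disk overtakes the ~4.9 um pitch somewhere around f/4, which is why diffraction softening is visible well before f/16 on a sensor this dense - no close looking required.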
The T/S lenses are actually a great example. There's a fair amount of moderate maths that can be applied to work out exactly the tilt, shift and focus distance for the desired effect in a given scene, even before the camera comes out of the bag. Most people can't do it (mental trig isn't the easiest thing, even with approximations), so they develop a 'feel' for it or mark distances and angles. But take someone who has the 'feel' but not the detail, explain the theory, and it now takes a quarter of the time and much less faffing about to get the desired result. And there's no complaining that tilt doesn't get everything in focus (it changes the plane of focus rather than expanding it, but a good number of people don't realise that and get frustrated).
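As a sketch of that moderate maths: Merklinger's hinge rule gives the tilt directly as sin(theta) = f / J, where J is the distance from the lens to the hinge line - for a level camera, roughly the height of the lens above the plane you want sharp. The 90 mm / 1.5 m figures below are purely illustrative:

```python
import math

def tilt_angle_deg(focal_length_mm, hinge_distance_mm):
    """Merklinger's hinge rule: sin(theta) = f / J.
    J is the lens-to-hinge-line distance; for a level camera this is
    roughly the lens height above the desired plane of sharp focus."""
    return math.degrees(math.asin(focal_length_mm / hinge_distance_mm))

# A 90 mm tilt lens about 1.5 m above a flat field:
theta = tilt_angle_deg(90, 1500)
print(f"tilt needed: {theta:.1f} degrees")
```

About 3.4 degrees - a tiny movement, which is why working it out beforehand beats hunting for it by eye.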
Sure, there's lots of marketing talk going on, but again here is the reality: more pixels means more noise in practice. In making true like-for-like comparisons as best we can, that is always true*. Frankly I think we're now approaching the point (the latest sensors and processing engines are awesomely good now) where it's becoming a marginal issue for most of us, with full frame at least.
*Nikon suggested to me that at least some of the noise advantage of the D4 is down to the processing engine. Though the engine is the same in the D800, there are cost and speed limitations to be weighed up when extracting the last drop out of more than twice the number of pixels, each with less than half the signal level.
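The per-pixel versus per-image distinction is worth making explicit here. A shot-noise-only sketch (full-well figures invented for illustration, and read noise deliberately ignored - read noise is where the real per-pixel penalty lives) shows per-pixel SNR dropping with smaller pixels, but the gap closing once you downsample to a common output size:

```python
import math

def pixel_snr_db(photons):
    """Shot-noise-limited SNR of one pixel: signal N, noise sqrt(N)."""
    return 20 * math.log10(math.sqrt(photons))

FULL_WELL_16MP = 60000                      # photons per pixel, made up
FULL_WELL_36MP = FULL_WELL_16MP * 16 / 36   # same sensor area, smaller pixels

per_pixel_16 = pixel_snr_db(FULL_WELL_16MP)
per_pixel_36 = pixel_snr_db(FULL_WELL_36MP)
# Downsample 36MP to 16MP: averaging 36/16 pixels cuts noise by sqrt(36/16)
normalised_36 = per_pixel_36 + 10 * math.log10(36 / 16)

print(f"16MP per pixel:        {per_pixel_16:.2f} dB")
print(f"36MP per pixel:        {per_pixel_36:.2f} dB")
print(f"36MP downsampled to 16MP: {normalised_36:.2f} dB")
```

Under these idealised assumptions the downsampled figures come out identical, which is why the like-for-like caveat above matters: the lasting penalty for more pixels is in read noise and in the cost/speed of processing them, not in the shot noise itself.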
There's always going to be compromise. For a higher-MP sensor to perform as well as a lower-MP one, you have to use a newer process geometry (£££££), slow the sensor down (boo), use more complex microlenses (££), or move to lower-voltage signalling (£££, which affects yield and conflicts with getting data off the sensor). No use making a 36MP sensor that shoots at 1FPS and costs £15000 (figures pulled out my ****) when you could just go to medium format and have those things.
I wonder what will happen if we ever properly disagree? Forum hijack rather than just a thread?
