Canon says more megapixels is bad...

^^^
I see, I personally always like to keep the raw but again that's just preference I guess.
Trouble is with this example though, as I said previously, there looks to be quite a large amount of natural fill anyway, so it's not like we can see how much extra fill was needed...
 

I'm sure there are a ton of other examples; Dean posted some great links above. The fact is, to blanket state that fill flash is ugly means you must be doing it wrong, because when it is done right you can achieve the EXACT look that you processed using digital fill. OK, so it will also produce the catchlight, but that's a whole different kettle of fish; here we are talking purely about exposure.
 
Correct fill light placement = on camera axis

Isn't that exactly what we're talking about? Of course fill should be on axis.

Butterfly lighting with fill (reflector) creates a nice clamshell catchlight. That's completely different from a sky catchlight mixed with a crappy white dot in the middle of the pupil from direct fill flash; the catchlight just looks so unnatural/weird being right in the centre of the eye.

The pupil is black. If you can't remove a tiny white dot in post then your PP skills need looking at, mate.
 
^^^
Thanks Joe, I appreciate your intention, and I understand how it may appear that I'm being cocky as you put it, but TBH I'm not trying to win some popularity contest here; if I truly believe I'm right then I will say so.

<snip>

I don't think you're being cocky Rhys, very patient actually, under fire from some equally assertive statements that can only be just as wrong in their dismissal of an alternative technique that obviously works for you.

And TBH I have some sympathy with what you're saying. I'd even go so far as to say that Lightroom for me has become almost a new technique in itself. You can do what you have done with that portrait so easily - literally two sliders in two seconds - and it's simply amazing what happens. Light and colour appear from nowhere out of dark shadows, highlights pulled back perfectly from oblivion.

Purists might say that you shouldn't do it like that, but why not? I've never been able to do it before with any other software (maybe says more about my post processing skills) which is why my first reaction to that situation would be a dash of flash. And it probably still would be, but I'm not dismissing what you've done. New technology makes new things possible. And I like your portfolio BTW :)

I could take issue with you on a few things though. One thing I love about fill-in flash is the highlight that it puts in otherwise shaded, dead eyes! And I don't know what to make of that link to a 5D2 user, other than he seems to be an acknowledged nutter with a dodgy camera.

My 5D2 is another example of technology creating alternative options. It's fantastic, and I chose it over a D700 because of its amazing blend of low noise and high resolution. That's just a comment, not an argument :D but I wonder why the Exif on your portrait above says it was shot on a D7000.
 

Are there actually any examples of direct fill flash here?

And do you actually think the statement below applies to the sample image I posted, when despite the apparent underexposure the light was so contrasty I was only just able to hang onto the highlights in the hair?

"Now, when we talk about ‘fill-flash’, we’re usually describing the scenario where our ambient exposure is correct, and we’re just lifting the shadows with a mere hint of fill-flash."

fill.jpg
 
Isn't that exactly what we're talking about? Of course fill should be on axis.

You must have missed this.

That's an interesting definition. I'd always thought of fill flash as flash to lift the shadow areas to better match the backlighting, so that the front of your subject and the background are equalised; looked at like that, the exact source and axis doesn't seem to matter :cuckoo:

Winner. I prefer this definition too.
 
I'd have thought fill flash was more along the lines of...

254561_10150649868590405_509225404_18970474_548018_n.jpg


?
 
I don't think you're being cocky Rhys, very patient actually, under fire from some equally assertive statements that can only be just as wrong in their dismissal of an alternative technique that obviously works for you.

And TBH I have some sympathy with what you're saying. I'd even go so far as to say that Lightroom for me has become almost a new technique in itself. You can do what you have done with that portrait so easily - literally two sliders in two seconds - and it's simply amazing what happens. Light and colour appear from nowhere out of dark shadows, highlights pulled back perfectly from oblivion.

Purists might say that you shouldn't do it like that, but why not? I've never been able to do it before with any other software (maybe says more about my post processing skills) which is why my first reaction to that situation would be a dash of flash. And it probably still would be, but I'm not dismissing what you've done. New technology makes new things possible. And I like your portfolio BTW :)

I could take issue with you on a few things though. One thing I love about fill-in flash is the highlight that it puts in otherwise shaded, dead eyes! And I don't know what to make of that link to a 5D2 user, other than he seems to be an acknowledged nutter with a dodgy camera.

My 5D2 is another example of technology creating alternative options. It's fantastic, and I chose it over a D700 because of its amazing blend of low noise and high resolution. That's just a comment, not an argument :D but I wonder why the Exif on your portrait above says it was shot on a D7000.

Cheers Richard, I'm glad you can see the use of such a technique LR provides.
As for the shadowed raccoon eyes, yes I agree it can be good for that, but that normally happens when the sun is high and in front of the subject.
I used to have a D7K, and have only had this D700 since Saturday, but I've tested my D700 out and it provides comparable results.

Edit:

Maybe I'm not such a 'purist' when it comes to exposure; when I got into photography I started out using LR3, so that's all I've known.
 
Rhys, I'm not dissing the use of the fill slider in Lightroom at all. I use it a lot too, but I am saying that to use it solely when lighting a subject properly provides much better IQ is folly.

It is much much easier to use your method on the D7000 because it has incredible dynamic range (yes, I get that this is your point about the 5DII) - much better than the D700 (I own both too), and much, much better than the 5DII. What I am saying is that on camera fill is a tool that will provide better results on occasions so knowing how to use it (I use it rarely myself) gives you another tool because you will find that one day the method you employ fails you.

Oh, and I did miss the stuff about off axis fill. I think it is the work of the devil and leads to very badly lit photos. ;)
 
Rhys, I don't think a shopped example of catchlight removal from Chaz is a good example, mate.
 
Cheers Richard, I'm glad you can see the use of such a technique LR provides.
As for the shadowed raccoon eyes, yes I agree it can be good for that, but that normally happens when the sun is high and in front of the subject.
I used to have a D7K, and have only had this D700 since Saturday, but I've tested my D700 out and it provides comparable results.

Edit:

Maybe I'm not such a 'purist' when it comes to exposure; when I got into photography I started out using LR3, so that's all I've known.

Haha! You've been completely spoiled! If you'd been shooting film, the best you'd ever get would look like your SOOC image. At the very best. And if I tried to do with my old Canon 350D what I can now do so easily with my 5D2 and Lightroom, I'd just be dragging a huge pile of noise out of the shadows. Hopeless.

Technology moves on, and new things become possible. Now we maybe have the best of both worlds - expose to the right, use fill-flash or a reflector to lift the shadows, then pull down blown highlights in Lightroom. The result of that combination would be fabulous image quality we could only dream of a few years ago. Lots of new options :thumbs:

But the key to all of this is a big sensor with lots of detail and inherently low noise - which is where this thread started some time ago, I think ;) If you've been getting good results like this with your ex-D7000, then you're going to be very happy indeed with a new D700 :thumbs:
 
Rich, actually the D7000 is probably better for this than the D700 due to its dynamic range.
 
Technology moves on, and new things become possible. Now we maybe have the best of both worlds - expose to the right, use fill-flash or a reflector to lift the shadows, then pull down blown highlights in Lightroom. The result of that combination would be fabulous image quality we could only dream of a few years ago. Lots of new options :thumbs:

But the key to all of this is a big sensor with lots of detail and inherently low noise - which is where this thread started some time ago, I think ;) If you've been getting good results like this with your ex-D7000, then you're going to be very happy indeed with a new D700 :thumbs:

That seems to be the key: you can push images a lot further if the sensor has low read noise. My old D7K had amazing dynamic range, although I wouldn't say it had any more usable dynamic range than my D700, as there comes a point where noise becomes a factor; so while the D700 has a little less absolute DR, it also has a little less noise, so I seem (in the short time I've had it) to be able to push files just as far as my D7K.
I didn't buy the D700 especially for this, though, but rather for the FF effect on DOF, FOV and sharpness when shooting wide open, as FF seems to be sharper when the lens is the weak link.

Also note that I agree with using things such as reflectors to get the best IQ and light patterns etc. when possible.
However, the main reason I have been relying on my camera's dynamic range is that most of the shots in my portfolio so far were actually portraits of strangers in the street, so I wanted to travel as light as possible and be able to capture the portrait in a matter of a few seconds.
 
While I love the 5dmk2 sensor
I'm just glad that Canon have got the message that everything other than megapixels is what future R&D will focus on.
 
HSC said:
While I love the 5dmk2 sensor
I'm just glad that Canon have got the message that everything other than megapixels is what future R&D will focus on.
 
Rich, actually the D7000 is probably better for this than the D700 due to its dynamic range.

Of all the claims made in this thread, that would seem to be the least likely. Sensor is less than half the size.
 
HoppyUK said:
Of all the claims made in this thread, that would seem to be the least likely. Sensor is less than half the size.

According to DxO, the D7000 has a DR of 13.9 Ev. By the same tests, the D700 has 12.2 Ev.
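For scale, 1 Ev is one stop, i.e. a doubling, so that 1.7 Ev gap is a bigger linear difference than it sounds. A quick back-of-envelope check, using only the DxO figures quoted above:

```python
# 1 Ev = one stop = a 2x linear ratio, so the gap between the two DxO figures is:
ratio = 2 ** (13.9 - 12.2)
print(f"{ratio:.2f}x")  # ~3.25x more linear headroom for the D7000
```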
 
HoppyUK said:
Of all the claims made in this thread, that would seem to be the least likely. Sensor is less than half the size.

Then you're not up with current events. The D7000 sensor has the best dynamic range of any DSLR.
 
According to DxO, the D7000 has a DR of 13.9 Ev. By the same tests, the D700 has 12.2 Ev.

Well, I wouldn't be the first to find DxO suspect, confusing, and contradictory.

But that's by the way, native dynamic range is meaningless in this context, since that is what is being massively extended using MC's Lightroom method.

This is basically an HDR technique, but with all the data extracted from a single file. The key to it is noise, or the lack of it. If the shadows are clean, then you have a chance to lift them. Sensor size simply rules on noise.
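For anyone wondering what "lifting the shadows" looks like mechanically, here is a minimal numpy sketch of the principle - emphatically not Lightroom's actual algorithm, just an illustrative tone curve; the function name and the fill parameter are made up:

```python
import numpy as np

# A toy "fill light" curve: boost dark tones, taper the gain off towards
# white. `linear` is assumed to be a raw-ish frame normalised to 0..1;
# `lift_shadows` and `fill` are illustrative names, not Lightroom internals.
def lift_shadows(linear: np.ndarray, fill: float = 2.0) -> np.ndarray:
    gain = 1.0 + (fill - 1.0) * (1.0 - linear)  # ~fill in shadows, ~1 in highlights
    return np.clip(linear * gain, 0.0, 1.0)

# The catch: the gain multiplies the noise as well, so shadow noise of sigma
# becomes roughly fill * sigma - which is why clean shadows matter so much.
```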
 
Well, I wouldn't be the first to find DxO suspect, confusing, and contradictory.

But that's by the way, native dynamic range is meaningless in this context, since that is what is being massively extended using MC's Lightroom method.

This is basically an HDR technique, but with all the data extracted from a single file. The key to it is noise, or the lack of it. If the shadows are clean, then you have a chance to lift them. Sensor size simply rules on noise.

Why is it that camera manufacturers don't quote QE (quantum efficiency) - or do they, but call it something else? I once read that QE in DSLR cameras is so poor that it's a borderline embarrassment for the likes of Canon and Nikon. I understand that price is the driver behind such cameras, but as you say noise is the key, and higher QE is the key to resolving noise issues..... Though I do stand to be corrected :)
 
Why is it that camera manufacturers don't quote QE (quantum efficiency) - or do they, but call it something else? I once read that QE in DSLR cameras is so poor that it's a borderline embarrassment for the likes of Canon and Nikon. I understand that price is the driver behind such cameras, but as you say noise is the key, and higher QE is the key to resolving noise issues..... Though I do stand to be corrected :)

Back on topic! :thumbs:

I think we could do with as many different measures as possible, especially as we currently have pretty much none! But while QE would be interesting, it's a bit like measuring a car's engine power in bhp-per-ton. It's a very relevant figure, but by itself doesn't tell you how much power you've actually got, or how fast the car goes.

There's a lot of smoke and mirrors when it comes to claims about sensors and noise and ISO etc, but the bottom line is what we get in actual images. That doesn't lie, and the differences are easy enough to see.

Continuing the motoring analogy, the final print is our stop-watch and there are lots of other factors that affect the stop-watch time apart from engine power, like weight and aerodynamics. With sensors, there's photon capture and QE, but that has to be converted into an image by the A/D converter and then the processing engine.

These things tend not to get much of a look-in from the marketing folks, but I heard the other day that the new Bionz engine in the Sony A77 (24mp APS-C!) is twice as efficient as the one in the top-of-the-range A900. I have no idea what that means in practice, but it does indicate that significant developments and enhancements are going on post-sensor. Noise, after all, is not from the sensor as such, but introduced by all the electronic gubbins that turns a minuscule signal from captured photons into something we can look at.
 
Back on topic! :thumbs:

I think we could do with as many different measures as possible, especially as we currently have pretty much none! But while QE would be interesting, it's a bit like measuring a car's engine power in bhp-per-ton. It's a very relevant figure, but by itself doesn't tell you how much power you've actually got, or how fast the car goes.

There's a lot of smoke and mirrors when it comes to claims about sensors and noise and ISO etc, but the bottom line is what we get in actual images. That doesn't lie, and the differences are easy enough to see.

Indeed! And of course I accept there is much more going on that has to be taken into account. However, if the sensor on my camera were able to capture and record every single photon that it receives and then convert that into an electrical signal, then noise would not be an issue, as no amplification would be required, and it's amplification of the signal that causes noise.
Of course a QE of 100% is impossible at this time and should ultimately be the 'Holy Grail' for manufacturers.

I don't know if this is correct, but I suspect that the QE of many mid-range DSLRs is around the 30 to 40 percent region, with high-end imaging devices being around 85%, and that looks like quite a bit of room for improvement. But as you rightly say, it's the final image that counts :thumbs:.......... Onwards and upwards eh?
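To put rough numbers on why QE matters for noise (treating the percentages above as pure guesses): photon arrival is random, so shot noise alone caps the SNR at the square root of the captured photon count. A toy calculation:

```python
import math

photons = 10_000                          # photons hitting one photosite (made up)
for label, qe in [("mid-range DSLR, guessed", 0.35),
                  ("high-end imager, guessed", 0.85)]:
    snr = math.sqrt(qe * photons)         # shot-noise-limited SNR = sqrt(QE * N)
    print(f"{label}: QE {qe:.0%} -> SNR ~{snr:.0f}")
# 35% -> 85% QE improves the SNR by sqrt(85/35), about 1.56x: real, but not huge.
```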
 
Indeed! And of course I accept there is much more going on that has to be taken into account. However, if the sensor on my camera were able to capture and record every single photon that it receives and then convert that into an electrical signal, then noise would not be an issue, as no amplification would be required, and it's amplification of the signal that causes noise.
Of course a QE of 100% is impossible at this time and should ultimately be the 'Holy Grail' for manufacturers.

I don't know if this is correct, but I suspect that the QE of many mid-range DSLRs is around the 30 to 40 percent region, with high-end imaging devices being around 85%, and that looks like quite a bit of room for improvement. But as you rightly say, it's the final image that counts :thumbs:.......... Onwards and upwards eh?

Fair point. Higher signal in = higher signal out :thumbs:

So accepting that there is more stuff going on after the sensor in terms of turning it into a picture, if the basic raw material that you're working with is only 30-40% of potential, what happens to the other 60-70% and what can be done to regain it?

Going back to the OP and the premise that more pixels is not good for image quality, I'm not sure I accept that at face value. More pixels simply means the image is sliced and diced into smaller component parts, which can't be a bad thing for detail, but the (alleged) downside is that there are other losses that go hand in hand with it.

Frankly, I don't know. Off hand I can't think of too many cameras that have come out with more pixels but significantly worse performance in other areas. Somebody might point to the 7D vs 40D or something (18mp vs 10mp) but I'm not sure that's even true and certainly not conclusive or significant. My own tests showed the 7D to be better all round, if not by much.

Maybe a moot point, but generally the experience seems to be more pixels and other aspects of performance maintained or improved, one way or another... :thinking: Isn't that the trend? We certainly don't see too many people snapping up discontinued models to get improved IQ. I don't want any of my old cameras back!
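One hedged way to see why more, smaller pixels need not mean a noisier final image: at a common output size the extra pixels get averaged together, and averaging k samples cuts random noise by a factor of sqrt(k). A toy demonstration on pure synthetic noise:

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.normal(loc=0.5, scale=0.10, size=(1000, 1000))  # noisy "high-MP" frame
binned = frame.reshape(500, 2, 500, 2).mean(axis=(1, 3))    # 2x2 average: 1/4 the pixels
print(round(frame.std(), 3), round(binned.std(), 3))        # ~0.1 vs ~0.05 per pixel
```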
 
^^^
Thanks Joe, I appreciate your intention, and I understand how it may appear that I'm being cocky as you put it, but TBH I'm not trying to win some popularity contest here; if I truly believe I'm right then I will say so.


Below is one of my examples of using a flash off camera axis.
By moving the flash off camera axis, it is no longer a fill light. It is either a correctly exposed main/key light or, if I lessen the power, an underexposed main/key.

4e4da3dbed45a.jpg


There are times you place two lights either side of your subject, but that's split lighting, and the fill still remains on camera axis, either flash or reflector.

I have lots of learning material that talks about the correct placement of fill; however, it isn't freely distributable, but below is an example from a guy from Lencarta.

http://www.youtube.com/watch?v=o_7l7etju38

You could also have added a flash camera-left to 'fill' the area of shadow between the right-hand side (lit by the key flash on the right) and the light of the sun (rim).

Pardon my French, but in that case I would call that a 'fill light', which also is not on the camera axis. Unless you just want to call it a 'light' and reserve the name fill for when it's on axis :P

ETA: Granted, you could also have lit up that area using an on-axis fill.
 
Fair point. Higher signal in = higher signal out :thumbs:

So accepting that there is more stuff going on after the sensor in terms of turning it into a picture, if the basic raw material that you're working with is only 30-40% of potential, what happens to the other 60-70% and what can be done to regain it?

Going back to the OP and the premise that more pixels is not good for image quality, I'm not sure I accept that at face value. More pixels simply means the image is sliced and diced into smaller component parts, which can't be a bad thing for detail, but the (alleged) downside is that there are other losses that go hand in hand with it.

Frankly, I don't know. Off hand I can't think of too many cameras that have come out with more pixels but significantly worse performance in other areas. Somebody might point to the 7D vs 40D or something (18mp vs 10mp) but I'm not sure that's even true and certainly not conclusive or significant. My own tests showed the 7D to be better all round, if not by much.

Maybe a moot point, but generally the experience seems to be more pixels and other aspects of performance maintained or improved, one way or another... :thinking: Isn't that the trend? We certainly don't see too many people snapping up discontinued models to get improved IQ. I don't want any of my old cameras back!

Personally I think the current crop of Canons are a step down; even though I'm a Canon fan I recommend Nikons for people who want an APS-C cam. For FF I personally still think the 5dmk2 is the best cam available, although I am biased in this regard :)
 
So accepting that there is more stuff going on after the sensor in terms of turning it into a picture, if the basic raw material that you're working with is only 30-40% of potential, what happens to the other 60-70% and what can be done to regain it?
Hi Richard

Just for the purposes of discussion as I suspect you probably know more about this than I do!

I really wish I knew the answer, but realistically speaking nothing can regain it, as it's not being recorded; it can only be fudged to make the picture acceptable. The answer, I suspect, is that bigger photo-sites (pixels) will certainly help. Consider a pixel with a well depth of 10 (just to make it easy ;)): if it fills quickly with signal (recorded photons) in the light areas, yet the dark areas of the image record very few photon hits, then what does the camera do?.... It records max white or blows out the highlights, yet the darker areas are missing the detail, as very few of the photons are being captured. It's all about scale.

My simplistic view here is to have a greater well depth and higher bit count. Now consider a well depth of 10,000: this allows the whites to still be captured and yet leaves time for the darker areas to capture photons, and this all means detail and information that the camera can use. Bigger photo-sites or pixels mean less room on the sensor. I believe there is a test criterion that states something like: for detail to be recorded it has to cross two or more pixels (essentially the Nyquist sampling limit)... which leads to the need for more pixels. I can't recall it exactly but will look it up if needed..... The point: bigger pixels, greater detail, simply because the scale is different! 10 photons or 10,000!.......

My uneducated answer is...... big photo-sites on bigger sensors, or fewer, bigger photo-sites on smaller sensors...... More pixels are not always better.
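A rough way to put numbers on the well-depth point: engineering dynamic range is often reckoned as full-well capacity over read noise, expressed in stops. The read-noise figure below is an assumption purely for illustration:

```python
import math

read_noise_e = 5.0                       # assumed read noise, in electrons
for full_well_e in (10, 10_000, 60_000):
    stops = math.log2(full_well_e / read_noise_e)
    print(f"well depth {full_well_e:>6} e-: ~{stops:.1f} stops")
# 10 e-: ~1 stop; 10,000 e-: ~11 stops; 60,000 e- (a big photosite): ~13.6 stops.
```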
 
Frankly, I don't know. Off hand I can't think of too many cameras that have come out with more pixels but significantly worse performance in other areas. Somebody might point to the 7D vs 40D or something (18mp vs 10mp) but I'm not sure that's even true and certainly not conclusive or significant. My own tests showed the 7D to be better all round, if not by much.

The problem is that it is next to impossible to compare cameras of different generations or makes to get a definitive answer to questions such as "are more megapixels / smaller pixels a bad thing?" Different manufacturers use different designs, so comparisons between manufacturers are difficult, and each new design from each manufacturer, while doubtless including any advances in technology, also doubtless includes an increased megapixel count. So, sadly, we'll probably never see two sensors of equal design with different megapixel counts.

My own gut feeling based upon over thirty years in computing and electronics is that advances in technology which could potentially bring higher dynamic range and better noise performance are possibly being offset at least to a degree by the trend towards ever more and ever smaller pixels.

This lot... "One of the best known consulting and testing companies in the industry..." seem to be saying that small pixels are bad pixels...

http://www.luminous-landscape.com/essays/csm_image_quality.shtml
 
The problem is that it is next to impossible to compare cameras of different generations or makes to get a definitive answer to questions such as "are more megapixels / smaller pixels a bad thing?" Different manufacturers use different designs, so comparisons between manufacturers are difficult

I disagree! QE will give a definitive answer and is the best measure for serious comparison with regard to sensor performance.
 
I disagree! QE will give a definitive answer and is the best measure for serious comparison with regard to sensor performance.

I don't know what QE is... and what do you think affects sensor performance?

For example, suppose Canon (or Nikon, it doesn't matter who...) bring out a new camera and they use a different sensor design, specifically they up the mp count, decrease the gaps between pixels, improve microlens design, reroute signal paths to reduce interference, move some electronics off sensor and move others on sensor, incorporate improvements to grounding and shielding, use faster and improved algorithms and IS's etc and change the circuit board from 10 layer FR4 to 12 layer CNW27.

How are we supposed to compare that camera to one of a previous generation or to one from a different manufacturer with a view to deciding if big pixels are better than small pixels?
 
Hi Richard

Just for the purposes of discussion as I suspect you probably know more about this than I do!

I really wish I knew the answer, but realistically speaking nothing can regain it; it can only be fudged to make the picture acceptable. The answer, I suspect, is that bigger photo-sites (pixels) will certainly help. Consider a pixel with a well depth of 10 (just to make it easy ;)): if it fills quickly with signal (recorded photons) in the light areas, yet the dark areas of the image record very few photon hits, then what does the camera do?.... It records max white or blows out the highlights, yet the darker areas are missing the detail, as very few of the photons are being captured. It's all about scale.

My simplistic view here is to have a greater well depth and higher bit count. Now consider a well depth of 10,000: this allows the whites to still be captured and yet leaves time for the darker areas to capture photons, and this all means detail and information that the camera can use. Bigger photo-sites or pixels mean less room on the sensor. I believe there is a test criterion that states something like: for detail to be recorded it has to cross two or more pixels (essentially the Nyquist sampling limit)... which leads to the need for more pixels. I can't recall it exactly but will look it up if needed..... The point: bigger pixels, greater detail, simply because the scale is different! 10 photons or 10,000!.......

My uneducated answer is...... big photo-sites on bigger sensors, or fewer, bigger photo-sites on smaller sensors...... More pixels are not always better.

I only know what I've read about this stuff. There is so much we don't know, perhaps because it's commercially sensitive or marketing departments don't see a benefit in banging on about unsexy tech.

I guess that there are incremental improvements to be made across the board, rather than a quantum leap in any one direction. Gapless micro-lenses that actually are gapless and do a better job of collecting more light and focusing it efficiently into a deeper well, as you suggest. Shrinking the architecture so there is more room for bigger photo-sites. That sort of mechanical/optical development to maximise photon collection before it's passed on to the processing electronics, where more miracles are made. And miraculous it all is - cutting-edge micro-electronic engineering!

Then there are other things complicating the issue, like power consumption and heat which is particularly important for video, the ability to switch on and off rapidly and get rid of the mechanical shutter, not to mention the small question of cost. I know that on balance these are things that favour CMOS over CCD.

The more I think about it, the more I believe it's not as simple as saying more pixels are bad. We've got to look beyond that.

Edit: crossed post with Alan.
 
I don't know what QE is... and what do you think affects sensor performance?

For example, suppose Canon (or Nikon, it doesn't matter who...) bring out a new camera and they use a different sensor design, specifically they up the mp count, decrease the gaps between pixels, improve microlens design, reroute signal paths to reduce interference, move some electronics off sensor and move others on sensor, incorporate improvements to grounding and shielding, use faster and improved algorithms and IS's etc and change the circuit board from 10 layer FR4 to 12 layer CNW27.

How are we supposed to compare that camera to one of a previous generation or to one from a different manufacturer with a view to deciding if big pixels are better than small pixels?


Fair question!

QE, or quantum efficiency, is the ability of the sensor to convert photons into an electrical signal, and is therefore a fair comparator between different manufacturers.
 
You could also have added a flash camera-left to 'fill' the area of shadow between the right-hand side (lit by the key flash on the right) and the light of the sun (rim).

Pardon my French, but in that case I would call that a 'fill light', which also is not on the camera axis. Unless you just want to call it a 'light' and reserve the name fill for when it's on axis :P

ETA: Granted, you could also have lit up that area using an on-axis fill.

What you're describing is split lighting, where there is a main and a secondary light source; however, fill is often still used on camera axis as well. Fill should always be on camera axis if you want a shot to look natural and well lit, end of story.

And that isn't the sun btw, it's a second flash just at the edge of the frame to mimic the sun.

Below is the setup shot.

exy.jpg
 
Then there are other things complicating the issue, like power consumption and heat which is particularly important for video, the ability to switch on and off rapidly and get rid of the mechanical shutter, not to mention the small question of cost. I know that on balance these are things that favour CMOS over CCD.

The more I think about it, the more I believe it's not as simple as saying more pixels are bad. We've got to look beyond that.


Hi Richard

Indeed we do have to look beyond what we have ...

More pixels (photo-sites) are not bad...... it's the capacity and quality that counts. All other factors affect the QE, as QE is the factor that interprets the photon count, and that is all photography is about... i.e. recording light (photons)

On a simplistic level, the only difference between CMOS and CCD is the way the info is streamed to the processor, and that is not relevant to image quality!
 
HoppyUK said:
Well, I wouldn't be the first to find DxO suspect, confusing, and contradictory.

But that's by the way, native dynamic range is meaningless in this context, since that is what is being massively extended using MC's Lightroom method.

This is basically an HDR technique, but with all the data extracted from a single file. The key to it is noise, or the lack of it. If the shadows are clean, then you have a chance to lift them. Sensor size simply rules on noise.

I have both cameras so I'll do a little test later. Might be interesting. :)
 
Well, I wouldn't be the first to find DxO suspect, confusing, and contradictory.

But that's by the way, native dynamic range is meaningless in this context, since that is what is being massively extended using MC's Lightroom method.

This is basically an HDR technique, but with all the data extracted from a single file. The key to it is noise, or the lack of it. If the shadows are clean, then you have a chance to lift them. Sensor size simply rules on noise.

Same technique I used here: exposed to the right to draw out shadow detail, then in post created 4 different exposures and ran them through Photomatix. The results speak for themselves - D300s @ ISO 4500.
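For the curious, that single-file workflow can be sketched generically along these lines - a toy exposure-fusion stand-in, not Photomatix's actual algorithm; the function name, stop offsets and the Gaussian weighting are all assumptions:

```python
import numpy as np

# Derive several virtual "exposures" from one linear 0..1 raw frame and blend
# them, weighting each pixel by how close it sits to mid-grey in each exposure.
def fuse_from_one_raw(linear: np.ndarray, stops=(-2, -1, 0, 1)) -> np.ndarray:
    exposures = [np.clip(linear * 2.0 ** s, 0.0, 1.0) for s in stops]
    weights = [np.exp(-((e - 0.5) ** 2) / (2 * 0.2 ** 2)) for e in exposures]
    total = np.sum(weights, axis=0) + 1e-8           # avoid divide-by-zero
    return np.sum([w * e for w, e in zip(weights, exposures)], axis=0) / total
```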
 
What you're describing is split lighting, where there is a main and a secondary light source; however, fill is often still used on camera axis as well. Fill should always be on camera axis if you want a shot to look natural and well lit, end of story.

And that isn't the sun btw, it's a second flash just at the edge of the frame to mimic the sun.

Below is the setup shot.

exy.jpg

I'm going to respectfully disagree. With regard to the end result, 'fill' on axis isn't a must if you want a shot to be well lit. There are plenty of cases where on-axis fill will create shadows (e.g. if the model is smoking a pipe, has long hair in front of her face, etc.) so split lighting, 'fill', or whatever you want to call it, works better. In the same way, when you're working in a studio it's a bit hard to get a big softbox on axis, which is why it's moved a little off axis (left/right/above/etc.).

Regarding naming I won't quibble; if the term 'fill light' is reserved for on-axis then so be it.
 
Bit pedantic to go on about a specific definition; if it does the same thing, then what does it matter what you call it?
 
However, if the sensor on my camera were able to capture and record every single photon that it receives and then convert that into an electrical signal, then noise would not be an issue, as no amplification would be required, and it's amplification of the signal that causes noise.

Not quite - a single photon or electron has very little energy. For example, it takes about 624 EeV to power a 100 watt bulb for 1 second (the E here stands for exa, which is 10^18, i.e. 1,000,000,000,000,000,000) - so even if the photodiodes were 100% efficient - which they're not - amplification would still be necessary to bring the signal up to the levels the downstream electronics require.
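That 624 EeV figure checks out, for what it's worth:

```python
# 100 W for 1 s = 100 J; 1 eV = 1.602e-19 J; 1 EeV = 1e18 eV.
energy_eev = 100.0 / 1.602e-19 / 1e18
print(f"~{energy_eev:.0f} EeV")  # ~624 EeV, as stated above
```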

 