How will advances in digital technology change how exposures are made?

Photoshop DOES work in 16-bit. It uses the same 8-bit API when in 16-bit mode, but the breakdown of the histogram you refer to does not happen with a 16-bit image (well... technically it does, but at a much higher resolution, so it's not actually visible). The problem is that an API for a 16-bit image, with over 65,000 discrete levels, would be a nightmare to use, and not really necessary.

Most cameras can produce a 12-bit image now, which is 4,096 discrete levels... that's per channel, remember... so a 12-bit image can be adjusted across all its colour channels in 4,096³ = 68,719,476,736 discrete steps. Do you really think you need more than nearly 69 billion levels of adjustment to maintain the integrity of a histogram during adjustment?
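The arithmetic above is just powers of two, and easy to check:

```python
# Levels per channel at each bit depth, and the number of distinct
# RGB values when three channels are combined (levels cubed).
for bits in (8, 12, 14, 16):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:,} levels/channel, {levels ** 3:,} RGB values")
```

For 12-bit data this prints 4,096 levels per channel and 68,719,476,736 combined RGB values.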

Or am I missing your point?

Look at a histogram when you change contrast in 16-bit files and you will see the lines of discontinuity.
Digital numbers, however large, are not an infinite analogue sequence; changes require interpolation or destruction of data.
 
Sorry... just tried it... the histogram seems continuous after adjustment here. If you're looking at the live preview in the Histogram palette WHILE you are actually moving the sliders around, then yes, of course: you are translating a digital value into something else, but it is then rasterised into a larger space and translated into new digital values. Instead of trying it with an adjustment layer (which is not rasterised until flattened), make a real adjustment using CTRL+L or something.
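The effect being argued over is easy to simulate. This sketch (hypothetical stretch factor, pure Python, not Photoshop's actual maths) applies the same contrast stretch to a full ramp of tones at 8-bit and 16-bit precision, then counts empty buckets in a 256-bucket display histogram like the one in the Histogram palette. Quantisation gaps ("combing") show up at 8 bit; at 16 bit the same gaps exist but are far too fine to empty a display bucket:

```python
def empty_display_bins(bits, stretch=1.5):
    """Count empty buckets in a 256-bucket histogram after a contrast stretch."""
    maxv = 2 ** bits - 1
    mid = maxv / 2
    bucket_width = (maxv + 1) / 256        # on-screen histogram is 256 buckets
    counts = [0] * 256
    for x in range(maxv + 1):              # a ramp: one pixel of every level
        out = (x - mid) * stretch + mid    # simple linear contrast stretch
        out = min(max(int(out), 0), maxv)  # clip and re-quantise to integers
        counts[int(out / bucket_width)] += 1
    return sum(1 for c in counts if c == 0)

print("8-bit empty display bins: ", empty_display_bins(8))   # gaps visible
print("16-bit empty display bins:", empty_display_bins(16))  # none visible
```

The 8-bit case leaves dozens of empty buckets; the 16-bit case leaves none, because each display bucket spans ~256 underlying levels and the gaps are only ~1 level wide.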


I'm still not getting your point though... let's assume there were infinite levels of adjustment available, what do you see the advantage in real terms being?
 
I missed this....


Photoshop in 16-bit mode... Quadro card via DisplayPort = 12-bit... hardware-calibrated Eizo ColorEdge monitor = 12-bit. Going off the weakest link in that workflow chain, it can produce, edit and display at 12-bit depth... that's nearly 69 billion colours/tones. Actually display them... and that's far more than my eye can perceive.

You're also working on the principle that an analogue image is actually analogue... technically it's not: silver halide crystals are either developed black or washed away. They vary in size, shape and clumping, but in reality a mono analogue image is effectively binary... just 1 bit per "grain".

Silver halide crystals come in many sizes, the development of any crystal is rarely complete, and the silver conversion process can start at many distinct points within it. A slow fine-grain film behaves very like a true analogue medium and can be used to create many distinct prints with different contrast ranges that show no discontinuities. As there is no Bayer pattern, there are no additional artefacts.

Grain demonstrates the distance from a true analogue form.
 
Sorry... just tried it... the histogram seems continuous after adjustment here. If you're looking at the live preview in the Histogram palette WHILE you are actually moving the sliders around, then yes, of course: you are translating a digital value into something else, but it is then rasterised into a larger space and translated into new digital values. Instead of trying it with an adjustment layer (which is not rasterised until flattened), make a real adjustment using CTRL+L or something.


I'm still not getting your point though... let's assume there were infinite levels of adjustment available, what do you see the advantage in real terms being?

I had not thought anyone would look for this on the camera histogram. It is a very blunt tool. Nor does the camera provide the tools needed to change the data forming the curve.
 
I had not thought anyone would look for this on the camera histogram. It is a very blunt tool. Nor does the camera provide the tools needed to change the data forming the curve.

I'm not talking about the camera histogram. If you shoot RAW (which I'm assuming you are, as you refer to 12- or 14-bit images) then you can't really make any meaningful adjustments in camera. That's the point of RAW: you make your adjustments post-shoot, as you have the raw data.

Silver halide crystals come in many sizes, the development of any crystal is rarely complete, and the silver conversion process can start at many distinct points within it. A slow fine-grain film behaves very like a true analogue medium and can be used to create many distinct prints with different contrast ranges that show no discontinuities. As there is no Bayer pattern, there are no additional artefacts.

Grain demonstrates the distance from a true analogue form.


Look at a negative under a microscope... tell me what you see.


however.. I still don't actually know what you propose. Your thread seems to be about a new way of digital imaging... so what do you propose it should be?
 
That's the point of RAW.. you make your adjustments post shoot as you have the RAW data.
Even Raw files have some processing.
Look at a negative under a microscope... tell me what you see.
Done that, seen that, got the scout badge...
It depends on the magnification and the film and processing... but some form of crystal structure, and spaces.


however.. I still don't actually know what you propose. Your thread seems to be about a new way of digital imaging... so what do you propose it should be?

Not quite, more a different way of thinking about it.
Some two years or so ago Professor Newman wrote a very complex article about ISO-less photography. It was not that well received by many experienced photographers (including myself), mainly because I found it hard to understand, or even believe.
Since then digital photography and the workings of sensors have become better understood.

He was the first I have read to propose doing away with the concept of ISO.

Since then a number of top-level cameras have appeared that display remarkably level performance over their entire ISO range; testing shows this to be the case at the raw level. My earlier posts attempt to show why.
Even lower-level cameras have appeared with auto ISO, taking advantage of that stable performance, but still applying an appropriate "S-curve".

The concept of use is that you set the most suitable shutter speed and the most suitable aperture for the subject matter, and provided the combined light input from those settings falls within the established parameters (i.e. not saturating the highlight register, nor falling below the acceptable noise level) you will have a "correct exposure" in the straight-line portion of the raw file. It would not be less than optimal.

Of course, wherever that portion is, it will need to be adjusted to a correct "S-curve" and moved to the best position to obtain the correct brightness and contrast. (This is what the ISO setting does in real life, but it does it first, so your shutter and aperture selection is limited by it.)
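As a toy sketch of the proposed order of operations (my own illustrative numbers, not any real raw converter): capture a linear value with whatever shutter/aperture the subject demands, then apply the gain and the S-curve afterwards:

```python
import math

def develop(raw_linear, gain=4.0, contrast=8.0):
    """Map a linear 0..1 raw value to a 0..1 display tone.

    gain plays the role the ISO dial fakes in-camera; a logistic
    function stands in for the S-curve. Both values are illustrative.
    """
    v = min(raw_linear * gain, 1.0)  # applied after capture, not before
    return 1.0 / (1.0 + math.exp(-contrast * (v - 0.5)))

# The same raw capture can be "re-exposed" in post with different gains.
for gain in (2.0, 4.0, 8.0):
    print(f"gain {gain}: raw 0.125 -> tone {develop(0.125, gain):.3f}")
```

The point of the sketch is only the ordering: the shutter and aperture choice is never constrained by the gain, because the gain is chosen last.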

As far as I know there is currently no way to set up a camera this way, as even a raw file expects (demands) an ISO setting to include in the sidecar file or whatever.

Maybe you could set it at the so-called native (ISO) speed and make seemingly major changes in the raw processor instead.
If the file is indeed raw this might work, but it might be hard to stop unwanted noise algorithms acting inappropriately.

Cameras could, but are not yet designed to work this way.
 
I missed this....


Photoshop in 16-bit mode... Quadro card via DisplayPort = 12-bit... hardware-calibrated Eizo ColorEdge monitor = 12-bit. Going off the weakest link in that workflow chain, it can produce, edit and display at 12-bit depth... that's nearly 69 billion colours/tones. Actually display them... and that's far more than my eye can perceive.

You're also working on the principle that an analogue image is actually analogue... technically it's not: silver halide crystals are either developed black or washed away. They vary in size, shape and clumping, but in reality a mono analogue image is effectively binary... just 1 bit per "grain".

There are always exceptions
 
Maybe you could set it at the so-called native (ISO) speed and make seemingly major changes in the raw processor instead.
If the file is indeed raw this might work, but it might be hard to stop unwanted noise algorithms acting inappropriately.

Cameras could, but are not yet designed to work this way.

I can see the advantage in this, yes, but I have a problem with how it could be implemented. What I'm not getting is how this links with bit depth. The principle you propose would theoretically work with even an 8-bit file, assuming that you could control the noise levels... but that is ultimately set by the sensor's dynamic range, not the bit resolution of the file.
 
Is this thread being delivered from the top of a marble plinth?

I thought the OP was going to make a point; if he intends to, then he obviously thinks we are hungrily waiting for the pearls of wisdom and is keeping the suspense going. Me, I probably don't have long enough left to live. I'm outta here - I'm going out to take some piccies...
 
Can I recap?

Film is different to digital

blah blah blah blah ....... repeat to fade
 
People do things the way they used to.

Change film to digital, but carry everything over so the change wasn't huge. New people coming in who have only used digital just do what everyone else does, and we all blunder on doing the same thing, no one asking why. If you do ask why, you get told off by an old-school person who tells you that is the way it is.

There are a few things on the practical side of things I wish would change.

I hate the top LED panel, especially when using heavy lenses. We have a bigger, clearer screen on the back which 99% of the time is easier to see; if they designed better dimmable night display options, the top screen wouldn't be needed, freeing up space for better control layout and reducing cost.

Now I will get responses telling me how good the top screen is, generally from those who are in the habit of using it and therefore think it is a better way of doing things, even though it is the only way they have ever done it.
 
9 times out of 10 I use neither the top screen nor the back screen. When setting shutter speed or aperture I use the viewfinder display.

The top screen is useful when it's on a tripod and you don't need to put your eye to the camera, as it's already facing you the majority of the time.
 
I can see the advantage in this, yes, but I have a problem with how it could be implemented. What I'm not getting is how this links with bit depth. The principle you propose would theoretically work with even an 8-bit file, assuming that you could control the noise levels... but that is ultimately set by the sensor's dynamic range, not the bit resolution of the file.

Bit depth is something of a red herring; it came up in part-answer to another post, and I went off my original tack, so don't concern yourself about it in this context. But I am concerned about the effects of undisciplined digital editing.

You are right the system would work equally well with an 8bit file.

As to noise levels... up to a point they can be very well controlled with the right algorithms. The Fuji X10 is a prime example (and my favourite toy). It makes astonishingly good JPEGs at any speed from a quite small sensor (8.8 x 6.6mm)... but this result is also astonishingly difficult to match with a raw workflow. Many have tried and many have failed. It leads me to believe in magic.

The best way to avoid noise is to make sure that the sensor has sufficient light to work with. At low light levels the relative noise falls as the photon count rises; the signal-to-noise ratio is roughly proportional to the square root of the photons received.

However, as now, when needs must we put up with noise in shadow areas. In deep and dingy shots it is even acceptable.

Noise represents a lack of light (photons to count), not incorrect exposure. If there are no photons you only have noise. This can be caused by lack of illumination on the subject, or lack of light reaching the sensor... same difference. At a certain level the light becomes sufficient for noise not to be apparent. This is what we must aim for, but not at all costs: sometimes bad is better than nothing.
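That relationship can be made concrete: photon arrival is a Poisson process, so the noise (standard deviation) on a count with mean N is √N, and the signal-to-noise ratio is N/√N = √N. More photons really is the only fundamental cure:

```python
import math

# SNR of a photon count under Poisson (shot-noise) statistics: sqrt(N).
# Quadrupling the light doubles the SNR (adds ~6 dB).
for photons in (4, 100, 10_000, 1_000_000):
    snr = math.sqrt(photons)
    print(f"{photons:>9,} photons -> SNR {snr:>6,.0f} ({20 * math.log10(snr):.0f} dB)")
```

So a shadow area receiving 100× fewer photons has 10× worse SNR, whatever the camera's settings claim.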
 
Controlling noise with NR algorithms helps, but it's still correcting a limitation of current technology. What we need to really make digital a separate and wholly different paradigm is a sensor that can capture everything in one take, and that WILL come. Just as computer processors have improved exponentially over the past 30 years, so will sensors. When that happens, there will not only be ISO-less systems, but exposure itself will be a moot issue... within reason. You may eventually have cameras that offer merely Night/Dull/Bright as preset sensor ranges. Sounds remedial, but think about it: if you had a sensor with a true RMS dynamic range of 30 stops, it's actually feasible. Then the noise floor associated with curves applied prior to shooting would be a thing of the past.

Interesting stuff!

As for more light in the first place... that's more tricky. Currently, the only practical way we have of focusing light is with a glass/plastic lens. This has inherent losses as light passes from one surface to the next, as you know. As for a paradigm shift in transmitting light from the outside world to the sensor... well... we're actually getting a bit fanciful now. There are improvements still to be made in coating technology, doubtless, but we're pretty good at that already, and we're talking diminishing returns now. I can think of no other way to focus light onto a sensor than by using a lens. Off the top of my head, the only other things that bend light are gases, liquids and immense gravity fields. I can't see the first two being an option, and the last one? The gravity field you'd need to actually make that work would be so huge that A) you'd need the power used to drive every particle accelerator in the world, and B) assuming that was even possible in a portable device, you'd probably create a singularity and disappear up your own ass :)

Still... all interesting stuff.
 
Is this thread being delivered from the top of a marble plinth?

I thought the OP was going to make a point;...

I have made enough points to sink a battleship. But I knew this was not for everyone.

Can I recap?

Film is different to digital

blah blah blah blah ....... repeat to fade

Not for you then :)

Does all seem a bit anal.

Some like anal:D

People do things the way they used to.

Change film to digital, but carry everything over so the change wasn't huge. New people coming in who have only used digital just do what everyone else does, and we all blunder on doing the same thing, no one asking why. If you do ask why, you get told off by an old-school person who tells you that is the way it is.
True, it is good to think "different" sometimes.

There are a few things on the practical side of things I wish would change.

I hate the top LED panel, especially when using heavy lenses. We have a bigger, clearer screen on the back which 99% of the time is easier to see; if they designed better dimmable night display options, the top screen wouldn't be needed, freeing up space for better control layout and reducing cost.

Now I will get responses telling me how good the top screen is, generally from those who are in the habit of using it and therefore think it is a better way of doing things, even though it is the only way they have ever done it.
Some cameras have a top screen, some don't; I use whatever is most convenient at the time, and I don't even think about it... If the camera is well laid out you do not need to think about the controls at all. I often have the back screen turned off.
 
Controlling noise with NR algorithms helps, but it's still correcting a limitation of current technology. What we need to really make digital a separate and wholly different paradigm is a sensor that can capture everything in one take, and that WILL come. Just as computer processors have improved exponentially over the past 30 years, so will sensors. When that happens, there will not only be ISO-less systems, but exposure itself will be a moot issue... within reason. You may eventually have cameras that offer merely Night/Dull/Bright as preset sensor ranges. Sounds remedial, but think about it: if you had a sensor with a true RMS dynamic range of 30 stops, it's actually feasible. Then the noise floor associated with curves applied prior to shooting would be a thing of the past.

Interesting stuff!

You are now on song :)
And I think you are quite right.



As for more light in the first place... that's more tricky. Currently, the only practical way we have of focusing light is with a glass/plastic lens. This has inherent losses as light passes from one surface to the next, as you know. As for a paradigm shift in transmitting light from the outside world to the sensor... well... we're actually getting a bit fanciful now. There are improvements still to be made in coating technology, doubtless, but we're pretty good at that already, and we're talking diminishing returns now. I can think of no other way to focus light onto a sensor than by using a lens. Off the top of my head, the only other things that bend light are gases, liquids and immense gravity fields. I can't see the first two being an option, and the last one? The gravity field you'd need to actually make that work would be so huge that A) you'd need the power used to drive every particle accelerator in the world, and B) assuming that was even possible in a portable device, you'd probably create a singularity and disappear up your own ass :)

Still... all interesting stuff.

Not quite on the same track there... It does not matter how good (sensitive) the sensor or lens is at transmitting and receiving light.

If there are no photons to count, visible noise is inevitable.

Unfortunately black is always accompanied by noise.
There are two ways this can be dealt with.
One: by filtering in software.
Two: by adding light... any tone can be redesignated as the black point later.
The original noise will then be off the scale.
 
Is there a need to think differently to get the most out of Digital Photography?

Well, I've read the first dozen or so posts in this thread and can't be bothered with the rest. They're going round in circles and progressing nowhere.

Although I started out shooting film - and still do - I've never developed my own prints and have no intention of starting. What I do know is that whether I'm working with my film SLR or DSLR I don't think any differently about what I'm doing ... I "see" what I want to achieve, compose the shot, adjust the exposure and press the shutter release - end of story :thumbs: What else is there :shrug:

I'd venture to guess that the vast majority of TP members have never shot film with an SLR, so they've not been able to make conscious decisions about how the end result will look, and so your whole premise for this thread is pointless except as an academic exercise.
 
I "see" what I want to achieve, compose the shot, adjust the exposure and press the shutter release - end of story :thumbs: What else is there :shrug:


Because that's about to change fairly soon. Exposure as a parameter you need to adjust on the camera will be a redundant concept in the not too distant future. It will be something adjusted post shoot, as will focus and depth of field.

It's coming... I reckon 10 years.


Unfortunately black is always accompanied by noise.
There are two ways this can be dealt with.
One: by filtering in software.
Two: by adding light... any tone can be redesignated as the black point later.
The original noise will then be off the scale.

That's interesting... so you're talking about almost pre-sensitising a sensor, pretty much like pre-fogging paper? If you did, you would be adding an artificial noise floor, and thus reducing dynamic range as a result. But where do you set this limit? For normal levels of light it would be quite easy, but when we're talking about the very limits of our theoretical very-wide-range sensor, at the very limit of available light, you'd just be swapping one artefact for another, surely.
 
VirtualAdept said:
I'm ashamed to say I hadn't really put much thought into the actual differences between film and sensor... Thanks for the explanation that even I can understand

For the most part people who shoot digital have never shot film and don't care about film in any way shape or form.
 
For the most part people who shoot digital have never shot film and don't care about film in any way shape or form.

You miss the point of the thread. The point is, everything we do now in digital photography has its roots in film. Reciprocity, apertures, shutter speeds... the very way we work... it's the same. With up-and-coming technological developments, that paradigm will be redundant. We're just chewing the fat over what the future offers... nothing more.
 
Pookeyhead said:
You miss the point of the thread. The point is, everything we do now in digital photography has its roots in film. Reciprocity, apertures, shutter speeds... the very way we work... it's the same. With up-and-coming technological developments, that paradigm will be redundant. We're just chewing the fat over what the future offers... nothing more.

Pretty sure it's just been two people arguing about pointless crap that means nothing to 99% of photographers.
There is no reciprocity in digital and as for the rest I think you are in the land of Star Trek ;)

How else are you going to control the amount of light hitting the sensor without a shutter and an iris?

When and if this day comes, all we will do is adapt to using the new hardware. What that has to do with you and woody arguing over bit depths is beyond me.


P.S. I was replying to one post, which was not the OP's original. Not surprising that you jump down my throat and talk about something that has nothing to do with my reply.
 
Can we just 'cut to the chase'

We all know film and digital are different.

I don't need to know the science behind the differences. Maybe I should be interested, but I'm not.

I just need to know the practical implications of those differences.

Is there anything new here specifically relating to limitations/advantages of digital v film?
 
Because that's about to change fairly soon. Exposure as a parameter you need to adjust on the camera will be a redundant concept in the not too distant future. It will be something adjusted post shoot, as will focus and depth of field.

It's coming... I reckon 10 years.

That would mean a complete capture of the entire tonal range of the image as the ultimate goal.
I suspect we will just have a seriously enlarged capture well before that.

It will not be so much exposure selection as dynamic range selection, to suit visual and output devices.



That's interesting... so you're talking about almost pre-sensitising a sensor, pretty much like pre-fogging paper? If you did, you would be adding an artificial noise floor, and thus reducing dynamic range as a result. But where do you set this limit? For normal levels of light it would be quite easy, but when we're talking about the very limits of our theoretical very-wide-range sensor, at the very limit of available light, you'd just be swapping one artefact for another, surely.

I am not sure where I read it, but it seems noise is being filtered at a very early stage, on the sensor itself, before the signal is otherwise processed... this means that the noise is not amplified with the rest of the signal. I have probably got that slightly wrong, as it was going over my head.

But it would account for the step change in the reduced noise of more recent cameras.
At lower light levels, where noise in the form of false positives can be a high proportion of the counted photons, very little could be done. At lower light levels still (as in night-vision glasses) there are insufficient photons to make up a coherent image, and you can actually see the amplified false readings as speckles.
 
How else are you going to control the amount of light hitting the sensor without a shutter and an iris?

That is the crux of the matter.
We only think we control the amount of light hitting the sensor now. The ISO setting is largely a fiction, as it does not alter the sensitivity of the sensor.

In the proposed way of working you would adjust the shutter speed to stop motion or create blur, as appropriate; set the aperture for highest resolution, depth of field, or maximum light gathering (in extremis); then later adjust the straight-line raw file with an S-curve for the appropriate brightness and contrast. This turns photography into something that responds to what you want, from something with compromises you are forced into.
 
Can we just 'cut to the chase'

We all know film and digital are different.

I don't need to know the science behind the differences. Maybe I should be interested, but I'm not.

I just need to know the practical implications of those differences.

Is there anything new here specifically relating to limitations/advantages of digital v film?

This is a discussion about the way things are already moving, and the implications.
It may not be something that interests you at all.
But eventually you will benefit.
 
and that is different to how we shoot digitally now?

That is the crux of the matter.
We only think we control the amount of light hitting the sensor now. The ISO setting is largely a fiction, as it does not alter the sensitivity of the sensor.

In the proposed way of working you would adjust the shutter speed to stop motion or create blur, as appropriate; set the aperture for highest resolution, depth of field, or maximum light gathering (in extremis); then later adjust the straight-line raw file with an S-curve for the appropriate brightness and contrast. This turns photography into something that responds to what you want, from something with compromises you are forced into.
 
and that is different to how we shoot digitally now?
Because at the moment we are sometimes limited by the dynamic range of the sensor and finding ways to work around it
In the future we won't be :)
 
For the most part people who shoot digital have never shot film and don't care about film in any way shape or form.

As a professional photographer many years ago I used to shoot film, and I also spent 5 years in the testing dept. at Kodak in Harrow Weald and learned my craft the hard way. Frankly, apart from the people actually working on and testing films, none of us cared overmuch about the films we used.

We knew the characteristics of the films we used and chose ones suitable for our subjects (no changing the sensitivity of the camera then).

The main thing we thought about was improving our images by studying the great photographers, in much the same way as art students would study the Great Masters to learn their techniques.

I certainly have no regrets about the demise of films because digital photography offers so many more chances to get those great pictures we all want.

And the greatest advantage of all is that, apart from the cost of equipment, each shot is free.

And as far as I'm concerned that's the best improvement of all.

 
Pretty sure it's just been two people arguing about pointless crap that means nothing to 99% of photographers.
There is no reciprocity in digital

You're welcome to leave the thread at any time you choose :)

There's no reciprocity? Of course there is. We're talking about reciprocity... you're referring to reciprocity FAILURE. The law of reciprocity is just the reciprocal relationship between shutter, aperture and ISO. Last time I checked, digital cameras still have apertures and shutter speeds, and one still affects the other... there's still a reciprocal relationship between them.

and as for the rest I think you are in the land of Star Trek ;) How else are you going to control the amount of light hitting the sensor without a shutter and an iris?

We're discussing a sensor so wide in dynamic range and bit depth that exposure becomes irrelevant at the time of capture, because it's simply wide and deep enough to capture everything in one go, and exposure is selected post-shoot.
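For what it's worth, "selecting exposure post-shoot" on such a sensor could be as simple as windowing the captured range. A toy sketch, with all the stop values invented purely for illustration:

```python
def select_exposure(scene_stops, black_point_stop, window_stops=10.0):
    """Map a luminance, in stops above the sensor floor, to a 0..1 tone.

    The wide-range capture itself is untouched; choosing black_point_stop
    after the shoot is what 'exposure' becomes. Values are illustrative.
    """
    t = (scene_stops - black_point_stop) / window_stops
    return min(max(t, 0.0), 1.0)  # clip to the output window

# One capture, two renderings: a window placed for shadows vs highlights.
print(select_exposure(12.0, black_point_stop=5.0))   # lands as a mid-tone
print(select_exposure(12.0, black_point_stop=20.0))  # falls to black
```

The same scene value renders as a mid-tone or as black depending only on where the 10-stop output window is placed, which is the "dynamic range selection" idea mentioned earlier in the thread.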

This is NOT Star Trek... it is the future, and it will happen sooner than you think. I bet you were one of those people who scoffed when people suggested you'd have a small device that can make phone calls, access computers the world over, record images, video and sound, navigate for you, and be a personal computer... all small enough to fit in your pocket :) 20 years ago THAT was Star Trek!

The sensors we're discussing are being developed experimentally now... they just need extremely pure substrates and near-absolute-zero temperatures to work. We'll crack it, and when we do, we'll need a whole new paradigm in their use.


When and if this day comes, all we will do is adapt to using the new hardware. What that has to do with you and woody arguing over bit depths is beyond me.

Maybe we're enjoying a debate? Has THAT ever occurred to you?


P.S I was replying to one post which was not the OP's original. Not surprising you jump down my throat over and talk about something nothing to do with my reply.


Can we only reply to comments addressed directly to ourselves? Sorry.. I must have missed a meeting somewhere :)



Can we just 'cut to the chase'

We all know film and digital are different.

I don't need to know the science behind the differences. Maybe I should be interested, but I'm not.

I just need to know the practical implications of those differences.

Is there anything new here specifically relating to limitations/advantages of digital v film?

Not really... because we're discussing the limitations on potential digital imaging in the future if we carry on using the current paradigm of Shutter/Aperture/ISO, which is a legacy of film. There's nothing here really to affect the current use of digital... yet.. but it's a discussion on the internet... so who knows where it will go :)





That would mean a complete capture of the entire tonal range of the image as the ultimate goal.

Yep. I'm quite excited about that possibility. I'll miss the passing of the current way of doing things, but I see the potential of such a system. It would be amazing!


I suspect we will just have a seriously enlarged capture well before that.

I can't see it being implemented on even high-end professional gear until storage is cheaper and faster... which, again, is coming. Look how we've advanced in just 10 years. 10 years ago, even a fast 10K rpm SCSI drive could only manage what 5.4K rpm "green" storage drives achieve now. The SSDs in my current machine can read/write large contiguous files at over 1GB/sec.

It will not be so much exposure selection as dynamic range selection, to suit visual and output devices.

That's what I was suggesting it would be: a selectable "range" would be all that was needed.


I am not sure where I read it, but it seems noise is being filtered at a very early stage, on the sensor itself, before the signal is otherwise processed... this means that the noise is not amplified with the rest of the signal. I have probably got that slightly wrong, as it was going over my head.

No, you're right so far as I'm aware. It operates... and this is a very basic analogy... like Dolby NR used to work on analogue recordings: the noise floor was artificially lowered first, and the signal filtered accordingly. I know it's a poor analogy, as digital image NR is not an analogue filter, but the workflow is similar... if that makes sense.

But it would account for the step change in the reduced noise of more recent cameras.

It's just a general improvement in processing, power efficiency (remember heat plays a large role in noise generation in sensors, which is why astrophotography cameras are cooled), lower voltages, and smaller silicon fabrication processes... much like improvements in CPUs.




And that is different to how we shoot digitally now?

Very much so, yes.


Because at the moment we are sometimes limited by the dynamic range of the sensor and have to find ways to work around it.
In the future we won't be :)


Exactly.
 
I love arguments like this - makes me happy all I do is shoot images and work on how to shoot images, happily leaving the techie stuff with the geeks who have nothing better to do in life :D

If & when the time comes when shutters are irrelevant, as are focus, ISO, DoF and exposure, the geeks will all have sod all to talk about, while the rest of us just shoot images :)

Happy days :lol:

Dave
 
I love arguments like this - makes me happy all I do is shoot images and work on how to shoot images, happily leaving the techie stuff with the geeks who have nothing better to do in life :D

If & when the time comes when shutters are irrelevant, as are focus, ISO, DoF and exposure, the geeks will all have sod all to talk about, while the rest of us just shoot images :)

Happy days :lol:

Dave

Don't knock it (though it hardly matters if you do).

But geeky thoughts led to where we are today.

And I like the tools we have now, in the same way I loved my first new Rolleiflex, and my first new digital.

Some geek says "what if"
And an even brighter geek says "I can do that"

And we get to a new today... :)
 
I certainly have no regrets about the demise of films because digital photography offers so many more chances to get those great pictures we all want.

And the greatest advantage of all is that, apart from the cost of equipment, each shot is free.

And as far as I'm concerned that's the best improvement of all.


I have not done the math...
But I wonder if a photographer spends more in the pursuit of photographs today than in the past.

I know I spend rather a lot on computers, storage, scanners, printers and other devices, not to mention raw materials and software.
I certainly spend as much in proportion on camera equipment.
I spend a lot more on cars and getting around. (used to be expenses)

It does not sound much like free to me.
 
Advances in technology are accelerating at such a phenomenal rate that I suspect this discussion is very relevant indeed. Or very soon will be.

Graphene, quantum computing, nanotechnology, superconductivity etc. etc. etc.

And that's just the stuff we know about. God knows what's going on in cutting edge research labs.

I'm glad I stayed awake ;)
 
I love arguments like this - makes me happy all I do is shoot images and work on how to shoot images, happily leaving the techie stuff with the geeks who have nothing better to do in life :D


Some of us can cope with doing both :)


I know I spend rather a lot on computers, storage, scanners, printer and other devices, not to mention raw materials. and software.
I certainly spend as much in proportion on camera equipment.
I spend a lot more on cars and getting around.

It does not sound much like free to me.


Good point. We assume digital is free, but it's not. The cameras are roughly the same cost, but I spend roughly £3000 every 3 years on computer equipment, £2000 on storage etc. I spend the same on printing now as I did then... not much has changed financially that I can see. Only those whose work demands thousands of images a month are probably in profit. I reckon the rest of us are out of pocket.
 
I saw there was a new article in the "AP" today by Professor Newman, so bought a copy.
It's the final part in a series on exposure (I missed the other two).

It is more a different take on where we are now, rather than the fully ISO-free idea of a couple of years ago. However, it confirms many of our thoughts.

But what it makes clear is that it does not apply to "in camera" JPEGs, which are tied to their predetermined "S-curves".

I would suppose they could still be accommodated, by showing a preview with a bar slider to set the preferred slice (exposure) and saving that as the final JPEG... much like adding sugar to taste.
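That "sugar to taste" idea can be sketched in a few lines: the slider picks an exposure slice of the full capture, the camera's fixed tone curve is applied to it, and the result is quantised to 8 bits. Everything here is hypothetical: the smoothstep stands in for a real camera's S-curve, and the stop values are arbitrary:

```python
import numpy as np

def s_curve(x):
    """A simple fixed S-curve (smoothstep), standing in for a
    camera's predetermined JPEG tone curve."""
    return x * x * (3.0 - 2.0 * x)

def bake_jpeg_tones(scene, low_stop, high_stop):
    """Hypothetical workflow: the slider picks an exposure slice
    of the full linear capture, the fixed S-curve is applied,
    and the result is quantised to 8 bits."""
    lo, hi = 2.0 ** low_stop, 2.0 ** high_stop
    sliced = np.clip((scene - lo) / (hi - lo), 0.0, 1.0)
    return np.round(s_curve(sliced) * 255).astype(np.uint8)

# A hypothetical 16-stop linear scene, sliced at -3 to +5 stops.
scene = np.geomspace(2.0 ** -8, 2.0 ** 8, 9)
print(bake_jpeg_tones(scene, -3, 5))
```

The same full capture could be "re-sweetened" with a different slice at any time; only the saved JPEG commits to one.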
 
This is a discussion about the way things are already moving, and the implications.
It may not be something that interests you at all.
But eventually you will benefit.

Sorry, my mistake.

I misread the title of the thread and thought it was a discussion about the need to do things differently to get the most out of Digital Photography.

I assumed this meant now rather than at some hypothetical point in the future.
 
I see exactly why you made that mistake Steve.

Terry - Would a different title, to reflect the matter under discussion, be useful?
 
I see exactly why you made that mistake Steve.

Terry - Would a different title, to reflect the matter under discussion, be useful?


Probably a good idea... there seem to be people dropping by merely to express their displeasure and rant a little bit.
 
Some of us can cope with doing both :)
Good point. We assume digital is free, but it's not. The cameras are roughly the same cost, but I spend roughly £3000 every 3 years on computer equipment, £2000 on storage etc. I spend the same on printing now as I did then... not much has changed financially that I can see. Only those whose work demands thousands of images a month are probably in profit. I reckon the rest of us are out of pocket.

Well I usually take around 10,000 pics a year and I use a very simple editing program which cost me £15.00 about 6 years ago.

I have just replaced my old computer with an i7 self build which cost about £600.00.

I seldom print a paper copy, except for those I wish to hang on my walls, so after approx. 6 years of taking pics I think I am well in.

Back in the days when film reigned supreme, film, cameras, lenses, darkrooms, darkroom equipment and all the consumables like chemicals were probably a lot more expensive than cameras and computer equipment are now, after adjusting for inflation.
 