How far have we really come?

I don't think his argument was consistent, and I do believe that for some people, taking some shots, the kit is nothing short of crucial. OK, many of us don't take pictures which push the envelope of what the kit is capable of, but the clear fact is that some people do. In a small way I do too. I take people pictures with eye AF, with the subject well away from where the focus points would be on a DSLR, let alone an SLR, rangefinder or manual camera. These days almost any camera has eye detect, but not that long ago the only way to get focus on the eye with the subject away from the central area would be to focus and crop, or focus and recompose and hope the depth of field got you there.
I think we will just need to disagree, as I find his argument fully consistent, and I don't see it necessarily contradicting yours, i.e. that "sometimes" "some aspects" of kit are important. Which is what he says in the video.

And beyond that, I'm just going to repeat my last post, so I will stop here.
 
I think the invention of the original Leica had a similar motivation.
My opinion is that the Leica was an attempt to cash in on the existing market for stills cameras that used 35mm film.

There were several such cameras on the market before the Leica. See Roger Hicks's book "A History of the 35mm Still Camera" (ISBN 0-240-51233-2)

Book cover Roger Hicks A History of the 35mm Still Camera TZ70 P1040106.jpg
 
Modern cameras have come a long way. The big question is: do they make us better photographers, or are people now relying on the kit to fix what they do wrong?

Too much noise, maybe caused by underexposing? AI will fix it.
Eye focus for those who don't know what an eye looks like... ;) (OK, for moving subjects it helps).

You see where I'm going. A chimp could take a modern camera out and get a well-exposed, sharp photo (well, maybe not of Bigfoot), but gone are the days of skill and experience.

AI, while great fun, is not something I see as photography. Yes, I know an elephant always makes your landscape look... err, better?

I now find myself shooting the old-fashioned way, more or less. Yes, I do use autofocus, but not eye tracking; but then I'm an old git, and proud of it.

That said, if anyone wants to donate a Canon R1... my address is....
But only for that very narrow aspect (in bold) of skill and experience needed to make good photographs.

For me, using bird photography as an example, bird-eye AI autofocus has revolutionised the ease of getting well-focused bird photographs, but that just means there are even greater expectations on getting well-composed pictures of birds showing interesting behaviours in interesting lighting and environments. It allows me to concentrate on the things that are more important to the photograph.

Moving on to dragonflies, I take full advantage of modern technology to grab sharp, well-exposed pictures of dragonflies to help with identification when I get back home, but none are much use as "photographs", because I haven't put the effort into considering anything beyond taking a record shot to help with identification.
 
Just booked onto his lighting course... It's pitched at commercial photographers (which I'm not), but I like his style and delivery. (y)
I'm sure it will be very good; if I had more money I would do it out of interest. Unfortunately I have more important things to do with my money.
 
My opinion is that the Leica was an attempt to cash in on the existing market for stills cameras that used 35mm film.

There were several such cameras on the market before the Leica. See Roger Hicks's book "A History of the 35mm Still Camera" (ISBN 0-240-51233-2)

View attachment 462130
I'm just basing it on Oskar Barnack's description of why he made the first "Leica", which, if I remember correctly, was based on his desire for something he could easily take mountaineering.

But maybe I'm misremembering?
 
But only for that very narrow aspect (in bold) of skill and experience needed to make good photographs.

For me, using bird photography as an example, bird-eye AI autofocus has revolutionised the ease of getting well-focused bird photographs, but that just means there are even greater expectations on getting well-composed pictures of birds showing interesting behaviours in interesting lighting and environments. It allows me to concentrate on the things that are more important to the photograph.

Moving on to dragonflies, I take full advantage of modern technology to grab sharp, well-exposed pictures of dragonflies to help with identification when I get back home, but none are much use as "photographs", because I haven't put the effort into considering anything beyond taking a record shot to help with identification.

This is certainly very true with birds; there's little excuse these days for an out-of-focus shot. There are so many sharp bird shots now (even on these forums) that what should be considered incredible shots start to blur into the ordinary.

It's quite interesting to think about what challenging situations or genres are left where camera technology still has a lot of room for improvement. Low light would perhaps be the first to come to mind, but that's improved greatly in recent years, and you can often use flash or other light sources to overcome it. Most other challenging situations, such as underwater or fast aviation, are pretty much sorted. Perhaps astrophotography? There are some stunning photographs out there, but I feel like there's still a lot of room for more.
 
I'm just basing it on Oskar Barnack's description of why he made the first "Leica", which, if I remember correctly, was based on his desire for something he could easily take mountaineering.

But maybe I'm misremembering?
I think Barnack and Leitz were both very good self-publicists.

There's also that old proverb (probably from an old American): "when it's time to railroad, people will build railroads". The 35mm camera was a device whose time was coming. Most of the entrants didn't meet the needs of the market, and for several years the Leica had only one major competitor: the even more expensive Contax. It was only in the mid-1930s that August Nagel introduced a third high-quality, mass-produced 35mm camera. This became known as the Kodak Retina, and the 35mm market really took off.

 
This is certainly very true with birds; there's little excuse these days for an out-of-focus shot. There are so many sharp bird shots now (even on these forums) that what should be considered incredible shots start to blur into the ordinary.

It's quite interesting to think about what challenging situations or genres are left where camera technology still has a lot of room for improvement. Low light would perhaps be the first to come to mind, but that's improved greatly in recent years, and you can often use flash or other light sources to overcome it. Most other challenging situations, such as underwater or fast aviation, are pretty much sorted. Perhaps astrophotography? There are some stunning photographs out there, but I feel like there's still a lot of room for more.
I do however think there is a risk of complacency about what we accept as good enough, rather than seeing the technology allowing us to strive for something better.
 
I think Barnack and Leitz were both very good self-publicists.

There's also that old proverb (probably from an old American): "when it's time to railroad, people will build railroads". The 35mm camera was a device whose time was coming. Most of the entrants didn't meet the needs of the market, and for several years the Leica had only one major competitor: the even more expensive Contax. It was only in the mid-1930s that August Nagel introduced a third high-quality, mass-produced 35mm camera. This became known as the Kodak Retina, and the 35mm market really took off.

I don't have the knowledge to disagree or agree with you, but the things I've read about Leitz have always sounded positive - maybe just a result of their publicist skills.
 
...but the things I've read about Leitz have always sounded positive - maybe just a result of their publicist skills.
Partly because of the latter, I think.

However, I'm not badmouthing E. Leitz and Son. Their cameras were excellent for their time, and the early M-series set new standards for 35mm cameras. They stood out even in "the Golden Age" of the 1950s and 1960s, when companies like Hasselblad, Linhof, Franke & Heidecke and their Japanese rivals were producing cameras that were themselves things of beauty, as well as excellent tools.
 
I think that's a fair comment. I still use an old Sony RX1R Mk1, which is about 12 years old, because it still does what I need: it's a compact FF camera with great image quality, and its lack of features and performance aren't an issue for the type of photos I take with it.

On the other hand, I was asked to take photos of a cycling event, which is tougher since the cyclists come past at speed and there's only one chance to get the photo; often the cyclists would smile or wave to the camera, so it's a challenge to get the shot in focus at the right time consistently. This is where the newer camera excelled: it nailed pretty much every shot of every cyclist, and meant I could choose the exact pose I wanted as they went by.
Thanks for that - I am not really an expert in taking photos of people - and have never taken studio photos.
 
You see where I'm going. A chimp could take a modern camera out and get a well-exposed, sharp photo (well, maybe not of Bigfoot), but gone are the days of skill and experience.
But would a chimp have the skill and knowledge to know what to point the lens at, how to frame the shot, and when to trip the shutter? ;)
 
Improvements in camera tech have certainly reduced the need for craft in photography, making it accessible to those who would never have been able to acquire the knowledge and skills to take technically tolerable pictures. Whether that's good or not is certainly debatable.
 
Improvements in camera tech have certainly reduced the need for craft in photography, making it accessible to those who would never have been able to acquire the knowledge and skills to take technically tolerable pictures. Whether that's good or not is certainly debatable.
In response to your debatable question.

I struggle to see how keeping something technically inaccessible can ever be a good thing (at least in terms of photography, I'm not so sure about things like nuclear weapons).

I also struggle a little with where one decides to draw an arbitrary line on accessibility. Should we have drawn it when we were still coating our own wet plates?

In my lifetime, the skills required to take technically "tolerable" photographs have never been all that difficult, even if they were certainly more difficult than using a smartphone and posting to social media. Though I sometimes wonder, when I receive some smartphone photographs, what level of skill was required to produce photographs that bad from a modern smartphone.

But I'm not convinced this is a fair comparison, as I'm not sure that the craft required to produce technically "excellent" digital prints is any less than that required to produce technically "excellent" prints from film: just a different type of craft.

I've mentioned it before, but I have found (and am still finding) digital far more difficult to "master" than I ever found film. Especially with colour, as the power of digital colour processing has dramatically raised my expectations of what I should be able to get out of a digital colour photograph.

I know there were things like dye transfer in the film days, but this was technically and economically inaccessible to me.
 
In response to your debatable question.

I struggle to see how keeping something technically inaccessible can ever be a good thing (at least in terms of photography, I'm not so sure about things like nuclear weapons).

I also struggle a little with where one decides to draw an arbitrary line on accessibility. Should we have drawn it when we were still coating our own wet plates?

In my lifetime, the skills required to take technically "tolerable" photographs have never been all that difficult, even if they were certainly more difficult than using a smartphone and posting to social media. Though I sometimes wonder, when I receive some smartphone photographs, what level of skill was required to produce photographs that bad from a modern smartphone.

But I'm not convinced this is a fair comparison, as I'm not sure that the craft required to produce technically "excellent" digital prints is any less than that required to produce technically "excellent" prints from film: just a different type of craft.

I've mentioned it before, but I have found (and am still finding) digital far more difficult to "master" than I ever found film. Especially with colour, as the power of digital colour processing has dramatically raised my expectations of what I should be able to get out of a digital colour photograph.

I know there were things like dye transfer in the film days, but this was technically and economically inaccessible to me.

I should clarify that by debatable, I mean it can be debated.

However, in favour of learning the craft of photography, I would suggest there can also be a process where the photographer learns the limitations and strengths of the equipment, about composition, combinations of colours, about movement in an image, and many other things. Skipping the craft step may also skip these steps: for some that's not a problem, because they can intuit many of them, but not for many.

So there will need to be a new set of 'craft steps' where the monkey with £5000 worth of kit still has to learn what to photograph.
 
I should clarify that by debatable, I mean it can be debated.
That's what I thought you meant, and this was me contributing to the debate.
However, in favour of learning the craft of photography, I would suggest there can also be a process where the photographer learns the limitations and strengths of the equipment, about composition, combinations of colours, about movement in an image, and many other things. Skipping the craft step may also skip these steps: for some that's not a problem, because they can intuit many of them, but not for many.

So there will need to be a new set of 'craft steps' where the monkey with £5000 worth of kit still has to learn what to photograph.
I'm beginning to think I haven't grasped the point you are making, as I assumed you were just referring to the technical aspects of the craft :-(

Aren't these aspects of the craft exactly the same regardless of whether it's wet plate, film or digital? A £50 camera or a £5000 camera?

How do advances in camera tech reduce the need to learn the things you list?

I feel very differently about this, as I feel that having some of the basic mechanics looked after by modern technology allows a person to put all their efforts into testing their camera's capabilities, understanding composition, observing the effect of light, observing and capturing subtle gestures of their subject, thinking about the story, etc. etc.

I see the things made easy by the technical advances are, relatively speaking, rather trivial aspects of the craft when compared to the other aspects we have both listed.
 
Modern cameras have come a long way. The big question is: do they make us better photographers, or are people now relying on the kit to fix what they do wrong?

Too much noise, maybe caused by underexposing? AI will fix it.
Eye focus for those who don't know what an eye looks like... ;) (OK, for moving subjects it helps).

You see where I'm going. A chimp could take a modern camera out and get a well-exposed, sharp photo (well, maybe not of Bigfoot), but gone are the days of skill and experience.

AI, while great fun, is not something I see as photography. Yes, I know an elephant always makes your landscape look... err, better?

I now find myself shooting the old-fashioned way, more or less. Yes, I do use autofocus, but not eye tracking; but then I'm an old git, and proud of it.

That said, if anyone wants to donate a Canon R1... my address is....
Modern cameras do make it much easier to get the shot, for sure, and to a degree will compensate for a person's photographic inadequacies; however, I would hazard a guess that 99% of the time someone who knows what they're doing will still take a better photograph than a novice.
 
That's what I thought you meant, and this was me contributing to the debate.

I'm beginning to think I haven't grasped the point you are making, as I assumed you were just referring to the technical aspects of the craft :-(

Aren't these aspects of the craft exactly the same regardless of whether it's wet plate, film or digital? A £50 camera or a £5000 camera?

How do advances in camera tech reduce the need to learn the things you list?

I feel very differently about this, as I feel that having some of the basic mechanics looked after by modern technology allows a person to put all their efforts into testing their camera's capabilities, understanding composition, observing the effect of light, observing and capturing subtle gestures of their subject, thinking about the story, etc. etc.

I see the things made easy by the technical advances are, relatively speaking, rather trivial aspects of the craft when compared to the other aspects we have both listed.

I think we're essentially on the same page. The point I'm trying to make is that the non-technical aspects were often communicated during the technical learning phase. Now (and actually for quite some time) the assumption is that good pictures come from the camera, and in the age of instant gratification the 'artistic' aspects are missed.
 
I think we're essentially on the same page. The point I'm trying to make is that the non-technical aspects were often communicated during the technical learning phase. Now (and actually for quite some time) the assumption is that good pictures come from the camera, and in the age of instant gratification the 'artistic' aspects are missed.
I suspect we are.

I think there is also an issue that, even in the film days, there was still so much emphasis on the technical aspects that there was often very little communication of the non-technical aspects, with "good" photographs almost entirely judged on the technical aspects and only a passing acknowledgement of some "rules" of composition.

It was only after reading Ansel Adams, to improve the technical aspects of my photography, that I realised the expressive and artistic power of photography. And that was after four years of my five years at college (part time). Adams's writings, plus the person who introduced me to his books, totally changed the way I viewed photography.

Having thought about your post, and referring back to my post about complacency, I do think that with modern tech, together with how heavily the technical aspects are still promoted, it will be much easier to feel you have "arrived" as a photographer, and easier to give up learning with digital before being exposed to the deeper aspects of the craft (and art).
 
I think there is also an issue that, even in the film days, there was still so much emphasis on the technical aspects that there was often very little communication of the non-technical aspects, with "good" photographs almost entirely judged on the technical aspects and only a passing acknowledgement of some "rules" of composition.
Humans make tools so that tasks become easier and outcomes better.

The Camera Obscura made drawing and painting more accurate and, to a degree, faster. Photography was a further step along that route and the intention of camera improvement has largely been to continue that trend. Many years ago, I read a magazine article, in which the author speculated that there would come a time where the camera would be reduced to the size of a spectacle lens, triggered by the photographer winking.

At that point, the author suggested, people would no longer consider photography as a specialist subject but just another part of human memory, and one that can be shared as widely as a person wishes. We're almost there already, with spectacle frames containing cameras and I wonder where that will take us, in terms of the resulting images.
 
How far have we come? I just got a Pixel 10 Pro XL and didn't even realise that it had this advanced x100 AI-enhanced zoom feature. I've not tried the phone yet, but some of these clips are incredible. Apparently it does have some safety protection built in, such as the human-detect feature refusing to enhance humans, for obvious reasons. Also, AI-enhanced images carry C2PA markers to identify them as enhanced.

It's not perfect, but things are advancing fast and considering it's from a tiny sensor in a phone with a little lens...

1756853598031.png 1756853608791.png

Link


View: https://www.youtube.com/shorts/cAwS43YonUE
 
Had a quick play in the few minutes I've got left. Simple shot of a pen sitting on a table about 6 metres away, at x100 zoom. Original, then the AI enhance, followed by some further (over-)editing in-phone by me; it's easy to overcook when working on a small phone screen. Considering it was low light, that's crazy!

Pen 1.jpg

Pen 2.jpg

Pen 3.jpg
 
Humans make tools so that tasks become easier and outcomes better.

The Camera Obscura made drawing and painting more accurate and, to a degree, faster. Photography was a further step along that route and the intention of camera improvement has largely been to continue that trend. Many years ago, I read a magazine article, in which the author speculated that there would come a time where the camera would be reduced to the size of a spectacle lens, triggered by the photographer winking.

At that point, the author suggested, people would no longer consider photography as a specialist subject but just another part of human memory, and one that can be shared as widely as a person wishes. We're almost there already, with spectacle frames containing cameras and I wonder where that will take us, in terms of the resulting images.
I don't disagree with that, and I have already said how great it is that technology has made photography much more accessible to everyone, but things can still be an everyday/unthinking activity AND a specialist subject.

Writing can be a reminder to buy toilet paper stuck to a fridge door or a poem that brings you to tears.

A voice can be used to shout at a misbehaving child, or to sing an uplifting song by an opera singer.

etc etc
 
How far have we come? I just got a Pixel 10 Pro XL and didn't even realise that it had this advanced x100 AI-enhanced zoom feature. I've not tried the phone yet, but some of these clips are incredible. Apparently it does have some safety protection built in, such as the human-detect feature refusing to enhance humans, for obvious reasons. Also, AI-enhanced images carry C2PA markers to identify them as enhanced.

It's not perfect, but things are advancing fast and considering it's from a tiny sensor in a phone with a little lens...

View attachment 462162 View attachment 462163

Link


View: https://www.youtube.com/shorts/cAwS43YonUE
That's pretty incredible how well it's enhanced that image :eek:
 
That's pretty incredible how well it's enhanced that image :eek:

I've played around a little more; not much, but enough to see that when you take the x100 zoomed photo, the "preview", so to speak, has poor clarity, and you then see it actively doing its AI magic before this lovely enhanced image is revealed. It saves both the original and the enhanced image, but I've noticed that if you then go back to look at the non-AI-enhanced image, it is significantly better than the "preview", so the AI enhancement no longer looks like such a step up.

So far, where I've noticed it works best is where there is text or numbers: so long as it can identify what they are (and it's scary how well it seems to do this), it really does come back with a sharp result. Buildings etc. also seem to enhance well, probably because the straight lines are easier to deal with. However, if it's a messy scene, it can look more akin to using AI Generate in Photoshop, with some strange rendering. I think ultimately it's best not to see it as a magic bullet but to learn its limits, which I'm slowly trying to do! lol

Also, it's advertised as x100 zoom enhance, but it seems to work from over x30 up to x100. You can also crop-zoom enhance previously taken photos. It's certainly fun haha
 
...but things can still be an everyday/unthinking activity AND a specialist subject.
I agree.

The article I quoted was actually intended, I believe, as a form of addendum to Vannevar Bush's influential article "As We May Think", written in 1945 and published in "Life Magazine". This predicted the "information revolution", albeit in terms of the technology available then.

Possible developments in photography are discussed at great length, including a stereoscopic head camera. The article, which I consider useful reading for most, if not all, serious photographers is reproduced here...

 
Been playing some more with this x100 zoom AI thing, and it makes me wonder if such technology will find its way into mirrorless cameras. I know we apparently have AI subject tracking, but image enhancement? On closer inspection, I'm not sure that this is always enhancing the actual image; in some cases it seems to be replacing imagery altogether.

Could there be an ethics issue if this sort of thing finds its way into dedicated cameras?


This one seems like a proper enhancement.

Plant 1.jpg Plant 2.jpg



But this one looks like there's straight-up replacement going on. These are conifer trees, and the enhanced image doesn't look like conifer trees to me.

Trees 1.jpg Trees 2.jpg



For the bird photographers...

Bird 1.jpg

Bird 2.jpg
 
Could there be an ethics issue if this sort of thing finds its way into dedicated cameras?

Maybe. Once the costs come down and it can all fit in the camera, maybe.
 
Been playing some more with this x100 zoom AI thing, and it makes me wonder if such technology will find its way into mirrorless cameras. I know we apparently have AI subject tracking, but image enhancement? On closer inspection, I'm not sure that this is always enhancing the actual image; in some cases it seems to be replacing imagery altogether.

Could there be an ethics issue if this sort of thing finds its way into dedicated cameras?


This one seems like a proper enhancement.

View attachment 462219 View attachment 462220



But this one looks like there's straight-up replacement going on. These are conifer trees, and the enhanced image doesn't look like conifer trees to me.

View attachment 462221 View attachment 462222



For the bird photographers...

View attachment 462223

View attachment 462224
That would put some lens manufacturers out of business.
 
But this one looks like there's straight-up replacement going on.

I think you're correct about the replacement, and a form of this may already be in Lightroom, where using AI to enhance details sometimes results in things appearing that weren't there originally. Likewise when using a 'smart replace' tool.
 
Been playing some more with this x100 zoom AI thing, and it makes me wonder if such technology will find its way into mirrorless cameras. I know we apparently have AI subject tracking, but image enhancement? On closer inspection, I'm not sure that this is always enhancing the actual image; in some cases it seems to be replacing imagery altogether.

Could there be an ethics issue if this sort of thing finds its way into dedicated cameras?
It's not enhancing the image, because there's too little to go on; it's just generating an image which matches the basic information it has. Samsung were raked over the coals for doing this a while back, which I think they should have been, as should Google for doing it now:


A big problem at the moment is that generative AI / large language model (genAI/LLM) systems are being conflated with other AI systems, despite the fact that the term AI covers a very wide range of technologies and implementations going back decades. Because AI is such a trendy term now, it's being used much more often, even though AF technologies have used forms of AI for many years; machine vision was one of the core focuses of AI development. The reason I mention this is that using a more sophisticated algorithm to improve AF performance (now being called AI) is very different from the LLM technology being used to replace parts of the image you've highlighted. In terms of ethics there are significant differences: I don't think there are any issues with improved software, but there are clearly serious concerns with LLMs, which work by scraping content they are not allowed to use, and waste horrific amounts of power and water while doing so. These LLM systems have pretty much already destroyed the concept of what's genuine and what isn't, since they can produce very convincing images and video, and it looks like it will be too late before the catastrophic impact is understood.

I would hope it never reaches dedicated cameras, which so far have been resistant to most of the computational software changes mobile phones have been subject to.
 
There are definitely limitations; one area is with text. If there's just enough clarity for it to read the text, it will enhance beautifully. But if it can't quite make out the text (or numbers), it produces reasonably clear but alien-looking symbols.

Speaking of tech, an old Sony Xperia XZ Premium phone I had a long time ago had a 3D Creator feature where you would use the camera to circle around an object at specific points (someone's face, head or whole body) and it would create this amazing 3D image that you could spin around and even insert into cool sample videos. It took a bit of effort to get good results, but I've never seen it in any other phones.

This must have been about 8 years ago, and whilst I think it's still accessible on Sony phones, I don't think the software has been updated for a while. I would love to see where it would be now had it been developed further, what with the power in today's phones and the quality of the cameras.

Sorry, I feel like I'm taking this thread off topic a little.

View: https://www.youtube.com/watch?v=ihgdUA_8dkg
 
It's not enhancing the image, because there's too little to go on; it's just generating an image which matches the basic information it has. Samsung were raked over the coals for doing this a while back, which I think they should have been, as should Google for doing it now:


A big problem at the moment is that generative AI / large language model (genAI/LLM) systems are being conflated with other AI systems, despite the fact that the term AI covers a very wide range of technologies and implementations going back decades. Because AI is such a trendy term now, it's being used much more often, even though AF technologies have used forms of AI for many years; machine vision was one of the core focuses of AI development. The reason I mention this is that using a more sophisticated algorithm to improve AF performance (now being called AI) is very different from the LLM technology being used to replace parts of the image you've highlighted. In terms of ethics there are significant differences: I don't think there are any issues with improved software, but there are clearly serious concerns with LLMs, which work by scraping content they are not allowed to use, and waste horrific amounts of power and water while doing so. These LLM systems have pretty much already destroyed the concept of what's genuine and what isn't, since they can produce very convincing images and video, and it looks like it will be too late before the catastrophic impact is understood.

I would hope it never reaches dedicated cameras, which so far have been resistant to most of the computational software changes mobile phones have been subject to.

It certainly wouldn't surprise me if there's some fakery going on, as I suggested earlier. But the first one, with the foliage and fence to the right, is pretty accurate to the real thing. However it may be doing it, it's pretty impressive.

I agree that the word "enhance" may not be the most appropriate, although technically it is still enhancing it and it's a broad term. I'm not convinced by today's use of the word AI, which I think I've discussed elsewhere in here. It feels like it still needs content from humans in order to create stuff, so it's not truly doing it from nothing like a human could do. I suppose AI is also a fairly broad term, but it certainly seems to be turning into a buzzword.

You mentioned Samsung and I do recall reading an article about how the S25 Ultra (I think) can also do it, but I mind reading that the Pixel also saves the original image as well (which you can see from my examples above), which is handy in case it messes up the "enhanced" version. I think it said that the Samsung doesn't do this, I'm not sure.

Either way, I still find it very impressive. The moon will be next, if this Scottish overcast ever clears.
 
Been playing some more with this 100x zoom AI thing, and it makes me wonder if such technology will find its way into mirrorless cameras. I know we apparently have AI subject tracking, but image enhancement? Upon closer inspection, I'm not sure this is always enhancing the actual image; in some cases it seems to be replacing imagery altogether.

Could there be an ethics issue if this sort of thing finds its way into dedicated cameras?


This one seems like a proper enhancement.

View attachment 462219 View attachment 462220



But this one looks like there's straight-up replacement going on. These are conifer trees, and the enhanced image doesn't look like conifer trees to me.

View attachment 462221 View attachment 462222



For the bird photographers...

View attachment 462223

View attachment 462224
The plant enhancement is almost incredible.

I'd love to see a comparison of that 100x zoom AI with a shot taken with an actual long lens, to see if stuff had been generated, rather than interpolated.
 
The plant enhancement is almost incredible.

I'd love to see a comparison of that 100x zoom AI with a shot taken with an actual long lens, to see if stuff had been generated, rather than interpolated.
It's 100% generated; that's how the technology works. The more detail it has to work with in the original image, the better it can match it, and the simpler the image and texture, the less obvious it is that a new image has been generated. It's more noticeable with distinctive subjects like someone's face, where the feature is disabled by default, because it will not just get the face wrong but get it wrong in different ways each time it's attempted.
 
I can understand stacking, where multiple fuzzy or low-res images are analysed to create a sharper version, but I struggle with turning a single fuzzy image into a sharp one.

There was a film once where this happened, with the image getting sharper and sharper with each painstakingly slow iteration. A sort of "he's behind you" skit for Hollywood.
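For what it's worth, the stacking part is easy to sketch. This is just a toy numpy example (made-up numbers, not anyone's actual camera pipeline) showing why averaging N aligned noisy frames cuts the noise by roughly the square root of N:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "true" scene: a smooth gradient instead of a real sharp image.
scene = np.linspace(0.0, 1.0, 256)

# Simulate 16 noisy exposures of the same (perfectly aligned) scene.
n_frames = 16
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(n_frames)]

# Stacking = averaging the frames pixel by pixel.
stacked = np.mean(frames, axis=0)

# RMS error against the true scene drops by about sqrt(16) = 4.
single_rms = np.sqrt(np.mean((frames[0] - scene) ** 2))
stacked_rms = np.sqrt(np.mean((stacked - scene) ** 2))
print(single_rms, stacked_rms)
```

With 16 frames the residual noise comes out around a quarter of a single frame's, which is the square-root-of-16 improvement; that's the part that is genuine recovery of signal rather than invention.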
 
I can understand stacking, where multiple fuzzy or low-res images are analysed to create a sharper version, but I struggle with turning a single fuzzy image into a sharp one.

There was a film once where this happened, with the image getting sharper and sharper with each painstakingly slow iteration. A sort of "he's behind you" skit for Hollywood.
You can give an LLM/GenAI system a prompt to draw you some leaves, for example; you could refine that to ask it to generate an image of leaves climbing up a wall, and you could keep going with more and more detailed prompts. The software in this case is effectively using very specific prompts matched to the low-res image to generate a new AI image and blending it onto the low-res image. It's similar to searching for an image online, finding a pristine leaf, bird, strawberry or whatever, copying it and merging it onto your low-res image.
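The single-image problem mentioned above can also be sketched in a couple of numpy lines (again a toy example, nothing to do with Google's actual software): once fine detail has been lost to a low-res capture, plain interpolation cannot bring it back, which is exactly why these systems generate new content instead.

```python
import numpy as np

# A "detailed" 1-D signal: fine alternating texture the sensor could resolve up close.
fine = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0])

# Heavy digital zoom: only every 4th sample is actually captured.
coarse = fine[::4]

# Classic upscaling interpolates between the captured samples...
upscaled = np.interp(np.arange(8), np.arange(0, 8, 4), coarse)

# ...so the fine texture is gone for good: the result is flat, nothing
# like the original pattern. No amount of interpolation recovers it.
print(upscaled)
```

Anything sharper than this has to come from somewhere other than the captured pixels, i.e. from a generative model's prior knowledge of what leaves, birds and so on look like.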
 
It's 100% generated; that's how the technology works. The more detail it has to work with in the original image, the better it can match it, and the simpler the image and texture, the less obvious it is that a new image has been generated. It's more noticeable with distinctive subjects like someone's face, where the feature is disabled by default, because it will not just get the face wrong but get it wrong in different ways each time it's attempted.

Well yes it has to be 100% generated, otherwise there would be a mash up of the original blob and the "enhanced" stuff. In-camera noise reduction is also technically generated depending on how you look at the process, although I would argue it stays more true to the original subject.

I sense there's a purist view at play here, which I can understand and support when it comes to dedicated cameras. But this is a phone camera with a lot of physical limitations, so if it can "enhance" what is essentially an unusable picture into something that is still true to the original subject, then that's fine with me and I accept it as such. If it were generating something completely different, as in the photo of the conifer tree, then that would be misleading, but I wouldn't use such an image.

I might play around and see how accurate it is by zooming in, letting it do its stuff, then move in closer and take another shot without any enhancement.
 
Well yes it has to be 100% generated, otherwise there would be a mash up of the original blob and the "enhanced" stuff. In-camera noise reduction is also technically generated depending on how you look at the process, although I would argue it stays more true to the original subject.

I sense there's a purist view at play here, which I can understand and support when it comes to dedicated cameras. But this is a phone camera with a lot of physical limitations, so if it can "enhance" what is essentially an unusable picture into something that is still true to the original subject, then that's fine with me and I accept it as such. If it were generating something completely different, as in the photo of the conifer tree, then that would be misleading, but I wouldn't use such an image.

I might play around and see how accurate it is by zooming in, letting it do its stuff, then move in closer and take another shot without any enhancement.

The issue here is more about generating a new image than cleverly calculating what the missing data is, à la CSI-style refinement. The clever part is that the new additions are sized and shaped like the originals, at least somewhat, unless there's not enough information, in which case it ignores that.

Very clever tech. Not knocking it (just in case that's how it sounds), but for me it creates a new image rather than enhancing the original.
 
The issue here is more about generating a new image than cleverly calculating what the missing data is, à la CSI-style refinement. The clever part is that the new additions are sized and shaped like the originals, at least somewhat, unless there's not enough information, in which case it ignores that.

Very clever tech. Not knocking it (just in case that's how it sounds), but for me it creates a new image rather than enhancing the original.

That's what I'm already saying?

It'll be interesting to see what happens at various zoom levels and clarity levels. Will it simply make some enhancements and retain the original photo? Will it mix in some new content to create a hybrid, or will it completely replace everything so it ends up being a picture rather than a photo?
 