Overlaying a grid on a photograph

Michael C Geraghty

Hello all, I am new to the forum and really a novice with photography, but I wonder if anyone can help me out.

I am in a Mars anomaly research group that is analysing structures and other objects in images from NASA's Curiosity rover. To cut a long story short, the images do not have any reference grid or reference scale on them, and I would like to know whether it is possible to overlay a grid if the focal length etc. is known.

As an example, here is some reference material on the Curiosity rover's cameras. The two of main interest are the 34mm Mastcam and the 100mm Mastcam, which provide the majority of the images from the rover.

Here are a couple of links to the cameras in question:

https://www.nasa.gov/mission_pages/msl/multimedia/malin-4.html#.VeCIuSVViko

http://msl-scicorner.jpl.nasa.gov/Instruments/Mastcam/

Any help appreciated.

Thank you!
 
Ooh, interesting question.

In order to say anything about dimensions of an image, it is necessary to know both the focal length of the lens and the physical size of the sensor. (This is a variation of a phenomenon well known to photographers whereby a lens gives different results depending on whether it's used on a 'full frame' camera or a 'crop' camera.) You've got the focal lengths sorted, but do you also know the sensor sizes?

If you know the sensor size then you can calculate the angular field of view. For example a 34mm lens on a full frame camera has a field of view which is 56°x39°, whereas on a Canon DSLR with a 1.6x crop factor it's 37°x25°. This then allows you to calculate the angle of view subtended by any given object in the frame.
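The angle-of-view figures above can be reproduced with the standard pinhole-camera formula. A minimal sketch, assuming the usual 36×24mm full-frame sensor dimensions and a 1.6x crop:

```python
import math

def field_of_view_deg(focal_length_mm, sensor_dim_mm):
    """Angular field of view (pinhole approximation) along one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# 34mm lens on a full-frame (36x24mm) sensor
print(round(field_of_view_deg(34, 36)), "x", round(field_of_view_deg(34, 24)))  # 56 x 39

# Same lens on a 1.6x crop sensor (22.5 x 15mm)
print(round(field_of_view_deg(34, 36 / 1.6)), "x", round(field_of_view_deg(34, 24 / 1.6)))  # 37 x 25
```

The same function gives the angle subtended by any object once you know its size and distance, by substituting those for the sensor dimension and focal length.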

But to go any further you need to have some information about actual sizes and distances. Otherwise you're stuck: a 5 foot tall man photographed from 50 feet away looks the same as a 6 foot tall man photographed from 60 feet away.

Does the instrument package have a rangefinder to determine the distances to objects which are in the frame? If so it's quite straightforward. If not, your only hope of measuring actual sizes is if you have a number of frames taken from different positions, *and* you know how far the camera has moved between frames. The trigonometry gets quite messy, but you can calculate distances and therefore sizes from that.
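The multi-frame trigonometry is straightforward in the simplest case: sight the same object from two positions a known baseline apart, and triangulate. A sketch, assuming the bearings to the object (measured from the direction of travel at each position) can be read off the images via the field of view; the numbers in the example are made up for illustration:

```python
import math

def distance_from_parallax(baseline_m, angle1_deg, angle2_deg):
    """Perpendicular distance to an object sighted from two camera positions.

    baseline_m: how far the camera moved between frames.
    angle1_deg, angle2_deg: bearings to the object measured from the baseline
    direction at each position (the two interior angles of the triangle).
    """
    a1 = math.radians(angle1_deg)
    a2 = math.radians(angle2_deg)
    # Sine rule: side from position 1 to the object = b * sin(a2) / sin(a1 + a2)
    r1 = baseline_m * math.sin(a2) / math.sin(a1 + a2)
    return r1 * math.sin(a1)  # drop a perpendicular onto the baseline

# Camera moves 10 m; object sighted at 80 and 85 degrees from the track
print(f"{distance_from_parallax(10.0, 80.0, 85.0):.1f} m")  # 37.9 m
```

Once the distance is known, the angular size of the object in the frame converts directly to a physical size.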

Does that help?
 
In order to be accurate with the grid, you'd need to know more than the focal length. You'd also need to know the sensor size, as that's the only way to calculate the actual magnification and field of view in degrees. Without knowing that, you'd have no way of measuring object size, scale or distance. I'm sure that data is available somewhere. I can't help you find it, but you really would need to know the magnification and field of view, and it would be very helpful, if not actually necessary, to also have a reference image from the rover with an object of known dimensions in it.

Armed with all that, then yes, it's possible to do what you say quite easily. It is a mathematical exercise rather than a photographic one, though.
 
Thanks for the feedback, lads. It's not an easy one, as there are very few known lengths or distances within the Curiosity images, only snippets where some parts of the rover are in view. But I have an idea that some of the satellite images of areas with mountain peaks of known aerial measurements may match some of the pictures from the rover taken on a certain day at a certain known location. I actually do a lot of CAD work, mostly 3D models, but am not really knowledgeable about photography, so I could model it if I know what parameters I need. I will see if I can get some aerial photograph distances that match an area Curiosity is looking at with prominent features, as the location of Curiosity is known on a day-to-day basis.
 
I have just found this information from Box Brownie's post, but as you say will probably have to get the CCD type to find the dot pitch or pixel spacing and ascertain the actual size of the sensor.

The Mastcams are based around 1,600 x 1,200 pixel CCD sensors, but both Mastcams have a square crop, and so yield 1,200 x 1,200 pixel images.

The Mastcam-34 has a 34mm, f/8 lens with 15-degree FOV (roughly the same as a 164mm on a 35mm camera), while the Mastcam-100 boasts a 100mm, f/10 lens with 5.1-degree FOV (a 35mm equivalent of just under 500mm, quite a long tele). Both lenses can focus between 2.1 meters and infinity. For a two-meter focus distance with the M-34 that translates to as little as 450 microns per pixel pair (a little under two hundredths of an inch), while the M-100 yields 150 microns per pixel pair (about 6 thousandths of an inch). With a distant subject a kilometer away, the M-34 images 22 cm per pixel pair, and the M-100 some 7.4 cm per pixel pair.
 
I have just found this information from Box Brownie's post, but as you say will probably have to get the CCD type to find the dot pitch or pixel spacing and ascertain the actual size of the sensor.
Actually, no. The article you've quoted cuts out the middle man by telling us the angular field of view directly:
The Mastcam-34 has a 34mm, f/8 lens with 15-degree FOV (roughly the same as a 164mm on a 35mm camera), while the Mastcam-100 boasts a 100mm, f/10 lens with 5.1-degree FOV (a 35mm equivalent of just under 500mm, quite a long tele).
Knowing the field of view allows you to establish the relationship between the size of objects and their distance, as the article goes on to illustrate:
For a two-meter focus distance with the M-34 that translates to as little as 450 microns per pixel pair (a little under two hundredths of an inch), while the M-100 yields 150 microns per pixel pair (about 6 thousandths of an inch). With a distant subject a kilometer away, the M-34 images 22 cm per pixel pair, and the M-100 some 7.4 cm per pixel pair.
So now if you know the distance of an object in an image you can calculate its size, and vice versa. Or if you have multiple photos of an object and you know how far the camera has moved between taking them, you can calculate its size with a bit more work. But you still need that extra one piece of information in order to interpret the images.
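As a sanity check, the article's scale figures fall straight out of the quoted FOV values and the 1,200-pixel frame width. A minimal sketch (assuming a "pixel pair" here corresponds to one pixel of the 1,200-pixel square output, which is what makes the numbers agree):

```python
import math

def metres_per_pixel(distance_m, fov_deg, pixels_across=1200):
    """Ground scale at a given distance for a camera with the stated FOV."""
    frame_width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return frame_width_m / pixels_across

print(round(metres_per_pixel(2.0, 15) * 1e6))      # Mastcam-34 at 2 m: ~439 microns (article: ~450)
print(round(metres_per_pixel(2.0, 5.1) * 1e6))     # Mastcam-100 at 2 m: ~148 microns (article: 150)
print(round(metres_per_pixel(1000.0, 15) * 100))   # Mastcam-34 at 1 km: ~22 cm
print(round(metres_per_pixel(1000.0, 5.1) * 100, 1))  # Mastcam-100 at 1 km: ~7.4 cm
```

Inverting the same relation gives distance from a known object size: distance = size_m × pixels_across / (pixels_spanned × 2 × tan(FOV/2)).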
 
Thanks StewartR. As these images are cropped to a square shape, will I need to calculate the vertical FOV separately, or is it the same? In other words, has the quoted FOV already taken the cropping into account?
 
Thanks StewartR. As these images are cropped to a square shape, will I need to calculate the vertical FOV separately, or is it the same? In other words, has the quoted FOV already taken the cropping into account?
Since the article only gives one FOV figure for each camera, I think it's safe to assume that it applies both horizontally and vertically.

EDIT - No need to make the assumption. I've checked the calculations and the FOV values stated do apply both horizontally and vertically.
 
Thanks for that feedback StewartR, just one last question and then I can set about trying some of these methods out. I found another article that says there is an autofocus mechanism within the lens assembly that will focus from 2.1 metres to infinity. I am assuming that this will just improve clarity at distances between 2.1 metres and infinity and will not alter the FOV. Is that correct?
 
Thanks for that feedback StewartR, just one last question and then I can set about trying some of these methods out. I found another article that says there is an autofocus mechanism within the lens assembly that will focus from 2.1 metres to infinity. I am assuming that this will just improve clarity at distances between 2.1 metres and infinity and will not alter the FOV. Is that correct?

Surely focus has no bearing on FoV? As noted for the two Mastcams, the FoV is related to the focal length (with the fixed aperture and sensor size being influencing factors).

When I look through the viewfinder of my dSLR at a specific FL, say 400mm, the FoV does not alter (as far as I am aware) whether I focus on a subject at 5m or 50m........obviously if I use an aperture of f/5.6 the subject will be isolated by the DoF, compared to say f/11 where the DoF will be much broader with more than just the subject in focus, but the complete scene FoV is the same ~ isn't it???

But I await those (Stewart ;) ) with greater insight than me to chime in.
 
Box Brownie, many thanks for your assistance along with the others on this post, as it has been a great help. A number of years ago I downloaded a lens program and modelled a pair of binoculars, but then I knew the curvature of all the lenses, which made it much easier, as I actually created all the lenses and prisms as real objects and materials in the model. Sure enough, even though it was only a test, I could place an image with writing on it in the scene at a distance and read the text through the CAD-modelled lenses and eyepiece. Of course this did not take into account the different frequencies of light and was just basic, but I knew where I was going with that method; these days I find it very hard to hold onto the information. The terminology and methods used in photography are what I badly need to brush up on, and your assistance has given me some new momentum to delve deeper into photography.

I will let you know how I get on. Here also is a link to the Curiosity raw images from the Mastcam, as I believe you will find the anomalies turning up of great interest: tens of thousands of mechanical components, which is how I got involved a number of years ago after spending my life in engineering, also lots of structures and also what definitely appear to be people in a large percentage of the images.

Here is the link for anyone interested. Be sure to scroll down to the Mastcam images, which are divided into sols (days on Mars, counted from when Curiosity landed): http://mars.nasa.gov/msl/multimedia/raw/

Here is another link, to the Facebook Mars Anomaly Research Society group, which has thousands of members exploring various facets of the images:
https://www.facebook.com/groups/MarsAnomalyResearchSociety/
 
I found another article that says there is an autofocus mechanism within the lens assembly that will focus from 2.1 metres to infinity. I am assuming that this will just improve clarity and will not alter the FOV. Is that correct?
Surely focus has no bearing on FoV
Au contraire, unfortunately.

In principle, and ideally, focusing a lens should not change its focal length. But it often does. There's even a term for it - focus breathing.

Some SLR lenses are notorious for this. For example the Nikon 70-200mm f/2.8 VR is supposed to have a focal length of 200mm at the long end of its zoom. And it does, so long as it's focused at infinity. But if it's focused closer, the actual focal length changes even though it's still nominally zoomed out to 200mm. At the minimum focus distance I believe the actual focal length is around 135mm. It says 200mm on the barrel of the lens, it says 200mm in the metadata, but it's actually 135mm.

The reason this happens, of course, is that lenses focus by moving some of the optical elements relative to the others, and zoom lenses change focal length in exactly the same way. Designing a lens where moving an element has one effect but not the other is harder.

But of course it can be and is done. Cinema lenses - which are much more expensive than SLR lenses - are designed not to breathe, because focus pulling is a desirable effect and it's ruined if the focal length changes during the process.

So the answer depends on the design of the lenses used. I don't know whether there's a simple term to describe the property we're looking for, other than the absence of focus breathing. (A parfocal lens is one where the focus distance doesn't change as it zooms, but that's not quite the same thing.)
 
Have you thought about writing to NASA with your question? They are the people who would, if anyone, be able to answer.
 
Like many things, once you get down to the physics, the science 'defeats' the specification ;)

As Bazza says, the logical answer should come from NASA themselves, because if the fixed focal length lenses are the type that exhibit focus breathing, then they must have processing algorithms that compensate for it (in my industry, the coefficient of expansion is important to allow for when making dies). I would hope the data would be available!
 
If you have stereo pairs of images, or images taken from known positions, there is software that can align matched points and manually selected points to generate a 3D polygon mesh. You could then take measurements from that or reverse project a grid on the original pictures. If you do have stereo pairs and you need to do lots of measurements this will save a lot of time.
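The core of that stereo approach is simple once the pair is rectified: depth is inversely proportional to the horizontal shift (disparity) of a matched point between the two images. A minimal sketch; the focal length, baseline and disparity values below are illustrative, not actual rover specs:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from a rectified stereo pair.

    focal_px: focal length expressed in pixels (focal length / pixel pitch).
    baseline_m: separation between the two camera positions.
    disparity_px: horizontal shift of the point between the two images.
    """
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1200 px focal length, 0.42 m baseline, 10 px disparity
print(depth_from_disparity(1200, 0.42, 10))  # ~50.4 m
```

Photogrammetry packages automate the matching and mesh generation, but this is the relationship they rest on: a larger baseline or finer pixels give more depth resolution at long range.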

I'm on my phone so I only did a quick search, but it is worth a full search for "stereo photo metrology".

Free software http://stereo.sourceforge.net/

https://www.physicsforums.com/threads/creating-3d-coordinates-from-stereoscopic-images.761925/
 
Here is a link about focusing with the 100mm camera. It mentions at the bottom of the page that distances were calculated using a combination of Curiosity's images and the high-resolution satellite image data, which may be the only way to get accurate measurements, but approximate measurements will probably do me for now to get started. Look at the part of the link that says "annotated version", which shows some measurements in relation to the image; that may be a good place to check any grid against, as it is already calculated.

http://www.nasa.gov/mission_pages/msl/multimedia/pia16104.html
 
A thought just occurred to me.

The thinner atmosphere on Mars..... does that have an influence? And secondary to that, is the camera assembly hermetically sealed, and if so, what gas is it filled with?
 
A very good question, Box Brownie, and something well worth adding to the list of things that could increase the accuracy of any measurements, as the index of refraction would be slightly different from that of air, I should imagine.

ianp5a, Curiosity does have some stereoscopic images, but I am not sure if they are from the high-resolution Mastcam cameras; they may rather be from the navigation camera pairs, which are a known fixed distance apart.
 