I am trying to finish my Master's in Computational Photography, and to do that I need to reproduce, with a computer algorithm, the blurring caused by a thin divergent lens positioned right in front of my DSLR camera. My approach is to take a picture of a scene (without the divergent lens) and convolve it with the PSF corresponding to the lens-induced defocus.
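For context, the convolution step looks roughly like the sketch below. This is a minimal, self-contained illustration assuming a uniform-disk (pillbox) defocus PSF and a synthetic grayscale image; the PSF radius and image are placeholders, not my real data:

```python
# Sketch of the defocus simulation: convolve an image with a pillbox PSF.
# The image and PSF radius here are stand-ins, not the actual capture.
import numpy as np
from scipy.signal import fftconvolve

def disk_psf(radius_px: float, size: int) -> np.ndarray:
    """Uniform circular (pillbox) PSF, normalized to sum to 1."""
    y, x = np.mgrid[-(size // 2): size // 2 + 1, -(size // 2): size // 2 + 1]
    psf = (x**2 + y**2 <= radius_px**2).astype(float)
    return psf / psf.sum()

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for the captured photo
psf = disk_psf(radius_px=3.0, size=15)  # defocus blur of ~3 px radius
blurred = fftconvolve(image, psf, mode="same")
```

The blurring itself works fine; my problem is only with matching the size of the result to the ground truth.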

To verify that the algorithm is correct, I also take a picture of the scene with the lens in front of my camera (the distance between the camera and the lens is 20 mm). This is the ground truth.

However, when I try to compare the two images, I notice that their sizes are different because the divergent lens shrinks the image. Unfortunately, even after I resize (downscale) the generated image using a computed magnification factor, the resulting size is still inconveniently different from the expected one.

The formula for magnification that I am using is M = 1 / (1 - d * S) = 0.980, where "M" is the magnification factor, "d" is the vertex distance (20 mm = 0.020 m) and "S" is the lens power in diopters (-1).

I can only get the expected magnification factor (roughly 0.9667, judging by the images attached to this message) if I force "d" to be 34.48 mm, which makes no sense because that is about 1.7 times the measured value.

Does anyone have any idea what could be wrong? Maybe I am not measuring the vertex distance correctly? I am measuring it from the front vertex of the camera lens to the back surface of the divergent lens. Should it perhaps be measured from the entrance pupil of the camera instead?

Here are both pictures: the original one on the left (without the extra lens) and the ground truth on the right (with the extra lens), so you can check that the perceived demagnification is indeed roughly 0.9667.

compare.png

Thanks.