Now the details of actually how to do this escape me for now. But a few approaches come to mind as statistically convincing:
- compute the cross-correlation of the images chunk by chunk, showing that each chunk is correlated
- create a difference image and compare the fluctuations in the difference to the fluctuations of the image itself, or test whether they are statistically different from chunk to chunk
- one might even dream up a Kolmogorov-Smirnov test showing that the distributions of the 2 images are the same, or that the difference is normally distributed, or something like that.
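For illustration, here is roughly how those three checks might look in Python/NumPy on synthetic data (a sketch only, not IDL; the images, the 0.5 factor, and the 0.1 KS threshold are made up for the demo):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
base = rng.normal(100.0, 10.0, size=(64, 64))        # synthetic "scene"
img1 = base + rng.normal(0.0, 1.0, size=base.shape)  # same scene, noise A
img2 = base + rng.normal(0.0, 1.0, size=base.shape)  # same scene, noise B

# Idea 1: cross-correlation of corresponding chunks (here: whole images).
r = np.corrcoef(img1.ravel(), img2.ravel())[0, 1]
print(r > 0.9)  # shared structure -> r close to 1

# Idea 2: the fluctuations of the difference image should be much smaller
# than the fluctuations of the image itself when the structure cancels.
diff = img1 - img2
print(diff.std() < 0.5 * img1.std())

# Idea 3: KS test that the two pixel-value distributions agree.
stat, p = ks_2samp(img1.ravel(), img2.ravel())
print(stat < 0.1)  # small KS statistic: distributions look the same
```

Note the KS test treats pixels as independent samples, which they are not; it checks the value distributions, not the spatial layout.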
But really I think you are best off beating your head on c_correlate until it makes sense.
> The KS-test (and co.) is another idea. However I'll have to think
> about whether it makes sense. Wouldn't it always pass when the images
> contain the same objects, only in other areas of the image?
Yeah, it sure might. I guess I don't know enough about the images you are talking about to say for certain. One thought might be to difference the focal image and one that is the "same", and KS test that against the difference between the focal image and one that is "different". But maybe I'm way off here...
then run the following code:

; .run image_registration
read_jpeg, 'apple_logo_rainbow_fruit.jpg', img, /true
tv, img[1, *, *]
sz = size(img, /dimen)   ; don't shadow the SIZE function
cut_size = 50
sub = img[*, 140:190, 300:350]
ans = fltarr(sz[1], sz[2])
FOR i = 0UL, sz[1]-cut_size-1, 10 DO BEGIN
   FOR j = 0UL, sz[2]-cut_size-1, 10 DO BEGIN
      ans[i, j] = c_correlate(sub, img[*, i:i+cut_size, j:j+cut_size], 0)
      tv, sub, i, j, /true
      tv, img[1, i:i+cut_size, j:j+cut_size], i, j
   ENDFOR
ENDFOR
ind = where(ans eq max(ans))
wheretomulti, ans, ind, col, row
tv, img[1, *, *]
tv, sub, col, row, /true
END
This is fun, as it finds just the right place. To be smart, one would iteratively move by different numbers of pixels. Since I knew the answer, I was able to take large steps and still get the right answer; taking 1-pixel steps gives the same answer but takes too long.
So this is what I came up with so far (see code below) thanks to Brian's suggestions.
To say whether 2 images are "the same", I subdivided "img2" into subimages. For each subimage I try to find its best-correlating position in "img1", with a loop over c_correlate while shifting the subimage over img1.
The criteria for "images are the same" are:
1. A correlation tolerance level (Rtol=0.9): each subimage must correlate better than Rtol with some part of img1.
2. The subimage positions found in img1 must be regularly aligned with respect to each other, within a pixel tolerance (shifttol=0.01*imgsize).
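Roughly, the two criteria amount to something like this sketch (in Python/NumPy rather than IDL; the brute-force `best_match` helper, the step size, and the test images are my own invention, not the poster's code):

```python
import numpy as np

def best_match(sub, img, step=4):
    """Slide `sub` over `img`; return (best_r, (row, col)) by Pearson r."""
    sh, sw = sub.shape
    H, W = img.shape
    best = (-2.0, (0, 0))
    s = sub.ravel()
    for i in range(0, H - sh + 1, step):
        for j in range(0, W - sw + 1, step):
            r = np.corrcoef(s, img[i:i+sh, j:j+sw].ravel())[0, 1]
            if r > best[0]:
                best = (r, (i, j))
    return best

rng = np.random.default_rng(1)
img1 = rng.normal(size=(64, 64))
img2 = img1.copy()                  # the "same" image

Rtol, shifttol = 0.9, 0.01 * 64
ok = True
for i0 in (0, 32):
    for j0 in (0, 32):
        r, (i, j) = best_match(img2[i0:i0+32, j0:j0+32], img1)
        # criterion 1: correlation above Rtol; criterion 2: the match
        # lands (within shifttol pixels) where the subimage came from
        ok &= (r > Rtol) and abs(i - i0) <= shifttol and abs(j - j0) <= shifttol
print(ok)  # both criteria met for identical images
```

For truly identical images every subimage matches at its own position with r = 1, so both criteria pass trivially; the interesting cases are noisy or shifted pairs.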
I have a question for you: when the criteria above are met, would you believe the images are the same? If not, do you have a better suggestion?
If you wonder what images I'm talking about, it ain't all roses like in the example. I have a distribution map of the copper content in a fragment of a painting. The second distribution map represents the azurite content (azurite is a copper containing pigment). Both should be the same (correlated) if there are no other copper containing pigments present (or if the other pigments are mixed with azurite, so having the same distribution). I want to prove that this is the case.
CODE:
;------------------------
pro test
path = Filepath('rose.jpg', Subdir=['examples', 'data'])
read_jpeg, path, img, /true
; luminance conversion (the 0.11 weight belongs to the blue channel)
img = 0.3*Reform(img[0,*,*]) + 0.59*Reform(img[1,*,*]) + 0.11*Reform(img[2,*,*])
Interesting. But if I use the "mutual information" (0 when image1 and image2 are independent and 1 when they are dependent, right?) instead of the cross-correlation coefficient, how do I benefit from that?
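For what it's worth, a quick sketch of a normalized mutual information measure, 2*I/(H1+H2), which is what actually lives in [0, 1] (plain MI is unbounded). This is a generic joint-histogram implementation in Python, not anything from the thread:

```python
import numpy as np

def normalized_mi(a, b, bins=32):
    """Normalized mutual information 2*I(A;B)/(H(A)+H(B)), in [0, 1]."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = h / h.sum()                        # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return 2.0 * (hx + hy - hxy) / (hx + hy)

rng = np.random.default_rng(2)
a = rng.normal(size=(64, 64))
b = rng.normal(size=(64, 64))    # independent of a
print(normalized_mi(a, a) > normalized_mi(a, b))  # dependence is detected
```

The benefit over Pearson correlation is that MI catches any deterministic dependence (e.g. a nonlinear intensity mapping between the two maps), not just linear ones; the "close enough" threshold problem remains, though.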
<Michael.Mill...@gmail.com> wrote: >What about just using the RMS difference between the two images?
Whenever I use cross-correlation, RMS difference, mutual information, Kullback-Leibler distance, ... (there seem to be thousands), it always comes down to this: you have a measure of image difference (i.e. a number) which should ideally be close to 1 (or some other value, depending on what you used). When it's "close enough", the images match.
But what is "close enough"? I guess nobody knows. Therefore I tried Brian's approach of dividing into subimages, and not only put a threshold on the correlation coefficient (or RMS or whatever) but also check whether the subimages are located (+/- x pixels) at their original position after the cross-correlation loop. (This can go wrong when parts of the images are just noise.)
Another way to solve the "close enough" problem is statistical hypothesis testing of the Pearson correlation coefficient (http://davidmlane.com/hyperstat/B62955.html). However, this would only allow me to say that R is significantly different from zero (if p-value <= 0.05, that is), so there is some correlation. Fisher's z' transformation of R should allow testing H0: R=x, but this doesn't work for x=1 because z'=Inf and the p-value is 0 (i.e. R is always significantly different from 1). So this approach doesn't work (or I'm missing something).
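The breakdown at R=1 is easy to demonstrate. Here is a sketch of the Fisher z' test in Python (the helper name and the erfc-based normal tail are my own framing of the standard formulas z' = atanh(R), SE = 1/sqrt(n-3)):

```python
import math
import numpy as np

def fisher_z_pvalue(r, rho0, n):
    """Two-sided p-value for H0: rho = rho0 via Fisher's z' transform."""
    z = np.arctanh(r)
    with np.errstate(divide='ignore'):
        z0 = np.arctanh(rho0)           # diverges to +inf at rho0 = 1
    stat = (z - z0) * math.sqrt(n - 3)  # approximately standard normal
    return math.erfc(abs(stat) / math.sqrt(2.0))

# H0: rho = 0 is easily rejected for r = 0.95, n = 100 ...
print(fisher_z_pvalue(0.95, 0.0, 100) < 0.05)
# ... but H0: rho = 1 degenerates: z'(1) = inf, so p = 0 for any r < 1
print(fisher_z_pvalue(0.95, 1.0, 100) == 0.0)
```

This is exactly the problem described above: the test can only reject "no correlation" or "rho = x" for x strictly inside (-1, 1); "rho = 1" is not a testable point hypothesis in this framework.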
I'm just trying to find out whether there is a more robust/objective image comparison method.
Wox writes: > I'm just trying to find out whether there is a more robust/objective > image comparison method.
I know there is, although I don't know what it is.
My son is an intellectual property specialist for a company that represents photographers and sells images. They hire an Israeli company to search the web for images that have not been paid for.
Whatever software they are using (and they don't say, naturally) is VERY good. They can find images that have been flipped, re-colored, resized, clipped, and manipulated in other various ways. I'm pretty sure they are not relying totally on RMS difference. ;-)
David -- David Fanning, Ph.D. Fanning Software Consulting, Inc. Coyote's Guide to IDL Programming: http://www.dfanning.com/ Sepore ma de ni thui. ("Perhaps thou speakest truth.")
> But what is "close enough"? I guess nobody knows.
There are robust methods for evaluating statistical parametric maps (SPMs). Very generally, all of these methods involve using a set of images to calculate, at each point in the images, the statistical parameter appropriate for the null hypothesis under question. Applying a Bonferroni correction is required to keep the error rate at an acceptable level. The result is a map of the statistical parameter (or p-value) that is thresholded at a significance level. In order to account for correlations within each image, the data are often smoothed.
There is a nice online bibliography at http://www.fil.ion.ucl.ac.uk/spm/doc/biblio/. A good starting point might be J Comp Assisted Tomography 19 (1995) 788, "Estimating Smoothness in Statistical Parametric Maps: Variability of p Values." Or just google for SPM.
If you have only two images, you will always have trouble calculating an SPM. You could try treating the data as repeated measures of the same object. Then you could calculate a single paired t-test for the entire data set. If the test is significant, the hypothesis that the mean difference is zero could be rejected. In the sort of tomographic imaging that I'm familiar with, this is dangerous because individual points in each image are correlated with other points as a consequence of the image reconstruction methods. Another simple statistic is a z-score map: (difference between test image and mean of a standard data set)/(std dev of a standard data set). That is an easy way to see if an image is consistent with a calibration data set, but again will require the proper Bonferroni corrections to avoid high error rates.
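A minimal sketch of such a z-score map with a Bonferroni-corrected threshold, in Python with synthetic data (the calibration set, the deviant pixel, and all numbers are invented for the demo):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
npix = 32 * 32
calib = rng.normal(50.0, 5.0, size=(200, npix))  # calibration image set
mu, sd = calib.mean(axis=0), calib.std(axis=0, ddof=1)

test = rng.normal(50.0, 5.0, size=npix)          # image to be checked
test[100] += 50.0                                # one deviant pixel

zmap = (test - mu) / sd                          # z-score map
# Bonferroni: per-pixel two-sided alpha = 0.05 / npix keeps the
# family-wise error rate at roughly 5% over all 1024 pixels
thresh = norm.isf(0.05 / npix / 2.0)
flagged = np.flatnonzero(np.abs(zmap) > thresh)
print(100 in flagged)                            # the deviant pixel is caught
```

Without the Bonferroni correction a naive |z| > 1.96 cut would flag dozens of the 1024 pixels by chance alone, which is the "high error rate" warned about above.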
On Apr 6, 11:46 am, David Fanning <n...@dfanning.com> wrote:
> My son is an intellectual property specialist for > a company that represents photographers and sells > images. They hire an Israeli company to search the > web for images that have not been paid for.
> Whatever software they are using (and they don't say, > naturally) is VERY good. They can find images that > have been flipped, re-colored, resized, clipped, > and manipulated in other various ways. I'm pretty > sure they are not relying totally on RMS difference. ;-)
That reminded me that the open source program gqview includes a tool for finding duplicate images based on a similarity measure. Maybe the code would shed some light...
<Michael.Mill...@gmail.com> wrote: >That reminded me that the open source program gqview includes a tool >for finding duplicate images based on a similarity measure. Maybe the >code would shed some light...
The code can be found in "similar.c"
It seems that both images are subdivided into 32x32 subimages. For each subimage the average value is calculated. Then the (normalized) sum of differences between the two resulting 32x32 arrays is taken as the similarity measure.
So this is similar to the RMS approach.
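A loose reimplementation of that idea in Python (the exact normalization in gqview's similar.c differs; this only shows the downsample-and-difference scheme):

```python
import numpy as np

def gqview_similarity(a, b, grid=32):
    """Downsample each image to a grid x grid map of cell averages, then
    return 1 - normalized mean absolute difference (1.0 = identical)."""
    def thumb(img):
        H, W = img.shape
        return img[:H - H % grid, :W - W % grid] \
            .reshape(grid, H // grid, grid, W // grid).mean(axis=(1, 3))
    ta, tb = thumb(a.astype(float)), thumb(b.astype(float))
    scale = max(ta.max(), tb.max(), 1e-12)
    return 1.0 - np.abs(ta - tb).mean() / scale

rng = np.random.default_rng(4)
a = rng.uniform(0, 255, size=(64, 64))
print(gqview_similarity(a, a) == 1.0)         # identical images score 1
print(gqview_similarity(a, 255.0 - a) < 0.9)  # inverted image scores lower
```

The averaging makes the measure cheap and robust to small-scale noise, but like plain RMS it is blind to anything the 32x32 thumbnails smooth away.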
As for the SPMs: it seems I'll have to invest a lot of time to understand and implement this. You seem to have doubts about its use in my case. Do you think it's worth the effort?
> <Michael.Mill...@gmail.com> wrote: > >That reminded me that the open source program gqview includes a tool > >for finding duplicate images based on a similarity measure. Maybe the > >code would shed some light...
> The code can be found in "similar.c"
> It seems that both images are subdivided in 32x32 subimages. For each > subimage the average value is calculated. Then the (normalized) sum of > differences between the two resulting 32x32 arrays is taken as > similarity measure.
> So this is similar to the RMS approach.
> As for the SPMs: it seems I'll have to invest a lot of time to > understand and implement this. You seem to have doubts about its use > in my case. Do you think it's worth the effort?