There are several FFT plugins and some correlation plugins as well.
Yes, it's because of the image mean (in this context, called the
energy in the image). The energy in the template is not a problem
because it is a constant, and so does not shift the position of
the maximum.
The way to deal with it is to not use cross-correlation. Instead
use the sum of the square of the error. Something like:
Sum over region( (template - image)**2 )
and this equals:
Sum over region( template**2 - 2*template*image + image**2 )
Sum over region( template**2 ) is the energy in the template
Sum over region( template*image ) is the cross-correlation
Sum over region( image**2 ) is the energy in the image
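A quick NumPy sketch (variable names are mine) that checks this expansion term by term on two random patches:

```python
import numpy as np

rng = np.random.default_rng(0)
template = rng.random((8, 8))
region = rng.random((8, 8))  # the image patch under the template

ssd = np.sum((template - region) ** 2)       # sum of the square of the error
template_energy = np.sum(template ** 2)      # constant, same at every position
cross_corr = np.sum(template * region)       # the cross-correlation term
image_energy = np.sum(region ** 2)           # varies with position in the image

# The identity: SSD = template energy - 2 * cross-correlation + image energy
assert np.isclose(ssd, template_energy - 2 * cross_corr + image_energy)
```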
Note that the minimum of the sum of the square of the error only
corresponds (in position) to the maximum of the cross-correlation
when the energy in the image is constant from position to position.
Hence, cross-correlation is only effective for template matching
when the image has a background which is mostly dark and mostly
uniform. White letters on a black background, for example, fit
this criterion. An image which has large areas with a light
background does not.
The catch -- and in your case a huge catch -- is that the sum of
the square of the error cannot be calculated in the frequency
domain; if you use this technique then you are stuck in the
spatial domain.
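A minimal spatial-domain sketch of what that means in practice (the helper name `ssd_scores` is mine): the template is slid over every offset, and the best match is the *minimum* of the error surface, even on a light background where plain cross-correlation would be fooled.

```python
import numpy as np

def ssd_scores(image, template):
    """Sum of squared error at every offset; the best match is the MINIMUM."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float)
    out = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum((t - image[y:y + th, x:x + tw]) ** 2)
    return out

# Plant the template in a bright (non-dark) background and recover it.
image = np.full((32, 32), 200.0)
template = np.arange(25, dtype=float).reshape(5, 5)
image[10:15, 20:25] = template
scores = ssd_scores(image, template)
best = np.unravel_index(np.argmin(scores), scores.shape)
# best == (10, 20)
```

The nested Python loops are exactly the cost the post is warning about: nothing here can be handed off to an FFT, so the work grows with image size times template size.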
> > Another question is about correlation by convolution. I've noticed that
> > the correlation coefficient seems to give a spurious value in high contrast
> > areas of images, such as black and white stripes, or in general regions of
> > an image with a clear separation between black and white areas. Is this due
> > to the fact that in the formula mentioned in the URL above both the image
> > mean and the template mean appear? Is there a way to deal with such a
> > problem?
> Yes, it's because of the image mean (in this context, called the
> energy in the image). The energy in the template is not a problem
> because it is a constant, and so does not shift the position of
> the maximum.
> The way to deal with it is to not use cross-correlation.
I think you've got it wrong (or are referring to regular,
unnormalized cross-correlation).
The point of using the _normalized_ correlation described in the article is
to avoid exactly this problem. The correlation is divided by the
energy at the local point in the image, so the brightness of the image
no longer matters, only the "shape" of the pattern in the image.
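A sketch of that normalization (helper name `norm_corr` is mine; this version divides only by the local energy, whereas the article's formula may also subtract the means). Scaling a region's brightness by any constant leaves its score unchanged:

```python
import numpy as np

def norm_corr(image, template):
    """Correlation divided by the local image energy: brightness scale cancels."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template.astype(float)
    t_norm = np.sqrt(np.sum(t ** 2))
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            r = image[y:y + th, x:x + tw].astype(float)
            r_norm = np.sqrt(np.sum(r ** 2))      # energy at this local point
            if r_norm > 0:
                out[y, x] = np.sum(t * r) / (t_norm * r_norm)
    return out

template = np.array([[1., 2.], [3., 4.]])
image = np.zeros((6, 6))
image[2:4, 3:5] = 10.0 * template   # same "shape", 10x brighter
scores = norm_corr(image, template)
# scores[2, 3] == 1.0 despite the 10x brightness difference
```

By the Cauchy-Schwarz inequality the score is at most 1.0, reached exactly when the region is a scaled copy of the template, which is what makes it usable on bright backgrounds.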