I'm working on an image search feature for drawings that users import into my application. The drawings typically have a legend of symbols, and those symbols are used many times throughout the drawing. My current workflow lets the user select the region around one of the symbols in the legend (this becomes my model), then locates instances of the model within the rest of the drawing. I'm currently using a FeaturesDetector on both images (FAST and SURF have both worked great!), then using the KNearestNeighbor class to find matches between the model and the source drawing (similar code to that found in the panoramic stitch sample). I'm not sure whether this is the correct approach.
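Here's a simplified sketch of what I'm doing now. It's adapted from the panorama sample, so the class names (SpeededUpRobustFeaturesDetector, KNearestNeighborMatching) are the Accord.NET ones; the helper method itself is just my own minimal version, and on older framework versions IntPoint lives under AForge rather than Accord:

using System.Drawing;
using Accord;
using Accord.Imaging;

static IntPoint[][] MatchModelToDrawing(Bitmap model, Bitmap drawing)
{
    // Step 1: detect feature points in both images
    // (SURF shown here; FAST worked for me as well)
    var surf = new SpeededUpRobustFeaturesDetector();
    var modelPoints = surf.ProcessImage(model).ToArray();
    var drawingPoints = surf.ProcessImage(drawing).ToArray();

    // Step 2: match the two point sets with k-nearest neighbors.
    // matches[0] holds coordinates in the model, matches[1] the
    // corresponding coordinates in the drawing.
    var matcher = new KNearestNeighborMatching(5);
    return matcher.Match(modelPoints, drawingPoints);
}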
I'm new to all of this, so it's probably just a gap in my understanding, but one problem seems to be this: my model might return 20 feature points, for example, while the source image returns 1,000. The KNearestNeighbor match always returns the same number of matches as there are feature points in the model. Is that correct? Or is KNearestNeighbor matching the wrong approach entirely? I'm probably way off... so I'm looking for any guidance anyone can provide.
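The only post-processing I've found is the RANSAC step from that same panorama sample, which at least separates geometrically consistent matches from spurious ones. Something like the following (again, Accord.NET class names; the inlier indexing at the end is my own guess, and I don't know whether a single homography even makes sense when the symbol appears many times in the drawing):

using System.Linq;

// Feed the raw k-NN matches to a robust homography estimator;
// RANSAC marks which correspondences are geometrically consistent.
var ransac = new RansacHomographyEstimator(0.001, 0.99);
MatrixH homography = ransac.Estimate(matches[0], matches[1]);

// Keep only the correspondences RANSAC marked as inliers
IntPoint[] inlierModel = ransac.Inliers.Select(i => matches[0][i]).ToArray();
IntPoint[] inlierDrawing = ransac.Inliers.Select(i => matches[1][i]).ToArray();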
Thanks in advance for your time!
McEd
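P.S. Here's the snippet from the matcher's source that I think explains the fixed match count (it appears to come from KNearestNeighborMatching.Match):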
// We should build the classifiers with the highest number
// of training points. Thus, if we have more points in the
// second image than in the first, we'll have to swap them
if (points2.Length > points1.Length)
{
var aux = points1;
points1 = points2;
points2 = aux;
swap = true;
}
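If I'm reading that right, the k-NN model is always trained on whichever point set is larger, and every point in the smaller set (my 20-point model) is then assigned its nearest neighbor, so I always get exactly one match per model feature point regardless of match quality. Is the usual practice to filter those matches afterward, or to use a different matcher entirely?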