There's no definitive answer to this: the algorithm contains heuristics, and they'll do whatever they do. Even that depends on what the camera captures under the given lighting conditions and brightness adjustment. As Daniel mentioned, in the end you get a bit matrix, but that happens by mapping a color image to a grayscale image, then to a binary image (and finally to a sampled matrix).
The algorithm is pretty heavily tuned ... in fact there are several. The default algorithm (did this change recently? I've been meaning to check; I think I remember hearing about a new binarizer) works on blocks of pixels rather than on each pixel individually, in order to adapt to variations in dynamic range across an image, e.g., due to shadows. The QR decoder is also more sensitive to errors in some parts of the image than others, e.g., the finder patterns.
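To illustrate the block-based idea, here's a minimal sketch of adaptive binarization: each block is thresholded against its own average luminance, so a shadowed region is compared to its local brightness rather than a global threshold. This is just an illustration in the spirit of the default binarizer; the class name, block size, and thresholding rule here are my own assumptions, not ZXing's actual code.

```java
// Hypothetical sketch of block-based (adaptive) binarization.
// BLOCK, the class name, and the per-block average threshold are
// illustrative assumptions, not the real ZXing implementation.
public class BlockBinarizer {
    static final int BLOCK = 8; // assumed block size for illustration

    // Threshold each BLOCK x BLOCK region by its own average luminance.
    static boolean[][] binarize(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        boolean[][] bits = new boolean[h][w];
        for (int by = 0; by < h; by += BLOCK) {
            for (int bx = 0; bx < w; bx += BLOCK) {
                int yEnd = Math.min(by + BLOCK, h), xEnd = Math.min(bx + BLOCK, w);
                int sum = 0, count = 0;
                for (int y = by; y < yEnd; y++)
                    for (int x = bx; x < xEnd; x++) { sum += gray[y][x]; count++; }
                int avg = sum / count;
                for (int y = by; y < yEnd; y++)
                    for (int x = bx; x < xEnd; x++)
                        bits[y][x] = gray[y][x] < avg; // dark pixel => black module
            }
        }
        return bits;
    }

    public static void main(String[] args) {
        // Left half bright, right half dim (a "shadow"): a single global
        // threshold would misclassify one half; local averages handle both.
        int[][] img = new int[8][16];
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 16; x++)
                img[y][x] = (x < 8) ? (y < 4 ? 200 : 120) : (y < 4 ? 80 : 20);
        boolean[][] bits = binarize(img);
        System.out.println(bits[6][2] + " " + bits[6][10]); // true true
    }
}
```

Both dark rows come out as black modules even though the "dark" pixels in the shadowed half (20) are darker than the dark pixels in the bright half (120) by more than either block's contrast.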
> Does anyone know exactly how artificial vision algorithm works?
> Does it convert images to grayscale using this formula?
> float gray = color.r * 0.3 + color.g * 0.59 + color.b * 0.11;
This depends on the platform, since it's handled in platform-dependent code. In general, though, I believe that's roughly what they do.
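For what it's worth, a common way to do that conversion with integer arithmetic uses weights close to the ones quoted above (they approximate the BT.601 luma coefficients 0.299/0.587/0.114). This is a generic sketch, not the code any particular platform port actually uses; on some platforms the camera already delivers luminance (the Y plane of YUV), so no conversion is needed at all.

```java
// Hypothetical sketch of an RGB -> grayscale step using an integer
// approximation of the BT.601 weights (0.299, 0.587, 0.114), close to
// the 0.3/0.59/0.11 formula quoted above. Not any port's actual code.
public class Luma {
    static int gray(int r, int g, int b) {
        // 306 + 601 + 117 = 1024, so the shift divides by the weight sum
        return (306 * r + 601 * g + 117 * b) >> 10;
    }

    public static void main(String[] args) {
        System.out.println(gray(255, 255, 255)); // white -> 255
        System.out.println(gray(255, 0, 0));     // pure red -> 76
    }
}
```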
2011/1/13 Ilan Singer <ilan....@gmail.com>:
The library is open source, and the code can be found in the code
repository; I am not going to go into further detail. Visit
http://code.google.com/p/zxing/source/checkout for brief instructions
on checking out the code.
However, the question you're asking suggests that you're probably not a
Java software developer. Libraries are not used directly by end users;
they are used by applications, which end users run.
2011/1/14 Ilan Singer <ilan....@gmail.com>: