Attached are two samples: A1.png is a version 1 QR code (the smallest version, which has no alignment pattern), and A2.png is a version 5 QR code.
Both of them are hard to detect.
zxing's detector works like this:
step 0: search for the 3 finder patterns.
step 1: assume the 3 finder patterns and the alignment pattern are the 4 corners of a parallelogram, and from that estimate an (X, Y) position for the alignment pattern.
step 2: use (X, Y) as the start point for the AlignmentPatternFinder.
step 3: search for an alignment pattern at each scale around the start point; once a result is found, stop the finder.
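The step 1 estimate can be sketched like this (a minimal illustration, not ZXing's actual code; the Point class and method name are my own, and the 3-module correction assumes the alignment pattern sits 3 modules inside the bottom-right corner of the symbol):

```java
// Sketch of step 1: estimate the alignment-pattern center, assuming the three
// finder-pattern centers and the alignment pattern form a parallelogram.
public class AlignmentEstimate {
    public static final class Point {
        public final float x, y;
        public Point(float x, float y) { this.x = x; this.y = y; }
    }

    // modulesBetweenCenters: module distance between finder-pattern centers
    // (dimension - 7 for an upright symbol).
    public static Point estimateAlignmentCenter(Point topLeft, Point topRight,
                                                Point bottomLeft,
                                                int modulesBetweenCenters) {
        // Fourth corner of the parallelogram spanned by the three finder centers.
        float cornerX = topRight.x + bottomLeft.x - topLeft.x;
        float cornerY = topRight.y + bottomLeft.y - topLeft.y;
        // Pull 3 modules back toward the top-left, since the alignment pattern
        // sits 3 modules inside the bottom-right corner.
        float correction = 1.0f - 3.0f / modulesBetweenCenters;
        return new Point(topLeft.x + correction * (cornerX - topLeft.x),
                         topLeft.y + correction * (cornerY - topLeft.y));
    }
}
```

For an upright version 5 symbol (37 modules, finder centers 30 modules apart) with finder centers at (0,0), (30,0) and (0,30), this yields (27, 27), which matches the alignment pattern's actual position 27 modules from the top-left finder center. The estimate is exact only when the four patterns really do form a parallelogram.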
In real-world scenes, the camera image almost always has some perspective distortion (like A2.png and A3.png), or the symbol has no alignment pattern at all (version 1, like A1.png).
So:
For steps 1 and 2: in some cases the 3 finder patterns and the alignment pattern do not form a parallelogram (under perspective they form a trapezoid). For A2.png and A3.png, the algorithm therefore estimates a bad start point.
For step 3: once we have a bad start point, we may detect a wrong alignment pattern before the right one, and then stop, because step 3 gives us just one shot, the first result.
For example, with A2.png I used a zxing Android app to display the four patterns it draws on the frame: two wrong alignment patterns were detected every time, because of the bad start point from step 2.
So I think the detector could be improved in step 1 or step 3. For example, when I accept more than one result in step 3 instead of stopping at the first, the detector performs noticeably better, at a cost of about 20% more time.
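The "accept more results" idea could look roughly like this (a hedged sketch under my own names, not ZXing's API: collect all alignment-pattern candidates, then rank them so the downstream decode can try the most plausible one first and fall back to the others if decoding fails):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch of a modified step 3: instead of stopping at the first candidate,
// keep all of them and sort by squared distance to the estimated start point.
public class AlignmentCandidateChooser {
    public static final class Candidate {
        public final float x, y;
        public Candidate(float x, float y) { this.x = x; this.y = y; }
    }

    // Returns candidates ordered best-first; the caller would attempt a
    // perspective transform / decode with each in turn.
    public static List<Candidate> rankCandidates(List<Candidate> candidates,
                                                 float estX, float estY) {
        List<Candidate> ranked = new ArrayList<>(candidates);
        ranked.sort(Comparator.comparingDouble(c -> {
            double dx = c.x - estX;
            double dy = c.y - estY;
            return dx * dx + dy * dy;  // squared distance is enough for ranking
        }));
        return ranked;
    }
}
```

Distance to the estimate is only one possible ranking; since the whole problem is that the estimate can be bad, a stronger criterion might be the candidate's pattern-quality score, or simply attempting a decode with each candidate until one succeeds (which is where the ~20% extra time would go).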
I am working on a QR code reader for real-world scenes. Does anyone have good ideas?