Currently, I'm placing three finder markers in the same orientation and relationship as they would have in a normal QR code. By forcing my way into ZXing's package-private detector classes (why they're private, I can't say), I was able to write a test app that correctly identifies the three markers. The important part of my code is here: https://gist.github.com/WasabiFan/6c51442c860fe7033910fba482265adc .
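For context on what the detection keys on: a finder marker is recognized by scanning for runs of dark/light/dark/light/dark modules in a 1:1:3:1:1 ratio. The sketch below is my paraphrase (from memory) of the check that ZXing's `FinderPatternFinder` performs internally, so treat it as illustrative rather than the library's exact source; the class name `FinderRatioCheck` is mine:

```java
// Illustrative paraphrase of ZXing's internal 1:1:3:1:1 module-ratio check.
// stateCount holds the widths of five alternating runs along one scan line:
// dark, light, dark (the wide center), light, dark.
public class FinderRatioCheck {
    static boolean foundPatternCross(int[] stateCount) {
        int totalModuleSize = 0;
        for (int i = 0; i < 5; i++) {
            if (stateCount[i] == 0) {
                return false; // a missing run can never match
            }
            totalModuleSize += stateCount[i];
        }
        if (totalModuleSize < 7) {
            return false; // the pattern is 7 modules wide at minimum
        }
        float moduleSize = totalModuleSize / 7.0f;
        float maxVariance = moduleSize / 2.0f;
        // Each run may deviate less than 50% from its expected width.
        return Math.abs(moduleSize - stateCount[0]) < maxVariance
            && Math.abs(moduleSize - stateCount[1]) < maxVariance
            && Math.abs(3.0f * moduleSize - stateCount[2]) < 3.0f * maxVariance
            && Math.abs(moduleSize - stateCount[3]) < maxVariance
            && Math.abs(moduleSize - stateCount[4]) < maxVariance;
    }

    public static void main(String[] args) {
        System.out.println(foundPatternCross(new int[] {2, 2, 6, 2, 2})); // ideal ratio -> true
        System.out.println(foundPatternCross(new int[] {2, 2, 2, 2, 2})); // center too narrow -> false
    }
}
```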
This works well; however, three markers alone don't determine the target's orientation deterministically. Three points pin down an affine transform, but not a perspective one: "skewing" transformations aren't captured by the three aligned markers. To recover that, I need a fourth marker point offset from the others; after all, normal QR codes have alignment patterns that break from the square configuration of the finder markers. This is the part I'm unsure about: what is the best way to implement it? What combination of the finder marker, alignment pattern, and combined "Detector" logic might work for my case?
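Concretely, four point correspondences are exactly what's needed to pin down a perspective (projective) transform. Below is a self-contained sketch of that math, mirroring what ZXing's public `com.google.zxing.common.PerspectiveTransform` does with `squareToQuadrilateral`; in a real app I could presumably call that class directly instead of reimplementing it. The class name `SkewDemo` and the sample coordinates are mine, not from my actual code:

```java
// Sketch: recovering the full perspective mapping once a fourth point is
// known, modeled on ZXing's PerspectiveTransform (Heckbert's formulation).
public class SkewDemo {
    final float a11, a21, a31, a12, a22, a32, a13, a23, a33;

    SkewDemo(float a11, float a21, float a31,
             float a12, float a22, float a32,
             float a13, float a23, float a33) {
        this.a11 = a11; this.a21 = a21; this.a31 = a31;
        this.a12 = a12; this.a22 = a22; this.a32 = a32;
        this.a13 = a13; this.a23 = a23; this.a33 = a33;
    }

    // Maps the unit square (0,0),(1,0),(1,1),(0,1) onto an arbitrary
    // quadrilateral (x0,y0)..(x3,y3) -- e.g. three finder markers plus the
    // fourth offset marker.
    static SkewDemo squareToQuadrilateral(float x0, float y0, float x1, float y1,
                                          float x2, float y2, float x3, float y3) {
        float dx3 = x0 - x1 + x2 - x3;
        float dy3 = y0 - y1 + y2 - y3;
        if (dx3 == 0.0f && dy3 == 0.0f) {
            // Quadrilateral is a parallelogram: a plain affine transform suffices.
            return new SkewDemo(x1 - x0, x2 - x1, x0,
                                y1 - y0, y2 - y1, y0,
                                0.0f, 0.0f, 1.0f);
        }
        float dx1 = x1 - x2;
        float dx2 = x3 - x2;
        float dy1 = y1 - y2;
        float dy2 = y3 - y2;
        float denominator = dx1 * dy2 - dx2 * dy1;
        float a13 = (dx3 * dy2 - dx2 * dy3) / denominator;
        float a23 = (dx1 * dy3 - dx3 * dy1) / denominator;
        return new SkewDemo(x1 - x0 + a13 * x1, x3 - x0 + a23 * x3, x0,
                            y1 - y0 + a13 * y1, y3 - y0 + a23 * y3, y0,
                            a13, a23, 1.0f);
    }

    // Applies the projective transform to a point in unit-square coordinates.
    float[] transform(float x, float y) {
        float denominator = a13 * x + a23 * y + a33;
        return new float[] {
            (a11 * x + a21 * y + a31) / denominator,
            (a12 * x + a22 * y + a32) / denominator
        };
    }

    public static void main(String[] args) {
        // Three corners plus a fourth point that does NOT complete a
        // parallelogram, i.e. genuine perspective distortion.
        SkewDemo t = squareToQuadrilateral(0, 0, 4, 0, 3, 3, 0, 4);
        float[] center = t.transform(0.5f, 0.5f);
        System.out.println(center[0] + ", " + center[1]); // prints "2.0, 2.0"
    }
}
```

With this, once the fourth marker is located, every point of the target can be mapped back to a canonical square, which also disambiguates the orientation.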
The main reason I hesitate is that I don't have any reference for the intent of these classes. The Detector class may work, but is it expected to function when the content between the markers isn't a valid binary grid of QR code data?
Thanks!