Hi Joe,
add this to your code:
gray = preprocess(gray_base.copy())
cv2.imshow(f"Detected unique hex codes {idx}", gray)  # one window per preprocessing variant
cv2.waitKey(0)
you'll see the problem: two of the preprocessing methods generate junk, and the other two are almost identical; for those the problem is a threshold that is too high. Try 85 (the font on the right is lighter). Always visualize the image after EACH preprocessing sub-step to see where it starts to go bad.
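For example (just a sketch, assuming your preprocess() does something like blur → threshold → morphology; swap in your actual sub-steps):

blurred = cv2.GaussianBlur(gray_base, (5, 5), 0)                 # hypothetical sub-step 1
cv2.imshow("after blur", blurred)
_, thresh = cv2.threshold(blurred, 85, 255, cv2.THRESH_BINARY)   # sub-step 2, with the lower threshold
cv2.imshow("after threshold", thresh)
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
opened = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)        # sub-step 3
cv2.imshow("after morphology", opened)
cv2.waitKey(0)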
For a problem like this with fixed patterns/scale/colors, etc., I would use cv2.matchTemplate: it should be 100% accurate.
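A minimal sketch (the file names, the template crop and the 0.8 score threshold are placeholders, not taken from your code):

import cv2
import numpy as np

img = cv2.imread("grid.png", cv2.IMREAD_GRAYSCALE)         # full grid image
tmpl = cv2.imread("template_A.png", cv2.IMREAD_GRAYSCALE)  # one cropped hex character
res = cv2.matchTemplate(img, tmpl, cv2.TM_CCOEFF_NORMED)   # similarity map
ys, xs = np.where(res >= 0.8)                              # keep every strong match
h, w = tmpl.shape
for x, y in zip(xs, ys):
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)     # mark each hit
cv2.imshow("matches", img)
cv2.waitKey(0)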
You can also "blend" each code into a blob with dilate/erode, run cv2.connectedComponentsWithStats, crop out each region, and run matchTemplate on each fragment. But I think the grid location is fixed, so I would just use two nested loops to crop the codes at exact locations and template-match them.
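The nested-loop version would look roughly like this (the grid geometry numbers and the template file names are placeholders you'd have to measure/prepare yourself; it reuses gray_base from your code):

x0, y0, cw, ch, rows, cols = 40, 60, 32, 48, 8, 16  # top-left corner, cell size, grid shape (guesses)
templates = {name: cv2.imread(f"{name}.png", cv2.IMREAD_GRAYSCALE) for name in "0123456789ABCDEF"}
for r in range(rows):
    for c in range(cols):
        cell = gray_base[y0 + r * ch : y0 + (r + 1) * ch, x0 + c * cw : x0 + (c + 1) * cw]
        # templates must be no larger than the cell; pick the best-scoring one
        best = max(templates, key=lambda k: cv2.matchTemplate(cell, templates[k], cv2.TM_CCOEFF_NORMED).max())
        print(r, c, best)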
If you have a smartphone picture as input, you may want to use the first method, so you can do a cv2.warpPerspective to align/rescale the grid to a fixed size/location before proceeding.
But matchTemplate should handle multiple matches fine, so there is no need to complicate things.
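If you do end up with a phone photo, the alignment step I mentioned is roughly this (the four corner points are placeholders you'd detect or click manually, and 'photo' is your loaded input image):

import numpy as np

src = np.float32([[120, 80], [980, 95], [1010, 760], [100, 740]])  # grid corners in the photo (guessed)
dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])         # where they should land
M = cv2.getPerspectiveTransform(src, dst)
aligned = cv2.warpPerspective(photo, M, (800, 600))                # fixed-size, axis-aligned grid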
Please let me know how it goes.
Bye
Lorenzo