Hi,
Today I've updated the cited package to v1.4-1, following the last Yellowbrick release. This was tried last week, but at that time I still couldn't find a way to let the tests pass, and up to the previous version I was just ignoring them all. For the current version of the package, some patches were applied to the test code (not added to the packaged result) in order to ensure it would pass the tests during the checking step of the packaging process (all packages in AUR are based on a shell script file named PKGBUILD, you can check the patches as "sed" lines). Basically, I enforced the pytest marks from Windows/Conda to be applied, though the exact numbers for image diff comparison were different from the ones reported. I'm not sure if it's a robust approach, my previous one was to multiply the tolerance by 5 and add 12 at the "err = compare_images(...)" at the end of tests/base.py, but even this was failing for a few tests, so perhaps I'm missing some "trick" for testing it.
What would be a proper/robust approach for testing Yellowbrick when totally outside of conda/windows, without freezing the dependencies? Should this package checking step be less strict (e.g. by filtering out more tests)?
Best regards,